Material graph based custom shaders

This is why they don’t do that.

2 Likes

Lua 5.1 has a well-known RCE in its bytecode loader that Luau is also vulnerable to. Do we consider Luau unsafe for this reason? No, because Luau will never compile to bytecode that's malicious.

Additionally, Luau prevents the loading of arbitrary unsigned bytecode.

The same logic applies to graph-based shaders: you don't get to write the underlying HLSL or GLSL yourself, so Roblox can prevent the graph editor from creating malicious shaders.

4 Likes

This. This is exactly what I said before. Sending arbitrary HLSL/GLSL code would be a massive security risk, so you aren't going to allow that. Instead, Roblox could create their own shader graph format that stores instructions, which are sent to their publishing servers; those handle compiling the shaders to optimize for runtime, and the compiled result could then be embedded into the published place file. This would also allow the server to reject invalid or malicious data.
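To make that concrete, here's a minimal sketch of what such a format could look like, written as a Luau table. Everything here is hypothetical for illustration; the node types and field names are made up:

-- Hypothetical serialized shader graph: pure data, no executable code.
-- The publishing server can validate every node type and connection
-- before compiling, and reject anything it doesn't recognize.
local shaderGraph = {
	nodes = {
		{ id = 1, type = "TextureSample", texture = "rbxassetid://123456" },
		{ id = 2, type = "Time" },
		{ id = 3, type = "Sin", input = 2 },
		{ id = 4, type = "Multiply", a = 1, b = 3 },
	},
	output = 4, -- the node feeding the final fragment color
}

Since only this data (never HLSL/GLSL text) leaves the client, the attack surface is the validator, not the shader compiler.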

4 Likes

Just bumping this really quick.

Lots of people want this; the more people who see it, the more replies and likes there are, and the more likely it is to happen.

Please, at least one admin needs to see this!

1 Like

Reviving this topic: this is sort of coming, albeit CPU-based for now! You will be able to make primitive shaders for surface appearances and material variants using EditableImage!

From the looks of the API docs, it seems those properties are plugin-security; do we know if that is a permanent choice for the foreseeable future, or whether it is planned to change? We could presumably still use AssetService:CreateSurfaceAppearance (assuming that API supports editables, and that creating the instance itself is near-instant) once that is released, but it'd be quite a bit more inconvenient if so.

You cannot write them directly. The way it works is:

local AssetService = game:GetService("AssetService")

-- Each map takes a Content wrapping an EditableImage *instance*,
-- not the class itself
local editableImage = AssetService:CreateEditableImage({ Size = Vector2.new(512, 512) })

local MySurfaceAppearance: SurfaceAppearance = AssetService:CreateSurfaceAppearance({
	ColorMap     = Content.fromObject(editableImage),
	MetalnessMap = Content.fromObject(editableImage),
	RoughnessMap = Content.fromObject(editableImage),
	NormalMap    = Content.fromObject(editableImage),
})

HOWEVER, note that this doesn't work yet. You need to reverse-engineer Studio and modify some bytes with IDA or another reversing tool. Just enabling FFlags is not enough here.

Very cool. Though in theory, it'd be faster to pre-compute the shader values, then replay them just like a video.
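A rough sketch of the replay half, assuming the frames were already baked into an array of RGBA8 buffers and that EditableImage:WritePixelsBuffer is available (frame generation itself is elided):

local RunService = game:GetService("RunService")

-- Assumes `frames` is a precomputed array of 512x512 RGBA8 buffers and
-- `editableImage` is an EditableImage already wired into a SurfaceAppearance.
local frameIndex = 0
RunService.Heartbeat:Connect(function()
	frameIndex = frameIndex % #frames + 1
	-- Replaying a baked frame is one bulk pixel copy per tick, instead of
	-- re-evaluating the shader math for every pixel.
	editableImage:WritePixelsBuffer(Vector2.zero, Vector2.new(512, 512), frames[frameIndex])
end)

The trade-off is memory: every pre-rendered frame has to be stored somewhere.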


Y'know, it pisses me off that Roblox just straight-up refuses to add any custom shader support, or even to add newer built-in shaders, since most of the existing ones are a decade old.

Instead we have… AI :wilted_flower:

3 Likes

This is the downside: shaders are not efficient on the CPU. This is basically software rendering.

Just to put this into perspective: a simple noise shader driving the normals and diffuse on a 512x512 image drops my CPU frametime so badly that I'm stuck at 30fps or less (Ryzen 7 7800X3D).
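To see why: a 512x512 image is 262,144 pixels, and re-running even a cheap function for each one in Luau every frame adds up fast. A rough sketch of the hot loop (the EditableImage update itself is elided):

local size = 512
local time = os.clock() -- recomputed every frame to animate
local pixels = buffer.create(size * size * 4) -- RGBA8

-- ~262k noise evaluations per frame; this loop is what eats the frametime.
for y = 0, size - 1 do
	for x = 0, size - 1 do
		local n = (math.noise(x / 64, y / 64, time) + 1) * 0.5 -- remap [-1, 1] to [0, 1]
		local v = math.floor(math.clamp(n, 0, 1) * 255)
		local i = (y * size + x) * 4
		buffer.writeu8(pixels, i, v)       -- R
		buffer.writeu8(pixels, i + 1, v)   -- G
		buffer.writeu8(pixels, i + 2, v)   -- B
		buffer.writeu8(pixels, i + 3, 255) -- A
	end
end

A GPU would spread those evaluations across thousands of parallel threads; a single Luau thread runs them one at a time.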

1 Like

It's so sad that we have to hack our way around just to try to replicate a feature we developers have all wanted for ages. It's very disappointing that Roblox forces all-device support down our throats, restricting us from features we all want, especially when it's been shown to be possible.

5 Likes

Wholeheartedly agree. Every other game engine has shaders, node-based if not code-based. We need shaders; as Roblox evolves, it's leaving behind a crucial part of gamedev.

1 Like

I absolutely agree with this feature request.

I would love to see a response from someone at Roblox about this feature, especially since the only comprehensive response I can seem to find was from almost a decade ago.

From what I can see, we were sort of just left with the statement that it's “a very hard project with many open design questions.” It would be very nice to know whether the sentiment around this feature has changed at all in the past decade, especially with the developments in both the rendering engine and the lighting engine.

Not to mention, the post used the potential addition of new post-processing effects as reasoning for why a shader system wouldn't be worth working on. Since then, we have gotten a negligible number of new post-processing effects, not nearly enough to cover the use cases a shader system would provide.

Kind of feels like the whole idea has been in limbo for 9+ years. If there's a more recent response on the subject that I missed, it'd be nice if someone could let me know via a reply or a DM.

1 Like

I believe it is still this way, unfortunately. I spoke with some engineers about this last year. Fingers are crossed, maybe we’ll get it one day.

1 Like

It's actually doable. I talked to an engineer I met at RDC, and we kept talking long after RDC, in private on the DevForum.

Now, he said it's totally doable; the issue, however, is that Android devices in particular have problems. It's not really graphics-API related (although GLES 2.0 should be dropped, since it supports almost no modern features at all; let's imagine GLES 3 were the minimum here).

Basically, the problems stem from drivers: some Android devices are known to have buggy ones. What do you do if those devices hit issues like NaN propagation? What if there are inconsistencies in floating-point math? Suddenly you get weird differences between devices, and possibly complete failure on some.

I'm sure this isn't entirely exclusive to Android; desktop, console, and other devices can also suffer from driver issues, but those get fixed swiftly. I'm just under the impression that Android has these issues the most, simply because of the sheer variety of devices.

1 Like

I don't think it's a question of whether the engine can handle it or not. It can. All the groundwork is already there, since we can use the engine to make games.

Instead, it's a question of how you design this functionality (shaders), whether as API changes to an existing service or as some new type of functionality, so that it integrates easily and still feels like developing in Roblox. There is no agreement here, and until some kind of agreement is reached, I doubt it will ever come.

Some of the topics I've seen discussed are:

  • Do we let developers edit some “master shader”?
  • How does post-processing fit into all of this?
  • What shader language? A custom language? Use some third-party layer to translate graphics code into device-native code?
  • How does the user start to edit a shader? Is it a new service? Is it a new instance type? Do they work like stylesheets? (One possible shape is sketched after this list.)
  • How can you make certain instances affected by custom shaders and others not?
  • Shaders can be order-dependent; how do you define this execution order?
  • Any issues with app stores not wanting apps that let users write hardware code?
  • Not every function you can access on desktop will be available on mobile. How do you support this?

These kinds of topics all have different answers from different people.
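To make the instance-type question concrete, here is one entirely hypothetical shape; neither the "GraphShader" class nor any of these properties exist:

-- Hypothetical: a shader instance scoped like a stylesheet, affecting
-- only the subtree it is parented to.
local shader = Instance.new("GraphShader") -- made-up class name
shader.Graph = publishedGraphAsset -- made-up property: a validated node graph
shader.Priority = 1                -- made-up: one answer to the ordering question
shader.Parent = workspace.Ocean    -- only descendants of Ocean are affected

Every line of that sketch is just one contested answer among many, which is exactly the design problem.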

Don't get me wrong, I want this functionality too. I've studied graphics programming for 10 years, and I would love to be able to enhance my games' visuals. From my point of view, I would be surprised if this functionality arrives within the next 5 years.

This right here.

There's actually a pretty good way to enable developer-created shaders: you build a node-based visual shader editor, and when a server starts up, it translates the nodes into the shading languages for each platform, compiles them all just-in-time, and sends those shaders along with the rest of the game data to the players. I'm massively simplifying here, but this is the basic gist.

This flow would work for all platforms. It would require quite a lot of engineering, but it's certainly possible. Security is also relatively simple in this case, as the nature of a node-based language inherently makes it extremely unlikely for exploits to seep in.
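A toy version of that translation step, reusing the hypothetical shaderGraph table sketched earlier in the thread and emitting a GLSL-flavored expression string (real per-platform codegen would be far more involved):

-- Toy node-to-source pass: each known node type has an emitter; unknown
-- types have none, so arbitrary shader code can never be generated.
local emitters = {}

function emitters.TextureSample(node, emit)
	return "texture(tex0, uv)"
end

function emitters.Time(node, emit)
	return "uTime"
end

function emitters.Sin(node, emit)
	return ("sin(%s)"):format(emit(node.input))
end

function emitters.Multiply(node, emit)
	return ("(%s * %s)"):format(emit(node.a), emit(node.b))
end

local function emitNode(graph, id)
	local node = graph.nodes[id]
	local function emit(childId)
		return emitNode(graph, childId)
	end
	return emitters[node.type](node, emit)
end

print(emitNode(shaderGraph, shaderGraph.output)) --> (texture(tex0, uv) * sin(uTime))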

The problem is that it's basically impossible to make shaders that work identically on all platforms, and when drivers get involved, it gets even worse. If some shader produces undefined behavior on certain devices, then unless Roblox can insert some kind of automated workaround, they'd have to prevent that type of shader from running on those devices.

In recent years, driver problems seem to have been mostly or completely fixed on modern phones that support OpenGL ES 3.0, at least all the ones I can find data for. That doesn't account for old phones, though, and I've personally had issues with Vulkan behaving strangely even on a fairly new phone. (This is why Roblox uses GLES on some devices that “support” Vulkan.)

1 Like

I think I had some conceptual material; it's like some galaxies, but deeper.

End portal shaders that will blow your eyes, because it's like an optical illusion to see the depth of the galaxies.

1 Like

We need this.

For me, Roblox has always been the go-to platform because it's the easiest place to reach an audience. But the lack of features that are standard in other game engines, like this one, makes me want to take the risk and switch to something like Unity or Godot.

I have a feeling that we're a long way from having fully custom shaders in Roblox. But a good start would be something like a fixed-function pipeline or color combiners.

The earliest 3D game consoles did not have shaders, but the Nintendo 64 did have something called a “color combiner”. Later consoles like the PlayStation 2 had fixed-function pipelines.

These weren't shaders, but they let you perform simple math operations with textures, vertex colors, UV maps, and a bunch of other inputs.

Implementing something like screen-space reflections in a PS2 game was nearly impossible, or at least very hard, but a lot of other effects were fairly easy to do.

On the Nintendo 64, it was possible to blend two textures together for terrain and environments, reducing tiling and repetition in things like grass, dirt, and cobblestone.

You could multiply vertex colors with textures, letting you add gradients, soft shading, and subtle discoloration to 3D models to reduce the flat, boring look (e.g., make the armpits and neck of a character darker, or add dark spots in the corner of a room).
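For reference, the N64 combiner evaluated one hardwired formula per pixel, (A - B) * C + D, with each slot selectable from sources like a texture sample, the vertex color, or a constant. A minimal Luau sketch of the two effects described above, using Color3 to stand in for an RGB sample (function names are illustrative):

-- Blend two tile textures to break up repetition: lerp(tex1, tex2, factor).
local function blendTextures(tex1: Color3, tex2: Color3, factor: number): Color3
	return tex1:Lerp(tex2, factor)
end

-- Multiply a texture sample by the interpolated vertex color for soft shading.
local function modulate(tex: Color3, vertexColor: Color3): Color3
	return Color3.new(tex.R * vertexColor.R, tex.G * vertexColor.G, tex.B * vertexColor.B)
end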

You also had access to camera information, if I recall correctly, which could let you project things from the view (currently also impossible in Roblox, apart from billboard GUIs, but those don't count).

If we could use two or more UV maps and layered textures in Roblox, a lot more would be possible. One UV map could be used for a texture atlas, dramatically reducing the need for 4K textures and optimizing games, while a secondary UV map could add scratches and dirt onto a model, making textures extremely reusable.
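The combine itself would be trivial; with hypothetical sampleAtlas/sampleDetail lookups standing in for the two texture fetches:

local baseColor = sampleAtlas(uv0) -- packed atlas, first UV set
local detail = sampleDetail(uv1)   -- scratches/dirt overlay, second UV set
local finalColor = Color3.new(
	baseColor.R * detail.R,
	baseColor.G * detail.G,
	baseColor.B * detail.B
)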

Fixed-function pipelines, or something like a color/data combiner, would already make so much more possible. It should theoretically also be much easier to implement as a first step, since it could be built from a bunch of pre-written shader code. Alternatively, they could give us actual shaders but start with primitive functions and math operations that are supported on all hardware.

2 Likes