Selectively apply post-processing effects to parts

Post-processing effects, with the exception of sun rays, either apply to all parts or to none at all. This is extremely limiting, and prevents use cases such as:

  • Blur out the background but not the NPC you’re talking to
  • Corruption magic that makes the map black/white with protection spheres where interiors are colored
  • All types of vision:
      • Thermal vision where cold objects are displayed blue, while hot objects are displayed red/yellow
      • Night vision (already possible, but without making bright objects super bright)
      • Electrical vision where machinery and the like glow while rocks and other inert objects stay dull
  • Underwater objects are bluish but objects inside an underwater base that can be seen through glass are normal colored
  • Apply ColorCorrection with higher contrast / orange color to highlight scannable/interactable items
  • Apply high bloom to further objects and none to closer objects in a dream-state level
  • etc

There are a lot of uses for selectively applying post-processing effects, and I’m sure I’ve only scratched the surface. I’m not sure how this would best be implemented, whether through an effect:Whitelist/Blacklist(parts) API, assigning layers to parts and applying post-processing effects per layer, or something else, but it’d be great if we could selectively apply them in some way.

26 Likes

It doesn’t really work like that. The scene is rendered, then post-processing effects are applied to the rendered image. You can’t selectively apply post-processing. It only works with neon because there’s an alpha channel to spare that can be used to blend in the glow on only some pixels.

All of the things you mentioned can be done if you’re developing an engine specifically for that, but it’s not really something Roblox can do because developers can’t be given access to the rendering pipeline or shaders for pretty obvious reasons.
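To illustrate the point about the pipeline: a post effect runs over the finished image, not over the scene. Here is a toy Python sketch (hypothetical names, not Roblox API) of a fullscreen glow pass that can only vary per pixel because a spare channel was written during rendering, which is roughly how neon’s blend works:

```python
def apply_glow(pixels, glow):
    """Blend extra brightness into an already-rendered image.

    pixels: list of (r, g, b) tuples in 0..1, the flat rendered frame.
    glow:   per-pixel factor in 0..1 (the spare channel neon writes to).
    The effect only sees these buffers; it has no idea which part
    produced which pixel, which is why it can't be applied per-part.
    """
    out = []
    for (r, g, b), a in zip(pixels, glow):
        out.append((min(1.0, r * (1 + a)),
                    min(1.0, g * (1 + a)),
                    min(1.0, b * (1 + a))))
    return out
```

Without that extra per-pixel channel, the pass has nothing to key off of, so "apply this effect to these parts only" has no natural place to hook in.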

5 Likes

In Blender, you can apply IDs to objects and materials which will turn into BW maps highlighting whatever the ID is applied to. The user can use these to add or remove post-processing effects on specific parts and materials in the scene. However, I do not know how intensive this process is in Studio. Cool stuff though.

1 Like

At best, that’s ~66 MB for each map at 1920x1080. It’s not a good idea.

There’s an opportunity in there somewhere

There’s really not. More g-buffers are a horrible idea, and most graphics programmers spend a lot of time trying to minimize them. Not only does it waste memory, you also spend performance writing to them and reading from them later. On most desktops it’s not a great idea, but on mobile it’s a complete non-starter.
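For scale, here is one plausible reading of the ~66 MB figure (the buffer format and count are assumptions, matching the four-buffer deferred setup described later in the thread):

```python
# A fullscreen buffer at 1080p stored as RGBA16F (4 channels x 2 bytes),
# times the four g-buffers a typical deferred pipeline carries.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4 * 2          # RGBA16F: 4 channels, 2 bytes each
NUM_BUFFERS = 4                  # diffuse, normals, depth, misc
total_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL * NUM_BUFFERS / 1e6
print(round(total_mb, 1))        # ~66.4 MB
```

And that cost is paid every frame in bandwidth, not just once in storage.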

1 Like

What about prebuilt shaders that can be assigned to parts through a Lua API?

No amount of API juggling will solve the fact that post-processing is not inherently selective.

Blender also takes up to an hour per frame to render. Roblox has to run 200,000 times faster than that.

1 Like

Actually, that depends highly on the number of samples, the materials, and the resolution. It can take seconds, minutes, hours, or even days to render a frame. The compositing part renders much faster.

1 Like

You’re still missing the point. Even just 2 seconds is still over a hundred times slower than Roblox needs to run.
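The arithmetic behind both ratios, assuming a 60 fps target:

```python
# Rough frame-budget arithmetic for offline vs. real-time rendering.
offline_fast = 2.0               # seconds: a quick offline frame
offline_slow = 3600.0            # seconds: an hour-long frame
budget = 1.0 / 60.0              # ~16.7 ms per frame at 60 fps
print(round(offline_fast / budget))   # ~120x over budget
print(round(offline_slow / budget))   # ~216,000x over budget
```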

1 Like

For the most part, techniques that are used for offline rendering like in Blender and techniques that are used in real-time rendering like Roblox don’t often overlap. When you have a time budget of less than 16ms and you need to continuously render frames potentially forever, you don’t have the same luxuries.

For example, offline rendering usually means raytracing, where each pixel is a raycast into the scene that can bounce around and accumulate color. Real-time rendering uses rasterization, where each triangle is rendered and shading is interpolated across its surface, so you only render what you need and there’s no complex ray-sphere/box/triangle math involved.
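To make that ray math concrete, here is the per-ray sphere intersection test an offline raytracer evaluates over and over (a generic textbook sketch, not any engine’s actual code):

```python
def ray_hits_sphere(origin, direction, center, radius):
    # Solve the quadratic |origin + t*direction - center|^2 = radius^2.
    # A raytracer runs tests like this against many primitives for
    # every pixel, every frame; rasterization never has to.
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    return b * b - 4.0 * a * c >= 0.0  # real roots => the ray hits
```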

Similarly, you can’t just throw more buffers on top of it. Even the most wasteful deferred rendering pipelines use around 4 fullscreen textures: one for diffuse color, one for normals, one for depth, and one for miscellaneous info. Each one adds a lot of overhead. This is why a lot of engines are moving away from this approach (Roblox included).

2 Likes

Pardon me, I’m quite terrible at discussions.

@0xBAADF00D What about lighting properties like GlobalShadows and Ambient? Would it be possible to edit or disable those for specific parts? They often make pretty ugly shadows on large-scale objects.

That’s a bit more doable since those things are handled before postprocessing, but I don’t know if anyone would want to add the API bloat.

How come? Is it very time consuming, or very difficult to program?

On a more productive note:

  • Blurring the background but not the foreground is better achieved with a depth-of-field post effect, not selective blur.
  • Corruption magic would not work using part tagging. You would have to use the stencil buffer to control where the corruption’s color correction is applied (i.e. skipping anything inside the sphere).
  • Thermal/night/electrical vision can be achieved with a simple color correction shader, coloring parts, and letting parts ignore depth and render on top.
  • Underwater rendering should be a first-party engine feature, not hacked together using post effects.
  • Making further-away objects bloom more would be best achieved with a custom shader, not hacked together with part tagging.

Part tagging is not the best option for any of these use cases.
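As a toy illustration of the “color the parts, then run a simple color correction” approach for thermal vision (a hypothetical helper, not Roblox API; in-engine this would be a per-part color with a fullscreen ColorCorrection pass on top):

```python
def thermal_tint(temperature):
    # Map a normalized temperature (0 = cold, 1 = hot) to an RGB tint:
    # cold objects shift toward blue, hot objects toward red/yellow,
    # matching the thermal-vision example earlier in the thread.
    t = max(0.0, min(1.0, temperature))
    return (t, t * 0.6, 1.0 - t)
```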

I think we should have depth of field, and a way to render parts on top. Being able to control post processing effects by drawing parts into the stencil buffer would be neat but it’s a lot to ask for.

One thing I would like to point out is that we should generally aim for features that you don’t pay for when you’re not using them. Part tagging would add a performance cost to all Roblox games, even ones that don’t use the feature. It would also increase implementation complexity and make further work on the rendering code harder.

6 Likes

Don’t take this the wrong way, but for someone with a starter robloxian you’re very intelligent.

On topic: I appreciate the time you took to explain this to us.

2 Likes

I’m not sure how this would be best implemented, but the use case that prompted me to post this was having the whole world blurred out in VR but show a 3D compass around the player and their hands not blurred. I don’t think DoF would help with this since close objects wouldn’t be blurred.

1 Like

Is this done in real-time? Like, you move the camera around, and the BW map (black-white map?) adjusts to show the new locations of the meshes?

1 Like

@OP I would love to recreate this…


(I support the request!)

1 Like