Post-processing effects

Survival game that blurs your camera during snowstorms when you’re getting pelted in the face with snow – the only way to stop the blur is to look straight down so snow isn’t getting in your eyes. The feature is meant to forcibly cut players’ view distance while still letting them look up in short bursts to get a general idea of where they are… except people below the required graphics level are free to look directly into the blizzard 100% of the time.

This one isn’t appropriate for a ROBLOX game, but a Skyrim mod puts you into a “drunk” state when you drink too much mead/ale/wine, and both blurs your camera and makes it difficult to control. Want to drink mead to regain stamina in a fight? You risk blurred vision as a trade-off for that stamina. If this were ROBLOX, you could drink mead to your heart’s content, because your vision will never be blurred at low graphics levels and all you have to deal with is awkward camera controls.

I understand it’s not possible to enable blur for every device at every graphics level, but this is what jcfc was getting at: you can’t involve blur in gameplay-changing effects, because it doesn’t apply to people with low graphics settings.
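A possible middle ground: treat the blur as flavor and attach the real penalty to something every client gets. A minimal LocalScript sketch using this update’s BlurEffect (the helper name and numbers are made up):

```lua
local Lighting = game:GetService("Lighting")

-- Blur during the storm: pure flavor, since low-quality clients won't render it.
local stormBlur = Instance.new("BlurEffect")
stormBlur.Size = 0 -- blur radius in pixels; 0 = off
stormBlur.Parent = Lighting

-- Hypothetical helper: scale the blur with storm intensity, but pair it with a
-- penalty that works at every graphics level (walk speed, camera sway, etc.).
local function setStormIntensity(intensity) -- intensity in [0, 1]
	stormBlur.Size = intensity * 24
	-- apply the quality-independent penalty here
end
```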

4 Likes

The moon should cast light shafts too.

6 Likes

Finally, I can make my games as beautiful as they were meant to be

(this is a reminder to use shaders responsibly)

13 Likes

Beautiful :heart_eyes:

11 Likes

The new effects are amazing… <3
Great job @zeuxcg and team :slight_smile:

4 Likes

ITT: everyone posting pictures of the sun at an upward angle with partial occlusion.

2 Likes

Wow, this really helped with the lighting in our horror game.
This is before:

This is after:

Ever so slightly desaturated and colder-looking, but it makes a massive difference when you’re actually playing. Not to mention I can increase the desaturation and color tint whenever something scary is about to happen and make it even scarier :evil:
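In case it helps anyone else, the ramp is just a ColorCorrectionEffect tween – a rough sketch, with made-up values:

```lua
local Lighting = game:GetService("Lighting")
local TweenService = game:GetService("TweenService")

local grade = Instance.new("ColorCorrectionEffect")
grade.Saturation = -0.1                         -- slightly desaturated baseline
grade.TintColor = Color3.fromRGB(235, 240, 255) -- slightly cold
grade.Parent = Lighting

-- Ramp up the desaturation and tint right before a scare.
local function rampScare(duration)
	TweenService:Create(grade, TweenInfo.new(duration), {
		Saturation = -0.6,
		TintColor = Color3.fromRGB(190, 205, 235),
	}):Play()
end
```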

4 Likes

Pffffft.

1 Like

You’re doing an awesome job, ROBLOX team. This could be one of the greatest updates yet. Really enjoying messing around with these.

4 Likes

no

16 Likes

The best part is messing with it and pushing it to places it should never go.

4 Likes

I’m getting some artifacts around the edges of dark pieces on lighter pieces that I think are caused by AA. This is on EditQuality 21 in Studio; it goes away when set to 20 or lower. It seems that AA isn’t taking the extra shadows into account. The place is my Mystery Shack if you want to have a look for yourself.

6 Likes

@Mr_Root, @nomer888 and @ZarsBranchkin asked about the user-created shaders. They are referring to the hack week project by @darthskrill: http://blog.roblox.com/2016/01/hack-week-2015-shaders/

Here is why you can’t have nice things (warning: this is long and technical; I’m putting it here since some people asked me to share this publicly, and I don’t want to create another thread in Public Updates just for it).

The way the hack week project worked was by creating a new type of script that contained shader code written in HLSL, had some parameters tweakable by Lua scripts, and rendered full-screen passes with each shader, transforming the source picture into what you see on screen. As input the shader got textures with color and depth information for every pixel - which is how you can do blur, depth of field, sun shafts, etc. - that is, a formula that just transforms the current pixel’s color is not enough; you have to have access to any pixel on screen.
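To make that last point concrete, here’s the difference in plain Lua over an image stored as a 2D array (illustrative only - the real shaders are HLSL running on the GPU):

```lua
-- A pointwise effect like tinting only needs the current pixel's color...
local function tint(image, x, y)
	local c = image[y][x]
	return { r = c.r, g = c.g * 0.9, b = c.b * 0.8 }
end

-- ...but blur (and depth of field, sun shafts, ...) must read OTHER pixels,
-- which is why the shader needs the whole color/depth texture as input.
local function boxBlur3(image, x, y)
	local w, h = #image[1], #image
	local r, g, b = 0, 0, 0
	for dy = -1, 1 do
		for dx = -1, 1 do
			local c = image[math.clamp(y + dy, 1, h)][math.clamp(x + dx, 1, w)]
			r, g, b = r + c.r, g + c.g, b + c.b
		end
	end
	return { r = r / 9, g = g / 9, b = b / 9 }
end
```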

When we ship a feature, we generally can’t unship it. So how do we ship this one?


Picking a shader language

We need to pick a language for you to write shaders in - one shader language that works the same across all platforms and will work the same across all future platforms. Such a language does not exist.

We currently use a complex pipeline that transforms HLSL with some C-like macros into either HLSL for Direct3D 9, HLSL for Direct3D 11, GLSL for desktop OpenGL, or GLSL for mobile OpenGL (actually there are 4 GLSL variants now - for OpenGL 2 and 3). This pipeline is full of complicated code we did not write. It does not always work perfectly - we sometimes have to adjust our shader code to work around bugs in this translation layer to make shaders work on Mac/mobile (OpenGL).

Also, when we were shipping Direct3D 11, we had to introduce a bunch of complicated macros to work around bugs in the HLSL compiler that translates D3D9-style HLSL into D3D11 bytecode. This compiler is not open-source, so we weren’t able to fix it ourselves. I don’t like closed-source libraries :-/

So as you can see, we don’t even have one language to start with. Sure, we could just give you access to the exact same thing that we use - which is what the hack week project did - and you’d have to deal with the consequences of shaders potentially miscompiling or behaving weirdly on different platforms. When we introduce support for Direct3D 12 or Vulkan or Metal, this would mean more macros or even more translation code with new bugs. We currently use a pragmatic approach of dealing with bugs on a case-by-case basis - sometimes we fix the shader code, sometimes we fix the translation layer to work around the bugs.

None of these are practical if you imagine that every single user can create a shader. We don’t want people to have to be experienced graphics programmers to use any part of ROBLOX.

Compiling shaders

Now it gets worse. We have to compile this shader code to something our rendering API of choice recognizes. Most APIs do not work based on source - they work based on custom binary formats that vary per API or platform (Xbox One uses Direct3D 11 but has a custom bytecode format).

This compilation process is slow and - in the case of OpenGL - uses third-party software that was never security-tested, so I’m sure there are LOTS of ways to exploit it to do bad things to the client. Compilation is also not guaranteed to be available on all platforms (some console platforms prohibit runtime shader compilation). Finally, on some platforms the compiler is proprietary - we can’t ship it with Studio, so we can’t precompile shaders for all target platforms when you publish (plus this would mean you have to republish the place to make it work with new platforms or render APIs - Unity can work like this, but we can’t).

So if we compile at runtime, we’re exposing our users to exploits and slowing down game launch by potentially tens of seconds.
If we compile during publish, we’d have to ship compilers we can’t ship with Studio, and we lose compatibility with future platforms.
If we compile on the server, we’re exposing our server infrastructure to exploits (which is super scary). If somebody discovers an exploit in one of the target platform compilers, we’d report it to the platform vendor (we don’t have the source) - and until they fixed it, we’d have to disable user shaders for that platform.

Also - as mentioned - some of the compilers are proprietary and closed-source. This may restrict our choice of server platform if we compile on the server - what if there is no Linux version of the shader compiler?

Maybe make it visual?

Some engines (like Unreal Engine) have a shader node system where, instead of writing shader code, you build a graph of nodes. It’s very visual, and generally people who aren’t programmers love it. We could build something like this.

This solution removes some of the technical problems above - we would not have to deal with complex text-to-text translation software that we did not write and that has bugs and exploits.
It still has the issue that we need to generate target-platform code somehow, and it’s not clear how to deal with the issues highlighted in the “compiling shaders” part above.

We’d have to design the node system, which is a pretty big and involved process. How much functionality do we expose? Are there conditions? Are there loops? How do you write a radial blur with 17 samples without using 17 nodes? (See the toy sketch below.)
We’d have to implement the editing flow for the node system in Studio - a brand new editor where you can place nodes, connect node inputs to other nodes’ outputs, etc. This is a lot of engineering.

So this is also a pretty significant effort. I feel like overall this is closer to what we could have shipped, but note that it only really removes one layer of problems - dealing with another text-based language - while introducing new ones (complicated design & implementation, and potential limitations on what kinds of shaders you could create and how efficient they can be).
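To make those design questions concrete, here’s a toy sketch of what “a graph of nodes” might look like as plain data - entirely hypothetical, nothing like this exists in ROBLOX:

```lua
-- Hypothetical node graph for a radial blur. Note that without some kind of
-- Loop node, the 17 samples would be 17 nearly identical hand-wired nodes.
local radialBlurGraph = {
	{ id = "uv",      op = "ScreenUV" },
	{ id = "center",  op = "Constant", value = { 0.5, 0.5 } },
	{ id = "samples", op = "Loop", count = 17, body = {
		{ id = "offsetUV", op = "LerpTowards", from = "uv", to = "center" },
		{ id = "tap",      op = "SampleSceneColor", at = "offsetUV" },
	} },
	{ id = "result",  op = "Average", inputs = "samples" },
}
```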

Inputs - values

This is pretty straightforward - we’d just map children of type Value to parameters in the shader. This would work regardless of whether we picked text or nodes.
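Something like this, where the shader instance is hypothetical (a Folder stands in for it below) but the Value objects and the script driving them are ordinary ROBLOX:

```lua
-- Hypothetical: a NumberValue child becomes a "Radius" parameter the shader
-- reads; Lua animates it like any other Value object.
local shader = Instance.new("Folder") -- stand-in for a hypothetical shader instance
shader.Name = "MyRadialBlur"

local radius = Instance.new("NumberValue")
radius.Name = "Radius"
radius.Value = 4
radius.Parent = shader

-- Drive the parameter per frame from a LocalScript:
game:GetService("RunService").RenderStepped:Connect(function()
	radius.Value = 4 + math.sin(tick()) * 2
end)
```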

Shader math is pretty expensive. If we had a visual node system one of the important components would have been an automatic “constant folding” process - we’d find subgraphs that are completely driven by shader inputs and precompute them once per frame.

Inputs - textures

There are questions about the possible inputs we could provide to the shader.

There’s a variety of data available that we use in different passes:

  • Scene color (with transparent objects, GUI, etc.)
  • Scene color only for opaque objects (available on 7+ quality only)
  • Scene depth only for opaque objects (available on 7+ quality only)
  • Accumulated glow factor from neon

This data changes encoding from release to release. For example, before post-effects, we had scene color and glow factor packed into one texture; scene depth was packed into two channels of another texture and reconstructed using some math. After post-effects, we still have scene color and glow factor packed into one texture, but scene depth is now just in one channel of another texture - no math needed for reconstruction. In the future, even the scene color may become encoded in some way.
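As a concrete example of the kind of encoding that can change under your feet, here’s one common two-channel depth packing scheme in Lua (not necessarily the exact math we used):

```lua
-- Pack a depth value in [0, 1) into two 8-bit texture channels...
local function packDepth(depth)
	local hi = math.floor(depth * 255) / 255 -- coarse 8-bit part
	local lo = (depth - hi) * 255            -- remainder, rescaled into [0, 1)
	return hi, lo
end

-- ...and reconstruct it in the shader ("some math"):
local function unpackDepth(hi, lo)
	return hi + lo / 255
end
```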

If we blindly exposed these kinds of details to shaders, we’d sacrifice our ability to develop rendering changes. So we’d have to come up with a set of APIs that we guarantee to work within a shader, and extend the shading language with them.

Finally, it’s very important for shaders to have access to other textures. We only shipped 4 post-processing effects, but one of them already has a custom noise texture being fed in.
This is also relatively straightforward - we’d have to support Image assets as inputs to the shaders - but there are some questions about the specific setup (for example, customizing the filtering type between linear and point filtering is pretty important for some use cases).

Notice that so far we’ve ONLY been talking about how to make it WORK. Not run fast - just work. Let’s now talk about performance.

Raw shader performance

Writing fast shaders is hard. Writing shaders that are fast on many platforms is harder.

When we write our shaders, we generally balance engineering effort against the performance we want to achieve. In some cases we can spend a week optimizing a single shader, if it’s really important. The optimization process frequently involves using complicated proprietary tools that give you precise information about how the shader executes on a given architecture, and trying to optimize the shader for that - we generally pick the lowest-performing target for this.

An additional important component of our shader optimization is code review. We review all C++ and shader changes at ROBLOX (this involves at least one other engineer reading your changes and suggesting improvements), and while some review comments are about performance in general, for shaders this matters even more - it’s very common to spot tiny inefficiencies in review and correct them. This requires a lot of expertise and effort. Something as simple as “a * b * c” → “a * (b * c)” can make a difference - if a is a vector and b and c are scalars, the second grouping replaces a vector multiply with a single scalar multiply.

So with all that being said, what do we do when users create shaders? Remember - you will be writing a program that executes for EVERY SINGLE PIXEL. On Xbox One there are almost 2 million of them. Making it fast is hard.

It seems like we’d have to provide performance guidance. It’d be really common for people to implement or copy-paste a really complicated shader that runs fine on their GPU, and then most players can’t play their game for performance reasons.

We’d have to provide some tagging so that you can mark your shaders as gameplay-critical or not - so that we can disable them based on quality levels.
We’d have to provide performance guidance in Studio - as in, how fast/slow do we think your shader set will run on less powerful graphics cards?

Multi-resolution effects

We’ve done a lot of optimization on the current effects, using some tricks that make it possible to compute them at reduced resolution without sacrificing visual quality.

This is a pretty tricky problem. Reducing resolution during effect computation frequently introduces extra artifacts that you have to counter. Some effects run fine at half resolution and some at quarter resolution. Running a shader at quarter resolution is 16x faster than running it at full resolution (a quarter of the width times a quarter of the height is 1/16 of the pixels) - the shader effects that we shipped are much more accessible because we put a lot of effort into this.

Some effects have to be split into multiple passes to make this possible - maybe one pass is quarter-res and another pass is full-res.
Some effects have to be split into multiple passes at the same resolution to make them faster - separable blur being the classic example.
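Here’s the separable-blur trick sketched in Lua over a 2D array of grayscale values (real implementations are GPU shaders): a 9×9 box blur costs 81 reads per pixel in one pass, but only 9 + 9 = 18 reads as two 1D passes.

```lua
-- Tiny grayscale test image (checkerboard).
local image = {}
for y = 1, 8 do
	image[y] = {}
	for x = 1, 8 do
		image[y][x] = (x + y) % 2
	end
end

-- One 1D box-blur pass along rows (horizontal = true) or columns.
local function blurPass1D(src, radius, horizontal)
	local height, width = #src, #src[1]
	local out = {}
	for y = 1, height do
		out[y] = {}
		for x = 1, width do
			local sum, count = 0, 0
			for t = -radius, radius do
				local sx = horizontal and math.clamp(x + t, 1, width) or x
				local sy = horizontal and y or math.clamp(y + t, 1, height)
				sum = sum + src[sy][sx]
				count = count + 1
			end
			out[y][x] = sum / count
		end
	end
	return out
end

-- Horizontal pass then vertical pass == full 2D box blur, at 2N taps per
-- pixel instead of N^2 (radius 4 -> 9 taps per pass).
local blurred = blurPass1D(blurPass1D(image, 4, true), 4, false)
```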

You may think of this as just an optimization we don’t need to support initially - a potential future improvement to the system. I’d argue it’s critical. The spread in graphics hardware on ROBLOX is huge. You may be able to run your full-screen, full-resolution radial blur shader with 10 taps on your NVIDIA GTX 970, it takes 0.5 ms, and you happily release an update to your game. However, on many, many graphics cards this shader would take 10-20 ms, and either the game will be unplayable or your effect will have to be disabled.

This means we need to share this responsibility with developers - we need to design a system that allows multiple shaders to work at different resolutions and feed output to each other, and you have to learn to use it. Needless to say, this is a lot of additional complexity.

Folding effects and final compositing

Our system is now organized around having one final compositing pass (which is where we do color correction). Since we pretty much always do it, we optimized it carefully.

It’s very, very important for performance to fold multiple effects into one shader in certain cases - that is, to have a shader that is capable of computing several at once. For the compositing shader this means we tried to fold the application of as many effects into it as possible (and use a simpler shader when the effects are disabled) - for example, we compute glow at low resolution but we apply glow in the final compositing shader.

If we did not do this the cost of some effects would double. This is not trivial.

We also fold some effects together in interesting ways. For example, neon and bloom are really one shader. This also significantly reduces the cost.

Finally, this folding - and running some effects at low resolution - introduces challenges we have to work around. For example, imagine you have sun rays that you compute and then add to the image in the final shader. And you also have blur that you compute and then add to the final image. How do you make sure the blur blurs the sun rays if they are computed separately? This needs custom adjustment logic and shaders.

---

So, TL;DR - this is a very hard project with many open design questions and many hard engineering problems.

I’d estimate that in the time it takes us to resolve these issues and implement the system we’re happy with we could ship 30 new polished post-processing effects with different ranges of complexity.
That is, if these issues are even resolvable.

And that is why you can’t have nice things.

149 Likes

I’ve noticed that’s a problem with mixing AA and the graphics-10 object corner shadows. Here’s a 3-frame gif showing graphics 8, 9, 10. AA kicks in at 9. Those ambient shadows can already get pretty ugly by themselves on stuff like triangles and smooth terrain. The ambient shadows look really nice in your house, by the way, ignoring the bug with AA.

3 Likes

Yeah… We’ll try to do something about this.

5 Likes

I will also say my hack week project was done knowing it probably only worked on my computer. I have a pretty beefy GPU at ROBLOX HQ, so something like depth of field worked just fine on my comp. There is no way in hell it would work well on a 2- or 3-year-old smartphone. Hack week projects are just that: hacks. I really just wanted to see what I could do with shaders because it’s not my field of expertise :smile:

4 Likes

A common question is “will there be more post-effects?” You bet!

We do not have a list that we’re working on right now. At some point - after we ship the effects everywhere and the community starts playing with them and gets the hang of how they work and what’s possible - I’ll post a thread where we can discuss ideas for effects you’re missing.

12 Likes

hmmmmm

4 Likes

Maybe it’s just the things that I would have wanted to use it for - things like when a person is damaged or affected (say, you get hit by a “drunk effect” cannon by another player: your screen is supposed to go blurry, except if you’re below a certain graphics level it’s not going to affect you at all). I don’t really understand using something that you can’t be certain will affect some players, or that some players will purposely turn down their graphics settings to avoid. Imagine a zombie game similar to Apoc Rising: you’re bitten by a zombie and you need to take medication to prevent the change, and while you’re infected the blur fades in and out, making you more vulnerable to attacks by other players. Why play on level-10 settings when you could avoid this?

I’ve no idea how blur works technically, but when I’m rendering in 3D it’s usually based on samples - the more you have, the cleaner the blur looks. Do you have something similar? An ugly blur would be better than none at all, in my opinion. Perhaps an alternative could be that pixelize shader we saw at hack week? I could check the player’s quality level and enable blur if they’re over the threshold, and enable the pixelization if they’re below it. Providing that it worked on low quality levels…

1 Like

So make the screen tinted green, make controls weird, make aiming miss - why rely exclusively on blurring to make it harder to play?

Blur with fewer samples is a bit faster but there’s some fixed cost you pay as well.
Pixelization could be faster depending on the approach.

But the bottom line is: there is a cost, and your players on really low-end hardware would appreciate your game more if it did not run at 20 FPS. And it may be impossible to make the blur performant enough - some cards are bad enough that they struggle to run a level with sky, a baseplate, and a few thousand parts at 30 FPS.
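If you want the debuff to land on everyone regardless of quality level, build it out of things that always render - a rough LocalScript sketch (all values made up):

```lua
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local player = Players.LocalPlayer

-- Full-screen green tint via a GUI: GUIs render at every quality level.
local gui = Instance.new("ScreenGui")
local tintFrame = Instance.new("Frame")
tintFrame.Size = UDim2.new(1, 0, 1, 0)
tintFrame.BackgroundColor3 = Color3.fromRGB(60, 160, 60)
tintFrame.BackgroundTransparency = 0.8
tintFrame.BorderSizePixel = 0
tintFrame.Parent = gui
gui.Parent = player:WaitForChild("PlayerGui")

-- "Weird controls": roll the camera a little, after the camera update runs.
local camera = workspace.CurrentCamera
RunService:BindToRenderStep("DrunkSway", Enum.RenderPriority.Camera.Value + 1, function()
	local sway = math.sin(tick() * 2) * 0.02 -- radians of roll
	camera.CFrame = camera.CFrame * CFrame.Angles(0, 0, sway)
end)
```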

2 Likes