Roblox's occlusion culling fails when the camera is farther from occluding surfaces

When the player camera (or the viewer in Studio) is close to an occluding surface, parts behind that surface are correctly culled. However, when the camera moves further away from the surface, those parts render again even though they are still completely behind it and cannot be seen directly by the player. This seems to happen with both large and small parts, though smaller parts start rendering at a closer camera distance than larger ones.

I tested this using two 1x16x16 parts and two 16x16 grids of 1x1x1 parts to show how the incorrect rendering differs between larger and smaller parts. Contrasting colors were used so the parts are easy to identify with wireframe rendering turned on.

Below is a video of this issue:


Hi!

That is a really nicely done video report. It demonstrates two known performance/quality tradeoffs.

  1. Parts that are small on the screen are rejected as occluders early on. The time they take to process isn’t usually worth the culling they give in real-world scenarios. This does mean if you make a wall of blocks, it will cull when you’re close and won’t cull when you’re far, as seen in the video.
  2. Software occlusion is conservative. If the 2D bounding box of a hidden object is too close to the edges of the object hiding it, then occlusion culling can’t quickly prove that it is hidden. Rather than taking a lot more time to get slightly better occlusion, it stops after spending enough time to get most of the benefits.

The bounding box nature of this can be seen in your example by looking at the blue/yellow blocks at an angle. You will find that you can easily get into situations where the yellow block's 2D bounding box pokes out from behind the blue block's edges, preventing the yellow block from being culled. This is the same time/quality tradeoff as (2).
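To make the two checks concrete, here is a rough sketch in C++ of the kind of early-out logic described above. This is illustrative only, not the engine's actual implementation: a real software occlusion pass rasterizes occluders into a small depth buffer, and every name and threshold below is made up.

```cpp
#include <algorithm>

// A 2D bounding box in screen space, in pixels.
struct ScreenRect {
    float minX, minY, maxX, maxY;
};

// (1) Occluders that cover only a small screen area are skipped early: testing other
// parts against them usually costs more than the draw calls it would save.
bool worthUsingAsOccluder(const ScreenRect& occluder, float minPixelArea = 64.0f * 64.0f)
{
    float area = std::max(0.0f, occluder.maxX - occluder.minX) *
                 std::max(0.0f, occluder.maxY - occluder.minY);
    return area >= minPixelArea;
}

// (2) Conservative containment test: the hidden object's whole 2D bounding box must
// fit inside the occluder's box with some margin. If it pokes past any edge, the
// object is treated as visible instead of spending more time on a finer test.
bool conservativelyOccluded(const ScreenRect& object, const ScreenRect& occluder,
                            float margin = 1.0f)
{
    return object.minX >= occluder.minX + margin &&
           object.maxX <= occluder.maxX - margin &&
           object.minY >= occluder.minY + margin &&
           object.maxY <= occluder.maxY - margin;
}
```

In this toy model, a far-away wall of 1x1x1 blocks fails check (1) block by block, since each block covers too few pixels to be worth testing against, and a hidden part viewed at an angle can fail check (2) because its screen-space box pokes past the occluder's edges.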

It’s probably worth noting here that occlusion culling doesn’t guarantee what specifically it will cull. It makes internal judgments about how to spend its time towards improving overall framerate. Once the time it would take to cull something exceeds the time you’d spend drawing it, you should stop trying to cull it! We have to use heuristics to decide that tradeoff, and we may choose to tweak those heuristics at any time to improve performance overall.
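That break-even rule can be written down as a tiny cost model. Again, this is purely an illustration with invented constants, not how the engine actually weighs the decision:

```cpp
// Toy break-even heuristic: only spend time testing an object for occlusion when the
// estimated cost of the test is lower than the estimated cost of just drawing it.
struct ObjectStats {
    int   triangleCount;
    float screenAreaPx; // projected area on screen, in pixels
};

bool worthTestingForOcclusion(const ObjectStats& obj)
{
    // Pretend draw cost grows with vertex work and fill area...
    float estimatedDrawCost = 0.001f * obj.triangleCount + 0.0001f * obj.screenAreaPx;
    // ...while the occlusion test costs a roughly fixed amount per object.
    float estimatedCullCost = 0.05f;

    return estimatedCullCost < estimatedDrawCost;
}
```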

Hope this is helpful and informative!

Thanks again for the nicely done video! That made understanding the situation super easy.


I marked this as the solution, but are there any plans to let users decide what size of part gets rejected as an occluder early on, or would this be too detrimental to performance? Many Roblox games use voxel-based rendering and physics for structures, and I believe occluding small parts could vastly improve the client experience.

I don’t get why occlusion culling wasn’t implemented at the hardware level, since I assume most devices playing Roblox support occlusion culling queries, other than devices that only support OpenGL ES 2.0.

Traditional GPU occlusion queries have to wait one or more frames for their results if you don’t want to stall. This causes noticeable bugs when things get dis-occluded, like doors opening or walking around corners. They also tend to have inconvenient restrictions, such as how many queries you can issue per frame.
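For anyone curious what that latency looks like in practice, here is a minimal sketch of the classic query pattern using desktop OpenGL calls (the loader header and the two draw helpers are placeholders, and real code typically double-buffers query objects so an unread result is not overwritten). The visibility answer used for this frame's draw is one the GPU produced for an earlier frame, which is exactly where the dis-occlusion pop comes from:

```cpp
#include <GL/glew.h> // any GL loader; used here only for the query API

struct QueriedObject {
    GLuint query = 0;
    bool   queryIssued = false;
    bool   visibleLastFrame = true; // stay conservative until a result exists
};

void drawWithOcclusionQuery(QueriedObject& obj)
{
    if (obj.query == 0)
        glGenQueries(1, &obj.query);

    // Read back an earlier frame's result only if it is already available; never block.
    if (obj.queryIssued) {
        GLuint available = 0;
        glGetQueryObjectuiv(obj.query, GL_QUERY_RESULT_AVAILABLE, &available);
        if (available) {
            GLuint anySamplesPassed = 0;
            glGetQueryObjectuiv(obj.query, GL_QUERY_RESULT, &anySamplesPassed);
            obj.visibleLastFrame = (anySamplesPassed != 0);
        }
        // else: reuse the stale answer rather than stalling the CPU on the GPU.
    }

    // Issue this frame's query by rasterizing a cheap proxy such as the bounding box.
    glBeginQuery(GL_ANY_SAMPLES_PASSED, obj.query);
    // drawBoundingBoxProxy(obj);   // hypothetical helper
    glEndQuery(GL_ANY_SAMPLES_PASSED);
    obj.queryIssued = true;

    // Draw the real object based on the (possibly frame-old) answer.
    if (obj.visibleLastFrame) {
        // drawFullObject(obj);     // hypothetical helper
    }
}
```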

You can do occlusion culling fully on the GPU using compute on depth buffers. But not all GPUs we support have the needed features, and enough of the GPUs that do have the features have them on a slow path. To top it off, the GPUs that can’t do this are often the ones that need culling the most.
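For reference, the fully-on-GPU approach usually means building a hierarchical depth ("Hi-Z") mip chain from the depth buffer with a max-reduction compute pass, then testing each object's screen-space bounds against the farthest depth stored at a coarse mip. The sketch below shows just the test, written as plain C++ over an in-memory mip so the logic is visible; in a real renderer both the mip build and the test run in compute shaders, which is where the feature and fast-path requirements come from. All names here are hypothetical.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One level of a hierarchical depth buffer, where each texel stores the *farthest*
// (max) depth of the screen region it covers. Larger depth = farther from the camera.
struct DepthMip {
    int width = 0, height = 0;
    std::vector<float> texels;

    float at(int x, int y) const
    {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return texels[y * width + x];
    }
};

// An object's bounds already projected to screen space: a pixel rectangle plus the
// nearest depth of its bounding volume.
struct ProjectedBounds {
    float minX, minY, maxX, maxY;
    float nearestDepth;
};

// Pick a mip level where the rectangle spans roughly one or two texels, sample the
// footprint's corners, and cull only if the object's nearest point is still behind
// the farthest depth already drawn there.
bool hiZOccluded(const ProjectedBounds& b, const std::vector<DepthMip>& mips,
                 int fullResWidth)
{
    float widthPx  = std::max(1.0f, b.maxX - b.minX);
    float heightPx = std::max(1.0f, b.maxY - b.minY);
    int level = (int)std::ceil(std::log2(std::max(widthPx, heightPx)));
    level = std::clamp(level, 0, (int)mips.size() - 1);

    const DepthMip& mip = mips[level];
    float scale = (float)mip.width / (float)fullResWidth;
    int x0 = (int)(b.minX * scale), x1 = (int)(b.maxX * scale);
    int y0 = (int)(b.minY * scale), y1 = (int)(b.maxY * scale);

    float farthest = std::max({ mip.at(x0, y0), mip.at(x1, y0),
                                mip.at(x0, y1), mip.at(x1, y1) });

    return b.nearestDepth > farthest; // behind everything already drawn => hidden
}
```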
