One more thing: they’re probably using a custom lighting engine tweaked to their own needs with radiance cascades added, but it seems more baked than real-time to me. They could rebake whenever a light source or occluder moves and keep applying the result, but that isn’t really real-time anymore if the FPS plummets. In the meantime there are other resources you could use that may perform better and don’t come with the hassle of a custom lighting engine. I’ll try it if I have the time. Also, you mentioned reading the code; can you share it? And do you know where the raycasts are calculated from, the light source or the camera?
So they just render the light by raycasting into the scene, either from the light source or from the camera. I think that’s what they did here, since some parts of the video showcasing their lighting system show the specular highlights flickering. Based on the direct and indirect hits along each ray’s travel, they detect the reflection point, place a point light at that location, and set its properties from the information collected from the environment along everything the ray traveled through.
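For anyone curious what that looks like in practice, here is a minimal Luau sketch of the idea: cast one ray from a light source into the scene and park a PointLight at the hit point, tinted by the surface it struck. The falloff formula, the 50/50 color mix, and the ray length are my own placeholder assumptions, not anything taken from their system.

```lua
-- Hedged sketch: one ray from a light source, a PointLight dropped at
-- the hit point and tinted by the surface it struck. Falloff and color
-- mix below are illustrative assumptions.
local function placeBounceLight(origin: Vector3, direction: Vector3, lightColor: Color3)
	local params = RaycastParams.new()
	params.FilterType = Enum.RaycastFilterType.Exclude
	params.FilterDescendantsInstances = {} -- add the emitter part here if it has one

	local result = workspace:Raycast(origin, direction.Unit * 100, params)
	if not result then
		return
	end

	-- An Attachment on Terrain lets us place a light at an arbitrary world position.
	local attachment = Instance.new("Attachment")
	attachment.WorldPosition = result.Position
	attachment.Parent = workspace.Terrain

	local bounce = Instance.new("PointLight")
	-- Tint the bounce light by the hit surface's color (a simple 50/50 mix, an assumption).
	bounce.Color = lightColor:Lerp(result.Instance.Color, 0.5)
	-- Fade brightness with travel distance (inverse-square-ish, also an assumption).
	local distance = (result.Position - origin).Magnitude
	bounce.Brightness = 2 / math.max(distance * distance * 0.01, 1)
	bounce.Range = 16
	bounce.Parent = attachment
end
```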
Hi! I am working on an updated version of my old implementation listed under this post with Global Illumination and will release a detailed explanation in a new post. Many people use actual lights to imitate global illumination, but this can often be an inefficient solution because of specular highlights and potential lag with multiple light sources. This means it’s often incompatible with Future is bright lighting.
A good alternative is to use Surface Space lighting instead of World Space lighting.
The only resources that Roblox has for image manipulation at the moment are SurfaceGui, EditableImage, and ImageLabel, so I used a mix of those to project points onto a 3D surface! Every surface of every object is treated as a cube or box for now, and later as a mesh (once Roblox releases getVertexNormals along with EditableMesh). I stuck with Radiosity concepts by dividing each surface into patches, then dividing those patches into pixels and drawing the results with EditableImages. That way, each patch can be a uniform size across the whole world, and any pixels off the edge of a surface can be ignored.

For the rendering, I really wanted to use a cube map! But this was impossible because I’d have to render the scene from every pixel’s perspective. In the future, maybe I can render a low-quality image from the perspective of each patch and interpolate each pixel’s indirect lighting based on that, but that remains to be seen! So I just went with raytracing direct lighting, using several rays per pixel for soft shadows. Indirect lighting is optional via a Monte Carlo pathtracer. The results take from 30 seconds to 4-5 minutes, but don’t usually go beyond that.
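As a rough illustration of the patch/pixel split and the direct-lighting pass described above, here is a minimal Luau sketch (not the actual implementation): it walks the top face of a box part in uniform patches, skips pixels that hang off the edge, and shadow-tests each remaining pixel against a single light with a few jittered rays. The patch size, pixel count, sample count, and jitter radius are placeholder values; the real system would write the resulting colors into an EditableImage rather than returning a table.

```lua
-- Hedged sketch: divide the top face of a box part into uniform patches
-- and pixels, then estimate direct lighting per pixel with a few jittered
-- shadow rays toward one light. All constants are illustrative assumptions.
local PATCH_SIZE = 2       -- studs per patch (assumption)
local PIXELS_PER_PATCH = 4 -- pixels along each patch edge (assumption)
local SHADOW_SAMPLES = 4   -- rays per pixel for soft shadows (assumption)

local function directLightTopFace(part: BasePart, lightPos: Vector3, lightColor: Color3)
	local size = part.Size
	local pixelSize = PATCH_SIZE / PIXELS_PER_PATCH
	local results = {}

	local params = RaycastParams.new()
	params.FilterType = Enum.RaycastFilterType.Exclude
	params.FilterDescendantsInstances = { part }

	-- Walk the face in patch-sized steps so every patch is a uniform size.
	for patchX = -size.X / 2, size.X / 2, PATCH_SIZE do
		for patchZ = -size.Z / 2, size.Z / 2, PATCH_SIZE do
			for i = 0, PIXELS_PER_PATCH - 1 do
				for j = 0, PIXELS_PER_PATCH - 1 do
					local px = patchX + (i + 0.5) * pixelSize
					local pz = patchZ + (j + 0.5) * pixelSize
					-- Pixels that hang off the edge of the surface are ignored.
					if px > size.X / 2 or pz > size.Z / 2 then
						continue
					end
					local worldPos = part.CFrame:PointToWorldSpace(
						Vector3.new(px, size.Y / 2 + 0.05, pz))

					-- Average several jittered shadow rays for a soft shadow edge.
					local lit = 0
					for _ = 1, SHADOW_SAMPLES do
						local jitter = Vector3.new(math.random() - 0.5, math.random() - 0.5, math.random() - 0.5)
						local toLight = (lightPos + jitter) - worldPos
						if workspace:Raycast(worldPos, toLight, params) == nil then
							lit += 1
						end
					end
					local shade = lit / SHADOW_SAMPLES
					-- In the real system this would be written to an EditableImage pixel.
					table.insert(results, {
						position = worldPos,
						color = Color3.new(0, 0, 0):Lerp(lightColor, shade),
					})
				end
			end
		end
	end
	return results
end
```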
This older implementation does not support global illumination, only direct lighting smoothed using anti-aliasing. A new version will be released soon using EditableImages and pathtraced GI.
This is extremely cool! One suggestion that could potentially reduce lag: instead of shooting multiple rays for each patch, shoot one ray at each patch intersection and interpolate the pixels between them using Color3:Lerp(). This could reduce your ray count by a lot while barely reducing quality. Just a suggestion, though!
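For what it's worth, here's a small Luau sketch of that interpolation idea: one color sample per patch corner, with the interior pixels filled in by bilinear blending via Color3:Lerp(). The helper name, grid size, and corner colors are all placeholders.

```lua
-- Hedged sketch: fill a patch's pixels by bilinearly interpolating four
-- corner samples with Color3:Lerp(). One ray per patch intersection
-- supplies the corner colors; everything in between is lerped.
local function fillPatch(corners: { Color3 }, pixelsPerEdge: number): { { Color3 } }
	-- corners = { topLeft, topRight, bottomLeft, bottomRight }
	local grid = {}
	for row = 1, pixelsPerEdge do
		grid[row] = {}
		local v = (row - 1) / (pixelsPerEdge - 1)
		-- Blend down the left and right edges first...
		local left = corners[1]:Lerp(corners[3], v)
		local right = corners[2]:Lerp(corners[4], v)
		for col = 1, pixelsPerEdge do
			local u = (col - 1) / (pixelsPerEdge - 1)
			-- ...then blend across the row for the final pixel color.
			grid[row][col] = left:Lerp(right, u)
		end
	end
	return grid
end

-- Example: four corner ray results expanded into an 8x8 pixel patch.
local patchPixels = fillPatch({
	Color3.new(1, 1, 1),
	Color3.new(0.8, 0.8, 0.7),
	Color3.new(0.2, 0.2, 0.2),
	Color3.new(0.5, 0.4, 0.3),
}, 8)
```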
Have you heard of the trick where you make a part, set its Transparency to something above 1 (like 99), and add a Highlight? That is 100% what they did for the reflections; it’s not perfect, but it looks nice. No idea how they did the lighting, maybe SurfaceGuis?
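If anyone wants to try the trick described above, a minimal Luau version might look like the following; the Transparency value is the one mentioned in the post, but the Highlight settings are just guesses at reasonable defaults.

```lua
-- Hedged sketch of the Transparency-above-1 + Highlight trick described
-- above. Highlight property values are illustrative, not pulled from their game.
local part = Instance.new("Part")
part.Anchored = true
part.Size = Vector3.new(8, 0.2, 8)
part.Position = Vector3.new(0, 0.1, 0)
part.Transparency = 99 -- transparency pushed far above 1, per the trick

local highlight = Instance.new("Highlight")
highlight.FillColor = Color3.fromRGB(120, 170, 255)
highlight.FillTransparency = 0.7
highlight.OutlineTransparency = 1
highlight.DepthMode = Enum.HighlightDepthMode.Occluded
highlight.Parent = part -- a Highlight adorns its parent when no Adornee is set

part.Parent = workspace
```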
This would be an amazing suggestion! It’s very close to what Radiance Cascades do: I wanted to shoot only a few rays per patch and then interpolate the results as a sort of mock cubemap, but only from the perspective of the patch. However, I’ve been having issues smoothing the entire surface because it is divided into patches, meaning the results have to be recompiled after rendering!
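In case it helps with that recompiling step, here is one way the per-patch grids could be stitched back into a single surface grid with seam pixels averaged; the nested-array layout and the simple 50/50 seam blend are assumptions of mine, not the actual data structure.

```lua
-- Hedged sketch: stitch per-patch pixel grids into one surface-wide grid,
-- then soften patch seams by averaging each seam pixel with its neighbor
-- across the boundary. The nested-array layout is an assumption.
local function recompileSurface(patches, pixelsPerPatch: number)
	local surface = {}
	-- Copy every patch's pixels into one big grid.
	for pr, patchRow in ipairs(patches) do
		for pc, patch in ipairs(patchRow) do
			for r = 1, pixelsPerPatch do
				local row = (pr - 1) * pixelsPerPatch + r
				surface[row] = surface[row] or {}
				for c = 1, pixelsPerPatch do
					surface[row][(pc - 1) * pixelsPerPatch + c] = patch[r][c]
				end
			end
		end
	end
	-- Blend across horizontal seams so adjacent patches meet smoothly;
	-- vertical seams would be handled the same way (omitted for brevity).
	for row = pixelsPerPatch, #surface - 1, pixelsPerPatch do
		for col = 1, #surface[row] do
			local blended = surface[row][col]:Lerp(surface[row + 1][col], 0.5)
			surface[row][col], surface[row + 1][col] = blended, blended
		end
	end
	return surface
end
```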