After being inspired by RetroRaster (strongly recommended if you want the most fully-fledged rendering solution outside of Roblox's rasteriser), I wanted to educate and challenge myself by learning how to create my own rendering solution via raytracing.
Coming into this I had no prior graphics programming knowledge, so it has definitely been an experience so far. If you feel inspired to make your own raytracer, please do not follow my guidance, as I am merely documenting my experience as I go along and build this system for the first time. I will probably make fundamental mistakes that you shouldn't follow.
STEP 1: CORRECT OUTPUT
Since this project uses EditableImages, it was important for me to learn how to output to them correctly. Following a guide, I was able to confirm that I could assign a colour to the intended pixel.
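As a rough illustration, here is a minimal sketch of that output step. It assumes the newer buffer-based EditableImage API and a hypothetical ImageLabel named `Canvas`; the exact calls have changed during the EditableImage beta, so treat the signatures as an assumption rather than what this project uses:

```lua
local AssetService = game:GetService("AssetService")

-- Hypothetical setup: a small canvas displayed on an ImageLabel named "Canvas".
local SIZE = Vector2.new(128, 128)
local editableImage = AssetService:CreateEditableImage({ Size = SIZE })
script.Parent.Canvas.ImageContent = Content.fromObject(editableImage)

-- Write a single RGBA pixel (bytes 0-255) at the given pixel coordinate.
local function setPixel(x: number, y: number, colour: Color3)
	local pixel = buffer.create(4)
	buffer.writeu8(pixel, 0, math.floor(colour.R * 255))
	buffer.writeu8(pixel, 1, math.floor(colour.G * 255))
	buffer.writeu8(pixel, 2, math.floor(colour.B * 255))
	buffer.writeu8(pixel, 3, 255) -- fully opaque
	editableImage:WritePixelsBuffer(Vector2.new(x, y), Vector2.one, pixel)
end

setPixel(10, 10, Color3.fromRGB(255, 0, 0)) -- a single red pixel
```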
STEP 2: SEEING THE WORLD
Next, we make use of game.Workspace:Raycast() to grab colour information. We then apply the EditableImage to an ImageLabel and treat it like a cutout of the camera's viewport. This gives us a 'window' into the world.
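A minimal sketch of that sampling step might look like the following, with the render distance and sky colour as assumed placeholder values (Terrain is ignored for simplicity):

```lua
local RENDER_DISTANCE = 500 -- assumed value
local SKY_COLOUR = Color3.fromRGB(120, 170, 255) -- assumed sky fallback

-- Fire a single ray into the world and return the colour of whatever it hits.
local function sampleColour(origin: Vector3, direction: Vector3): Color3
	local result = workspace:Raycast(origin, direction * RENDER_DISTANCE)
	if result and not result.Instance:IsA("Terrain") then
		return result.Instance.Color
	end
	return SKY_COLOUR
end
```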
STEP 3: CORRECT PERSPECTIVE
While not shown in the images, the previous method warped towards the edges of the screen. This was because the FOV was spread linearly based on how far each pixel was from the render's midpoint. We had to apply the same maths that Roblox (and many other engines) uses to ensure the render doesn't do this 'spherical' warping.
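For reference, the standard fix is to shoot rays through a flat virtual image plane scaled by tan(FOV / 2), rather than spreading angles linearly per pixel. A sketch, assuming Camera.FieldOfView is the vertical FOV (which it is in Roblox):

```lua
local camera = workspace.CurrentCamera

-- Build a perspective-correct ray direction for the pixel at (px, py).
local function getRayDirection(px: number, py: number, width: number, height: number): Vector3
	local aspect = width / height
	local halfHeight = math.tan(math.rad(camera.FieldOfView) / 2)
	-- Map pixel centres to [-1, 1] on both axes; y is flipped so up is positive.
	local ndcX = (px + 0.5) / width * 2 - 1
	local ndcY = 1 - (py + 0.5) / height * 2
	local cf = camera.CFrame
	local direction = cf.LookVector
		+ cf.RightVector * (ndcX * halfHeight * aspect)
		+ cf.UpVector * (ndcY * halfHeight)
	return direction.Unit
end
```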
STEP 4: REFLECTIONS
The hallmark of a raytracing engine is the ability to simulate light in a way that rasterisation just can't. We implement this by bouncing and continuing the ray, then mixing the colours together to get reflective surfaces. For performance, some raytracers apply a render distance, a maximum number of bounces, or both.
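A minimal sketch of that bounce, assuming a fixed 50/50 colour mix per bounce and a small offset to stop the ray re-hitting the surface it just left (both of these are assumptions for illustration, not the project's actual values):

```lua
local MAX_BOUNCES = 3 -- assumed bounce limit

-- Recursively trace a ray, mirroring it about the surface normal on each hit.
local function traceRay(origin: Vector3, direction: Vector3, depth: number): Color3
	local result = workspace:Raycast(origin, direction * RENDER_DISTANCE)
	if not result or result.Instance:IsA("Terrain") then
		return SKY_COLOUR
	end

	local surfaceColour = result.Instance.Color
	if depth >= MAX_BOUNCES then
		return surfaceColour
	end

	-- Reflected direction: d - 2 * (d . n) * n
	local reflected = direction - 2 * direction:Dot(result.Normal) * result.Normal
	local bounced = traceRay(result.Position + result.Normal * 0.001, reflected, depth + 1)

	return surfaceColour:Lerp(bounced, 0.5) -- fixed mix purely for illustration
end
```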
STEP 5: INFERRING MATERIAL PROPERTIES
Now that we know the raytracer can navigate the world properly, I played around with higher-resolution renders, allowing closer inspection of how the system is running. I also enabled the raytracer to infer reflectivity from the material it hits, and it currently terminates the ray if the reflectance is 0. This is good for now, but it will need to change later on once lighting is involved.
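Building on the earlier traceRay sketch, inferring reflectivity could look something like this, using the hit part's Reflectance property as the mix factor. This is my assumption for how "inferring from the material" might map onto the API; the project may read material properties differently:

```lua
-- Shade a hit, bouncing only when the surface is actually reflective.
local function shadeHit(result: RaycastResult, direction: Vector3, depth: number): Color3
	local part = result.Instance
	local surfaceColour = part.Color
	local reflectance = part.Reflectance -- 0 to 1 on BaseParts

	if reflectance <= 0 or depth >= MAX_BOUNCES then
		return surfaceColour -- terminate the ray: nothing left to reflect
	end

	local reflected = direction - 2 * direction:Dot(result.Normal) * result.Normal
	local bounced = traceRay(result.Position + result.Normal * 0.001, reflected, depth + 1)

	return surfaceColour:Lerp(bounced, reflectance)
end
```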
STEP 6: FOG
Since our raytracer has a render distance, we can use it to apply fog to the world. Implementing this was tricky: now that we are recursing rays for reflections and so on, any slight change in the logic drastically changes the output. After much trial and error, I was able to add fog in a believable manner, even to rays that are reflected multiple times.
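One way to express that, as a sketch: blend the final colour toward a fog colour by the fraction of the render distance the ray travelled. RaycastResult.Distance supplies the per-hit distance; for reflected rays, the distance would presumably be accumulated across every bounce before fog is applied.

```lua
local FOG_COLOUR = Color3.fromRGB(200, 200, 210) -- assumed fog colour

-- Blend toward the fog colour by how far the ray travelled in total.
local function applyFog(colour: Color3, distanceTravelled: number): Color3
	local fogAmount = math.clamp(distanceTravelled / RENDER_DISTANCE, 0, 1)
	return colour:Lerp(FOG_COLOUR, fogAmount)
end
```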
STEP 7: LIGHTING, AMBIENT
Despite all this progress, we have not implemented any lighting yet. That is a very delicate process and will require much research on my end, but in the meantime, multiplying each ray's colour by an ambient colour gives us easy control over the colour of the environment. It's worth noting that I have added an FPS counter to the project showing the framerate of this system, as it can run in realtime at lower resolutions. Methods for improving performance will be explored at a later stage.
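The ambient multiply itself is just a component-wise product; a sketch with an assumed ambient colour:

```lua
local AMBIENT_COLOUR = Color3.fromRGB(180, 180, 200) -- assumed ambient tint

-- Tint a traced colour by the ambient colour, channel by channel.
local function applyAmbient(colour: Color3): Color3
	return Color3.new(
		colour.R * AMBIENT_COLOUR.R,
		colour.G * AMBIENT_COLOUR.G,
		colour.B * AMBIENT_COLOUR.B
	)
end
```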
STEP 8: LIGHTING, SHADOWS (Experimentation)
This step will most likely be rewritten once I have made further progress on shadows.
I experimented with shadows to see what the world would look like with them. During this I ran into issues with rays missing the parts that the shadow was meant to fall on, and many recursion issues stemming from my ray termination condition being based on render distance rather than bounce depth. However, seeing this working at all is very exciting and makes me want to work on the project even more.
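For context, the experiment boils down to a basic shadow ray; a sketch with a hypothetical sun direction and shadow strength (neither comes from the project), using the same normal offset to sidestep the self-hit issues mentioned above:

```lua
local LIGHT_DIRECTION = Vector3.new(-0.5, -1, -0.3).Unit -- hypothetical sun direction
local SHADOW_STRENGTH = 0.6 -- hypothetical darkening amount

-- Cast a ray from the hit point toward the light; darken the colour if blocked.
local function applyShadow(colour: Color3, result: RaycastResult): Color3
	local origin = result.Position + result.Normal * 0.001 -- offset to avoid self-hits
	local blocked = workspace:Raycast(origin, -LIGHT_DIRECTION * RENDER_DISTANCE)
	if blocked then
		return colour:Lerp(Color3.new(0, 0, 0), SHADOW_STRENGTH)
	end
	return colour
end
```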