Ray tracing how?

So for weeks now I've been trying to find out how ray tracing works.
I'm talking about ray-tracing shaders, like the ones in ReShade, something like that.

If anyone knows how to make one, or has code they can share so I can see for myself, please reply.


I’d recommend starting out with this.

I know that, but I don't know how you go from a raycast to realistic lighting.

You don’t because you can’t modify how the Roblox Engine works.


I want to make something like this:

https://www.roblox.com/games/3941676335/Ray-Tracing-Project-v2-2-3

Those shadows and water reflections can be made by changing lighting properties. The reflections on the floor are made by making the floor slightly transparent and placing a copy of all the models above it underneath the floor.


I know it's been 3 months, but I think you should get an answer from someone who actually writes ray tracers. Do note that we don't really code ray tracers in Roblox, because we don't have direct access to the GPU, which makes it slow. I'd recommend you start off with Shadertoy. Anyway, all a ray tracer is is a way to render a 3D scene with ray intersections.

The algorithm works like this: we start with a point in space (the camera) and cast a ray for each pixel. To do that, we transform each pixel from its 2D screen coordinates into a normalized range and use that as our x, y coordinates, apply a z offset (which defines the forward direction of our camera), then normalize the result. We use those vectors as the directions of the rays cast from our camera. From there we figure out what each ray hit (if it hit anything at all), find the position of the hit, and change that pixel's color accordingly. There are many ways to code a ray-triangle intersection algorithm (in fact, that's the nitty-gritty part you'll want to optimize). A lot of the time, the reason you'll see spheres in ray tracers is that they're the simplest to program, and I'd like to teach you how.
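
To make that concrete, here's the overall shape of the loop as a Python sketch. The helper names (make_camera_ray, intersect_sphere, shade) are just illustrative; each one is filled in section by section below.

```python
# Overall shape of a CPU ray tracer: one ray per pixel.
# The helpers referenced here are defined in the sections below.
WIDTH, HEIGHT = 256, 256

def render():
    image = []
    for py in range(HEIGHT):
        row = []
        for px in range(WIDTH):
            origin, direction = make_camera_ray(px, py)  # "Rays" section
            hit_t = intersect_sphere(origin, direction)  # "Finding t" / "Is it tangent?"
            row.append(shade(origin, direction, hit_t))  # "Making sense of the data"
        image.append(row)
    return image
```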

The Camera

So we need to define a way to represent our camera. We'll basically represent it using a Vector3 value as the position. However, you may want to switch this to a transformation matrix later so the camera can rotate; otherwise you might get weird errors…

Anyway, we then need to for-loop over each pixel (in shader coding this loop doesn't actually exist; instead it all happens at once, in parallel). We need to know each pixel's location, and from there we'll change that pixel's color depending on this program.
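
As a sketch, the camera and the pixel mapping could look like this (the [-0.5, 0.5] range matches the Rays section below):

```python
# Camera as a bare position (a transformation matrix would also allow rotation).
CAMERA_POS = (0.0, 0.0, 0.0)

def pixel_to_uv(px, py, width=WIDTH, height=HEIGHT):
    # Map pixel indices into [-0.5, 0.5] on both axes; flip y so up is positive.
    u = px / width - 0.5
    v = 0.5 - py / height
    return u, v
```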

Rays

Basically, we can define a ray from the parametric ray equation. It says a ray is defined for positive values of t:

PointOnRay = origin + direction * t

Given this, we can ignore t for now and define the direction, since we already know the origin. We'll basically just map our pixel's position to a range of -0.5 to 0.5, then say that our direction is defined as the unit vector of (pixel.x, pixel.y, 2). The 2 doesn't really matter much; it acts like a focal length (a bigger value narrows the field of view), and after normalization the directions will all be of length 1 either way.
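
Continuing the sketch, ray construction might look like this:

```python
import math

def normalize(v):
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0] / length, v[1] / length, v[2] / length)

def make_camera_ray(px, py):
    u, v = pixel_to_uv(px, py)
    # z = 2 acts like a focal length: bigger z = narrower field of view.
    direction = normalize((u, v, 2.0))
    return CAMERA_POS, direction

def point_on_ray(origin, direction, t):
    # PointOnRay = origin + direction * t
    return tuple(o + d * t for o, d in zip(origin, direction))
```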

Sphere

The implicit equation of a circle tells us x^2 + y^2 = r^2, or in other words x = sqrt(r^2 - y^2). (A full sphere would be x^2 + y^2 + z^2 = r^2, but in the plane containing the ray and the sphere's center, the circle form is all we need.) Given this, we need to solve for two values: r and y. Now r is pretty intuitive, it's our radius, so we can define it manually along with our sphere's position.
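
In the sketch, the sphere is just two hand-picked values (the numbers are illustrative):

```python
# A sphere is fully described by a center and a radius.
SPHERE_CENTER = (0.0, 0.0, 5.0)  # 5 units in front of the camera
SPHERE_RADIUS = 1.0
```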

Finding t

From the data we have, we can find the point on the ray that is closest to the sphere's center (the foot of the perpendicular from the center onto the ray). A cool thing about dot products is that they tell you how much one vector projects onto another, giving us this perfectly neat formula: t is the dot product of (sphere position - origin) with our ray's direction. We can then get that closest point by just plugging t into our ray parametrization formula. A really cool consequence is that we can skip a lot of math: if the magnitude of the vector from that point to the sphere's center is bigger than the radius, the ray misses, so skip it.
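
That projection step, as a sketch:

```python
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def closest_approach(origin, direction):
    # Project (center - origin) onto the ray direction; since direction is a
    # unit vector, this t lands on the point of the ray closest to the center.
    to_center = tuple(c - o for c, o in zip(SPHERE_CENTER, origin))
    t = dot(to_center, direction)
    closest = point_on_ray(origin, direction, t)
    # Perpendicular distance from the ray to the sphere's center.
    offset = tuple(c - p for c, p in zip(SPHERE_CENTER, closest))
    return t, math.sqrt(dot(offset, offset))
```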

Is it tangent?

Because of the square root, we'll get 2 values (the ray enters the sphere and exits it), which makes sense! But we only need the one facing the camera. To get it, we just subtract the x scalar from our t scalar. Plugging that into the ray equation gives the point of intersection on the sphere!
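
Combined with the miss test above, intersection becomes:

```python
def intersect_sphere(origin, direction):
    t, dist = closest_approach(origin, direction)
    if dist > SPHERE_RADIUS or t < 0:
        return None  # the ray misses, or the sphere is behind the camera
    # Half-chord length from the circle equation: x = sqrt(r^2 - y^2).
    x = math.sqrt(SPHERE_RADIUS**2 - dist**2)
    # The two roots are t - x and t + x; t - x is the side facing the camera.
    return t - x
```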

Making sense of the data

This is really simple. All we gotta do is use what I like to call an inverse lerp function (it basically maps a value to a 0-1 percentage given two endpoint values). Our first endpoint is the t value we got from that dot product (the depth of the sphere's center along the ray), and the other comes from the difference between that and our intersection point. This basically returns values that get closer to 0 the farther the surface is from the sphere's center along the z axis, giving us this shading effect (I don't understand light rays yet xd, still trying to make sense of rotation matrices). We then set our color to this value, since, hey, Color3 takes values from 0 to 1.
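
One way to read that as code; this particular mapping, brightness = (t - hit_t) / radius, is my interpretation of the endpoints described above. It comes out as 1 in the middle of the sphere and falls to 0 at the silhouette:

```python
def inverse_lerp(a, b, v):
    # Maps v to a 0..1 percentage of the way from a to b.
    return (v - a) / (b - a)

def shade(origin, direction, hit_t):
    if hit_t is None:
        return (0.0, 0.0, 0.0)  # background: black
    t, _ = closest_approach(origin, direction)
    # hit_t ranges from t (at the silhouette) down to t - radius (dead
    # center), so this comes out as 0 at the edge and 1 in the middle.
    brightness = inverse_lerp(t, t - SPHERE_RADIUS, hit_t)
    return (brightness, brightness, brightness)
```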

Here is some example code from me (not in Lua, sorry, but Roblox isn't really the software you want for this!).
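
Tying the sketches above together into something runnable, a driver could write the result out as a PPM image (the file name and output format are just illustrative choices):

```python
def main():
    # P3 is the plain-text PPM format, viewable without any extra libraries.
    with open("sphere.ppm", "w") as f:
        f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")
        for py in range(HEIGHT):
            for px in range(WIDTH):
                origin, direction = make_camera_ray(px, py)
                color = shade(origin, direction, intersect_sphere(origin, direction))
                f.write(" ".join(str(int(255 * c)) for c in color) + "\n")

if __name__ == "__main__":
    main()
```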


It's actually called "artificial graphics". The RTX is actually not RTX or ray tracing. Basically, the floor below is glass, but it also has a texture on it that is mostly invisible, and the exact same detail is duplicated below it using raycasting and Region3. The lighting is ParticleEmitters that are just colors, with a little bit of Future lighting mixed in.
