What are you working on currently? (2017)

Are you using fractals?

Nope

Got a bit more done. Managed to render this 500x300 image after adding some more effects and optimizations :smiley:

Here’s a progress comparison of this morning vs now (rendering a tree, top-down)

20 Likes

hmph

6 Likes

Wait until supersampling, textures, depth of field, and ambient occlusion!

1 Like

The only thing you’d be missing is anti-aliasing, because it doesn’t look great being that pixelated. Still better than what I did.

Supersampling is an anti-aliasing method. You shoot multiple rays through the space of each pixel (“super” = more samples, since you’re shooting more than one ray per pixel) in some pattern (like a grid, or just randomly sampled), and average the colors that you get from those rays to form the final color of that pixel.
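
In case it helps, here’s a minimal sketch of that idea in Lua. `traceRay(u, v)` is a hypothetical function (not from this thread) that returns an `{r, g, b}` color for a ray shot through normalized screen coordinates:

```lua
-- Minimal supersampling sketch: shoot an N x N jittered grid of rays
-- through one pixel and average the resulting colors.
-- traceRay(u, v) is a hypothetical routine returning {r, g, b} for a ray
-- through normalized screen coordinates (u, v).
local function shadePixel(px, py, width, height, samplesPerAxis)
    local r, g, b = 0, 0, 0
    local n = samplesPerAxis * samplesPerAxis
    for sy = 0, samplesPerAxis - 1 do
        for sx = 0, samplesPerAxis - 1 do
            -- Jitter each sample inside its grid cell so the pattern
            -- isn't perfectly regular.
            local u = (px + (sx + math.random()) / samplesPerAxis) / width
            local v = (py + (sy + math.random()) / samplesPerAxis) / height
            local c = traceRay(u, v)
            r, g, b = r + c[1], g + c[2], b + c[3]
        end
    end
    -- The pixel's final color is just the average of all its samples.
    return { r / n, g / n, b / n }
end
```

With `samplesPerAxis = 1` this degenerates to one ray per pixel, i.e. no anti-aliasing at all.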

1 Like

That’s surprisingly fascinating to me!

1 Like

Oh I had no idea. That’s really interesting.


Super sampling is my next thing. It’s much more effective than AA in every way since you use more data and average it rather than interpolate between existing data. It’s just expensive :stuck_out_tongue:

Edit: Here’s a visualization of how much data I’m collecting:

They are actually both about the same thing: super sampling is a technique that makes AA possible (they’re not two separate things; rather, AA is a concept with a family of solutions, and super sampling is one of them).

I think you meant when compared to simply running a 2D smoothing filter over the intermediate image, which is not always great for graphics because you lose detail by smoothing areas with texture etc. (or were you talking about something else?)
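
For comparison, the “smudge nearby pixels” approach being referred to is roughly a box filter run over an image that has already been rendered. A rough sketch, assuming the image is stored as `image[y][x] = {r, g, b}` (that layout is just an assumption for illustration):

```lua
-- Post-process smoothing sketch: a 3x3 box filter over a finished image.
-- This only averages data the renderer already produced; no new samples.
local function boxBlur(image, width, height)
    local out = {}
    for y = 1, height do
        out[y] = {}
        for x = 1, width do
            local r, g, b, count = 0, 0, 0, 0
            for dy = -1, 1 do
                for dx = -1, 1 do
                    local ny, nx = y + dy, x + dx
                    if ny >= 1 and ny <= height and nx >= 1 and nx <= width then
                        local c = image[ny][nx]
                        r, g, b = r + c[1], g + c[2], b + c[3]
                        count = count + 1
                    end
                end
            end
            out[y][x] = { r / count, g / count, b / count }
        end
    end
    return out
end
```

The difference from supersampling is visible right in the code: edges get softer, but so does everything else, which is why detailed or textured areas lose sharpness.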

I meant the generic “smudge nearby pixels” thing, but yeah, I’m really new to rendering so I guess SS is a part of AA. whoops

Edit: Added in super-sampling! Here’s a before/after of a 10x6 stud render @ 50 data points per stud (super-sampled to 200 per stud). It helps smooth things out a bit and hides the fact that it’s made of pixels (compare the elbows)

2 Likes

If the super sampling thing seems too slow once you’ve implemented it, one thing you might be able to do is a simple pass over the image first to determine which pixels are on the borders of things (i.e. where there’s a big difference in depth or color between neighbouring pixels), and then super sample only those pixels instead of the entire image. So: a full pass with 1x sampling first, then a second pass on the selected pixels with 4x (or whatever) sampling, so you don’t have to do 4x sampling for the whole image.

Basically, for what you just showed, your algorithm would know not to do super sampling on the large yellow fragments on the character, because that’s all similar in color to begin with. It would only supersample for the edges between objects and background, and near shadows, etc.

If you google “adaptive super sampling” there might be a better explanation and pseudo-code for it.
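
Something like this, as a hedged sketch. `supersamplePixel(x, y, samplesPerAxis)` and the `firstPass[y][x] = {r, g, b}` layout are illustrative assumptions, not anyone’s actual code:

```lua
-- Adaptive super sampling sketch: render a cheap 1x pass first, flag pixels
-- that differ a lot from a neighbour, then re-shoot only those pixels with
-- more samples.
local function colorDiff(a, b)
    return math.abs(a[1] - b[1]) + math.abs(a[2] - b[2]) + math.abs(a[3] - b[3])
end

local function adaptivePass(firstPass, width, height, threshold)
    local final = {}
    for y = 1, height do
        final[y] = {}
        for x = 1, width do
            local needsMore = false
            -- Compare against the right and bottom neighbours only; enough
            -- to catch most edges without checking every pair twice.
            if x < width and colorDiff(firstPass[y][x], firstPass[y][x + 1]) > threshold then
                needsMore = true
            end
            if y < height and colorDiff(firstPass[y][x], firstPass[y + 1][x]) > threshold then
                needsMore = true
            end
            if needsMore then
                final[y][x] = supersamplePixel(x, y, 4) -- expensive: edges only
            else
                final[y][x] = firstPass[y][x]           -- cheap: keep 1x result
            end
        end
    end
    return final
end
```

A depth buffer from the first pass could be compared the same way as color, which catches edges between objects that happen to be similar in color.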

I think that might be called binary searching on edges. That’s a neat idea. I would totally do it if that was my performance bottleneck. Currently the bottleneck is spawning in new frames, which I could solve by having them pre-instanced, but Roblox doesn’t like to save/load 40k frames and it’s hard to pre-render them anyway since I’m constantly changing sample rate and resolution :confused:

Also just finished rendering this. It’s a before/after comparison of SS on/off

5 Likes

@ScriptOn It looks like you are missing the functionality for parts to cast shadows onto themselves.

What do you mean?

A single part, floating in the air. Is its underside as dark as the shadow it makes on the ground?

My shadows currently use some sketchy math with simple gradients.

Parts can cast shadows onto themselves, though:
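
For what it’s worth, self-shadowing usually falls out of a plain shadow-ray test. A sketch, where `intersectScene(origin, direction, maxDistance)` is a hypothetical intersection routine and vectors are plain `{x, y, z}` tables:

```lua
-- Shadow-ray sketch: from the surface point you hit, shoot a ray toward the
-- light and check whether any geometry blocks it before the light is reached,
-- including the part the ray started on.
local function isInShadow(hitPoint, surfaceNormal, lightPos)
    local dir = {
        lightPos[1] - hitPoint[1],
        lightPos[2] - hitPoint[2],
        lightPos[3] - hitPoint[3],
    }
    local dist = math.sqrt(dir[1]^2 + dir[2]^2 + dir[3]^2)
    dir = { dir[1] / dist, dir[2] / dist, dir[3] / dist }

    -- Offset the origin slightly along the normal so the ray doesn't
    -- immediately re-hit the surface it started on ("shadow acne").
    local epsilon = 1e-3
    local origin = {
        hitPoint[1] + surfaceNormal[1] * epsilon,
        hitPoint[2] + surfaceNormal[2] * epsilon,
        hitPoint[3] + surfaceNormal[3] * epsilon,
    }

    -- Anything hit before reaching the light shadows this point.
    return intersectScene(origin, dir, dist) ~= nil
end
```

Because the test doesn’t exclude the part the ray started on, a part’s own geometry can darken its underside the same way it darkens the floor.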

I pretty much finished up the hotel room I was working on and I’m very pleased with how it turned out :smiley:

13 Likes

Pretty insane stuff everyone’s been posting here. o_O
Anyways, we got an update video on our third person shooter

18 Likes