Blur Rendering / Image Convolution!

I am currently working on image convolution via script: I take the data of an image and convolve it with a kernel. So far it’s been going pretty well, but as you can imagine, there is one big setback. Creating millions of Frames to represent millions of pixels is, well… bad. After rendering even a fairly large portion of the image, Studio becomes an unusable mess. My progress so far is shown below:
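For anyone curious, the core loop looks something like this rough sketch (the names `pixels` and `kernel` are assumptions, not from any real code in this thread): `pixels` is a 2D array of Color3 values and `kernel` is a 2D array of weights, e.g. a 3×3 box blur where every weight is 1/9.

```lua
-- Hypothetical sketch of convolving image data with a kernel.
local function convolve(pixels, kernel)
	local height, width = #pixels, #pixels[1]
	local kSize = #kernel
	local offset = math.floor(kSize / 2)
	local output = {}
	for y = 1, height do
		output[y] = {}
		for x = 1, width do
			local r, g, b = 0, 0, 0
			for ky = 1, kSize do
				for kx = 1, kSize do
					-- Clamp at the edges so the kernel never samples outside the image
					local sy = math.clamp(y + ky - 1 - offset, 1, height)
					local sx = math.clamp(x + kx - 1 - offset, 1, width)
					local c = pixels[sy][sx]
					local w = kernel[ky][kx]
					r += c.R * w
					g += c.G * w
					b += c.B * w
				end
			end
			output[y][x] = Color3.new(r, g, b)
		end
	end
	return output
end
```

With a normalized kernel (weights summing to 1), the output components stay in the 0–1 range Color3 expects.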

I would like to discuss possible methods to make this faster and much more efficient so that I can even just increase the portion of the image it renders.


Progress has been made. It renders a fraction more than it used to, and at a higher strength! :slight_smile:

I think you’ll do much better if you instead create each pixel brick by brick with Parts and then render that in a ViewportFrame. That way you can get the final output image as a static texture, meaning you’ll have great performance. In fact, you might even try splitting the image into several sections automatically, rendering each one into its own ViewportFrame, and then stitching them together.
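A minimal sketch of what that could look like (all names are assumptions; the camera placement is a rough guess, not a tested layout):

```lua
-- Hypothetical sketch: build one Part per pixel, then show the whole
-- model through a single ViewportFrame so it renders as one static view.
local function renderToViewport(pixels, parent)
	local viewport = Instance.new("ViewportFrame")
	viewport.Size = UDim2.fromScale(1, 1)
	viewport.Parent = parent

	local camera = Instance.new("Camera")
	camera.Parent = viewport
	viewport.CurrentCamera = camera

	local model = Instance.new("Model")
	for y, row in ipairs(pixels) do
		for x, colour in ipairs(row) do
			local part = Instance.new("Part")
			part.Anchored = true
			part.Size = Vector3.new(1, 1, 1)
			part.Position = Vector3.new(x, -y, 0)
			part.Color = colour
			part.Parent = model
		end
	end
	-- Parent the finished model in one go rather than part by part
	model.Parent = viewport

	-- Point the camera straight at the grid of parts
	local width, height = #pixels[1], #pixels
	camera.CFrame = CFrame.new(
		Vector3.new(width / 2, -height / 2, math.max(width, height)),
		Vector3.new(width / 2, -height / 2, 0)
	)
	return viewport
end
```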

No, that is one of the worst ways you can do it. I tried making something like that a few months ago and the FPS dropped to near 0; sometimes it even crashed.


Really? I thought it would be the opposite. Maybe you were overloading video memory, if every changed version of the viewport gets cached for a while as you add in each brick. Did you try constructing the entire image out of parts first, and then dropping the whole model into the viewport at once?

I know this reply isn’t to me, but I just thought I’d give you a little update on where I’m at. I’m using Euclidean colour distance to check for colour similarity and rendering bigger frames where neighbouring colours are similar. It’s helped a lot. Unfortunately, though, there is some definite “streaking”.
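For readers unfamiliar with the term, Euclidean colour distance just treats (R, G, B) as a point in 3D space and takes the straight-line distance between two colours. A quick sketch (the function names and tolerance value are assumptions for illustration):

```lua
-- Hypothetical sketch of Euclidean colour distance between two Color3s.
local function colourDistance(a, b)
	local dr = a.R - b.R
	local dg = a.G - b.G
	local db = a.B - b.B
	return math.sqrt(dr * dr + dg * dg + db * db)
end

-- Example: treat neighbouring pixels as mergeable into one larger frame
-- when their colours are within some tolerance (value chosen arbitrarily).
local TOLERANCE = 0.05
local function similar(a, b)
	return colourDistance(a, b) <= TOLERANCE
end
```

Merging runs of similar pixels along one axis like this would also explain the “streaking” mentioned above, since the merged frames form strips in that direction.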

It now renders 1/4 of the width (exactly) at a Strength of 10 units.

Oh wait, that guy wasn’t OP. Have you tried the viewport idea, though? In theory you should get much better performance with parts in a ViewportFrame, mainly because Roblox is optimized for the GPU to render hundreds of thousands of parts, and if you use regular old brick parts, the GPU can combine draw calls really efficiently (I believe). It should also use much less memory per pixel than Frames. And I think Roblox does some taxing calculations to determine which order to render Frames in, so there may be extra CPU cost too.

I’ll look into it. I think the main thing to get right would be how I would position the parts. Thanks for the heads up.


I tried doing it with parts… Studio shot up to 96% memory usage and my PC froze for about 3 minutes.

Unfortunately, I don’t think there is much optimization to be done, or many shortcuts you can take, for displaying the image. You are technically not supposed to be able to do this in the first place, since it leads to being able to pull images from the internet (specifically the bad corners of it), a capability that was removed a very long time ago.

Cool concept though!