Proof Of Concept - Using Pixels To Display Images

Oh my bad I missed a bit there, I meant to say “comparison color-blending”

And essentially what I mean by that is comparing the RGB (or, in my case, HSV) values of neighboring pixels and, if they are within, say, 15 (out of 255) of each other, blending them together: adding up all the values and then averaging them once you encounter a color that does not meet the condition.
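The idea above can be sketched for a single channel like this (a hypothetical sketch, not the poster's actual code; the function name and the choice to compare against the run's first value are assumptions):

```python
# Hypothetical sketch of "comparison color-blending" on one channel:
# walk a row of 0-255 values, accumulate a run while each value stays
# within THRESHOLD of the run's first value, then flush the run as its
# average once a value falls outside the threshold.
THRESHOLD = 15

def blend_runs(values):
    runs = []  # list of (average, run_length) pairs
    start = values[0]
    acc, count = 0, 0
    for v in values:
        if abs(v - start) <= THRESHOLD:
            acc += v
            count += 1
        else:
            runs.append((acc // count, count))
            start, acc, count = v, v, 1
    runs.append((acc // count, count))
    return runs
```

Comparing against the run's running mean instead of its first value would also work and drifts less on slow gradients; which one the poster used isn't stated.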

I won’t go too in depth, considering it took around a year to reach that state of optimization (with changes to both the front and back ends), and unfortunately I have no plans to release it to the community.


I may have posted something on those previous posts, I can’t recall.

Sure, players can stack up blocks in the shape of something, but your (and others’ before you) ‘tech demo’ is aimed at making it trivial to work around image moderation. I guess I don’t get the purpose of it, if it can’t or shouldn’t be used.

Anyway carry on… it doesn’t have to make sense to me :slight_smile:

The purpose of the developers who make these is rarely just “bypass moderation”, if it includes that at all. For me, it was a test of how much optimization and compression via encoding I could manage, as well as how much abuse the Roblox engine can take from these bulky tasks given its high-level focus.

Elaborating on what I mean by high-level: it’s a framework that doesn’t let you access the core of the environment; in this case, we must use parts and cannot draw individual pixels to the screen.


I did that for my rasterizer. It’s actually fairly simple, if I understand what you’re saying: you just pass each channel into this formula and then, bam, mapped.

Fairly similar. I did the same when I made a rasterizer a few months ago for fun (lost the place files, unfortunately), but for this I’m not passing all the channels; I’m mostly looking at the average of all channels and a weight value.

Weight algorithm:

-- Returns which channel dominates: -1 for X, 0 for Y, 1 for Z.
local function Weight(X,Y,Z)
  local Sum = X+Y+Z
  if Sum == 0 then
    return 0 -- avoid division by zero on pure black
  end
  local WeightX = X/Sum
  local WeightY = Y/Sum
  local WeightZ = Z/Sum
  -- Note: 0 is truthy in Lua, so the `and 0` branch works as intended.
  -- Exact ties fall through every branch and yield false.
  local TrueWeight = (WeightX > WeightY and (WeightX > WeightZ and -1))
    or (WeightY > WeightX and (WeightY > WeightZ and 0))
    or (WeightZ > WeightX and (WeightZ > WeightY and 1))
  return TrueWeight
end

So that if one channel is RGB (1,0,0) and the other is RGB (0,0,1) or RGB (0,1,0), they’re not considered as being within range (since they’re literally entirely different colors).
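The dominance check can be mirrored in Python to show why pure red and pure blue never blend (a sketch of the same logic as the Lua `Weight` above; the function name is mine):

```python
# Mirror of the Lua Weight(): two colors may only blend if the same
# channel dominates both. -1 = first channel, 0 = second, 1 = third;
# None on ties (e.g. gray) or pure black.
def weight(x, y, z):
    s = x + y + z
    if s == 0:
        return None
    wx, wy, wz = x / s, y / s, z / s
    if wx > wy and wx > wz:
        return -1
    if wy > wx and wy > wz:
        return 0
    if wz > wx and wz > wy:
        return 1
    return None  # tie

# Pure red and pure blue dominate different channels, so they never blend:
assert weight(1, 0, 0) != weight(0, 0, 1)
```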

And of course it’s a lot more complex than just that, but that’s the general shape of the code.

I used BaseParts. 1920x1080 would be 2,073,600 parts. At 160x90, the game was able to keep up at 50 FPS with the video being shown at 30 FPS.


Oh wow, that’s pretty complicated. I just floor the channels so that they can only be 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, or 1.0.
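One way to get exactly those ten levels (an assumption on my part; the poster's exact rounding isn't shown, and since the set includes 1.0 but not 0.0 this sketch snaps upward rather than flooring):

```python
import math

# Snap a 0-1 channel value up to the next tenth, with a floor of 0.1,
# so the only possible outputs are 0.1, 0.2, ..., 1.0.
def quantize(v):
    return max(0.1, math.ceil(v * 10) / 10)
```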

Well yeah, I have to do this since it’s RGB; I just did simple comparisons for my grayscale rasterizer.

I’m probably overlooking something, but I just did this for my RGB-to-Color3 thing:

rBitDepth = 100  # quantization steps per channel (101 values: 0..100)
gBitDepth = 100
bBitDepth = 100
precision = 1000  # rounding precision for the channel ratios

# Writes a Lua table literal mapping a packed 24-bit RGB key to a Color3.
file = open("output.txt", "w")
file.write("{")
for r in range(rBitDepth+1):
    for g in range(gBitDepth+1):
        for b in range(bBitDepth+1):
            file.write("\n")
            rRatio = round(r/rBitDepth * precision)/precision
            gRatio = round(g/gBitDepth * precision)/precision
            bRatio = round(b/bBitDepth * precision)/precision
            # Scale each quantized channel to 0-255 so it fits in one byte.
            rRatio = round(rRatio * 255)
            gRatio = round(gRatio * 255)
            bRatio = round(bRatio * 255)
            # Pack the three bytes into a single 0xRRGGBB integer key.
            idxAddress = (rRatio << 16) | (gRatio << 8) | bRatio
            file.write('['+str(idxAddress)+'] = Color3.new(' + str(r/rBitDepth) + "," + str(g/gBitDepth) + "," + str(b/bBitDepth) + "),")
file.write("}")
file.close()
-- colorPallete is a module of the output.txt file
local i = dimensions.Y*(y-1) + (dimensions.X - x + 1)
local px = backbuffer[i]
-- quantize each channel to eighths, scale to 0-255, then pack the key
-- (bit32.arshift with a negative displacement shifts left)
local r = round((round(px.X*8)/8)*255)
local g = round((round(px.Y*8)/8)*255)
local b = round((round(px.Z*8)/8)*255)
local color = colorPallete[bit32.bor(bit32.arshift(r, -16), bit32.arshift(g, -8), b)]

The weighting I’m referencing is done to optimize pixel count on the Lua front end, rather than on the backend. My backend uses run-length encoding, and all the RGB values are represented as single bytes (characters), since they’re 0-255.
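A run-length encoding of RGB pixels where every value is a single byte could look like this (a hypothetical sketch, not the poster's backend; the `(count, r, g, b)` record layout is an assumption):

```python
# Run-length encode a flat sequence of (r, g, b) triples, emitting one
# (count, r, g, b) record per run. Every value fits in a single byte,
# so runs are capped at 255 pixels.
def rle_encode(pixels):
    out = bytearray()
    i = 0
    while i < len(pixels):
        run = 1
        while i + run < len(pixels) and pixels[i + run] == pixels[i] and run < 255:
            run += 1
        r, g, b = pixels[i]
        out += bytes((run, r, g, b))
        i += run
    return bytes(out)
```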

edit: Did I mention this data is streamed in real time from a GET request to a simple JS webserver?

streamed in real-time

Is this like 30 frames per request?

Next idea: take very short noises from the marketplace and pitch them to match a sound. E.g. I can upload an amogus sound, then it will take amogus.length / shortSound.length, loop through it that many times, and pitch each sound so it sounds like the amogus sus meme. Seems like a fun project.

Oh dear god, that sounded misleading. No, it’s a single request, but the data is computed and written (and therefore streamed to Roblox’s GetAsync buffer) in real time.

Surprisingly, using BaseParts actually makes the process faster than using Frames (for me at least). Unfortunately, I can’t render a 1920x1080 picture because Studio fills up the RAM completely and would either crash the entire Linux userspace or make the kernel OOM-kill Studio.

I probably need to set up a local server like OP did, because the RGB picture data is stored in a single ModuleScript (I compile picture data with a custom Python script I wrote that turns it into Lua arrays).

Edit: 720p render

Maybe if you create a sound with a single note, you can technically create other notes depending on the pitch you change it to. Just send over a frequency table and maybe play around with it?
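Assuming pitch is shifted by playback speed, the speed multipliers for that frequency table follow equal temperament (a sketch of the math, not anyone's actual implementation):

```python
# Speed multiplier for a note n semitones away from the base note,
# under 12-tone equal temperament: 2 ** (n / 12).
def playback_speed(semitones_from_base):
    return 2 ** (semitones_from_base / 12)

# A table for one octave above the base note: +12 semitones doubles
# the speed (and the pitch), +7 semitones is a perfect fifth (~1.4983).
table = {n: round(playback_speed(n), 4) for n in range(13)}
```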


If you’re using Python, you can probably send it over directly using Flask. You can shorten the string length by using hex.
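The hex trick can be sketched with the standard library alone (a hypothetical example; the function names are mine): each 0-255 channel becomes exactly two hex digits, so a pixel is six characters with no separators, much shorter than decimal strings with commas.

```python
# Pack (r, g, b) triples into a compact hex string and back.
def pixels_to_hex(pixels):
    return bytes(c for px in pixels for c in px).hex()

def hex_to_pixels(s):
    raw = bytes.fromhex(s)
    return [tuple(raw[i:i + 3]) for i in range(0, len(raw), 3)]
```

A Flask route could then return `pixels_to_hex(frame)` as the response body, and the Roblox side would decode it after `HttpService:GetAsync`.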

I don’t know anything as advanced as hex manipulation, but thanks for the tip! I’ll work on implementing it.

You should look into parallel luau.

Yup, I rewrote the renderer using Parallel Luau, but it’s still kinda slow, so I’m trying other methods in addition to that.

I have managed to pull this off with my canvas module to create super performant and decently high quality images on a GUI or SurfaceGUI from any PNG file.

Loading this 128x128 image is pretty well instant with little to no lag at all, since my method of storing image data just consists of lookup tables that can be generated via a plugin which reads the binary contents of a PNG file.

My module will also happily render images at 256x256; it’s just that my method of storing images uses strings, which means image sizes are limited to 128x128 for now.

It also helps that this canvas has a really efficient frame compression method that uses UIGradients to store many pixels in one frame, which leads to 1,457 total frames used for the image above.
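The exact gradient scheme isn't shown, so this is just one plausible sketch of why the frame count drops so far: merge each horizontal run of identical pixels into a single frame and count how many frames a row-major image would need (the function and image here are illustrative assumptions, not the module's code).

```python
# Estimate frame count if every maximal horizontal run of identical
# pixels collapses into one frame (e.g. one frame carrying a gradient).
def estimate_frames(rows):
    frames = 0
    for row in rows:
        prev = object()  # sentinel that never equals a pixel
        for px in row:
            if px != prev:
                frames += 1
                prev = px
    return frames

# A 4x4 image split into two solid halves needs only 2 frames per row
# instead of 4, i.e. 8 frames instead of 16.
image = [[(0, 0, 0)] * 2 + [(255, 255, 255)] * 2 for _ in range(4)]
```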


The only problem with using frames in a GUI is that if the resolution is high enough, the frames will fail to render. I only seem to experience this issue at around 300x300 with a rendered image with heaps of colours on the GUI.
