Proof Of Concept - Using Pixels To Display Images

It's possible; it could most likely be done the same way the OP has done it, but it would probably be very low resolution and run at something like 10 FPS.

If I'm correct, I could use Replit to save each image with a UUID to a folder that is meant to act as an anchor for a video.

Too bad I haven't fully learned Python yet, so making this possible is a stab in the eye.

However, I would say splitting each image frame into pixel packets would be a viable option.
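Roughly, something like this on the Lua side is what I have in mind by pixel packets (just a sketch; the packet size and pixel format are arbitrary):

-- Split a flat array of pixel colors into fixed-size packets.
-- `pixels` is assumed to be an array of Color3 values for one frame.
local function SplitIntoPackets(pixels, packetSize)
  local packets = {}
  for i = 1, #pixels, packetSize do
    local packet = {}
    for j = i, math.min(i + packetSize - 1, #pixels) do
      table.insert(packet, pixels[j])
    end
    table.insert(packets, packet)
  end
  return packets
end

Each packet could then be sent or rendered on its own instead of handling the whole frame at once.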

Using the power of my PC and some patience, I have managed to render a 1080p image and a 1440p image. (The scripts won't be open source since they can be abused; I can show videos.)
Most of this was done in C++.
1440p: (screenshot)

1080p: (screenshot)
Tried to make this one with depth. (As you can see, it can be abused.)
https://gyazo.com/c67aad42457833aa0a4c9da58cb97a9e
RAM usage: (screenshot)


Are you not worried about getting your account banned for creating TOS breaking functionality?

Will you feel bad at all when little kids are exposed to inappropriate content due to your shared code?

You can't control what people do with your code, but you are helping enable workarounds to image moderation (which seems like an incredibly bad idea).

Don’t do this (but here’s the git repository for how to do it).

Can I ask what your instance count is when the image has finished rendering?
And do you use GuiObjects or BaseParts?

I did make something similar, except that I wrote a Python script to parse images and turn them into Lua tables. I would then embed those inside the game in a ModuleScript, and a normal Script would parse the ModuleScript and render the image.

Unfortunately this used GuiObjects, and I kept getting hit by Roblox's limitations.
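A minimal sketch of what that last rendering step can look like (the ModuleScript name, table layout, and pixel size here are placeholders, not my exact setup):

-- LocalScript: read a 2D array of Color3 values (data[y][x]) from a
-- ModuleScript and draw it with one Frame per pixel.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local data = require(ReplicatedStorage.ImageData)

local PIXEL = 4 -- on-screen size of each pixel square

local screen = Instance.new("ScreenGui")
for y, row in ipairs(data) do
  for x, color in ipairs(row) do
    local frame = Instance.new("Frame")
    frame.BorderSizePixel = 0
    frame.Size = UDim2.fromOffset(PIXEL, PIXEL)
    frame.Position = UDim2.fromOffset((x - 1) * PIXEL, (y - 1) * PIXEL)
    frame.BackgroundColor3 = color
    frame.Parent = screen
  end
end
screen.Parent = game:GetService("Players").LocalPlayer:WaitForChild("PlayerGui")

This is exactly where the limits bite: every pixel is its own GuiObject.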

If you read my post, you'd see this is a proof of concept, not a ready, working-out-of-the-box resource. It also isn't technically TOS-breaking, since it is theoretically possible to manually make frames yourself with nothing but Studio to mimic an image.

I'm not trying to control it; naturally, I can't. I added multiple disclaimers because you obviously shouldn't use it in your games unless you actually want to get banned. In addition, as others have already pointed out, this already exists. I don't know why you'd pick on the fourth or fifth post of something like this.

Also, if anyone competent spent around thirty minutes trying to do this, they could easily achieve the same result or better.


I like your effort, but using GUIs is a no-go for image renders in Roblox. They're not efficient at all, and there's a limit to how many GUI instances can exist on your screen at any given time.

Now, as for how to go about it better: BaseParts, and color-blending your run-length data into fewer, longer parts with averaged color values. Here's an old screenshot of my renderer (not even hosted locally, it's just that efficient; it uses JS as a backend).


That image was Full HD (1920x1080), but it was optimized to cut the pixel count down to just 29% of the original.


What do you mean by color-blending?

Oh, my bad, I missed a bit there; I meant to say "comparison color-blending".

Essentially, what I mean by that is comparing the RGB (or, in my case, HSV) values to one another; if they are within, say, 15 (out of 255) of each other, blend them together, adding up all the values and then averaging them once you encounter a color that does not meet the condition.
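A minimal sketch of that idea, assuming plain RGB channels in 0-255, comparison against the previous pixel, and a fixed threshold of 15 (this is only an illustration, not the actual renderer):

-- Merge runs of similar colors into single averaged colors.
-- `pixels` is an array of {R = ..., G = ..., B = ...} tables (0-255).
local THRESHOLD = 15

local function WithinRange(a, b)
  return math.abs(a.R - b.R) <= THRESHOLD
    and math.abs(a.G - b.G) <= THRESHOLD
    and math.abs(a.B - b.B) <= THRESHOLD
end

local function BlendRuns(pixels)
  if #pixels == 0 then return {} end
  local runs = {}
  local sumR, sumG, sumB, count = pixels[1].R, pixels[1].G, pixels[1].B, 1
  for i = 2, #pixels do
    if WithinRange(pixels[i], pixels[i - 1]) then
      -- Still within range: keep adding the values up for the average.
      sumR, sumG, sumB = sumR + pixels[i].R, sumG + pixels[i].G, sumB + pixels[i].B
      count = count + 1
    else
      -- Condition broken: emit the averaged run and start a new one.
      table.insert(runs, { R = sumR / count, G = sumG / count, B = sumB / count, Length = count })
      sumR, sumG, sumB, count = pixels[i].R, pixels[i].G, pixels[i].B, 1
    end
  end
  table.insert(runs, { R = sumR / count, G = sumG / count, B = sumB / count, Length = count })
  return runs
end

Each run then becomes one part instead of many, which is where the pixel-count savings come from.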

I won't go too in depth, considering this took around a year to get to that state of optimization (with changes to both the front and back ends), and unfortunately I have no plans on releasing it to the community.


I may have posted something on those previous posts; I can't recall.

Sure, players can stack up blocks in the shape of something, but your (and others' before you) 'tech demo' is aimed at making it trivial to work around image moderation. I guess I don't get the purpose of it (if it can't or shouldn't be used).

Anyway carry on… it doesn’t have to make sense to me :slight_smile:

The purpose of the developers who make these is rarely just "bypass moderation", if it even includes that at all. For me, it was a test of how much optimization and compression via encoding I could manage, as well as how much abuse the Roblox engine can take from bulky tasks like this, given its high-level focus.

Elaborating on what I mean by high-level: it's a framework that doesn't allow you to access the core of the environment; in this case, we must use parts and cannot draw individual pixels to the screen.
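To illustrate, here's a bare-bones sketch of "parts as pixels": one anchored part per pixel, no merging or other optimization, with arbitrary sizing and layout.

-- Render a frame as a grid of anchored parts, one part per pixel.
-- `frame` is assumed to be a 2D array of Color3 values, frame[y][x].
local PIXEL_SIZE = 0.5

local function RenderFrame(frame, origin)
  local folder = Instance.new("Folder")
  folder.Name = "Frame"
  for y, row in ipairs(frame) do
    for x, color in ipairs(row) do
      local pixel = Instance.new("Part")
      pixel.Anchored = true
      pixel.CanCollide = false
      pixel.Size = Vector3.new(PIXEL_SIZE, PIXEL_SIZE, PIXEL_SIZE)
      -- Lay the grid out on the X/Y plane with the top-left at `origin`.
      pixel.Position = origin + Vector3.new(x * PIXEL_SIZE, -y * PIXEL_SIZE, 0)
      pixel.Color = color
      pixel.Parent = folder
    end
  end
  folder.Parent = workspace
  return folder
end

The optimizations discussed in this thread are all about cutting down how many of those parts you actually need to create.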


I did that for my rasterizer. It's actually fairly simple, if I understand what you're saying: you just pass each channel into this formula and then, bam, mapped.

Fairly similar; I did the same when I made a rasterizer a few months ago for fun (lost the place files, unfortunately). But for this I'm not passing all the channels; I'm mostly looking at the average of all the channels and a weight value.

Weight algorithm:

local function Weight(X,Y,Z)
  -- Normalize the three channels so they sum to 1
  local Sum = X+Y+Z
  if Sum == 0 then
    return 0 -- avoid dividing by zero when every channel is 0
  end
  local WeightX = X/Sum
  local WeightY = Y/Sum
  local WeightZ = Z/Sum
  -- Map the dominant channel to a marker: -1 for X, 0 for Y, 1 for Z
  -- (ties fall through every comparison and return false)
  local TrueWeight = (WeightX > WeightY and (WeightX > WeightZ and -1))
    or (WeightY > WeightX and (WeightY > WeightZ and 0))
    or (WeightZ > WeightX and (WeightZ > WeightY and 1))
  return TrueWeight
end

So that if one color is RGB (1, 0, 0) and the other is RGB (0, 0, 1) or RGB (0, 1, 0), they're not considered as being within range (since they're literally entirely different colors).
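For example, with the Weight function above:

print(Weight(1, 0, 0)) -- -1 (red-dominant)
print(Weight(0, 1, 0)) --  0 (green-dominant)
print(Weight(0, 0, 1)) --  1 (blue-dominant)

So pure red, green, and blue land on different markers and never get treated as being within range of each other.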

And of course it's a lot more complex than just that, but that's the general idea of the code.

I used BaseParts. 1920x1080 would be about 2,073,600 parts. At 160x90 (14,400 parts), the game was able to keep up at 50 FPS with the video being shown at 30 FPS.


Oh wow, that's pretty complicated. I just floor the channels so that they can only be 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, or 1.0.
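Roughly, snapping each 0-1 channel down to the nearest tenth, along these lines (the clamp to 0.1 is just one way to keep the result in that 0.1 to 1.0 set):

-- Snap a 0-1 channel down to the nearest tenth.
local function FloorChannel(c)
  return math.max(0.1, math.floor(c * 10) / 10)
end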

Well, yeah, I have to do this since it's RGB; I just did simple comparisons for my grayscale rasterizer.

I'm probably overlooking something, but I just did this for my RGB-to-Color3 thing:

# Generates a Lua table literal that maps a packed 24-bit RGB key to a Color3 value.
rBitDepth = 100
gBitDepth = 100
bBitDepth = 100
file = open("output.txt", "w")
file.write("{")
precision = 1000
for r in range(rBitDepth+1):
    for g in range(gBitDepth+1):
        for b in range(bBitDepth+1):
            file.write("\n")
            # Quantize each channel ratio to the chosen precision...
            rRatio = round(r/rBitDepth * precision)/precision
            gRatio = round(g/gBitDepth * precision)/precision
            bRatio = round(b/bBitDepth * precision)/precision
            # ...then scale it up to a 0-255 byte.
            rRatio = round(rRatio * 255)
            gRatio = round(gRatio * 255)
            bRatio = round(bRatio * 255)
            # Pack the three bytes into a single integer key: 0xRRGGBB.
            idxAddress = (rRatio << 16) | (gRatio << 8) | bRatio
            file.write('['+str(idxAddress)+'] = Color3.new(' + str(r/rBitDepth) + "," + str(g/gBitDepth) + "," + str(b/bBitDepth) + "),")
file.write("}")
file.close()
--colorPallete is a module of the output.txt file
colorPallete[bit32.bor(
  bit32.arshift(round((round(backbuffer[dimensions.Y*(y-1)+(dimensions.X - x +1)].X*8)/8)*255), -16),
  bit32.arshift(round((round(backbuffer[dimensions.Y*(y-1)+(dimensions.X - x +1)].Y*8)/8)*255), -8),
  round((round(backbuffer[dimensions.Y*(y-1)+(dimensions.X - x +1)].Z*8)/8)*255)
)]

The weighting I'm referencing is done to optimize the pixel count on the Lua front-end, rather than on the backend. My backend uses run-length encoding; all the RGB values are represented as single bytes (characters), since they're 0-255.
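As a rough illustration of that kind of encoding on the Lua side (the four-byte run layout used here, run length then R, G, B, is an assumption rather than the actual format):

-- Decode a run-length encoded string where each run is four bytes:
-- run length (1-255), then R, G, B (each 0-255).
local function DecodeRuns(data)
  local pixels = {}
  for i = 1, #data, 4 do
    local length, r, g, b = string.byte(data, i, i + 3)
    local color = Color3.fromRGB(r, g, b)
    for _ = 1, length do
      table.insert(pixels, color)
    end
  end
  return pixels
end

Since every value fits in one byte, a whole run costs only four characters of the response string.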

edit: Did I mention this data is streamed in real-time via a GET request to a simple JS webserver?
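Just to illustrate the polling side (the endpoint, rate, and error handling here are made up; the real setup presumably differs):

-- Fetch one encoded frame per GET request and hand it to the renderer.
local HttpService = game:GetService("HttpService")
local FRAME_URL = "http://localhost:3000/frame" -- hypothetical endpoint

while true do
  local ok, encoded = pcall(HttpService.GetAsync, HttpService, FRAME_URL)
  if ok then
    -- decode `encoded` (e.g. the run-length data) and draw it here
  end
  task.wait(1 / 30) -- poll at roughly the target frame rate
end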

"streamed in real-time"

Is this like 30 frames per request?