Is there any guide for using buffers in this sort of situation?
I've been wanting to use WritePixelsBuffer for a while now, but I couldn't understand how to get the right offset, or even how to turn a color into a series of numbers.
I've looked in other forums and even in the buffer post to understand it, but I couldn't find the right information for the job.
Each pixel takes up 4 bytes, where each byte represents RGBA [0-255] in sequence. They are stored in a scanline pattern, left to right, top to bottom.
You can find a pixel’s starting location in a buffer at: ((y * width * 4) + (x * 4))
Also note that buffers use 0-based indexing, so your x/y values should be clamped to [0, width - 1] and [0, height - 1], and your initial offset into the buffer should be 0. The four bytes of a pixel sit at offsets 0-3, not 1-4, in case that was the issue.
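As a minimal sketch of that formula in Luau (assuming `pixelData` is a buffer of size `width * height * 4`, and `writePixel` is a hypothetical helper name, not part of any API):

```lua
-- Hypothetical helper: write one RGBA pixel into a scanline-ordered buffer.
-- Assumes pixelData = buffer.create(width * height * 4), 0-based x/y.
local function writePixel(pixelData, width, x, y, color, alpha)
	local offset = (y * width + x) * 4 -- same as (y * width * 4) + (x * 4)
	-- Color3 channels are 0-1 floats; scale them to 0-255 bytes.
	buffer.writeu8(pixelData, offset, math.floor(color.R * 255 + 0.5))
	buffer.writeu8(pixelData, offset + 1, math.floor(color.G * 255 + 0.5))
	buffer.writeu8(pixelData, offset + 2, math.floor(color.B * 255 + 0.5))
	buffer.writeu8(pixelData, offset + 3, alpha or 255)
end
```

Each `writeu8` call advances one byte, which is why the channel offsets go 0, 1, 2, 3.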
Is there going to be some kind of support for bulk methods so it can be used with parallel luau? With this i mean something like this:
local editableMesh = ...
local vertices = getVertices() -- Do wacky stuff in parallel luau
local indices = getIndices()
editableMesh:SetVertices(vertices)
editableMesh:SetIndices(indices)
We really need a bulk feature like this. I've been trying to bring Parallel Luau into EditableMesh programs since I've been working on oceans with many vertices.
Currently it's not really possible, or at best quite difficult, to implement this.
I might've misunderstood, as I'm getting an error while using your offset.
For the color, I was given an encoder for writing in u32, as I was unsure what to consider in your explanation. But I don't think I've even used that correctly (rayResult is a Color3).
We want to get to bulk operations for EditableMesh as well. Those did not make the cut for v1. We are strongly considering them as priorities for future releases.
@iPeeDev what kind of “wacky stuff” are you thinking of doing in there, out of curiosity?
The write methods of buffer write per byte. (Note: the number at the end of the method name is the number of BITS being written; writeu8 writes 1 byte, a single 0-255 value, while writeu32 writes 4 bytes, a 0-4,294,967,295 value.) So one RGBA color equals 4 bytes, or 32 bits. You would have to use buffer.writeu8 for each channel (0 to 255 possible values each for red, green, blue, and alpha). The offset parameter steps per byte, so to move forward one value you only need to increment the offset by one.
After figuring out what color the pixel should output, you would call buffer.writeu8 four times total for RGBA, and those methods require each value to be between 0 and 255 inclusive.
When your script errors with “buffer access out of bounds”, it means either your offset is negative or it reaches past (width * height * 4 - 1). However, if the encoder you are using does encode in a 1-byte pattern (i.e. the Color3 encodes into 4 bytes that can be read as 4 numbers between 0-255), it might be a different issue.
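If you do want to keep a single-write u32 encoder instead of four writeu8 calls, note that the Luau buffer library stores multi-byte values little-endian, so the lowest byte of the u32 lands first in memory. A small sketch (the `packRGBA` helper is a made-up name for illustration):

```lua
-- Sketch: pack RGBA into one u32 so the bytes land as R, G, B, A in memory.
-- buffer.writeu32 stores the value little-endian (lowest byte first),
-- so red must be the lowest byte of the packed number.
local function packRGBA(r, g, b, a)
	return r + g * 256 + b * 65536 + a * 16777216
end

local pixelData = buffer.create(4)
buffer.writeu32(pixelData, 0, packRGBA(255, 128, 0, 255))
print(buffer.readu8(pixelData, 0)) -- 255, the red channel
```

If your encoder packs the channels in the opposite order, the colors will come out with red and blue (and alpha) swapped, which can look like a wrong-but-not-erroring image.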
If you need extra help, look up on documentation for possible answers (or this link buffer | Documentation - Roblox Creator Hub)
With wacky stuff I was thinking of heavy tasks like generating terrain with a lot of vertices, or updating vertices to add motion to a mesh.
By the way, before the update my editable meshes weren't having this error, but now that I've updated my code, this appears:
Before, I created the MeshPart and sized it as (1, 1, 1), like this:
```lua
local editableMesh = Instance.new("EditableMesh")
local meshPart = Instance.new("MeshPart")
meshPart.Size = Vector3.one
-- do stuff
editableMesh.Parent = meshPart
```
And updated the code to this:
```lua
local editableMesh = assetService:CreateEditableMesh()
-- do stuff
editableMesh:CreateMeshPartAsync(Vector3.one)
```
Is this intentional, or is it an error? If it's intentional, I personally don't agree with adding limits to mesh sizes. In my case I only used them as LODs, so they didn't have any kind of physics or collisions; they were only for visuals.
It is quite disappointing that this is a limitation. It makes using Editable* assets for visual effects extremely difficult. Just because the feature was made for publishing in-experience meshes doesn't mean people aren't using it for other purposes, and they would benefit from non-publishing support for meshes they don't own. This is more specifically an issue for UGC clothing and accessories, whose meshes and textures I don't own, yet which are forced into my experience anyway. As a result I cannot apply many visual effects to these items, making them stick out and break my visual style. Please, again, reconsider this restriction for non-publishing use cases.
I’m having a tough time converting my old code over to this. I was creating the EditableMesh, setting up the vertices, and parenting it to a MeshPart. Can we have a basic code example to do this? Also, is it mandatory to set up collision or can I just skip that for faster mesh rendering?
My biggest question is whether this will apply to future Sound and Audio instances, as they still seem to use the string ID format. If this only applies to images and meshes, it can get confusing, so why not sound IDs as well?
I also hope that the older method of using string IDs does not get removed in the future, to keep backward compatibility with older content for the long term. Other than that, this seems exciting and I'm looking forward to where it goes!
Hey, about content asset moderation and other factors concerning in-experience asset safety standards: instead of broadly restricting utility because rogue users might harm, damage, or disrupt the user experience with these features, I think it would be best to establish deeper trust and safety standards for consistent, proactive developers who are visibly building on the platform. That is an easy trust vector to rely on, so you don't have to impose heavy restrictions on the developer community at large.
People who proactively work on the platform are drastically different from those who create content that goes against the community standards of trust and safety.
Honestly, there's room for drastic improvement across the entire strata of the platform, but again, resources don't go there: expenditure is tight, the budget is fixed, and new ideas seem to arise from the top down.
Thank you, this actually managed to help me.
I've got a running version of my engine in the latest update and I can definitely see the difference in framerate.
I'm getting around a 25% increase in FPS compared to my first recording.
How do I read the pixel information?
I used this code to read the buffer, but it only returns 255 or 0 (a black-and-white image) for the whole image.
```lua
for y = 0, Size.Y - 1 do
	for x = 0, Size.X - 1 do
		local offset = (y * Size.X * 4) + (x * 4)
		print(
			buffer.readu8(PixelData, offset),
			buffer.readu8(PixelData, offset + 1),
			buffer.readu8(PixelData, offset + 2),
			buffer.readu8(PixelData, offset + 3)
		)
	end
end
```
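The offsets in that loop look right; if every channel really comes back as 0 or 255, the source image itself may simply be pure black and white. If you'd rather get the pixel back as a Color3 instead of raw bytes, here's a small sketch (assuming the same `PixelData` buffer and 0-based x/y; `readPixel` is a made-up helper name):

```lua
-- Sketch: read one pixel back out as a Color3 plus a 0-1 alpha.
local function readPixel(pixelData, width, x, y)
	local offset = (y * width + x) * 4
	local r = buffer.readu8(pixelData, offset)
	local g = buffer.readu8(pixelData, offset + 1)
	local b = buffer.readu8(pixelData, offset + 2)
	local a = buffer.readu8(pixelData, offset + 3)
	return Color3.fromRGB(r, g, b), a / 255
end
```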
Alright, I've learned to handle the new changes. It seems to me they've improved almost everything a lot, but why have they decided that you can't create EditableImages from third-party IDs? For me, that was a regrettable change.
This is amazing, we needed EditableMesh physics for so long. One thing however that was not mentioned is whether EditableImages will ever have a ResampleMode property to allow us to create pixelated textures for meshes.
Is there an ETA for the most basic editable image functionality being available in-game?
My specific use-case I am eyeing right now is a minimap of a dynamic map.
This would not require any replication as it would be generated on a client for that client.
Right now I use Frames for this, purely so I can test functionality in-game, as other parts of my development (i.e. place teleporting) make testing in Studio complicated.