[Studio Beta] Updates to In-experience Mesh & Image APIs

Hello Creators,

Last year, we introduced the in-experience Mesh & Image APIs [Studio Beta]. Since then, we’ve been hard at work addressing your feedback to improve the product.

If you have already enabled the Studio Beta by going to the Beta Features window in Studio and enabling the EditableImage and EditableMesh beta, you should already have access to the updates in Studio.

While this update is still in Studio Beta, we hope to allow publishing experiences with these APIs in the near future. We are also finalizing the permissions and usage policy around these APIs, which will be announced in an upcoming DevForum post. We appreciate your patience as we work through the edge cases and optimizations required to launch this product. With that, let’s dive into the updates!

EditableMesh updates

:warning: Note: If you’ve already saved places that leveraged EditableMesh APIs from the Studio Beta, you might need to update your scripts, as they may no longer work with this update. Please remember that we are actively working on these APIs and may introduce further breaking changes leading up to the full release.

Split attributes

We’ve updated the API to allow multiple attributes to be stored on a single vertex. For example, creating a sharp cube previously required duplicating the corner vertices to get a sharp crease between faces. This led to more vertices, and methods like GetAdjacentVertices and GetAdjacentFaces wouldn’t work as expected.

Now we’ve added new API functions that let you store split attributes (normals, UV coordinates, and colors) at a single vertex. This removes the need to duplicate vertices just to get different attributes at the same location and is especially useful for sharp edges and creases.

The best way to illustrate these changes is to procedurally create a simple sharp cube with the EditableMesh APIs.

With a single normal per vertex (previous): 24 vertices, 24 normals
With split normals on a vertex (updated API): 8 vertices, 6 normals

With the updated API that supports split attributes, you now only need 8 vertices and 6 normals to procedurally create the same cube and its normals. This is a much more intuitive way of dealing with vertex attributes!

Here’s a code snippet that creates the above sharp cube using the previous API, duplicating vertices:

-- Given 4 points, adds 4 vertices and 2 triangles, making a sharp quad
local function addSharpQuad(emesh, pt1, pt2, pt3, pt4)
	local vid1 = emesh:AddVertex(pt1)
	local vid2 = emesh:AddVertex(pt2)
	local vid3 = emesh:AddVertex(pt3)
	local vid4 = emesh:AddVertex(pt4)

	local fid1 = emesh:AddTriangle(vid1, vid2, vid3)
	local fid2 = emesh:AddTriangle(vid1, vid3, vid4)
end

-- Makes a cube with creased edges between the sides by duplicating vertices
local function makeSharpCube_duplicateVerts()
	local emesh = Instance.new("EditableMesh")

	local pt1 = Vector3.new(0, 0, 0)
	local pt2 = Vector3.new(1, 0, 0)
	local pt3 = Vector3.new(0, 1, 0)
	local pt4 = Vector3.new(1, 1, 0)
	local pt5 = Vector3.new(0, 0, 1)
	local pt6 = Vector3.new(1, 0, 1)
	local pt7 = Vector3.new(0, 1, 1)
	local pt8 = Vector3.new(1, 1, 1)

	addSharpQuad(emesh, pt5, pt6, pt8, pt7) -- front
	addSharpQuad(emesh, pt1, pt3, pt4, pt2) -- back
	addSharpQuad(emesh, pt1, pt5, pt7, pt3) -- left
	addSharpQuad(emesh, pt2, pt4, pt8, pt6) -- right
	addSharpQuad(emesh, pt1, pt2, pt6, pt5) -- bottom
	addSharpQuad(emesh, pt3, pt7, pt8, pt4) -- top

	return emesh
end

And here is equivalent code that is now possible using the updated API:

-- Given 4 vertex ids, adds a new normal and 2 triangles, making a sharp quad
local function addSharpQuad(emesh, vid0, vid1, vid2, vid3)
	-- AddTriangle creates a merged normal per vertex by default.
	-- For the sharp cube, we override the default normals with 
	-- 6 normals - a new normal to use for each side of the cube
	local nid = emesh:AddNormal() 

	local fid1 = emesh:AddTriangle(vid0, vid1, vid2)
	emesh:SetFaceNormals(fid1, {nid, nid, nid})
	
	local fid2 = emesh:AddTriangle(vid0, vid2, vid3)
	emesh:SetFaceNormals(fid2, {nid, nid, nid})
end

-- Makes a cube with creased edges between the sides by using normal ids
local function makeSharpCube_splitNormals()
	local emesh = Instance.new("EditableMesh")

	local v1 = emesh:AddVertex(Vector3.new(0, 0, 0))
	local v2 = emesh:AddVertex(Vector3.new(1, 0, 0))
	local v3 = emesh:AddVertex(Vector3.new(0, 1, 0))
	local v4 = emesh:AddVertex(Vector3.new(1, 1, 0))
	local v5 = emesh:AddVertex(Vector3.new(0, 0, 1))
	local v6 = emesh:AddVertex(Vector3.new(1, 0, 1))
	local v7 = emesh:AddVertex(Vector3.new(0, 1, 1))
	local v8 = emesh:AddVertex(Vector3.new(1, 1, 1))

	addSharpQuad(emesh, v5, v6, v8, v7) -- front
	addSharpQuad(emesh, v1, v3, v4, v2) -- back
	addSharpQuad(emesh, v1, v5, v7, v3) -- left
	addSharpQuad(emesh, v2, v4, v8, v6) -- right
	addSharpQuad(emesh, v1, v2, v6, v5) -- bottom
	addSharpQuad(emesh, v3, v7, v8, v4) -- top

	-- Because we override all of the default normals, we can remove them
	emesh:RemoveUnused()
	return emesh
end

For more complicated models with sharp edges, like the one below, the difference is even more substantial.

With a single normal per vertex (previous): 30,857 vertices, 30,857 normals
With split normals on a vertex (updated API): 7,252 vertices, 6 normals

Note: You only need 6 normals if the shape above is static. You will need more than 6 normals if you would like the above shape to be deformable.

One big change is that we now have more types of stable IDs. In addition to vertex IDs and face IDs, there are now also normal IDs, UV IDs, and color IDs. You can create these manually, as in the sharp cube example above. Or, if you don’t need split attributes on a vertex, AddTriangle will automatically create merged attribute IDs on each vertex.

For example, here is some code that creates a plane with a smooth color gradient, using the color IDs that are created by AddTriangle:


-- given indices on a plane, produce a vertex color
local function colorForIndices(iu, iv, numU, numV)
	return Color3.new(iu/numU, iv/numV, 1)
end

-- create a color plane by setting vertex colors
local function makeColorfulMesh(numU, numV, size)
	local emesh = Instance.new("EditableMesh")
	
	-- Add all vertices
	local verts = {}
	for iu=1,numU do
		for iv=1,numV do
			verts[iv*numU + iu] = emesh:AddVertex(Vector3.new(iu/numU * size.X, iv/numV * size.Y, 0))
		end
	end
	
	-- Add faces and set colors
	for iu=1,numU-1 do
		for iv=1,numV-1 do
			local v1 = verts[(iu  )+(iv  )*numU]
			local v2 = verts[(iu+1)+(iv  )*numU]
			local v3 = verts[(iu  )+(iv+1)*numU]
			local v4 = verts[(iu+1)+(iv+1)*numU]
			
			local t1 = emesh:AddTriangle(v1, v2, v3)
			local t2 = emesh:AddTriangle(v2, v4, v3)
			if iu == 1 or iv == 1 then
				local colorIds = emesh:GetFaceColors(t1)
				emesh:SetColor(colorIds[1], colorForIndices(iu  , iv  , numU, numV)) -- color for v1
				emesh:SetColor(colorIds[2], colorForIndices(iu+1, iv  , numU, numV)) -- color for v2
				emesh:SetColor(colorIds[3], colorForIndices(iu  , iv+1, numU, numV)) -- color for v3
			end
			
			local colorIds = emesh:GetFaceColors(t2)
			emesh:SetColor(colorIds[2], colorForIndices(iu+1, iv+1, numU, numV)) -- color for v4
		end
	end

	return emesh
end

Other notable fixes

  • EditableMesh preview now works under Humanoid models (“FastCluster” rendering is now supported)
  • EditableMesh now works on Mac devices running versions older than macOS 11.0
  • Cloning an EditableMesh with 0 triangles will no longer cause a crash.

EditableImage updates

Drawing APIs now support different blending options

EditableImage drawing APIs such as DrawCircle now support an optional ImageCombineType argument that specifies how to blend the pixels.

By setting ImageCombineType, you can choose between the following blending options:

  • BlendSourceOver: Uses source-over alpha blending (previous behavior)
  • Overwrite: Overrides all pixels
  • AlphaBlend: Uses alpha blending
  • Add: Adds pixel values
  • Multiply: Multiplies pixel values

Here are some examples of using the DrawCircle API to draw the same circle onto the same background with each of the blend options selected:

BlendSourceOver (left), Overwrite (right)

AlphaBlend (left), Add (right)

Multiply
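For reference, a call using one of these options might look like the snippet below. This is a minimal sketch: it assumes an EditableImage created via Instance.new with a settable Size (as in the beta), and a DrawCircle signature of (center, radius, color, transparency, combineType).

-- Minimal sketch: draw the same circle twice with different blend options.
-- Assumes a settable Size and a DrawCircle signature of
-- (center, radius, color, transparency, combineType).
local editableImage = Instance.new("EditableImage")
editableImage.Size = Vector2.new(256, 256)

local center = Vector2.new(128, 128)
local radius = 64
local color = Color3.new(1, 0, 0)
local transparency = 0.5

-- Previous behavior: source-over alpha blending
editableImage:DrawCircle(center, radius, color, transparency, Enum.ImageCombineType.BlendSourceOver)

-- Overwrite: replaces the existing pixels outright
editableImage:DrawCircle(center, radius, color, transparency, Enum.ImageCombineType.Overwrite)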

New ReadPixelsBuffer and WritePixelsBuffer APIs

We’re adding the ReadPixelsBuffer API, a version of ReadPixels that returns a Luau buffer object. We’re also adding WritePixelsBuffer, a corresponding version of WritePixels that takes a buffer object as an argument. This is much more memory-efficient than the table versions of these APIs, because each pixel can be represented by 4 bytes in the buffer rather than four full Lua numbers in the table.

The example below compares ReadPixels/WritePixels with ReadPixelsBuffer/WritePixelsBuffer. Both sets of APIs are still available in this Studio Beta, but we are considering dropping ReadPixels/WritePixels for the full release. We would love specific feedback on performance and memory characteristics as you try out both versions of the APIs.

-- Inverts an EditableImage using the ReadPixels API
local function invertImage(editableImage : EditableImage)
   local pixelsArray = editableImage:ReadPixels(Vector2.new(0, 0), editableImage.Size)

   local index = 1
   for _ = 1, editableImage.Size.X * editableImage.Size.Y do
       pixelsArray[index] = 1 - pixelsArray[index]
       pixelsArray[index + 1] = 1 - pixelsArray[index + 1]
       pixelsArray[index + 2] = 1 - pixelsArray[index + 2]
       index = index + 4
   end

   editableImage:WritePixels(Vector2.new(0, 0), editableImage.Size, pixelsArray)
end

-- Inverts an EditableImage using the ReadPixelsBuffer API which is more memory efficient
local function invertImageBuffer(editableImage : EditableImage)
   local pixelsBuffer = editableImage:ReadPixelsBuffer(Vector2.new(0, 0), editableImage.Size)

   local index = 0
   for _ = 1, editableImage.Size.X * editableImage.Size.Y do
       buffer.writeu8(pixelsBuffer, index, 255 - buffer.readu8(pixelsBuffer, index))
       buffer.writeu8(pixelsBuffer, index + 1, 255 - buffer.readu8(pixelsBuffer, index + 1))
       buffer.writeu8(pixelsBuffer, index + 2, 255 - buffer.readu8(pixelsBuffer, index + 2))
       index = index + 4
   end

   editableImage:WritePixelsBuffer(Vector2.new(0, 0), editableImage.Size, pixelsBuffer)
end

EditableImage Performance and Memory improvements

EditableImage is an incredibly powerful API since it gives you direct Lua access to the pixels of a texture. That level of access can result in suboptimal performance and memory overhead if not used carefully.

In this update, we put a lot of effort into squeezing out performance and reducing memory overhead within the engine so that you have more headroom in your scripts. You should notice improved performance, especially when using EditableImage with SurfaceAppearance instances.

We are continuing to make performance and memory improvements to both the EditableImage and EditableMesh APIs over the next few months to prepare for the full release, so keep an eye out for those as well.

What’s Next

With these updates to the EditableMesh and EditableImage APIs, we hope to have addressed some of the major workflow issues raised during the original Studio Beta.

We are still working on a few more API changes to address feedback and other issues. Keep an eye out for these in future updates, since they will likely introduce breaking changes to existing scripts that use this Studio Beta:

  1. Memory management: EditableMesh and EditableImage objects are very memory-heavy since they give you direct access to vertices and pixels. To ensure these APIs can scale down to low-end mobile devices with limited memory, we are considering a workflow that first checks that the device has enough memory headroom before creating a new EditableImage or EditableMesh object.

  2. Multi-owner references instead of EditableMesh / EditableImage as children: As mentioned in the original Studio Beta post, live-previewing vertex or pixel changes today works by parenting the EditableMesh / EditableImage under the instance you want to override (see the sketch after this list). This prevents using the same Editable* data across multiple instances. We are working on a new workflow for previewing Editable* data, as well as updating the way multiple instances (MeshPart, SurfaceAppearance) reference these asset-like objects with shared ownership.
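For readers who haven’t tried the Studio Beta yet, here is a rough sketch of the preview-by-parenting workflow described in (2). The ImageLabel target and the drawing call are illustrative assumptions based on the beta workflow, not a prescribed setup.

-- Rough sketch (LocalScript) of the current preview workflow: the EditableImage
-- is parented under the instance it should override. ImageLabel is used here as
-- an illustrative preview target.
local Players = game:GetService("Players")

local screenGui = Instance.new("ScreenGui")
screenGui.Parent = Players.LocalPlayer:WaitForChild("PlayerGui")

local imageLabel = Instance.new("ImageLabel")
imageLabel.Size = UDim2.fromOffset(128, 128)
imageLabel.Parent = screenGui

local editableImage = Instance.new("EditableImage")
editableImage.Size = Vector2.new(128, 128)
editableImage:DrawRectangle(Vector2.new(0, 0), editableImage.Size, Color3.new(0, 0.5, 1), 0)

-- While parented here, the EditableImage overrides what the ImageLabel displays.
-- The same data cannot currently drive a second instance at the same time, which
-- is the limitation the multi-owner reference work aims to remove.
editableImage.Parent = imageLabel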

Do keep an eye out for an upcoming DevForum post that outlines the permissions and usage guidelines that will be in place at launch. We really appreciate all the feedback and excitement around these APIs so far, and we hope you continue to share your valuable insights.

Thanks,

@L3Norm, @TheGamer101, @ContextLost, @FarazTheGreat, @syntezoid, @portenio, @FGmm_r2

124 Likes


For a second I saw “Write Pixels” and I thought: we can write pixels anywhere, not only on EditableImages??

But then I realized that it’s for EditableImages, and from what I remember, they have a limited size of 1024x1024, so you’d have to create a new Canvas next to the other one to continue.

 

So :person_shrugging:

Canvas in 3D space can end up confusing: you’ll realize there’s an Offset X and Offset Y, a Top and a Bottom, and more. And then you’ll realize that there’s 3D space on a surface, and a size.

And a Canvas can also be expensive to calculate if you try to fill an entire, very large surface with it.

4 Likes

Sigh… looks like this buzzword is still being used. Once again, please quit optimizing our games for us and limiting the engine because of it. Not cool.

People with low-end devices are a small minority and delaying engine features and/or not adding them entirely is stupid. Imagine if Unreal Engine limited tris due to the possibility of running on the iPhone 6…

31 Likes

1024x1024 is very much sufficient for most use cases, especially since you can supplement it by adding more and more canvases. If you can figure out how much each pixel takes up in 3D space, you can get the tiling working pretty much perfectly.
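To illustrate the math involved, here is a rough sketch of how the canvas count scales with surface size, assuming one 1024x1024 EditableImage per canvas and an arbitrarily chosen pixel density:

-- Rough sketch of the tiling math: how many 1024x1024 canvases it takes to
-- cover a rectangular surface at a chosen pixel density (values are examples).
local CANVAS_RESOLUTION = 1024 -- pixels per canvas edge
local PIXELS_PER_STUD = 64     -- chosen density; one canvas then spans 16x16 studs

local function canvasesForSurface(widthStuds: number, heightStuds: number): number
	local canvasSpanStuds = CANVAS_RESOLUTION / PIXELS_PER_STUD
	local columns = math.ceil(widthStuds / canvasSpanStuds)
	local rows = math.ceil(heightStuds / canvasSpanStuds)
	return columns * rows
end

-- A 512x512 stud baseplate at this density needs 32 x 32 = 1024 canvases,
-- which is why filling a very large surface gets expensive quickly.
print(canvasesForSurface(512, 512))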

1 Like

I will provide you an effective demonstration.

[image]

 

Sorta reminds me of when I was implementing the logic of a very cool editor called Hammer Editor:

 

This is what it became:

 

This is why it needs math.

[image]

I think I ended up fixing it; I haven’t touched it in a very long time.

Just a note: it’s a scripted tool that dynamically places a Canvas on surfaces, not something that is made manually.

 

Now, let’s say you’d want to continue this 3D Painting Canvas Era onto the Baseplate…

Either you fill up the Baseplate with 1024x1024 canvases from end to end, or not at all. And doing that raises a question: would every device remain alive?

You could do complex math to place only one, or mathematically split everything into chunks so a Canvas can be placed faster at the right location. But if parts change size dynamically, it becomes very complicated.

The point of that test was to have every single pixel editable so you could paint across surfaces. Something that is very crazy, but also sounds very cool.

 

You can, of course, probably create a 1024x1024 texture and use it anywhere, but that wasn’t the point. The point was to have a unique canvas to paint on dynamically. Since it’s called EditableImage and not EditableRenderingBuffer, it’s questionable what even happens to memory if you create too many unique EditableImages.

4 Likes

Can we get something like this for the basic GUI objects like ImageLabel or ImageButton too any time please? :pray: :sob:

9 Likes

This is my biggest complaint about the Roblox engine. It’s frustrating having to resort to hacky or non-ideal methods because the entire engine ends up being handicapped. Too many things are being automated for us, and it severely hinders our ability to optimize our games how we see fit. It’s easy enough for us to detect whether a device is mobile or low-end and optimize for those scenarios ourselves, instead of not being able to have a feature at all.

11 Likes

Tbh I’d suggest that they hook into this with graphics levels, but I won’t because of how it works. It’d probably be good if we had the graphics controls we’ve all been begging for, but we don’t, so I don’t see that happening, and frankly I don’t want it to.

3 Likes

Can we please get a faster way to access vertex positions?
To clarify, I am asking for a function along the lines of EditableMesh:GetPositions() which returns an array structured like this: {[VertexID]: VertexPosition}. This makes it much faster to get vertex positions in tight loops where performance is critical. For reference, it roughly doubled the speed of a complex calculation in my testing.

Why am I asking for this to be built-in?
I ask because creating this array manually takes a shockingly long time, just due to how costly all of the :GetPosition() calls are, but that initial cost still pays off. Regardless of whether this is added, having an array set up this way is the fastest way to get vertex positions; it would just be much better (and definitely faster) if it were built in.
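For anyone curious, the manual cache being described is along these lines (a sketch; it assumes the beta’s GetVertices and GetPosition methods):

-- Sketch of the manual workaround: build a {[vertexId] = position} lookup once,
-- then read from the table inside hot loops instead of calling :GetPosition()
-- repeatedly. Assumes the beta's GetVertices() and GetPosition(vertexId).
local function buildPositionCache(emesh: EditableMesh): {[number]: Vector3}
	local positions = {}
	for _, vertexId in emesh:GetVertices() do
		positions[vertexId] = emesh:GetPosition(vertexId)
	end
	return positions
end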

I have already asked for this, but this would be very helpful and it would probably be best if it were added before the full release.

9 Likes

This would be super cool

2 Likes

Again, I just wish to bump this as a possibly less-than-ideal setup, given previous allusions to asset privacy restrictions being very strict. Ideally, we should be able to load any mesh/image that we can currently load in our experience into an editable, but not be permitted to publish any mesh/image that originates from an asset we don’t own. One of the strongest use cases for editables is visual effects, and that includes visual effects on player characters.

It could be argued that someone could just copy the pixels/vertices of an editable generated from an asset you don’t own into a blank editable to bypass the proposed restrictions I just listed; however, that could be done anyway by using external APIs to load mesh/texture data and parse it into an editable. Ideally, please don’t add any restriction that harms perfectly valid use cases just to mitigate the few bad actors, since those bad actors will find a different way anyway and it will only end up stunting creative uses of the APIs. :pray:

3 Likes

While we’re on the topic of meshes, are there any plans to add a ResampleMode for textures, similar to what ImageLabels have, to properly display pixelated or low-resolution textures?
This feature would be incredibly valuable for terrain generation and custom meshes, allowing us to use low-resolution images as textures or custom texture packs and greatly reducing memory usage, because currently we’re forced to upload 1024x1024 images. It would pair perfectly with the new EditableMesh APIs.
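For context, this is the existing 2D property being referred to; the request is for an equivalent on mesh textures (the asset id below is a placeholder):

-- The existing 2D equivalent: ImageLabel/ImageButton expose ResampleMode so a
-- low-resolution image stays crisp instead of being smoothed when scaled up.
local imageLabel = Instance.new("ImageLabel")
imageLabel.Image = "rbxassetid://0" -- placeholder asset id
imageLabel.ResampleMode = Enum.ResamplerMode.Pixelated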

It’s a feature we’ve all wanted for a long time and it has been asked before here.

5 Likes

Very excited for this! Would it be viable to have this batch the draw calls? I’ve noticed that currently even the same EditableImage texture won’t batch, and that has been causing some bottlenecks. Thank you!

I don’t understand this. Are you advocating for Roblox to crash on low-end devices with no way for your game to respond to it? This is adding a way for you to handle the failure case; you aren’t losing functionality here.

12 Likes

@FGmm_r2 Is there any chance that EditableImage instances will support additional formats for WritePixelsBuffer? Currently for my minimap system, I store my EditableImage data in run-length encoded buffers to save space and it’d be nice if I could just plug these buffers directly into the EditableImage without decompressing them, since decompressing them takes a fair bit of CPU time.
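For context, the decompression step being described looks roughly like the sketch below. The (count, r, g, b, a) run layout is an illustrative assumption, not the actual format used in the minimap system.

-- Sketch: decode a run-length-encoded buffer (count, r, g, b, a per run) into
-- a flat RGBA pixel buffer that WritePixelsBuffer accepts. The RLE layout here
-- is an illustrative assumption.
local function decodeRLE(rle: buffer, pixelCount: number): buffer
	local pixels = buffer.create(pixelCount * 4)
	local readOffset = 0
	local writeOffset = 0
	while readOffset < buffer.len(rle) do
		local runLength = buffer.readu8(rle, readOffset)
		local r = buffer.readu8(rle, readOffset + 1)
		local g = buffer.readu8(rle, readOffset + 2)
		local b = buffer.readu8(rle, readOffset + 3)
		local a = buffer.readu8(rle, readOffset + 4)
		readOffset += 5
		for _ = 1, runLength do
			buffer.writeu8(pixels, writeOffset, r)
			buffer.writeu8(pixels, writeOffset + 1, g)
			buffer.writeu8(pixels, writeOffset + 2, b)
			buffer.writeu8(pixels, writeOffset + 3, a)
			writeOffset += 4
		end
	end
	return pixels
end

-- Usage: editableImage:WritePixelsBuffer(Vector2.new(0, 0), editableImage.Size, decodeRLE(rleBuffer, pixelCount))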

[external media]

That’s not what I said. I said Roblox should stop holding the engine back in favor of a minority (low-end phones). Old/cheap phones are good now, and there’s no reason Roblox should be refusing features, nerfing them, or delaying releases because of “think of the people with 2GB of memory!” If I want my game to run well on a phone with 2GB of memory, I’ll do that myself.

Even the cheapest/oldest phones now are more than capable of running games normally…


[image]

I am talking specifically about the passage you highlighted from the OP. Roblox wants to let you take what would otherwise be a crash and respond to it more gracefully, with no change in functionality for more powerful devices.

Can you be more specific about how this is holding you back?

10 Likes

This is maybe an inappropriate place to ask about this, but one use case for EditableMesh is creating meshes with animated texture scrolling, done by incrementing and updating all UVs every frame. However, constantly invalidating the cluster for each animated mesh seems expensive. Ideally, this would be done in the shader with a simple UV offset parameter, akin to the current Texture instance.
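For context, the per-frame UV update being described looks roughly like the sketch below; the GetUVs/GetUV/SetUV method names are assumed from the UV IDs mentioned in the update above.

-- Rough sketch of per-frame UV scrolling: shift every UV by a small offset each
-- Heartbeat. GetUVs/GetUV/SetUV are assumed method names based on the UV IDs
-- introduced above; every write invalidates the mesh, which is the cost at issue.
local RunService = game:GetService("RunService")

local function startUVScroll(emesh: EditableMesh, speed: Vector2)
	RunService.Heartbeat:Connect(function(dt)
		local offset = speed * dt
		for _, uvId in emesh:GetUVs() do
			emesh:SetUV(uvId, emesh:GetUV(uvId) + offset)
		end
	end)
end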

Is there any chance we can get a UV offset parameter for MeshParts? Or will we have to wait for custom Luau shaders to get performant animated textures :sob:

4 Likes

i agree with this
unfortunately I could never see roblox adding shaders, since they’re so focused on ai and whatnot…

4 Likes