[Client Beta] In-experience Mesh & Image APIs now available in published experiences

Well, it did break one of my things, specifically my ray tracers. My workaround for the 1024x1024 resolution limit was to create multiple smaller EditableImages and stitch them together. Now when I actually try rendering, only one EditableImage updates at a time while all the rest remain stuck. Here is a clip of that:

I get the idea behind the performance issues, but my only alternative here is ImageLabels, and we can all agree those would be a million times more demanding.

Is there at least an FFlag I could modify right now?

1 Like

Sorry this broke your code; this limit did break backwards compatibility with the previous Studio beta. We want to avoid any further breaking changes after releasing EditableImage to clients, which is why this limit is so strict right now. We realize there is room for improvement and want to make it better in the future.

1 Like

Because recalculating the collisions can take a while, it is not done automatically. Depending on your situation, MeshPart:ApplyMesh might be the easiest way to do this.

-- assume that previewMesh is the MeshPart that's in the datamodel, and emesh is the EditableMesh
local AssetService = game:GetService("AssetService")

-- build a new MeshPart from the EditableMesh; pick whatever CollisionFidelity works for your case
local newPreviewMesh = AssetService:CreateMeshPartAsync(Content.fromObject(emesh), {
    CollisionFidelity = Enum.CollisionFidelity.PreciseConvexDecomposition,
})

-- to make sure that the render scale is the same after ApplyMesh
local oldRenderScale = previewMesh.Size / previewMesh.MeshSize
local newSize = newPreviewMesh.MeshSize * oldRenderScale
previewMesh:ApplyMesh(newPreviewMesh)
previewMesh.Size = newSize
1 Like

We will make the above method work; it does seem like the cleanest near-term way to support clone, and it should work intuitively since this takes Content.

As a temporary workaround, you can create a new EditableImage and use DrawImage to draw the image you want to copy into it.
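
A minimal sketch of that workaround, assuming the published AssetService:CreateEditableImage and EditableImage:DrawImage APIs; sourceImage stands in for the EditableImage you want to copy:

local AssetService = game:GetService("AssetService")

-- make a blank EditableImage with the same dimensions as the source
local copy = AssetService:CreateEditableImage({ Size = sourceImage.Size })

-- draw the whole source into the copy at the top-left corner
copy:DrawImage(Vector2.zero, sourceImage, Enum.ImageCombineType.Overwrite)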

4 Likes

Is there a way to bake the EditableMesh onto a MeshPart if I don't intend to change any of its vertices again?

I'm generating some meshes at run time whose vertices I never intend to change; however, I quickly run into the "Failed to create empty EditableMesh that was requested due to reaching memory budget limits." warning, as I am making quite a few of these.

If I do emesh:Destroy(), it removes the visible mesh even though the collision box is still there, but I want to be able to both see the mesh and collide with it. Is there a way to mark an editable mesh as non-editable in order to free up more memory?

Thank you.

3 Likes

You should be able to do MeshPart:ApplyMesh(editableMesh) and then remove the EditableMesh afterward.
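
For reference, a minimal sketch of the bake-and-free flow, assuming the published API where CreateMeshPartAsync takes Content (this sketch routes through CreateMeshPartAsync, since ApplyMesh itself takes a MeshPart, as in the earlier example above); emesh and targetMeshPart are placeholders:

local AssetService = game:GetService("AssetService")

-- bake the EditableMesh into a regular MeshPart
local baked = AssetService:CreateMeshPartAsync(Content.fromObject(emesh), {
    CollisionFidelity = Enum.CollisionFidelity.Default,
})

-- copy the baked geometry onto the MeshPart already in the workspace
targetMeshPart:ApplyMesh(baked)

-- the EditableMesh is no longer needed, so its memory budget can be reclaimed
emesh:Destroy()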

Are there any plans to lift PluginSecurity from MaterialVariant & SurfaceAppearance? Not being able to use EditableImage on those objects severely limits its use case for me. To be honest, I can't think of a single reason to use EditableImage in my experience, because PBR editing is impossible.

We plan to add support for using EditableImage with SurfaceAppearance.ColorMap soon. Full PBR support is quite a challenge, but it's still something we are investigating and want to support in the long term.

1 Like

My main use case is something like procedural normal maps for things like water, but it could also just be used to create a wide variety of animated materials. I also understand the performance of EditableImage still needs some work.

3 Likes

This is just a developer-convenience note, but for all the APIs that take multiple IDs (SetFaceUVs, SetFaceColors, etc.), it would be amazing if the runtime error raised on an invalid ID indicated which ID was invalid (face ID, color ID, vertex ID, etc.).

The current error of "Invalid id" does not help much, especially when it is some obscure edge case that is difficult to reproduce.

Will we ever get the option to change the sampling mode on EditableImages applied to EditableMeshes? It would be great to apply pixel art onto cubes, like in Minecraft, without having to upscale each texture or having the texture come out blurred.

MeshPart:ApplyMesh() only takes another MeshPart as its parameter; you can't use it for editable meshes.

Also, I tried this in Studio and it didn't work.

Still having this issue. I'd like to know whether a fix is planned so I can decide whether or not to continue using editable meshes in production.

Are there any plans to allow developers to draw a subsection of an EditableImage onto another?

A fix is planned, but it requires reworking how the data is stored in the EditableMesh, and may take some time. I wish I had better news, but I can't give a good estimate for when the fix will be available.

2 Likes

Hello! Destroy() will set the EditableMesh's FixedSize to true.

Are there any plans to allow developers to draw a subsection of an EditableImage onto another?

Hi, could you provide a use case for this request? That would help us understand how to support it better, or how we could adapt other functionality that is on the roadmap.

1 Like

Sorry for just blurting out this whole thing, but I've been thinking about this and discussing it privately with friends for a while, and I would appreciate it. If anything is confusing, I'm more than happy to discuss it further as well.

I'm proposing to allow developers to draw a subsection of an EditableImage onto another EditableImage rather than requiring full-image transfers. This would enable more efficient memory use and better support for tile-based rendering techniques.
Currently, EditableImage:DrawImage only allows drawing an entire image onto another, which presents challenges for tile-based games. In one of my 2D projects, where the bulk of the rendering is tile-based and handled through an EditableImage, a common design question arises:

  • Should developers upload each tile as a separate image, consuming fewer resources but quickly becoming impractical with large tilesets?
  • Or should they upload a single tileset and extract individual tiles into their own EditableImages at runtime, which effectively doubles memory usage by keeping both the tileset and the extracted tiles around?

By enabling the ability to draw a subsection of an EditableImage, developers could use tilesets efficiently without excessive memory duplication, improving both flexibility and performance.
Since EditableImage:DrawImage does not support drawing subsections, I currently use the second method: extracting individual tiles at runtime into separate EditableImages (a sketch of that extraction follows below). As mentioned, this approach is inefficient, consuming unnecessary memory and adding extra processing overhead, which inherently limits how many devices can be supported.
There are currently no APIs in Roblox that allow drawing a subsection of an EditableImage, making this a gap in functionality that could be filled.
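
For reference, here is a rough sketch of that extraction workaround, assuming the published EditableImage:ReadPixelsBuffer and EditableImage:WritePixelsBuffer APIs; tilesetImage, tileX, tileY, and tileSize are placeholders:

local AssetService = game:GetService("AssetService")

-- copy one tile out of the tileset into its own EditableImage
local function extractTile(tilesetImage, tileX, tileY, tileSize)
    local origin = Vector2.new(tileX * tileSize, tileY * tileSize)
    local region = Vector2.new(tileSize, tileSize)
    local pixels = tilesetImage:ReadPixelsBuffer(origin, region)

    local tile = AssetService:CreateEditableImage({ Size = region })
    tile:WritePixelsBuffer(Vector2.zero, region, pixels)
    return tile
end

This works, but it keeps both the tileset and every extracted tile resident in memory, which is exactly the duplication a windowed DrawImage would avoid.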

In a traditional workflow, developers typically use ImageLabel or ImageButton instances to display textures, including tilesets. However, EditableImage is a better fit for 2D game rendering.
Unlike standard UI rendering, EditableImage grants direct access to pixel data, enabling procedural modifications, custom effects, and dynamic texture generation: key features for creating unique 2D visual styles.
Using instances for tile-based rendering introduces significant overhead due to the sheer number of unique UI elements required. In contrast, EditableImage allows all rendering to be handled within a single texture, drastically improving performance.
Additionally, EditableImage offers the flexibility to implement custom rendering techniques, such as auto-tiling, making it a powerful tool for efficient and dynamic 2D game development.

A modification to DrawImage or the introduction of a new method would allow developers to efficiently use tilesets without excessive memory duplication.
Option 1: Modify DrawImage to accept a window region
EditableImage:DrawImage(position, windowPosition, windowSize, image, combineType)

  • windowPosition: A Vector2 specifying the top-left corner of the region in the source image.
  • windowSize: A Vector2 defining the width and height of the subsection to be drawn.
The other parameters would stay the same.

Option 2 (Preferred): Use a configuration table
An alternative and more flexible approach would be to modify the DrawImage method to accept a configuration table:
EditableImage:DrawImage(position, image, config)
Where config is a table:

{
    WindowPosition: Vector2,
    WindowSize: Vector2,
    CombineType: Enum.CombineType
}
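
As a purely hypothetical usage example of this proposed signature (none of it exists today), drawing the 16x16 tile at tileset cell (3, 2) into a screen buffer might look like:

screenImage:DrawImage(Vector2.new(64, 32), tilesetImage, {
    WindowPosition = Vector2.new(3 * 16, 2 * 16),
    WindowSize = Vector2.new(16, 16),
    CombineType = Enum.ImageCombineType.Overwrite,
})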

Why this is a preferred approach:

  • The EditableImage API is already in production, so adding new required parameters to DrawImage would break existing code
  • Using a config table allows new functionality to be added without forcing developers to update all existing usages of DrawImage
  • If config is omitted, it defaults to {}, ensuring full backwards compatibility
  • Aligns with the existing trend of config-based APIs like AssetService:CreateEditableImage, maintaining consistency in API design.

Option 3: Introduce a new method, DrawSubsection
Rather than modifying DrawImage, another approach would be to add a dedicated method:
EditableImage:DrawSubsection(position, image, windowPosition, windowSize, combineType)
This method would function similarly to DrawImage but explicitly target drawing subsections.
The benefits of this approach are:

  • It avoids modifying DrawImage, keeping it simple for cases where full-image drawing is still preferred.
  • Developers who need subsection drawing can adopt DrawSubsection without affecting existing workflows.
  • It makes it clear in the API that DrawSubsection is specifically for partial image transfers, which aids discoverability.
1 Like

Hello there. Are there any plans to allow using assets uploaded by other users? We have a cool feature that maps clothing textures onto R15 character meshes without Humanoids using EditableImage, but sadly there seems to be no way to load assets uploaded by other users into an EditableImage, so we can't use players' clothing.

The alternatives would be to manually upload a few clothing textures ourselves, or to download the images via HttpService, decode the JPEG or PNG data ourselves, and load the pixels into EditableImages.
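
A rough sketch of that fallback, assuming a hypothetical decodeImage helper (Roblox does not ship a PNG/JPEG decoder, so it would have to be a community Luau module) and a placeholder URL:

local HttpService = game:GetService("HttpService")
local AssetService = game:GetService("AssetService")

-- download the raw file bytes from an external host
-- (HttpService cannot call Roblox's own sites, so this only helps for images hosted elsewhere)
local raw = HttpService:GetAsync("https://example.com/shirt-template.png")

-- decodeImage is a hypothetical helper that returns width, height, and an RGBA pixel buffer
local width, height, rgba = decodeImage(raw)

local image = AssetService:CreateEditableImage({ Size = Vector2.new(width, height) })
image:WritePixelsBuffer(Vector2.zero, Vector2.new(width, height), rgba)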


1 Like

This method will be very useful, thanks for looking into it!

Can we expect this to also work for editable meshes?