[Client Beta] In-experience Mesh & Image APIs now available in published experiences

Hello! destroy() will set the EditableMesh's FixedSize to true

Are there any plans to allow developers to draw a subsection of an EditableImage onto another?

Hi, could you provide a use case for this request? That would help us understand how to support it better, or whether other functionality already on the roadmap could address it.

Sorry for just blurting out this whole thing but I’ve been thinking about and discussing this privately with friends for a while and would appreciate this. If anything is confusing I’m more than happy to discuss further as well.

I’m proposing to allow developers to draw a subsection of an EditableImage onto another EditableImage rather than requiring full-image transfers. This would enable more efficient memory use and better support for tile-based rendering techniques.
Currently, EditableImage:DrawImage only allows drawing an entire image onto another, which presents challenges for tile-based games. In one of my 2D projects, where the bulk of rendering is tile-based and handled using an EditableImage, a common design question arises:

  • Should developers upload each tile as a separate image, consuming fewer resources but quickly becoming impractical with large tilesets?
  • Or should they upload a single tileset and extract individual tiles into their own EditableImages at runtime, which effectively doubles memory usage due to maintaining both the tileset and the extracted tiles?

By enabling the ability to draw a subsection of an EditableImage, developers could efficiently use tilesets without excessive memory duplication, improving both flexibility and performance.
Since EditableImage:DrawImage does not support drawing subsections, I currently use the second method: extracting individual tiles at runtime into separate EditableImages (sketched below). However, as mentioned, this approach is inefficient, consuming unnecessary memory and adding extra processing overhead, which inherently limits how many devices the experience can support.
There are currently no APIs in Roblox that allow drawing a subsection of an EditableImage, making this a gap in functionality that could be improved.
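
For context, a minimal sketch of that second approach using only existing APIs (the asset id, tile size, and helper name here are placeholders):

local AssetService = game:GetService("AssetService")

local TILE_SIZE = Vector2.new(16, 16) -- assumed tile dimensions
local tileset = AssetService:CreateEditableImageAsync(Content.fromAssetId(0)) -- placeholder asset id

-- Every tile gets copied into its own EditableImage, so the tileset's pixels end up
-- stored twice: once in `tileset` and once again across all the extracted tiles.
local function extractTile(column: number, row: number): EditableImage
    local origin = Vector2.new(column, row) * TILE_SIZE
    local pixels = tileset:ReadPixelsBuffer(origin, TILE_SIZE)

    local tile = AssetService:CreateEditableImage({ Size = TILE_SIZE })
    tile:WritePixelsBuffer(Vector2.zero, TILE_SIZE, pixels)
    return tile
end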

In a traditional workflow, developers typically use ImageLabel or ImageButton instances to display textures, including tilesets. However, EditableImage is a better fit for 2D game rendering.
Unlike standard UI rendering, EditableImage grants direct access to pixel data, enabling procedural modifications, custom effects, and dynamic texture generation—key features for creating unique 2D visual styles.
Using instances for tile-based rendering introduces significant overhead due to the sheer number of unique UI elements required. In contrast, EditableImage allows all rendering to be handled within a single texture, drastically improving performance.
Additionally, EditableImage offers the flexibility to implement custom rendering techniques, such as auto-tiling, making it a powerful tool for efficient and dynamic 2D game development.

A modification to DrawImage or the introduction of a new method would allow developers to efficiently use tilesets without excessive memory duplication.
Option 1: Modify DrawImage to accept a window region
EditableImage:DrawImage(position, windowPosition, windowSize, image, combineType)

  • windowPosition: A Vector2 specifying the top-left corner of the region in the source image.
  • windowSize: A Vector2 defining the width and height of the subsection to be drawn.
The other parameters would stay the same.
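
For illustration, drawing one 16x16 tile from a tileset with this signature might look like the following (the image handles and coordinates are assumptions, not a shipped API):

screenImage:DrawImage(
    Vector2.new(64, 32),  -- position on the destination image
    Vector2.new(32, 48),  -- windowPosition: top-left corner of the tile in the source
    Vector2.new(16, 16),  -- windowSize: dimensions of the subsection
    tilesetImage,
    Enum.ImageCombineType.Overwrite
)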

Option 2 (Preferred): Use a configuration table
An alternative and more flexible approach would be to modify the DrawImage method to accept a configuration table:
EditableImage:DrawImage(position, image, config)
Where config is a table:

{
    WindowPosition: Vector2,
    WindowSize: Vector2,
    CombineType: Enum.ImageCombineType
}

Why this is a preferred approach:

  • The EditableImage API is already in production, so adding new required parameters to DrawImage would break existing code
  • Using a config table allows new functionality to be added without forcing developers to update all existing usages of DrawImage
  • If config is omitted, it defaults to {}, ensuring full backwards compatibility
  • Aligns with the existing trend of config-based APIs like AssetService:CreateEditableImage, maintaining consistency in API design.
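
The same tile draw under this hypothetical config-table form (handles and coordinates are again assumptions):

screenImage:DrawImage(Vector2.new(64, 32), tilesetImage, {
    WindowPosition = Vector2.new(32, 48),
    WindowSize = Vector2.new(16, 16),
    CombineType = Enum.ImageCombineType.Overwrite,
})
-- Omitting the table would behave exactly like DrawImage does today.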

Option 3: Introduce a new method, DrawSubsection
Rather than modifying DrawImage, another approach would be to add a dedicated method:
EditableImage:DrawSubsection(position, image, windowPosition, windowSize, combineType)
This method would function similarly to DrawImage but explicitly target drawing subsections.
The benefits of this approach are:

  • It avoids modifying DrawImage, keeping it simple for cases where full-image drawing is still preferred.
  • Developers who need subsection drawing can adopt DrawSubsection without affecting existing workflows.
  • It makes it clear in the API that DrawSubsection is specifically for partial image transfers, aiding discoverability.
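
Again purely for illustration, the equivalent call under this hypothetical method would be:

screenImage:DrawSubsection(Vector2.new(64, 32), tilesetImage, Vector2.new(32, 48), Vector2.new(16, 16), Enum.ImageCombineType.Overwrite)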

Hello there. Any plans to allow using assets uploaded by other users? We have a cool feature that allows mapping clothing textures to R15 character meshes without Humanoids using EditableImage, but sadly it seems there is no way to load assets uploaded by other users into an EditableImage, so we can't use players' clothing.

Alternatives would be to manually upload a few clothing textures ourselves, or to download the images via HttpService, decode the JPEG or PNG data, and load it into EditableImages.

This method will be very useful, thanks for looking into it!

Can we expect this to also work for editable meshes?

Yes, we are also making this change to CreateEditableMeshAsync. Both these changes have been completed and will be rolled out in a few weeks.

Are you using Decals for this? For some reason, using TextureContent on Decals with a fully working image doesn't display anything, but once I changed it to an ImageLabel, for example, the image is fully visible.

Disheartened that I can’t use this for meshes that I don’t own when I really just wanted to “get” info about said mesh - in my case, the mesh’s face colours (since the mesh has been vertex painted and I want to retrieve those colours).

Is there any chance of implementing a “safe” (for lack of a better term) version of CreateEditableMeshAsync that allows you to use getters only (e.g. GetFaces, GetFaceColors)?

No, We’re only using EditableImages.

Well, duh, but you have to apply them to a Decal, an ImageLabel, or a mesh texture.

Yes, we’ve used MeshPart.TextureContent. I thought you were talking about Decal and


As far as I remember, Decal.TextureContent is not supported yet.

I see, that’s a bit unfortunate. I believe they’ve also mentioned EditableImage support for color maps on SurfaceAppearance, although you can’t even change the color map at runtime via code, so I don’t know when that will even happen at this point.

I’m once again asking if we could please get a way to clone a MeshPart and its EditableMesh. I’ve made a custom solution to work around any possible memory issues by dynamically deleting EditableMeshes that aren’t in priority, but now I’m dealing with rate limit issues from using AssetService to regenerate them each time.

Some kind of caching for EditableMeshes, so they don’t have to be regenerated via AssetService every time, would solve all my problems.
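
For reference, the kind of script-side cache I mean looks roughly like this (asset ids and the helper name are placeholders); it avoids repeated AssetService calls but keeps every EditableMesh alive, which is exactly the memory trade-off described above:

local AssetService = game:GetService("AssetService")

-- Hypothetical cache keyed by asset id: each EditableMesh is created at most once.
local cache: { [number]: EditableMesh } = {}

local function getEditableMesh(assetId: number): EditableMesh
    local cached = cache[assetId]
    if cached then
        return cached
    end
    local mesh = AssetService:CreateEditableMeshAsync(Content.fromAssetId(assetId))
    cache[assetId] = mesh
    return mesh
end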

Decal.TextureContent is now supported! This has been released.

Are there any plans to extend the API permissions to any assets published on the marketplace?

I am currently working on an ad system which would allow people to display their images on billboards in my game for a small fee. In that case, I’d find the EditableImage API very useful, as it offers the ability to get an image’s size/resolution; to my knowledge, there is no other way to do so.

Eventually, a new universal AssetService method that returns the image resolution by asset ID could fit my needs.

I can imagine some more possible scenarios when I’d prefer to know the image resolution before applying it to an EditableImage or ImageLabel.
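
For what it’s worth, the closest approximation today, for assets the experience is actually permitted to load, is roughly the following sketch (the helper name is hypothetical):

local AssetService = game:GetService("AssetService")

-- Hypothetical helper: returns the pixel size of an image asset, or nil if the
-- asset cannot be loaded into an EditableImage (e.g. not owned/permitted).
local function tryGetImageSize(assetId: number): Vector2?
    local ok, image = pcall(function()
        return AssetService:CreateEditableImageAsync(Content.fromAssetId(assetId))
    end)
    if not ok then
        return nil
    end
    local size = image.Size
    image:Destroy()
    return size
end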

Is there any way to debug editable mesh collisions? I am not sure if this is broken due to a recent Studio update or something that I changed in the mesh generation, but my character should be colliding with the mesh (instead, I can just walk through it).

@vfx_1 Could you clarify why you need to access different image resolutions? For instance, if ImageLabel could automatically select the optimal resolution based on the final on-screen result, would that meet your needs, or are you aiming for something more specific?

Hi @SuperDadV724, normally you would enable “Show Convex Decomposition” in Studio Settings to debug collisions. However, there’s a known bug that prevents EditableMesh collisions from rendering properly, and we will be working on this fix soon.

At the moment, EditableMeshes don’t fully support collisions. This means that after modifying an EditableMesh, you will need to recreate its collision geometry by calling CreateMeshPartAsync() with the collision fidelity that best suits your terrain chunks, even though you won’t be able to visualize the collisions right now.
Keep in mind that Roblox currently doesn’t support open or other non-manifold meshes. So, make sure your terrain chunks are closed and free of self-intersections.
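
A minimal sketch of that recreation step, assuming `editableMesh` is the modified EditableMesh, `chunkPart` is the MeshPart currently rendering the chunk, and Default fidelity is just an example choice:

local AssetService = game:GetService("AssetService")

local function rebuildChunkCollision(chunkPart: MeshPart, editableMesh: EditableMesh)
    -- Build a fresh MeshPart (and collision geometry) from the edited mesh...
    local newPart = AssetService:CreateMeshPartAsync(Content.fromObject(editableMesh), {
        CollisionFidelity = Enum.CollisionFidelity.Default,
    })
    -- ...then swap its geometry onto the existing part in place.
    chunkPart:ApplyMesh(newPart)
end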

Hi, @portenio

I am looking for an efficient way to read an image asset’s dimensions/resolution before applying it to an ImageLabel. As I mentioned earlier, I am creating a dynamic advertisement system where people would be able to display their own images by AssetID on billboards.
However, for aesthetic reasons among others, I’d prefer to only allow images that fit the specific billboard display size rather than cropping or stretching an image that doesn’t really fit the display.

Either a method to return the image size by AssetID or a read-only ImageLabel property would fit my needs. I am looking forward to utilising the EditableImage API in the future, but as of now, due to the current limitations, users would not be able to display their own images even if they published them on the marketplace.

After looking around the DevForum, I can easily see that such a feature is in demand, and I believe other creators would benefit from it as well.

There is a way to get around this that involves the assetdelivery API and a private web service, which achieves the desired result; however, at the end of the day I believe that Roblox could provide us with a faster and more efficient way.

Hello

Editable meshes have proven to be very useful, however, unless I’ve missed something, they have one huge drawback. You can’t easily make a lot of small editable meshes without overloading the network. If I were to make blank editable meshes with AssetService:CreateEditableMesh(), I could only make around 10 on the client. If I instead use AssetService:CreateEditableMeshAsync(content, { FixedSize = true }) with a premade small mesh as content, then I can make a whole bunch of those due to memory no longer being the limiting factor, however this requires extensive use of network calls, which can very quickly exhaust the allowed limits for clients.

Are there any plans to add a fixed size parameter (or even better, an upper bound parameter) when creating blank editable meshes? Or at least a way to copy editable meshes without the need for a network call to be made?

Currently it seems to me that whenever you create a non-fixed size editable mesh, it is assumed that it will be as big as possible when it comes to memory, causing you to very rapidly run out of space when working with multiple editable meshes.

Also, in order for me to actually use every editable mesh that I create, I must make a new MeshPart with AssetService:CreateMeshPartAsync(meshContent), which is yet another network call, though at least these meshparts can actually be cloned properly without needing extra network calls for that.

Such a number of network calls for things like this seems very unnecessary; making these operations more local would make custom generation a lot easier.

If I’ve misunderstood how something about editable meshes works and I can actually do these things without too many network calls, then please tell me.

Edit: Didn’t mean to reply to you @vfx_1, my bad
