[Studio Beta] Introducing CreateDataModelContent: Convert editable mesh and image data into static content

Sure, but would this still be the case if the mesh were used for purely visual purposes?

For my case with the planet specifically, I wasn't interested in having collisions or querying enabled on the larger mesh parts at all; I was actually more interested in having collision handled by a single, smaller EditableMesh that only surrounds the player (or possibly even Parts, if that's the best I can get).

Most ocean meshes I've seen calculate the height of a point on the mesh and apply a force to the player directly based on that, which is another instance where I believe engine physics or querying isn't directly required.

I'd imagine distant terrain meshes likely wouldn't use querying/collisions either.

Now of course I don't know how the backend handles all of this when you just toggle switches on and off (CanQuery, CanCollide, Anchored, etc.), but I'm assuming that stripping as many of the mesh's physical and query properties as possible would reduce this issue (or even, for instance, adding it to a collision group that collides with nothing).
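For instance, a minimal sketch of what I mean by stripping those properties (the group name and `planetChunk` are just placeholders I made up):

local PhysicsService = game:GetService("PhysicsService")

-- Hypothetical group that collides with nothing, including itself and Default
PhysicsService:RegisterCollisionGroup("VisualOnly")
PhysicsService:CollisionGroupSetCollidable("VisualOnly", "Default", false)
PhysicsService:CollisionGroupSetCollidable("VisualOnly", "VisualOnly", false)

-- Assuming `planetChunk` is one of the purely visual MeshParts
planetChunk.Anchored = true
planetChunk.CanCollide = false
planetChunk.CanQuery = false
planetChunk.CanTouch = false
planetChunk.CollisionGroup = "VisualOnly"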

Again, I'm not an expert in this at all, but I see this more as a way to handle generating purely visual meshes :slight_smile:

EDIT: Also wanted to note that the goal is still to use chunks to handle this (using an octree with the meshes at different resolutions), just doing so with meshes that are purely visual and larger than MeshParts currently allow.

There's also the issue of frustum/occlusion culling. It may not matter much in your case, but having everything in one mesh forces all of its geometry to be sent to the GPU, since the engine evaluates the entire large bounding box as being inside the camera frustum.

You are inevitably going to run into polygon limits enforced on each mesh. All in all, you just need to split it into chunks. It doesn’t matter if you’re doing the collision math yourself.


Agreed 100%. As stated, I would definitely still be splitting the mesh into chunks, and even when on the ground of the planet, I would have to do trial and error to see what maximum resolution/depth still keeps performance smooth. But I suppose my concern is that I would rather this be in my own hands than have the engine put a limit on it by default :slight_smile:

Frustum culling is an interesting point. Personally, I was also planning to implement this to a degree from scratch, to handle rendering only the parts of the planet in view for instance, but if the engine struggles with calculating its bounding box at such a large scale, then I can see why that could be a hiccup to some degree.

However, still, the actual large scale meshes are expected to be very low resolution (for instance, at a certain distance the planet will be the smallest amount of tris I can get it to be while maintaining a spherical shape), so polygon limits are not something I’m all too worried about.

Should note again that this is working almost exactly the way I want it to right now with SpecialMeshes, and basic testing shows little performance difference between rendering the planet at a small scale (say, a 10 stud radius) vs. a large scale (a 150,000 stud radius). It just seems like Roblox plans to move away from the trick I was using to get the mesh content to work on them, and thus I'd like to see MeshParts adopt the scalability of SpecialMeshes. This would also come with the benefits of MeshParts as a whole, like material integration.

All in all, the goal isn't to reach perfect detail everywhere; it's just to see how far I can push it!

Appreciate your feedback!

Hi, a static MeshPart should be able to use EditableImages. Just to help narrow this down: are you creating the planet's EditableImage in a LocalScript or on the server? Since EditableImages don't replicate yet, creating one in a server Script means it won't be visible to clients.
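For example, a minimal client-side sketch (`planetMesh` is just a placeholder name):

-- In a LocalScript; `planetMesh` is the MeshPart to texture
local AssetService = game:GetService("AssetService")

local editableImage = AssetService:CreateEditableImage({ Size = Vector2.new(512, 512) })
-- ...write the planet's pixels into editableImage here...
planetMesh.TextureContent = Content.fromObject(editableImage)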

AssetService:CreateSurfaceAppearance: ColorMap value should be an EditableImage Content
aaaaaaaaaaaaaaaaaaaaaa please


When can we expect this to be usable in published experiences?

As for feedback, I'd like to see a way to make animated SurfaceAppearances on meshes without needing to update an EditableImage every frame with the same animation frames in a loop. The key here is unlocking more use cases that don't reserve more memory than they need.
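For context, the current approach I'm describing looks roughly like this (the `frames` array of pre-rendered EditableImages is hypothetical):

local RunService = game:GetService("RunService")

-- Re-copying the same pre-rendered frames into the EditableImage forever,
-- which keeps every frame resident in memory
local frameIndex = 1
RunService.RenderStepped:Connect(function()
	frameIndex = frameIndex % #frames + 1
	editableImage:DrawImage(Vector2.zero, frames[frameIndex], Enum.ImageCombineType.Overwrite)
end)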

Got it. Didn’t realize the lack of replication in that case.

Was intending to eventually move the animated images on meshes down to clients anyway. I guess this just means I need to do it earlier. Thanks.

Is there any solid timeline for it being enabled in published experiences? From what I can see it's been in the pipeline for a while, which makes me worried, but it would be perfect for a voxel-based game I'm working on. Even with greedy meshing, culling, object pooling, spatial hashing, multi-threaded Actors, computation caching, and a million other optimizations I've tried, I know it's not going to scale well with 20+ players in a server, even with StreamingEnabled on. I've already implemented EditableMeshes + EditableImages (for the texture atlasing), but now the whole project hinges on the API actually being enabled for live play.

We desperately need some EditableImage APIs for simple things like translating a seamless texture. I am animating the ocean on my little planet, but a repeating texture means using ReadPixelsBuffer/WritePixelsBuffer and manually translating all those pixels in Luau, which is just painful on the client. We have a DrawImageTranslated, but it doesn't handle the seamless-texture case, and even trying to do a four-way "bit blit" to translate with wrapping using this API is too slow in script.
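For concreteness, the manual translation I'm describing looks roughly like this (assuming an RGBA8 image and integer pixel offsets dx, dy); it's this per-pixel loop that's too slow in Luau:

local size = editableImage.Size
local w, h = size.X, size.Y
local src = editableImage:ReadPixelsBuffer(Vector2.zero, size)
local dst = buffer.create(buffer.len(src))
for y = 0, h - 1 do
	local sy = (y - dy) % h
	for x = 0, w - 1 do
		local sx = (x - dx) % w
		buffer.copy(dst, (y * w + x) * 4, src, (sy * w + sx) * 4, 4) -- one RGBA8 pixel
	end
end
editableImage:WritePixelsBuffer(Vector2.zero, size, dst)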

Next step: see if the right way is UV translation, but that'll require an actual EditableMesh instead of a statically converted one, and then we're back to harsh limits.

Another alternative would be adding UV offsets to SpecialMeshes, similar to what Textures already have. I can't simply apply a Texture instance here because Texture instances use faces.
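Something like the scroll Textures already support, e.g. (assuming `texture` is a Texture instance and `scrollSpeed` is in studs per second):

local RunService = game:GetService("RunService")
RunService.RenderStepped:Connect(function(dt)
	texture.OffsetStudsU = (texture.OffsetStudsU + scrollSpeed * dt) % texture.StudsPerTileU
end)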

On a side note: I can confirm that automatic normal computations (Vector3.zero) do seem to be incorrect sometimes. I was getting deeply weird normals on some of my bridges between hexes (usually narrow triangles), and converting to manual normal computation completely resolved it… and made my normals weirdly more precise, such that I could even see differences between triangles in quads (oh, floating point…). Which just tells me I need a drawQuad routine that reuses the computed normal.

If you have a repro where you’re seeing incorrect normals, that would be appreciated :slight_smile:

Information here on how to send a repro privately if that’d be better:

Some additional debugging, and now I understand the issue. Perhaps a misunderstanding on my part due to under-documentation. When you provide Vector3.zero to AddNormal(), it computes the normal at that time, probably for the last face it's seen, applicable or not. If you ALWAYS call AddNormal after creating your triangle, everything works out fine. If you try to just share the "zero" normalID between multiple faces (because it's Vector3.zero, right!!!)… things look approximately close… but not quite right.

So manual computation works, but auto computation also works… you just have to be sure to call it after each AddTriangle(), not specify it once and assume it’ll get autocalculated for each face.

It’s very likely that the documentation should be clearer. I’ll try clarifying a bit here.

There are normal ids and normal vectors. Both normal ids and normal vectors can be automatically created or manually controlled. There is one normal vector per normal id.

Normal ids control the topology of the normals - how they’re connected with the vertices and face corners. For example, you might want a single normal per vertex (e.g. in a smooth sphere), or you might want multiple normals per vertex (e.g. the sharp corner of a cube). You might also want a single normal to be shared among multiple vertices (for example, you might want all vertices in a plane to share the same normal).

One part that could probably use additional documentation is how AddTriangle creates normal ids. When you call AddTriangle it creates 3 new face corners on the new triangle. If a vertex is newly used, it will make a new normal id. If a vertex is already part of another face, it will reuse whatever normal id is in use on that face. That’s how we get the smooth normal ids by default. I wrote up a bit more about this with diagrams here.
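For example, a small sketch of that default sharing (the vertex layout here is arbitrary):

local AssetService = game:GetService("AssetService")
local em = AssetService:CreateEditableMesh()

local v0 = em:AddVertex(Vector3.new(0, 0, 0))
local v1 = em:AddVertex(Vector3.new(1, 0, 0))
local v2 = em:AddVertex(Vector3.new(0, 0, 1))
local v3 = em:AddVertex(Vector3.new(1, 0, 1))

local f0 = em:AddTriangle(v0, v1, v2) -- creates 3 new normal ids
local f1 = em:AddTriangle(v1, v3, v2) -- reuses the ids already on v1 and v2

-- The face corners at the shared vertices use the same normal id:
print(em:GetFaceNormals(f0)[2] == em:GetFaceNormals(f1)[1]) --> true (v1's normal id)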

Automatic normal vector computation is whether, for a particular normal id, the normal vector is recomputed after mesh changes, or manually set. The automatic normal computation is pretty straightforward - it does a weighted average of the face normals of every vertex that’s using that normal id. Manually set normals are even easier - no matter what else is changed about a mesh, we just use that normal vector for that normal id.

If you make a mesh just using AddVertex and AddTriangle, you’ll get automatically created normal ids. These will be a single normal id per vertex, so you’ll get smooth normals at every vertex.

You’ll also get automatically computed normal vectors, so the normal vectors rendered will be reasonable.

Automatically computed normal vectors are dirtied on every mesh change, so if you move vertices using SetPosition, then the automatic normal vectors will be updated next time the mesh is rendered or GetNormal is called.
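In other words, roughly (with vid and nid from a mesh like the one sketched above):

em:SetPosition(vid, newPosition) -- dirties the automatic normal vectors
local n = em:GetNormal(nid) -- recomputed lazily here (or at the next render)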

For many use cases, that’s fine. But you might want things to be manual in two ways - manually set normal ids or manually computed normal vectors.

Here’s a piece of example code to create a sharp quad:

local function addSharpQuad(editableMesh, vid0, vid1, vid2, vid3)
	local nid = editableMesh:AddNormal()  -- This creates a normal ID which is automatically computed

	local fid1 = editableMesh:AddTriangle(vid0, vid1, vid2)
	editableMesh:SetFaceNormals(fid1, {nid, nid, nid})

	local fid2 = editableMesh:AddTriangle(vid0, vid2, vid3)
	editableMesh:SetFaceNormals(fid2, {nid, nid, nid})
end

This is creating a single normal id and sharing it for all 6 face corners. Both triangles will have the same normal vector for all of their vertices, so you won’t see a crease between them. But, that normal vector is automatically computed. If you want to use your own normal vector for some reason (maybe the automatic normal vector computation isn’t giving you what you want), you could change the code to:

local nid = editableMesh:AddNormal(Vector3.new(0, 1, 0))  -- This creates a normal ID which is manually specified

You can also call SetNormal(nid, vec) to change a given normal id from automatic to manual computation, or ResetNormal(nid) to change it from manual to automatic computation.
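So the full lifecycle of a single normal id looks like:

local nid = editableMesh:AddNormal() -- automatic: recomputed from adjacent faces
editableMesh:SetNormal(nid, Vector3.new(0, 1, 0)) -- now manual: fixed vector
editableMesh:ResetNormal(nid) -- back to automatic recomputation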

If I just simply turn off my “SetFaceNormals()” calls, I get something like this:

Note those darker rectangles on the bridges between the hexes. That’s actually relatively flat terrain (it follows the curve of the sphere), and normally you’d barely be able to detect any difference there.

If, however, I call

local fid = editableMesh:AddTriangle(vv1, vv2, vv3)
local triangleNormalId = editableMesh:AddNormal()
editableMesh:SetFaceNormals(fid, {triangleNormalId, triangleNormalId, triangleNormalId})

Then the rectangles go away, and I get a nice flat surface.

Note that it doesn’t always happen and it almost seems like it might be based on the orientation of the normal somehow.

I see similar weird normal behavior when I call AddNormal() earlier and share that between various faces.

You can see this effect strongly in my first colored globe pic above as well.

Could we see the ability to pass our own convex collision hulls into the mesh geometry when converting it to a static mesh? Or somewhere during that process?

I know this behavior can be replicated by using parts to model the colliders and using the mesh as a visual, but parts are very memory-heavy, and welding lots of parts together can be slow and suboptimal. Something like this would be a game-changer for high-fidelity voxel games.

When can we expect this to release out of beta?


yeah i know and this is exactly my point, why haven’t they removed the cap if you can already bypass it LOL??? it’s so stupid

While this is great for replication across client/server… will there ever be a way for EditableMesh to be FULLY replicatable…

Like my use case requires:

  1. Player tells server to create a Mesh
  2. Server creates it (EditableMesh, CreateDataModelContent to make it replicate to all players)
  3. Then I want a client to be able to "preview" changes on that EditableMesh. Think like your donut example with the inflate brush, but instead of sending it to the server and having the server replicate it for everyone, I want players to edit it individually. Then they can submit, which would send the update to the server for the server to apply.

At the moment I'm having to do a hacky workaround where:
Server > creates EM > CreateDataModelContent > mesh in world using the content (replicates) > when a client wants to edit, it creates a new EM locally and hides the server/content version > does its edits > tells the server > deletes its local EM demo > brings back the original server version.
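Roughly, the client side of that workaround looks like this (names are mine; `serverMeshPart` is the replicated MeshPart built from the server's content):

-- LocalScript: build a locally editable preview from the replicated mesh
local AssetService = game:GetService("AssetService")

local previewEM = AssetService:CreateEditableMeshAsync(serverMeshPart.MeshContent)
-- ...apply local preview edits to previewEM (e.g. trial hole cuts)...

local previewPart = AssetService:CreateMeshPartAsync(Content.fromObject(previewEM))
previewPart.CFrame = serverMeshPart.CFrame
previewPart.Parent = workspace
serverMeshPart.LocalTransparencyModifier = 1 -- hide the server version locally

-- On confirm: send the edits to the server, destroy previewPart,
-- and restore serverMeshPart.LocalTransparencyModifier = 0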

Think live CSG or whatever: I want players to be able to preview cutting holes in an object without actually "cutting" the hole on the server yet, until they're happy (that way they're not sending a million updates to the server to update cut locations, etc.).

Hi, I would like some clarification on this matter.

Suppose the server, in a server Script, calls CreateDataModelContentAsync on some content and stores the results into an array. The server then tells the clients the content ids, and the clients use them in ImageLabels; thus they are replicated to the clients.

At some point, if the server script decides to clear the content array, will the content be destroyed and the memory freed up? What happens to the clients' ImageLabels?