I wrote an importer for Source Engine model files using this beta!
It’s a bit clunky since three files have to be selected to import properly, plus the format is very… archaic, to put it mildly. But it works for the most part.
one of these announcements is going to be shader programming, i know it
i agree with this! bulk vertex move operations would be greatly appreciated; things like ocean simulations that rely on updating every vertex per frame would get large optimisation boosts
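for context, this is roughly the per-vertex loop such simulations have to run today, which a bulk move API could collapse into one call. just a rough sketch: restPositions is assumed to be a table mapping vertex ids to rest positions (built once with GetVertices/GetPosition), and the wave math is arbitrary.

local RunService = game:GetService("RunService")

-- emesh: the EditableMesh being animated
-- restPositions: { [vertexId] = Vector3 }, captured once before any deformation
local function animateOcean(emesh, restPositions)
	RunService.Heartbeat:Connect(function()
		local t = os.clock()
		for vertexId, restPos in restPositions do
			-- Simple sine wave; every vertex needs its own SetPosition call per frame
			local height = math.sin(restPos.X * 0.5 + t * 2) * 0.5
			emesh:SetPosition(vertexId, restPos + Vector3.new(0, height, 0))
		end
	end)
end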
Is it too much to ask for the ability to edit the size of the EditableMesh in terms of vertices? Could we get EditableMesh::AddVertex and EditableMesh::RemoveVertex methods? This is a useful instance object which could end up being handy for many of us.
To clarify a bit on this, what would the ideal goal be for permissions long-term? Would we be able to load UGC avatar accessories (or toolbox assets) into an editable solely for local-editing purposes (IE: not for anything publishing related)?
If so, this would be incredibly helpful for color-saturation effects which, while currently possible, must either affect the entire Workspace at once (via ColorCorrection) or only assets we have editable access to. Opening up this limitation would also set the stage for some cool distortion / morphing effects for player characters, even while custom UGC is equipped!
Also, just in case a feature request would be the most ideal way to reply to this question: you can find a feature request I wrote about this unfortunate current limitation here. Can’t wait to hear more about how this limitation could change in the future!
This is great to hear!! Is there currently a way to get the position / general vertex data of a skinned mesh with this?
I haven’t had time to check whether it’s an issue in my own code, but whenever I get the data of a vertex that’s being animated, it seems to use the rest data instead of the skinned data.
It’s a T-posed model with an animation that has the arm downward, but when I get the position of the hand, the data I get back doesn’t reflect where the hand would be because of the animation.
This is normal; the API doesn’t give you the animation-relative position, only the rest-pose one.
There is definitely still some room for optimization in the transcoding that you’re seeing.
It seems like the main use case you have for shape keys is facial animation, is that correct?
One thing that we’ve been looking at is the possibility to expose a way of converting an EditableMesh with multiple sets of vertex positions into a skinned EditableMesh with bones and FACS poses (e.g. Dem Bones). That way you could produce mesh assets that would work with the existing FaceControls / camera facial control, and publish them as avatar heads that would work in any experience. Updating the engine to directly support shape keys would be a much bigger lift.
Would the ability to convert shape keys into FACS poses meet your use case?
Yes, FACS + Dem Bones is a great way to solve facial animation, especially for avatar systems and camera-based expression tracking. But my use case (and many others’) goes beyond faces. What if shape keys affect things like clothing deformation, creature morphs, or even damage variations on props and vehicles?
In those cases:
FACS is inherently a limited expression system (roughly 50 possible expressions), and while it covers a wide range of facial movements, morph targets are a foundational animation technique used for much more than facial expression. They’re standard in every major real-time engine for a reason.
Because of my above points, I believe native shapekey/morph target support is 100% worth the engineering effort, even if it’s complex to implement. It’s something that’s highly requested by artists across the platform and would open the door to far more expressive, dynamic, and creative content.
Even in the FACS context: using shape keys instead of bones to drive poses would be a huge improvement. FACS driven by morph targets allows for blended, nonlinear, and exaggerated expressions that are much more natural and performant than rigging 50 micro-bones whose values are clamped to a 0–1 strength range. (Also note that the engine has a limit of 4 bones per vertex!)
In fact, the best outcome would be supporting a hybrid system: FACS via bones and morph targets. That would give creators the flexibility to choose whichever method fits their asset and performance needs best.
And about the performance concerns:
If morph blending happened on the CPU in C++ internally (where GPU morphing isn’t available, e.g. OpenGL ES 2.0 falling back to CPU skinning), it would be fast enough for most real-time use cases, especially if it ran in the same engine code that handles the existing CPU skinning system. The key thing is letting us pass vertex deltas once, then just update blend weights per frame, which is trivial from a performance standpoint. But this should ideally also work for regular MeshPart instances; not everyone wants to build a complex system with EditableMesh, if this could instead be handled natively on mesh import (meaning the .mesh format would need to store some extra data when it’s serialized).
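To make the delta/weight split concrete, here’s a rough Luau sketch of the same idea done in user code with the current EditableMesh API (GetVertices / GetPosition / SetPosition); the shape-key deltas are assumed to come from your own exported data, since the engine doesn’t import them:

-- rest: { [vertexId] = Vector3 }, captured once before any deformation
local function buildRestPose(emesh)
	local rest = {}
	for _, vertexId in emesh:GetVertices() do
		rest[vertexId] = emesh:GetPosition(vertexId)
	end
	return rest
end

-- deltas: { [shapeKeyName] = { [vertexId] = Vector3 offset } }, passed once
-- weights: { [shapeKeyName] = number in 0..1 }, updated per frame
local function applyShapeKeys(emesh, rest, deltas, weights)
	for vertexId, restPos in rest do
		local pos = restPos
		for keyName, keyDeltas in deltas do
			local delta = keyDeltas[vertexId]
			local w = weights[keyName]
			if delta and w and w ~= 0 then
				pos += delta * w
			end
		end
		emesh:SetPosition(vertexId, pos)
	end
end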
Ah darn, I hope there’s a way to add this functionality in the future then
Thanks for the post, and making the request clear. Since it would require many changes and optimizations, it would be difficult to make any guesses about if or when this would be implemented, but it’s always very helpful to have these kinds of requests.
While FACS has 50 possible expressions, it should be possible to use them in a way that doesn’t match the semantics. For example, if you’re implementing car damage, raising the left eyebrow could map to damage on the back left bumper. So, if you can convert shape keys to FACS poses, you can combine up to 50 different poses per mesh, with additional control over combination poses (correctives) if needed.
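As a rough illustration of that remapping (hedged: it assumes the pose bound to this channel was authored to deform the back-left bumper, and the exact FaceControls property names may differ in your setup):

-- Drives a non-facial "pose" through a FACS channel, as described above
local function setRearLeftBumperDamage(carMeshPart, amount)
	local faceControls = carMeshPart:FindFirstChildOfClass("FaceControls")
	if faceControls then
		-- FACS channel values are clamped to the 0..1 range
		faceControls.LeftOuterBrowRaiser = math.clamp(amount, 0, 1)
	end
end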
Would it be possible to get more information about what you’re hoping to use EditableMesh for? There is an EditableMesh:AddVertex method, and vertices that aren’t included in any faces can be removed with EditableMesh:RemoveUnused. Is the hope to use this as a point cloud or something similar? Or is this more about being able to query the number of vertices in a mesh without having to deal with permissions?
No problem, I’m glad I got a response regarding this. I’m curious, however, how this would fully work on the C++ side of things!
Does Roblox currently use CPU-based skinning exclusively? As far as I know, the shaders still likely use CBuffer objects, meaning there are constraints on how much data can be bound at once (I believe Future lighting also has a limit on how many lights can be rendered at once because CBuffers are limited in size, which StructuredBuffers wouldn’t suffer from?). That’s just the nature of forward rendering, which to my knowledge is still what Roblox uses exclusively, since it works on all devices.
An immediate use case would be custom triangulated terrain. Currently it is possible to add vertices, but if you try simulating digging or terrain destruction, new vertices must be added and old ones must be removed.
Another use case is fragmentation mechanics. For example, shooting a projectile that explodes and opens a hole in an EditableMesh. Things like these could be done more efficiently if we had access to EditableMesh::RemoveVertex.
For both of these situations, what I would suggest is to add any vertices you want to add, update all of the faces, and then call EditableMesh:RemoveUnused. Any vertex that isn’t part of a face will be removed as a single bulk operation.
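As a rough sketch of that pattern (the face ids to remove and the replacement triangles are assumed to come from your own digging/fragmentation logic):

local function punchHole(emesh, oldFaceIds, newTriangles)
	-- Remove the faces covering the hole
	for _, faceId in oldFaceIds do
		emesh:RemoveFace(faceId)
	end
	-- Add replacement geometry; each entry is three Vector3 positions
	for _, tri in newTriangles do
		local a = emesh:AddVertex(tri[1])
		local b = emesh:AddVertex(tri[2])
		local c = emesh:AddVertex(tri[3])
		emesh:AddTriangle(a, b, c)
	end
	-- Any vertex no longer referenced by a face is dropped in one bulk operation
	emesh:RemoveUnused()
end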
How long until editing an EditableMesh updates its collision box? This would be super useful for raycasting.
It depends on how you’re planning on calling the raycast.
If you’re using EditableMesh:RaycastLocal, the results for that are immediately available.
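For example, something along these lines (a sketch only: it assumes RaycastLocal takes a local-space origin and direction and returns the hit point, face id, and barycentric coordinates, and that the render scale is Size / MeshSize):

local function raycastEditable(meshPart, emesh, worldOrigin, worldDirection)
	-- Convert the world-space ray into the mesh's local space
	local scale = meshPart.Size / meshPart.MeshSize
	local localOrigin = meshPart.CFrame:PointToObjectSpace(worldOrigin) / scale
	local localDirection = meshPart.CFrame:VectorToObjectSpace(worldDirection) / scale

	local hitPoint, faceId, barycentric = emesh:RaycastLocal(localOrigin, localDirection)
	if hitPoint then
		-- Convert the hit point back to world space
		return meshPart.CFrame:PointToWorldSpace(hitPoint * scale), faceId, barycentric
	end
	return nil
end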
If you want to use a workspace raycast, you’ll need to update the physics data / collision geometry of the MeshPart. This can take a few frames, depending on the complexity of the mesh. Here’s a utility function you can use to swap it out. Note that you can change the CollisionFidelity to meet your needs.
local AssetService = game:GetService("AssetService")

-- Rebuilds the MeshPart's render and collision geometry from the current EditableMesh state
function swapMeshPartEditableMesh(meshPart: MeshPart, emesh: EditableMesh)
	local previewPart = AssetService:CreateMeshPartAsync(Content.fromObject(emesh), {CollisionFidelity = Enum.CollisionFidelity.Default})
	previewPart.TextureContent = meshPart.TextureContent
	-- Preserve the current render scale so the part keeps its size after the swap
	local renderScale = meshPart.Size / meshPart.MeshSize
	meshPart:ApplyMesh(previewPart)
	meshPart.Size = renderScale * meshPart.MeshSize
end
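A hypothetical usage sketch (terrainPart, terrainMesh, rayOrigin, and rayDirection are placeholder names for your own instances and ray):

-- After editing vertices, refresh the part, then raycast through the workspace as usual
swapMeshPartEditableMesh(terrainPart, terrainMesh)
-- As noted above, the new collision geometry can take a few frames to become active
task.wait(0.1)
local result = workspace:Raycast(rayOrigin, rayDirection)
if result and result.Instance == terrainPart then
	print("Hit updated geometry at", result.Position)
end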
If all you need is a box, you can call EditableMesh:getCenter() and EditableMesh:getSize(), which always return up-to-date results.
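For instance, a quick box check could look something like this (a sketch: method names follow the casing above, and the box is assumed to be in the mesh’s local space, so the world point is converted through the MeshPart’s CFrame and render scale):

local function pointInEditableBounds(meshPart, emesh, worldPoint)
	local scale = meshPart.Size / meshPart.MeshSize
	local localPoint = meshPart.CFrame:PointToObjectSpace(worldPoint) / scale
	local offset = localPoint - emesh:getCenter()
	local halfSize = emesh:getSize() / 2
	return math.abs(offset.X) <= halfSize.X
		and math.abs(offset.Y) <= halfSize.Y
		and math.abs(offset.Z) <= halfSize.Z
end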
Is there any way currently to get the skinned / animated position of vertices on EditableMesh?
And if not, how difficult would it be to implement that?
I’d like to be able to do a jello / soft-body effect on a creature I have, and while it works when the creature physically moves around (the model’s CFrame being updated), if I give it an animation (like an idle or emote) the effect doesn’t trigger there, since nothing registers as a change in the vertex positions.
There’s not currently a way to get the vertex positions that takes into account the skinning and animation, unfortunately. Thanks for the request, and for the information about your use case.
Ahhh, I see, so I would need to wait a few more updates for the collision fidelity to update in real time for hit detection.
Although a different workaround, I suppose, could be to have multiple objects at once (3-5) to switch between, or to combine a normal raycast with RaycastLocal.