When importing a UGC avatar bundle or dynamic head into Roblox Studio using ‘Import 3D’, some face bones end up displaced from their intended positions. It is always the same bones on every import, and the displacement of the face bones also affects the rest of the poses in the facial animation.
For example, this is a dynamic head in its intended position in Blender.
This is what it looks like when imported into Roblox Studio using ‘Import 3D’. Three bones are not in their usual positions, so three vertices show unintended displacement in the facial animations. I’ve tried to fix it with methods such as freezing/applying transformations and completely re-rigging the face, but the issue persists.
This only happens when the dynamic head is rigged/skinned and has a FaceControls instance in it. Deleting the FaceControls object returns the vertices to their intended positions, so something might be wrong with the FaceControls that the importer generates.
Bump, I’m having this bug right now as well. Everything checks out: the weights, the bones, the naming conventions, the custom properties, and so on. When I remove FaceControls from the Head the issue is gone; however, I can’t animate the face without FaceControls inside the Head. Looking for a workaround if anyone has one.
Quick update on this one:
Unfortunately it’s not just an issue with dynamic heads; importing the same models as plain skinned meshes causes the same displacement issues.
The reason the issue is resolved when the FaceControls instance is removed is that the face is then treated as a static mesh rather than a skinned one.
If my diagnosis is correct, I would wager this displacement issue only shows up on models with a close-to-1:1 bone-to-vertex mapping (but please let me know if you’re having the same issue with models that don’t match this description!).
From digging through our systems, unfortunately this might just be a limitation of how we handle bones. As far as I can tell there’s some small precision loss that occurs when we calculate a bone’s transform (think around 0.01 - 0.001 studs).
For most skinned meshes this is totally acceptable. Each vertex’s position is a weighted combination of its bones’ transforms, and weights often taper smoothly between bones, so even if the vertex closest to a bone is fully reliant on that bone, its neighboring vertices, which are only 50% reliant on it, help mask the offset. Every skinned vertex on the platform suffers a similar fate, but it’s imperceptible… unless…
You have a small (< 1.5 stud), high-poly (like you’d need for a face) skinned mesh where each vertex is mapped 1:1 with a bone. In that case the error starts to be noticeable, and it’s exacerbated by the fact that there are no in-between vertices, so the error you could see would be 2X the max offset of an individual transform.
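To put rough numbers on that 2X claim, here’s a quick 1-D toy model (just an illustration of the reasoning, not our actual skinning code): treat each bone’s precision loss as independent random noise of up to 0.001 studs, then compare the worst-case gap between neighboring vertices when weights taper smoothly versus when each vertex is welded 1:1 to its own bone.

```python
import random

MAX_ERR = 0.001  # assumed per-bone precision loss, in studs

def skinned_offset(weights, errors):
    # Linear blend skinning: a vertex's displacement is the
    # weight-blended sum of its bones' transform errors.
    return sum(w * e for w, e in zip(weights, errors))

random.seed(0)
worst_tapered = worst_one_to_one = 0.0
for _ in range(100_000):
    e1 = random.uniform(-MAX_ERR, MAX_ERR)  # error on bone 1's transform
    e2 = random.uniform(-MAX_ERR, MAX_ERR)  # error on bone 2's transform
    # Tapered weights: vertex A rides bone 1 fully, its neighbor B is
    # 50/50 between bones 1 and 2, so the shared bone drags them together.
    gap_tapered = (skinned_offset([1.0, 0.0], [e1, e2])
                   - skinned_offset([0.5, 0.5], [e1, e2]))
    # 1:1 mapping: each neighbor is welded to its own bone, so their
    # errors are independent and can point in opposite directions.
    gap_one_to_one = (skinned_offset([1.0, 0.0], [e1, e2])
                      - skinned_offset([0.0, 1.0], [e1, e2]))
    worst_tapered = max(worst_tapered, abs(gap_tapered))
    worst_one_to_one = max(worst_one_to_one, abs(gap_one_to_one))

print(f"worst neighbor gap, tapered weights: {worst_tapered:.6f} studs")
print(f"worst neighbor gap, 1:1 mapping:     {worst_one_to_one:.6f} studs")
```

With tapered weights the worst neighbor gap tops out around MAX_ERR; with 1:1 mapping it approaches 2 * MAX_ERR, which matches the tearing you’re seeing on tiny high-poly heads.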
I’m going to chat with some other folks and see if there’s any way we can reduce this effect, but I wouldn’t hold my breath. Skinning is quite an expensive operation, and I suspect we’re trying to save cycles wherever we can, potentially even at the cost of some accuracy.
Potential workarounds:
Use fewer bones that drive more vertices
It might be harder to achieve squash-and-stretch effects on high-poly models, but it would remove the tearing/displacement
Use larger or lower-poly models if you really need the 1:1 bone-to-vertex mapping
Increasing the space between polygons makes this effect much less noticeable, and on large-scale meshes (>5 studs) it was invisible even with high poly counts (a rough Blender sketch of this is below)
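For that second workaround, something like this Blender sketch would upscale the rig and mesh directly in the data (the object names “Head” and “Armature” and the 4x factor are placeholders; assumes unit object scales and Object Mode):

```python
import bpy

SCALE = 4.0  # hypothetical upscale factor; tune to taste

# Hypothetical object names: adjust to your scene.
mesh_obj = bpy.data.objects["Head"]
arm_obj = bpy.data.objects["Armature"]

# Scale the mesh data directly, leaving object transforms untouched.
for v in mesh_obj.data.vertices:
    v.co *= SCALE

# Bone rest transforms live in edit bones, so enter armature Edit Mode.
bpy.context.view_layer.objects.active = arm_obj
bpy.ops.object.mode_set(mode='EDIT')
for eb in arm_obj.data.edit_bones:
    eb.head *= SCALE
    eb.tail *= SCALE
bpy.ops.object.mode_set(mode='OBJECT')
```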
Sorry I don’t have better news for y’all, but I’ll keep you posted if there’s a breakthrough.
Best,
OriginalSleepyhead
Thank you so much for your response @OriginalSleepyhead! I appreciate the detailed explanation.
Yeah, unfortunately my workflow is quite dependent on 1:1 vertex mapping, since dynamic heads use bone weights instead of mesh blendshapes. I’d like as much control over my vertices as possible, so I’m hoping the effect can be reduced; the workarounds leave me with less control and resolution/fidelity to work with.
Though, it has me thinking: is there an exact precision-loss value? Assuming it’s ~0.001, would the errors shrink if I snapped all vertices to 0.001-stud increments, or would there still be offsets when transforming?
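In case anyone wants to experiment, here’s a minimal Blender sketch of that snapping idea (the 0.001 step is just the assumed error size from above, and I have no idea yet whether snapping actually helps, since the loss could happen after the transform is applied):

```python
import bpy

STEP = 0.001  # assumed quantization step, matching the suspected error size

# Run from the Scripting tab in Object Mode with the head mesh selected.
obj = bpy.context.active_object
assert obj is not None and obj.type == 'MESH', "select the head mesh first"

for v in obj.data.vertices:
    # Snap each local coordinate to the nearest multiple of STEP.
    v.co.x = round(v.co.x / STEP) * STEP
    v.co.y = round(v.co.y / STEP) * STEP
    v.co.z = round(v.co.z / STEP) * STEP
```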