Right now it’s impossible to interface with a user’s Kinect. (Edit: I know some users may not have a Kinect; the wording here assumes everyone does. Just keep that in mind.) With some of the upcoming changes on Roblox, and especially with VR support already present for a small user base, being able to interface with a Kinect to complement a VR headset (hand movement, tracking fingers to pick up items, etc.) would be absolutely amazing.
I don’t have any immediate major use cases, which leaves this request more on the “quality of life” side of things, but I thought it’d be really fun to interface with a Kinect so that I can wave to my friends in-game or something, you know?
This would primarily be an alternative to VR gaming on Roblox, since it offers hand controls at the loss of head controls. The main feature needed would be access to the skeleton the Kinect generates for the user — exposing each tracked joint as a CFrame — and that’s really it.
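To give a rough idea of what I mean, an API like this could look something like the sketch below. To be clear, everything here is hypothetical — the service name, `GetJointCFrame`, the `SkeletonTracked` property, and the joint enum are all made up for illustration; nothing like this exists in the Roblox API today:

```lua
-- Hypothetical sketch of a Kinect skeleton API; none of these members exist today.
local KinectService = game:GetService("KinectService") -- hypothetical service
local RunService = game:GetService("RunService")

RunService.RenderStepped:Connect(function()
	-- Hypothetical property: whether the Kinect currently has a skeleton lock.
	if not KinectService.SkeletonTracked then
		return
	end
	-- Each tracked joint would be exposed as a CFrame, as suggested above.
	local rightHand = KinectService:GetJointCFrame(Enum.KinectJoint.RightHand) -- hypothetical
	-- Drive an in-game part from the tracked joint, e.g. for waving at friends:
	workspace.HandPart.CFrame = rightHand
end)
```

Polling per frame like this would match how VRService’s `GetUserCFrame` is typically used for headset/controller tracking, so the same pattern would feel familiar.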
Also, adding support for Kinects connected to PCs would be nice. I have the adapter for both the controller and the Kinect.
In my experience the Kinect’s skeleton output isn’t very accurate unless you have good spacing and lighting and aren’t wearing any fancy clothing that messes up recognition — it can get a little finicky. Some versions of the Kinect software also have trouble detecting the skeleton at all if part of your body is blocked (i.e. there’s a coffee table in the way, or you’re sitting at your desk). It would be great as a gimmick, but I don’t know if that warrants adding an API for it. I think I only used my Kinect a handful of times for gaming myself.
That’s why we wouldn’t go as far as allowing finger tracking like OP suggested. Other poses that could be seen as suggestive shouldn’t be that big of an issue, though — by that logic, crawling and crouching wouldn’t be allowed either, since they could technically be read as explicit if you think about it that way. Also, what paul said about the bullet holes.