Full body tracking in Roblox (And how you can make your own!)

Hi! I am thebigreeman, and I have made the world’s first full body tracking in Roblox!

(NOTICE: If you can't access the videos, it's most likely Roblox preventing embedding.)

(Two demo videos; if they appear as "External Media", click to download them.)

This project is probably one of the craziest and most difficult things I have ever done across the Python / Roblox platforms. Full body tracking on Roblox has MANY possible applications, such as giving players who don't own a VR headset (which usually costs 500 - 1000 dollars) a chance at games that require VR, truly live Roblox concerts / events, new games with different mechanics built on top of it, and more. Now, what are you waiting for? Let's begin creating your very own full body tracking system.

Before you begin, please note these things.
  • This project is EXTREMELY DIFFICULT and is completely NOT FOR PEOPLE WHO AREN'T FAMILIAR WITH HTTP AND PYTHON.
  • My estimate is that this project will take you about 2 - 3 hours to complete, so make sure you have enough time on your hands.
  • This tutorial was made for Windows users, so you may have to go off track if you're doing this on a Mac.
  • This tutorial gives you only the bare basics of body tracking to start with; there are many things you can do from here, including improving the update rate of the HTTP requests.
  • Finally, please note that you WILL need a webcam (camera) for this to work.

STEP 1: PREPARATION.

Open up Roblox Studio and make a new "Classic Baseplate", publish the game, and enable HTTP requests (Game Settings → Security → Allow HTTP Requests) so Python will be able to communicate with Roblox.

Open up Python (IDLE) and make a new file. Save the file somewhere it can easily be accessed later.

Open up Command Prompt as administrator so we will be able to install the required modules.

STEP 2: INSTALLATION.

In your Python file you will need these modules.
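The original screenshot isn't visible here, but based on the modules this tutorial names, the import block likely looks something like this:

```python
# Imports for the whole program (a sketch - the original screenshot is
# unavailable, but these are the modules the tutorial names).
import threading

import cv2                        # OpenCV, for the webcam and display window
import mediapipe as mp            # Mediapipe, for the body key points
from flask import Flask, jsonify  # Flask, for the HTTP server
```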
The threading module is installed by default when you download Python, so we don't need to worry about that. Let's install Flask, Mediapipe and OpenCV!

In the Python terminal, first run import sys then sys.executable. It should give you the directory where the main Python executable is.
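It should look something like this (the path below is only an example; yours will differ):

```python
>>> import sys
>>> sys.executable
'C:\\Users\\you\\AppData\\Local\\Programs\\Python\\Python310\\pythonw.exe'
```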
Open up File Explorer and paste the directory in, BUT DON'T PRESS ENTER YET!


Change all the double backslashes into single ones, then remove "pythonw.exe" from the end of the path. NOW you can press enter. Open the Scripts folder, then copy the directory from the address bar. Now let's go to Command Prompt! First execute cd (directory you copied), which will change the Command Prompt's working directory to the Python Scripts folder. Then execute pip install mediapipe opencv-python to install Mediapipe and OpenCV. Finally, execute pip install flask to install Flask.
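Put together, the Command Prompt session looks something like this (the path is just an example; use the one you copied):

```
cd C:\Users\you\AppData\Local\Programs\Python\Python310\Scripts
pip install mediapipe opencv-python
pip install flask
```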

Once it's finally all installed, it's time to get to work!

STEP 3: DEFINING INITIAL VARIABLES.

This part should be easy as there is only one line of code.
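A sketch of that one line, assuming a dictionary keyed by "POSE" as described in steps 7 and 8:

```python
# The dictionary that will hold every body key point as x, y, z values.
# "POSE" is a key (not a variable) so hand points could be added later.
body_key_points = {"POSE": {}}
```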
This will be the dictionary that contains all the body key points as x, y, z values.

STEP 4: CONVERTING OBJECT (Python) TO DICTIONARY.

Before we begin on the big part, we need to convert something called objects, which are a Python-only thing, into a dictionary; since they are Python-only, JSON won't accept them. Imagine it like being unable to put CFrame values in datastores, so they have to be converted into a table first.
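A minimal sketch of such a converter (the function name is my own; the original code isn't visible):

```python
# Copy the fields we need out of a Mediapipe landmark object into a
# plain dictionary, since Python objects can't be serialized to JSON.
def landmark_to_dict(landmark):
    return {"x": landmark.x, "y": landmark.y, "z": landmark.z}
```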

STEP 5: SETTING UP AN HTTP CONNECTION.

The only way we are going to get these body key points sent to Roblox is to set up an HTTP connection.
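A sketch of the server setup, continuing the same file (the route and port are assumptions; they just have to match what Roblox requests later):

```python
app = Flask(__name__)

@app.route("/")
def get_body_key_points():
    # Serve the newest body key points as JSON on every GET request.
    return jsonify(body_key_points)

# Run the Flask server on a background thread so the camera loop in the
# next steps can keep running at the same time.
threading.Thread(
    target=lambda: app.run(host="127.0.0.1", port=5000),
    daemon=True,
).start()
```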
With threading (think of coroutines), we can run an HTTP server in the background that sends the body key points on request over localhost. Since it's running on localhost, unless you use a server you bought or expose your computer to the entire internet, it will only work in Roblox Studio. If you want, you can use a tunneling service such as Cloudflare Tunnels, or rent your own webserver such as Amazon Web Services, to be able to do full body tracking IN GAME!

STEP 6: OPENING THE CAMERA.

It's time for the camera to shine! We are going to define some module variables that we will need and start the camera!
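A sketch of those variables:

```python
mp_drawing = mp.solutions.drawing_utils  # draws key points onto the frame
mp_holistic = mp.solutions.holistic      # the model that finds the key points
cap = cv2.VideoCapture(0)                # open the default webcam
```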
mp_drawing will be used later to overlay the body key points on your body in the camera view, and mp_holistic is the main module for scanning your body and producing the body key points! With cap = cv2.VideoCapture(0) we define the camera variable and open the camera!

STEP 7: MAIN SCANNING AND VIDEO.

Get ready, the next two steps are going to be tough ones.

This huge chunk of code starts reading the camera; you will know it's on if the light indicator near your camera turns on. It then uses the Mediapipe Holistic module to process a mirrored frame of your body into body key points, reverts the image back to its original state, and begins drawing the body key points onto your body. I have left FACEMESH_TESSELATION as a comment, as it renders every single triangle on your face, and if you don't have a GPU fast enough to render it, it will lag a lot. The HAND_CONNECTIONS are rendered but aren't being sent to Roblox; if you want, you can spend more time and send the HAND_CONNECTIONS to Roblox too, which is why "POSE" is a key in a dictionary rather than a variable by itself.
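A sketch of that chunk, assuming the variables from step 6 (the original screenshot isn't visible, so details such as the confidence values are guesses):

```python
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break

        # Mirror the frame and convert BGR -> RGB for Mediapipe.
        image = cv2.cvtColor(cv2.flip(frame, 1), cv2.COLOR_BGR2RGB)
        results = holistic.process(image)

        # Revert to BGR so OpenCV can display the image again.
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        # Draw the body key points on your body.
        # mp_drawing.draw_landmarks(image, results.face_landmarks,
        #                           mp_holistic.FACEMESH_TESSELATION)  # heavy!
        mp_drawing.draw_landmarks(image, results.pose_landmarks,
                                  mp_holistic.POSE_CONNECTIONS)
        mp_drawing.draw_landmarks(image, results.left_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)
        mp_drawing.draw_landmarks(image, results.right_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)

        cv2.imshow("Body Tracking", image)
        # Steps 8 and 9 add more code here, still inside this loop.
```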

STEP 8: PACKING THE BODY KEY POINTS AND SENDING IT TO ROBLOX.

Take a close look at this before you begin, as it's a lot to think about.

What this entire chunk of code does is update the dictionary with the newest body key points, so Roblox can work at its own pace, keep sending HTTP GET requests, and always get the dictionary with the newest body key points. At the top there is an IF statement. It makes sure this chunk of code doesn't fail, by not updating the dictionary at all if the camera couldn't find a single place to put a body key point.
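A sketch of that chunk, which goes inside the loop from step 7 (the key names are assumptions; I use Mediapipe's pose landmark names so the Roblox rig can later match dots by name):

```python
        # Only update the dictionary if pose landmarks were actually found;
        # otherwise this whole chunk is skipped and the old points remain.
        if results.pose_landmarks:
            for point, landmark in zip(mp_holistic.PoseLandmark,
                                       results.pose_landmarks.landmark):
                body_key_points["POSE"][point.name] = landmark_to_dict(landmark)
```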

STEP 9: CLOSING UP THE LOOP AND ADDING A TERMINATION KEY.

Wow! I didn’t expect you to make it this far! Give yourself a pat on the back as this is some very difficult stuff.
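A sketch of those lines (still inside the loop, after the step 8 code):

```python
        # Press "Q" to break out of the loop and shut everything down.
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```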
These few lines of code do one thing: when you press the key "Q", they close everything and practically shut down the entire program, like a "termination key".

STEP 10: THE MOMENT OF TRUTH…

It's time for the scariest part: has all this work paid off? Let's find out. Run the Python program, and what should appear is a window with a live feed from the camera, with body key points drawn on your body. If this didn't happen or you got an error, go back through all the steps and try to find what you did wrong. Otherwise, move on to step 11.

STEP 11: TESTING THE HTTP CONNECTION BETWEEN PYTHON AND ROBLOX.

We are almost done with the Python side of the code! Run the Python program and wait for the camera window to appear, then switch to Roblox Studio and run this code in the command bar:
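A sketch of that command (the port must match the one your Flask server prints when it starts):

```lua
print(game:GetService("HttpService"):GetAsync("http://localhost:5000/"))
```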
Now open up the output and jump up and down in joy that it finally, actually worked!


If you open one of the body key points, you may realise they are like Color3 values, because they have a maximum of 1 and a minimum of 0.

STEP 12: END OF PYTHON SIDE OF CODE.

Congratulations! For the rest of this tutorial you don't need to edit the Python side of the code anymore! You can take a break for now if you want, or you can keep pushing forward to the LUA part. It should be easy from here, so when you're ready, let's get right to it!

(drawn by me in ms paint with tears, sweat and blood)
ohhhh why
(keep going you got this)

STEP 13: MAKING THE BODY TRACKING RIG.

We need a rig so we can turn these values into, well, you! I have uploaded a model of my rig for you to use, so you don't have to suffer through naming each part a different name: BodyDotModel - Roblox
Simply drag it into Workspace and you should be done.

STEP 14: MAKING THE RIG CONTROLLER SCRIPT.

Insert a new Script into ServerScriptService and name it "BodyTrackingMain". You will also need to parent this asset to the script; it will make sense later: ConnectionBeam - Roblox

STEP 15: DEFINING THE CONTROLLER SCRIPT’S VARIABLES.

Open the controller script and we are going to define these variables.
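A sketch of those variables (the original screenshot isn't visible, so these names are assumptions that the sketches in the next steps reuse):

```lua
local HttpService = game:GetService("HttpService")

local rig = workspace:WaitForChild("BodyDotModel")          -- the rig from step 13
local beamTemplate = script:WaitForChild("ConnectionBeam")  -- the asset from step 14
local URL = "http://localhost:5000/"                        -- must match the Flask server
```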
Nothing else to really say here.

STEP 16: RENDERING CONNECTIONS BETWEEN PARTS IN THE RIG.

Now you will see why we need that beam instance in the script.


This chunk of code renders a beam between each pair of connected parts from a dictionary.
Without it, it would be hard to make out a person, as the rig would just be a bunch of floating dots.
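A sketch of that chunk, assuming the rig's dots are named after Mediapipe's pose landmarks and a table of connected pairs (the pairs below are illustrative, not the full set):

```lua
local connections = {
    {"LEFT_SHOULDER", "RIGHT_SHOULDER"},
    {"LEFT_SHOULDER", "LEFT_ELBOW"},
    {"LEFT_ELBOW", "LEFT_WRIST"},
    -- ...and so on for the rest of the body.
}

for _, pair in ipairs(connections) do
    local dot0 = rig:WaitForChild(pair[1])
    local dot1 = rig:WaitForChild(pair[2])

    -- Beams draw between two Attachments, so give each dot one.
    local attachment0 = Instance.new("Attachment")
    attachment0.Parent = dot0
    local attachment1 = Instance.new("Attachment")
    attachment1.Parent = dot1

    local beam = beamTemplate:Clone()
    beam.Attachment0 = attachment0
    beam.Attachment1 = attachment1
    beam.Parent = dot0
end
```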

STEP 17: ANIMATING THE RIG WITH THE BODY KEY POINTS.

It's time for the rig to come to LIFE!

Every 0.1 seconds, we get the newest body key points from the Python program, then go over all the dots in the rig and match each dot's name with a body key point's name. We have to invert the values, or else the rig will turn up backwards and upside down. Then we spread the dots away from each other by multiplying the values by 10, giving us quite a normally sized rig. You can increase the update rate by lowering the wait() delay time, for example to 0.05. Also, move your baseplate down to 0, -42, 0 so the rig won't sink into the floor.
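A sketch of that update loop, based on the description above (the exact axis inversions may differ for your setup):

```lua
while true do
    local ok, response = pcall(function()
        return HttpService:GetAsync(URL)
    end)

    if ok then
        local points = HttpService:JSONDecode(response)

        for name, point in pairs(points.POSE) do
            local dot = rig:FindFirstChild(name)
            if dot then
                -- Invert the values so the rig isn't backwards and upside
                -- down, then multiply by 10 to spread the dots apart.
                dot.Position = Vector3.new(-point.x, -point.y, -point.z) * 10
            end
        end
    end

    wait(0.1) -- lower this (e.g. 0.05) for a faster update rate
end
```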

STEP 18: TESTING THE ENTIRE CODE ALL TOGETHER.

Can you feel it? This is the last step, isn't it? Well, get ready, because we are gonna run it ALL TOGETHER! Run the Python program, wait for the live camera window to open up and for the HTTP message in the Python terminal. Then stand up from your chair, run Studio, and fit your entire body in the camera.

(Demo video; if it appears as "External Media", click to download it.)

CONGRATULATIONS! YOU HAVE FINALLY FINISHED THIS ENTIRE TUTORIAL AND NOW WHAT YOU HAVE IS A FULLY FUNCTIONAL BODY TRACKING SYSTEM!

I hope you enjoyed this showcase / tutorial! I put a lot of effort into making this topic and the script itself. (2 days to be precise!) Please comment on what you think of this system!

NOW, IT'S TIME TO GET ACTIVE, ROBLOX!

Rate this topic: what do you think of it?
  • Incredible!
  • Amazing!
  • Good!
  • It's fine…
  • Not good.


18 Likes

For some reason, it appears Roblox has broken the topic; the videos are appearing for me as "External Media", so you will need to click on them to download the mp4. I hope this gets fixed!

6 Likes

Simply amazing, I’d love to see you as a backrooms entity now!

4 Likes

Could you send the streamable links inline with text so it doesn’t embed like this?
I don’t want to download videos.

2 Likes

It would be funny chasing people as a backrooms entity… And it is POSSIBLE!

2 Likes

I am thinking about connecting an Xbox / PS4 controller (via DS4) to Roblox to be able to move around and interact with objects. In fact, you can fully replicate a full VR setup with just a camera and a controller, which has many benefits!

1: The 1000$ price tag (full VR setup) becomes 20$ (a camera and a controller)
2: It tracks more parts of your body than a headset can capture (legs, torso, elbows, feet)
3: A more realistic experience (make hand gestures in game with your real hands, grab things in game with your real hands, jump IRL to jump in game)
4: It allows people with poor access to VR to try it
5: It allows people without VR to play with people who have VR (no more being unable to play a game with your friends just because you don't want to pay that 1000$ price tag)

If I ever do, I'll make an optional tutorial on how to make one (or just give you the model itself if you're THAT lazy).

2 Likes

You’re almost there. Mediapipe only gives you positioning of the body but not rotation. You would still need a kinematics solver for that part which requires both world and normalized landmarks. Something like KalidoKit (JS) or Moetion (C#) does this and you could probably adapt the solver code so you can send position and rotation.

There are a few other projects out there that might work better. ToucanTrack is another camera-based full body VR solution. The difference is that you use 2 or more PS Eye cameras (they're about $10/pc), which are depth cameras. This allows you to calibrate the multiple cameras together and get more coverage of different angles of your playspace.

If you are a DIY nerd, you could probably build a set of OK SlimeVR trackers and still keep the cost fairly affordable, if you can grab the data from the server.

2 Likes

There is actually an easier way of obtaining rotation.

CFrame.lookAt()
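For example, a minimal sketch (the part names are assumptions): derive a rotation by aiming one tracked dot at its neighbour.

```lua
-- Point the elbow dot's CFrame at the wrist dot to get a rotation
-- from positions alone (hypothetical rig part names).
local elbow, wrist = rig.LEFT_ELBOW, rig.LEFT_WRIST
elbow.CFrame = CFrame.lookAt(elbow.Position, wrist.Position)
```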

Fun fact: when I rotated the camera in Studio to the side, the body actually wasn't that distorted, aka leaning (compared to older Mediapipe versions), despite me rotating my body a full 360. So it is entirely possible to do everything with one single camera. Plus, most of the time we are only going to be tracking the arms and head, which can be connected on their own via a BasePart. But hey, great suggestion! I will think about it.

1 Like

I might make an optional Lua module for those people who just want to squeeze a bit more… possibilities out of their body tracking system.

2 Likes

Well, it appears the videos are now just giving an "Access Denied" popup, and this happens for EVERY video from Streamable ON the DevForum. Why won't it just embed!??!?!

1 Like

Are you referring to the looking point being a connecting joint? That would probably be less than ideal with a more complicated character, when you can do the math outside of Roblox without a ton of API calls slowing down processing time. (For true 10+ point full body tracking, I mean.)

Oh yeah, Mediapipe doesn't require you to face it or anything, but things will get weird if you start blocking body parts that another camera could see. Being limited to a single camera's FOV and the standard 30 or 60 Hz refresh rate is less than ideal, when PS Eyes can do 187 Hz and ToucanTrack can push all the heavy processing onto the GPU with their custom implementation of BlazePose.

1 Like

I might make an option to use dual cameras or PS Eyes. But the goal here was to achieve body tracking with the minimal steps and equipment required.

1 Like

Anyway, I'm planning to release a full module called "BodyTrackingService". It will be so easy to configure that you won't even need a Python program of your own; the Python program will be hosted on a website you just need to visit to get the code.

4 Likes