Hi! I am thebigreeman, and I have made the world’s first full body tracking in Roblox!
(NOTICE: If you can't access the videos, it's most likely Roblox preventing embedding.)
This project is one of the craziest and most difficult things I have ever done on the Python / Roblox platform. Full body tracking on Roblox has MANY possible applications, such as giving players who don't own a VR headset (which usually costs 500 - 1000 dollars) a way to play games that require VR, truly live Roblox concerts / events, new games with different mechanics built on top of this, and more. Now what are you waiting for? Let's begin creating your very own full body tracking system.
Before you begin, please note these things.
This project is EXTREMELY DIFFICULT and is NOT FOR PEOPLE WHO AREN'T FAMILIAR WITH HTTP AND PYTHON.
My estimate is that this project will take about 2 - 3 hours to complete, so make sure you have enough time on your hands.
This tutorial was made for Windows users, so you may have to go off track if you're following along on a Mac.
This tutorial gives you only the bare basics of body tracking to start with, and there are many things you can improve on, including the update rate of the HTTP requests.
Finally, please note that you WILL need a webcam (camera) for this to work.
STEP 1: PREPARATION.
Open up Roblox Studio and make a new “Classic Baseplate”, publish the game and enable HTTP requests so Python will be able to communicate with Roblox.
Open up Python (IDLE) and make a new file. Save the file somewhere in your directory where you can access it later.
Open up Command Prompt as administrator so we will be able to install the required modules.
STEP 2: INSTALLATION.
In your Python file you will need these modules.
The threading module is installed by default when you download Python, so we don't need to worry about that. Let's install Flask, Mediapipe and OpenCV!
In the Python terminal, first run import sys, then sys.executable. It should print the path of the main Python executable.
Open up File Explorer and paste the directory in, BUT DON'T PRESS ENTER YET!
Change all the double backslashes in the path into single backslashes, then remove "pythonw.exe" from the end of the directory. NOW you can press enter. Open the Scripts folder, then copy the path from the address bar above. Now let's go to Command Prompt! In the command prompt, first execute cd (directory you copied), which will change the command prompt's directory to the Python Scripts folder. Then execute pip install mediapipe opencv-python to install Mediapipe and OpenCV. Finally, execute pip install flask to install Flask.
Once it's all finally installed, it's time to get to work!
STEP 3: DEFINING INITIAL VARIABLES.
This part should be easy as there is only one line of code.
This will be the dictionary that will contain all the body key points in an x, y, z axis.
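Since the code itself isn't reproduced here, a sketch of that single line could look like this (the "POSE" key name matches what the later steps reference; the variable name is my own):

```python
# Dictionary holding the latest body key points, keyed by landmark name,
# with each point stored as normalized x/y/z values. "POSE" is a sub-table
# so hand key points could be added alongside it later.
body_keypoints = {"POSE": {}}
```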
STEP 4: CONVERTING OBJECT (Python) TO DICTIONARY.
Before we get to the big part, we need to convert the landmarks, which MediaPipe returns as Python objects, into dictionaries, since JSON won't accept plain Python objects. Think of it like CFrame values: you can't put them straight into a DataStore, so you have to convert them into a table first.
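A minimal sketch of such a converter, assuming each MediaPipe landmark exposes .x, .y and .z attributes (the function name is my own, not from the original script):

```python
def landmark_to_dict(landmark):
    # MediaPipe landmarks are plain Python objects, which json.dumps()
    # rejects; flattening them into a dict makes them serializable.
    return {"x": landmark.x, "y": landmark.y, "z": landmark.z}
```

Calling this on every landmark in a frame gives you dictionaries that can go straight into the body key points table from step 3.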
STEP 5: SETTING UP A HTTP CONNECTION.
The only way we are going to get these body key points to Roblox is by setting up an HTTP connection.
Threading (similar to coroutines in Roblox) lets us run an HTTP server in the background that serves the body key points on request over localhost. Since it's running on localhost, it will only work in Roblox Studio unless you use a server you rent or expose your computer to the entire internet. If you want, you can use a tunneling service such as Cloudflare Tunnel, or rent a web server from a provider such as Amazon Web Services, to do full body tracking IN GAME!
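A sketch of what that server could look like (the "/" route, port 5000 and function names are my assumptions, not necessarily what the original script uses):

```python
import json
import threading

from flask import Flask

app = Flask(__name__)
body_keypoints = {"POSE": {}}  # updated by the camera loop elsewhere

@app.route("/")
def get_keypoints():
    # Roblox's HttpService:GetAsync() will receive this JSON string.
    return json.dumps(body_keypoints)

def start_server():
    # Daemon thread: the server runs in the background and dies with
    # the main program instead of blocking the camera loop.
    threading.Thread(
        target=lambda: app.run(host="127.0.0.1", port=5000),
        daemon=True,
    ).start()
```

Call start_server() once before entering the camera loop; Flask then answers GET requests while the loop keeps updating the dictionary.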
STEP 6: OPENING THE CAMERA.
It's time for the camera to shine! We are going to define some module variables that we will need and start the camera!
mp_drawing will be used later to overlay the body key points on your body in the camera view, and mp_holistic is the main module that scans your body and produces the body key points. With cap = cv2.VideoCapture(0) we define the camera variable and open the camera!
STEP 7: MAIN SCANNING AND VIDEO.
Get ready, the next two steps are going to be tough ones.
This huge chunk of code starts reading from the camera; you will know it's on when the light indicator near your camera turns on. It then uses the Mediapipe Holistic module to process a mirrored frame of your body into body key points, flips the image back to its original state, and draws the body key points on your body. I have left FACEMESH_TESSELATION as a comment because it renders every single triangle on your face, and if your GPU isn't fast enough it will lag a lot. The HAND_CONNECTIONS are rendered but aren't being sent to Roblox; if you want, you can spend more time and send the hand key points to Roblox too, which is why I made "POSE" a key inside a dictionary rather than a variable on its own.
STEP 8: PACKING THE BODY KEY POINTS AND SENDING IT TO ROBLOX.
Take a close look at this before you begin, as it's a lot to think about.
What this entire chunk of code does is update the dictionary with new body key points, so Roblox can work at its own pace, keep sending HTTP GET requests, and always get the dictionary with the newest body key points. At the top there is an IF statement. It stops this chunk of code from failing by skipping the update entirely if the camera couldn't find a single place to put a body key point.
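As a pure-Python sketch of that update (in the real script the landmark names would come from mp_holistic.PoseLandmark; the helper name is mine):

```python
def pack_pose(pose_landmarks, landmark_names, keypoints):
    # If MediaPipe found no body this frame, leave the dictionary alone
    # so Roblox keeps receiving the last good set of key points.
    if pose_landmarks is None:
        return keypoints
    for name, lm in zip(landmark_names, pose_landmarks.landmark):
        keypoints["POSE"][name] = {"x": lm.x, "y": lm.y, "z": lm.z}
    return keypoints
```

Inside the camera loop you would call this with results.pose_landmarks each frame, so the Flask route always serves the latest values.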
STEP 9: CLOSING UP THE LOOP AND ADDING A TERMINATION KEY.
Wow! I didn’t expect you to make it this far! Give yourself a pat on the back as this is some very difficult stuff.
These few lines of code do one thing: when you press the key "Q", they close everything and practically shut down the entire program, like a "termination key".
STEP 10: THE MOMENT OF TRUTH…
It's time for the scariest part: has all this work paid off? Let's find out. Run the Python program, and a window with a live feed from the camera should appear, with body key points drawn on your body. If this didn't happen or you got an error, go back through all the steps and try to find what you did wrong. Otherwise, move on to step 11.
STEP 11: TESTING THE HTTP CONNECTION BETWEEN PYTHON AND ROBLOX.
We are almost done with the Python side of the code! Run the Python program and wait for the camera window to appear, then switch to Roblox Studio and run this code in the command bar.
Now open up the output and jump up and down in joy that it finally actually worked!
If you open one of the body key points, you may realise that they look like Color3 values, because each component has a maximum of 1 and a minimum of 0.
STEP 12: END OF PYTHON SIDE OF CODE.
Congratulations! For the rest of this tutorial you don't need to edit the Python side of the code anymore! You can take a break now if you want, or you can keep pushing forward to the Lua part. It should be easy from here, so when you're ready, let's get right to it!
(drawn by me in ms paint with tears, sweat and blood)
(keep going you got this)
STEP 13: MAKING THE BODY TRACKING RIG.
We need a rig so we can turn these values into, well, you! I have uploaded a model of my rig for you to use, so you don't have to suffer through naming each part a different name. BodyDotModel - Roblox
Simply drag it into workspace and you should be done.
STEP 14: MAKING THE RIG CONTROLLER SCRIPT.
Instance a new Script into ServerScriptService and name it "BodyTrackingMain". You will also need to parent this asset to the script; it will make sense later. ConnectionBeam - Roblox
STEP 15: DEFINING THE CONTROLLER SCRIPT’S VARIABLES.
Open the controller script and we are going to define these variables.
Nothing else to really say here.
STEP 16: RENDERING CONNECTIONS BETWEEN PARTS IN THE RIG.
Now you will see why we need that beam instance in the script.
This chunk of code will render a beam between each pair of parts listed in the dictionary.
Without this chunk of code, it will be hard to make out a person as the rig will just be a bunch of floating dots.
STEP 17: ANIMATING THE RIG WITH THE BODY KEY POINTS.
It's time for the rig to come to LIFE!
Every 0.1 seconds, we get the newest body key points from the Python program, then go over all the dots in the rig and match each dot's name with a body key point's name. We have to invert the values, or else the rig will turn up backwards and upside down. Then we spread the dots away from each other by multiplying the values by 10, giving us quite a normal-sized rig. You can increase the update rate by lowering the wait() delay time, to 0.05 for example. Also, move your baseplate down to 0, -42, 0 so the rig won't drown in the floor.
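The mapping described above, written out in Python for clarity (your Luau script applies the same arithmetic; which axes get negated is my reading of "backwards and upside down", and the scale factor 10 comes from the text):

```python
def keypoint_to_position(kp, scale=10):
    # MediaPipe values are normalized 0..1; negate x and y so the rig
    # isn't mirrored and upside-down, then scale up to rig-sized studs.
    return (-kp["x"] * scale, -kp["y"] * scale, kp["z"] * scale)
```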
STEP 18: TESTING THE ENTIRE CODE ALL TOGETHER.
Can you feel it? This is the last step, isn't it? Well, get ready, because we are gonna run it ALL TOGETHER! Run the Python program, and wait for the live camera window to open up and the HTTP message to appear in the Python terminal. Then stand up from your chair, run Studio, and fit your entire body into the camera.
CONGRATULATIONS! YOU HAVE FINALLY FINISHED THIS ENTIRE TUTORIAL AND NOW WHAT YOU HAVE IS A FULLY FUNCTIONAL BODY TRACKING SYSTEM!
I hope you enjoyed this showcase / tutorial! I put a lot of effort into making this topic and the script itself (2 days, to be precise!). Please comment on what you think of this system!
NOW, IT'S TIME TO GET ACTIVE, ROBLOX!
- Incredible!
- Amazing!
- Good!
- It's fine…
- Not good.