Machine Learning in Roblox with TensorFlow and Web Servers

Introduction

Machine learning can be an overwhelming topic, especially on a platform such as Roblox.
This tutorial will guide you through how to set up a web server and use it to predict the mood of player messages. It's similar to a web app I made a while ago here.

A little about me: I've been a Roblox developer for around 6-7 years now and have multiple years of experience in machine learning and web development.

Examples
Shown here, the player sends the message “This is so very sad”, which the machine learning model predicts to be a sad message.

Shown here, the player sends the message “Today is a great day.”, which the machine learning model predicts to be a happy message.

Shown here, the player sends the message “This makes me frustrated.”, which the machine learning model predicts to be an angry message.

This video shows the model working in real time. If for any reason the video is not loading, it is also hosted on YouTube.

Prerequisites

This tutorial may appear overwhelming at first and is targeted at more advanced scripters, but I've tried my best to write it in a way that a beginner can understand.

This tutorial assumes that you have either found a preexisting model, available via an API or as a file, or made a TensorFlow machine learning model yourself and want to add it to Roblox. This tutorial will not teach you how to make a machine learning model.

Several APIs can be found simply by searching a phrase like 'AI API' on a search engine. However, some APIs might require extra data or an API key.
If you want to use the same model I'm using in this tutorial, feel free to download the necessary files from GitHub; you'll need the v1-js folder and the v1_tokens.json file. If you plan on using this in a game, make sure you credit me for the model.

This tutorial will be using Lua for Roblox and JavaScript (Node.js) for the web server. A good understanding of both languages is recommended, but fully understanding JavaScript is not strictly necessary.

If you're making your own model, I highly recommend using the Python version of TensorFlow and converting the model to a JavaScript-compatible format.

Preparation

Since we'll be using HTTP requests, make sure to enable the Allow HTTP Requests option in the game settings.

If you're using a preexisting API, great, you already have most of the preparation completed.
If you're using your own TensorFlow model, you'll need to convert it to a TensorFlow.js-compatible model. This guide on the official TensorFlow page should help you out in that regard.

Step 1 : Setting up the webserver

If you’re using a preexisting api, you can skip this step.

Since we're using JavaScript, we're going to use Express to create a RESTful API. If you have another preferred way to communicate over the web, whether it's GraphQL or another HTTP framework, feel free to use it. As long as it can transfer JSON data, it should work.

Extra Credit : JSON Serialization

JSON is a way to store data, much like storing data in Lua tables. What's different from Lua tables is that JSON is supported by pretty much every programming language. Since we're communicating between Lua and JavaScript, we can turn the JavaScript data into a JSON string, then convert that into Lua data (and vice versa). This process is called serialization and is commonly used to store data or, as in this case, to communicate between two separate environments.
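
For example, in JavaScript the built-in JSON object handles both directions (a standalone sketch; on the Lua side, HttpService:JSONEncode and HttpService:JSONDecode do the same job):

// js (example)
const message = { input: "Today is a great day." };

// Serialize: JavaScript object -> JSON string.
const encoded = JSON.stringify(message);

// Deserialize: JSON string -> JavaScript object.
const decoded = JSON.parse(encoded);
console.log(decoded.input); // "Today is a great day."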

Create an npm project normally.

# command line
npm init

After that, let's install and import the Express framework.

# command line
npm install express
// index.js
const express = require("express");

Creating an Express app is easy: simply call the module as if it were a function, and it will return an Express app object which can be used to host a server. To host the server, simply listen on a port.

// index.js
const express = require("express");

const app = express();

app.listen(3000);
console.log("Listening to port 3000");

Running the code and going to localhost:3000 (if you're hosting on your own machine) or your environment's address on port 3000 (if you're using a third-party service) will return Cannot GET '/' or a similar message. This is because we have not set up any routes yet.
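
As a quick illustration, here is what a minimal route looks like (a sketch only; the tutorial itself uses a POST route, set up below):

// js (example)
app.get('/', (request, response) => {
	// Respond to GET requests on the root path so the browser no longer shows "Cannot GET '/'".
	response.send("Server is running");
});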

We'll be leveraging POST requests for this particular project. Luckily, Express also has an easy way of implementing POST requests via app.post.

// index.js
const express = require("express");

const app = express();

app.post('/api', function (request, response) {

});

app.listen(3000);
console.log("Listening to port 3000");

You might notice that the code we just added looks very similar to an event :Connect() in Lua. That's because it essentially is an event connection: we're listening for POST requests on the /api route.
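
As a quick sketch (the real handler is filled in during Step 3), the callback receives the request and can send data back with response.send:

// js (example)
app.post('/api', (request, response) => {
	// Send a fixed reply for now, just to confirm the route is reachable.
	response.send("Received a POST request");
});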

Extra Credit : REST Calls

There are different types of REST requests, each with its own use; the sketch after this list shows how they map onto Express routes.
GET - Used to get data from the server
POST - Used to send data to the server
PUT - Used to update data on the server
DELETE - Used to delete data from the server
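
In Express, each of these maps to its own method on the app object (a sketch; this tutorial's API only uses app.post):

// js (example)
app.get('/api', (request, response) => { /* return data to the caller */ });
app.post('/api', (request, response) => { /* receive data from the caller */ });
app.put('/api', (request, response) => { /* update existing data */ });
app.delete('/api', (request, response) => { /* delete data */ });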

Extra Credit : Arrow functions

In JavaScript, programmers often use arrow functions to quickly define a function.

// js (example)
function FunctionName (request, response) {

}

Would then be defined like this:

// js (example)
const FunctionName = (request, response) => {

}

The rest of the tutorial will use arrow functions for ease of use.

It is also important to note that most web developers use (req, res) => {} instead of (request, response) => {}. However, that might make the tutorial a little more confusing so I opted to write out the full name of the arguments.

Finally, make sure to use body-parser; it's a module that'll make dealing with JSON easier.

# command line
npm install body-parser
// index.js
const bodyParser = require("body-parser");

app.use(bodyParser.json());
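
As a side note, if you're on Express 4.16 or newer, the framework also ships a built-in JSON parser, so the following works as well and saves a dependency (either approach is fine for this tutorial):

// js (example)
app.use(express.json());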

Step 2 : Setting up the tensorflow.js model

If you’re using a preexisting api, you can skip this step.

The package for running a TensorFlow model in JavaScript is @tensorflow/tfjs. It can be installed and imported like so.

# command line
npm install @tensorflow/tfjs
// index.js
const tf = require("@tensorflow/tfjs");

TensorFlow.js prefers that you host your model on the web and reference it by URL to initialize it. However, since we're using Node.js, initializing it directly from the local directory can be preferable. To do this you need to import another package, @tensorflow/tfjs-node, alongside the original @tensorflow/tfjs.

# command line
npm install @tensorflow/tfjs-node
// index.js
const tf = require("@tensorflow/tfjs");
const tfn = require("@tensorflow/tfjs-node");

Extra Credit : Tensors

Since NumPy, a Python library that handles matrices and matrix operations, is not available in JS (officially; there are user-created libraries that are similar), we need to use tensors to pass data into the model.

Here's an example of a tensor:

// js (example)
let tensor = tf.tensor2d([data])

Which is equivalent to this in NumPy:

# python (example)
tensor = np.array([data])

Then use tfn.io.fileSystem and tf.loadLayersModel to load the model from the directory, making sure to change "./v1-js/model.json" to the path of your model's model.json file.

// index.js
let model;
let handler = tfn.io.fileSystem("./v1-js/model.json");
tf.loadLayersModel(handler).then(x => {
	model = x;
});

If you are using a model hosted somewhere else, you can load it directly from a URL instead. Make sure to replace the URL with the location of your hosted model.

// index.js
let model;
tf.loadLayersModel("https://cdn.jsdelivr.net/npm/@tensorflow-models/toxicity").then(x => {
	model = x;
});

Since tf.loadLayersModel is designed to fetch a model hosted on the web, it is asynchronous, hence the .then().
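
If you prefer async/await over .then, the same load can be written like this (a sketch, assuming it's wrapped in an async function):

// js (example)
let model;

async function loadModel() {
	// Await the promise returned by tf.loadLayersModel instead of chaining .then.
	model = await tf.loadLayersModel(handler);
}

loadModel();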

This is what the code should look like after this step.

Code
// index.js
const express = require("express");
const tf = require("@tensorflow/tfjs");
const tfn = require("@tensorflow/tfjs-node");
const bodyParser = require("body-parser");

const app = express();
app.use(bodyParser.json());

let model;
let handler = tfn.io.fileSystem("./v1-js/model.json");
tf.loadLayersModel(handler).then(x => {
	model = x;
});

app.post('/api', (request, response) => {

});

app.listen(3000);
console.log("Listening to port 3000");

Step 3 : Integrating the model to the web server

If you’re using a preexisting api, you can skip this step.

Now it’s time to combine the model with the server.

We need to take in an input, clean it, then return a prediction based on that input.
As you may remember, the request listener's callback receives two arguments: request and response.

Assuming that we send a JSON object via an HTTP request from Roblox, we can simply use request.body as a JSON object. So if we want to print out the input property of the request body, we can do the following.

// index.js
app.post('/api', (request, response) => {
	console.log(request.body.input);
});

If you're familiar with machine learning, you'll know how important preprocessing, or cleaning data, is. The model I made for predicting the mood of text requires the following steps (a worked example follows the list).

  • The input to be lowercased
  • All special characters to be removed
  • The input to be split by each space into an array
  • Each word of that array to be assigned its dictionary index, with an out-of-vocabulary value used otherwise
  • The array to be padded to a length of 32, adding zeros when shorter or removing values when longer
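
To make those steps concrete, here is a worked walkthrough on a sample message (the token indices are made up purely for illustration; your v1_tokens.json will assign different numbers):

// js (example, illustrative values only)
// 1-3. Lowercase, strip special characters, split on spaces:
//   "Today is a great day."  ->  "today is a great day"  ->  ["today", "is", "a", "great", "day"]
//
// 4. Look each word up in the token dictionary (hypothetical indices), using 1 for out-of-vocabulary words:
//   ["today", "is", "a", "great", "day"]  ->  [12, 4, 2, 87, 15]
//
// 5. Pad with zeros up to a length of 32 (or truncate if longer):
//   [12, 4, 2, 87, 15, 0, 0, ..., 0]  // length 32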

This is a lot to take in, so let's split it up into chunks. The first step is to define our regex and our dictionary of tokens. The regex, or regular expression, is used to remove all special characters. The token dictionary is a dictionary of the words that the model was trained with; in other words, the words the AI has learned and is able to use. This dictionary is simply exported and placed in the same directory as the JavaScript file.

// index.js
const fs = require('fs');

const special = /[\`|\~|\!|\@|\#|\$|\%|\^|\&|\*|\(|\)|\+|\=|\[|\{|\]|\}|\||\\|\'|\<|\,|\.|\>|\?|\/|\""|\;|\:|]/g

let tokens;
fs.readFile('v1_tokens.json', (err, data) => {
	if (!err){
		tokens = JSON.parse(data);
	}
});

Side Note

fs, or file system, is a Node module used to interact with the file system of the host machine. It's included in the standard library, so you don't have to worry about installing it.

Lowercasing a string is pretty straightforward in JS, using the toLowerCase() method. Using the replace method, similar to string.gsub, we can apply a regex replacement. Splitting the string is also pretty simple with the split method, although be aware that it returns an array of strings rather than a single string.

// index.js
app.post('/api', (request, response) => {
	let data = request.body.input;
	data = data.toLowerCase();
	data = data.replace(special, '');
	data = data.split(' ');
});

Using the map method, we can replace each word with its respective index. Afterwards, using the slice and concat methods, we can truncate or pad the data to a length of 32.

// index.js
app.post('/api', (request, response) => {
	let data = request.body.input;
	data = data.toLowerCase();
	data = data.replace(special, '');
	data = data.split(' ');
	data = data.map(x => tokens[x] || 1)
	
	if (data.length > 32) {
		data = data.slice(0, 32); // keep only the first 32 values
	}
	else if (data.length < 32) {
		data = data.concat(Array(32 - data.length).fill(0))
	}
});

Then we can create a tensor from the input data, feed it into the model, and send the output back as a response.

I've also defined an emoji list that correlates with the output. The model is trained with mood labels, so a label of 2 represents happiness, 3 sadness, and 4 anger.

// index.js
const emoji = {
	1: '😕',
	2: '😀',
	3: '😥',
	4: '😠',
	5: '😧',
	6: '🥰',
	7: '😮',
}
// index.js
let tensor = tf.tensor2d([data])

let max;
let prediction = model.predict(tensor);

prediction.data().then(x => {
	max = x.indexOf(Math.max(...x));
	response.send(emoji[max]);
}).catch(() => {
	response.send(emoji[1]);
});

Side Note

Because of how the model is set up, it outputs an array of probabilities; x.indexOf(Math.max(...x)) just takes the index of the biggest probability in that array, in other words, the most probable prediction.
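
For example, with purely illustrative numbers:

// js (example)
let x = [0.05, 0.10, 0.70, 0.10, 0.05];
let max = x.indexOf(Math.max(...x)); // 2, the index of the highest probability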

The prediction.data() call is also asynchronous, since TensorFlow.js is designed to run model predictions over the web.

All together the code should look like this.

Code
// index.js
const express = require("express");
const tf = require("@tensorflow/tfjs");
const tfn = require("@tensorflow/tfjs-node");
const bodyParser = require("body-parser");
const fs = require('fs');

const app = express();
app.use(bodyParser.json());

let model;
let handler = tfn.io.fileSystem("./v1-js/model.json");
tf.loadLayersModel(handler).then(x => {
	model = x;
});

const special = /[\`|\~|\!|\@|\#|\$|\%|\^|\&|\*|\(|\)|\+|\=|\[|\{|\]|\}|\||\\|\'|\<|\,|\.|\>|\?|\/|\""|\;|\:|]/g

let tokens;
fs.readFile('v1_tokens.json', (err, data) => {
	if (!err){
		tokens = JSON.parse(data);
	}
});

const emoji = {
	1: '😕',
	2: '😀',
	3: '😥',
	4: '😠',
	5: '😧',
	6: '🥰',
	7: '😮',
}

app.post('/api', (request, response) => {
	let data = request.body.input;
	data = data.toLowerCase();
	data = data.replace(special, '');
	data = data.split(' ');
	data = data.map(x => tokens[x] || 1)
	
	if (data.length > 32) {
		data = data.slice(0, 32); // keep only the first 32 values
	}
	else if (data.length < 32) {
		data = data.concat(Array(32 - data.length).fill(0))
	}

	let tensor = tf.tensor2d([data])

	let max;
	let prediction = model.predict(tensor);

	prediction.data().then(x => {
		max = x.indexOf(Math.max(...x));
		response.send(emoji[max]);
	}).catch(() => {
		response.send(emoji[1]);
	});
});

app.listen(3000);
console.log("Listening to port 3000");

Step 4 : Connecting to the API on Roblox

After hosting the code on a server, make sure you have the server's address handy for later.

Alright, let's make a new Roblox place.
Then, add an anchored part with some surface text on it.

Put a server script as a child of the frame, so that the script sits alongside the TextLabel (the code later references script.Parent.TextLabel).

We'll be using HttpService to send a POST request to the web server and the Chat service to filter results. We might as well define the API URL here too. Make sure you replace mine with your own hosted web server, otherwise it won't work.

-- Script
Http = game:GetService("HttpService")
Chat = game:GetService("Chat")

api = "https://emoji-from-text.lynnlo.repl.co/api"

The way I'll be integrating the model is by listening to each player's chat events, sending the message to the model, taking the output, filtering it, then displaying it on the text label.

Let’s start by listening to chat events from each player.

-- Script
game.Players.PlayerAdded:Connect(function(player)
	player.Chatted:Connect(function(message)

	end)
end)	

Since the text will be cleaned and processed on the web server side, all we need to do here is encode the data into a JSON format, like so.

-- Script
local data = Http:JSONEncode({["input"] = message})

Then simply pass it on to the server and wait for a response.

-- Script
local response
		
pcall(function()
	response = Http:PostAsync(api, data)
end)

Make sure to filter the response via the Chat service before displaying it.

-- Script
response = Chat:FilterStringAsync(response, player, player)

Afterwards set the text of the text label to the filtered output.

-- Script
script.Parent.TextLabel.Text = "Your message is " .. response

Together it should look like this.

Code
-- Script
Http = game:GetService("HttpService")
Chat = game:GetService("Chat")

api = "https://emoji-from-text.lynnlo.repl.co/api"

game.Players.PlayerAdded:Connect(function(player)
	player.Chatted:Connect(function(message)
		local data = Http:JSONEncode({["input"] = message})
		local response
		
		pcall(function()
			response = Http:PostAsync(api, data)
		end)
		
		response = Chat:FilterStringAsync(response, player, player)
		
		script.Parent.TextLabel.Text = "Your message is " .. response
	end)
end)

There you have it, you have successfully implemented a fully working AI in Roblox. Be sure to test it out to make sure it works.

Closing remarks

I hope you found this useful since, like my previous tutorial, it took a very long time to make.

Here's the Roblox place file if anyone wants it.
AiWebserver.rbxl (26.5 KB)

All of the code for the web server and the front end can be found on GitHub, and the machine learning model itself can be found on Google Colab if you're interested in how the model works.

If there are any flaws in the code or tutorial, please let me know. Don't be afraid to post questions as well; I'll do my best to answer them.

If you really enjoyed this post, all I ask is that you share it with a friend who might find it interesting. I've been lynnlo, and thanks for reading.

60 Likes

This is pretty cool!
Imagine walking up to an NPC, talking trash, and then getting attacked. This isn’t quite there yet, but still very cool!

10 Likes

Very creative way to implement TensorFlow with Roblox. I wonder if there will be actual support for TensorFlow later on.

3 Likes

This is very interesting. It has me wondering if someone could do something with GPT-2 and make stories from AI-generated text (like AI Dungeon, in a way) on different web servers. And as long as we pass it through Roblox's filter, we shouldn't have to worry about the AI generating bad things either. You could also have dynamic NPC responses to player input as well.

1 Like

This is awesome. It's a great beginning for potentially interactive, more self-aware NPCs on the platform!

2 Likes

I have plans with this in mind. In this example we will use the word John.
Player:
John, where is the block?

If the script detects the NPC's keyword name, it will choose from a list of dialogues for the NPC.
I will use artificial intelligence for emotion detection along with keyword detection in the sentence that corresponds to the emotion, so that the game will select the appropriate pre-written response dialogue.

John:
The block is inside the barn.

Player:
Thank you.

John:
No problem.

I will use a combination of logic scripting and machine learning.
My inspiration for this project is watching the Sword Art Online anime with NPCs created from artificial intelligence.

I have the skills to script on Roblox but I don’t know how to create artificial intelligence.

2 Likes

A bit off-topic, but can you explain how GetAsync works with HttpService and Express?

2 Likes

To use GetAsync instead of PostAsync with HttpService, simply use the Express code below.

// Javascript
app.get('/api', (request, response) => {
	// Handle Request
});

2 Likes

Sorry for the bump but it looks like the API is down?

1 Like

Seems like the owner changed the API a bit. It’s now /api/emoji instead of /api.

2 Likes

I highly recommend creating your own API with Replit or Heroku since mine will not be online 24/7.

1 Like

Good advice. Thank you. Might take a while but will definitely do…

Great tutorial! Gonna try to create an API in Flask and use Heroku.

Edit: Could you also create a github repo so we can check all the code?

Great tutorial! Just one minor typo.

The string should be "Your message is " but it currently says "You message is ".

That’s the only thing I found incorrect with the tutorial.

1 Like

Yep, it's here: GitHub - lynnlo/Emoji-From-Text: Takes in text as input and outputs an emoji. It's already in the tutorial near the end.

Thanks for catching that, it’s been fixed.