Introduction
EasyML is a simple yet powerful module (API + core modules) that brings some well-known machine learning features to Roblox. My goal with this project is to bring powerful machine learning techniques to the Roblox platform in a way that’s accessible and flexible for all developers.
Why EasyML
Integrating machine learning into Roblox games can be hard due to platform limitations, or simply because you don’t have the time to learn about it. EasyML is made to save you time, offering an API that lets you:
- Train models on your own data
- Predict outcomes based on your trained models
- Switch easily between different machine learning models without having to change your code logic
And above all, it’s easy to use!
Features
Three models are available within this API:
- Linear Regression: Perfect for simple predictions and trend analysis.
- Decision Trees: Great for classification tasks where you need a model that can make decisions based on multiple input features.
- Neural Networks: A more advanced model suitable for tasks requiring more complex patterns, such as classification problems. (Supports the AND logic gate; most likely useful for AI-driven NPCs.)
Getting Started
Adding EasyML to your game is really easy!
You can either use the EasyML plugin to add EasyML directly to your Roblox game, in the right service.
Or download the model here: https://create.roblox.com/store/asset/18872412147/EasyML
How to use
This is a complete guide on how to use the different available models in EasyML.
Testing the Linear Regression model
In this section you’ll learn more on how to use the Linear Regression model.
Tests and results
Testing script:
local MachineLearningModule = require(game.ServerStorage.MachineLearningModule) -- Requiring the main module
-- Example for the Linear Regression model
print("============== Linear Regression =================")
local lr_model = MachineLearningModule.new(MachineLearningModule.ModelType.LinearRegression)
-- Training data
local lr_data = {1, 2, 3, 4, 5}
local lr_targets = {2, 4, 6, 8, 10}
-- Training the model and then using it to predict a few simple outcomes
lr_model:train(lr_data, lr_targets)
local lr_prediction = lr_model:predict(6)
local lr_prediction2 = lr_model:predict(7)
local lr_prediction3 = lr_model:predict(8)
-- Printing in the console the results of these outcomes
print("Linear Regression Prediction for 6 is", lr_prediction)
print("Linear Regression Prediction for 7 is", lr_prediction2)
print("Linear Regression Prediction for 8 is", lr_prediction3)
Expected results:
Explanations
Now that we have seen how it works, let’s break down the results step by step and analyze the Linear Regression module:
Training data
The training data we have set is:
- {1, 2, 3, 4, 5} for the data (the input values)
- {2, 4, 6, 8, 10} for the targets (the target values)
During the training process, the model uses these data points to learn the relationship between the data and the targets. The relationship between the input values and the target values is linear, as shown by the pairs (1, 2), (2, 4), (3, 6), (4, 8), and (5, 10). The equation of a straight line that perfectly fits this data is:
y = 2x
where y is the target value and x is the input value.
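To see what the model has to discover, here is a small standalone Lua sketch (not EasyML’s internal code, just an illustration of ordinary least squares) that fits y = m*x + b to the same training data:

```lua
-- Illustrative only: the closed-form least-squares fit for a line y = m*x + b.
local data = {1, 2, 3, 4, 5}
local targets = {2, 4, 6, 8, 10}

local n = #data
local sumX, sumY, sumXY, sumXX = 0, 0, 0, 0
for i = 1, n do
	sumX = sumX + data[i]
	sumY = sumY + targets[i]
	sumXY = sumXY + data[i] * targets[i]
	sumXX = sumXX + data[i] * data[i]
end

-- Slope and intercept from the standard least-squares formulas.
local m = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX)
local b = (sumY - m * sumX) / n

print(m, b)       -- 2, 0
print(m * 6 + b)  -- 12, matching the model's prediction for 6
```

Once m and b are known, predicting a new value is just plugging it into the line, which is exactly what the predictions below show.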
Model parameters and predictions
After training, the model should ideally find the slope (m) to be 2 and the intercept (b) to be 0, resulting in the equation:
y = 2x + 0
The model uses the learned parameters (slope and intercept) to make predictions on new input values. When we tested the model with new data points (6, 7, and 8), it used the learned parameters to compute the predictions:
- The prediction for 6 is y = 2 × 6 + 0 = 12, and the model predicted exactly 12, as expected.
- The prediction for 7 is y = 2 × 7 + 0 = 14, and the model predicted exactly 14, as expected.
- The prediction for 8 is y = 2 × 8 + 0 = 16, and the model predicted exactly 16, as expected.
Conclusion and explanation of the results
The predictions match the expected values, indicating that the model has successfully learned the linear relationship from the training data.
The output we got confirms that the model has learned, is performing well, and is able to generalize from the training data to make accurate predictions on new data points.
Great! Now you know how to use the Linear Regression model.
Testing the Decision Tree model
In this section you’ll learn more on how to use the Decision Tree model. While this model is more complex than the previous one, we’ll see that it’s not impossible to understand!
Tests and results
Testing script:
local MachineLearningModule = require(game.ServerStorage.MachineLearningModule) -- Requiring the main module
-- Example for the Decision Tree model
print("============== Decision Tree =================")
local dt_model = MachineLearningModule.new(MachineLearningModule.ModelType.DecisionTree)
-- Training data
local dt_data = {
{2.7, 2.5, 0}, {1.4, 2.3, 0}, {3.3, 4.4, 0},
{1.3, 1.8, 0}, {3.0, 3.0, 0}, {7.6, 2.7, 1},
{5.3, 2.0, 1}, {6.9, 1.7, 1}, {8.0, 3.0, 1}
}
-- Training the model with the tree parameters (basically the size of the decision tree) and then using it to predict a few simple outcomes
local dt_params = {maxDepth = 2, minSize = 1}
dt_model:train(dt_data, {}, dt_params)
local dt_prediction = dt_model:predict({1.5, 1.7})
local dt_prediction2 = dt_model:predict({3.5, 2.8})
local dt_prediction3 = dt_model:predict({8, 3})
-- Printing in the console the results of these outcomes
print("Decision Tree Prediction for (1.5, 1.7) is", dt_prediction)
print("Decision Tree Prediction for (3.5, 2.8) is", dt_prediction2)
print("Decision Tree Prediction for (8, 3) is", dt_prediction3)
Expected results:
Explanations
Now that we have seen how the Decision Tree model works, let’s break down the results step by step and analyze the Decision Tree module:
Training data
The training dataset we have set in dt_data consists of pairs of input values and a class label (0 or 1):
{
{2.7, 2.5, 0}, {1.4, 2.3, 0}, {3.3, 4.4, 0},
{1.3, 1.8, 0}, {3.0, 3.0, 0}, {7.6, 2.7, 1},
{5.3, 2.0, 1}, {6.9, 1.7, 1}, {8.0, 3.0, 1}
}
Here, the first two numbers in each entry represent features (attributes), and the last number represents the class label.
Decision Tree construction and predictions
The decision tree algorithm splits the dataset into smaller subsets to find the best way to classify the data. The splits are made based on the attribute values that minimize the Gini impurity, which basically measures the “impurity” of a split. The goal is to create pure nodes where most or all instances belong to a single class.
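As an illustration of the idea (this is not the module’s actual implementation), here is a standalone Lua function that computes the Gini impurity of a candidate split, where each row’s last element is its class label:

```lua
-- Illustrative only: weighted Gini impurity of a split into groups of rows.
-- Each row is a table whose last element is its class label.
local function gini(groups, classes)
	local total = 0
	for _, group in ipairs(groups) do
		total = total + #group
	end

	local score = 0
	for _, group in ipairs(groups) do
		local size = #group
		if size > 0 then
			-- Sum of squared class proportions within this group.
			local sum = 0
			for _, class in ipairs(classes) do
				local count = 0
				for _, row in ipairs(group) do
					if row[#row] == class then
						count = count + 1
					end
				end
				local p = count / size
				sum = sum + p * p
			end
			-- Weight each group's impurity by its relative size.
			score = score + (1 - sum) * (size / total)
		end
	end
	return score
end

-- A perfect split: each group holds a single class, so impurity is 0.
print(gini({ {{1, 0}, {2, 0}}, {{3, 1}, {4, 1}} }, {0, 1}))  -- 0
-- The worst split: classes evenly mixed in both groups, impurity is 0.5.
print(gini({ {{1, 0}, {2, 1}}, {{3, 0}, {4, 1}} }, {0, 1}))  -- 0.5
```

A split with impurity 0 separates the classes perfectly, which is why the algorithm prefers splits with the lowest Gini score.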
For the given test data, the decision tree uses the splits it learned during training to classify each new instance.
Prediction for (1.5, 1.7):
- The model classified this pair as class 0.
- This point is closer to the points in the training data that are labeled as 0, that is why it’s classified as 0.
- So basically, values around (1.5, 1.7) will fall in the region classified as 0.
Prediction for (3.5, 2.8):
- Same thing here, the model classified this as class 0.
- This point is closer to the points in the training data that are labeled as 0, that’s why it’s classified as 0.
- So basically values around (3.5, 2.8) will fall in the region classified as 0.
Prediction for (8, 3):
- This time, the model classified this as class 1.
- Again, this is because this point is closer to the points in the training data that are labeled as 1.
- So basically, values around (8, 3) will fall in the region classified as 1.
Conclusion and explanation of the results
- Our model has now learned how to split the data. That is because the decision tree algorithm splits the data based on attribute values that best separate the classes. The split points are determined by minimizing the Gini impurity.
- It has also learned how to classify data, because each test point is classified by traversing the decision tree from the root to a leaf node. The path taken is determined by the attribute values of the test point.
For instance:
- For (1.5, 1.7), the decision tree routes it through splits that lead to class 0.
- For (3.5, 2.8), the decision tree routes it through splits that lead to class 0.
- For (8, 3), the decision tree routes it through splits that lead to class 1.
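The routing described above can be sketched with a toy tree. The node layout and the split value 5.3 below are assumptions chosen for illustration only, not the tree EasyML actually built:

```lua
-- Hypothetical trained tree: a single split on the first feature at 5.3.
local tree = {
	feature = 1, threshold = 5.3,
	left = { label = 0 },   -- leaf reached when feature 1 < 5.3
	right = { label = 1 },  -- leaf reached otherwise
}

-- Walk from the root to a leaf, picking a branch at each split node.
local function classify(node, point)
	if node.label ~= nil then
		return node.label -- reached a leaf: return its class
	end
	if point[node.feature] < node.threshold then
		return classify(node.left, point)
	end
	return classify(node.right, point)
end

print(classify(tree, {1.5, 1.7}))  -- 0
print(classify(tree, {3.5, 2.8}))  -- 0
print(classify(tree, {8, 3}))      -- 1
```

Even this one-split toy tree reproduces the three test predictions, which shows how little structure is needed to separate this particular dataset.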
So finally, our model has learned to separate the data into regions where each region predominantly contains points from one class. The predictions for the test points reflect the learned patterns from the training data, showing the ability of the decision tree to generalize and classify new instances based on these patterns.
Good job! Now you know how to use the Decision Tree model.
Testing the Neural Network model
In this section you’ll learn more on how to use the Neural Network model. This is the hardest model to understand within this API, but I’m sure you will be able to understand it. So without further ado, let’s dive in!
Tests and results
Testing script:
local MachineLearningModule = require(game.ServerStorage.MachineLearningModule) -- Requiring the main module
-- Example for the Neural Network model
print("============== Neural Network =================")
local nn_model = MachineLearningModule.new(MachineLearningModule.ModelType.NeuralNetwork)
-- Training data
local nn_data = {
{0, 0}, {0, 1}, {1, 0}, {1, 1}
}
local nn_targets = {
{0}, {0}, {0}, {1}
}
local nn_params = {
numInputs = 2,
numHidden = 48,
numOutputs = 1,
epochs = 20000,
learningRate = 0.01,
debugMode = false -- adds additional prints in the console for each epoch
}
-- Training the model and then using it to predict the results of each rows of the AND logic table
nn_model:train(nn_data, nn_targets, nn_params)
local nn_prediction = nn_model:predict({0, 0})
local nn_prediction2 = nn_model:predict({0, 1})
local nn_prediction3 = nn_model:predict({1, 0})
local nn_prediction4 = nn_model:predict({1, 1})
-- Printing in the console the results of these outcomes
print("Neural Network Prediction for (0, 0) is", math.floor(nn_prediction + 0.5))
print("Neural Network Prediction for (0, 1) is", math.floor(nn_prediction2 + 0.5))
print("Neural Network Prediction for (1, 0) is", math.floor(nn_prediction3 + 0.5))
print("Neural Network Prediction for (1, 1) is", math.floor(nn_prediction4 + 0.5))
Expected results:
Explanations
Now that we have seen how this model works, let’s break down the results step by step and analyze it together:
AND logic gate and Training data
The training dataset we have set in nn_data is basically the truth table of the AND logic gate. But before comparing our results to what we expect, let’s first have a look at what exactly an AND logic gate is.
An AND logic gate (or AND gate) is a basic digital logic gate that outputs 1 only when both of its inputs are 1. In all other cases, it outputs 0. The truth table for an AND gate is as follows:
A | B | A AND B
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
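In code, the whole truth table collapses to a single condition. A quick Lua one-liner (for illustration) using the same 0/1 numbers as the training data:

```lua
-- The AND gate as a plain Lua function over 0/1 numbers.
local function andGate(a, b)
	return (a == 1 and b == 1) and 1 or 0
end

print(andGate(0, 0), andGate(0, 1), andGate(1, 0), andGate(1, 1))  -- 0 0 0 1
```

This is the function the neural network has to approximate from the four training rows.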
Neural Network results and conclusion
So now that we know how the AND gate is supposed to work, let’s see if our model learned it correctly.
- First input, (0, 0): the prediction made by our model is 0. The neural network correctly predicts that the output is 0 when both inputs are 0. It’s exactly what we expected.
- Second input, (0, 1): the prediction made by our model is 0. The neural network correctly predicts that the output is 0 when the first input is 0 and the second input is 1. Again, that’s what we expected, so our model is doing pretty well so far!
- Third input, (1, 0): the prediction made by our model is 0. The neural network correctly predicts that the output is 0 when the first input is 1 and the second input is 0.
- Last input, (1, 1): the prediction made by our model is 1. The neural network correctly predicts that the output is 1 when both inputs are 1. So our model learned the AND logic gate correctly, hooray!
The neural network has successfully learned the AND gate function. It predicts the correct output for each possible input pair, showing that it has effectively modeled this basic logic gate. This is a good indication that our neural network is functioning as expected, and the training process was successful.
We won’t go too much into detail for this model because it is not really necessary for the sake of this guide. But if you are curious about how it works, go ahead and check the module script!
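For the curious, here is a minimal standalone sketch of the core idea, a single sigmoid neuron. The weights below are hand-picked to reproduce the AND gate after rounding; they are not the values EasyML actually learned:

```lua
-- Illustrative only: one sigmoid neuron with hand-picked weights.
local function sigmoid(x)
	return 1 / (1 + math.exp(-x))
end

-- Weighted sum of inputs plus a bias, squashed into (0, 1).
local function neuron(inputs, weights, bias)
	local sum = bias
	for i = 1, #inputs do
		sum = sum + inputs[i] * weights[i]
	end
	return sigmoid(sum)
end

-- These weights make the rounded output behave like an AND gate,
-- the same rounding (math.floor(x + 0.5)) the testing script uses.
local w, b = {10, 10}, -15
for _, pair in ipairs({ {0, 0}, {0, 1}, {1, 0}, {1, 1} }) do
	local out = neuron(pair, w, b)
	print(pair[1], pair[2], math.floor(out + 0.5))
end
```

Training is essentially the process of finding weights like these automatically, by repeatedly nudging them to reduce the prediction error over many epochs.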
Impressive! Now you know how to use the Neural Network model.
Making a script to test every model at once
In this section we’ll make a script that combines all the testing scripts we saw above.
Main script:
-- All in one testing script
local MachineLearningModule = require(game.ServerStorage.MachineLearningModule)
-- Example for the Linear Regression model
print("============== Linear Regression =================")
local lr_model = MachineLearningModule.new(MachineLearningModule.ModelType.LinearRegression)
-- Training data
local lr_data = {1, 2, 3, 4, 5}
local lr_targets = {2, 4, 6, 8, 10}
-- Training the model and then using it to predict a few simple outcomes
lr_model:train(lr_data, lr_targets)
local lr_prediction = lr_model:predict(6)
local lr_prediction2 = lr_model:predict(7)
local lr_prediction3 = lr_model:predict(8)
-- Printing in the console the results of these outcomes
print("Linear Regression Prediction for 6 is", lr_prediction)
print("Linear Regression Prediction for 7 is", lr_prediction2)
print("Linear Regression Prediction for 8 is", lr_prediction3)
-- Example for the Decision Tree model
print("============== Decision Tree =================")
local dt_model = MachineLearningModule.new(MachineLearningModule.ModelType.DecisionTree)
-- Training data
local dt_data = {
{2.7, 2.5, 0}, {1.4, 2.3, 0}, {3.3, 4.4, 0},
{1.3, 1.8, 0}, {3.0, 3.0, 0}, {7.6, 2.7, 1},
{5.3, 2.0, 1}, {6.9, 1.7, 1}, {8.0, 3.0, 1}
}
local dt_params = {maxDepth = 2, minSize = 1}
-- Training the model with the tree parameters (basically the size of the decision tree) and then using it to predict a few simple outcomes
dt_model:train(dt_data, {}, dt_params)
local dt_prediction = dt_model:predict({1.5, 1.7})
local dt_prediction2 = dt_model:predict({3.5, 2.8})
local dt_prediction3 = dt_model:predict({8, 3})
-- Printing in the console the results of these outcomes
print("Decision Tree Prediction for (1.5, 1.7) is", dt_prediction)
print("Decision Tree Prediction for (3.5, 2.8) is", dt_prediction2)
print("Decision Tree Prediction for (8, 3) is", dt_prediction3)
-- Example for the Neural Network model
print("============== Neural Network =================")
local nn_model = MachineLearningModule.new(MachineLearningModule.ModelType.NeuralNetwork)
-- Training data
local nn_data = {
{0, 0}, {0, 1}, {1, 0}, {1, 1}
}
local nn_targets = {
{0}, {0}, {0}, {1}
}
local nn_params = {
numInputs = 2,
numHidden = 48,
numOutputs = 1,
epochs = 20000,
learningRate = 0.01,
debugMode = false -- adds additional prints in the console for each epoch
}
-- Training the model and then using it to predict the results of each rows of the AND logic table
nn_model:train(nn_data, nn_targets, nn_params)
local nn_prediction = nn_model:predict({0, 0})
local nn_prediction2 = nn_model:predict({0, 1})
local nn_prediction3 = nn_model:predict({1, 0})
local nn_prediction4 = nn_model:predict({1, 1})
-- Printing in the console the results of these outcomes
print("Neural Network Prediction for (0, 0) is", math.floor(nn_prediction + 0.5))
print("Neural Network Prediction for (0, 1) is", math.floor(nn_prediction2 + 0.5))
print("Neural Network Prediction for (1, 0) is", math.floor(nn_prediction3 + 0.5))
print("Neural Network Prediction for (1, 1) is", math.floor(nn_prediction4 + 0.5))
Expected results:
Applications in Roblox games
Applying machine learning concepts to a Roblox game can bring up a whole new genre of games, where the player is truly immersed: they can role-play with NPCs that act like humans and share human-like behavior, and face intelligent NPCs (either enemy AIs or friendly NPCs) that can make decisions by themselves!
It can also enable obbies that adapt directly to the player’s level: the neural network model we saw above could be used to analyze the player’s performance and change the game difficulty in real time (game example: an auto-evolving obby that adapts to the player’s level). This would create a new, personalized in-game experience where players won’t experience boredom as much as they did before!
The possibilities are endless, and there are a lot of other examples, like procedural terrain generation or personalized recommendation systems, that could also be worth checking out.
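As a toy sketch of that idea (the score source and the thresholds are made up for illustration, not part of EasyML), a performance score between 0 and 1, such as a raw neural network output, could be mapped to a difficulty setting:

```lua
-- Hypothetical mapping from a model's performance score
-- (0 = struggling, 1 = dominating) to an obby difficulty level.
-- The threshold values here are assumptions for the example.
local function difficultyFor(score)
	if score < 0.33 then
		return "easy"
	elseif score < 0.66 then
		return "normal"
	end
	return "hard"
end

print(difficultyFor(0.2))  -- easy
print(difficultyFor(0.8))  -- hard
```

A game loop could call something like this after each obby section and swap in harder or easier stages accordingly.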
Updates
This is the first version of this API/modules. I will keep updating this to include more features and make it even easier to work with or include in your Roblox games.
Potential new features:
- AI driven NPCs model (so you can use this model and adapt it easily for your games)
- Personalized user experience
- AI powered procedural content generation (to generate personalized storylines or dialogue trees based on the player’s decisions)
Feedback and questions
If you have any questions or if you are having any issues, let me know!
Any feedback or ideas are welcome too, so don’t hesitate to tell us!