EasyML - An easy way to use machine learning in your Roblox games!

Introduction

EasyML is a simple yet powerful module (API + core modules) that brings some well-known machine learning features to Roblox. My goal with this project is to bring powerful machine learning techniques to the Roblox platform in a way that’s accessible and flexible for all developers.

Why EasyML

Integrating machine learning into Roblox games can be hard, whether because of platform limitations or because you don’t necessarily have the time to learn it. EasyML is made to save you time, offering an API that lets you:

  • Train models on your own data
  • Predict outcomes based on your trained models
  • Switch easily between different machine learning models without having to change your code logic

And above all, it’s easy to use!

Features

Three models are available within this API:

  • Linear Regression: Perfect for simple predictions and trend analysis.
  • Decision Trees: Great for classification tasks where you need a model that can make decisions based on multiple input features.
  • Neural Networks: A more advanced model suited to tasks involving more complex patterns, such as classification problems. (It supports the AND logic gate and is most likely to be used for AI-driven NPCs.)

Getting Started

Adding EasyML to your game is really easy!

You can use the EasyML plugin to add EasyML directly to your game, in the right service.

Or download the model here: https://create.roblox.com/store/asset/18872412147/EasyML

How to use

This is a complete guide on how to use the different available models in EasyML.

Testing the Linear Regression model

In this section you’ll learn how to use the Linear Regression model.

Tests and results

Testing script:

local MachineLearningModule = require(game.ServerStorage.MachineLearningModule) -- Requiring the main module

-- Example for the Linear Regression model
print("============== Linear Regression =================")
local lr_model = MachineLearningModule.new(MachineLearningModule.ModelType.LinearRegression)

-- Training data
local lr_data = {1, 2, 3, 4, 5} 
local lr_targets = {2, 4, 6, 8, 10}

-- Training the model and then using it to predict a few simple outcomes
lr_model:train(lr_data, lr_targets)
local lr_prediction = lr_model:predict(6)
local lr_prediction2 = lr_model:predict(7)
local lr_prediction3 = lr_model:predict(8)

-- Printing in the console the results of these outcomes
print("Linear Regression Prediction for 6 is", lr_prediction)
print("Linear Regression Prediction for 7 is", lr_prediction2)
print("Linear Regression Prediction for 8 is", lr_prediction3)

Expected results:

Explanations

Now that we have seen how it works, let’s break down the results step by step and analyze the Linear Regression module:

Training data

The training data we have set is:

  • {1, 2, 3, 4, 5} for the data (which are the input values)
  • {2, 4, 6, 8, 10} for the target (the target values)

During the training process, the model uses these data points to learn the relationship between data and target. The relationship between the input values and the target values is linear, as shown by the pairs (1, 2), (2, 4), (3, 6), (4, 8), and (5, 10). The equation of a straight line that perfectly fits this data is:

y = 2x

where y is the target value and x is the input value.

Model parameters and predictions

After training, the model should ideally find the slope (m) to be 2 and the intercept (b) to be 0, resulting in the equation:

y = 2x + 0

The model uses the learned parameters (slope and intercept) to make predictions on new input values. When we tested the model with new data points (6, 7, and 8), it used the learned parameters to compute the predictions:

  1. The prediction for 6 is y = 2 × 6 + 0 = 12, and the model predicted exactly 12, which is the value we expected.
  2. The prediction for 7 is y = 2 × 7 + 0 = 14, and the model predicted exactly 14, as expected.
  3. The prediction for 8 is y = 2 × 8 + 0 = 16, and the model predicted exactly 16, as expected.
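For intuition, here is how such a fit can be computed in closed form. This is a standalone sketch in plain Lua, not EasyML’s actual implementation:

```lua
-- Illustrative least-squares fit (not EasyML's actual implementation).
-- Computes the slope m and intercept b that minimize the squared error.
local function fitLine(xs, ys)
	local n = #xs
	local sumX, sumY, sumXY, sumXX = 0, 0, 0, 0
	for i = 1, n do
		sumX = sumX + xs[i]
		sumY = sumY + ys[i]
		sumXY = sumXY + xs[i] * ys[i]
		sumXX = sumXX + xs[i] * xs[i]
	end
	local m = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX)
	local b = (sumY - m * sumX) / n
	return m, b
end

local m, b = fitLine({1, 2, 3, 4, 5}, {2, 4, 6, 8, 10})
print(m, b)         -- slope = 2, intercept = 0 for this data
print(m * 6 + b)    -- 12, matching the module's prediction for 6
```

With this perfectly linear data the fit is exact; with noisy data, the slope and intercept would only approximate the trend.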

Conclusion and explanation of the results

The predictions match the expected values, indicating that the model has successfully learned the linear relationship from the training data.

The output we got:

This confirms that the model has learned, is performing well, and can generalize from the training data to make accurate predictions on new data points.

Great! Now you know how to use the Linear Regression model. :tada:

Testing the Decision Tree model

In this section you’ll learn how to use the Decision Tree model. While this model is more complex than the previous one, we’ll see that it’s not impossible to understand!

Tests and results

Testing script:

local MachineLearningModule = require(game.ServerStorage.MachineLearningModule) -- Requiring the main module

-- Example for the Decision Tree model
print("============== Decision Tree =================")
local dt_model = MachineLearningModule.new(MachineLearningModule.ModelType.DecisionTree)

-- Training data
local dt_data = {
   {2.7, 2.5, 0}, {1.4, 2.3, 0}, {3.3, 4.4, 0},
   {1.3, 1.8, 0}, {3.0, 3.0, 0}, {7.6, 2.7, 1},
   {5.3, 2.0, 1}, {6.9, 1.7, 1}, {8.0, 3.0, 1}
}

-- Training the model with the tree parameters (basically the size of the decision tree) and then using it to predict a few simple outcomes
local dt_params = {maxDepth = 2, minSize = 1}
dt_model:train(dt_data, {}, dt_params)
local dt_prediction = dt_model:predict({1.5, 1.7})
local dt_prediction2 = dt_model:predict({3.5, 2.8})
local dt_prediction3 = dt_model:predict({8, 3})

-- Printing in the console the results of these outcomes
print("Decision Tree Prediction for (1.5, 1.7) is", dt_prediction)
print("Decision Tree Prediction for (3.5, 2.8) is", dt_prediction2)
print("Decision Tree Prediction for (8, 3) is", dt_prediction3)

Expected results:

Explanations

Now that we have seen how the Decision Tree model works, let’s break down the results step by step and analyze the Decision Tree module:

Training data

The training dataset we have set in dt_data consists of pairs of input values and a class label (0 or 1):

{
{2.7, 2.5, 0}, {1.4, 2.3, 0}, {3.3, 4.4, 0},
{1.3, 1.8, 0}, {3.0, 3.0, 0}, {7.6, 2.7, 1},
{5.3, 2.0, 1}, {6.9, 1.7, 1}, {8.0, 3.0, 1}
}

Here, the first two numbers in each entry represent features (attributes), and the last number represents the class label.

Decision Tree construction and predictions

The decision tree algorithm splits the dataset into smaller subsets to find the best way to classify the data. The splits are made based on the attribute values that minimize the Gini impurity, which basically measures the “impurity” of a split. The goal is to create pure nodes where most or all instances belong to a single class.
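To make the Gini impurity measure concrete, here is a standalone sketch in plain Lua, independent of EasyML; the split threshold of 4.3 on the first feature is hand-picked for illustration:

```lua
-- Gini impurity of a split into groups, for class labels 0 and 1.
-- 0.0 means every group contains a single class; higher means more mixed.
local function giniIndex(groups, classes)
	local total = 0
	for _, group in ipairs(groups) do
		total = total + #group
	end
	local gini = 0
	for _, group in ipairs(groups) do
		local size = #group
		if size > 0 then
			local score = 0
			for _, class in ipairs(classes) do
				local count = 0
				for _, row in ipairs(group) do
					-- the class label is the last element of each row
					if row[#row] == class then count = count + 1 end
				end
				local p = count / size
				score = score + p * p
			end
			-- weight each group's impurity by its relative size
			gini = gini + (1 - score) * (size / total)
		end
	end
	return gini
end

-- Splitting the training set at x1 < 4.3 separates the classes perfectly:
local left  = {{2.7, 2.5, 0}, {1.4, 2.3, 0}, {3.3, 4.4, 0}, {1.3, 1.8, 0}, {3.0, 3.0, 0}}
local right = {{7.6, 2.7, 1}, {5.3, 2.0, 1}, {6.9, 1.7, 1}, {8.0, 3.0, 1}}
print(giniIndex({left, right}, {0, 1}))  -- 0, a perfectly pure split
```

During training, the algorithm evaluates many candidate splits like this and keeps the one with the lowest impurity.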

For the given test data, the decision tree uses the splits it learned during training to classify each new instance.

Prediction for (1.5, 1.7):

  • The model classified this pair as class 0.
  • This point is closer to the points in the training data that are labeled as 0, that is why it’s classified as 0.
  • So basically values around (1.5, 1.7) will fall in the region classified as 0.

Prediction for (3.5, 2.8):

  • Same thing here, the model classified this as class 0.
  • This point is closer to the points in the training data that are labeled as 0, that’s why it’s classified as 0.
  • So basically values around (3.5, 2.8) will fall in the region classified as 0.

Prediction for (8, 3):

  • This time, the model classified this as class 1.
  • Again, this is because this point is closer to the points in the training data that are labeled as 1.
  • So basically, values around (8, 3) will fall in the region classified as 1.

Conclusion and explanation of the results

  • Our model has now learned how to split the data. That is because the decision tree algorithm splits the data based on attribute values that best separate the classes. The split points are determined by minimizing the Gini impurity.

  • It has also learned how to classify data. Because each test point is classified by traversing the decision tree from the root to a leaf node. The path taken is determined by the attribute values of the test point.

    For instance:

    • For (1.5, 1.7), the decision tree routes it through splits that lead to class 0.
    • For (3.5, 2.8), the decision tree routes it through splits that lead to class 0.
    • For (8, 3), the decision tree routes it through splits that lead to class 1.

So finally, our model has learned to separate the data into regions where each region predominantly contains points from one class. The predictions for the test points reflect the learned patterns from the training data, showing the ability of the decision tree to generalize and classify new instances based on these patterns.
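The traversal itself is simple: each internal node compares one feature against a split value and descends left or right until it reaches a leaf. Here is a minimal sketch using a hypothetical node layout (not EasyML’s internal format), with a hand-picked split consistent with the training data:

```lua
-- Illustrative tree traversal; the node layout here is hypothetical.
-- An internal node tests feature `index` against `value`; a leaf is a class.
local tree = {
	index = 1, value = 4.3, -- split on the first feature
	left = 0,               -- x1 < 4.3  -> class 0
	right = 1,              -- x1 >= 4.3 -> class 1
}

local function predict(node, row)
	local branch
	if row[node.index] < node.value then
		branch = node.left
	else
		branch = node.right
	end
	if type(branch) == "table" then
		return predict(branch, row) -- descend into a subtree
	end
	return branch                   -- reached a leaf: return its class
end

print(predict(tree, {1.5, 1.7})) -- 0
print(predict(tree, {3.5, 2.8})) -- 0
print(predict(tree, {8, 3}))     -- 1
```

A deeper tree is just the same structure with subtree tables in place of the leaf values.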

Good job! Now you know how to use the Decision Tree model. :tada:

Testing the Neural Network model

In this section you’ll learn how to use the Neural Network model. This is the hardest model to understand in this API, but I’m sure you’ll be able to follow. So without further ado, let’s dive in!

Tests and results

Testing script:

local MachineLearningModule = require(game.ServerStorage.MachineLearningModule) -- Requiring the main module

-- Example for the Neural Network model
print("============== Neural Network =================")
local nn_model = MachineLearningModule.new(MachineLearningModule.ModelType.NeuralNetwork)

-- Training data
local nn_data = {
   {0, 0}, {0, 1}, {1, 0}, {1, 1}
}
local nn_targets = {
   {0}, {0}, {0}, {1}
}
local nn_params = {
   numInputs = 2,
   numHidden = 48,
   numOutputs = 1,
   epochs = 20000,
   learningRate = 0.01,
   debugMode = false -- adds additional prints in the console for each epoch
}

-- Training the model and then using it to predict the result of each row of the AND truth table
nn_model:train(nn_data, nn_targets, nn_params)
local nn_prediction = nn_model:predict({0, 0})
local nn_prediction2 = nn_model:predict({0, 1})
local nn_prediction3 = nn_model:predict({1, 0})
local nn_prediction4 = nn_model:predict({1, 1})

-- Printing in the console the results of these outcomes
print("Neural Network Prediction for (0, 0) is", math.floor(nn_prediction + 0.5))
print("Neural Network Prediction for (0, 1) is", math.floor(nn_prediction2 + 0.5))
print("Neural Network Prediction for (1, 0) is", math.floor(nn_prediction3 + 0.5))
print("Neural Network Prediction for (1, 1) is", math.floor(nn_prediction4 + 0.5))

Expected results:

Explanations

Now that we have seen how this model works, let’s break down the results step by step and analyze it together:

AND logic gate and Training data

The training dataset we have set in nn_data is basically the truth table of the AND logic gate. But before comparing our results to what we expect, let’s first have a look at what exactly an AND logic gate is.

An AND logic gate (or AND gate) is a basic digital logic gate that outputs 1 only when both of its inputs are 1. In all other cases, it outputs 0. The truth table for an AND gate is as follows:

A | B | A AND B
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

Neural Network results and conclusion

So now that we know how the AND gate is supposed to work, let’s see if our model learned it correctly.

  • First input, (0, 0):

    • The prediction made by our model is 0. The neural network correctly predicts that the output is 0 when both inputs are 0. It’s exactly what we expected.
  • Second input, (0, 1):

    • The prediction made by our model is 0. The neural network correctly predicts that the output is 0 when the first input is 0 and the second input is 1. Again, that’s what we expected, so our model is doing pretty well so far!
  • Third input (1, 0):

    • The prediction made by our model is 0. The neural network correctly predicts that the output is 0 when the first input is 1 and the second input is 0.
  • Last input (1, 1):

    • The prediction made by our model is 1. The neural network correctly predicts that the output is 1 when both inputs are 1. So our model learned the AND logic gate correctly, hooray!

The neural network has successfully learned the AND gate function. It predicts the correct output for each possible input pair, showing that it has effectively modeled this basic logic gate. This is a good indication that our neural network is functioning as expected, and the training process was successful.
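As an aside, AND is simple enough that even a single sigmoid neuron can compute it, which is part of why a small network learns it so quickly. Here is a standalone sketch in plain Lua with hand-picked (not learned) weights:

```lua
-- A single sigmoid neuron computing AND, with hand-picked (not learned) weights.
local function sigmoid(x)
	return 1 / (1 + math.exp(-x))
end

local w1, w2, bias = 20, 20, -30 -- the neuron fires only when both inputs are 1

local function neuronAND(a, b)
	local activation = sigmoid(w1 * a + w2 * b + bias)
	return math.floor(activation + 0.5) -- round to 0 or 1, as in the guide
end

print(neuronAND(0, 0), neuronAND(0, 1), neuronAND(1, 0), neuronAND(1, 1)) -- 0 0 0 1
```

Training does essentially the same thing, except the weights and bias are found automatically by gradient descent over many epochs instead of being chosen by hand.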

We won’t go into too much detail for this model because it isn’t really necessary for this guide. But if you’re curious about how it works, go ahead and check the module script!

Impressive! Now you know how to use the Neural Network model. :tada:

Making a script to test every model at once

In this section we’ll make a script that combines all the testing scripts we saw above.

Main script:

-- All in one testing script

local MachineLearningModule = require(game.ServerStorage.MachineLearningModule)

-- Example for the Linear Regression model
print("============== Linear Regression =================")
local lr_model = MachineLearningModule.new(MachineLearningModule.ModelType.LinearRegression)

-- Training data
local lr_data = {1, 2, 3, 4, 5}
local lr_targets = {2, 4, 6, 8, 10}

-- Training the model and then using it to predict a few simple outcomes
lr_model:train(lr_data, lr_targets)
local lr_prediction = lr_model:predict(6)
local lr_prediction2 = lr_model:predict(7)
local lr_prediction3 = lr_model:predict(8)

-- Printing in the console the results of these outcomes
print("Linear Regression Prediction for 6 is", lr_prediction)
print("Linear Regression Prediction for 7 is", lr_prediction2)
print("Linear Regression Prediction for 8 is", lr_prediction3)

-- Example for the Decision Tree model
print("============== Decision Tree =================")
local dt_model = MachineLearningModule.new(MachineLearningModule.ModelType.DecisionTree)

-- Training data
local dt_data = {
	{2.7, 2.5, 0}, {1.4, 2.3, 0}, {3.3, 4.4, 0},
	{1.3, 1.8, 0}, {3.0, 3.0, 0}, {7.6, 2.7, 1},
	{5.3, 2.0, 1}, {6.9, 1.7, 1}, {8.0, 3.0, 1}
}
local dt_params = {maxDepth = 2, minSize = 1}

-- Training the model with the tree parameters (basically the size of the decision tree) and then using it to predict a few simple outcomes
dt_model:train(dt_data, {}, dt_params)
local dt_prediction = dt_model:predict({1.5, 1.7})
local dt_prediction2 = dt_model:predict({3.5, 2.8})
local dt_prediction3 = dt_model:predict({8, 3})

-- Printing in the console the results of these outcomes
print("Decision Tree Prediction for (1.5, 1.7) is", dt_prediction)
print("Decision Tree Prediction for (3.5, 2.8) is", dt_prediction2)
print("Decision Tree Prediction for (8, 3) is", dt_prediction3)

-- Example for the Neural Network model
print("============== Neural Network =================")
local nn_model = MachineLearningModule.new(MachineLearningModule.ModelType.NeuralNetwork)

-- Training data
local nn_data = {
	{0, 0}, {0, 1}, {1, 0}, {1, 1}
}
local nn_targets = {
	{0}, {0}, {0}, {1}
}
local nn_params = {
	numInputs = 2,
	numHidden = 48,
	numOutputs = 1,
	epochs = 20000,
	learningRate = 0.01,
	debugMode = false -- adds additional prints in the console for each epoch
}

-- Training the model and then using it to predict the result of each row of the AND truth table
nn_model:train(nn_data, nn_targets, nn_params)
local nn_prediction = nn_model:predict({0, 0})
local nn_prediction2 = nn_model:predict({0, 1})
local nn_prediction3 = nn_model:predict({1, 0})
local nn_prediction4 = nn_model:predict({1, 1})

-- Printing in the console the results of these outcomes
print("Neural Network Prediction for (0, 0) is", math.floor(nn_prediction + 0.5))
print("Neural Network Prediction for (0, 1) is", math.floor(nn_prediction2 + 0.5))
print("Neural Network Prediction for (1, 0) is", math.floor(nn_prediction3 + 0.5))
print("Neural Network Prediction for (1, 1) is", math.floor(nn_prediction4 + 0.5))

Expected results:

Applications in Roblox games

Applying machine learning concepts to a Roblox game can open up a whole new genre of games where the player is truly immersed: they can roleplay with NPCs that act like humans and behave just like real players, or face intelligent NPCs (enemy AIs or friendly NPCs alike) that can make decisions by themselves!

It can also enable obbies that adapt directly to the player’s level: the neural network model we saw above could analyze the player’s performance and change the game difficulty in real time (for example, an auto-evolving obby that adapts to the player’s level). This would create a personalized in-game experience, so players won’t get bored as easily as before!

The possibilities are endless, and there are many other examples, like procedural terrain generation or personalized recommendation systems, that are also worth checking out.

Updates

This is the first version of this API/modules. I will keep updating this to include more features and make it even easier to work with or include in your Roblox games.

Potential new features:

  • AI-driven NPC model (so you can use this model and adapt it easily for your games)
  • Personalized user experience
  • AI-powered procedural content generation (to generate personalized storylines or dialogue trees based on the player’s decisions)

Feedback and questions

If you have any questions or if you are having any issues, let me know!

Any feedback or ideas are welcome too, so don’t hesitate to tell us!

23 Likes

why didn’t it do anything? please explain how it works and what it’s for

1 Like

I made a guide to explain everything in details, please read it.

1 Like

ohh yeah! i see, sorry for that, I just read it.

2 Likes

Well I’m intrigued, but will this work for NPCs as you explained? You did list an AI-driven NPC model under potential new features. How am I supposed to do that?

2 Likes

I’m currently working on that feature; it will be available in the next update. To answer your question, you would typically use a combination of the decision tree and neural network models to make the NPCs actually think for themselves. They would be reactive to any action made by the player, thanks to the neural network we provided them. They could even engage in conversation or start some kind of relationship with the player, thanks to their ability to make decisions by themselves.

I hope I answered your question correctly!

1 Like

thanks, but I don’t want to be mean; my question was basically whether you have a code example for doing that. That’s what I was trying to say.

Oh, well sorry then lol, it’s gonna be available in the next update for sure though!

Alright. I’ll keep in touch thank you!

1 Like

I think this might be right, but I don’t know why it just keeps choosing 1.

Code:

local RS = game:GetService("ReplicatedStorage")
local Modules = require(RS:WaitForChild("Modules").Modules)
local machineLearning = Modules.MachineLearningModule
local nn_model = machineLearning.new(machineLearning.ModelType.NeuralNetwork)
local dt_model = machineLearning.new(machineLearning.ModelType.DecisionTree)

local NPC = workspace:WaitForChild("NPCFolder"):WaitForChild("tracedrounds")
local humanoid = NPC:FindFirstChildWhichIsA("Humanoid")
local humanoidRootPart = humanoid.RootPart

-- Define constants
local NUM_INPUTS = 5
local NUM_HIDDEN = 5
local NUM_OUTPUTS = 4 -- For actions: move forward, move backward, turn left, turn right
local EPOCHS = 1000
local LEARNING_RATE = 0.1
local DEBUG_MODE = false

-- Example training data for Decision Tree
local trainingData = {
	{1, 0, 0, 0}, -- Forward obstacle detected
	{0, 1, 0, 0}, -- Backward obstacle detected
	{0, 0, 1, 0}, -- Right obstacle detected
	{0, 0, 0, 1}  -- Left obstacle detected
}

local trainingTargets = {
	1, -- Move forward
	2, -- Move backward
	3, -- Turn left
	4  -- Turn right
}

-- Initialize Neural Network model
nn_model:train({}, trainingTargets, {
	numInputs = NUM_INPUTS,
	numHidden = NUM_HIDDEN,
	numOutputs = NUM_OUTPUTS,
	epochs = EPOCHS,
	learningRate = LEARNING_RATE,
	debugMode = DEBUG_MODE
})

-- Initialize Decision Tree model with training data
dt_model:train(trainingData, trainingTargets, {maxDepth = 5, minSize = 1})

-- Define obstacle detection function
local function detectObstacles()
	local obstacles = {}
	local directions = {
		{ direction = humanoidRootPart.CFrame.LookVector, name = "forward" },
		{ direction = -humanoidRootPart.CFrame.LookVector, name = "backward" },
		{ direction = humanoidRootPart.CFrame.RightVector, name = "right" },
		{ direction = -humanoidRootPart.CFrame.RightVector, name = "left" }
	}
	for _, dir in ipairs(directions) do
		local ray = Ray.new(humanoidRootPart.Position, dir.direction * 10)
		
		local params = RaycastParams.new()
		params.RespectCanCollide = true
		params.FilterType = Enum.RaycastFilterType.Exclude
		params.FilterDescendantsInstances  = {NPC}
		
		local hit, position = workspace:Raycast(ray.Origin, ray.Direction)
		if hit then
			table.insert(obstacles, {direction = dir.name, distance = (humanoidRootPart.Position - hit.Position).magnitude})
		end
	end
	return obstacles
end

-- Define action functions
local function moveForward()
	humanoid:MoveTo(humanoidRootPart.Position + humanoidRootPart.CFrame.LookVector * 5)
end

local function moveBackward()
	humanoid:MoveTo(humanoidRootPart.Position - humanoidRootPart.CFrame.LookVector * 5)
end

local function turnLeft()
	humanoidRootPart.CFrame = humanoidRootPart.CFrame * CFrame.Angles(0, math.rad(-90), 0)
end

local function turnRight()
	humanoidRootPart.CFrame = humanoidRootPart.CFrame * CFrame.Angles(0, math.rad(90), 0)
end

-- Define function to choose action using the neural network model
local function chooseAction(obstacles)
	local inputs = {}
	-- Process obstacle data into inputs
	-- Example: 1 for obstacle detected, 0 for no obstacle
	for _, direction in ipairs({"forward", "backward", "left", "right"}) do
		local detected = false
		for _, obstacle in ipairs(obstacles) do
			if obstacle.direction == direction then
				detected = true
				break
			end
		end
		table.insert(inputs, detected and 1 or 0)
	end

	-- Predict action
	local output = nn_model:predict(inputs)
	print(output)
	local action = output

	-- Execute action
	if action == 1 then
		moveForward()
	elseif action == 2 then
		moveBackward()
	elseif action == 3 then
		turnLeft()
	elseif action == 4 then
		turnRight()
	end
end

-- Main loop
while true do
	local obstacles = detectObstacles()
	chooseAction(obstacles)
	wait(1) -- Adjust as needed for your game
end

Edit: I know I said I’ll keep in touch, but I’m just testing it with NPCs (also, since I don’t really know code things, I apologize: I used ChatGPT).

2 Likes

I’m interested! However, the plugin seems to be private.

1 Like

Set NUM_HIDDEN to 48 in the constants and try again. NUM_HIDDEN is the number of hidden neurons, which gives the model more capacity to learn, so it produces more accurate results.

Oh oops, thank you for telling me! I’ll fix it now.

1 Like

Plug-in now available, Sorry for the inconvenience.

1 Like

I mean, you could’ve used the model; there is no difference between them. The plugin just inserts it into ServerStorage, while with the model you put it into ServerStorage yourself (no, I was not looking at cheats, I have the model version).

1 Like

I put 48 into NUM_HIDDEN but it’s still choosing action 1.

It’s usually 1 or 0, and it’s always choosing 1.

Also, it’s not just 1; from my point of view it’s always 1 and 3.

I see, well I tried to fix your script (I’m on my phone, so it might not be totally accurate though):

local RS = game:GetService("ReplicatedStorage")
local Modules = require(RS:WaitForChild("Modules").Modules)
local machineLearning = Modules.MachineLearningModule
local nn_model = machineLearning.new(machineLearning.ModelType.NeuralNetwork)
local dt_model = machineLearning.new(machineLearning.ModelType.DecisionTree)

local NPC = workspace:WaitForChild("NPCFolder"):WaitForChild("tracedrounds")
local humanoid = NPC:FindFirstChildWhichIsA("Humanoid")
local humanoidRootPart = humanoid.RootPart

-- Define constants
local NUM_INPUTS = 4 -- Number of directions to consider
local NUM_HIDDEN = 5
local NUM_OUTPUTS = 4 -- For actions: move forward, move backward, turn left, turn right
local EPOCHS = 1000
local LEARNING_RATE = 0.1
local DEBUG_MODE = false

-- Example training data for the neural network
local trainingData = {
    {1, 0, 0, 0},  -- Forward obstacle detected
    {0, 1, 0, 0},  -- Backward obstacle detected
    {0, 0, 1, 0},  -- Right obstacle detected
    {0, 0, 0, 1},  -- Left obstacle detected
}

local trainingTargets = {
    {1}, -- Move forward
    {2}, -- Move backward
    {3}, -- Turn left
    {4}, -- Turn right
}

-- Initialize Neural Network model
nn_model:train(trainingData, trainingTargets, {
    numInputs = NUM_INPUTS,
    numHidden = NUM_HIDDEN,
    numOutputs = NUM_OUTPUTS,
    epochs = EPOCHS,
    learningRate = LEARNING_RATE,
    debugMode = DEBUG_MODE
})

-- Define obstacle detection function
local function detectObstacles()
    local obstacles = {}
    local directions = {
        { direction = humanoidRootPart.CFrame.LookVector, name = "forward" },
        { direction = -humanoidRootPart.CFrame.LookVector, name = "backward" },
        { direction = humanoidRootPart.CFrame.RightVector, name = "right" },
        { direction = -humanoidRootPart.CFrame.RightVector, name = "left" }
    }
    for _, dir in ipairs(directions) do
        local ray = Ray.new(humanoidRootPart.Position, dir.direction * 10)
        
        local params = RaycastParams.new()
        params.RespectCanCollide = true
        params.FilterType = Enum.RaycastFilterType.Exclude
        params.FilterDescendantsInstances = {NPC}
        
        local result = workspace:Raycast(humanoidRootPart.Position, dir.direction * 10, params)
        if result then
            table.insert(obstacles, 1)
        else
            table.insert(obstacles, 0)
        end
    end
    return obstacles
end

-- Define action functions
local function moveForward()
    humanoid:MoveTo(humanoidRootPart.Position + humanoidRootPart.CFrame.LookVector * 5)
end

local function moveBackward()
    humanoid:MoveTo(humanoidRootPart.Position - humanoidRootPart.CFrame.LookVector * 5)
end

local function turnLeft()
    humanoidRootPart.CFrame = humanoidRootPart.CFrame * CFrame.Angles(0, math.rad(-90), 0)
end

local function turnRight()
    humanoidRootPart.CFrame = humanoidRootPart.CFrame * CFrame.Angles(0, math.rad(90), 0)
end

-- Function to choose action using the neural network model
local function chooseAction(obstacles)
    -- Predict action based on obstacles
    local output = nn_model:predict(obstacles)
    local action = output[1]

    -- Execute action
    if action == 1 then
        moveForward()
    elseif action == 2 then
        moveBackward()
    elseif action == 3 then
        turnLeft()
    elseif action == 4 then
        turnRight()
    end
end

-- Main loop
while true do
    local obstacles = detectObstacles()
    chooseAction(obstacles)
    wait(1) -- Adjust as needed for your game
end

Tell me if it works or if you have any issues!

don’t worry, you’re good, but there are errors:
attempt to perform arithmetic (sub) on nil and number in train. Apparently the trainingData doesn’t work.

1 Like