DataPredict [Release 1.21] - General Purpose Machine Learning And Deep Learning Library (Learning AIs, Generative AIs, and more!)

I'm not quite sure what you mean by the first question, so I'll skip it for now and explain the rest. You might need to elaborate further on that. It's probably covered in the API here?

For the terminal state, it was removed. The reason is that I expect users to continuously call the :reinforce() function without any limit. Using a terminal state assumes that the model will be trained within a certain period of time. However, this is not true if we apply it to real-life scenarios, where the model needs to keep learning for an unlimited amount of time, and hence we may never reach a terminal state.

Also, while it is true that the model uses the same neural network, you need to understand that different inputs give different outputs for the same model weights. I use the prediction for the current feature vector as the training target for the previous feature vector. This causes a weight update based on the previous feature vector and the predicted value for the current one. (Also refer to the first link under References in the API.)
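To make the bootstrapping idea concrete, here is a minimal sketch in plain Python. The function name and parameters are illustrative only and are not part of the DataPredict API:

```python
# Sketch of the bootstrapped update described above: the model's own
# prediction for the state we just arrived in is used to build the training
# target for the previous state-action pair.

def td_target(reward, q_next_max, discount=0.99):
    # Target for the previous step, built from the predicted best Q-value
    # of the current (next) state -- this is the bootstrapping step.
    return reward + discount * q_next_max

# Example: the previous step earned reward 1.0, and the model currently
# predicts a best Q-value of 2.0 for the state that followed.
print(td_target(1.0, 2.0, discount=0.5))  # 2.0
```

Training the network toward this target is what propagates value information from later states back to earlier ones, even though a single network produces both predictions.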

I think you should've kept it. I don't think this library is going to be used in real-life scenarios, and in a game-development context I think most tasks are episodic, not non-episodic. You could just include support for both and have the user specify whether the task is episodic or not.

The target network is an improvement to DQN that helps stabilize its performance. DQN is prone to divergence, as noted by the deadly triad issue (Why is there a Deadly Triad issue and how to handle it? | by Quentin Delfosse | Medium): combining TD-learning/bootstrapping, off-policy learning, and function approximation leads to instability and divergence. It works by creating a separate neural network, a copy of the policy network, called the target network. The target network is used for calculating the target Q-value when training the DQN, instead of the policy network. The target network is not trained, however; instead, every x steps the weights from the policy network are copied into it. This reduces divergence and increases stability by creating a stationary target that is periodically updated.

Essentially, you can think of the target network as a “frozen” target which changes periodically.
Edit: This article: Why is there a Deadly Triad issue and how to handle it? | by Quentin Delfosse | Medium explains the deadly triad issue much better than the one I linked before.
Also this one: Bootcamp Summer 2020 Week 7 – DQN And The Deadly Triad, Or, Why DQN Shouldn’t Work But Still Does (gatech.edu)
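The periodic-copy mechanism can be sketched in a few lines of Python. All class and variable names here are hypothetical, for illustration only:

```python
import copy

class DQNSketch:
    """Illustrates only the target-network idea: the target net is a frozen
    copy of the policy net, refreshed every `sync_every` training steps."""

    def __init__(self, policy_weights, sync_every=100):
        self.policy_weights = policy_weights                  # trained every step
        self.target_weights = copy.deepcopy(policy_weights)   # frozen copy
        self.sync_every = sync_every
        self.step_count = 0

    def train_step(self, gradient_update):
        # Only the policy network's weights change during gradient descent;
        # targets are computed with self.target_weights (not shown here).
        self.policy_weights = [w + g for w, g in
                               zip(self.policy_weights, gradient_update)]
        self.step_count += 1
        # Every `sync_every` steps, copy policy weights into the target net,
        # giving a stationary target that updates only periodically.
        if self.step_count % self.sync_every == 0:
            self.target_weights = copy.deepcopy(self.policy_weights)
```

Between syncs the target network stays "frozen," which is exactly the stationary-target property described above.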


Hmm, when I refer to real life, I also mean game development. For your games it might be episodic; for mine, it is mostly non-episodic. Also, I want to cover a large range of use cases that includes both episodic and non-episodic scenarios, and it seems that the removal of the terminal state still satisfies both. If I had introduced the terminal state, it would remove the non-episodic use case.

That being said, the terminal state is just a minor detail anyway. I doubt it changes much if I don't have the piece of code Target = Reward.

In addition, if I were to implement that system, I predict I would need to make some changes to the code that would make it less flexible for programmers who wish to use this library out of the box.

Also, thanks for the other articles, by the way. I will have a read through them, but I will always prioritize performance over correctness, since Roblox puts a limit on how much hardware we can use.

The target network isn't that computationally expensive, and it's pretty easy to implement. Neither is adding a terminal-state flag to each experience.

The removal of the terminal state is an issue in my opinion, because from what I've observed of the Q-learning algorithm, it propagates the reward/punishment at a terminal state backwards, so that over time the agent learns which actions lead to the terminal state with the higher reward.

I don't think it would be that inflexible; just add a parameter for episodic or non-episodic. Or even better, make it so that when the update is called, if the terminal-state flag exists it assumes episodic, and if not it assumes non-episodic.

Also, I think the way you implemented DQN is a bit strange. From what I remember of Python DQN tutorials (although I only have basic knowledge of Python), they have separate functions for training, predicting, and setting up how long the DQN will train for. I didn't really understand much, so if you want to check it out yourself, here is a link to something I found online: stable-baselines3/stable_baselines3/dqn/dqn.py at master · DLR-RM/stable-baselines3 · GitHub

Setting the target to the reward in terminal states also improves accuracy. How, you ask? Well, if we calculate the Q-target in a terminal state as usual (without Target = Reward), we're assuming that the state we're in is not terminal, so we use bootstrapping to estimate the expected discounted cumulative reward. This makes no sense, because it assumes there are possible states and actions the agent can take after the terminal state, and is therefore inaccurate. Instead of using a likely inaccurate estimate of the Q-value for the terminal state, why not just use its true Q-value, the reward, instead?
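The distinction being argued for fits in a few lines of Python (illustrative only; `done` here is the hypothetical terminal-state flag):

```python
def q_target(reward, q_next_max, done, discount=0.99):
    # At a terminal state there is no successor state, so the true return is
    # just the reward; bootstrapping past the end of the episode would invent
    # value where none exists.
    if done:
        return reward
    return reward + discount * q_next_max

print(q_target(10.0, 5.0, done=True))                 # 10.0
print(q_target(10.0, 5.0, done=False, discount=0.5))  # 12.5
```

In the non-episodic setting the `done` branch is simply never taken, so supporting the flag costs nothing for continuous tasks.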

I don't know if I'm the only one, but the number of ModuleScript links confused me. So the package version always contains the most up-to-date library, and the ModuleScripts above it contain different versions of the library? So only two modules are required?

  • Package version, option above that, or unstable version
  • Matrix library

Also, I tried the introduction code and it gave me an error. Edit: I managed to fix it; I had forgotten an extra {} around the predicted vector.

Yeah. For the Matrix library, just choose the MatrixL auto-update one.

For DataPredict, you can choose any. The unstable and package versions contain small updates that change quite a few things, which may break your code in the future.


Do you know what’s wrong with my code?

local Library = require(script.Parent["DataPredict  - Release Version 1.2"])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
local Optimizer = Library.Optimizers.AdaptiveMomentEstimation.new()

NeuralNet:addLayer(2,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(1,true,'sigmoid',Optimizer)


local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)
 ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork:569: Input layer has 3 neuron(s), but feature matrix has 2 features!  -  Server - NeuralNetwork:569
  19:12:21.892  Stack Begin  -  Studio
  19:12:21.892  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 569 - function train  -  Studio - NeuralNetwork:569
  19:12:21.892  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent  -  Studio - GradientDescentModifier:153
  19:12:21.892  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 179 - function train  -  Studio - GradientDescentModifier:179
  19:12:21.892  Script 'ServerScriptService.aqwamtestscript', Line 41  -  Studio - aqwamtestscript:41
  19:12:21.893  Stack End  -  Studio

You forgot :setClassesList(), if I am not mistaken. You might also have forgotten to set which gradient descent variant you want, I think.

I checked the ModuleScript code, and it defaults to stochastic gradient descent if you don't pass anything. I tried :setClassesList(), but it gives me the same error:

local Library = require(script.Parent["DataPredict  - Release Version 1.2"])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
local Optimizer = Library.Optimizers.AdaptiveMomentEstimation.new()

NeuralNet:addLayer(2,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(1,true,'sigmoid',Optimizer)

NeuralNet:setClassesList({'a'})


local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)
ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork:569: Input layer has 3 neuron(s), but feature matrix has 2 features!  -  Server - NeuralNetwork:569
  22:40:26.170  Stack Begin  -  Studio
  22:40:26.170  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 569 - function train  -  Studio - NeuralNetwork:569
  22:40:26.170  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent  -  Studio - GradientDescentModifier:153
  22:40:26.170  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 179 - function train  -  Studio - GradientDescentModifier:179
  22:40:26.171  Script 'ServerScriptService.aqwamtestscript', Line 43  -  Studio - aqwamtestscript:43
  22:40:26.171  Stack End  -  Studio

You're supposed to pass {0, 1} to :setClassesList(), because the label vector only contains 0 and 1. If you want to use "a" and "b", then you need to change the label vector to contain those labels instead of 0 and 1.

Oh, I see. What about regression, then? Also, the API explanation of what :setClassesList() does was sort of unclear.

Edit: It still gives me the same error.

local Library = require(script.Parent["DataPredict  - Release Version 1.2"])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
local Optimizer = Library.Optimizers.AdaptiveMomentEstimation.new()

NeuralNet:addLayer(2,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(1,true,'sigmoid',Optimizer)

NeuralNet:setClassesList({0,1})

local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)

When you pass true as the second parameter of addLayer(), it adds a separate bias neuron that is not counted in the first parameter.

:setClassesList() only applies to multi-class classification algorithms; otherwise, follow the labeling standard.
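In other words, the effective width of a layer is the first parameter plus one when the bias flag is true. A tiny sanity check of the mismatch from the error message above (hypothetical helper, not library code):

```python
def effective_width(neurons, has_bias):
    # Under this convention, the bias is an extra neuron that is not counted
    # in the `neurons` argument passed to addLayer().
    return neurons + (1 if has_bias else 0)

# addLayer(2, true, ...) as the input layer yields width 3, but each row of
# the feature matrix has only 2 features -- matching the "Input layer has
# 3 neuron(s), but feature matrix has 2 features!" error.
print(effective_width(2, True))   # 3
print(effective_width(1, True))   # 2  (matches the 2 features, so no error)
```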

So is that why it bypassed the error when I changed the number of nodes in the first :addLayer() to 1? You're saying that the bias counts as a neuron as well, which is why the error appeared? Also, I'm getting a new error now:

23:27:22.240  ServerScriptService.MatrixL:709: attempt to get length of a number value  -  Server - MatrixL:709
  23:27:22.240  Stack Begin  -  Studio
  23:27:22.241  Script 'ServerScriptService.MatrixL', Line 709 - function applyFunction  -  Studio - MatrixL:709
  23:27:22.242  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 167 - function forwardPropagate  -  Studio - NeuralNetwork:167
  23:27:22.242  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 625 - function train  -  Studio - NeuralNetwork:625
  23:27:22.242  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent  -  Studio - GradientDescentModifier:153
  23:27:22.243  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 179 - function train  -  Studio - GradientDescentModifier:179
  23:27:22.243  Script 'ServerScriptService.aqwamtestscript', Line 40  -  Studio - aqwamtestscript:40
  23:27:22.243  Stack End  -  Studio
local Library = require(script.Parent["DataPredict  - Release Version 1.2"])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
local Optimizer = Library.Optimizers.AdaptiveMomentEstimation.new()

NeuralNet:addLayer(1,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(1,false,'sigmoid',Optimizer)

local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)

For the final layer, set the first parameter to 2 and the second parameter to false.

That then gives you this error:

 ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork:103: The number of classes are not equal to number of neurons. Please adjust your last layer using setLayers() function.  -  Server - NeuralNetwork:103
  23:35:20.047  Stack Begin  -  Studio
  23:35:20.047  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 103 - function convertLabelVectorToLogisticMatrix  -  Studio - NeuralNetwork:103
  23:35:20.047  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 557 - function processLabelVector  -  Studio - NeuralNetwork:557
  23:35:20.048  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 611 - function train  -  Studio - NeuralNetwork:611
  23:35:20.048  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent  -  Studio - GradientDescentModifier:153
  23:35:20.048  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 179 - function train  -  Studio - GradientDescentModifier:179
  23:35:20.048  Script 'ServerScriptService.aqwamtestscript', Line 40  -  Studio - aqwamtestscript:40
  23:35:20.049  Stack End  -  Studio

Section of the code that changed:

NeuralNet:addLayer(1,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(2,false,'sigmoid',Optimizer)

You forgot to call :setClassesList({0,1}) before/after adding the new layers. Make sure the classes are set prior to training.

I thought you said it only applies to multi-class classification algorithms, and this only has one class. I've also tested that before and after, and it gives me this error:

 23:42:08.639  ServerScriptService.MatrixL:105: Argument 1 and 2 are incompatible! (2, 4) and (4, 4)  -  Server - MatrixL:105
  23:42:08.639  Stack Begin  -  Studio
  23:42:08.639  Script 'ServerScriptService.MatrixL', Line 105 - function broadcastAndCalculate  -  Studio - MatrixL:105
  23:42:08.640  Script 'ServerScriptService.MatrixL', Line 117 - function add  -  Studio - MatrixL:117
  23:42:08.640  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Optimizers.AdaptiveMomentEstimation', Line 61 - function calculate  -  Studio - AdaptiveMomentEstimation:61
  23:42:08.640  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 293 - function gradientDescent  -  Studio - NeuralNetwork:293
  23:42:08.641  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Models.NeuralNetwork', Line 637 - function train  -  Studio - NeuralNetwork:637
  23:42:08.641  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent  -  Studio - GradientDescentModifier:153
  23:42:08.641  Script 'ServerScriptService.DataPredict  - Release Version 1.2.Others.GradientDescentModifier', Line 179 - function train  -  Studio - GradientDescentModifier:179
  23:42:08.641  Script 'ServerScriptService.aqwamtestscript', Line 44  -  Studio - aqwamtestscript:44
  23:42:08.641  Stack End  -  Studio
local Library = require(script.Parent["DataPredict  - Release Version 1.2"])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
local Optimizer = Library.Optimizers.AdaptiveMomentEstimation.new()



NeuralNet:addLayer(1,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(2,false,'sigmoid',Optimizer)

NeuralNet:setClassesList({0,1})

local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)

Neural networks are multi-class algorithms. The ones that are not are Logistic Regression and Support Vector Machine.

Also, hmm… it seems I'll have to check that later.

But by definition, doesn't multi-class essentially mean classification with multiple outputs? I'm doing sigmoid with only one output. Or wait, checking my code, it seems the last layer has 2 neurons instead of 1, so I guess that would make it multi-class.

These little nuances seem sort of unintuitive. It would help if they were documented in the API, along with some verified code examples. I also just noticed that the neural network is listed under classification; I thought it was a general type of algorithm that could do both classification and regression.

Correction: multi-class means more than two classes. A sigmoid with only one output in a neural network should have only two classes.
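The two binary-classification encodings at play here can be sketched in plain Python (illustrative only, not library code): a single sigmoid output thresholded at 0.5, versus one output neuron per class with the largest score winning.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Encoding 1: one sigmoid output, two classes via a 0.5 threshold.
def predict_single_output(logit):
    return 1 if sigmoid(logit) >= 0.5 else 0

# Encoding 2: one output neuron per class; pick the index of the larger score.
def predict_per_class(scores):  # e.g. [score_for_class_0, score_for_class_1]
    return max(range(len(scores)), key=lambda i: scores[i])

print(predict_single_output(2.3))     # 1
print(predict_per_class([0.2, 0.8]))  # 1
```

Both encodings represent exactly two classes; the second generalizes to more than two, which is why a library built around it asks for an explicit classes list.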

Well, the thing is, it can be a regression algorithm, but I traded off that design choice for ease of use for new programmers.

Anyway, I'm pretty sure there are quite a number of code examples if you scroll up through the posts.

If it's still problematic, use the :createLayers() function instead.

I'm a little busy going somewhere right now.

Also, to you it may mean two classes using a single neuron, but in my case it uses two neurons, one for each class.