DataPredict [Release 1.18] - General Purpose Machine And Deep Learning Library (Learning AIs, Generative AIs, and more!)

There are only 3 code samples and a video tutorial: 2 from you, and 1 from another person who just took the code from the introduction section of the API. Only one of them is really relevant to my problem.

It’s also a bit annoying to have to keep scrolling to find these resources; it would help anyone reading the API to have them included there. Or you could even just modify your original post to include a bunch of example code.

Ok, I’ll try :createLayers() and see if it works. Edit: It does, but it’s not ideal: it doesn’t allow activation functions like sigmoid or softmax for the output layer.
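
To illustrate (a sketch based on the call signatures used later in this thread, with a separate optimizer object per layer):

local Library = require(script.Parent['AqwamRobloxMachineAndDeepLearningLibrary'])
local NeuralNet = Library.Models.NeuralNetwork.new(1, 0.01)

-- :createLayers() sets one activation function for every layer at once:
-- NeuralNet:createLayers({1, 3, 3, 2}, 'ReLU', Library.Optimizers.AdaptiveMomentEstimation.new())

-- ...whereas building with :addLayer() lets the output layer have its own:
NeuralNet:addLayer(1, true, 'ReLU', Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3, true, 'ReLU', Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3, true, 'ReLU', Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(2, false, 'Sigmoid', Library.Optimizers.AdaptiveMomentEstimation.new())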

I’m having a bit of trouble understanding your last sentence. Do you mean it uses two neurons per class? Two classes for two neurons? Could you provide an example? Also, looking at the relevant code snippet I found by scrolling up, it seems like :setClassesList corresponds to the number of outputs in the neural network. I might’ve read it wrong, though. I will check it again and see if I can derive anything useful from it.
Edit 2: It seems my interpretation is correct; however, the output layer doesn’t seem to support 1 neuron.
Also, I think we have different interpretations of what multi-class classification means.
My interpretation:
[image]

Is yours this by any chance?
[image]

One neuron per class. That’s how my neural network was designed.

Yeap. Something like that. That is how my neural network works.

I see. In my opinion, you should add support for one neuron in the output layer. It just seems a bit weird not to support it.

Also, consider adding softmax as an activation function (if you haven’t already) because it thrives in multi-class classification problems. It also avoids the argmax issue.
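
For reference, here’s a minimal sketch of the idea in plain Lua (illustrative only, not the library’s implementation). The “stable” trick of subtracting the maximum score keeps math.exp from overflowing without changing the result, and the predicted class is simply the index with the highest probability:

local function stableSoftmax(scores)
	-- subtracting the max keeps math.exp from receiving huge positive inputs
	local maxScore = math.max(table.unpack(scores))
	local expScores, expSum = {}, 0
	for i, score in ipairs(scores) do
		expScores[i] = math.exp(score - maxScore)
		expSum = expSum + expScores[i]
	end
	-- normalize so the outputs form a probability distribution (sum = 1)
	for i = 1, #expScores do
		expScores[i] = expScores[i] / expSum
	end
	return expScores
end

print(table.concat(stableSoftmax({2, 1, 0.1}), ", ")) -- roughly 0.66, 0.24, 0.10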

Very well then, I’ll add those two while I fix some code issues that other people have encountered.

1 Like

Updated to Release 1.2 / Beta 1.15.0. It now contains Softmax and StableSoftmax activation functions. Also added 1-neuron output layer support.

1 Like

Hey, did you update the links? I tried both the unstable and the latest stable release versions, and I still get the “Argument 1 and 2 are incompatible! (2, 4) and (4, 4)” error.

1 Like

I didn’t update the links; rather, I just updated the scripts inside those links.

Also which library are you referring to?

Also, it is kind of strange how it doesn’t work on yours, but works on Cffex’s…

2 Likes

For the DataPredict library I tried both this one: DataPredict - Release Version 1.2 - Roblox and this one: Aqwam’s Roblox Machine And Deep Learning Library - Creator Marketplace
And for the matrix library I flip-flopped between this one: Aqwam’s Roblox Matrix Library - Roblox and this one: MatrixL (Aqwam’s Roblox Matrix Library) - Roblox
I think you need to update the links for it to work, but I’m not sure because I don’t have much experience with creating libraries.

2 Likes

Show me the sample code that you are trying to run.

2 Likes

I don’t know if I should be using stable or unstable, but this code is using unstable:

local Library = require(script.Parent['AqwamRobloxMachineAndDeepLearningLibrary'])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
local Optimizer = Library.Optimizers.AdaptiveMomentEstimation.new()

NeuralNet:addLayer(1,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(2,false,'StableSoftmax',Optimizer)
--NeuralNet:createLayers({1,3,3,2},'ReLU',Optimizer)

NeuralNet:setClassesList({0,1})

local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)

print(ModifiedModel:predict({{90, 90}},true))

13:09:51.292  ServerScriptService.MatrixL:105: Argument 1 and 2 are incompatible! (2, 4) and (4, 4)  -  Server - MatrixL:105
  13:09:51.293  Stack Begin  -  Studio
  13:09:51.293  Script 'ServerScriptService.MatrixL', Line 105 - function broadcastAndCalculate  -  Studio - MatrixL:105
  13:09:51.293  Script 'ServerScriptService.MatrixL', Line 117 - function add  -  Studio - MatrixL:117
  13:09:51.293  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Optimizers.AdaptiveMomentEstimation', Line 61 - function calculate  -  Studio - AdaptiveMomentEstimation:61
  13:09:51.293  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork', Line 506 - function gradientDescent  -  Studio - NeuralNetwork:506
  13:09:51.293  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork', Line 930 - function train  -  Studio - NeuralNetwork:930
  13:09:51.293  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent  -  Studio - GradientDescentModifier:153
  13:09:51.293  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Others.GradientDescentModifier', Line 179 - function train  -  Studio - GradientDescentModifier:179
  13:09:51.293  Script 'ServerScriptService.aqwamtestscript', Line 43  -  Studio - aqwamtestscript:43
  13:09:51.293  Stack End  -  Studio

And when I switch the “2” to “1” in the output layer, it gives me an error saying the number of classes is not equal to the number of output neurons, or something like that.

Edit: Strange, I just tried that again, and now it’s giving me the same error as above instead of the “number of classes not equal to the number of output neurons” error.

1 Like

Ah. Please don’t use the same optimizer object on each layer. Individual layers need to have their own.

Different layers have different matrix dimensions, and this is reflected in the optimizer’s calculations.
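
Here’s a rough, hypothetical sketch (plain Lua, not the library’s actual optimizer code) of why sharing one instance breaks: an Adam-style optimizer keeps internal moment matrices sized to the first gradient it receives, so a gradient from a differently-shaped layer hits an element-wise mismatch — exactly the kind of “(2, 4) and (4, 4)” incompatibility reported above.

local function zerosLike(matrix)
	local zeros = {}
	for i = 1, #matrix do
		zeros[i] = {}
		for j = 1, #matrix[i] do
			zeros[i][j] = 0
		end
	end
	return zeros
end

local OptimizerSketch = {}
OptimizerSketch.__index = OptimizerSketch

function OptimizerSketch.new()
	return setmetatable({momentum = nil}, OptimizerSketch)
end

function OptimizerSketch:calculate(gradient)
	-- the internal state takes the shape of the first gradient it receives
	self.momentum = self.momentum or zerosLike(gradient)
	for i = 1, #gradient do
		for j = 1, #gradient[i] do
			-- errors if this gradient's shape differs from the stored state
			self.momentum[i][j] = 0.9 * self.momentum[i][j] + 0.1 * gradient[i][j]
		end
	end
	return self.momentum
end

local shared = OptimizerSketch.new()
shared:calculate({{1, 2}, {3, 4}})           -- state is now 2x2
-- shared:calculate({{1, 2, 3}, {4, 5, 6}})  -- would fail: 2x3 gradient vs 2x2 state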

1 Like

It works now when I use StableSoftmax, but the cost is either 1 or 0 and the first print returns an empty array.

  13:18:16.916  Iteration: 1		Cost: 0  -  Server - BaseModel:149
  13:18:16.916  Data Number: 1		Final Cost: 0
  -  Server - GradientDescentModifier:159
  13:18:16.918  Iteration: 1		Cost: 0  -  Server - BaseModel:149
  13:18:16.918  Data Number: 2		Final Cost: 0
  -  Server - GradientDescentModifier:159
  13:18:16.919  Iteration: 1		Cost: 1  -  Server - BaseModel:149
  13:18:16.919  Data Number: 3		Final Cost: 1
  -  Server - GradientDescentModifier:159
  13:18:16.920  Iteration: 1		Cost: 1  -  Server - BaseModel:149
  13:18:16.921  Data Number: 4		Final Cost: 1
  -  Server - GradientDescentModifier:159
  13:18:16.922  Iteration: 1		Cost: 0  -  Server - BaseModel:149
  13:18:16.922  Data Number: 5		Final Cost: 0
  -  Server - GradientDescentModifier:159
  13:18:16.923  Iteration: 1		Cost: 0  -  Server - BaseModel:149
  13:18:16.923  Data Number: 6		Final Cost: 0
  -  Server - GradientDescentModifier:159
  13:18:16.925  Iteration: 1		Cost: 1  -  Server - BaseModel:149
  13:18:16.925  Data Number: 7		Final Cost: 1
  -  Server - GradientDescentModifier:159
  13:18:16.928  Iteration: 1		Cost: 0  -  Server - BaseModel:149
  13:18:16.928  Data Number: 8		Final Cost: 0
  -  Server - GradientDescentModifier:159
  13:18:16.929  Iteration: 1		Cost: 1  -  Server - BaseModel:149
  13:18:16.929  Data Number: 9		Final Cost: 1
  -  Server - GradientDescentModifier:159
  13:18:16.930   ▼  {
                    [1] = {}
                 }  -  Server - aqwamtestscript:46
  13:18:16.930   ▼  {
                    [1] =  ▼  {
                       [1] = 1
                    }
                 }  -  Server - aqwamtestscript:48

local Library = require(script.Parent['AqwamRobloxMachineAndDeepLearningLibrary'])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)

NeuralNet:addLayer(1,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(1,false,'StableSoftmax',Library.Optimizers.AdaptiveMomentEstimation.new())
--NeuralNet:createLayers({1,3,3,2},'ReLU',Optimizer)

NeuralNet:setClassesList({0})

local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)

print(ModifiedModel:predict({{90, 90}},true))

I also tried switching the activation function to sigmoid, because it would better suit this problem (a short illustration of why is below the error log); however, it gave me an error:

 ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork:119: attempt to call a nil value  -  Server - NeuralNetwork:119
  13:20:11.174  Stack Begin  -  Studio
  13:20:11.174  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork', Line 119  -  Studio - NeuralNetwork:119
  13:20:11.174  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork', Line 924 - function train  -  Studio - NeuralNetwork:924
  13:20:11.174  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent  -  Studio - GradientDescentModifier:153
  13:20:11.174  Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Others.GradientDescentModifier', Line 179 - function train  -  Studio - GradientDescentModifier:179
  13:20:11.174  Script 'ServerScriptService.aqwamtestscript', Line 42  -  Studio - aqwamtestscript:42
  13:20:11.174  Stack End  -  Studio
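
(Why sigmoid fits a binary problem, in plain Lua rather than library code: it squashes any single score into a probability between 0 and 1, so one output neuron is enough for 0/1 labels.)

local function sigmoid(x)
	return 1 / (1 + math.exp(-x))
end

-- large negative scores approach 0, large positive scores approach 1
print(sigmoid(-4), sigmoid(0), sigmoid(4)) -- roughly 0.018, 0.5, 0.982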

That’s normal for the stochastic nature of the training combined with 1-neuron output models…

Also, use “Sigmoid” and not “sigmoid”. I just changed the casing.

I tried that. It also doesn’t work.

local Library = require(script.Parent['AqwamRobloxMachineAndDeepLearningLibrary'])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)

NeuralNet:addLayer(1,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(1,false,'Sigmoid',Library.Optimizers.AdaptiveMomentEstimation.new())
--NeuralNet:createLayers({1,3,3,2},'ReLU',Optimizer)

NeuralNet:setClassesList({0})

local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)

print(ModifiedModel:predict({{90, 90}},true))

Edit: I just realized softmax always sums to 1, so a single-neuron softmax output can only ever be exp(x) / exp(x) = 1. Oops.

1 Like

I have updated the library. Try the newer one.

1 Like

It works now, but it has the same cost every iteration. I’m currently experimenting with different setups to see if maybe there is an issue with the way it’s set up.

local Library = require(script.Parent['DataPredict  - Release Version 1.2'])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.001)

NeuralNet:addLayer(1,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(1,false,'Sigmoid',Library.Optimizers.AdaptiveMomentEstimation.new())
--NeuralNet:createLayers({1,3,3,2},'ReLU',Optimizer)

NeuralNet:setClassesList({0})

local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)

local featureMatrix = {

	{ 0,  0},
	{10, 2},
	{-3, -2},
	{-12, -22},
	{ 2,  2},
	{ 1,  1},
	{-11, -12},
	{ 3,  3},
	{-2, -2},

}

local labelVectorLogistic = {

	{1},
	{1},
	{0},
	{0},
	{1},
	{1},
	{0},
	{1},
	{0}

}

ModifiedModel:train(featureMatrix,labelVectorLogistic)

local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1

print(PredictedVector)

print(ModifiedModel:predict({{90, 90}},true))

1 Like

I think it’s just overfitting…

If you go for 2 neurons, it will show different probabilities.

1 Like

I think it is both overfitting and the dying ReLU issue. I switched to LeakyReLU and now the cost varies.
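
For anyone else hitting this, the difference in plain Lua (an illustrative sketch, not the library’s code): ReLU outputs zero, with zero gradient, for every negative input, so a unit stuck in negative territory stops learning entirely, while LeakyReLU keeps a small slope there.

local function relu(x)
	return math.max(0, x)
end

local function leakyRelu(x, negativeSlope)
	-- a small slope for negative inputs keeps the gradient from dying
	negativeSlope = negativeSlope or 0.01
	return x >= 0 and x or negativeSlope * x
end

print(relu(-5), leakyRelu(-5)) -- 0    -0.05
print(relu(3), leakyRelu(3))   -- 3    3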
Edit: I have to go eat so I will be back later.

1 Like

Can you write some example code here? I really have no idea how this works, and with the comments and the code I could read my way into it. That would really help me and probably a fair number of others!
I would be so infinitely grateful!!! With this model you can probably do so much; my ideas are already limitless…
Unfortunately, there is the problem of understanding it :confused: :unamused: :pensive: :melting_face:
Thank you! :heart:

4 Likes