One neuron per class. That's how my neural network was designed.
Yeap. Something like that. That is how my neural network works.
I see. In my opinion, you should add support for a single neuron in the output layer; it just seems a bit odd not to have it.
Also, consider adding softmax as an activation function (if you haven't already), because it is well suited to multi-class classification problems. It also avoids the argmax issue.
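For reference, softmax turns the raw outputs of the final layer into probabilities that sum to 1, so the predicted class is simply the one with the highest probability. A minimal sketch in plain Lua (my own illustration, not the library's implementation):

local function stableSoftmax(outputs)
	-- Subtract the maximum first for numerical stability (the "stable" part).
	local maxValue = math.max(table.unpack(outputs))
	local exponentials, sum = {}, 0
	for i, value in ipairs(outputs) do
		exponentials[i] = math.exp(value - maxValue)
		sum = sum + exponentials[i]
	end
	for i = 1, #exponentials do
		exponentials[i] = exponentials[i] / sum -- each entry becomes a probability; together they sum to 1
	end
	return exponentials
end

print(stableSoftmax({2, 1, 0.5})[1]) -- the largest raw output gets the largest probability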
Very well then. I'll add those two while I fix some bugs that other people have run into.
Updated Release 1.2 / Beta 1.15.0. Both now contain the Softmax and StableSoftmax activation functions. I also added single-neuron output layer support.
Hey, did you update the links? I tried both the unstable and latest stable release version and I still get the “Argument 1 and 2 are incompatible! (2, 4) and (4, 4)” error.
I didn't update the links; I just updated the scripts behind those links.
Also, which library are you referring to?
Also, it is kind of strange how it doesn't work on yours but works on Cffex…
For the DataPredict library I tried both this one: DataPredict - Release Version 1.2 - Roblox and this one: Aqwam’s Roblox Machine And Deep Learning Library - Creator Marketplace
And for the matrix library I flip-flopped between this one: Aqwam’s Roblox Matrix Library - Roblox and this one: MatrixL (Aqwam’s Roblox Matrix Library) - Roblox
I think you need to update the links for it to work, but I’m not sure because I don’t have much experience with creating libraries.
Show me the sample code that you are trying to run.
I don't know if I should be using stable or unstable, but this code is using unstable:
local Library = require(script.Parent['AqwamRobloxMachineAndDeepLearningLibrary'])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
local Optimizer = Library.Optimizers.AdaptiveMomentEstimation.new()
NeuralNet:addLayer(1,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(3,true,'ReLU',Optimizer)
NeuralNet:addLayer(2,false,'StableSoftmax',Optimizer)
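-- (note: the same Optimizer object is being passed to every layer above)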
--NeuralNet:createLayers({1,3,3,2},'ReLU',Optimizer)
NeuralNet:setClassesList({0,1})
local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)
local featureMatrix = {
    { 0, 0},
    {10, 2},
    {-3, -2},
    {-12, -22},
    { 2, 2},
    { 1, 1},
    {-11, -12},
    { 3, 3},
    {-2, -2},
}
local labelVectorLogistic = {
    {1},
    {1},
    {0},
    {0},
    {1},
    {1},
    {0},
    {1},
    {0}
}
ModifiedModel:train(featureMatrix,labelVectorLogistic)
local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1
print(PredictedVector)
print(ModifiedModel:predict({{90, 90}},true))
13:09:51.292 ServerScriptService.MatrixL:105: Argument 1 and 2 are incompatible! (2, 4) and (4, 4) - Server - MatrixL:105
13:09:51.293 Stack Begin - Studio
13:09:51.293 Script 'ServerScriptService.MatrixL', Line 105 - function broadcastAndCalculate - Studio - MatrixL:105
13:09:51.293 Script 'ServerScriptService.MatrixL', Line 117 - function add - Studio - MatrixL:117
13:09:51.293 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Optimizers.AdaptiveMomentEstimation', Line 61 - function calculate - Studio - AdaptiveMomentEstimation:61
13:09:51.293 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork', Line 506 - function gradientDescent - Studio - NeuralNetwork:506
13:09:51.293 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork', Line 930 - function train - Studio - NeuralNetwork:930
13:09:51.293 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent - Studio - GradientDescentModifier:153
13:09:51.293 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Others.GradientDescentModifier', Line 179 - function train - Studio - GradientDescentModifier:179
13:09:51.293 Script 'ServerScriptService.aqwamtestscript', Line 43 - Studio - aqwamtestscript:43
13:09:51.293 Stack End - Studio
And when I switch the “2” to a “1” in the output layer, it gives me an error saying the number of classes is not equal to the number of output neurons, or something like that.
Edit: Strange, I just tried that again, and now it's giving me the same error as above instead of the “number of classes not equal to the number of output neurons” error.
Ah. Please don't use the same optimizer object on every layer. Each layer needs its own.
Different layers have different matrix dimensions, and that carries over into the optimizer's internal calculations.
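For example, something like this, based on your script (note that every call constructs its own optimizer object):

NeuralNet:addLayer(1,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(2,false,'StableSoftmax',Library.Optimizers.AdaptiveMomentEstimation.new())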
It works now when I use StableSoftmax, but the cost is always either 1 or 0, and the first print returns an empty array.
13:18:16.916 Iteration: 1 Cost: 0 - Server - BaseModel:149
13:18:16.916 Data Number: 1 Final Cost: 0
- Server - GradientDescentModifier:159
13:18:16.918 Iteration: 1 Cost: 0 - Server - BaseModel:149
13:18:16.918 Data Number: 2 Final Cost: 0
- Server - GradientDescentModifier:159
13:18:16.919 Iteration: 1 Cost: 1 - Server - BaseModel:149
13:18:16.919 Data Number: 3 Final Cost: 1
- Server - GradientDescentModifier:159
13:18:16.920 Iteration: 1 Cost: 1 - Server - BaseModel:149
13:18:16.921 Data Number: 4 Final Cost: 1
- Server - GradientDescentModifier:159
13:18:16.922 Iteration: 1 Cost: 0 - Server - BaseModel:149
13:18:16.922 Data Number: 5 Final Cost: 0
- Server - GradientDescentModifier:159
13:18:16.923 Iteration: 1 Cost: 0 - Server - BaseModel:149
13:18:16.923 Data Number: 6 Final Cost: 0
- Server - GradientDescentModifier:159
13:18:16.925 Iteration: 1 Cost: 1 - Server - BaseModel:149
13:18:16.925 Data Number: 7 Final Cost: 1
- Server - GradientDescentModifier:159
13:18:16.928 Iteration: 1 Cost: 0 - Server - BaseModel:149
13:18:16.928 Data Number: 8 Final Cost: 0
- Server - GradientDescentModifier:159
13:18:16.929 Iteration: 1 Cost: 1 - Server - BaseModel:149
13:18:16.929 Data Number: 9 Final Cost: 1
- Server - GradientDescentModifier:159
13:18:16.930 ▼ {
    [1] = {}
} - Server - aqwamtestscript:46
13:18:16.930 ▼ {
    [1] = ▼ {
        [1] = 1
    }
} - Server - aqwamtestscript:48
local Library = require(script.Parent['AqwamRobloxMachineAndDeepLearningLibrary'])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
NeuralNet:addLayer(1,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(1,false,'StableSoftmax',Library.Optimizers.AdaptiveMomentEstimation.new())
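-- note: softmax over a single output neuron can only ever output exactly 1, since softmax outputs sum to 1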
--NeuralNet:createLayers({1,3,3,2},'ReLU',Optimizer)
NeuralNet:setClassesList({0})
local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)
local featureMatrix = {
    { 0, 0},
    {10, 2},
    {-3, -2},
    {-12, -22},
    { 2, 2},
    { 1, 1},
    {-11, -12},
    { 3, 3},
    {-2, -2},
}
local labelVectorLogistic = {
    {1},
    {1},
    {0},
    {0},
    {1},
    {1},
    {0},
    {1},
    {0}
}
ModifiedModel:train(featureMatrix,labelVectorLogistic)
local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1
print(PredictedVector)
print(ModifiedModel:predict({{90, 90}},true))
I also tried switching the activation function to sigmoid because it would better suit this problem; however, it gave me an error:
ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork:119: attempt to call a nil value - Server - NeuralNetwork:119
13:20:11.174 Stack Begin - Studio
13:20:11.174 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork', Line 119 - Studio - NeuralNetwork:119
13:20:11.174 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Models.NeuralNetwork', Line 924 - function train - Studio - NeuralNetwork:924
13:20:11.174 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Others.GradientDescentModifier', Line 153 - function startStochasticGradientDescent - Studio - GradientDescentModifier:153
13:20:11.174 Script 'ServerScriptService.AqwamRobloxMachineAndDeepLearningLibrary.Others.GradientDescentModifier', Line 179 - function train - Studio - GradientDescentModifier:179
13:20:11.174 Script 'ServerScriptService.aqwamtestscript', Line 42 - Studio - aqwamtestscript:42
13:20:11.174 Stack End - Studio
That's normal, given the stochastic nature of the training plus a 1-neuron output model…
Also, use “Sigmoid” and not “sigmoid”. I just changed the casing.
I tried that. It also doesn’t work.
local Library = require(script.Parent['AqwamRobloxMachineAndDeepLearningLibrary'])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.01)
NeuralNet:addLayer(1,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(1,false,'Sigmoid',Library.Optimizers.AdaptiveMomentEstimation.new())
--NeuralNet:createLayers({1,3,3,2},'ReLU',Optimizer)
NeuralNet:setClassesList({0})
local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)
local featureMatrix = {
    { 0, 0},
    {10, 2},
    {-3, -2},
    {-12, -22},
    { 2, 2},
    { 1, 1},
    {-11, -12},
    { 3, 3},
    {-2, -2},
}
local labelVectorLogistic = {
    {1},
    {1},
    {0},
    {0},
    {1},
    {1},
    {0},
    {1},
    {0}
}
ModifiedModel:train(featureMatrix,labelVectorLogistic)
local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1
print(PredictedVector)
print(ModifiedModel:predict({{90, 90}},true))
Edit: I just realized softmax outputs always sum to 1. Oops.
I have updated the library. Try the newer one.
It works now, but it has the same cost per iteration. I'm currently experimenting with different setups to see if maybe there is an issue with the way it's set up.
local Library = require(script.Parent['DataPredict - Release Version 1.2'])
local NeuralNet = Library.Models.NeuralNetwork.new(1,0.001)
NeuralNet:addLayer(1,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(3,true,'ReLU',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:addLayer(1,false,'Sigmoid',Library.Optimizers.AdaptiveMomentEstimation.new())
--NeuralNet:createLayers({1,3,3,2},'ReLU',Optimizer)
NeuralNet:setClassesList({0})
local ModifiedModel = Library.Others.GradientDescentModifier.new(NeuralNet)
local featureMatrix = {
    { 0, 0},
    {10, 2},
    {-3, -2},
    {-12, -22},
    { 2, 2},
    { 1, 1},
    {-11, -12},
    { 3, 3},
    {-2, -2},
}
local labelVectorLogistic = {
    {1},
    {1},
    {0},
    {0},
    {1},
    {1},
    {0},
    {1},
    {0}
}
ModifiedModel:train(featureMatrix,labelVectorLogistic)
local PredictedVector = ModifiedModel:predict({{90, 90}}) -- Should be 1
print(PredictedVector)
print(ModifiedModel:predict({{90, 90}},true))
I think it’s just overfitting…
If you go for 2 neurons, it will show different probabilities.
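That is, something like this (from your first script):

NeuralNet:addLayer(2,false,'StableSoftmax',Library.Optimizers.AdaptiveMomentEstimation.new())
NeuralNet:setClassesList({0,1})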
I think it is both overfitting and the dying ReLU issue. I switched to LeakyReLU and now the cost varies.
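For reference, here is the difference in plain Lua (my own sketch, not the library's code). ReLU outputs zero, with zero gradient, for every negative input, so a neuron stuck in the negative region stops learning ("dying ReLU"); LeakyReLU keeps a small slope there:

local function relu(x)
	return math.max(0, x) -- zero output and zero gradient for all x < 0
end

local function leakyRelu(x)
	-- the 0.01 slope is a common default; the library's value may differ
	return x > 0 and x or 0.01 * x
end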
Edit: I have to go eat so I will be back later.
Can you write some example code here? I really have no idea how this works, and I could then read through the code and the comments. That would really help me, and probably a fair few others!
I would be so infinitely grateful!!! With this model you can probably do so much. My ideas are already limitless…
Unfortunately, understanding it is the problem.
Thank you!
The tutorials are already available in the documentation. Let me know if you're curious about anything that isn't covered under the tutorial section.