DataPredict [Release 1.18] - General Purpose Machine And Deep Learning Library (Learning AIs, Generative AIs, and more!)

The environment feature vector must be a table of tables of values.

Please read the DataTypes page in the API reference to see what it looks like.


I think it's:

local testData = {
	{1, 90, 32},
	{1, -120, -41}
}

local predictedVector = LogisticRegressionModel:predict(testData)


Also, thank you very much again. I have fixed it. It must be fatigue that caused me to miss that.


No problem, now it’s like this:

local DQN = DataPredict.Models.QLearningNeuralNetwork.new() -- Create a new model object.

DQN:createLayers({4, 3, 2}) -- Setting up our layers.
DQN:setClassesList({"Up", "Down"}) -- Setting up our classes.
local reward = 0
wait(10)
local part = workspace:FindFirstChild("AIPart")
local oldposition = part.Position.X
DQN:setPrintReinforcementOutput(true)

while true do
	local playerposition = part.Position.X
	--##!!
	local environmentFeatureVector = {
		{playerposition, 0, 0}
	}
	--##!!
	if playerposition < 50 then
		if playerposition == oldposition then
			reward = -1
		else
			reward = playerposition - oldposition
		end
	else
		reward = 50
	end
	oldposition = part.Position.X
	local actionLabel = DQN:reinforce(environmentFeatureVector, reward) -- Run the reinforce() function.
	if actionLabel == "Up" then
		local pol = part.Position
		part.Position.X = pol + 1
	else
		local pol = part.Position
		part.Position.X = pol - 1
	end
	if part.Position.X >= 50 then
		wait(10)
	end
	wait(0.1)
end

Unfortunately, now this error comes up:

  12:06:41.118  ServerScriptService.MatrixL:231: Argument 1 and 2 are incompatible! (1, 3) and (5, 4)  -  Server - MatrixL:231
  12:06:41.118  Stack Begin  -  Studio
  12:06:41.118  Script 'ServerScriptService.MatrixL', Line 231 - function dotProduct  -  Studio - MatrixL:231
  12:06:41.118  Script 'ServerScriptService.DataPredictLibrary.Models.NeuralNetwork', Line 368 - function forwardPropagate  -  Studio - NeuralNetwork:368
  12:06:41.119  Script 'ServerScriptService.DataPredictLibrary.Models.NeuralNetwork', Line 960 - function predict  -  Studio - NeuralNetwork:960
  12:06:41.119  Script 'ServerScriptService.DataPredictLibrary.Models.QLearningNeuralNetwork', Line 211 - function reinforce  -  Studio - QLearningNeuralNetwork:211
  12:06:41.120  Script 'ServerScriptService.AI2', Line 109  -  Studio - AI2:109
  12:06:41.120  Stack End 

Thanks for the help :heart:

Sorry, one more question. In:
local testData = {
	{1, 90, 32},
	{1, -120, -41}
}
Doesn’t there have to be another comma?
Please excuse me if I'm (probably) getting on your nerves so much.

Two issues:

DQN:createLayers({3, 3, 2}) -- 3 inputs, without bias.

local environmentFeatureVector = {
	{1, playerposition, 0, 0} -- The leading 1 is the bias value.
}

The feature vector's column count has to equal the number of input neurons plus one for the bias, which is why the dot product complained about incompatible matrix sizes.

Yay, thank you, it finally works! Thanks to your help, I made my first AI (with your module).
Thank you!

Huh, I get this warning:
14:46:58.779 The model diverged! Please repeat the experiment again or change the argument values. - Server - NeuralNetwork:938
and a little after that, this error:
14:47:09.837 ServerScriptService.DataPredictLibrary.Models.QLearningNeuralNetwork:119: table index is nil - Server - QLearningNeuralNetwork:119

And how can you save the progress of the model so that it does not always start from zero when you run a new test?

That warning happens if you set one of these values too high:

  • Learning rate

  • DiscountFactor

  • Reward Value

The first two values are usually recommended to be between 0 and 1. The last one should be between -1 and 1.

You can save your model using :getModelParameters() and load it using :setModelParameters(). Just look for BaseModel under Models in API Documentation.
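
For example, a minimal sketch using a DataStore (the store and key names here are placeholders; only :getModelParameters() and :setModelParameters() come from the library):

local DataStoreService = game:GetService("DataStoreService")
local modelStore = DataStoreService:GetDataStore("DQNModelStore") -- Placeholder store name.

-- Save the trained parameters (e.g. when training stops).
local success, errorMessage = pcall(function()
	modelStore:SetAsync("AIPartModel", DQN:getModelParameters())
end)
if not success then warn(errorMessage) end

-- Load them back before a new test run.
local ok, savedParameters = pcall(function()
	return modelStore:GetAsync("AIPartModel")
end)
if ok and savedParameters then
	DQN:setModelParameters(savedParameters)
end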


Thanks. My max reward is 2 and my max penalty is 4. But I'm only waiting 0.01 seconds before the next execution; that's the problem, right?
Oh, I just saw that I did something wrong, and the max reward can be much higher.

I now wait 5 seconds and have reworked the reward system.
But anyway (after 5 minutes of training):
16:20:22.208 The model diverged! Please repeat the experiment again or change the argument values. - Server - NeuralNetwork:938
16:20:22.209 Current Number Of Episodes: 165 Current Epsilon: 0.4990005 - Server - QLearningNeuralNetwork:239
16:20:22.209 Right - Server - AI2:181
16:20:22.209 1 - Server - AI2:156
16:20:22.744 The model diverged! Please repeat the experiment again or change the argument values. - Server - NeuralNetwork:938
16:20:22.745 Current Number Of Episodes: 166 Current Epsilon: 0.4990005 - Server - QLearningNeuralNetwork:239
16:20:22.745 Left - Server - AI2:178
16:20:22.746 -2 - Server - AI2:156

What does "Current Epsilon" mean?

The reward is recommended to be between -1 and 1, but it is not strictly necessary. As long as it isn't too high, like 9999, it should be fine. Yours seems okay.

Try adding the individual layers using the :addLayer() function. It might be a neural network structure issue.
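
For example (the parameter list here is an assumption for illustration; check the NeuralNetwork page in the API documentation for the real :addLayer() signature):

-- Hypothetical sketch; verify the parameters against the API docs.
DQN:addLayer(3, true) -- Assumed: 3 neurons in this layer, plus a bias neuron.
DQN:addLayer(3, true)
DQN:addLayer(2, false) -- Assumed: output layer without a bias neuron.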

Epsilon is already explained in the documentation for Q-Learning neural network.
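
In short, epsilon is the exploration rate used by epsilon-greedy action selection: the probability that the model picks a random action instead of the best known one, and it usually decays as training progresses. A conceptual sketch, not the library's actual implementation:

-- Conceptual epsilon-greedy selection; DataPredict handles this internally.
local epsilon = 0.5 -- Current exploration rate; decays over the episodes.

local function chooseAction(bestAction, allActions)
	if math.random() < epsilon then
		-- Explore: take a random action to discover new strategies.
		return allActions[math.random(#allActions)]
	else
		-- Exploit: take the action with the highest predicted Q-value.
		return bestAction
	end
end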


Added a video showcasing self-learning sword-fighting AIs that use this library.


Great video! Would it be possible to share the code from it?


Take the full file instead. It would be a hassle to set everything up otherwise.


Training AIs how to walk…


Wow!

Is there code for this in the docs? :grin:

It doesn't even give results yet. It might need some fine-tuning later. Once I get the results I need, the code will probably get released.


You should give punishments if they fall down. Map soon?

By the way, you should use an R6 rig, since it will be easier in my opinion.

It is an R6 rig. I have also added falling and crawling punishments.

How far should they walk? I'm thinking 100 studs.