I have no idea how to continue learning where it left off, but it would be a really nice feature.
Is there a way I can continue training with the saved training data?
Amazing module. While playing around I made a self-driving car (on a track) for fun.
Pretty cool.
I have a few questions though. I plan on making neural networks from scratch, and I want to know how mutation, crossover, etc. work in genetic algorithms, and how backpropagation works in supervised learning.
A link to a good article/video or a good explanation would be appreciated. Thanks.
Give them speed control and congrats, you made super ro-kart.
I would highly recommend 3blue1brown’s series of videos on neural networks. They explain everything very well and with examples, including backpropagation.
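To make the backpropagation idea concrete, here's a minimal sketch in plain Lua (hypothetical, not part of this module): a single neuron fitted to y = 2x + 1 by gradient descent. Real backpropagation applies the same chain rule layer by layer through the whole network.

```lua
-- One-neuron gradient descent sketch: learn y = 2x + 1 from noiseless samples.
local w, b = 0, 0    -- weight and bias, initialized to zero
local rate = 0.05    -- learning rate

local samples = {{1, 3}, {2, 5}, {3, 7}, {4, 9}} -- {input, target} pairs

for epoch = 1, 2000 do
	for _, s in ipairs(samples) do
		local x, target = s[1], s[2]
		local prediction = w * x + b
		local err = prediction - target -- derivative of 0.5 * err^2 w.r.t. prediction
		-- Chain rule: d(loss)/dw = err * x, d(loss)/db = err
		w = w - rate * err * x
		b = b - rate * err
	end
end

print(w, b) -- approaches w = 2, b = 1
```

The 3blue1brown videos show how this per-parameter update generalizes to many layers at once.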
For genetic algorithms, crossovers are usually done by randomly choosing ‘traits’ of the 2 parent networks that are mixed together to create the child network. These can be network shape, node count, or, most commonly, individual nodes.
As this is done, the mutations are also applied: just add a tiny random number to every parameter (weights, biases, etc.). You can also multiply instead, but in my experience that doesn't work well. This video doesn't have much and lacks a voiceover, but it should be good enough.
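As a hedged illustration of the two steps described above (plain Lua on flat parameter lists, not this module's internals, since real networks group parameters per node):

```lua
-- Sketch of genetic-algorithm crossover + mutation on flat parameter lists.
math.randomseed(42)

-- Crossover: each parameter is randomly inherited from parent A or parent B.
local function crossover(parentA, parentB)
	local child = {}
	for i = 1, #parentA do
		child[i] = (math.random() < 0.5) and parentA[i] or parentB[i]
	end
	return child
end

-- Mutation: add a tiny random number in [-range, range] to every parameter.
-- (Multiplying instead tends to work poorly, as noted above.)
local function mutate(params, range)
	for i = 1, #params do
		params[i] = params[i] + (math.random() * 2 - 1) * range
	end
	return params
end

local a = {1.0, 1.0, 1.0}
local b = {-1.0, -1.0, -1.0}
local child = mutate(crossover(a, b), 0.05)
-- Each child value is within 0.05 of either 1 or -1.
```

Swapping whole nodes instead of single parameters works the same way; you just copy a node's entire weight group from one parent.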
Hello. I'm trying to make a basketball AI that keeps changing the velocity of the ball until a raycast hits the goal part, but I don't know how to do that with the GeneticAlgorithm system. Can someone help me, please?
I am having some issues lately: after some time of training (2-6 minutes), Studio terminates itself. Because of that, I published the place and tried it there, but got the same result; instead of closing itself, it just says "lost connection". Any help is much appreciated.
Did you find out if you could continue training where it left off?
Hey, your module is amazing! However, is it possible to continue training a network where it left off if you save its neural network? Also, I have an issue where my population doesn't seem to evolve and just produces random results; I'm not sure why, since my inputs are just distances. Thanks for any help!
Could you share the source code of this place? I'm interested, since I'm having issues with mine. Thanks!
Great to know that y’all like the module! Unfortunately, since I wrote this thing 2 years ago and don’t have the time to do open-source work anymore due to working with RB Battles, I can’t make heads or tails of the code myself!
Before being hired, I worked on a successor to this library that is roblox-ts based and leagues more sophisticated and performant; it works similarly to Google's TensorFlow, including tensors, parallelization, automatic differentiation, etc. However, I don't have the time to finish it for now and will only be able to catch up on it if we have sufficiently long breaks around here.
It's OK, your module is already incredible, and even though I can see where you could improve it, I'm happy with it as of right now. Just a little question: I'm having some issues trying to evolve a genetic simulation, and I don't know if it's because of my inputs or how I manage my outputs. If you have a bit of time and are OK with answering some of my questions, that would be amazing. Thanks a lot!
It would be cool if you released the module even if it's unfinished; somebody would probably finish it enough for it to be usable.
My schedule is opening up soon so I’ll be able to work on it again and start working on a barebones release before giving it all the bells and whistles. It probably won’t have genetic algorithms or LSTMs (useless on Roblox anyway) to start with, but we’ll get there.
I'm planning on making a neural network module. It's extremely object-oriented. Could I DM you about it? I'm not sure if the math for it is correct.
Has the softmax activation function been implemented yet?
Sorry for any late replies; I don’t frequent the forum much.
This library will eventually be superseded by another written in roblox-ts that mimics TensorFlow. I'm currently employed, so I don't have much time to work on it, but I believe most of the core work is done; I just need to add some QoL functions before finishing the network side of things (which, surprisingly enough, is the smallest bit). Softmax will be included.
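In the meantime, softmax is easy to hand-roll on a plain array of outputs. A minimal sketch (not this library's API), with the usual max-subtraction for numerical stability:

```lua
-- Softmax: exponentiate each value and normalize so the results sum to 1.
-- Subtracting the maximum first avoids overflow for large inputs without
-- changing the result, since exp(v - m) / sum(exp(v - m)) = exp(v) / sum(exp(v)).
local function softmax(values)
	local max = -math.huge
	for _, v in ipairs(values) do
		if v > max then max = v end
	end
	local exps, sum = {}, 0
	for i, v in ipairs(values) do
		exps[i] = math.exp(v - max)
		sum = sum + exps[i]
	end
	for i = 1, #exps do
		exps[i] = exps[i] / sum
	end
	return exps
end

local probs = softmax({1, 2, 3})
-- probs sums to 1, with larger inputs getting larger probabilities.
```

This is the standard way to turn raw output scores into a probability distribution over classes.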
Hello, how do I train all agents in a population all at once?
math.randomseed(os.clock() + os.time())
--------------------------------------------------------------------------------------
local package = script:WaitForChild('NNLibrary')
local base = require(package.BaseRedirect)
local feedforward_network = require(package.NeuralNetwork.FeedforwardNetwork)
local param_evo = require(package.GeneticAlgorithm.ParamEvo)
local momentum = require(package.Optimizer.Momentum)
local clock = os.clock()
--------------------------------------------------------------------------------------
local generations = 100
local population = 20
local input_module = require(script:WaitForChild('get_input'))

local setting = {
	HiddenActivationName = "LeakyReLU";
	OutputActivationName = "Sigmoid";
	LearningRate = 0.1;
	RandomizeWeights = true;
}

-- Map x from [min, max] to [0, 1]
local function normalize(x, min, max)
	return (x - min) / (max - min)
end

local genetic_setting = {
	-- Evaluate one network: spawn an NPC, let the network drive it until it
	-- touches the loss part or the target (or the timer runs out), then score it.
	ScoreFunction = function(net)
		local npc = script.NPC:Clone()
		local npc_hum = npc:FindFirstChildWhichIsA("Humanoid")
		local timer = 100
		npc:PivotTo(workspace.training_environment.spawn.CFrame + Vector3.new(0, 4.5, 0))
		npc.Parent = workspace.training_environment.agents

		local start_time = os.clock()
		local loss = false
		local trained = false
		local additional_score = 0

		-- End the run when any body part touches the loss part or the target.
		for _, obj in ipairs(npc:GetDescendants()) do
			if obj:IsA("BasePart") then
				obj.Touched:Connect(function(hit)
					if hit == workspace.training_environment.detect_loss.Part then
						loss = true
						trained = true
					elseif hit == workspace.training_environment.target then
						loss = false
						trained = true
					end
				end)
			end
		end

		-- Loop until the run ends either way; `while not loss or not trained`
		-- would keep looping after the target is reached.
		while not trained do
			if timer <= 0 then
				break
			end
			timer -= 0.1
			task.wait()

			local output = net(input_module.get_input(npc, 100, 90))
			-- Combine the four direction outputs into one offset relative to the
			-- NPC and move toward that world position. (Calling MoveTo four times
			-- in a row would just overwrite the previous target each time, and
			-- CFrame.new(...).Position alone is a point near the world origin,
			-- not relative to the NPC.)
			local offset = Vector3.new(output.right - output.left, 0, output.back - output.front)
			npc_hum:MoveTo((npc:GetPivot() * CFrame.new(offset)).Position)
			-- Rotate around the Y axis; CFrame.Angles applies a rotation,
			-- whereas CFrame.new(0, y, 0) would only translate upward.
			npc:PivotTo(npc:GetPivot() * CFrame.Angles(0, math.rad(output.rotate_a - output.rotate_b), 0))
		end

		if loss then
			additional_score -= 10
		else
			additional_score += 30
		end

		npc:Destroy()
		return os.clock() - start_time + additional_score
	end;

	HigherScoreBetter = true;
	PercentageToKill = 0.4;
	PercentageOfKilledToRandomlySpare = 0.1;
	PercentageOfBestParentToCrossover = 0.8;
	PercentageToMutate = 0.8;
	PostFunction = function(geneticAlgo)
		local info = geneticAlgo:GetInfo()
		print("Generation "..info.Generation..", Best Score: "..(info.BestScore / 100).."%")
	end;
	ParameterNoiseRange = 0.01;
	ParameterMutateRange = 0.2;
}
------------------------------------------------------------------------------------------------------------------------
local temp_net = feedforward_network.new(
	{"right", "front_right", "front", "front_left", "left"}, 2, 3,
	{"front", "back", "right", "left", "rotate_a", "rotate_b"}, setting
)
local genetic_algo = param_evo.new(temp_net, population, genetic_setting)
genetic_algo:ProcessGenerations(generations)
local save = genetic_algo:GetBestNetwork():Save()
print(save)
Is there any way to do deep reinforcement learning with this module? I've used it before for deep learning with datasets, but that wouldn't work for things like a parkour or selai where I can't tell it what it's doing wrong or what its expected output should have been, or at least not without something so complicated that a different approach would be better at that point.
I could use genetic algorithms, but then I hit the point where genetic algorithms become too inefficient to use.
How good would this be for pathfinding, assuming it can do pathfinding at all?