Neural Network Library (Obsolete)

Is it possible to train this in-game? Like, instead of going into Studio and training the neural network, we just allow the neural network to learn while the game is being played by players?

4 Likes

Yes. Though the UI portion will become quite difficult to share (if you want to), the networks can easily be run in-game and trained on players. Note, though, that it will take quite a lot of training to make a decent AI.

3 Likes

How would I use this to make an NPC? Is there any way?

3 Likes

Of course! The library offers you all the tools to make a neural-network NPC. You, however, have to do the training and configuring.
Making a good NPC with neural networks is a tough process that heavily depends on how you do it, what you choose the inputs/outputs to be, and what you train it with: another bot? A player?
TL;DR: it is more than possible, it just takes time and effort.

3 Likes

Ok! I want to know where to get started. Would I be able to train them to pathfind, or will I have to use an algorithm? I’ve watched a few videos, but the whole thing didn’t really make sense to me because I like to learn by example. Either way, really cool resource! I really think this could benefit me and many others. :slight_smile:

3 Likes

Training them is completely up to you and how you think you should do it.
I personally wouldn’t try to make them pathfind since that isn’t something a neural network can do; it wouldn’t have the information to do so.
There have been a couple projects for neural learning on Roblox, maybe you’ll find some inspiration there.

3 Likes

I have another question: how does the AI save on Roblox? I’d imagine caching data is hard, especially when there’s no true method of caching training data. I don’t even understand how it saves in general, so I’m also curious about that.

3 Likes

The neural network does not save on its own. When the neural network is done training, you can get a string which should look a little something like this:

To get this string, you will have to do this after training:

local networkString = module.saveNet(network) -- serializes the network into a string

Then you would save that string to a DataStore.
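For example, something like this (the store and key names are just placeholders, and the pcall is there because DataStore requests can fail):

local DataStoreService = game:GetService("DataStoreService")
local networkStore = DataStoreService:GetDataStore("NetworkSaves") -- placeholder name
local success, err = pcall(function()
	networkStore:SetAsync("MyNetworkKey", networkString) -- placeholder key
end)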

To load a network, you would first get the string back from the DataStore, then pass it to loadNet(), a little something like so:

local networkString = DataStore:GetAsync(Key)
local network = module.loadNet(networkString)

(Correct me if anything I said was wrong though! :stuck_out_tongue: )

5 Likes

@DragRacer31’s answer is correct. The network can be saved via saveNet() or hardCode() at any time. saveNet() is for saving it into a format that can be put into a DataStore or anything else you want to later load with loadNet(), while hardCode() is for when you want to just copy-paste the code into a script. In both cases, the save is just a string.
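For example (I’m assuming here that hardCode() takes the network the same way saveNet() does):

local serialized = module.saveNet(network)  -- string for a DataStore, restored later with loadNet()
local scriptCode = module.hardCode(network) -- string you paste straight into a script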

3 Likes

Like for most people, creating the fitness function might be challenging. I just want to make sure that I have understood the concept first.

So, from what I understand, you feed in some inputs and get some outputs. It doesn’t matter how many outputs you have, as long as you create a fitness function to steer those outputs toward your desired output.

Example: I will send in 5 inputs (distances) and there will be two outputs. If I train the network with a good fitness function, I can shape these outputs into whatever I want based on my scoring system. I can make one of them the engine power and the other “turn”, if I train them that way.

TL;DR

It’s a universal function where you feed in inputs, and your fitness function determines what those outputs will do after training.
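So something like this is what I’m picturing (the ray distances and scoring values are all made up):

local module = require(workspace.KNNLibrary)

local function evaluateCar(network, rayDistances, distanceTravelled, crashed)
	-- 5 distances in, 2 outputs out; what the outputs "mean" comes purely
	-- from what the fitness function ends up rewarding
	local out = module.forwardNet(network, rayDistances)
	local enginePower, turn = out[1], out[2]
	-- fitness: reward distance covered along the track, punish crashing
	local score = distanceTravelled - (crashed and 50 or 0)
	return enginePower, turn, score
end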

I also forgot to ask last time: are we doing supervised learning when we have to give a true value for the network’s guess attempt and check whether the error is smaller than a set value?

→ network guesses something
→ if a raycast shows you are too close to a wall on the left side → move right

– compare the true value (move right) with the guessed value (move left or move right) and score it
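In code, the comparison I mean would look something like this (the distances and threshold are made up):

local module = require(workspace.KNNLibrary)

local function scoreGuess(network, leftDist, rightDist)
	local target = (leftDist < 5) and 1 or 0 -- "true" value: 1 = move right, 0 = move left
	local guess = module.forwardNet(network, {leftDist, rightDist})[1]
	return math.abs(target - guess) -- error between true value and guess; smaller is better
end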

2 Likes

For backpropagation, yes. The library runs your network with the given inputs, compares the output to the correct output, and adjusts from there. You don’t need to do anything there other than provide the set of inputs and the set of correct outputs.
The fitness function is needed only for genetic algorithms. There, you assign a score for how well a network performs, which is completely up to you. The only thing the genetic algorithm cares about is which networks are the best and which are the worst.

1 Like

Thank you for answering. What I meant by training was forward propagation: running the networks over several generations and scoring them. The outputs come from the function running the networks to guess a value.

local array = module.forwardNet(network,{fd,rd,ld,frd,fld})
local turndirection = array[1]
local enginepower = array[2] 

Running all the networks through several generations will correct the networks’ weights and give you a more accurate output that looks like what you defined. At the start it will guess randomly, which is most likely inaccurate.

I was wondering if I ran the network wrong. If a network randomly guesses good answers, it gives good results in an early generation, and this will be bred with the function

local best = module.runGenNet(nets,scores)

If the next generation happens to score very badly, would this carry over and overwrite the good result from the previous generation? Do I need to save, or can I blindly just score each network and run module.runGenNet(nets,scores) with the highest-scoring network of that generation (which could be bad compared to the previous generation that happened to guess well)?
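For now I’m considering snapshotting the best network before breeding, in case a later generation regresses (my own workaround, not something the library requires):

local bestScore, bestSave = -math.huge, nil
-- inside the scoring loop, after computing `points` for `network`:
if points > bestScore then
	bestScore = points
	bestSave = module.saveNet(network) -- serialize it so breeding can't overwrite it
end
-- if a later generation turns out badly, the snapshot can be restored:
-- local recovered = module.loadNet(bestSave)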

1 Like

I decided to start a bit smaller rather than with the overly ambitious “car going around a track” network. As you mentioned, XOR worked well with backpropagation. I tried to add forward propagation to it, but as I soon discovered, it would not make any progress. I concluded that I had done the whole forward propagation wrong for both the car and the XOR networks. This is the code for the forward propagation of the XOR gate network.

EDIT: I soon came to realize the code actually trained it; at around 1k generations it did very well at guessing.

local module=require(workspace.KNNLibrary) --Activating and getting the library

-- creating the neural network --
local NetworkFolder = game.Workspace.NetworkFolder
local totalnets = 100
local inputs = 2 -- 2 for xor
local hiddenL = 3 -- hidden layers 
local hiddenN = 4 -- hidden nodes
local outputs = 1 -- one for xor
local activator = "LeakyReLU" --"Identity", "Binary", "Sigmoid", "Tanh", "ArcTan", "Sin", "Sinc", "ArSinh", "SoftPlus", "BentIdentity", "ReLU", "SoftReLU", "LeakyReLU", "Swish", "ElliotSign", "Gaussian", "SQ-RBF"
local recurrent = false
local Bias = 0.5 -- default bias
local nets = module.createGenNet(NetworkFolder,totalnets,inputs,hiddenL,hiddenN,outputs,activator,recurrent,Bias) 
--------------------------------


math.randomseed(tick())

local Gen = 1
while wait() do
	
	local SumPoints = 0 -- sum all points from all networks
	local scores = {}
	
	local t = tick()
	for z = 1,#nets do
		local points = 0.01 -- close to zero starting point
		local network = module.loadNet(nets[z])
		local a = math.random(0,1)
		local b = math.random(0,1)
		local output = module.forwardNet(network,{a,b})[1]
		
		-- fitness function --
		if a == b then
			points = points + (1 - output) -- XOR is 0: score higher the closer the guess is to 0
		else -- a ~= b
			points = points + output -- XOR is 1: score higher the closer the guess is to 1
		end
		----------------------
		
		
		table.insert(scores,points)
		
		SumPoints = SumPoints + points -- sums all score to find the avg score later
		
		--if tick()-t>=0.1 then	
		--	t=tick()			
		--	wait()
		--end
		
	end
	
	local best = module.runGenNet(nets,scores) -- breeds best networks and continues to next generation
	
	
	-- each network can score a max of ~1 point (plus the 0.01 base)
	-- summing every network's points and dividing by the number of networks
	-- gives the average score, expressed as a percentage
	-- if every network guessed correctly, the success rate would be ~100%
	
	local SuccessRate = (SumPoints/#nets)*100
	print('Generation: '..tostring(Gen),' | Success rate: ',SuccessRate,'%')
	Gen = Gen+1
	
	
end




2 Likes

When using an activation function (LeakyReLU or ReLU), the output(s) I get when running the program seem to be very discrete values (either very close to 0 or very close to 1).

The XOR system trains the network to spit out 1 or 0 as an answer. This is relatively fast and can be trained with feed-forward propagation in >900 generations to 90%+ accuracy. This is because the output gives a number super close to 0 or 1 almost 90% of the time.

Then I tried to create a network to add two numbers between 0 and 1 (with an interval of 0.01 between numbers). I wanted the output to be close to the sum of those two inputs.

The output(s) were very reluctant to give numbers between 0 and 1. Most of the time it gives a number very close to 0 (ex: 1e-200) or a number very close to 1 (0.9999999). Even training it for almost 10k generations barely moves the output toward a number inside this interval. I am not using the Binary activation function. Isn’t the probability of getting any number in the interval as an output the same (assuming you haven’t trained the program yet)? It feels like it is giving me 0s and 1s 90% of the time.

Here is the code. Could it be that the learning time is just very slow?

Neural Network: Sum of two numbers between 0 and 1 | interval 0.01 between each number
Output: Want the network to guess the addition

local module=require(workspace.KNNLibrary) --Activating and getting the library

-- creating the neural network --
local NetworkFolder = game.Workspace.NetworkFolder
local totalnets = 50
local inputs = 2 -- 2 
local hiddenL = 2 -- hidden layers 
local hiddenN = 2 -- hidden nodes
local outputs = 1 
local activator = "LeakyReLU" --"Identity", "Binary", "Sigmoid", "Tanh", "ArcTan", "Sin", "Sinc", "ArSinh", "SoftPlus", "BentIdentity", "ReLU", "SoftReLU", "LeakyReLU", "Swish", "ElliotSign", "Gaussian", "SQ-RBF"
local recurrent = true 
local Bias = 0.5 -- default bias
local nets = module.createGenNet(NetworkFolder,totalnets,inputs,hiddenL,hiddenN,outputs,activator,recurrent,Bias) 
--------------------------------


math.randomseed(tick())

local Gen = 1
while wait() do
	
	local SumPoints = 0 -- sum all points from all networks 
	local scores = {}
	local t = tick()
	for z = 1,#nets do
		local network = module.loadNet(nets[z])
		
		local a = math.random(0,100)/100
		local b = math.random(0,100)/100
		a = math.floor(a * 100)/100 -- 2 decimal
		b = math.floor(b * 100)/100 -- 2 decimal
		
		local guess = module.forwardNet(network,{a,b})[1]
		guess = 2*math.floor(guess * 100)/100 -- scale the 0-1 output to the 0-2 sum range, 2 decimals
		
		-- fitness function --
		-- want the sum to be close to the true value |  max score 1 | min score = 0 |
		local sum = a+b
		local d = math.abs(sum-guess)
		local point = (2-d)/2 -- makes sure the max point award is 1 for a perfect score
		point = point^2 -- amplifying score to really distinguish the best (score will still be within 0-1 domain)

		
		--[[
			Example:
			
			-- Perfect score --
			a+b=2
			guess=2
			d = math.abs(2-2) = 0
			point = (2-d)/2 == (2-0)/2 = 1
			
			
			-- Worst score --
			a+b=2
			guess=0
			d = math.abs(2-0) = 2
			point = (2-d)/2 == (2-2)/2 = 0
			
		--]]

		----------------------
		
		table.insert(scores,point)
		SumPoints = SumPoints + point -- sums all score to find the avg score later
		
	end
	
	local best = module.runGenNet(nets,scores) -- breeds best networks and continues to next generation
	local MaxPoint = 50 -- 50 networks scoring up to 1 point each, so a perfect total is 50
	
	local SuccessRate = (SumPoints/MaxPoint)*100 -- see how close you are to a perfect score of 50 points
	print('Generation: '..tostring(Gen),' | Success rate: ',SuccessRate,'%')
	Gen = Gen+1
	
	
end

After 10k generations it seems (and I assume) to be learning, as it started off at around 30% correct guesses.

Does the code have any noticeable logical errors? It seems the learning rate decreases and increases randomly.

Network started off at 30% correct guesses.

After 10k generations (network is steadily learning more).

After 16k generations (network is worse at guessing correctly).

After 20k generations (nothing seems to have changed, but it still can’t get up to 50% correct guesses).

1 Like

The problem with neural networks is that they can never be exact.
Unless they memorize, the networks we have thought of thus far cannot give absolutely 100% exact answers every time, everywhere. This is why they are usually not used for math unless you use specific activation functions like Binary along with extremely extensive training (Binary gives only 0 and 1, so the answer will always be exact, though the training is hard because Binary is a bad activator in general).
This is why, if the answer needs to be exact-ish, you always allow a tolerance depending on the range you’re working with. If the network has to answer 1, most of the time it won’t; it will instead give a number extremely close to 1, just because of the math these networks are based on.
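For example, something as simple as this (the tolerance value itself is up to you and your range):

local tolerance = 0.05 -- tune to the range you're working with
local function isCorrect(guess, trueValue)
	return math.abs(guess - trueValue) <= tolerance
end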
Also, you shouldn’t use genetic algorithms for problems where the network is static, like this one. If time is not a factor in the network’s function (i.e. not an NPC or enemy) and the trends that map the inputs to the answers are fairly easy to notice, you should always use backpropagation.
Now, if time is a factor or the trends aren’t that noticeable (AKA you can’t easily figure the answer out yourself), then genetic training may be appropriate.

1 Like

Is it possible to extract answers from a NN but train from a fixed-weight layer after that “answer”?

Example: Training a NN to play a game, so you need to take inputs it gives but also want to train it to match a score?

1 Like

If I understand you correctly, either one of the 2 training methods will do that. The NN is fully functional from the get-go. It will take inputs and give you outputs; chances are, though, they will be pretty bad.
Then, you train it. The NN will be functional all throughout the training process and can be used at any time between training sessions. The more you train it, the closer it will get to your desired outcome.
If time is not a factor in making a decision, use backpropagation.
If time is a factor in making a decision, like for an NPC, use genetic training.

I’m thinking more of trying to use NNs to make an ally AI that doesn’t surpass the player in skill; so, rather, optimizing the AI’s weighted score total to equal the player’s. The weights would exist to encourage the AI to do specific roles (ex. a Support AI prioritizes healing and buffing allies and is disincentivized from attacking).

Reason being I want to definitively solve the “op/useless ally” problem that comes with AI allies.
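As a rough sketch of the kind of scoring I have in mind (every weight and stat here is made up):

local roleWeights = {heal = 2.0, buff = 1.5, attack = -0.5} -- support-role incentives

local function allyFitness(aiStats, playerScore)
	local aiScore = roleWeights.heal * aiStats.heals
		+ roleWeights.buff * aiStats.buffs
		+ roleWeights.attack * aiStats.attacks
	-- reward matching the player's score instead of maximizing it
	return -math.abs(aiScore - playerScore)
end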

Any suggestion on how to set that up?

1 Like

This is a question that I cannot answer.
How to set up such an AI depends completely on the usage, how fast-paced it is, who or what it will be facing, and how you want to train it.
There is no one way of doing this.
The most common, though, is to have the AI fight itself while trying to outsmart itself. The moment it finds a good offensive strategy, it has to find a defense, as it is using said strategy against itself. This process results in the AI theoretically finding all possible strategies and the defenses against them.
Some methods include using a second neural network that uses human feedback to guess what the human wants the bot to do. This works exceptionally well when you don’t actually know how to program a scoring function for the bot.

There are multiple problems with the second neural network approach, though.
First of all: who trains the AI? If, say, a developer were to train it, then the developer has to host a server and spend who knows how many test runs training the AI. Another option is that the developer pre-trains the AI in Studio, but then they would need to store that trained network, and the AI will start from that same snapshot in every server it’s used in. We also cannot guarantee that the human will make the optimal decisions the AI should be learning.
Another option is for the player themselves to train the AI. The problem is that you would need to constantly calibrate the friendly AI against the player. You cannot guarantee the player’s consistency: the player won’t make optimal decisions every time, or poor decisions every time, and it doesn’t account for the growth of the player.
Now, if there was an AI attached to the player, it could be possible. What I mean is a neural network that monitors the player’s gameplay throughout the game and tracks their decision-making all the time. It would start out as a poor decision-maker, since it would have nothing to start from, but it would grow with the player. The developer could also set a specified time before making this AI available to the player, to reduce the poor decision-making early on. The cost, though, would be the performance of the machine/network as well as its security.

1 Like