Neural Network Library (Obsolete)

I have recently been working on a very basic network that predicts gender from weight and height inputs. After entering a few hundred samples, I tested it and found that it always outputs whichever gender was entered most recently. I am using the following script:

> local module=require(workspace.KNNLibrary) 
> 
> local network = module.createNet(2,2,5,1,"LeakyReLU",false,0.5)
>
> local vis = module.getVisual(network)
> vis.Parent = game.Players.jhunlzeus4.PlayerGui
> local val = Instance.new("StringValue",workspace)
> 
>
> --0 = female
> --1 = male
> 
> script.Parent.TextButton.MouseButton1Down:Connect(function()
> 	if script.Parent.SelectedGender.Value ~= -1 then
> 		local inputs = {}
> 		table.insert(inputs,tonumber(script.Parent.Weight.Text)/1000)
> 		table.insert(inputs,tonumber(script.Parent.Height.Text)/1000)
> 		for i,v in pairs(inputs) do
> 			print(v)
> 		end
> 		module.backwardNet(network, 0.1, inputs, {script.Parent.SelectedGender.Value})
> 		val.Value = module.saveNet(network)
> 	end
> end)
> 
> script.Parent.M.MouseButton1Down:Connect(function()
> 	script.Parent.SelectedGender.Value = 1
> 	print(script.Parent.SelectedGender.Value)
> end)
> 
> script.Parent.F.MouseButton1Down:Connect(function()
> 	script.Parent.SelectedGender.Value = 0
> 	print(script.Parent.SelectedGender.Value)
> end)
> 
> script.Parent.Calc.MouseButton1Down:Connect(function()
> 	local answer = module.forwardNet(network,{tonumber(script.Parent.Weight.Text)/1000,tonumber(script.Parent.Height.Text)/1000})
> 	module.updateVisualActive(network,vis,{tonumber(script.Parent.Weight.Text)/1000,tonumber(script.Parent.Height.Text)/1000})
> 	print(answer[1])
> 	print(math.floor(answer[1]))
> 	if answer[1] < 0.5 then 
> 		print("Female")
> 	else
> 		print("Male")
> 	end
> end)

Any advice or suggestions would be appreciated, as the neural network is not working properly at all right now.

1 Like

I would assume this is because of the learning rate, but 0.1 seems low enough. The logic seems fine, so you should look at the visualizer instead; if there is a trend to be found, the network should change less and less as learning goes on and eventually slow down.
Since the inputs themselves seem fine, there is most likely a problem with the way you handle them (try a UI-less script that just enters the data directly, as in the sketch below).
I'd also like to point out that tonumber() is most likely unnecessary here.
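For reference, here is a minimal sketch of that kind of UI-less test, assuming the same KNNLibrary calls used in the original post; the hard-coded `samples` table and the /1000 scaling are placeholders mirroring the script above, not real data:

local module = require(workspace.KNNLibrary)

-- Same network settings as the original post.
local network = module.createNet(2, 2, 5, 1, "LeakyReLU", false, 0.5)

-- Hypothetical hard-coded samples: {weight (kg), height (cm), gender (0 = female, 1 = male)}.
local samples = {
	{58, 162, 0},
	{84, 181, 1},
	{63, 168, 0},
	{92, 188, 1},
}

-- Train over the whole set repeatedly instead of one backward pass per button click.
for epoch = 1, 500 do
	for _, sample in ipairs(samples) do
		module.backwardNet(network, 0.1, {sample[1] / 1000, sample[2] / 1000}, {sample[3]})
	end
	if epoch % 100 == 0 then wait() end
end

-- Print the raw outputs afterwards to see whether the two classes separate.
for _, sample in ipairs(samples) do
	local out = module.forwardNet(network, {sample[1] / 1000, sample[2] / 1000})[1]
	print(sample[1], sample[2], "->", out)
end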

1 Like

Thank you, I am glad my logic appears fine. Which visualizer should I look at to spot trends? I tried updateVisualState for around 50 data values and noticed no change, so would I be better off using updateVisualActive? Or does the fact that updateVisualState shows no change signal that there are no trends in this data?

1 Like

Visual State gives you the colored representation of the network itself. If there is no change, that means the network is not learning for whatever reason. Visual Active only shows the route the network takes with the specified inputs.
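For anyone following along, a small sketch of checking both views, assuming the same getVisual/updateVisualState/updateVisualActive calls used elsewhere in this thread and a network created as in the first post (run from a LocalScript so PlayerGui is available):

local module = require(workspace.KNNLibrary)
local network = module.createNet(2, 2, 5, 1, "LeakyReLU", false, 0.5)

local vis = module.getVisual(network)
vis.Parent = game.Players.LocalPlayer.PlayerGui

-- Colored view of the network's weights; if this never changes between
-- training passes, the network is not learning.
module.updateVisualState(network, vis)

-- Highlights the route taken for one specific input pair; useful for
-- spot-checking individual predictions rather than overall learning.
module.updateVisualActive(network, vis, {70 / 1000, 170 / 1000})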

1 Like

Hello, I followed your suggestions: I now use a decoded JSON table of my values to train the AI, and I use updateVisualActive to monitor the network for changes. I did not notice any changes in the GUI, but I did notice minor changes to the StringValue where I save the network. It at least displays some variance in the outputs, but it is still highly inaccurate. Would you recommend more or fewer nodes, a different activator, or anything else? I am going to feed it more data right now; it currently has 500 data sets.

Update: It now has 10,000 data sets and is still completely inaccurate, always guessing female unless you input an insanely high number, such as 5,000 kg. I am unsure of what is happening and can't figure it out.

Further update: I automated the testing process to see how accurate it is; it simply tests the numbers it was trained on. Its accuracy is only 50%, but it gets that result by simply guessing female every time, and 50% of the data is female.
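(A minimal sketch of that kind of automated check, assuming `network` is the trained net and `samples` is a hypothetical table of {weight, height, gender} rows like the one in the earlier sketch:)

local module = require(workspace.KNNLibrary)

-- Count how often the thresholded output matches the recorded gender.
local wins = 0
for _, sample in ipairs(samples) do
	local out = module.forwardNet(network, {sample[1] / 1000, sample[2] / 1000})[1]
	local guess = (out >= 0.5) and 1 or 0
	if guess == sample[3] then
		wins = wins + 1
	end
end
print(("Accuracy: %.1f%%"):format(wins / #samples * 100))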

1 Like

Then there are a few possible things that could cause this.
A) The hidden trend you're looking for does not exist within the dataset you're providing, and the network has settled on the fact that 50% is better than nothing.
B) You're not scaling your inputs correctly. Remember that you have to occupy the entire 0 to 1 range, with a maximum precision of 0.01, for best results (see the sketch after this list).
C) You're not reading the output correctly. Remember that the output will practically never be a solid 1; it will always be a little less. Same with 0. Try printing the raw output numbers and see.
D) You're inputting or scaling the target output incorrectly.
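A minimal sketch of points B and C, assuming hypothetical upper bounds of 250 kg for weight and 220 cm for height (pick bounds that actually fit the dataset) and a `network` trained as in the earlier sketch:

local module = require(workspace.KNNLibrary)

-- B) Scale each input so the realistic range fills most of 0 to 1.
local MAX_WEIGHT = 250 -- kg, assumed upper bound
local MAX_HEIGHT = 220 -- cm, assumed upper bound

local function scaleInputs(weight, height)
	return {weight / MAX_WEIGHT, height / MAX_HEIGHT}
end

-- C) Read the raw output and threshold it instead of expecting exactly 0 or 1.
local out = module.forwardNet(network, scaleInputs(70, 170))[1]
print("raw output:", out)
if out >= 0.5 then
	print("Male")
else
	print("Female")
end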

If this continues, I suggest you take the generation-based learning path instead. It is far easier, more 'correctable' since you set the scores yourself, and it is usually what is used in the field. Even if you do something wrong while creating it, it has a tendency to still give you something you can work with.
This is, of course, at the expense of speed.

1 Like

Thank you. I actually looked for the correlation in this data, and it has a very strong linear correlation. I do assume it is a scaling issue, and I will work on adjusting that. If that fails, I will switch to a generation-based approach.

I printed all of the outputs, and they are all in the 0.007-0.008 range. I will work on scaling the inputs and results.
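For completeness, a short sketch of how that linear correlation could be double-checked in-game, assuming the same decoded `tablev` layout used in the scripts below (gender in column 2, height in column 3, weight in column 4):

local https = game:GetService("HttpService")
local tablev = https:JSONDecode(https:GetAsync("https://ai-data.glitch.me/data"))

-- Pearson correlation coefficient between two equal-length lists of numbers.
local function pearson(xs, ys)
	local n = #xs
	local sx, sy, sxx, syy, sxy = 0, 0, 0, 0, 0
	for i = 1, n do
		local x, y = xs[i], ys[i]
		sx, sy = sx + x, sy + y
		sxx, syy = sxx + x * x, syy + y * y
		sxy = sxy + x * y
	end
	return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
end

local weights, genders = {}, {}
for _, row in ipairs(tablev) do
	table.insert(weights, row[4])
	table.insert(genders, row[2])
end
print("weight vs gender correlation:", pearson(weights, genders))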

1 Like

I went ahead and converted to a genetic algorithm, but similar issues persist: it just seems to output a similar number over and over. After running it for an hour or so, it has remained at an 11.54% success rate for most of that time. Can I assume this is simply the best it can perform, or is there an issue with my input/output scaling?

Source:

local module = require(workspace.KNNLibrary)
local nets = module.createGenNet(workspace.Networks,20,2,4,3,1,"LeakyReLU")


local https = game:GetService("HttpService")
local data = https:GetAsync("https://ai-data.glitch.me/data")
local tablev = https:JSONDecode(data)

local vis = module.getVisual(nets[1])
vis.Parent = game.StarterGui

for g = 1,1000000000 do
	print("Generation: "..g)
	local scores = {}
	local step = 8
	local tim = tick()
	for n = 1,#nets do
		if n%16 == 0 then wait() end
		local network = module.loadNet(nets[n])
		local wins=0
		for i = 1,#tablev,step do
			local gender = tablev[i][2]
			local weight = tablev[i][4]
			local height = tablev[i][3]
			local answer = module.forwardNet(network,{weight/300,height/200})[1]
			local agender = 0
			if answer > 0.5 then
				agender = 1
			end
			if agender == gender then
				wins = wins + 1
			end
		end
		table.insert(scores,wins)
	end
	local best = module.runGenNet(nets,scores) 			--With all of the networks scored, we run the next generation
	module.updateVisualState(nets[best],vis)
	table.sort(scores)	
	workspace.currentattempt.Value = module.saveNet(nets[best])
	print("Best network success rate: "..scores[#scores]/(800/step)^2*(100).."%")	
end

If it appears this is the best it can do, would adding more hidden layers/nodes be of any help?

1 Like

Wow, something to replace the old-generation, easily abused votekicking systems in certain popular unnamed Roblox first-person shooters!

…

nudge nudge

cough cough

throws the chair

flips the desk

1 Like

After messing around with it over the last two days, I have managed to reach around 25% accuracy. Any suggestions you may have to increase this further would be appreciated.

For reference, the data it runs through is 10,000 sets, so step needs to be relatively high.
Outputs are either a 0 or a 1 about 90% of the time.

local module = require(workspace.KNNLibrary)
local nets = module.createGenNet(workspace.Networks,20,2,5,4,1,"LeakyReLU")


local https = game:GetService("HttpService")
local data = https:GetAsync("https://ai-data.glitch.me/data")
local tablev = https:JSONDecode(data)

local vis = module.getVisual(nets[1])
vis.Parent = game.StarterGui

for g = 1,1000000000 do
	print("Generation: "..g)
	local scores = {}
	local step = 15
	local tim = tick()
	for n = 1,#nets do
		if n%10 == 0 then wait() end
		local network = module.loadNet(nets[n])
		local wins=0
		for i = 1,#tablev,step do
			local gender = tablev[i][2]
			local weight = tablev[i][4]
			local height = tablev[i][3]
			local answer = module.forwardNet(network,{weight/250,height/200})[1]
			local agender = 0
			if g == 100 then
				print(answer)
			end
			if answer > 0.5 then
				agender = 1
			end
			if agender == gender then
				wins = wins + 1
			end
		end
		table.insert(scores,wins)
	end
	local best = module.runGenNet(nets,scores) 			--With all of the networks scored, we run the next generation
	module.updateVisualState(nets[best],vis)
	table.sort(scores)	
	workspace.currentattempt.Value = module.saveNet(nets[best])
	print("Best network success rate: "..scores[#scores]/(800/step)^2*(100).."%")	
end
2 Likes

This module is really cool!
Would it be possible to train a network that finds games you may like based on the names/descriptions of your previously liked games? I'm messing around with a Roblox game finder at the moment.

1 Like

Absolutely. This will, however, require in-depth knowledge of how to use a perceptron neural network to analyze text, which is a science of its own.
The trend is there, since similar games tend to have similar descriptions/titles; it's viable, since it can be automated; and it's possible, since this type of network can analyze text, though not without some effort.
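As a rough illustration only (not the library's own approach), the text first has to be turned into a fixed-length numeric vector before a network like this can read it; one hypothetical way is a small hashed bag-of-words:

-- Hypothetical: hash each word of a title/description into one of BUCKETS slots
-- and feed the normalized counts to the network as its inputs.
local BUCKETS = 32

local function textToInputs(text)
	local counts = {}
	for i = 1, BUCKETS do
		counts[i] = 0
	end
	local total = 0
	for word in string.gmatch(string.lower(text), "%a+") do
		local h = 0
		for i = 1, #word do
			h = (h * 31 + string.byte(word, i)) % BUCKETS
		end
		counts[h + 1] = counts[h + 1] + 1
		total = total + 1
	end
	if total > 0 then
		for i = 1, BUCKETS do
			counts[i] = counts[i] / total
		end
	end
	return counts
end

-- These BUCKETS numbers can then go through forwardNet/backwardNet like any other inputs.
print(table.concat(textToInputs("Obby Tycoon Simulator"), ", "))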

3 Likes

Is there an easy way to see what it does in Studio? Is there just a function to run it?

1 Like

Finally got around to trying this out! It's pretty neat! I trained an AI on environmental data and output data recorded while I drove around the map along random GPS routes. I got it to 84% accuracy (i.e., the network responds to the same environment the same way I do 84% of the time). I'm working on getting that network to drive its own car right now; excited to see the results!

7 Likes

If you don't mind answering, how did you choose the number of nodes to have in each layer, and how many layers to have? As far as I've searched, most articles say that as long as there are not too many of them, almost any numbers can be used. Also, what do you feed in for the car? That seems like a lot of inputs XD

1 Like

Generally, the amount you would go for is 2/3 of the input node count. Here, ScriptOn has 11 input nodes (direction raycasts, whether the car is near some checkpoint, its speed, its current angle, etc.), so 7 nodes per hidden layer follows this rule. The entire network consists of only 2 hidden layers because the car's task is probably not too complex.
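As a quick sketch of that rule of thumb, using the createNet signature seen at the top of this thread (the node counts and outputs here are assumptions, not ScriptOn's exact setup):

local module = require(workspace.KNNLibrary)

local inputCount = 11  -- e.g. direction raycasts, speed, angle, checkpoint info
local hiddenLayers = 2 -- the task is not too complex
local hiddenNodes = math.floor(inputCount * 2 / 3) -- ~2/3 of the inputs -> 7
local outputCount = 3  -- hypothetical: throttle, steer, handbrake

-- Trailing arguments copied from the createNet call in the first post.
local network = module.createNet(inputCount, hiddenLayers, hiddenNodes, outputCount, "LeakyReLU", false, 0.5)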

1 Like

Thank you for the reply :slight_smile: I was always confused about this because many articles never go into much detail on it.

1 Like

It's been a bit of a long road to get this guy running, but progress is being made. Now he can follow directions and stay somewhat stable, but he still struggles with drifting.

I had to simplify his inputs quite a bit (down from 11 to 7) and made his outputs Throttle Up/Down, Steer Left/Right, and Handbrake. Having throttle and steer each be a single output was causing issues whenever there was a negative output value (e.g., steer left is -1), so switching to a positive value for each side (left/right) and using ReLU to enforce positive outputs really helped. I'm now at 95.5% accuracy, but you can see where the last 4.5% is lost… haha
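A tiny sketch of that output split, assuming a hypothetical output order of {throttle up, throttle down, steer left, steer right, handbrake} and that `network` and `inputs` are the driving net and its scaled sensor readings:

local out = module.forwardNet(network, inputs)

-- Each side is its own non-negative (ReLU) output; subtracting the pair
-- recovers a signed control value.
local throttle = out[1] - out[2] -- > 0 accelerate, < 0 brake/reverse
local steer = out[4] - out[3]    -- > 0 steer right, < 0 steer left
local handbrake = out[5] >= 0.5  -- treat as on/off

print(throttle, steer, handbrake)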

Either way, neat stuff @Kironte, thank you for sharing this with everyone!


Edit: As I give it better and better data, it gets smarter. It's now able to corner and recover from crashes.

https://i.imgur.com/g5QdjwJ.mp4

9 Likes

Awesome progress! Not sure why the UI is glitching for you, as I've seen that happen only when there are 100+ nodes in a layer, but hopefully you can make something even better than your last model in the video at the top!

1 Like

I got some more data & kept tuning things. Now I have a car that can drift/drive a bit like I do.

4 Likes