Making my simple NN yield better results

I’m starting to learn neural networks, which I’ll need for an AI in my game, so I started with a simple one.
The NN checks whether a number is negative or positive: it should return 1 if the number is positive and 0 if it’s negative.
After a lot of testing, the best result I could get was 64% accuracy - I know this can be much better.
Here’s the current code:

local module = require(game:GetService("ReplicatedStorage").Modules.KNNLibrary)
local runservice = game:GetService("RunService")

-- build the network: 1 input, three hidden layers of 2 nodes, 1 output, LeakyReLU activation
local network = module.createNet(1,3,2,1,"LeakyReLU")
local timesToTrain = 1000000 -- training passes per round
local timesToTry = 10000     -- test samples per round
local learningRate = 0.01
local tock = tick()          -- timer used to throttle the progress prints
local testNumber = 1

while true do	
	for i = 1, timesToTrain do
		local randomNumber = math.random(-500, 500) / 1000 -- scale inputs to match the test loop below
		local correctAnswer = 1 -- target: 1 for positive, 0 for negative
		if randomNumber < 0 then
			correctAnswer = 0
		end
		module.backwardNet(network, learningRate, {randomNumber}, {correctAnswer})
	
		-- every 0.1 seconds, yield a frame so the game doesn't freeze, and report progress
		if (tick() - tock) >= 0.1 then
			tock = tick()
			runservice.Heartbeat:Wait()
			print((i/timesToTrain)*100 .. "% trained")
		end
	end	

	local wins = 0

	for i = 1, timesToTry do
		local number = math.random(-500, 500)/1000
		local answer = module.forwardNet(network, {number})[1]
		local shouldBeCorrect = 1
		if number < 0 then
			shouldBeCorrect = 0
		end

		-- count the answer as correct if it lands within 0.4 of the target
		if math.abs(shouldBeCorrect - answer) <= 0.4 then
			wins = wins + 1
			print("Got", answer, "- classed as correct")
		end

		print("Testing, " .. (i/timesToTry)*100 .. "%")
		runservice.Heartbeat:Wait()
	end
	
	print("Test number:", testNumber .. "\n" .. (wins/timesToTry)*100 .. "% successful")
	testNumber = testNumber + 1
	wait(2)
end

(yes it’s messy, but it’s just a test right now)

The library I’m using is KNNLibrary; I recommend checking it out if you haven’t already.

Any help is appreciated.

This is because LeakyReLU does not handle negative numbers well. It doesn’t cut them off outright like ReLU does, but it still shrinks them to a small fraction of their size, which makes negative inputs hard for the network to work with.
For a task like this, I’d recommend a Tanh activation with the same setup. Tanh treats positive and negative inputs symmetrically, mapping them to opposite ends of its output range. Since Tanh is vulnerable to vanishing gradients, inputs must be scaled carefully, but you already have that.
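For a quick feel of the difference, here’s a tiny standalone comparison (plain Lua, independent of KNNLibrary; the 0.01 negative slope is a common LeakyReLU default I’m assuming, which may differ from the library’s exact constant):

local function leakyReLU(x)
	if x > 0 then
		return x
	end
	return 0.01 * x -- negative inputs keep only 1% of their magnitude
end

local function tanh(x)
	-- written out with math.exp so it runs on any Lua version
	return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))
end

print(leakyReLU(0.5), leakyReLU(-0.5)) --> 0.5, -0.005 (negatives nearly vanish)
print(tanh(0.5), tanh(-0.5))           --> ~0.46, ~-0.46 (same magnitude, opposite sign)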
I’d also recommend not using while true do, since that repeats the training forever with no way to stop it.
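As a rough sketch of both suggestions together (same KNNLibrary calls as your snippet, but with a Tanh net and a stopping condition instead of while true do; the 95% target, 20-epoch cap, and batch sizes are arbitrary numbers picked just for illustration):

local module = require(game:GetService("ReplicatedStorage").Modules.KNNLibrary)
local runservice = game:GetService("RunService")

local network = module.createNet(1, 3, 2, 1, "Tanh")
local learningRate = 0.01
local targetAccuracy = 0.95 -- stop once 95% of test samples are classified correctly
local maxEpochs = 20        -- hard cap so training always terminates

for epoch = 1, maxEpochs do
	-- train on a batch of scaled random numbers
	for i = 1, 10000 do
		local n = math.random(-500, 500) / 1000
		local target = n >= 0 and 1 or 0
		module.backwardNet(network, learningRate, {n}, {target})
	end

	-- measure accuracy on a fresh batch
	local wins = 0
	for i = 1, 1000 do
		local n = math.random(-500, 500) / 1000
		local expected = n >= 0 and 1 or 0
		local answer = module.forwardNet(network, {n})[1]
		if math.abs(expected - answer) <= 0.4 then
			wins = wins + 1
		end
	end

	local accuracy = wins / 1000
	print("Epoch " .. epoch .. ": " .. accuracy * 100 .. "% correct")
	if accuracy >= targetAccuracy then
		break -- good enough, stop instead of looping forever
	end
	runservice.Heartbeat:Wait() -- yield between epochs so the game doesn't freeze
end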

It got a whopping 100% success rate (wow); I didn’t think it would be so game-changing.
Here’s the network if you or anyone else wants to use it:

 [1,[[[[-837.250902258801829702861141413],[566.850109641904737145523540676]],[0.361434091215525499229954675684,0.770983953247453612789286125917]],[[[3.33391348390689934433339658426,-3.20326441126014982430092459254],[-2.54490553719058665294028287462,1.6580443456883635633403173415]],[0.677318362597772782862648455193,0.142195778104652004181218671874]],[[[-2.60729406611035718910329705977,2.84504084950210067717080164584],[2.37442996918401760808592371177,0.0604443447441228298711024535805]],[-0.0590477978808492467988067176066,0.524864513533332632810868290107]],[[[3.87446396484759025824473610555,-1.42362176551823438330757198855]],[0.120326367342872067589532036891]]],false,"Tanh"]
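
If anyone wants to plug it back in, this is roughly how you’d load it (I believe KNNLibrary has a loadNet function for saved network strings, but double-check the library’s docs for the exact name; forwardNet is the same call as in my code above, with inputs scaled to about -0.5..0.5):

local module = require(game:GetService("ReplicatedStorage").Modules.KNNLibrary)

local savedNet = [[ ...paste the saved network string from above here... ]]
local network = module.loadNet(savedNet) -- assumed deserialization call, check the docs

print(module.forwardNet(network, {0.25})[1])  -- should land near 1 (positive input)
print(module.forwardNet(network, {-0.25})[1]) -- should land near 0 (negative input)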