I’m currently busy writing a neural network library for smart/adaptive NPCs.
Eventually I’d like to use it to emulate fake players in a game: for instance, when there are too few players on a server, I could spawn in a few bots that were trained to behave like real players.
The problem I’m currently facing, however, is that the neural network is too slow for real-time usage.
Doing 100 forward passes takes roughly ~0.12 seconds.
The time varies with network complexity, but this might not even be enough to run more than 5 bots at once without parallel Luau, and even with parallel Luau it is still way too slow.
I am therefore trying to improve single-core performance so I can potentially simulate at least 16 bots at once, each running at least 20 forward passes per second.
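To put that target in rough numbers (my own estimate, assuming the ~0.12 s per 100 forwards measurement holds, i.e. ~1.2 ms per forward): 16 bots × 20 forwards per second = 320 forwards per second, which would be roughly 0.38 s of compute every second on a single core at the current speed, so the per-forward cost needs to come down considerably.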
The script that I use to test the code:
local nnmod = require(game.ReplicatedStorage.neuranet)
local nn = nnmod.new_neuralnet(6, {8, 8, 8, 6})
print(nn)
task.wait(1)
local t = os.clock()
task.desynchronize()
for i = 1, 100 do
	for index, node in nn.input
	do
		node.v = math.random()
	end
	nnmod.forward(nn, true)
end
task.synchronize()
local delta = os.clock() - t
warn("[NeuroNet] Took " .. delta .. " seconds to complete forwarding.")
nnmod.size_of(nn)
Below is the code primarily responsible for forwarding and processing.
--!optimize 2
local tymo = require(script.Parent.base._types)
local funcs = require(script.Parent.base._functions)
local _settings = require(script.Parent.base._settings)
local module = {}
-- Shortcuts.
local abs = math.abs
local sin = math.sin
local cos = math.cos
local exp = math.exp
local rand = math.random
--[[
========================================
Node functions.
========================================
]]--
-- Activate the node with its assigned function.
@native
function module.activate(
	no : tymo.node,
	val : number
)
	no.v = funcs.func_list[no.f](val)
end
--[[
========================================
Layer functions.
========================================
]]--
--[[
Arguably one of the most important functions.
Processes all the nodes from one layer towards the target node.
This function is responsible for running the neural network.
]]
@native
function module.forward_node(
	from_layer : tymo.node_layer,
	target_node : tymo.node
)
	-- We sum everything together here.
	local sum : number = target_node.bw
	-- Loop through every node in the previous layer.
	for i = 1, #from_layer
	do
		-- Cache values from node for code readability.
		local from_val : number = from_layer[i].v or 0 -- Value from starting node.
		local target_con : number = target_node.c[i] or 0 -- Target node's connection to previous layer.
		local target_wei : number = target_node.w or 0 -- Target node's weight.
		-- Perform the maths.
		sum *= (
			(from_val * target_con) * target_wei
		)
	end
	-- Check bias.
	if abs(sum) < abs(target_node.b)
	then
		sum += target_node.b -- Add if less than bias.
	end
	-- Finally activate the node.
	module.activate(target_node, sum)
end
--[[
This function basically just wraps the forward_node() function into a loop.
Makes coding infinitely easier and simplifies functions by splitting up logic.
]]
@native
function module.forward_layer(
	from_layer : tymo.node_layer,
	to_layer : {tymo.node}
)
	-- LoOoOoOoP where we forward to every node in the target layer!
	for index, target_node in to_layer
	do
		module.forward_node(
			from_layer, target_node -- From layer > target node.
		)
	end
end
--[[
========================================
Neural network.
========================================
]]
-- Runs the entire neural network, yippee!
@native
function module.forward(net : tymo.neuralnetwork, debugging : boolean?)
	-- Forward the entire layer.
	for i, layer in net.layers
	do
		module.forward_layer(
			net.layers[i - 1] or net.input,
			layer
		)
	end
	-- Code below is just for debugging.
	if not debugging then return end
	local last_layer : tymo.node_layer = net.layers[#net.layers]
	for k, v in last_layer
	do
		print("Node " .. k .. " = " .. v.v)
	end
end
-- Checks the size and complexity of the neural network.
function module.size_of(net : tymo.neuralnetwork) : number
	local node_count : number = #net.input
	for index, layer in net.layers
	do
		node_count += #layer
	end
	warn("Size of neural network: " .. node_count .. " nodes.")
	return node_count
end
return module
Below is the function module that contains the various activation / value-processing functions.
--!optimize 2
local module = {}
local tymo = require(script.Parent._types)
local _settings = require(script.Parent._settings)
-- Shortcuts.
local abs = math.abs
local sin = math.sin
local cos = math.cos
local exp = math.exp
local rand = math.random
-- Real business starts beyond this point.
--[[
========================================
Utility functions.
========================================
]]--
-- 2 functions for validating node connections after addition/removal.
function module.validate_addition(
	target_node : tymo.node,
	connected_layer : tymo.node_layer?
)
	if not connected_layer
	then
		error("Layer " .. tostring(connected_layer) .. " does not exist.")
	elseif (#target_node.c + 1) > #connected_layer
	then
		error("Connections count wouldn't match.")
	end
end
function module.validate_removal(
	target_node : tymo.node,
	connected_layer : tymo.node_layer?
)
	if not connected_layer
	then
		error("Layer " .. tostring(connected_layer) .. " does not exist.")
	elseif (#target_node.c - 1) < #connected_layer
	then
		error("Connections count wouldn't match.")
	end
end
--[[
========================================
Simple math utility functions.
========================================
]]--
-- Returns a random number with range (-1 .. 1).
@native
local function random() : number
	--return ( rand() + rand() ) - 1
	return ( rand() - rand() ) -- Might be slightly faster? Results seem the same as above.
end
-- Converts (0 .. 1) to (-1 .. 1).
@native
local function normalize( v : number ) : number
	return (v * 2) - 1
end
-- Converts (-1 .. 1) to (0 .. 1).
@native
local function unnormalize( v : number ) : number
	return (v + 1) / 2
end
-- Normalize a number to ensure it never goes outside the (-1 .. 1) range.
@native
local function limit( v : number ) : number
	local num : number = unnormalize(v) % 1
	return normalize(num)
end
module.random = random
module.normalize = normalize
module.unnormalize = unnormalize
module.limit = limit
--[[
========================================
-- Node activation functions.
========================================
]]
-- Modified hyperbolic tangent.
--local function htan_plus(x : number, a : number, b : number, c : number, d : number) : number
-- return
-- (exp(x * a) - exp(-x * b)) /
-- (exp(x * c) + exp(-x * d))
--end
-- Linear curve with clamping.
local function linear(x : number)
	return math.clamp(x, -9.999, 9.999)
end
-- Sigmoid curve.
@native
local function sigmoid(x : number) : number
	return 1 / (1 + exp(-x))
end
-- Wrap map curve.
@native
local function wrap(x : number)
	return ((x + 1) % 2) - 1
end
-- ReLU curve.
local function relu(x : number) : number
	return math.max(x, 0)
end
-- Reverse ReLU curve.
local function rev_relu(x : number) : number
	return math.min(x, 0)
end
-- Pi sine curve.
@native
local function pi_sine(x : number) : number
	return sin(x * math.pi)
end
--[[
Random activation function.
Can be used to add "randomness" to a neural network.
Useful if you want the result to be different
even when the input is the same.
]]
@native
local function aran(x : number)
	return x + (random() * _settings.def.random_scale)
end
--[[
A list of all activation functions.
WARNING: modifying this list has consequences;
it breaks every neural network that depends on this specific order.
If you wish to add your own functions,
always append them TO THE BOTTOM of the list
(see the example right after this module).
Neural networks are not backwards-compatible with older versions of this list.
]]
module.func_list = { -- Whole table is made all at once since it might be more optimized.
	linear;
	sigmoid;
	wrap;
	relu;
	rev_relu;
	pi_sine;
	aran;
	math.tanh;
	math.sin;
	math.cos;
	math.round;
}
-- Picks a random function from the function list.
function module.random_function() : number
	local size = #module.func_list
	return math.random(1, size)
end
return module
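As a concrete illustration of the append-only rule in func_list, adding a custom activation from outside the module would look roughly like this (leaky_relu is just a hypothetical example, not something the module ships with, and the require path assumes a script placed next to the processor):
-- Hypothetical custom activation appended from another script.
local funcs = require(script.Parent.base._functions) -- same require path the processor uses
local function leaky_relu(x : number) : number
	return if x > 0 then x else 0.01 * x
end
-- Append only: existing indices stay valid, so previously saved networks keep working.
table.insert(funcs.func_list, leaky_relu)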
The output of the test script:
12:22:44.210 [NeuroNet] Created input layer with 6 inputs. - Server - constructor:142
12:22:44.210 ▶ [NeuralNet] Created layer with 8 nodes. (x3) - Server - constructor:115
12:22:44.210 [NeuralNet] Created layer with 6 nodes. - Server - constructor:115
12:22:44.211 ▶ {...} - Server - neural net test:8
12:22:45.231 Node 1 = 0 - Server - processor:180
12:22:45.231 Node 2 = -0.05600904476120605 - Server - processor:180
12:22:45.231 Node 3 = 0 - Server - processor:180
12:22:45.231 Node 4 = 0.18197623951841124 - Server - processor:180
12:22:45.231 Node 5 = 0.6186482173094424 - Server - processor:180
12:22:45.231 Node 6 = -0.42393142440776876 - Server - processor:180
...
12:22:45.327 [NeuroNet] Took 0.09615569999914442 seconds to complete forwarding. - Server - neural net test:36
12:22:45.327 Size of neural network: 36 nodes. - Server - processor:201
I’m also totally aware that some activation functions could be slow.
For the most part, however, they are necessary: this algorithm is inspired by NEAT, and I wanted to give every node the potential to have its own unique activation function, allowing for more complex interactions between nodes while keeping the neural network itself much smaller.
I might need some approximations or cheaper alternatives that give roughly the same results.
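For instance, the well-known "fast sigmoid" x / (1 + |x|) avoids exp() entirely; a rough sketch is below (it only approximates the real sigmoid's shape, so it would have to be appended as a new list entry rather than replacing the existing sigmoid):
-- Fast sigmoid approximation: x / (1 + |x|) lies in (-1 .. 1),
-- so remap it to (0 .. 1) like the real sigmoid.
local abs = math.abs
@native
local function fast_sigmoid(x : number) : number
	return 0.5 + 0.5 * (x / (1 + abs(x)))
end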
Thanks in advance.
Files
If you need to look at the module itself, I can DM it; I’d prefer not to share it here since it’s not fully done and ready to be open-sourced just YET.
Eventually I might want to release this module to the public so that more developers can create “intelligent” NPCs.
Real-time performance is very important since you’re essentially supposed to use this for sword fighting, shooting and parkour NPCs that learn and adjust to their environments.