Help on Neural Network

Hello!
So I want to make a simple neural network using this library:
Neural Network Library 2.0 - Resources / Community Resources - DevForum | Roblox
But the problem is I don't understand anything. As a developer who isn't great at English, I've been looking at the examples, like this one, but sadly I've got no idea what it does.

If anyone can tell me what the example I showed is used for, it would give me some basic knowledge of neural networks to work from!

Please help!


Great to see that you want to experiment with neural networks!

To begin with: please note that neural networks aren’t a one-size-fits-all solution. They’re a black box, are hard to debug, and their results will fluctuate (depending on your fitness function).

This guide should explain neural networks on its own:

Alternatively, you can try searching for a YouTube video explaining neural networks in a simple way. If it’s visual, it might make more sense :slight_smile:


Hi!

To use a package like this, it’s good to know how a typical neural network works, and what all the parts are called. There are many YouTube videos that explain things like forward and backward propagation. This helps you understand why each step is done.

Normally, the input neurons fire according to the input. This activates neurons in the next layers, until the output layer is reached. Because the activations travel from input to output, this is called ‘forward propagation’. The output layer is like the ‘answer’ to a question. For example, if I activate the input neurons in some way, what output does it give?
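As a rough illustration (this is my own minimal sketch, not the linked library’s code), forward propagation through a tiny hard-coded network looks something like this:

local function sigmoid(x)
    return 1 / (1 + math.exp(-x))
end

-- each neuron is just a set of weights (one per neuron in the previous layer) and a bias
local hiddenLayer = {
    {weights = {0.5, -0.2}, bias = 0.1},
    {weights = {0.3, 0.8}, bias = -0.4},
}
local outputLayer = {
    {weights = {1.0, -1.5}, bias = 0.2},
}

local function fireLayer(layer, inputs)
    local outputs = {}
    for i, neuron in ipairs(layer) do
        local sum = neuron.bias
        for j, weight in ipairs(neuron.weights) do
            sum = sum + weight * inputs[j] -- weighted sum of the previous layer's activations
        end
        outputs[i] = sigmoid(sum) -- the activation function decides how strongly this neuron fires
    end
    return outputs
end

-- forward propagation: input layer -> hidden layer -> output layer
local input = {0.9, -0.3}
local hidden = fireLayer(hiddenLayer, input)
local output = fireLayer(outputLayer, hidden)[1]
print("network output:", output) -- the 'answer' to this input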

Then comes the step where you decide if the answer that the NN gave as output was correct. The answer is just one, or a few, activated neurons, and in this step you give the neural network a score for its answer. A good answer gets a high score, a bad answer a low score. The function that does this is called the ‘Scoring function’.

Then you train the network to do better next time: if it received a bad (or good) score, a signal travels from output to input, to adjust the behaviour of each neuron. This is done using derivatives and the chain rule. Basically, it goes something like ‘if I adjust the neuron’s parameters like this, would the answer have been better this time?’. Because this signal travels from the output layer to the input layer, it is called ‘backward propagation’.

The way a neuron decides if it will fire depends on two things:
-its individual parameters, which are just a set of values (a ‘bias’ and ‘weights’). This is what is adjusted during the training step, and it decides how ‘sensitive’ the neuron is to firing when it is activated by neurons in the previous layer.
-its activation function, for example relu or sigmoid (see the small sketch below). This is normally the same for all neurons in a layer and doesn’t change during the simulation. It is a hyperparameter that you normally set in the settings of a neural network script.
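For reference, those two activation functions are just tiny formulas; in Lua they would look like this (my own sketch, not the library’s code):

local function relu(x)
    return math.max(0, x) -- passes positive values through, blocks negative ones
end

local function sigmoid(x)
    return 1 / (1 + math.exp(-x)) -- squashes any input into the range (0, 1)
end

print(relu(-2), relu(3))       --> 0  3
print(sigmoid(-2), sigmoid(3)) --> roughly 0.12 and 0.95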

So, when you use a package or library, normally most of the steps are pre-built for you. You can just tell the library:
I want a NN with 4 layers, each layer has 20 neurons, it uses relu activation function for all layers, and the scoring function is as follows, etc.
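Purely as an illustration (this is not the linked library’s actual API, just made-up names to show the idea), that usually boils down to filling in a settings table:

local settings = {
    layerSizes = {20, 20, 20, 20}, -- 4 layers of 20 neurons each
    activation = "ReLU",           -- activation function used by every layer
    scoreFunction = function(output, correctAnswer)
        return -math.abs(output - correctAnswer) -- closer answer = higher score
    end,
}
-- you would then hand `settings` to the library's constructor and let it build the network for you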

This saves you a lot of work. And that is exactly what the code you linked to is doing. So understanding all these steps and their names helps you use the library to create neural network simulations really quickly.

Now, as you probably know, neural networks have come to be used for a lot of things, and they are part of many bigger applications. They are part of convolutional neural networks, generative adversarial networks, etc. Those kinds of networks can process images and see what is in them, or create pictures of humans who never existed.

In the case of your link, the neural network is used in an ‘evolutionary algorithm’. In evolution, there is a group of neural networks. Each NN is called an individual and the group is called the ‘population’. Each NN, as before, gets input and uses it to generate answers. A scoring function decides how well it does. But now, the scores of all the individuals in the population are compared, and the worst ones are deleted (on average; there is some randomness added). The best ones have a chance to be copied, so that the population stays the same size. Sometimes in an evolutionary algorithm, several of the best ones are combined instead of making copies. But that doesn’t work well for neural networks, because trying to combine two neural networks into one of the same size normally breaks them completely.
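A heavily simplified sketch of one ‘generation’ of that selection step (an ‘individual’ here is just a score; in a real run it would carry a whole neural network’s weights and biases):

local populationSize = 10
local population = {}
for i = 1, populationSize do
    population[i] = {fitness = math.random()} -- pretend scores from the fitness function
end

-- sort best-first and delete the worst half...
table.sort(population, function(a, b)
    return a.fitness > b.fitness
end)
for i = populationSize, populationSize / 2 + 1, -1 do
    population[i] = nil
end

-- ...then refill with copies of the best, so the population stays the same size
for i = 1, populationSize / 2 do
    local copy = {fitness = population[i].fitness}
    -- a real algorithm would copy the network here and randomly mutate its parameters
    table.insert(population, copy)
end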

So in theory, this can be used instead of the backpropagation, or it can be used on top of it. Results have traditionally not been very efficient, but it’s an interesting strategy that looks a lot like natural evolution in biology. Personally I expect future breakthroughs in this area.

What you see in those scripts you linked, is a definition of all the steps I wrote above:
It imports the classes it needs from the package library
It defines the hyperparameters as settings. For example, the relu activation function for a particular layer of neurons.
Then it creates a new neural network using the library, telling it how many layers, and how many neurons in each layer, etc.
Then it starts training the model: it gives it an input, calculates the output, sends the output to the scoring function, receives a score back, and uses the score (or multiple scores) for backpropagation (‘learning’). By repeating this process, the NN or population of NNs becomes better at solving the problem.
Finally it decides it has trained enough, in these scripts simply after X runs. Now it goes to the testing phase: one last time it gives the NN a set of inputs, asks for the output, sends it to the scoring function, and finally it reports how well the network(s) did after training.
Then the script is done. Normally you would want to save the neural networks you trained, so you can use them to solve problems for you. You know how good they are (or should be) because of the testing phase.
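To make those steps concrete, here is a tiny self-contained toy that follows the same train-then-test shape. The ‘network’ is just one weight and one bias, and the learning step is a crude hill-climbing update rather than the backpropagation the library uses, but the overall structure is the same:

local w, b = 0, 0

local function net(x)
    return w * x + b
end
local function target(x)
    return 2 * x + 4 -- the function we want the 'network' to learn
end

local trainingInputs = {-5, -3, -1, 0, 2, 4}
local function trainingCost()
    local total = 0
    for _, x in ipairs(trainingInputs) do
        total = total + math.abs(net(x) - target(x)) -- loss on a single example
    end
    return total -- cost over the whole training set
end

-- training phase: try a small random adjustment, keep it only if the cost went down
for run = 1, 5000 do
    local dw = (math.random() - 0.5) * 0.1
    local db = (math.random() - 0.5) * 0.1
    local before = trainingCost()
    w, b = w + dw, b + db
    if trainingCost() > before then
        w, b = w - dw, b - db -- the adjustment made things worse, so undo it
    end
end

-- testing phase: ask about an input that was never in the training set
print("learned parameters:", w, b) -- should end up near 2 and 4
print("net(3.5) =", net(3.5), "expected:", target(3.5))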

So that’s basically the whole thing. You have to be careful of a lot of things, such as the training set and the test set being ‘independent’ (not the same items from a list used again, etc). But just for learning about this, you can try to get some of these scripts to run, adjust some hyperparameters, neural network sizes, the scoring function, etc. Look at some online courses about neural networks. You will learn a lot and it’s super interesting!

One note is that it’s very easy to create simulations that run for days, or even longer! Computers are fast, but not fast enough! Evolutionary algorithms especially are very heavy to run. Normally, you would not run this kind of program on a CPU, but on a GPU with parallel processing. A common language to do all this in is Python, which has libraries to accommodate this. I am a big fan of Lua, but the Roblox API does not allow parallel GPU computations (yet ;P), and so you can only run very simple neural networks here (which is fine, for now! :slight_smile: ). Training a neural network is the heaviest part; when you use it to just give an answer, it is not actually that heavy. So you could train a neural network elsewhere, import the parameters of all its neurons, and use them in a game. You could also use HttpService to train bigger NNs in real time on your own system. But very small NNs are fine to train in Roblox if you are not in a hurry.
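As a sketch of that ‘train elsewhere, use in the game’ idea: once training is done, all you really need are the learned parameters, which are just plain numbers you can store and load (the table layout below is made up, not the library’s format):

local HttpService = game:GetService("HttpService")

local trainedParameters = {
    layers = {
        {{weights = {0.5, -0.2}, bias = 0.1}, {weights = {0.3, 0.8}, bias = -0.4}},
        {{weights = {1.0, -1.5}, bias = 0.2}},
    },
}

-- turn the parameters into a string you can keep in a DataStore or a ModuleScript,
-- or receive from your own training server via HttpService
local saved = HttpService:JSONEncode(trainedParameters)
local loaded = HttpService:JSONDecode(saved)
print(#loaded.layers, "layers restored")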

Hope this helps, I highly recommend diving more into this, the future of AI is huge!


Okay, neural networks aren’t an easy topic to get into. What do you want to achieve?

Genetic algorithms aren’t the same thing as neural networks. Rather, a genetic algorithm is one way to train a neural network. Think of the infinite Shakespearean monkey.

To get into neural networks you have to have a solid understanding of basic maths.

What is a Neural Network?

To put it as simply as possible a neural network is a way to represent a mathematical function.

A function is basically a box where you put input in and output comes out, e.g.

-- takes a number, doubles it and adds 4
function myFunc(input: number)
    return 2 * input + 4
end

Neural networks use weights, biases, and activation functions to create much more complex functions.

Neural networks get very hard very quickly, so instead of asking here you should read articles online that will help. Also, here is a very simple neural network builder.

Is a link to an online Neural Network builder

Yt link to simple tutorial on neural network


(screenshot of the example code)
So this is what I understand:

the math.abs(output.out - correctAnswer) is called the “Scoring function”?

Sorry, I didn’t explain that part well. “Scoring function” is actually not the right term. The terminology is not the same for evolutionary algorithms and neural networks.

There is a function somewhere called ‘ScoreFunction’. This is part of the GeneticSettings that are given to the constructor that creates a new ParamEvo instance.

local geneticAlgo = ParamEvo.new(tempNet,population,geneticSetting)

For evolutionary algorithms, a much more common term for this score function is actually the “fitness function”. The score, or ‘fitness’ determines how well the individual competes against others in the population.

When we talk about a neural network, the function that compares the given answers to the expected answers, and gives back a ‘score’, is called the “loss function”. When we average out the loss functions we call that the “cost function”. “Cost” is the term you see coming back into the function names of the library, and used in backpropagation during the ‘learning’ step.
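As a rough illustration (the function names here are just mine):

-- loss: how wrong one single answer was
local function loss(output, correctAnswer)
    return math.abs(output - correctAnswer)
end

-- cost: the average loss over a whole batch of answers
local function cost(outputs, correctAnswers)
    local total = 0
    for i = 1, #outputs do
        total = total + loss(outputs[i], correctAnswers[i])
    end
    return total / #outputs
end

print(cost({0.9, 0.2, 0.7}, {1, 0, 1})) --> (0.1 + 0.2 + 0.3) / 3 = 0.2

In the example code it looks like this: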

for generation = 1, generations do
    local coords = {x = math.random(-400,400)/100, y = math.random(-400,400)/100} -- random training coordinate
    local correctAnswer = {out = isAboveFunction(coords.x,coords.y)} -- what the network should answer
    backProp:CalculateCost(coords,correctAnswer) -- feed this example to the network and record the cost
    if generation % numOfGenerationsBeforeLearning == 0 then
        backProp:Learn() -- apply the accumulated learning step (backpropagation)
    end
end

From the example code, it’s not clear which cost function is used to calculate the cost, it seems to be something internal defined by the backpropagator class.

It is similar in concept, but I shouldn’t have mixed them up: “Scoring function” is not an official term. Both the fitness function and the loss function are scoring things, but it’s better to see them separately.

I’m not sure how the evolutionary algorithm is implemented to train the networks exactly, but I suspect it’s just by random mutation, without any backpropagation. By making more copies of the best performing individuals and mutating them, the neural networks still improve over time.


Thanks, I’m getting much more info about it. Some more questions:

I’m trying to make a simple location guesser, and I have coordinate data named “coords”.
For example, say this coords is x = 213, y = -93.
Since
local output = net(coords)
the output it sends back is between 0 and 1.
So how will I actually know what the exact value is?
(If you don’t understand me, you can tell me which part you don’t understand, since I’m not good at English.)

I’m not sure if I get it right, but there is a function:

-- returns 0 when the point (x, y) lies above the curve y = x^3 + 2x^2, otherwise 1
function isAboveFunction(x,y)
    if x^3 + 2*x^2 < y then
        return 0
    end
    return 1
end

This function tells the program what the ‘correct’ answer should be. In your case it seems you would want to replace it, so that it returns 1 when the value is exactly your target and 0 otherwise.

Normally you would probably give the NN a bigger ‘target’, so to say in that function that the distance to your target must be within X. The point of a neural network is, after all, that it should be able to predict answers to questions it never heard before. In the example it only uses integers, and trains on all available integer coordinates. But if you would give it a decimal number, it should still be expected to give a good answer, if it has been trained well. That doesn’t really make much sense if only a single point is considered ‘correct’. Also, comparing decimal numbers to be exactly the same is usually a bad idea. So best to give it a function to say that the ‘correct answer’ (when the function returns 1) is when the coordinates are close to [213,-93].
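For example, something along these lines (my own sketch; the target point and radius are just assumptions, using the same /100 scaling as the example code):

local TARGET = {x = 2.13, y = -0.93} -- your point, scaled the same way as the example's coords
local RADIUS = 1                     -- how close counts as 'correct'

local function isNearTarget(x, y)
    local dx, dy = x - TARGET.x, y - TARGET.y
    if math.sqrt(dx * dx + dy * dy) <= RADIUS then
        return 1 -- close enough to the target
    end
    return 0 -- too far away
end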

If the target area is very small, it also means the NN should almost always say the correct answer is 0. Normally in AI it is often best to try to use a ‘balanced data set’, which means that the correct answer is 0 about equally often as the correct answer being 1. So 50% 50% is ideal, but in this case it doesn’t have to be very close to 50/50. There are many tricks to train using a more balanced dataset (‘upsampling’ or ‘downsampling’ for example), but for this test, as explained, I would just make the target area a bit bigger, not just one target coordinate. I’m not sure how it would affect performance in these examples to have a very unbalanced data set, but training will probably take much longer. And you might draw the conclusion that your algorithm is correct over 99% of the time, when it simply always answers 0! :slight_smile:
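One simple way to get a more balanced set of training points (again just my own sketch, reusing the target and radius from above) is to sample near the target about half of the time:

local TARGET = {x = 2.13, y = -0.93} -- same target point as before
local RADIUS = 1

local function sampleTrainingCoords()
    if math.random() < 0.5 then
        -- about half the time: a point inside the target area (correct answer will be 1)
        local angle = math.random() * 2 * math.pi
        local dist = math.random() * RADIUS
        return {x = TARGET.x + math.cos(angle) * dist, y = TARGET.y + math.sin(angle) * dist}
    else
        -- the other half: a point anywhere in the play area (correct answer will almost always be 0)
        return {x = math.random(-400, 400) / 100, y = math.random(-400, 400) / 100}
    end
end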

I’d say try it out, and see what the difference is, that’s the best way to gain more insight.
