Now, I know little about constructing NNs, but I know something about how they work.
What I know
I know that Neural Networks typically have 3 or 4 layers:
the first one is the input, the second and third ones are the ‘hidden layers’, and the last one is the output.
But, I know nothing about how to construct them.
The input layer is where you give the input, the hidden layers process it and make a decision, and that decision goes to the output layer.
I decided to consult the dev-forum on constructing this, but couldn’t find any good info.
You’re correct: Neural networks consist of an input layer, one or more hidden layers, and an output layer. The number of neurons in the input layer is the number of variables you’re passing into the NN, and the number of neurons in the output layer is the number of variables you want to get out.
The number and size of the hidden layers are chosen somewhat arbitrarily. Bigger NNs take longer to train, but can model more complex problems. You’ll want to experiment with this to figure out what works best for your use-case.
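To make that concrete, here’s a minimal sketch (the numbers are made up): a network that takes four input variables and produces two output values could be described by a list of layer sizes, where the hidden-layer sizes are whatever you decide to experiment with.

```python
# Hypothetical layer layout: 4 input variables, two hidden layers of
# 8 neurons each (an arbitrary choice to tune), and 2 output values.
layer_sizes = [4, 8, 8, 2]
```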
Each neuron (aside from the input layer neurons) has one “weight” value per neuron on the previous layer and one “bias” value. The “weight” determines how much that neuron on the previous layer affects the result. To get a neuron’s output, you sum the activations of all neurons on the previous layer, each multiplied by its corresponding weight, and add the bias value. You then pass that number through an “activation function”, i.e. some function that limits the range of output values. Early neural networks used a sigmoid function for this purpose, but other activation functions (like ReLU) just set a lower or upper limit and are linear otherwise. Again, you’ll have to experiment with what works best for you.
The activation of an input neuron is just the input variable itself (no weights or activation function are applied there). The activation of an output neuron determines the value of your output.
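To make the weighted-sum-plus-bias idea concrete, here’s a minimal sketch (not part of the explanation above; the sigmoid is just one possible choice of activation function, and all the numbers are invented):

```python
import math

def sigmoid(x):
    # Classic activation function: squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron_activation(prev_activations, weights, bias):
    # Weighted sum of the previous layer's activations, plus the bias,
    # passed through the activation function.
    total = sum(a * w for a, w in zip(prev_activations, weights)) + bias
    return sigmoid(total)

# Invented example: a neuron with three neurons on the previous layer.
prev_activations = [0.5, 0.1, 0.9]   # could also be the raw input variables
weights = [0.8, -0.3, 0.5]           # one weight per previous-layer neuron
bias = 0.1
print(neuron_activation(prev_activations, weights, bias))
```

A full layer is just this computation repeated for every neuron in it, and a full forward pass repeats it layer by layer until you reach the output layer.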
Then there are different ways to train your neural network. One is to evolve the weights and biases with a genetic algorithm (sometimes called neuroevolution), which is essentially a brute-force way of finding values that work. Alternatively, you can do “supervised learning” via back-propagation (scroll down on that page to get a step-by-step explanation).
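For a very small taste of the supervised approach, here’s a sketch of gradient descent on a single sigmoid neuron. It’s a toy stand-in for full back-propagation, and the dataset and learning rate are invented purely for illustration:

```python
import math
import random

def sigmoid(x):
    # Squash any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Toy dataset (made up): output 1 when the first input is bigger than the
# second, else 0.
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0),
        ([0.9, 0.2], 1.0), ([0.1, 0.8], 0.0)]

weights = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
bias = 0.0
learning_rate = 0.5

for epoch in range(1000):
    for inputs, target in data:
        # Forward pass: weighted sum plus bias, through the activation function.
        z = sum(i * w for i, w in zip(inputs, weights)) + bias
        output = sigmoid(z)
        # Backward pass: gradient of the squared error through the sigmoid.
        grad = (output - target) * output * (1.0 - output)
        # Nudge each weight and the bias against the gradient.
        for j, i in enumerate(inputs):
            weights[j] -= learning_rate * grad * i
        bias -= learning_rate * grad

# Check what the neuron learned.
for inputs, target in data:
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    print(inputs, "->", round(sigmoid(z), 3), "target:", target)
```

Back-propagation in a real network does the same kind of gradient step, just chained backwards through every layer.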
There’s a lot more to this than I can feasibly explain in one post, so I suggest you check out 3blue1brown’s YouTube playlist on the topic. This should give you the understanding necessary to get into neural networks and basic machine learning.