Can somebody explain how synapses work when forward propagating within the hidden layer?

What I don’t understand is what exactly a deep learning network is for. Why not just group everything into one function? Is it to create more weights for a more precise result? Or to have more biases? Even then, the outputs pass through sigmoid functions, so what is the point of having multiple layers? To have different activation functions?


I don’t think I have enough knowledge to answer this confidently.

I’ll start by making assumptions:
I am guessing this is what you are talking about:
A list of inputs connected to a single neuron, with an activation layer and output.

[image: network]

I will illustrate this with an example using a linear classifier, with the objective of separating values. Yes, this isn’t a deep neural network, but bear with me here. You can probably see how it connects!

This is the data that needs to be classified. The classifier will output a mathematical expression that correctly divides it into separate classes.

[image: data]

With a single layer, we can divide it easily, like so. Think of the output as the rotation of the line.

[image: class]
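A single linear layer like this can be sketched in a few lines of Python. The weights below are hypothetical, picked by hand (not learned) so the decision boundary is the line y = x:

```python
# One linear "layer": the decision boundary is the line w1*x + w2*y + b = 0.
# These weights are hand-picked for illustration, not learned from data.
w = (1.0, -1.0)
b = 0.0

def classify(point):
    # Weighted sum of the inputs, then a hard threshold on which side
    # of the line the point falls.
    x, y = point
    score = w[0] * x + w[1] * y + b
    return 1 if score > 0 else 0

print(classify((2.0, 0.5)))  # below the line y = x → 1
print(classify((0.5, 2.0)))  # above the line y = x → 0
```

Rotating the line just means changing `w` and `b`; that is all a single layer can do.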

But what if our data set is much more complicated? What if a single line isn’t enough to classify data? We turn to adding more layers.

[image: complex]

In essence, the number of layers correlates with the complexity of the input and output.
This is an incredibly simplified explanation, and it only shows an example for linear classifiers. This isn’t even a deep neural network.
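The classic case where one line isn’t enough is XOR: no single straight line separates its outputs, but two hidden neurons plus an output neuron can. Here is a minimal sketch with hand-picked (not learned) weights and a simple step activation:

```python
def step(z):
    # Hard threshold activation: the neuron "fires" when its weighted
    # sum is positive.
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two neurons with hand-picked weights and biases.
    h1 = step(x1 + x2 - 0.5)    # behaves like OR
    h2 = step(-x1 - x2 + 1.5)   # behaves like NAND
    # Output layer: combines the hidden neurons (behaves like AND).
    return step(h1 + h2 - 1.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # prints 0, 1, 1, 0 in that order
```

Each hidden neuron draws its own line; the output layer combines the two half-planes into a region no single line could describe.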

If you want an example of why an actual neural network uses multiple layers: convolutional networks need convolutional layers to take in, let’s say, pixels (to simplify the input). After multiple rounds of convolution, pooling, activation, and so on, peeking under the hood makes it look like the network is chunking the data, maybe organizing it into lines and squiggles. Then it’s up to the last few layers to take the important features and churn them into an output.
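To make the "lines and squiggles" idea concrete, here is a toy sketch of one convolutional step (the image and kernel values are made up for illustration). A small kernel slides over the pixel grid; this particular kernel responds strongly wherever brightness jumps from left to right, i.e. at a vertical edge:

```python
def convolve2d(image, kernel):
    # "Valid" convolution: slide the kernel over the image with no padding,
    # taking a weighted sum of the pixels under it at each position.
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Tiny made-up image: dark on the left, bright on the right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
# Hand-picked kernel that detects vertical edges.
kernel = [[-1, 1],
          [-1, 1]]
print(convolve2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]]
```

The output lights up only where the edge is; later layers would combine many such feature maps into higher-level shapes.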

Please… I think someone else can give a better answer. I am not completely confident in this post; I am mainly a game designer, after all.


Yeah, I know how activation functions work. I was asking what exactly these “functions” are that the hidden layers receive. Right now this is my current code (Python):

Basically, the black box (the hidden layers) uses that information to interact with the game’s ecosystem and then returns data inside the hidden layer.

A neuron holds some number (its activation), and each synapse holds a weight? The synapse transfers that data to the next layer of neurons, and so on until it reaches the output? I am unsure what your question is. Not all activation functions are sigmoid; there’s one called ReLU, for example. There is a need for activation functions because they determine whether the input from the previous layer is significant.
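In code form, forward propagation through a hidden layer is just: for each neuron, take the weighted sum of all inputs from the previous layer, add that neuron’s bias, and pass the result through an activation function. A minimal sketch (all weight and bias values here are hypothetical, chosen only to show the mechanics):

```python
import math

def sigmoid(z):
    # Squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # Passes positive values through, zeroes out negatives.
    return max(0.0, z)

def layer(inputs, weights, biases, activation):
    # Each neuron: weighted sum of all inputs (the synapse weights),
    # plus its own bias, then the activation function.
    return [activation(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

# Hypothetical 2-input -> 3-neuron hidden layer -> 1-output network.
inputs = [0.5, -1.0]
hidden = layer(inputs,
               weights=[[0.2, 0.8], [-0.5, 0.3], [1.0, -1.0]],
               biases=[0.1, 0.0, -0.2],
               activation=relu)
output = layer(hidden,
               weights=[[0.6, -0.4, 0.9]],
               biases=[0.05],
               activation=sigmoid)
print(hidden, output)
```

So the “function” a hidden layer receives is nothing mysterious: it is just the list of activation values computed by the previous layer, which become the inputs to the next weighted sum.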