So I understand, and have written, machine learning programs where you adjust the weights via gradient descent to reach a minimum of a loss function. However, this approach only works if we know the end goal. I have been reading into how neural networks work in dynamic environments and I have stumbled onto genetic algorithms. I don't really understand how to apply the algorithm (do I just apply it to the activation function?). On top of that, I don't really understand what the Greek letters mean either lol. Any help would be great.
This uses GA. (open sourced / uncopylocked)
GA is a way to optimize fitness by creating many different versions of a neural network (with some randomization), culling the versions with the lowest fitness, and then creating more versions by mixing together different properties of those with the highest fitness, while possibly adding random mutations. This is repeated until a neural network with a high enough fitness is created, or until the overall fitness of all of the networks converges.
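The loop described above (evaluate, cull, crossover, mutate, repeat) can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: it assumes the network's weights are flattened into a plain list of floats, and all the names and parameters (`pop_size`, `mutation_rate`, `elite_frac`, etc.) are made up for the example.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def evolve(fitness, genome_len, pop_size=30, generations=50,
           mutation_rate=0.1, elite_frac=0.5):
    """Toy GA: keep the fittest half, breed replacements by
    crossover plus occasional mutation."""
    # Start with random genomes (think: a network's flattened weights).
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and cull the weakest.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:int(pop_size * elite_frac)]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Uniform crossover: each gene comes from a random parent...
            child = [random.choice(pair) for pair in zip(a, b)]
            # ...with occasional random mutation on top.
            child = [g + random.gauss(0, 0.3)
                     if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: how close the genome is to the all-ones vector.
def fitness(g):
    return -sum((x - 1) ** 2 for x in g)

best = evolve(fitness, genome_len=5)
```

In a real setup, `fitness` would come from running each candidate network in the environment (e.g. distance travelled, score reached) rather than from a closed-form function, and that is exactly what makes GA usable when there is no differentiable loss to descend.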
Thanks for the insight, but I've already gotten plenty of article spam, and some dude even gave me a paper from 1989.