Hello, I am new to the DevForum. I implemented the algorithm described in this paper after seeing SethBling apply it to a Mario emulator, while being bored at a conference for my university. http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf
I may be wrong about some specifics as this is all stuff I learned while researching to make this.
A neural network uses the perceptron model to create complex systems that evaluate problems normally difficult for computers, such as recognizing handwritten letters. Usually these networks have a fixed shape, or topology, determined by the user. Neural networks improve their performance automatically, and the common methods of doing this are:
- Backpropagation, where the user knows the correct answer for a given input and tells the network how wrong it was. The network then adjusts its weights (the strengths of the connections between neurons) starting from the outputs and working backward.
- Through a genetic algorithm. A bunch of networks are tested at once, and the best ones are chosen to "combine" (called crossover) and create the new networks, with small mutations. The best ones are determined by a "fitness" function, which is defined differently for every problem, so you could say it works on survival of the fittest.
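To make the genetic algorithm loop concrete, here is a minimal sketch in Python. All the names and numbers here are illustrative, not taken from my actual module: score everyone, copy the elites unchanged, then fill the rest of the population with mutated crossovers of fit parents.

```python
import random

def evolve(population, fitness_fn, elite_count=2, mutation_rate=0.1):
    """One generation of a bare-bones genetic algorithm.
    A genome here is just a list of weights."""
    scored = sorted(population, key=fitness_fn, reverse=True)
    next_gen = scored[:elite_count]  # elitism: the best few survive unchanged
    while len(next_gen) < len(population):
        # Pick two parents from the fitter half of the population.
        mom, dad = random.sample(scored[:len(scored) // 2], 2)
        # Crossover: each weight comes from one parent at random.
        child = [random.choice(pair) for pair in zip(mom, dad)]
        # Mutation: occasionally nudge a weight by a small random amount.
        child = [w + random.gauss(0, 0.5) if random.random() < mutation_rate else w
                 for w in child]
        next_gen.append(child)
    return next_gen

# Toy problem: evolve 3 weights toward the target vector [1, 2, 3].
target = [1.0, 2.0, 3.0]
fitness = lambda g: -sum((w - t) ** 2 for w, t in zip(g, target))
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(30)]
for _ in range(200):
    pop = evolve(pop, fitness)
best = max(pop, key=fitness)
```

Because the elites are copied verbatim each generation, the best fitness never goes down, which is the same reason I enabled elitism in my own runs.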
The algorithm described in the paper, NEAT, uses genes separated into nodes and connections. This isn’t 100% how it works, but think of a node as an if statement for a certain condition, and the connection as how convinced the AI is of that if statement. More nodes mean more complex decisions can be made, and more finely tuned connection weights mean that the right decision is made more often from the available options. Usually, the shape of the node configuration is determined by the user. NEAT is unique in that it builds this shape itself, avoiding unnecessary connections/nodes.
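As a rough sketch of how a NEAT genome is laid out (the field names are my own guesses at a reasonable layout, not taken from the paper or my module): each connection gene carries an innovation number, a global counter that lets crossover line up matching genes between two parents, and the "add node" mutation grows the topology by splitting an existing connection.

```python
from dataclasses import dataclass

@dataclass
class NodeGene:
    id: int
    kind: str  # "input", "hidden", or "output"

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool
    innovation: int  # global counter; lets crossover align matching genes

# A minimal genome: 2 inputs feeding 1 output, no hidden nodes yet.
nodes = [NodeGene(0, "input"), NodeGene(1, "input"), NodeGene(2, "output")]
connections = [
    ConnectionGene(0, 2, 0.7, True, innovation=1),
    ConnectionGene(1, 2, -1.3, True, innovation=2),
]

# An "add node" mutation splits a connection: the old one is disabled,
# a hidden node is inserted, and two new connections take its place,
# each stamped with a fresh innovation number.
connections[0].enabled = False
nodes.append(NodeGene(3, "hidden"))
connections.append(ConnectionGene(0, 3, 1.0, True, innovation=3))
connections.append(ConnectionGene(3, 2, 0.7, True, innovation=4))
```

This is why NEAT can start minimal and only grow structure when a mutation proves useful, instead of the user picking a topology up front.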
Info for upcoming Gyazos:
The networks are represented by parts trying to reach a goal. The parts have the ability to turn right, turn left, start moving, and stop moving of their own “free will”. The inputs to their networks (in humans, these would be our 5 senses) are the relative positions of the parts hit by 3 rays, cast from their LookVector, RightVector, and -RightVector. Other inputs are their current position, current velocity, and distance from the goal. Fitness is defined by how close the part gets to the goal. Hitting a wall cuts fitness in half, and reaching the goal multiplies fitness by 10. The other parameters are the defaults shown here. I have elitism enabled, meaning the best few from each generation are copied into the next one as a benchmark. Different colors indicate different species. https://gyazo.com/bcf1922fe598188085aad2461a6a7624
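The fitness rules above (closer to the goal is better, halved on a wall hit, multiplied by 10 on reaching the goal) could be sketched like this. The exact scaling in my script may differ; this is just the shape of it:

```python
def fitness(distance_to_goal, max_distance, hit_wall, reached_goal):
    """Base fitness grows as the part gets closer to the goal."""
    score = max_distance - distance_to_goal
    if hit_wall:
        score *= 0.5   # hitting a wall cuts fitness in half
    if reached_goal:
        score *= 10.0  # reaching the goal multiplies fitness by 10
    return score
```

For example, a part that stops 10 studs from the goal on a 100-stud course scores 90, the same part after clipping a wall scores 45, and a part that actually reaches the goal scores 1000, which is why the wall-avoiders and the goal-rushers split into such distinct strategies early on.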
Early on, they learned 2 major paths to having a high fitness. One path was going in a straight line and dying close to the goal, and one path was avoiding walls until the time ended so that their fitness was never cut in half, putting them above virtually every part that did die.
Notice the light gray one near the wall to the right (path 2) vs the ones that just go straight/curve a little bit (path 1)
My hope was that eventually, the two genes causing these behaviors would be combined through mating, and around generation 120, they were.
Notice how the dark gray one does not necessarily reach the goal, but it tries to while stopping when its front ray detects a nearby wall.
I added a wall after 250 generations and they seem to be adjusting pretty quickly to it. This is after only one generation of the wall being added:
After only 3:
I will move the walls randomly once it seems easy for them to reach the goal. I will then start moving the goal around when they learn to avoid any configuration of walls.
EDIT: Just showing after around 100 generations since I never showed a clear gif where the solution was found.
High performance cost. The script runs at about 6% memory at all times (which is alright given that it is evaluating 65 growing neural networks and casting 3x as many rays every 0.03 seconds, but I’m not satisfied), and later generations can spike to 50%+ when generating the new population, which I thought was just a bug I had to find. (EDIT: not a bug. Population generation runs in an abysmal O(n^k) time, where k is the population size minus the number of children copied via elitism, and n is determined by the total number of connection mutations, which starts at 0 but has a chance of increasing slightly every generation.)
Stuck. The different species sometimes get stuck on their solutions and take a long time to deviate. A higher interspecies mating rate can bandage, but not completely solve, this issue. Because of it, certain species sometimes never reach their goal and get purged for staleness, even when they were very close to a solution.
- I will make a GUI (likely before the day ends) that shows the shape of the network when you click a genome.
- I will add speed of reaching the goal to the fitness function.
- I will add more configurations so that evolution can happen faster for different problems.
- I will publish this as a public module, if anyone is interested.
- I am going to release a demo game where your character learns how to walk through a maze on their own or maybe learns how to fight in a wave/tower defense type game.
Sorry for so many edits.
Any feedback would be greatly appreciated. Just wanted to show someone this.