Self-Driving Neural Network Cop Cars

I’ve gotta ask, if you had to spend so much time on it anyway, why not just make conventional AI?

3 Likes

How are you training the network to navigate and collide more accurately? My first thought is an adversarial setup where you also have another car that’s trying to avoid all of the police cars.

1 Like

Very cool stuff! Congrats on the great progress.

Would you mind expanding a little bit on how you went about implementing this on Roblox?
Are you using a NN library? Or did you implement your own tools?

11 Likes

Very cool. Keep up the good work!

6 Likes

I’m very impressed with this so far.

Are you going to have something similar to Need for Speed Most Wanted 2005’s police system? It looks like it so far!

1 Like

Low-key, I could have. I’m not sure it would have done as well in some scenarios, though. The nice thing about a NN is that once training is done, I don’t have to think about anything but the path I give it. Drifting, turns, etc. are all handled by it and work fairly well at any speed. We also have the added benefit of now having a ‘perfect driver’, so we can put different cars on the same route and use the same training data. Now we can see what a perfect driver is capable of doing with the car, and tune the car until it acts in a satisfactory way. (Ex: the Lambo drifts too aggressively for the AI to figure out, so tune the Lambo’s stats until it works right.)

This one was actually pretty easy :smiley: I just have two rays, one on each side of the car, that extend further forward as the car moves faster. They help it avoid obvious things like walls and, most of the time, foliage (which it can run over and destroy anyway, so that’s not a big issue). I tell the NN what % of the ray made it through unobstructed and it does the rest. (Ex: I cast a ray 10 studs long and it hits something 6 studs in, so I feed the network 0.6 as an input.)
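That normalization scheme fits in a few lines. Here is a minimal sketch (in Python for illustration, since the original is Lua; the names and the speed-scaling constants are hypothetical) of turning a ray hit into the network input described above:

```python
def ray_input(ray_length, hit_distance):
    # Fraction of the ray that was clear before the hit; 1.0 if nothing was hit.
    # Ex from the post: a 10-stud ray hitting at 6 studs in -> 0.6.
    if hit_distance is None:
        return 1.0
    return max(0.0, min(hit_distance / ray_length, 1.0))

def sensor_ray_length(speed, base_length=10.0, speed_scale=0.5):
    # Hypothetical scaling: rays reach further forward as the car speeds up,
    # so the network "sees" obstacles earlier at high speed.
    return base_length + speed * speed_scale
```

Feeding a bounded [0, 1] value like this keeps the sensor input in the same range regardless of how long the ray is at the current speed.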

I just figured it out myself. Turns out the algorithm can be done in like < 100 lines of Lua code. It just took me a long time to figure out what those lines and variables had to be >_<

Actually yes :joy:

13 Likes

Congrats as well, @ScriptOn on your award for Best NeuralNetGuy2018.

15 Likes

I thought so!

You going to have radio chatter and stuff like that?

3 Likes

Thanks for sharing your whole experience in such detail. Also, thanks for referencing the tutorials, that channel is always great!

2 Likes

I am in love! Reading this made me realize how insane neural networks are and how powerful they can be! I will definitely be studying this from now on until I have a great understanding of it. Thanks for posting this.

4 Likes

Wow 😯! That looks awesome. How many lines did you use? It just looks so unbelievable. That game you’re making is going to look awesome.

May I ask which activation function did you use?
I hear that ReLU or Leaky ReLU are the best nowadays, along with a softmax function in the last layer, since ReLU only applies to the hidden layers. I’m still trying to figure out how to backpropagate through it, though.

If you do use ReLU and softmax, may I know how you backpropagate through them?
Thanks in advance, inspiring work btw!
MXKhronos

ReLU without softmax on the last layer.

3 Likes

You backprop through all activation functions the same way: one big chain rule through all the derivatives to work out what change to a node’s weight would correct the error measured at the node’s output. Activation functions all have simple derivatives, and ReLU’s is the simplest of all. It’s piecewise defined as f(x) = 0 for x < 0 and f(x) = x elsewhere, so the derivative is just 0 for x < 0 and 1 for x >= 0. Leaky ReLU is not much more complicated: you just use some small slope for x < 0, usually 0.01, so that negative values aren’t completely zeroed out, just heavily discounted. SmoothReLU (softplus) is an exponential curve asymptotic to ReLU, ln(e^x + 1), and its derivative is the logistic sigmoid, which is also commonly used as an activation function.
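To make those derivatives concrete, here is a small sketch (Python for illustration, illustrative names only) of the three activation gradients just described, plus the single chain-rule step that carries an error signal back through an activation:

```python
import math

def relu_grad(x):
    # f(x) = 0 for x < 0, x otherwise  =>  f'(x) = 0 for x < 0, 1 for x >= 0
    return 1.0 if x >= 0 else 0.0

def leaky_relu_grad(x, slope=0.01):
    # Negative inputs are discounted by `slope` rather than zeroed out.
    return 1.0 if x >= 0 else slope

def softplus_grad(x):
    # SmoothReLU is ln(e^x + 1); its derivative is the logistic sigmoid.
    return 1.0 / (1.0 + math.exp(-x))

def backprop_through_activation(upstream_grad, pre_activation, grad_fn=relu_grad):
    # One chain-rule step: dL/d(pre_activation) = dL/d(output) * f'(pre_activation)
    return upstream_grad * grad_fn(pre_activation)
```

Note how a negative pre-activation under plain ReLU kills the gradient entirely (the "dying ReLU" problem), which is exactly what the leaky variant's small slope is there to avoid.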

Softmax is something else entirely, it’s used for converting the raw output of the final hidden layer to a normalized probability distribution, typically scores for some N number of classification categories that sum to 1.0. Backprop through its summations is too involved for a devforum post, but here is a good reference derivation: The Softmax function and its derivative - Eli Bendersky's website
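Even without the full derivation, the result is compact enough to sketch. Here is a minimal softmax plus its backward pass (Python for illustration; `softmax_backward` applies the Jacobian dS_i/dz_j = s_i * (δ_ij − s_j) worked out in the linked reference):

```python
import math

def softmax(z):
    m = max(z)  # subtract the max for numerical stability before exponentiating
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_backward(s, grad_out):
    # Given s = softmax(z) and grad_out = dL/ds, return dL/dz.
    # dL/dz_j = s_j * (dL/ds_j - sum_i dL/ds_i * s_i),
    # which follows from dS_i/dz_j = s_i * (delta_ij - s_j).
    dot = sum(si * gi for si, gi in zip(s, grad_out))
    return [si * (gi - dot) for si, gi in zip(s, grad_out)]
```

A useful sanity check: the outputs of `softmax` always sum to 1.0, and the components of `softmax_backward` always sum to 0, since shifting all logits by a constant cannot change the distribution.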

You wouldn’t use softmax for car AI; it’s for classifying inputs into a discrete set of classes (e.g. deciding whether a photo is a cat, tree, doge, etc.). You’d use a regression error function for steering a car, like squared error terms: how far off course you are, in various directions, squared so that your error is guaranteed to have a minimum you can gradient-descend your way towards (think of how a parabola is like a bowl: it has a bottom, which is where you want to end up – the minimum error). Of course, a very complex multivariate loss function can have local minima too, a major issue for solving things with NNs, since a local minimum that is not the global minimum is not the lowest-error solution. Part of working with NNs is learning techniques to avoid these less-optimal solutions.
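The bowl intuition can be demonstrated in a few lines: for a single squared-error term, gradient descent always slides to the bottom of the parabola (Python sketch with hypothetical names, one scalar parameter instead of a full network):

```python
def squared_error(pred, target):
    # The parabola "bowl": zero error exactly at pred == target.
    return (pred - target) ** 2

def error_grad(pred, target):
    # d/dpred (pred - target)^2 = 2 * (pred - target)
    return 2.0 * (pred - target)

def descend(pred, target, lr=0.1, steps=100):
    # Repeatedly step downhill; this loss is convex, so descent always
    # converges to the single (global) minimum for a small enough lr.
    for _ in range(steps):
        pred -= lr * error_grad(pred, target)
    return pred
```

The convexity is the whole point: with one squared term there is only the global minimum, and local-minima trouble only appears once many such terms are composed through a nonlinear network.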

6 Likes

I love your explanation! Thank you for helping!

2 Likes

Thanks a lot for the information! I got it working and now I’m very excited to apply it.

Do you think you’d ever release a version of this? Maybe release a tutorial on making something similar?

3 Likes

I’m so glad this got bumped. I’ve been wanting to dip my feet into neural networks for AI, and seeing that something like this is possible on Roblox is CRAZY.

I probably will get off my butt and actually try it now. I know I’m a year late, but thanks for sharing.

1 Like

Why is the police car in the video so jittery?

I run it in Studio and the chase camera stutters there.

I don’t really have anything I can release (this is something I was paid to make for someone else), but I’d be happy to help anyone that has questions. Please stop DMing me and ask them here instead so everyone can benefit!

1 Like