It definitely is working! I was a bit lazy and hard-coded it to `if Throttle == 1` (I definitely need to add a speed variable), but it still made the car drive in an almost perfectly straight path, and it almost cleared the entire curve before crashing into the straight road in front of it. I have definitely not made it this far before, especially considering it got that far just 4 seconds into training. I will leave it here for today and test it again tomorrow; I'll let you know how it goes as usual!
Release 1.16 Version Update!

- Refactored and renamed Deep Q-Learning, Deep SARSA and Deep Expected SARSA, including their variants.
- Made some bug fixes and removed redundant code for some of the algorithms stated above.
- That's pretty much it…
Hello guys!
I have uploaded version 5 of the sword-fighting AI code. Version 5 brings back some of the code from version 1 so that the AIs can learn more advanced tactics. It is also combined with version 4, since the AIs in that version learned much faster than in the previous versions.
Also, credit to @noisecooldeadpool362 for providing the code improvements related to angle calculations; these will be applied to future versions of the sword-fighting AI code.
Hilo! Currently I am using the raw magnitude distance as an input to tell the car how far each raycast went on each side, so the value can reach well up to 50-100. I was wondering if there is a way to compress this value, since inverse distance causes it to perform worse than the raw raycast magnitude distance.
PS: The car reaches the curve more often now! (nevermind, still likes to crash into the wall)
Apply a log function to the distance.
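Something like this, as a rough sketch (the `MAX_DISTANCE` value is just an assumption for your track):

```lua
local MAX_DISTANCE = 100 -- an assumption; the longest distance your rays report

local function compressDistance(rawDistance)
	-- log(1 + x) keeps 0 at 0 and grows slowly, so 50-100 stud readings
	-- no longer dominate the input scale; dividing normalizes to [0, 1].
	return math.log(1 + rawDistance) / math.log(1 + MAX_DISTANCE)
end

print(compressDistance(0))   --> 0
print(compressDistance(10))  --> ~0.52
print(compressDistance(100)) --> 1
```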
Hey, I just realized it might be better to detect whether the car has reached the destination instead of using the distance: 0 for not reaching the destination, and 1 when it has reached it.
I don't think the magnitude of the distance carries much weight compared to the orientation of the target location. That should make training faster.
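A rough sketch of those two inputs, a binary "reached" flag plus the orientation of the target relative to the car (the reach radius and names are just assumptions for your setup):

```lua
local REACH_RADIUS = 10 -- studs; an assumption, tune to your track

local function getDestinationInputs(carPart, targetPosition)
	local offset = targetPosition - carPart.Position
	local reached = (offset.Magnitude <= REACH_RADIUS) and 1 or 0
	if reached == 1 then
		return reached, 0 -- no meaningful angle once we are on top of the target
	end

	-- Signed angle (radians) between the car's facing direction and the
	-- target, projected onto the horizontal plane.
	local forward = carPart.CFrame.LookVector
	local flatOffset = Vector3.new(offset.X, 0, offset.Z).Unit
	local flatForward = Vector3.new(forward.X, 0, forward.Z).Unit
	local angle = math.atan2(
		flatForward:Cross(flatOffset).Y,
		flatForward:Dot(flatOffset)
	)

	return reached, angle
end
```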
Hmm, the thing is the AI doesn't seem to know exactly where to go. I think I have an idea: why not place him somewhere easier to drive, somewhere with no turns? That way I can teach him to avoid the walls first, because that is most likely what is confusing him. Once he reaches the goal, I can raise the difficulty more and more, adding turns and eventually a 4-way.
What do you think?
Yeah, that concept already exists in research papers. They call it "Curriculum Learning", and it has been proven to work.
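A minimal sketch of how the staged spawns could look; the spawn part names and the success threshold are hypothetical:

```lua
local stages = {
	workspace.StraightRoadSpawn, -- no turns: learn wall avoidance first
	workspace.SingleCurveSpawn,  -- then add a curve
	workspace.FourWaySpawn,      -- eventually the 4-way
}

local currentStage = 1
local successCount = 0
local SUCCESSES_TO_ADVANCE = 5

local function onEpisodeFinished(reachedGoal)
	if reachedGoal then
		successCount += 1
		if successCount >= SUCCESSES_TO_ADVANCE and currentStage < #stages then
			currentStage += 1 -- graduate the agent to the next difficulty
			successCount = 0
		end
	end
end

local function getSpawnCFrame()
	return stages[currentStage].CFrame
end
```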
That being said, don't forget to save the model parameters to a text file that you can bring anywhere outside Roblox. It seems like you're going to spend days working on this, and it would be bad if you lost all the progress. :3
You can find the examples in the sword fighting AIs code.
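For instance, a minimal sketch that serializes the parameters to JSON so you can copy them out of the Studio output window (this assumes your parameters are plain Luau tables):

```lua
local HttpService = game:GetService("HttpService")

local function exportParameters(modelParameters)
	local serialized = HttpService:JSONEncode(modelParameters)
	print(serialized) -- copy this from the output window into a text file
	return serialized
end

local function importParameters(serialized)
	return HttpService:JSONDecode(serialized)
end
```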
I also realised a mistake that is very likely the cause of most of the lack of learning: the vehicle applies realistic physics when it accelerates or reverses, and since I rely on my raycasts being precise enough to hit the short sidewalk, most of the time the rays shoot just above the sidewalk and either hit nothing (returning nil) or hit the ground. To solve this lazy issue, I decided to bring in "Ray Walls", which I am currently adding as modules to every single one of my road pieces to absorb raycasts properly and stop them from missing entirely, making the raycasts more precise and accurate and hopefully improving training. I am guessing this will greatly improve the agent I have now, since he no longer needs to fight against rough, noisy input data.
I thought you would realize that later when I saw your videos :>
All I could suggest is to make them point a little more downward.
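Roughly like this, as a sketch; the 10-degree pitch and the ray length are assumptions you would tune:

```lua
local RAY_LENGTH = 100

local function pitchDown(direction, degrees)
	-- Blend a horizontal ray direction with a downward component so the
	-- ray catches the short sidewalk instead of flying over it.
	local d = math.rad(degrees)
	return direction.Unit * math.cos(d) - Vector3.yAxis * math.sin(d)
end

local function castSideRay(carPart, horizontalDirection)
	local direction = pitchDown(horizontalDirection, 10) * RAY_LENGTH
	local result = workspace:Raycast(carPart.Position, direction)
	return result and result.Distance or RAY_LENGTH
end
```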
Wait… just curious, am I bugging or did you disable us from directly adding layers to Double Expected SARSA?
Also, I still get these issues from DeepDoubleExpectedSARSAV2:
It happens at this line of code:
ClippedPPO has issues as well:
I don’t think there is a way to set the classesList
I managed to somewhat brute-force fix it:
But I get another error after that
From here
The same happens for DeepQlearning and DeepDoubleQlearning.
Ah, I forgot to mention that you now need to use :setModel() and pass your neural network there.
If I had left it unchanged, I expect adding future code would have become much more difficult and time-consuming.
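For reference, a rough sketch of the new pattern; the require path, the Models table and the constructor names are assumptions about your setup, and only :setModel() itself is confirmed here:

```lua
local DataPredict = require(script.Parent.DataPredict) -- path is an assumption

-- Build the neural network separately...
local neuralNetwork = DataPredict.Models.NeuralNetwork.new()
-- ...configure its layers here, per the library's documentation...

-- ...then hand it to the reinforcement learning model.
local model = DataPredict.Models.DeepDoubleExpectedSARSAV2.new()
model:setModel(neuralNetwork) -- instead of adding layers to the RL model directly
```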
What about the errors? D: They were occurring after I used :setModel().
Are the models still giving outputs despite the errors?
They give output once before erroring and refusing to provide further output.
Was there any change to how you collect the states?
For example, do you collect the state every Heartbeat instead of at a timed interval?
Because if so, I can confirm that the neural network calculation isn't fast enough to keep up with it.
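If that is the case, here is a minimal sketch of throttling collection to a fixed interval; the interval and the helper functions are hypothetical stand-ins:

```lua
local RunService = game:GetService("RunService")

local UPDATE_INTERVAL = 0.1 -- seconds; an assumption, tune to your model's speed
local accumulated = 0

-- Hypothetical stand-ins for however you gather inputs and run the update.
local function collectState()
	return {} -- e.g. raycast distances, orientation to target
end

local function stepModel(state)
	-- feed the state to your reinforcement learning model here
end

RunService.Heartbeat:Connect(function(deltaTime)
	accumulated += deltaTime
	if accumulated < UPDATE_INTERVAL then
		return -- skip this frame; the model hasn't had time to catch up
	end
	accumulated = 0
	stepModel(collectState())
end)
```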
Ah okay. Try removing the experience replay part. I'm pretty sure that part wasn't there previously.
Still the same error.
That’s very odd. I can’t replicate the error at all. I have to look into your code base if I want to fix this.
Meanwhile, I can see you're doing a Genetic Algorithm; maybe you can check whether your own code is causing the issues?