DataPredict [Release 1.21] - General Purpose Machine Learning And Deep Learning Library (Learning AIs, Generative AIs, and more!)

Hmm, could I ask if you’re planning on adding neuroevolution or genetic algorithms to your module? From my experience, Roblox has a tough time keeping up with deep learning calculations; as you’ve said before, Luau’s calculation speed isn’t up to par with the rest of the script, which is where the errors come in. Doing some research, I’ve seen a very successful AI attempt on Roblox: a genetic algorithm inside an autonomous police AI agent. I think you’ve seen his Twitter posts before, but I’ll quote his DevForum link regardless:

Self-Driving Neural Network Cop Cars

He has done a brilliant job with his AI, which leads me to my suggestion: since your module has a wonderful neural network setup with matrices and such, would it be possible for you to implement a genetic algorithm or neuroevolution?

Nope. No plans. The thing with genetic algorithms / neuroevolution is that there are a lot of ways to do it. For me to build code that covers everything is pretty much impossible. Here are the questions related to this issue:

  • Do I choose a single value or a single layer (matrix) to evolve?

  • How do I adjust the parameters that represent evolution when combining two sets of parameters? Do I choose the maximum, average, or minimum values?

  • Do I construct a new neural network architecture from an existing one to represent evolution?

  • And many more.

There are so many things you can do, and I think it is best not to implement it, for flexibility and easier code maintenance.
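To show what I mean with the second question above, here is a sketch of just one possible crossover rule, with plain nested tables standing in for matrices. This is an illustration, not library code:

```lua
-- Hypothetical crossover: average two parents' weights element by element.
-- Swapping the average for math.max or math.min gives a completely
-- different evolution behaviour, which is exactly why there is no single
-- "correct" implementation.
local function averageCrossover(parentA, parentB)
    local child = {}
    for row = 1, #parentA do
        child[row] = {}
        for column = 1, #parentA[row] do
            child[row][column] = (parentA[row][column] + parentB[row][column]) / 2
        end
    end
    return child
end
```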


Hmm, yeah, I agree, it is truly flexible; the number of ways you can accomplish this is remarkable. Apparently scripton rescripted his work 10 times or so, according to his tweets. Well, do you have any tips for using your module to integrate a genetic algorithm / neuroevolution simulation?

Take advantage of my Matrix library. Unlike the DataPredict code structure, MatrixL is pretty easy to use, and you can read the code behind it easily. You may want to use these functions:

  • getSize()

  • applyFunction()

Also keep in mind that the neural networks store a table of matrices for the model parameters, so an evolution pass would loop over that table — roughly like the sketch below.
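For instance, a mutation pass could look something like this. It is only a sketch: the require path, the getModelParameters() / setModelParameters() calls, and the exact applyFunction() signature are assumptions to verify against the actual libraries:

```lua
-- Sketch only: assumes MatrixL's applyFunction(fn, matrix) returns a new
-- matrix with fn applied to every value, and that the model exposes
-- getModelParameters() / setModelParameters(). Verify against the real API.
local MatrixL = require(script.Parent.MatrixL) -- hypothetical path

local MUTATION_RATE = 0.05 -- chance that any single weight mutates
local MUTATION_STRENGTH = 0.1 -- maximum size of a single mutation

local function mutate(modelParameters)
    local mutatedParameters = {}
    for layerIndex, weightMatrix in ipairs(modelParameters) do
        mutatedParameters[layerIndex] = MatrixL:applyFunction(function(value)
            if math.random() < MUTATION_RATE then
                return value + (math.random() * 2 - 1) * MUTATION_STRENGTH
            end
            return value
        end, weightMatrix)
    end
    return mutatedParameters
end

-- "NeuralNetwork" stands for your constructed DataPredict model instance.
NeuralNetwork:setModelParameters(mutate(NeuralNetwork:getModelParameters()))
```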


Hilo! The NEAT agent I trained manages to reach the goal point, but I can’t seem to train it to stop there. It either slows down before reaching the goal and then reverses away from it, or drives full throttle into the wall that’s in front of the goal after reaching it. Any thoughts?

I don’t know. Maybe use a little touch of reinforcement learning to fine-tune the model parameters?


Hmm, what about any reward functions?

All I can think of is to reward based on the distance between the outer edges of the car and the edges of that box, I suppose.
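Roughly sketched, it could be something like this. The part references and the scale factor are made up; tune them for your map:

```lua
-- Hypothetical distance-based reward: the closer the car gets to the goal
-- box, the less negative the reward. Measuring between part centers keeps
-- the sketch short; subtract the parts' half-sizes to use the outer edges.
local function getReward(carPart: BasePart, goalPart: BasePart): number
    local distance = (carPart.Position - goalPart.Position).Magnitude
    return -distance * 0.01 -- tune the scale so rewards stay small
end
```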


Hmm, I changed my settings a bit and opted for raw outputs to control the car’s throttle, steering, and brake boolean, but my main problem is that the car likes to throttle and then dethrottle right after, making it seem stationary. I implemented a penalty of -0.01 for every 0.01 seconds the car remains stationary, but it takes a long time for it to stop oscillating back and forth and pick a direction to drive in, and even then it drives in the wrong direction due to the lack of training and the time wasted in the rapid acceleration/deceleration that keeps the car stationary. I was wondering if you knew any settings or parameters I could play around with (e.g. the learning rate) to minimise this sort of behaviour?

Meh, can’t really help without a video.

I found that raising the spawn point a bit higher motivates the vehicle to move a bit more, hence the subtle bouncing when it spawns.

Please only use raw values when you fully understand the differences between reinforcement learning algorithms and the structure of neural networks (particularly the activation functions). Seriously. It’s going to take up much more time than it should.

If you insist on doing this, make sure your last layer isn’t a softmax layer. I’m too lazy to explain further at this point.
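For context: softmax squashes the whole output layer into probabilities that sum to 1, so the network can never output, say, full throttle and hard steering at the same time. A quick demonstration:

```lua
-- Softmax turns arbitrary values into a probability distribution: every
-- output lands in (0, 1) and they all sum to 1. Fine for picking one
-- action out of several; wrong for independent raw control values.
local function softmax(values)
    local expValues, sum = {}, 0
    for i, v in ipairs(values) do
        expValues[i] = math.exp(v)
        sum += expValues[i]
    end
    for i in ipairs(expValues) do
        expValues[i] /= sum
    end
    return expValues
end

print(table.concat(softmax({2, 1, 0.5}), ", ")) -- ~0.63, 0.23, 0.14
```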


Yep, I understand, softmax is only for labelling since it spreads out the probability, which is not what I need for raw values. Also, I’m not using a reinforcement learning algorithm this time :sweat_smile:, it’s a NEAT algorithm.

For anyone looking to understand neural networks a bit before reinforcement learning or generally machine learning, here’s a series I found super helpful.

Although it is in JavaScript, you should be able to follow along with the graphics and physical examples he uses in this video.

If you are interested, you can view the full playlist on his channel, which explains step by step how to build the simulator he scripted in HTML, CSS, and JavaScript.


To be fair, when I first learned about this concept, almost every source that wasn’t a research paper (YouTube, etc.) simply explained it in terms of the actual ‘neural network’ in the brain.

Which complicated things a lot.

They did not make it clear that traditional neural networks are literally just a function with parameters in it. The goal of gradient descent is to calculate the optimal direction of each parameter relative to the others in order to bring down the cost function as much as possible.

In RL for games, I imagine it as a function that fits itself to a high-dimensional curve, like a quadratic function adjusting its a, b, c coefficients to imitate a target quadratic curve. The target is like the curve that describes the decision of the agent for every input value.
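That analogy translates directly into code. Here is a toy version of it, fitting a, b, c toward a known target curve with plain stochastic gradient descent:

```lua
-- Toy version of the quadratic analogy: nudge a, b, c so that
-- f(x) = ax^2 + bx + c imitates a target curve, using the gradient of
-- the squared error. A neural network does the same thing, just with far
-- more parameters and a far higher-dimensional "curve".
local a, b, c = 0, 0, 0
local learningRate = 0.01

local function target(x) return 2 * x * x - 3 * x + 1 end

for step = 1, 20000 do
    local x = math.random() * 2 - 1 -- random input in [-1, 1]
    local err = (a * x * x + b * x + c) - target(x)
    a -= learningRate * err * x * x -- d(error^2)/da, scaled by the rate
    b -= learningRate * err * x
    c -= learningRate * err
end

print(a, b, c) -- converges toward 2, -3, 1
```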

Just sharing some of my experiences.

Hi guys, I have added a survey to the main post to check how satisfied you are with this library.

Do let me know what your thoughts are!


Well, his method of explaining does pay off when you continue down his series and code along with him. I’d side with you that research papers are the key to learning machine learning concisely, but for anyone not wanting to sit and read stacks of papers, this video is great for getting a base understanding of what you can expect from a neural network and from training one, with proven proof-of-concept results. (It’s also one of the few videos that doesn’t throw the hidden layers into a black box and leave it there; he uses matrices to explain some of it, and otherwise a bunch of graphical aids like function graphs that suit visual learners like me :3)

Did I forget to mention it’s from a Karelia University of Applied Sciences professor? I’d watch it anyway : )


Hello, I am trying to train sword-fighting bots, but they just jump around in circles constantly. I’ve trained them for a few hours but there is no improvement. There are 32 agents running on a single model, and they seem to just fire all the possible outputs at all times.

I used some of the code from the version 5 example, but I modified it to use a single model with multiple agents.

Yeah. Don’t use a single model. The thing is that all the reinforcement learning algorithms require data from previous frames, and that data must be sequential in order to work. Otherwise, you are just screwing up the training.

If you really still want to use a single model, at least use a separate ReinforcementLearningQuickSetup for each agent, and avoid the ProximalPolicyOptimization model.
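Structurally, that might look like the sketch below. Everything beyond the ReinforcementLearningQuickSetup name itself (the module path, setModel(), reinforce()) is a placeholder to check against the actual DataPredict documentation:

```lua
-- Placeholder sketch: one quick setup per agent, so each agent's frames
-- stay sequential from the algorithm's point of view. The module path,
-- setModel() and reinforce() calls are assumptions; verify against the
-- real DataPredict API before using.
local quickSetups = {}

for agentIndex = 1, NUMBER_OF_AGENTS do
    local quickSetup = DataPredict.Others.ReinforcementLearningQuickSetup.new()
    quickSetup:setModel(sharedModel) -- hypothetical: all setups share one model
    quickSetups[agentIndex] = quickSetup
end

-- Each agent then talks only to its own setup every update, e.g.:
-- quickSetups[agentIndex]:reinforce(environmentVector, rewardValue)
```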


The reason we are using a single model is training speed, so we can train all the agents simultaneously. For example, like this. If that’s not the correct approach, how can I reduce training time?