DataPredict [Release 1.18] - General Purpose Machine And Deep Learning Library (Learning AIs, Generative AIs, and more!)

Is it possible to give us an example? Like the result of what you managed to train?

Sure. Here ya go.

Easy To Use Version

local a = TensorL3D.create({3, 3, 3}, 2)

local b = TensorL3D.create({3, 3, 3}, 20)

print(b)

local c = a + b

print(c)

a = -a

a[1][2][1] = 10

print(a)

local Bool = a:isGreaterThan(b)

Bool:print()

a:print()

c = b:transpose(1, 3)

c:print()

local d = b:tensorProduct(a)

print(d)

local e = b:innerProduct(a)

print(e)

print(b:sum(1))

Speed Performant Version

local a = TensorL3D:create({3, 3, 3}, 2)

local b = TensorL3D:create({3, 3, 3}, 20)

TensorL3D:print(b)

local c = TensorL3D:add(a, b)

TensorL3D:print(c)

a[1][2][1] = 10

local Bool = TensorL3D:isGreaterThan(a, b)

TensorL3D:print(Bool)

c = TensorL3D:transpose(b, 1, 3)

TensorL3D:print(c)

local d = TensorL3D:tensorProduct(b, a)

TensorL3D:print(d)

local e = TensorL3D:innerProduct(b, a)

TensorL3D:print(e)

TensorL3D:print(TensorL3D:sum(b, 1))

As you can see, the second one requires you to write “TensorL3D” often, which can be quite exhausting.

The two code samples above are part of my TensorL testing. All I need is to complete one more function, then I can start producing the DataPredict Neural stuff.

I heavily focus on speed and go out of my way to ensure bleeding-edge performance. I prefer everything as fast as possible so my game stays as performant as possible.

I believe native Luau would be the best option; if you could migrate your system to Luau, it would give you large gains.

TensorL is already in Luau though. It’s just that the way I structure Luau code affects speed.

If you choose the “easy to use” version, I have to use metamethods, which store functions alongside the data. However, that actually reduces calculation speed.

If you choose the “performant” version, I keep the data in plain tables, which is faster. The functions are called only when needed and are not part of the data.
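The difference can be sketched roughly like this (a hypothetical 1D example for illustration only, not the actual TensorL3D source; all names and structure here are assumptions):

```lua
-- "Easy to use" style: each tensor carries a metatable, so operators like `+`
-- work directly, but every tensor object pays for the metatable indirection.
local EasyTensor = {}
EasyTensor.__index = EasyTensor

function EasyTensor.new(data)
	return setmetatable({data = data}, EasyTensor)
end

function EasyTensor.__add(a, b)
	local result = {}
	for i, v in ipairs(a.data) do result[i] = v + b.data[i] end
	return EasyTensor.new(result)
end

-- "Performant" style: tensors are plain tables of numbers; the functions live
-- only in the module, so the data stays a bare array with no metatable.
local FastTensor = {}

function FastTensor:add(a, b)
	local result = {}
	for i, v in ipairs(a) do result[i] = v + b[i] end
	return result
end

local c1 = EasyTensor.new({1, 2}) + EasyTensor.new({3, 4}) -- operator syntax
local c2 = FastTensor:add({1, 2}, {3, 4})                  -- explicit call
```

In the first style every tensor drags its metatable through each operation; in the second, the data is a bare nested table that the runtime can handle with less indirection, at the cost of writing the module name on every call.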

Hi guys, I released two versions of TensorL3D in case you want hands-on experience with the API designs before deciding which one I should use for DataPredict Neural.

You can have a look at the API designs via the link in the post below this paragraph. Please also give feedback on the API designs in that post.


Dear all,

Due to bad circumstances caused by a certain company, I will have to make changes to the licenses because of their greed.

From now on, I will add a clause stating that if someone uses my work commercially without paying for a separate agreement, they will need to open-source all of their work under the MIT License.

Don’t worry, this is not aimed at smaller and unprofitable developers who want to use my work, but rather at punishing larger companies that want to piggyback on my two to three years of hard work developing this library for free.

Apologies for the license change.

What are your thoughts on this?

  • It’s okay, I understand.
  • Why is that company really greedy?
  • Please don’t change the license!



Bro, you did all this by yourself, why not compete with scikit-learn haha.

Yea, I was thinking about whether I should compete with scikit-learn, but they have a hugeeee number of models that I can’t catch up with. Besides, this library is catered to game development, so I’m not sure extra models would be used by the community.

On the flip side, I was making a competitor to PyTorch during my university days. It was named DataPredict Neural and would have a very similar code structure to PyTorch. Also, I made the code public (not open source), although it isn’t quite complete, just to avoid companies attempting to claim my personal projects and to have proof that I own 100% of my code in case of disputes.


yea true, we put our work or code on arXiv to avoid that stuff. Great job on this and all your work; I can’t believe someone is working on this on Roblox :joy:


Hello guys! I have updated the licenses for DataPredict and MatrixL.

I have also added some exceptions for small and unprofitable developers regarding commercial use.

The licenses take effect immediately.


Also, I updated the Affinity Propagation calculations, so if you’re using it, I recommend updating now.

What did the company specifically do?


Just a little teaser for DataPredict Neural. Still figuring out the backpropagation function design on paper.

Screenshot 2024-05-18 095451


Hey, I tried out your library for a bit to train using reinforcement learning (specifically PPO) but got confused pretty quickly. The documentation helped a bit, but I’m still largely confused. How do I create my own reinforcement learning model and train it? If possible, could you write a little sample script? The sample scripts on the GitHub don’t work for some reason.

Uh, your phrasing is quite vague here. Is the sample script part of the sword-fighting AIs or from the documentation tutorials?

Documentation tutorials. For example, in the ‘Getting Started with Reinforcement Learning’ area, there are two coding examples provided, but I could not get either of them working at all. I believe your library is excellent, but I think the issue is that it’s not well documented enough. To me, there are still some missing connections here and there that I have to search high and low for, which makes the documentation not very readable.

In all honesty, the documentation would be great if there were more detailed step-by-step tutorials that worked and explained each function and what it does/can do. Sort of like how the Roblox documentation works: others can read it and get a gist of what something does, how it works, and how to use it in different scenarios just by looking at the function descriptions, the example code provided by Roblox, and the overall readability of the page.

But could I request, for now, a simple sample script that works with your library, with notes inside explaining the code? That way I can gain a better understanding of it and learn how to properly use it for a different idea.



Here is an agent I managed to begin training successfully by brute-forcing and debugging the original script. However, as you can see here, its success rate (the two-decimal-place value being outputted) stays between 15% and 30% instead of going higher. I’m not sure if this is because of how I layered the neurons or how I scripted the agent’s training.

And here is my ‘BuildModel’ function along with the structure of the neurons.

Eventually, this error also occurs. I believe it’s because my SelectedValueVector[1][1] outputs an ‘inf’ or ‘-inf’.


SelectedValueVector[1][1] becomes inf here ^

Could you help me out? Why does this error occur? How do I solve it? How do I make this agent train properly?

Okie then. I’ll improve the documentation. But first I need to know what you were actually looking for back then, since your description is very vague. I can use that to better understand what you’re talking about.

Anyways, for the sample script, I’ll give it to you later. I’m a bit busy right now with some other stuff.

Well, I was trying out your sample script, the one that predicts whether the numbers inputted are positive + positive or negative + positive, etc. However, it doesn’t seem to work. I think the code is outdated, so the second reply I sent with the images and videos is the work I did to try and fix it. It works, but with the little knowledge I have of your library, its accuracy doesn’t get beyond 15%-30%, and I have no idea why.

Ultimately, my goal is to create an AI that can sustain its group, sort of like a beehive. For starters, each NPC has a health bar and a hunger bar. The hive/group as a whole has a collective sum of money that decreases when they buy food to consume and increases when they work. I want the AI to smartly manage each NPC in order to sustain the group (leave no one hungry while maintaining work to earn money for food).

But I’m saving that idea until I gain a proper grasp of how your library works anyway.

First off… that’s just a bad neural network structure. It’s actually pointless to have two layers with the same number of neurons next to each other; it will only make training slower and fail to “generalize” properly. Remove the final layer and you’ll see it improve.

Second, it seems like using the “Sigmoid” and “Tanh” activation functions led to what is called “exploding gradients”. You can read more here. For now, stick with “LeakyReLU”.
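As a rough illustration of the difference between the two activations (a hypothetical sketch, not the library’s internal code; the 0.01 slope is a common default and an assumption here):

```lua
-- Sigmoid squashes every input into (0, 1) and saturates for large |x|,
-- so its gradient there is effectively zero.
local function sigmoid(x)
	return 1 / (1 + math.exp(-x))
end

-- LeakyReLU passes positive values through unchanged and keeps a small
-- nonzero slope for negative values, so the gradient never fully vanishes.
local function leakyReLU(x, slope)
	slope = slope or 0.01 -- assumed default; the library may use another value
	if x > 0 then return x else return slope * x end
end

print(sigmoid(20))    -- very close to 1; the curve is flat here
print(leakyReLU(20))  -- 20; gradient stays 1 for positive inputs
print(leakyReLU(-20)) -- -0.2; gradient stays 0.01 for negative inputs
```

Because LeakyReLU’s output and gradient stay in a workable range on both sides of zero, it tends to keep value estimates from drifting toward ‘inf’ the way saturating activations can during unstable training.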

I recommend that you read up on some deep learning and reinforcement learning material, because I built this library under the assumption that the user has theoretical knowledge of deep learning and reinforcement learning. There’s seriously too much for me to cover in the documentation.
