You've got some bad inputs there, and too many unneeded ones. If you just want the NPC to go to a specific position, all you need is the distance and the angle difference. Also, use a raycast to check whether the NPC is blocked or not.
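Something like this is all the input you need (a rough sketch using the standard Roblox raycast API; names like `npc` and `targetPosition` are just placeholders for your own setup):

-- Rough sketch: distance, signed angle difference and an obstruction check for an NPC.
-- Assumes `npc` is a Model with a PrimaryPart and `targetPosition` is a Vector3.
local function getNavigationInputs(npc, targetPosition)
	local rootPart = npc.PrimaryPart
	local toTarget = targetPosition - rootPart.Position
	-- Distance to the target.
	local distance = toTarget.Magnitude
	-- Signed angle (radians) between the NPC's facing direction and the target,
	-- measured on the horizontal plane.
	local forward = rootPart.CFrame.LookVector
	local flatForward = Vector3.new(forward.X, 0, forward.Z).Unit
	local flatToTarget = Vector3.new(toTarget.X, 0, toTarget.Z).Unit
	local angleDifference = math.atan(flatForward:Cross(flatToTarget).Y, flatForward:Dot(flatToTarget))
	-- Raycast towards the target to check whether something is in the way.
	local raycastParams = RaycastParams.new()
	raycastParams.FilterDescendantsInstances = {npc}
	raycastParams.FilterType = Enum.RaycastFilterType.Exclude
	local result = workspace:Raycast(rootPart.Position, toTarget, raycastParams)
	local isBlocked = (result ~= nil)
	return distance, angleDifference, isBlocked
end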
Hi guys! I'm currently looking into a number of features to add to the library for the next version, and I'll be surveying which ones I should prioritize. Here is the list of things I'm thinking of adding. Implementing any one of these can be quite difficult and time-consuming.
Generative Adversarial Network (GAN)
Information:
GANs are an early stage of the generative AI technology that you see used by Roblox today. They allow players to generate content from random input. I might include the variants of GAN as well. In theory it should work, but I'm not sure how well it would hold up in real-life scenarios.
Potential Use Cases:
Generating buildings and art from nothing.
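To give a rough feel for how a GAN works (this is only a toy one-dimensional sketch in plain Luau for illustration, not the DataPredict implementation), the generator maps random noise to fake samples while the discriminator learns to separate real samples from fake ones, and the two are trained against each other:

-- Toy 1D GAN sketch (illustrative only, not the DataPredict API).
-- Generator: x = a * z + b, Discriminator: D(x) = sigmoid(w * x + c).
local a, b = 0.1, 0.0      -- generator parameters
local w, c = 0.1, 0.0      -- discriminator parameters
local learningRate = 0.01

local function sigmoid(x) return 1 / (1 + math.exp(-x)) end
local function sampleReal() return 4 + (math.random() - 0.5) end  -- real data around 4
local function sampleNoise() return (math.random() - 0.5) * 2 end

for step = 1, 10000 do
	-- Train the discriminator: push D(real) towards 1 and D(fake) towards 0.
	local real = sampleReal()
	local fake = a * sampleNoise() + b
	local dReal, dFake = sigmoid(w * real + c), sigmoid(w * fake + c)
	w = w + learningRate * ((1 - dReal) * real - dFake * fake)
	c = c + learningRate * ((1 - dReal) - dFake)
	-- Train the generator: push D(fake) towards 1 by adjusting a and b.
	local z = sampleNoise()
	fake = a * z + b
	dFake = sigmoid(w * fake + c)
	a = a + learningRate * (1 - dFake) * w * z
	b = b + learningRate * (1 - dFake) * w
end

print("Generated sample:", a * sampleNoise() + b)  -- should drift toward the real data around 4

In the real thing, the linear generator and discriminator here are replaced by full neural networks.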
Discrete State Space For Reinforcement Learning
Information:
Currently, the reinforcement learning algorithms can only take a continuous state (environment) space, which might not fit all learning-AI use cases. Adding a discrete state space to reinforcement learning should hopefully add more flexibility to the library.
Potential Use Cases:
Enables the learning AIs to play board games.
Enables the learning AI to get rewarded based on the room/grid cell it is on.
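As a rough illustration of what a discrete state could look like here (my own sketch; the grid size, variable names and reward table are just assumptions for the example), you can bucket an NPC's position into a grid cell and use the cell index as the state:

-- Sketch: turn a continuous position into a discrete grid-cell state.
local GRID_SIZE = 16   -- studs per cell (assumed)
local GRID_WIDTH = 32  -- number of cells along one axis (assumed)

local function getDiscreteState(position)
	local cellX = math.floor(position.X / GRID_SIZE)
	local cellZ = math.floor(position.Z / GRID_SIZE)
	-- Collapse the two cell coordinates into a single state index.
	return cellZ * GRID_WIDTH + cellX
end

-- The reward can then be assigned per cell, e.g. a bonus for reaching a goal room.
local rewardPerCell = {[5] = 1, [77] = -1}  -- assumed reward layout
local state = getDiscreteState(npc.PrimaryPart.Position)  -- assuming `npc` is your NPC model
local reward = rewardPerCell[state] or 0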
Recurrent Neural Networks (RNN) With Reinforcement Learning
Information:
The current reinforcement learning neural networks have no ability to remember the past. By integrating a recurrent neural network with reinforcement learning, the AI can carry information from previous steps.
Note that training might take significantly longer than your usual reinforcement learning neural networks and may cause Roblox Studio to freeze if you're not careful.
But before I can implement this, I need to update the current Recurrent Neural Network and the LSTM.
Potential Use Cases:
More tactical learning AIs
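For some intuition on the "remembering" part (a minimal sketch of the general recurrence, not the library's internals), a recurrent layer keeps a hidden state that gets updated every step, so the next decision can depend on what the agent observed before:

-- Minimal recurrence sketch: hiddenState = tanh(Wx * observation + Wh * hiddenState).
-- Scalar weights are used here only to show the idea.
local Wx, Wh = 0.5, 0.9
local hiddenState = 0

local function step(observation)
	hiddenState = math.tanh(Wx * observation + Wh * hiddenState)
	-- The hidden state now summarizes past observations and can be fed,
	-- together with the current observation, into the policy network.
	return hiddenState
end

for _, observation in ipairs({1, 0, 0, 0}) do
	print(step(observation))  -- the effect of the first observation decays but persists
end

This carried-over state is also part of why training takes longer: the gradients have to flow back through the earlier steps as well.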
Put in your vote for which one you prefer. Note that this is for research purposes only, and the chosen feature might not get implemented.
Generative Adversarial Network (GAN)
Discrete State Space For Reinforcement Learning
Recurrent Neural Networks (RNN) With Reinforcement Learning
Hello guys! I wanted to inform you that I made several calculation mistakes in some parts of the reinforcement learning algorithms. Because of this, I recommend reinstalling the DataPredict library to get the latest fixes.
Don’t worry, there are no API changes, just internal calculation changes.
Bumping this up. I have now implemented the original Generative Adversarial Network (GAN). If a lot of people use it, then I'll add the variants as well. It turned out to be easy for me to implement…
I have about 14 in-development projects that use this library. It's just that not many people are talking in the forums; it may seem silent, but in the background there is a lot of usage of your library.
Programming for me is more like an internal feeling, a deeper understanding of a different language, so for me personally I don’t use much documentation.
Just giving you a heads-up. If you're planning to use the generative models, I recommend updating the DataPredict library again. Apparently, I did not double-check for some missing code.
Just giving you a heads-up. DataPredict had minor bugs that went under the radar, especially in the reinforcement learning algorithms. Please do update it as soon as possible.
Hi guys! It's another day for another survey!
Currently, I plan to resume work on DataPredict Neural and am looking into how I should structure the code of TensorL. TensorL is similar to the MatrixL library, but MatrixL only stores 2-dimensional arrays, whereas TensorL handles arrays with any number of dimensions.
The thing I want to ask you is: for TensorL, do you prioritize speed or ease of use?
If you focus on performance, the structure of the TensorL library will be similar to MatrixL: you pretty much need to call add(), subtract() and other functions, which is not as easy to use.
If you focus on ease of use, the TensorL library will use the natural arithmetic operators (i.e. +, -, *) and the values will be stored in a "tensor" object. It is easier to use, but less performant.
Ease Of Use Version
-- Create two 3x3x3 tensors filled with 2 and 20 respectively.
local a = TensorL3D.create({3, 3, 3}, 2)
local b = TensorL3D.create({3, 3, 3}, 20)
print(b)
-- Element-wise addition through the overloaded + operator.
local c = a + b
print(c)
-- Unary negation, then direct element indexing.
a = -a
a[1][2][1] = 10
print(a)
-- Element-wise comparison returns a tensor of booleans.
local Bool = a:isGreaterThan(b)
Bool:print()
a:print()
-- Swap dimensions 1 and 3.
c = b:transpose(1, 3)
c:print()
-- Tensor (outer) product and inner product.
local d = b:tensorProduct(a)
print(d)
local e = b:innerProduct(a)
print(e)
-- Sum along dimension 1.
print(b:sum(1))
Speed Performant Version
-- Create two 3x3x3 tensors filled with 2 and 20 respectively.
local a = TensorL3D:create({3, 3, 3}, 2)
local b = TensorL3D:create({3, 3, 3}, 20)
TensorL3D:print(b)
-- Element-wise addition through an explicit function call.
local c = TensorL3D:add(a, b)
TensorL3D:print(c)
-- Direct element indexing still works on the raw table.
a[1][2][1] = 10
-- Element-wise comparison returns a tensor of booleans.
local Bool = TensorL3D:isGreaterThan(a, b)
TensorL3D:print(Bool)
-- Swap dimensions 1 and 3.
c = TensorL3D:transpose(b, 1, 3)
TensorL3D:print(c)
-- Tensor (outer) product and inner product.
local d = TensorL3D:tensorProduct(b, a)
TensorL3D:print(d)
local e = TensorL3D:innerProduct(b, a)
TensorL3D:print(e)
-- Sum along dimension 1.
TensorL3D:print(TensorL3D:sum(b, 1))
As you can see, the second one requires you to write “TensorL3D” often, which can be quite exhausting.
The two code snippets above are part of my TensorL testing. All I need is to complete one more function, then I can start producing DataPredict Neural stuff.
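For anyone curious why the operator version is slower, here is a rough sketch (hypothetical and flattened to one dimension, not the actual TensorL code) of how the ease-of-use style would be wired up with metatables; every "+" goes through a metamethod call and allocates a new wrapped object:

-- Hypothetical sketch of the ease-of-use style using metatables.
local Tensor = {}
Tensor.__index = Tensor

-- The __add metamethod lets `a + b` work, at the cost of an extra
-- function call and a new table allocation per operation.
Tensor.__add = function(left, right)
	local result = {}
	for i, value in ipairs(left.values) do
		result[i] = value + right.values[i]
	end
	return setmetatable({values = result}, Tensor)
end

local function createTensor(values)
	return setmetatable({values = values}, Tensor)
end

local a = createTensor({1, 2, 3})
local b = createTensor({10, 20, 30})
local c = a + b
print(table.concat(c.values, ", "))  -- 11, 22, 33

The speed-performant version skips the wrapper objects and the metamethod dispatch, which is largely where the difference comes from.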
I heavily focus on speed; I go out of my way to ensure bleeding-edge performance. I prefer it to be as fast as possible, so my game can be as performant as possible.
I believe native Luau would be the best option. If you could migrate your system to use it, it would give you large gains.
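For example (a small sketch of what I mean, not tied to DataPredict specifically), native code generation can be switched on per script with the --!native directive, and typed Luau helps it in hot numeric loops:

--!native
--!strict

-- A hot numeric loop; native code generation plus type annotations
-- can noticeably speed up element-wise work like this.
local function dotProduct(a: {number}, b: {number}): number
	local total = 0
	for i = 1, #a do
		total += a[i] * b[i]
	end
	return total
end

print(dotProduct({1, 2, 3}, {4, 5, 6}))  -- 32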