Do you need a general-purpose machine and deep learning library with an API similar to Scikit-Learn's? You can view it here:
Overview
Ever wanted a PyTorch-like deep learning library for Roblox? Now you can have one!
Thanks to Lua's support for object-oriented programming, this library is able to do automatic differentiation, distributed training and more!
Documentation And Tutorials: Welcome to Aqwam’s DataPredict Neural Library! | DataPredict Neural
Example Learning AI Codes
- Self-Learning Sword-Fighting AIs [Version 1] (Uses Release Version 1.7)
(The terms and conditions of the DataPredict Neural, DataPredict, TensorL and MatrixL libraries apply, as the source code contains these libraries.)
Features
- Take advantage of calculations that combine automatic and manual differentiation, simplifying complex computations while maintaining high performance.
- Craft complex models effortlessly using dynamic computational graphs, which let you create any model you want and modify it at runtime.
- Take advantage of model and data parallelism for extremely fast training, prediction and experimentation.
- Build singular models that are interconnected between servers and clients through distributed training.
- Build models that handle multi-dimensional inputs and outputs to meet the demands of your projects.
- Dive into a user-friendly API designed to be learned in a couple of minutes.
- Built for production-grade and research-grade applications.
- Cross-compatible with the DataPredict library.
Use Cases
- In-Game Recommendation Systems
- Self-Learning AIs (such as enemies, pets and companions)
- Image Moderation
- Image Generation
- Player Action Prediction Systems
- Furniture/Parts Placement Generation (if you have a land-plot system like the ones in Lumber Tycoon or Work At A Pizza Place)
- Personalized Items For Players (for example, an item that adapts to a player's usage patterns)
Note that when combined with Roblox's MemoryStore service, the library allows you to do global cross-server training. The library was designed to handle distributed training from the start, which was not an original design goal of PyTorch or TensorFlow.
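As a rough illustration of how cross-server training could be wired up with MemoryStore (this is a hedged sketch, not the library's built-in mechanism; `serializeGradients` and `applyRemoteGradients` are hypothetical placeholder functions you would implement yourself):

```lua
-- Sketch: sharing gradients across servers through a MemoryStore queue.
-- Assumes hypothetical helpers serializeGradients() / applyRemoteGradients().
local MemoryStoreService = game:GetService("MemoryStoreService")
local HttpService = game:GetService("HttpService")

local gradientQueue = MemoryStoreService:GetQueue("GlobalGradientQueue")

-- Publish this server's latest gradients (expire after 30 seconds).
local function publishGradients(gradients)
	local payload = HttpService:JSONEncode(serializeGradients(gradients))
	gradientQueue:AddAsync(payload, 30)
end

-- Pull and apply gradients produced by other servers.
local function consumeGradients()
	local items, readId = gradientQueue:ReadAsync(10, false, 5)
	if items then
		for _, payload in ipairs(items) do
			applyRemoteGradients(HttpService:JSONDecode(payload))
		end
		gradientQueue:RemoveAsync(readId)
	end
end
```

MemoryStore calls are rate-limited and can throw, so production code should wrap them in `pcall` and back off on failure.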
Preview Code
```lua
-- Build the model from function blocks.
SequentialNeuralNetwork:setMultipleFunctionBlocks(

	WeightBlocks.Linear.new({dimensionSizeArray = {1, 1, 3}}),
	WeightBlocks.Linear.new({dimensionSizeArray = {1, 3, 5}}),
	ActivationBlocks.LeakyReLU.new(),
	DropoutBlocks.Dropout.new({dropoutRate = 0.5}),
	WeightBlocks.Linear.new({dimensionSizeArray = {1, 5, 1}}),
	ShapeTransformationBlocks.Transpose.new({dimensionIndexArray = {2, 3}}),
	ActivationBlocks.LeakyReLU.new()

)

-- Training loop.
for i = 1, 100000 do

	local generatedLabelTensor = SequentialNeuralNetwork:forwardPropagate(inputTensor)

	local lossTensor = CostFunction:calculateLossTensor(generatedLabelTensor, labelTensor)

	local costValue = CostFunction:calculateCostValue(generatedLabelTensor, labelTensor)

	SequentialNeuralNetwork:backPropagate(lossTensor)

	print(costValue)

	task.wait()

end
```
Plugins And Scripts That Complement This Library
- Parallel Luau (for speeding up the self-learning AIs' training process through multi-threading)
- StepPhysics Plugin API (for speeding up the self-learning AIs' training process by speeding up the environment)
- Graph Module (for plotting graphs)
FAQs
- Can I build transformers with it?
  - Yes.
- Can I build computational graphs with it?
  - Yes.
- Can I build graph neural networks with it?
  - Yes.
- Can I link and unlink layers during run-time?
  - Yes.
- How does the model parallelism work?
  - Two separate gradient calculations run in their own threads:
    - the loss gradient with respect to the previous block's output;
    - the loss gradient with respect to the current block's weights.
  - Because these two gradients are calculated in different threads, one thread can continue backpropagating the gradient to the earlier blocks while the other continues with the weight updates.
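As a rough illustration of that two-thread split (this is not DataPredict Neural's actual internals; the block methods named here are hypothetical):

```lua
-- Hypothetical sketch: per-block backpropagation where the weight update
-- runs in its own thread while backpropagation continues immediately.
local function backPropagateBlock(block, lossGradient)

	-- Thread 1: loss gradient w.r.t. the current block's weights,
	-- consumed only by this block's weight update.
	task.spawn(function()
		local weightGradient = block:calculateWeightGradient(lossGradient)
		block:updateWeights(weightGradient)
	end)

	-- Caller's thread: loss gradient w.r.t. the previous block's output,
	-- returned so earlier blocks can keep backpropagating without waiting.
	return block:calculateInputGradient(lossGradient)

end
```

The key property is that the expensive weight update never blocks the chain of input gradients flowing toward earlier blocks.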
Join our official Discord server: DataPredict Community
Comparison Against Other Libraries
Development Priority Poll
What should the next update be?
- Internal parts such as convolutional layers and pooling layers.
- External models such as reinforcement learning, generative and recurrent models.