DataPredict Neural [Release 1.7] - PyTorch-like Deep Learning Library Meets Roblox!

Do you need a general-purpose machine and deep learning library with an API similar to Scikit-Learn? You can view it here:

Overview


Ever wanted a PyTorch-like deep learning library for Roblox? Now you can!

Thanks to Lua's support for object-oriented programming patterns, this library is able to do automatic differentiation, distributed training and more!

Documentation And Tutorials: Welcome to Aqwam’s DataPredict Neural Library! | DataPredict Neural

Example Learning AI Codes

(The terms and conditions of the DataPredict Neural, DataPredict, TensorL and MatrixL libraries apply, as the source code contains these libraries.)

Features
  • Combine automatic differentiation with manual differentiation, simplifying complex calculations while maintaining high performance.

  • Craft complex models effortlessly using dynamic computational graphs, giving you the ability to create any models you want and modify them at runtime.

  • Take advantage of model and data parallelism capabilities for extremely fast training, prediction and experimentation.

  • Build single models that span servers and clients through distributed training.

  • Build models that handle multi-dimensional inputs and outputs to meet the demands of your projects.

  • Dive into a user-friendly API designed to be learned in a couple of minutes.

  • Built for production-grade and research-grade applications.

  • Cross-compatible with the DataPredict library.

Use Cases
  • In-Game Recommendation System

  • Self-Learning AIs (such as enemies, pets and companions)

  • Image Moderation

  • Image Generation

  • Player Action Prediction System

  • Furniture/Parts Placement Generation (if you have a land plot system like in Lumber Tycoon or Work at a Pizza Place)

  • Personalized Items For Players (for example, an item that adapts to the player's usage of it)

Note that when used with Roblox's MemoryStore service, this library allows you to do global cross-server training. The library was designed to handle distributed training from the start, something PyTorch and TensorFlow were not originally designed for.
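To illustrate the cross-server idea, here is a minimal sketch that shares model parameters between servers through MemoryStore. The MemoryStoreService calls (GetSortedMap, SetAsync, GetAsync) are real Roblox APIs, but getModelParameters() and setModelParameters() are assumed accessor names and may differ from the library's actual API; large models may also exceed MemoryStore's per-value size limit and need extra serialization or sharding.

	-- Conceptual sketch of cross-server parameter sharing via MemoryStore.
	-- getModelParameters() / setModelParameters() are hypothetical names.
	local MemoryStoreService = game:GetService("MemoryStoreService")

	local parameterMap = MemoryStoreService:GetSortedMap("SharedModelParameters")

	local EXPIRATION_SECONDS = 300

	-- Publish this server's parameters after a training step.
	local function publishParameters(model)
		local modelParameters = model:getModelParameters()
		parameterMap:SetAsync("latest", modelParameters, EXPIRATION_SECONDS)
	end

	-- Pull the latest shared parameters into this server's model.
	local function pullParameters(model)
		local modelParameters = parameterMap:GetAsync("latest")
		if modelParameters then model:setModelParameters(modelParameters) end
	end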

Preview Code

-- inputTensor, labelTensor and the CostFunction object are assumed to be
-- constructed beforehand.
SequentialNeuralNetwork:setMultipleFunctionBlocks(

	WeightBlocks.Linear.new({dimensionSizeArray = {1, 1, 3}}),
	WeightBlocks.Linear.new({dimensionSizeArray = {1, 3, 5}}),
	ActivationBlocks.LeakyReLU.new(),
	DropoutBlocks.Dropout.new({dropoutRate = 0.5}),
	WeightBlocks.Linear.new({dimensionSizeArray = {1, 5, 1}}),
	ShapeTransformationBlocks.Transpose.new({dimensionIndexArray = {2, 3}}),
	ActivationBlocks.LeakyReLU.new()

)

for i = 1, 100000 do

	local generatedLabelTensor = SequentialNeuralNetwork:forwardPropagate(inputTensor)

	local lossTensor = CostFunction:calculateLossTensor(generatedLabelTensor, labelTensor)

	local costValue = CostFunction:calculateCostValue(generatedLabelTensor, labelTensor)

	SequentialNeuralNetwork:backPropagate(lossTensor)

	print(costValue)

	task.wait()

end

Plugins And Scripts That Complement This Library
  • Parallel Luau (For speeding up the self-learning AIs’ training process through multi-threading.)

  • StepPhysics Plugin API (For speeding up the self-learning AIs’ training process through speeding up the environment.)

  • Graph Module (For plotting graphs.)

Download Links

DataPredict Neural (Advanced Deep Learning Library)

TensorL (Tensor Library)

FAQs
  1. Can I build transformers with it?

    • Yes.
  2. Can I build computational graphs with it?

    • Yes.
  3. Can I build graph neural networks with it?

    • Yes.
  4. Can I link and unlink layers during run-time?

    • Yes.
  5. How does the model parallelism work?

    • There are two separate gradient calculations, each running in its own thread:

      • The loss gradient with respect to the previous block’s output.

      • The loss gradient with respect to the current block’s weights.

      • Because these two gradients are calculated in different threads, one thread can continue backpropagating the gradient to earlier blocks while the other carries out the weight updates separately.
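The split described above can be sketched as follows. This is a conceptual illustration only, not the library's actual internals: calculateInputGradient(), calculateWeightGradient() and updateWeights() are hypothetical helper names, while task.spawn() is the real Roblox API for running a function on a separate thread.

	-- Conceptual sketch of the two-thread backward pass.
	-- (Hypothetical helper names; not the library's real internals.)
	for blockIndex = #blocks, 1, -1 do
		local block = blocks[blockIndex]
		-- Thread 1: gradient w.r.t. the block's input, handed to the earlier block.
		local inputGradientTensor = block:calculateInputGradient(lossGradientTensor)
		-- Thread 2: gradient w.r.t. the block's weights, applied independently.
		task.spawn(function()
			local weightGradientTensor = block:calculateWeightGradient(lossGradientTensor)
			block:updateWeights(weightGradientTensor)
		end)
		lossGradientTensor = inputGradientTensor
	end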

Join our official Discord server: DataPredict Community

Comparison Against Other Libraries

Development Priority Poll

What should the next update be?

  • Internal parts such as convolutional layers and pooling layers.
  • External models such as reinforcement learning, generative and recurrent models.

0 voters

34 Likes

Genuinely insane, I was just thinking of trying to learn datapredict again and you come out with something that will make my life a billion times easier :smiley:

Awesome work

3 Likes

Development Priority Survey

Hello guys! I’m adding a survey to the first post to see what I should prioritize. Please cast your votes so that the winning choice gets developed first!

For now, I’ll be taking a break from developing this library for a while and wait until enough votes come in.

1 Like

I’m going to check it out when I get home, I’m at my grandparents’ house rn.

1 Like

I was training a regular nn with @Cffex to classify digits with the MNIST dataset, which reached a local optimum of 80%; no matter what we did it just wouldn’t improve. I’m looking forward to the convolutional layers since they’re much better for processing images

also

this has parallel luau? if so then i’m definitely switching over

would love to have typechecking tho, so i dont have to constantly look at the docs

1 Like

I don’t exactly understand how this works. Is it possible for you to update it with comments please?

looks cool tho

1 Like

I find the normal DataPredict library easier to understand; this library makes the API similar to that of the PyTorch library. You might want to get an ML or PyTorch refresher to understand it better. Try the normal one and fiddle around, as it’s more documented

1 Like

Nice!

It will have it during later updates. Just not now. I’m currently designing how the tensors should be moved around while taking full advantage of the parallel luau capabilities. That doesn’t mean you should switch though.

Haha, sorry about that! I will add it later once I get the pure Lua version of this library up and running.

2 Likes

Beta Version 0.1.0 Update!

Added

  • WeightBlocks: AutoSizeLinear, AutoSizeBias

  • ShapeTransformationBlocks: Flatten, Reshape

  • PoolingBlocks: AveragePooling, MaximumPooling, MinimumPooling, BasePoolingBlock

  • CompressionBlocks: Sum, BaseCompressionBlock

  • OperatorBlocks: Add, Subtract, Multiply, Divide, Concatenate, DotProduct, BaseOperatorBlock

  • Containers: ComputationalGraph

  • Models: GenerativeAdversarialNetwork, WassersteinGenerativeAdversarialNetwork, BaseGenerativeAdversarialNetwork, BaseModel

  • Cores: AutomaticDifferentiationTensor, SymbolicDifferentiationTensor

Changed

  • Utilities: IterativeTrainingWrapper, TensorToClassConverter

  • Cores: FunctionBlocks

Removed

  • Cores: DifferentiationTensorObject
2 Likes

Also don’t forget to update the tensor library as well!

Beta Version 0.2.0 Update!

Added

  • PoolingBlocks: AveragePooling1D, AveragePooling2D, AveragePooling3D, MaximumPooling1D, MaximumPooling2D, MaximumPooling3D, MinimumPooling1D, MinimumPooling2D, MinimumPooling3D, MaximumUnpooling1D, MaximumUnpooling2D, MaximumUnpooling3D

  • EncodingBlocks: OneHotEncoding, LabelEncoding, BaseEncodingBlock

  • CostFunctions: BinaryCrossEntropy, CategoricalCrossEntropy, FocalLoss

Changed

  • Utilities: TensorToClassConverter

  • Cores: FunctionBlocks

Removed

  • PoolingBlocks: AveragePooling, MaximumPooling, MinimumPooling, MaximumUnpooling
2 Likes

TensorL-3D Update!

Please convert your current TensorL-3D library to TensorL as soon as possible! I will not keep DataPredict Neural compatible with TensorL-3D in the future.

TensorL can handle tensors of any number of dimensions, and DataPredict Neural will follow TensorL’s format. So this should be fun!

2 Likes

Beta Version 0.3.0

Added

  • ConvolutionBlocks: Convolution1D, Convolution2D, Convolution3D, BaseConvolution

Changes

  • OperatorBlocks: Mean, StandardDeviation, ZScoreNormalization

Side Notes

Also don’t forget to update your TensorL library!

@everyone

3 Likes

Beta Version 0.4.0

Added

  • EncodingBlocks: PositionalEncoding, OneHotEncoding, LabelEncoding, BaseEncodingBlock

  • ConvolutionBlocks: AutomaticConvolution1D, AutomaticConvolution2D, AutomaticConvolution3D

  • OperatorBlocks: Extract

  • PaddingBlocks: ZeroPadding, BasePaddingBlock

Changes

  • WeightBlocks: AutoSizeLinear → AutomaticLinear, AutoSizeBias → AutomaticBias

  • ConvolutionBlocks: Convolution1D, Convolution2D, Convolution3D

  • Containers: ComputationalGraph

Side Notes

Don’t forget to update the TensorL library!

1 Like

Beta Version 0.4.1

Fixes

  • The calculations for the first derivative of the convolution function block are now fixed, so the exploding-gradients issue should occur less often.

Side Notes

Don’t forget to update the TensorL library! You can now extract values where the target index is larger than origin index.

2 Likes

Beta Version 0.5.0

Changes

  • Renamed backPropagate() function to backwardPropagate() function for Sequential and ComputationalGraph Containers.

  • Renamed backPropagateGenerator() function to backwardPropagateGenerator() function for GenerativeAdversarialBaseModel.

  • Renamed backPropagateDiscriminator() function to backwardPropagateDiscriminator() function for GenerativeAdversarialBaseModel.

  • Some internal changes were made to support the functions above.
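Migrating existing code is a one-line change per call site; both names below come directly from this thread's preview code and changelog:

	-- Before Beta 0.5.0:
	SequentialNeuralNetwork:backPropagate(lossTensor)

	-- From Beta 0.5.0 onwards:
	SequentialNeuralNetwork:backwardPropagate(lossTensor)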

Side Notes

Please update your TensorL library for:

  • Renaming of getSize() function to getDimensionSizeArray() function.

  • Bug fixes relating to tensor broadcasting

2 Likes

Tensor Library Update!

Changes

  • Some of the tensor operations now take 1.5-3.0x less time to get the results.

Added

  • A more efficient tensor library has been added at the cost of ease of debugging. It is called “TensorL Efficient”.
1 Like

Tensor Library Update!

Fixes

  • Fixed an issue where the dimension sum adds an unwanted dimension.

Changes

  • Performance improvements on some operations in “TensorL Efficient”.
1 Like

Tensor Library Update!

Fixes

  • Fixed an issue where the createIdentityTensor() function did not create the identity tensor properly.

Changes

  • Performance improvements on some operations in “TensorL Efficient”.

Added

  • Added permute() function.
1 Like
