DataPredict Neural [Release 1.9] - PyTorch-like Deep Learning Library Meets Roblox!

Release Version 1.1 / Beta Version 0.7.0

Added

  • Expansion Blocks:

    • BaseExpansionBlock

    • ExpandNumberOfDimensions

    • ExpandDimensionSizes

Release Version 1.2 / Beta Version 0.8.0

Changes

  • Moved InputHolder to Holder Blocks

  • Changed all Shape Transformation Block settings so that they save the transformed tensor.

Added

  • Holder Blocks:

    • BaseHolderBlock

    • VariableHolder

    • NullaryFunctionHolder

Release Version 1.3 / Beta Version 0.9.0

Changes

  • If any input tensor has had its number of dimensions or dimension sizes expanded, the partial first derivative tensor is now collapsed back to that input tensor’s original number of dimensions and dimension sizes.
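
This collapse step mirrors the standard rule for differentiating through broadcasting: gradient contributions that flowed through an expanded dimension must be summed back down to the input's original shape. A minimal NumPy sketch of the idea (illustrative only; not this library's API):

```python
import numpy as np

def collapse_gradient(grad, input_shape):
    """Sum a gradient tensor back down to the original input shape,
    undoing broadcasting-style expansion (illustrative sketch)."""
    # Remove leading dimensions that were added in front of the input's shape.
    while grad.ndim > len(input_shape):
        grad = grad.sum(axis=0)
    # Sum over dimensions whose size was expanded from 1.
    for axis, size in enumerate(input_shape):
        if size == 1 and grad.shape[axis] != 1:
            grad = grad.sum(axis=axis, keepdims=True)
    return grad

x = np.ones((1, 3))     # input with a size-1 dimension
g = np.ones((2, 4, 3))  # upstream gradient after expansion
print(collapse_gradient(g, x.shape).shape)  # (1, 3)
```

Each element of the collapsed gradient accumulates the contributions from every position the input value was broadcast to.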

Added

  • Operator Blocks:

    • Clamp

    • Maximum

    • Minimum
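
For reference, these three operators share a piecewise derivative: gradients pass through only where an input value was not clamped out (or, for Maximum/Minimum, where it "won" the comparison). A quick NumPy illustration for Clamp (illustrative, not this library's API):

```python
import numpy as np

def clamp(x, low, high):
    """Element-wise clamp of x into [low, high]."""
    return np.minimum(np.maximum(x, low), high)

def clamp_derivative(x, low, high):
    """Derivative mask: 1 where x lies inside [low, high], 0 where clamped."""
    return ((x >= low) & (x <= high)).astype(float)

x = np.array([-2.0, 0.5, 3.0])
print(clamp(x, 0.0, 1.0))             # [0.  0.5 1. ]
print(clamp_derivative(x, 0.0, 1.0))  # [0. 1. 0.]
```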

Release Version 1.4 / Beta Version 1.0.0

Changes

  • Everything related to “FirstDerivative” is renamed to “ChainRuleFirstDerivative”.

  • Everything related to “SecondStepFirstDerivative” is renamed to “FirstDerivative”.

  • Calculations for first derivative tensors inside the FunctionBlocks are now more efficient.

Fixes

  • Replaced leftover calls to TensorL’s getSize() function with getDimensionSizeArray().

  • Fixed a bug where MaxUnpooling FunctionBlocks required incorrect input tensor dimension sizes.

Release Version 1.5 / Beta Version 1.1.0

Added

  • Added RandomNetworkDistillation under “Models”.

Which of your two modules would be the best for a self-learning AI? And if they are equally good, which is the best one to start with for someone who has never stepped into anything related to this?

Start with the other library, DataPredict. It has a lot more tutorials and is simpler to set up.

Come back to this library if you want more specialized neural networks to build your self-learning AIs with.

Hello guys! I would like to inform you of a change to what we define as commercial use, in order to support smaller developers who are just starting out their game development business.

Here is the list of changes:

  • 1,000 USD in a lifetime to 1,000 USD per year.

  • 100 active players at a single point in time to no limit.

  • When a company has subsidiaries or related entities, the total revenue is combined.

Hopefully, making the threshold more lenient will help such businesses grow further and faster in a shorter amount of time.

Release Version 1.6 / Beta Version 1.2.0

Added

  • Added CircularPadding, ConstantPadding, ReflectionPadding and ReplicationPadding under “PaddingBlocks”.
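
For readers unfamiliar with the four modes, NumPy's np.pad offers directly analogous modes (constant, reflect, edge, wrap), which makes for a quick comparison. The pairing below is an analogy, not a statement about this library's exact edge handling:

```python
import numpy as np

x = np.array([1, 2, 3, 4])

# Pad by 2 on each side with each mode:
print(np.pad(x, 2, mode="constant", constant_values=0))  # [0 0 1 2 3 4 0 0]  ~ ConstantPadding
print(np.pad(x, 2, mode="reflect"))                      # [3 2 1 2 3 4 3 2]  ~ ReflectionPadding
print(np.pad(x, 2, mode="edge"))                         # [1 1 1 2 3 4 4 4]  ~ ReplicationPadding
print(np.pad(x, 2, mode="wrap"))                         # [3 4 1 2 3 4 1 2]  ~ CircularPadding
```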

Release Version 1.7 / Beta Version 1.3.0

Added

  • Added Dropout1D, Dropout2D and Dropout3D under “DropoutBlocks”.

  • Added DataPredictLinearAndBias under “WeightBlocks”.

Announcement: Changes to the Definition of What Qualifies as Commercial Use

Dear Users,

I would like to inform you of an important update regarding the definition of commercial use and the requirements for obtaining a separate commercial agreement.

Previously, a separate commercial agreement was required for companies (and their related entities) whose combined revenue exceeded $1,000 per year. However, after careful consideration and in an effort to streamline the process and reduce the administrative tasks involved, I have decided to increase the threshold to $3,000 within 365 days (not per 365 days).

This change will allow smaller businesses and individual developers more flexibility to use the library without the need for a separate commercial agreement, while ensuring that larger companies or those with substantial revenue engage in the necessary commercial use agreement.

Key Points:

  • A separate commercial agreement is required for companies (or individuals, if applicable) whose combined revenue (including subsidiaries or related entities) exceeds $3,000 within 365 days (not per 365 days). If you exceed this threshold and do not wish to enter into a separate agreement, you must follow the commercial use conditions outlined in the Terms and Conditions.

  • Business-to-business (B2B) activities remain unchanged and still require a separate commercial agreement, regardless of revenue. If you fulfil this condition and do not wish to enter into a separate agreement, you must follow the commercial use conditions outlined in the Terms and Conditions.

This change is aimed at reducing the administrative workload while continuing to foster a community of developers who can benefit from the library.

Thank you for your understanding and continued support.

Best regards,

Aqwam Harish Aiman (a.k.a. MyOriginsWorkshop)

Release Version 1.8 / Beta Version 1.4.0

Added

  • Added “MonteCarloControl” and “OffPolicyMonteCarloControl” under “Models”.

  • Added “Power”, “Exponent” and “Logarithm” under “OperatorBlocks”.

Do the reinforcement learning algorithms here support a continuous action space?

Yeah, but for now the standard deviation is fixed. I have plans to create an update for them later once I’m done with my exam tomorrow.
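
For context on the fixed standard deviation: a diagonal Gaussian policy handles continuous action spaces by sampling each action dimension as mean + std * noise, where the mean comes from the network and, for now, std is a constant rather than a learned output. A minimal sketch of the sampling and log-probability math (illustrative Python, not this library's API):

```python
import math
import random

def sample_diagonal_gaussian_action(mean, std):
    """Sample a continuous action from a diagonal Gaussian policy and
    return the action together with its log-probability (sketch)."""
    action, log_prob = [], 0.0
    for m, s in zip(mean, std):
        noise = random.gauss(0.0, 1.0)
        a = m + s * noise  # reparameterized sample
        action.append(a)
        # Per-dimension Gaussian log-density; dimensions are independent,
        # so the joint log-probability is the sum over dimensions.
        log_prob += -((a - m) ** 2) / (2 * s ** 2) - math.log(s) - 0.5 * math.log(2 * math.pi)
    return action, log_prob

action, log_prob = sample_diagonal_gaussian_action([0.0, 1.0], [0.2, 0.2])
```

With a fixed std, only the mean network is trained; making std learnable (the planned update) would let the policy shrink its exploration noise as it improves.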

Oh, nice. Thanks! Iā€™ll definitely be using this.

That is a rather specific thing you asked. Are you using this as part of university research?

No, but I’ve recently begun a project simulating 3D life in Roblox through neural networks and evolution. I’ve already implemented the networks and the evolution portion, but I think having the organisms learn through reinforcement learning with evolved parameters could enhance it a lot. I’ll still need to review this library more to see if it’s applicable to the project, but if it is, I’ll probably use it in the near future. I can tell this library has a lot of thought and work put into it. Thanks for providing it as a public resource!

Glad you like it. It took me around two years just to get to the first release. The development began during the second year of my bachelor’s degree.

Why two years? Because I had to make sure that all the mathematics were correct.

After all, if you get the math wrong, you can cause serious damage, like algorithms giving away free money or destroying player retention. If they are used in live games, you could face huge lawsuits and be sued for damages.

You can’t just take an algorithm you found on the internet and drop it in without consulting experts (unless you have a PhD or a research Master’s yourself) and reading a lot of research papers, not personal blogs.

Release Version 1.9 / Beta Version 1.5.0

Added

  • Added SoftActorCritic, DeepDeterministicPolicyGradient and TwinDelayedDeepDeterministicPolicyGradient under “Models”.

Changes

  • DeepQLearning, DeepStateActionRewardStateAction, DeepExpectedStateActionRewardStateAction and ProximalPolicyOptimization models and their variants now have a “lambda” argument for TD-Lambda and GAE-Lambda functionality. This includes the AdvantageActorCritic model.

  • The diagonalGaussianUpdate() function now requires actionNoiseTensor.

  • All reinforcement learning models now require “terminalStateValue” for the categoricalUpdate(), diagonalGaussianUpdate() and episodeUpdate() functions.

  • Reimplemented ActorCritic, VanillaPolicyGradient and REINFORCE models.
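
For readers unfamiliar with the new “lambda” argument: it interpolates between one-step TD estimates (lambda = 0) and full Monte Carlo returns (lambda = 1). A generic GAE-Lambda advantage computation looks roughly like this (a textbook sketch in Python; the function name and signature are illustrative, not this library's API). Note how a terminal state value is used to bootstrap past the final step, mirroring the terminal-state-value requirement above:

```python
def generalized_advantage_estimate(rewards, values, terminal_value, gamma=0.99, lam=0.95):
    """Compute GAE-Lambda advantages for one episode (sketch).
    `values[t]` is the critic's estimate for state t; `terminal_value`
    bootstraps the value after the final step (0 for true terminal states)."""
    advantages = [0.0] * len(rewards)
    next_value = terminal_value
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]  # one-step TD error
        running = delta + gamma * lam * running              # exponentially weighted sum
        advantages[t] = running
        next_value = values[t]
    return advantages

# With gamma = lam = 1 this reduces to Monte Carlo returns minus the baseline:
print(generalized_advantage_estimate([1.0, 1.0], [0.5, 0.5], 0.0, gamma=1.0, lam=1.0))  # [1.5, 0.5]
```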