Release Version 1.1 / Beta Version 0.7.0
Added

Expansion Blocks:
BaseExpansionBlock
ExpandNumberOfDimensions
ExpandDimensionSizes
Moved InputHolder to Holder Blocks
Changed all the Shape Transformation Block settings so that they save the transformed tensor.
Holder Blocks:
BaseHolderBlock
VariableHolder
NullaryFunctionHolder
Operator Blocks:
Clamp
Maximum
Minimum
Everything related to "FirstDerivative" is renamed to "ChainRuleFirstDerivative".
Everything related to "SecondStepFirstDerivative" is renamed to "FirstDerivative".
Calculations for first derivative tensors inside the FunctionBlocks are now more efficient.
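The renames above mirror standard calculus terminology: a plain first derivative is evaluated directly, while a chain-rule first derivative multiplies an upstream gradient from later blocks by the local derivative, as in backpropagation. A minimal numeric sketch in Python (generic math with hypothetical names, not this library's API):

```python
# Generic sketch of the two derivative notions (not this library's API).
# Example function: f(x) = x^2, so f'(x) = 2x.

def f_derivative(x):
    # Plain first derivative of f evaluated at x.
    return 2.0 * x

def f_chain_rule_derivative(x, upstream_gradient):
    # Chain-rule first derivative: the gradient flowing in from later
    # blocks is multiplied by the local derivative (backpropagation step).
    return upstream_gradient * f_derivative(x)

print(f_derivative(3.0))                  # 6.0
print(f_chain_rule_derivative(3.0, 0.5))  # 3.0
```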
Replaced leftover TensorL getSize() calls with getDimensionSizeArray().
Fixed a bug where MaxUnpooling FunctionBlocks required incorrect input tensor dimension sizes.
Which of your two modules would be best for a self-learning AI, and if they are equally good, which is the best one to start with for someone who has never stepped into anything related to this?
Start with the other library, DataPredict. It has a lot more tutorials and is simpler to set up.
Come back to this library if you want more specialized neural networks to build your self-learning AIs with it.
Hello guys! I would like to inform you of a change to what we consider commercial use, to support smaller developers who are just starting their game development business.
Here is the list of changes:
From 1,000 USD in a lifetime to 1,000 USD per year.
From 100 active players at any single time to no limit.
When a company has subsidiaries or related entities, the total revenue is combined.
Hopefully, by making the threshold more lenient, it will help such businesses grow further and faster.
Added Dropout1D, Dropout2D and Dropout3D under "DropoutBlocks".
Added DataPredictLinearAndBias under "WeightBlocks".
Dear Users,
I would like to inform you of an important update regarding the definition of commercial use and the requirements for obtaining a separate commercial agreement.
Previously, a separate commercial agreement was required for companies (and their related entities) whose combined revenue exceeded $1,000 per year. However, after careful consideration and in an effort to streamline the process and reduce the administrative tasks involved, I have decided to increase the threshold to $3,000 within 365 days (not per 365 days).
This change will allow smaller businesses and individual developers more flexibility to use the library without the need for a separate commercial agreement, while ensuring that larger companies or those with substantial revenue engage in the necessary commercial use agreement.
A separate commercial agreement is required for companies (or individuals, if applicable) whose combined revenue (including subsidiaries or related entities) exceeds $3,000 within 365 days (not per 365 days). If you exceed this threshold and do not wish to enter into a separate agreement, you must follow the commercial use conditions outlined in the Terms and Conditions.
Business-to-business (B2B) activities remain unchanged and still require a separate commercial agreement, regardless of revenue. If you fulfil this condition and do not wish to enter into a separate agreement, you must follow the commercial use conditions outlined in the Terms and Conditions.
This change is aimed at reducing the administrative workload while continuing to foster a community of developers who can benefit from the library.
Thank you for your understanding and continued support.
Best regards,
Aqwam Harish Aiman (a.k.a. MyOriginsWorkshop)
Added "MonteCarloControl" and "OffPolicyMonteCarloControl" under "Models".
Added "Power", "Exponent" and "Logarithm" under "OperatorBlocks".
Do the reinforcement learning algorithms here support a continuous action space?
Yeah, but for now the standard deviation is fixed. I have plans to create an update for them later once I'm done with my exam tomorrow.
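For context, a continuous-action policy with a fixed standard deviation typically samples each action dimension from an independent (diagonal) Gaussian centred on the network's output mean. A minimal sketch in Python with hypothetical names (generic reinforcement-learning math, not this library's API):

```python
import math
import random

# Sketch of a diagonal Gaussian policy with a fixed standard deviation
# (generic reinforcement-learning math, not this library's API).
FIXED_STD = 0.5

def sample_action(mean_vector):
    # Each action dimension is sampled independently (diagonal covariance).
    return [random.gauss(mu, FIXED_STD) for mu in mean_vector]

def log_probability(action, mean_vector):
    # Sum of per-dimension Gaussian log-densities.
    log_prob = 0.0
    for a, mu in zip(action, mean_vector):
        log_prob += -((a - mu) ** 2) / (2 * FIXED_STD ** 2) \
                    - math.log(FIXED_STD * math.sqrt(2 * math.pi))
    return log_prob

mean = [0.2, -0.1]
action = sample_action(mean)
print(log_probability(action, mean))
```

Making the standard deviation learnable (the planned update) would simply mean having the network output a per-dimension standard deviation instead of the fixed constant.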
Oh, nice. Thanks! I'll definitely be using this.
That is a rather specific question. Are you using this as part of a university research project?
No, but I've recently begun a project simulating 3D life in Roblox through neural networks and evolution. I've already implemented the networks and evolution portion, but I think having the organisms learn through reinforcement learning with evolved parameters could enhance it a lot. I'll still need to review this library more to see if it's applicable to the project, but if it is, I'll probably use it in the near future. I can tell this library has a lot of thought and work put into it. Thanks for providing it as a public resource!
Glad you like it. It took me around two years just to get to the first release. Development began during the second year of my bachelor's degree.
Why two years? Because I had to make sure that all the mathematics were correct.
After all, if you get the math wrong, you can cause serious damage, like algorithms giving away free money or destroying player retention. If such algorithms are used in live games, you could get into huge lawsuits and be sued for damages.
You can't just take an algorithm you found on the internet and drop it in without consulting experts (unless you have a PhD or a research master's degree yourself) and reading a lot of research papers that aren't personal blogs.
The DeepQLearning, DeepStateActionRewardStateAction, DeepExpectedStateActionRewardStateAction and ProximalPolicyOptimization models and their variants now have a "lambda" argument for TD-Lambda and GAE-Lambda functionality. This includes the AdvantageActorCritic model.
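For reference, a lambda argument in TD(λ)/GAE-style updates blends one-step TD errors into a discounted advantage estimate: lambda = 0 gives pure one-step TD errors, lambda = 1 gives Monte Carlo returns minus the value baseline. A minimal generalized advantage estimation sketch in Python (generic math with hypothetical names, not this library's API):

```python
# Generalized Advantage Estimation (GAE) sketch -- generic math, not
# this library's API. `values` has one extra entry that bootstraps the
# value after the last step (zero at a terminal state).
def compute_gae(rewards, values, discount_factor=0.99, lambda_=0.95):
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        # One-step TD error at time t.
        td_error = rewards[t] + discount_factor * values[t + 1] - values[t]
        # Exponentially weighted sum of future TD errors, controlled
        # by lambda_ (the new "lambda" argument's role).
        running = td_error + discount_factor * lambda_ * running
        advantages[t] = running
    return advantages

print(compute_gae([1.0, 1.0], [0.5, 0.5, 0.0], 0.99, 0.95))
```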
The diagonalGaussianUpdate() function now requires actionNoiseTensor.
All reinforcement learning models now require "terminalStateValue" for the categoricalUpdate(), diagonalGaussianUpdate() and episodeUpdate() functions.
Reimplemented ActorCritic, VanillaPolicyGradient and REINFORCE models.