OpenML
An Open Source Machine Learning module. It currently supports Neural Networks, and I plan to add more to it.
OpenML was built for ease of use and beginner-friendliness, so it is easy to use and understand.
All use cases and examples are explained as thoroughly as I could for new programmers.
OpenML has a lot of type checking, so when you use functions or get objects, it should tell you what's available.
What is Machine Learning?
It's a type of Artificial Intelligence that uses algorithms to train models. It allows computer systems to learn and adapt without following explicit instructions. Machine Learning essentially imitates features of the human brain: humans have Biological Neural Networks, while computers have Artificial Neural Networks, which are composed of algorithms.
To sum it up, Machine Learning is computers learning based on algorithms.
Why use this resource?
Create Computers/AI that can learn. It's very easy to use: one line to create a neural network, two more lines to teach that network using Forward/Back Propagation (see the quick sketch below). This module is very versatile.
I will only create algorithms that are actually needed for Roblox.
If you want to teach an AI how to parkour, go ahead with reinforcement learning.
OpenML is Open Source.
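As a quick taste, here is a minimal sketch using only the APIs documented below:
local OpenML = require(game:GetService("ServerScriptService").OpenML)
local NeuralNetwork = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random) -- one line to create a network
local Activations = OpenML.Algorithms.Propagator.ForwardPropagation(NeuralNetwork, { 1, 1 }, OpenML.ActivationFunctions.TanH)
OpenML.Algorithms.Propagator.BackPropagation(NeuralNetwork, Activations, { 0, 0 }, { ActivationFunction = OpenML.ActivationFunctions.TanH, LearningRate = 0.1 }) -- two more lines to teach it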
Upcoming
Parallel Lua support, and an in-depth tutorial on how Machine Learning works. I will teach you every variable behind it. I will not just tell you how to use my module; I will teach you, so you understand what you can do with it.
For people who are new to Machine Learning, I will create an introduction that teaches the fundamentals, and I will also show you how to use OpenML with that knowledge.
Demos/Previews
Demos may not be up to date with the latest OpenML module.
Preview of Network Learning Shapes (Rectangles and Triangles) (OpenML V1.1)
OpenML_NetworkLearnsShapes.rbxl (74.0 KB)
Preview of using Deep Q-Learning for Cars (OpenML V1.1)
OpenML_DQN_Showcase.rbxl (121.0 KB)
Preview of Deep Q-Learning for Sword Fighting AI (OpenML V1.1)
OpenMLSwordAI.rbxl (92.6 KB)
How to Use OpenML (Documentation). Note: the documentation may not be up to date.
Some examples include demos and videos.
The current videos cover Deep Q-Learning and CNNs.
How To Create a Neural Network
local OpenML = require(game:GetService("ServerScriptService").OpenML)
local NeuralNetwork = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
-- Creates a Neural Network with 2 Input Nodes, 3 Hidden Nodes, and 2 Output Nodes
--[[
What the Network Looks like, I = Input, H = Hidden, O = Output
I H O
---------
O O O
  O
O O O
If you did
local NeuralNetwork = OpenML.Resources.MLP.new({ 2, 3, 3, 2 }, math.random)
You would have 2 Input Nodes, 3 Hidden Nodes, 3 Hidden Nodes, and 2 Output Nodes
What it would look like, I = Input, H = Hidden, O = Output
I H H O
------------
O O O O
  O O
O O O O
]]
Custom Initialization Values
local OpenML = require(game:GetService("ServerScriptService").OpenML)
local NeuralNetwork = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
local NeuralNetworkCustom = OpenML.Resources.MLP.new({ 2, 3, 2 }, function()
    return math.random() * 2 - 1 -- Returns a number between -1 and 1
end)
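The initializer can be any function that returns a number. As a sketch of a common variant (the scaling factor here is an arbitrary choice for illustration, not something OpenML requires), you can shrink the starting weights:
local NeuralNetworkScaled = OpenML.Resources.MLP.new({ 2, 3, 2 }, function()
    return (math.random() * 2 - 1) / math.sqrt(2) -- same -1 to 1 range, scaled down by the square root of the input count (2)
end)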
How to use Forward Propagation/Back Propagation
local OpenML = require(game:GetService("ServerScriptService").OpenML)
-- Get Propagator for forward and back propagation
local Propagator = OpenML.Algorithms.Propagator
-- Get our selected activation function
local ActivationFunction = OpenML.ActivationFunctions.TanH
-- Initialize The Network
local NeuralNetwork = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
local CalculateLoss = OpenML.Calculate.MSE
-- 20 iterations of training
for i = 1, 20 do
    local Inputs = { 1, 1 }
    local Activations = Propagator.ForwardPropagation(NeuralNetwork, Inputs, ActivationFunction)
    -- Returns the activations of each layer
    local ExpectedOutput = { 0, 0 }
    -- Print the Loss (it just tells us how far what we got is from what we expected)
    local Loss = CalculateLoss(Activations[#Activations], ExpectedOutput)
    print("\n Loss:", Loss, "\n Output Activation:", unpack(Activations[#Activations]))
    -- Train the network, so that when the inputs are { 1, 1 } it outputs { 0, 0 }
    Propagator.BackPropagation(NeuralNetwork, Activations, ExpectedOutput, {
        ActivationFunction = ActivationFunction,
        LearningRate = 0.1
    })
end
You can watch it back propagate and the loss decrease.
Custom Activation Functions
local OpenML = require(game:GetService("ServerScriptService").OpenML)
local Propagator = OpenML.Algorithms.Propagator
local NeuralNetwork = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
local ReLU = OpenML.ActivationFunctions.ReLU
local TanH = OpenML.ActivationFunctions.TanH
local Activations = Propagator.ForwardPropagation(NeuralNetwork, { 1, 1 }, function(layer: number)
    if layer == 3 then -- Use TanH only on layer 3, which in our case is the last layer
        return TanH
    else -- Otherwise just use ReLU
        return ReLU
    end
end)
print(Activations)
How to use Genetic Algorithm
local OpenML = require(game:GetService("ServerScriptService").OpenML)
local Genetic = OpenML.Algorithms.Genetic
local NetworkA = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
local NetworkB = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
-- Mutate applies random changes by chance
local MutatedNetwork = Genetic:Mutate(NetworkA, NetworkB, 0.3)
-- Blend mixes the networks by percentage, so 0.5 would mean half of the values are blended between NetworkA and NetworkB
local BlendedNetwork = Genetic:Blend(NetworkA, NetworkB, 0.3)
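To show where Mutate and Blend fit, here is a minimal sketch of a generational loop. EvaluateFitness is a hypothetical placeholder (score networks however fits your game), and the keep-two-breed-the-rest strategy is just one arbitrary choice:
local OpenML = require(game:GetService("ServerScriptService").OpenML)
local Genetic = OpenML.Algorithms.Genetic
local Propagator = OpenML.Algorithms.Propagator
local TanH = OpenML.ActivationFunctions.TanH

-- Start with a small random population
local Population = {}
for i = 1, 8 do
    Population[i] = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
end

-- Hypothetical fitness: reward outputs close to 0 for the input { 1, 1 }
local function EvaluateFitness(network)
    local activations = Propagator.ForwardPropagation(network, { 1, 1 }, TanH)
    local output = activations[#activations]
    return -(math.abs(output[1]) + math.abs(output[2]))
end

for generation = 1, 10 do
    -- Sort fittest first
    table.sort(Population, function(a, b)
        return EvaluateFitness(a) > EvaluateFitness(b)
    end)
    -- Keep the two fittest and refill the rest with mutated blends of them
    for i = 3, #Population do
        local Child = Genetic:Blend(Population[1], Population[2], 0.5)
        Population[i] = Genetic:Mutate(Child, Population[2], 0.3)
    end
end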
How to use CNNs
local OpenML = require(game:GetService("ServerScriptService").OpenML)
-- Get the CNN Resource
local CNN = OpenML.Resources.CNN
-- Kernel Presets
local Kernels = CNN.Kernels
-- Create the Convolutional Network, along with its kernels
local ConvolutionalNetwork = CNN.new({
    { Kernels.Size3x3.SobelX, { -- SobelX to apply a convolution to it
        { "MaxPooling", {2, 2}, { -- Reduce the resolution by 2
            { Kernels.Size3x3.HorizontalLine, { -- Apply a HorizontalLine convolution to it
                { "MaxPooling", {2, 2} } -- Reduce the resolution by 2
            } }
        } }
    } }
})
local Image = {}
-- We will create a black circle in the center of the image, surrounded by white
local center = Vector2.new(12, 12)
for y = 1, 24 do
    Image[y] = {}
    for x = 1, 24 do
        Image[y][x] = (Vector2.new(x, y) - center).Magnitude / 12
    end
end
-- Send the Image through our Convolutional Network
local ConvolutedOutput = CNN.ForwardPropagation(ConvolutionalNetwork, Image)
print(ConvolutedOutput[1]) -- We only made one top-level node, so we get the first output.
--[[
What it should do is follow how we made our kernels:
Apply SobelX
Reduce the resolution by 2: 24/2 = 12
Apply HorizontalLine
Reduce the resolution by 2: 12/2 = 6
That should leave us with a 6x6 image.
For computer vision or image recognition, you can send each pixel value of the 6x6 image into a Neural Network
]]
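Building on the comment above, here is a minimal sketch of feeding the 6x6 result into an MLP. It assumes ConvolutedOutput[1] is a 2D array of pixel values, which may differ between OpenML versions:
local Flattened = {}
for y = 1, #ConvolutedOutput[1] do
    for x = 1, #ConvolutedOutput[1][y] do
        table.insert(Flattened, ConvolutedOutput[1][y][x]) -- flatten the 6x6 grid into 36 inputs
    end
end

local Classifier = OpenML.Resources.MLP.new({ #Flattened, 8, 2 }, math.random)
local Activations = OpenML.Algorithms.Propagator.ForwardPropagation(Classifier, Flattened, OpenML.ActivationFunctions.TanH)
print(unpack(Activations[#Activations])) -- two outputs you could train as class scores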
How to use Optimizers
local OpenML = require(game:GetService("ServerScriptService").OpenML)
local Propagator = OpenML.Algorithms.Propagator
local ActivationFunction = OpenML.ActivationFunctions.TanH
-- Select our Optimizer
local AdamOptimizer = OpenML.Optimizers.Adam.new()
local NeuralNetwork = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
for i = 1, 20 do
    local Activations = Propagator.ForwardPropagation(NeuralNetwork, { 1, 1 }, ActivationFunction)
    print(unpack(Activations[#Activations]))
    Propagator.BackPropagation(NeuralNetwork, Activations, { 0, 0 }, {
        ActivationFunction = ActivationFunction,
        Optimizer = AdamOptimizer, -- Apply the Optimizer here; it should speed up how quickly the training converges
        LearningRate = 0.05
    })
end
Deep Q-Learning
local OpenML = require(game:GetService("ServerScriptService").OpenML)
-- Choose our Activation Function
local ActivationFunction = OpenML.ActivationFunctions.TanH
-- We need this to forward propagate and back propagate the network when learning
local Propagator = OpenML.Algorithms.Propagator
-- We will create our network to use; keep in mind it's 2 inputs, 3 hidden nodes, and 2 outputs
local NeuralNetwork = OpenML.Resources.MLP.new({ 2, 3, 2 }, math.random)
setmetatable(NeuralNetwork, { __index = Propagator }) -- Lets us call Propagator methods directly on the network
-- We get our learning resource, which in this case is Deep Q-Learning (DQL)
local DQL = OpenML.Algorithms.DQL.new()
-- This callback is required; it links the DQL to the network. It also makes it easy to manipulate things in between
DQL.OnForwardPropagation = function(states)
    return NeuralNetwork:ForwardPropagation(states, ActivationFunction)
end
-- This one is required too; it just updates the network toward its target (expected output)
DQL.OnBackPropagation = function(activations, target)
    return NeuralNetwork:BackPropagation(activations, target, { ActivationFunction = ActivationFunction, LearningRate = 0.25 }) -- add an Optimizer here if you want
end
local ReplayBuffer = OpenML.Resources.ReplayBuffer.new(16) -- This isn't required; the number inside is the size of your buffer
-- Use it if you want to train your network while also replaying past training experiences; it helps the training process.
for i = 1, 5 do
    local state = { 1, 0 } --[[
        Our state is basically our input. It tells us things about the environment.
        For example, if you were making a car drive,
        it would most likely be distances to obstacles, or values like velocity.
    ]]
    local activations = NeuralNetwork:ForwardPropagation(state, ActivationFunction) --[[
        We need to run the state and see what our result is; just run it like a regular neural network.
    ]]
    local actions = activations[#activations] --[[
        These are our Q values. Our chosen action is the one with the highest Q value.
        If actions[1] is 0.2 and actions[2] is 0.6, our action is actions[2] because it is the highest.
    ]]
    local action = actions[1] > actions[2] and 1 or 2 -- you can do this another way, like taking the max
    -- We need the index of the action, because that is how actions are identified
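    --[[
        A sketch for more than two actions (generic argmax over the Q values):
        local action, best = 1, actions[1]
        for i = 2, #actions do
            if actions[i] > best then
                action, best = i, actions[i]
            end
        end
    ]]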
    local reward = -1 -- Set our reward. It tells the network how good the action was: positive means keep doing it, negative means do something else.
    -- I set it to -1 so I can test that the network moves away from this action.
    -- This is totally optional: if you have a next state you want to pass in, you can do this
    --[[
        local nextState = { 0, 1 } -- change it however you want; it's basically your environment changing.
        -- What this represents: our state was { 1, 0 }, it looks like we moved right, so our next state is { 0, 1 }.
    ]]
    print(unpack(actions)) -- Let's see how the network is doing before learning
    -- We send our values through to update the network's parameters
    DQL:Learn{
        State = state,
        Action = action,
        Reward = reward,
        ReplayBuffer = ReplayBuffer -- Optional; include it if you want the network to also train on previous training data.
    }
    --[[
        Let me explain what goes on here.
        Say your action is 1, and actions[1] is 0.75.
        If actions[1] gets a negative reward, the network changes only that output, actions[1], to go in the opposite direction.
        So instead of actions[1] being 0.75, it might end up around 0.3; the exact change varies with the learning rate.
        That's basically it: it only changes the action you send it.
    ]]
    local newActivations = NeuralNetwork:ForwardPropagation(state, ActivationFunction)
    print(unpack(newActivations[#newActivations])) -- Let's see how the network is doing after learning
    -- What you should see is the chosen action's value going down.
end
Compression
Compression Defaults to IEEE754
There are 3 types of Compression/Encoding:
ALP
IEEE754
JSON
local OpenML = require(script.Parent.OpenML)
local MLP = OpenML.Resources.MLP
local NeuralNetwork = MLP.new({2, 3, 2}, function()
    return math.random() * 3 - 1.5
end)
local ALPCompression = MLP.Compress(NeuralNetwork, "ALP") -- ALP Compression
local IEEE754Compression = MLP.Compress(NeuralNetwork, "IEEE754") -- IEEE754 Compression
local JSONCompression = MLP.Compress(NeuralNetwork, "JSON") -- JSON Encoding
print(ALPCompression) -- ALP:?��G�2��.?G��>��w�,�.��t/7=>6%�k���g�;?����N��h.=0[?K��;?��l�p�;�kJN���A
print(IEEE754Compression) -- IEEE754:3F95ED47BF32A3B2.3F47FA9D3EB78F77BD2CA615.BF191019BE742F37=3E0A3625BF6BD1E6BF67C408;3FACEBF6BF06124EBD15D868.3D300B5B3F4BFC89;3F81876CBE700BC5;BE6B4A4EBFB79C41
print(JSONCompression) -- {"Nodes":[[1.1713036732564346,-0.6978103370441914],[0.7811678588612416,0.35851643279008918,-0.04215057641266262],[-0.5979019220644414,-0.23846136544807096]],"Weights":[[[0.13497218171855208,-0.9211716011775545,-0.9053349815348153],[1.3509510532232856,-0.5237168113041347,-0.03658333767135313]],[[0.042979580702473988,0.7968221325624154],[1.0119453055476955,-0.23441989869542269],[-0.229775640498594,-1.4344560463058896]]],"Format":"MLP"}
-- IEEE754 is the default compression because DataStores support it.
local ALPNetwork = MLP.Decompress(ALPCompression, "ALP")
local IEEE754Network = MLP.Decompress(IEEE754Compression, "IEEE754")
local JSONNetwork = MLP.Decompress(JSONCompression, "JSON")
print(ALPNetwork, IEEE754Network, JSONNetwork) -- They all return the same network.
ALP compression makes your data roughly 5x (500%) smaller.
IEEE754 compression makes your data roughly 2.5x (250%) smaller.
JSON is not compression, it's encoding.
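You can check these ratios on your own network by comparing the compressed string lengths (continuing from the snippet above):
print(#JSONCompression, #IEEE754Compression, #ALPCompression) -- JSON should be the largest and ALP the smallest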
How to Save a Network
local DataStoreService = game:GetService("DataStoreService")
local ServerScriptService = game:GetService("ServerScriptService")
-- Create/Get our datastore called "Networks"
local NetworkDataStore = DataStoreService:GetDataStore("Networks")
local OpenML = require(ServerScriptService.OpenML) -- Get OpenML
local MLP = OpenML.Resources.MLP -- Multi-layer Perceptron
local NeuralNetwork = MLP.new({ 2, 3, 2 }, math.random) -- Create a new Network
local CompressedNetwork = MLP.Compress(NeuralNetwork, "IEEE754") -- Compress the network using IEEE754, which DataStores can store
NetworkDataStore:SetAsync("MyNetwork", CompressedNetwork) -- Save Compressed Network to the DataStore
print(CompressedNetwork) -- see our compressed network
local DataStoreService = game:GetService("DataStoreService")
local ServerScriptService = game:GetService("ServerScriptService")
-- Create/Get our datastore called "Networks"
local NetworkDataStore = DataStoreService:GetDataStore("Networks")
-- How do we get the network back after it's compressed?
local OpenML = require(ServerScriptService.OpenML) -- Get OpenML in this script too
local MLP = OpenML.Resources.MLP -- Multi-layer Perceptron
local MyNetwork = NetworkDataStore:GetAsync("MyNetwork") -- Get the saved compressed network from the DataStore
local NewNeuralNetwork = MLP.Decompress(MyNetwork, "IEEE754")
print(NewNeuralNetwork) -- see our decompressed network that we can use
Directly Save the Network
-- Or just save the network directly
local DataStoreService = game:GetService("DataStoreService")
local ServerScriptService = game:GetService("ServerScriptService")
-- Create/Get our datastore called "Networks"
local NetworkDataStore = DataStoreService:GetDataStore("Networks")
local OpenML = require(ServerScriptService.OpenML) -- Get OpenML
local MLP = OpenML.Resources.MLP -- Multi-layer Perceptron
local NeuralNetwork = MLP.new({ 2, 3, 2 }, math.random) -- Create a new Network
NetworkDataStore:SetAsync("MyNetwork", NeuralNetwork) -- Save the network directly
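Loading a directly saved network back is just the reverse; the DataStore returns the plain table (this assumes every value in the network is JSON-serializable, which DataStores require):
local SavedNetwork = NetworkDataStore:GetAsync("MyNetwork") -- The network table, ready to pass to the Propagator functions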
Download Versions:
V1: OpenML.rbxm (8.0 KB)
V1.1: OpenML.rbxm (9.4 KB)
V1.2: OpenML.rbxm (12.3 KB)
V1.2.1: OpenML.rbxm (14.0 KB)
V1.2.2: OpenML.rbxm (17.5 KB)
V1.2.2.5 (LATEST): OpenML.rbxm (16.9 KB)
Latest Update Log
OpenML Version 1.2.2.5 (Minor Fixes)
- Added Softmax Activation Function
- You can now use custom loss functions in Back Propagation
- Removed ALP Base64
- Added ALP UTF8 (REMOVED)
- Changed the way data gets compressed