The Hello World Of Machine Learning

Machine learning is, at its core, a computer repeatedly adjusting numbers based on its previous attempts, with the goal of getting as close as possible to a target. In machine learning we need what is called a model. The model describes, in a sense, what our network should look like. The model we will build here has two inputs, one hidden layer of three neurons, and a single output.

Introduction

Let’s just break this down. The input layer holds, well, our input. Nothing special here. Our hidden layer harbors our neurons. We are only focusing on a simple, non-deep model, so we will have just one neuron layer with 3 channels, or 3 neurons. The neurons are connected by these things called channels; a channel is simply the flow of data, in a sense. The number of channels you have is tied to the number of neurons you have. Inside these channels live the weights (denoted w"number"), and unlike the neurons, the weights are not restricted to values from 0 to 1. In most data models this is how we represent data, and the way we represent this data is known as the model permutation.
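To make that concrete, here is a rough map of how the two inputs, three neurons, and nine weights in the code further down are wired together (just an illustration; this table isn't used anywhere):

--A rough map of the model used later in this tutorial (illustration only)
--weights[1], weights[2], weights[3]: channels from input 1 to neurons 1, 2, 3
--weights[4], weights[5], weights[6]: channels from input 2 to neurons 1, 2, 3
--weights[7], weights[8], weights[9]: channels from neurons 1, 2, 3 to the output
local model = {
	inputs = 2,   --two input values
	neurons = 3,  --one hidden layer with three neurons
	outputs = 1,  --a single output value
}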

Forward propagation

We will go with what is called a uniform permutation, in which the data for our neurons and output is represented as a value from 0 to 1. Neurons can also carry values called biases, which you can adjust to favor a particular weight, so if your AI is doing something weird on its way to the output you can investigate why by tweaking the weights and biases. Our model is pretty straightforward, however, so it will have no biases. Each neuron adds up the weighted values coming in along the previous channel layer (the layer from input to hidden), but since we chose a uniform permutation we need to turn that sum into a value from 0 to 1. Simple enough: we can just call a sigmoid function on it. You don’t have to use a sigmoid function, though; even a plain linear activation function works fine.
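As a quick illustration of that squashing step, with made-up numbers that aren't taken from the real run below:

--The same activation function used in the full example below
local function sigmoid(z)
	return 1/(1+math.exp(z))
end

--A made-up neuron sum: two inputs of 1 with weights 0.3 and 0.6
local neuronSum = 1*0.3 + 1*0.6
print(sigmoid(neuronSum)) --roughly 0.29, a value between 0 and 1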



What we get from doing this is a single value from each neuron; in other words, the neuron takes in weighted values and turns them into one value. The output layer works the same way. The value we get out of the output layer at the end is what we call the forward propagated data, and this whole process is known as forward propagation. Think of it as the AI simply doing what it is told; it isn’t learning yet. Note that for our first pass of propagation the weights are random.
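Here is a minimal sketch of one forward pass through this 2-input, 3-neuron model, using made-up weights (the full example below randomizes them instead):

local function sigmoid(z)
	return 1/(1+math.exp(z))
end

local input = {1, 1}
--Made-up weights, purely for illustration
local weights = {0.2, 0.4, 0.6, 0.1, 0.3, 0.5, 0.7, 0.8, 0.9}

--Each neuron adds up its weighted inputs
local neurons = {
	input[1]*weights[1] + input[2]*weights[4], --0.3
	input[1]*weights[2] + input[2]*weights[5], --0.7
	input[1]*weights[3] + input[2]*weights[6], --1.1
}

--The output adds up the activated neuron values, one per channel
local forwardPropagatedData = sigmoid(neurons[1])*weights[7]
	+ sigmoid(neurons[2])*weights[8]
	+ sigmoid(neurons[3])*weights[9]
print(forwardPropagatedData) --about 0.79 with these made-up weights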

Back propagation

When an AI learns, it is simply working out a delta for the previous weights and sums, checking how far off it was, and then adding that delta onto the previous weights. First we need to determine how much we were off. Simple enough: that’s just what we got subtracted from our target value. We also multiply this by what we got, to simplify some math later on. We then run this through the sigmoid prime (the derivative of our activation function), add the result onto the neurons, and then cycle through each channel updating the weights. And you’re done, you created your first AI! Well, not necessarily. We understand roughly how it works, but like any hello-world tutorial I’m going to show the code and walk through it.
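To put rough numbers on that first step (these values are made up, not taken from a real run):

--Suppose one forward pass gave us 0.8 and our target is 0
local forwardPropagatedData = 0.8
local goal = 0
--How far off we were, multiplied by the value we got
local deltaOutputSum = forwardPropagatedData * (goal - forwardPropagatedData)
print(deltaOutputSum) -- -0.64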

AI example

--What is our activation function?
--Note: this is 1/(1+e^z), the mirrored form of the usual logistic sigmoid 1/(1+e^-z),
--so large positive sums map toward 0. The rest of the tutorial relies on that behavior.
local function sigmoid(z)
	return 1/(1+math.exp(z))
end

--Adjust the neurons and weights to move the output toward the goal
local function backPropogate(forwardPropogatedData, goal, neurons, weights, inputs)
	--How much were we off from our target goal, multiplied by the value we got
	local deltaOutputSum = forwardPropogatedData * (goal - forwardPropogatedData)
	local weightsChannel = {}
	--Multiply all the inputs together; e.g. with inputs of 0.5, 1, and 3 this would be 0.5*1*3
	local productOfInputs = inputs[1]
	for i=2,#inputs do
		productOfInputs *= inputs[i]
	end
	--Work out each neuron's delta; deltaOutputSum/(deltaOutputSum/weights[n]) simplifies
	--to just weights[n], so this is the output-channel weight times the activated neuron
	local prevNeuronData = {
		weights[7] * sigmoid(neurons[1]),
		weights[8] * sigmoid(neurons[2]),
		weights[9] * sigmoid(neurons[3])
	}
	for i=1,#neurons do
		--Update the neurons, the n"number" values from the model sketch
		neurons[i] += prevNeuronData[i]
	end
	--Work out the change for each channel by dividing each neuron's delta by the product of the inputs
	weightsChannel[1] = prevNeuronData[1] / productOfInputs
	weightsChannel[2] = prevNeuronData[2] / productOfInputs
	weightsChannel[3] = prevNeuronData[3] / productOfInputs
    --Update the weights according to the channels they are in
	weights[1] += weightsChannel[1]
	weights[2] += weightsChannel[2]
	weights[3] += weightsChannel[3]
	weights[4] += weightsChannel[1]
	weights[5] += weightsChannel[2]
	weights[6] += weightsChannel[3]
	weights[7] += weightsChannel[1]
	weights[8] += weightsChannel[2]
	weights[9] += weightsChannel[3]
end

local input = {
	1,1
}
local target = {0}
local neurons = {
	0,0,0 --Start the neurons at 0; the forward pass below fills them in before training starts
}
local weights = {}
for i=1,9 do
	weights[i] = math.random() --Remember, in the first round of forward propagation the weights are random
end
--If we had biases we would add one to each neuron sum here
neurons[1] = input[1] * weights[1] + input[2] * weights[4]
neurons[2] = input[1] * weights[2] + input[2] * weights[5]
neurons[3] = input[1] * weights[3] + input[2] * weights[6]

for i=1,1000 do
	--Forward propagate: apply the activation function to each neuron and run it through the output channels
	local forwardPropogatedData = sigmoid(neurons[1]) * weights[7] + sigmoid(neurons[2]) * weights[8] + sigmoid(neurons[3]) * weights[9]
	--Back propagate to train the network
	backPropogate(forwardPropogatedData, target[1], neurons, weights, input)
end
--Forward propagate one last time to see how close our AI got to 0; it will approach 0 but never truly reach it
print(sigmoid(neurons[1]) * weights[7] + sigmoid(neurons[2]) * weights[8] + sigmoid(neurons[3]) * weights[9])

--This is just some data I collected, have fun :)
local trainedWeights = {8.926, 8.954, 8.889, 8.864, 8.875, 8.835, 9.59, 9.178, 8.874}
local trainedNeurons = {8.928, 8.927, 8.932}
print(sigmoid(trainedNeurons[1]) * trainedWeights[7] + sigmoid(trainedNeurons[2]) * trainedWeights[8] + sigmoid(trainedNeurons[3]) * trainedWeights[9])

That’s it!

If you have read all of this, give yourself a pat on the back :smiley: If you have any questions, feel free to either DM me here or leave them in the replies! machineLearningBoblox.rbxl (21.8 KB)


This is really well formatted, good job!

Nice tutorial. It looks a bit messy, however. It doesn’t really explain the linear algebra behind the network you’ve created, or really much of the math past the sigmoid function. However, still a great resource for some example code.


Yeah, I thought that’d be too intimidating for beginners, so I just left it in very broad terms, as the actual math varies depending on the neural structure.


I like your tutorials, Budd. Mathematics is what all Roblox games needed.