I’m really interested in this and it seems incredible. I would only like to see a tutorial of this being applied to an NPC.
I’ll probably make a tutorial video next month once I am done with my work contract.
You can give me some suggestions and I’ll see if I can fit any of the models to those suggestions.
Suggestion: make a pathfinding AI in the tutorial.
Alright! Enemy NPCs (follow and attack the player). But for that, I would like to see if these kinds of enemy NPCs could be trained to dodge projectiles, block attacks, and maybe even use objects around them to their advantage, like avoiding being hit by a projectile. Hope this made sense!
Hello, I am grateful for this library; however, I am still confused about how to integrate it into gameplay. Suppose I have an NPC that I want to train with players, so I have this pseudo-code:
local NPC = workspace.NPC
local function moveRight()
end
local function moveLeft()
end
local function moveForward()
end
local function moveBackward()
end
local function moveJump()
end
local function attack()
end
local function getInputFromEyeWithRays()
end
How would I implement your module with this pseudo-code?
Sure! Something like this, with additional functions added. Unfortunately, I am not good at pseudo-code, so I’ll give partially-completed Lua code instead (not tested).
NPC trained using player data and online learning.
local NPC = workspace.NPC
local DataPredict = require(game.ServerScriptService.DataPredict) -- Adjust this require path to wherever you placed the library.
local NeuralNetwork = DataPredict.Models.NeuralNetwork
local OnlineLearning = DataPredict.Others.OnlineLearning
local function moveRight()
end
local function moveLeft()
end
local function moveForward()
end
local function moveBackward()
end
local function moveJump()
end
local function attack()
end
local function checkIfIsCharacter(HitPart)
    local ParentModel = HitPart.Parent -- The model (e.g. a character) that contains the hit part.
    local Humanoid = ParentModel:FindFirstChild("Humanoid")
    if Humanoid then
        return 1
    else
        return 0
    end
end
local function performAction(actionNumber)
    if (actionNumber == 1) then
        attack()
    elseif (actionNumber == 2) then
        moveJump()
    else
        -- You can add more actions here.
    end
end
local function getInputFromEyeWithRays()
    local materialEnumValue
    local distance
    local isCharacter
    local ForwardRaycast = workspace:Raycast(NPC.Head.Position, NPC.Head.CFrame.LookVector * 100) -- Cast from the NPC's "eye"; adjust the origin, direction and range to your setup.
    if ForwardRaycast then
        materialEnumValue = ForwardRaycast.Material.Value -- Use the enum's numeric value as a feature.
        distance = ForwardRaycast.Distance
        isCharacter = checkIfIsCharacter(ForwardRaycast.Instance)
    else
        materialEnumValue = 0
        distance = math.huge
        isCharacter = 0
    end
    local inputVector = {{1, materialEnumValue, distance, isCharacter}} -- A one-row matrix; the leading 1 is the bias term.
    return inputVector
end
local function convertControlToActionNumber(ReceivedControls)
    -- Assign an integer to each control here.
    local actionNumber = 0
    if (ReceivedControls == "Control1") then
        actionNumber = 1
    end
    return actionNumber
end
local function startTrainingFromPlayer(Player)
    local NeuralNetworkForThisPlayer = NeuralNetwork.new()
    NeuralNetworkForThisPlayer:addLayer(3, true) -- First input layer. Add bias too.
    NeuralNetworkForThisPlayer:addLayer(5, true) -- Hidden layer. Add bias too.
    NeuralNetworkForThisPlayer:addLayer(6, false) -- Final output layer. Value is six because of 6 actions.
    NeuralNetworkForThisPlayer:setClassesList({1, 2, 3, 4, 5, 6}) -- 6 different actions, so six classes.
    local OnlineLearningForThisPlayer = OnlineLearning.new(NeuralNetworkForThisPlayer)
    local DataHarvestRemoteEvent = game.ReplicatedStorage.DataHarvestRemoteEvent -- Point this to your own RemoteEvent.
    DataHarvestRemoteEvent.OnServerEvent:Connect(function(ReceivedPlayer, ReceivedControls)
        if (ReceivedPlayer == Player) then
            local inputVector = getInputFromEyeWithRays()
            local actionNumber = convertControlToActionNumber(ReceivedControls)
            OnlineLearningForThisPlayer:addInputToOnlineLearningQueue(inputVector)
            OnlineLearningForThisPlayer:addOutputToOnlineLearningQueue(actionNumber)
        end
    end)
    OnlineLearningForThisPlayer:startOnlineLearning()
end
local function runTrainedNPC()
    local NeuralNetworkForThisNPC = NeuralNetwork.new()
    -- The layers are the same as in our previous neural network.
    NeuralNetworkForThisNPC:addLayer(3, true) -- First input layer. Add bias too.
    NeuralNetworkForThisNPC:addLayer(5, true) -- Hidden layer. Add bias too.
    NeuralNetworkForThisNPC:addLayer(6, false) -- Final output layer. Value is six because of 6 actions.
    NeuralNetworkForThisNPC:setClassesList({1, 2, 3, 4, 5, 6}) -- 6 different actions, so six classes.
    while true do
        local inputVector = getInputFromEyeWithRays()
        local predictedActionNumber = NeuralNetworkForThisNPC:predict(inputVector)
        performAction(predictedActionNumber)
        task.wait() -- Yield every iteration so the loop does not freeze the server.
    end
end
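For completeness, here is a minimal sketch (my own, untested) of the client side that would feed DataHarvestRemoteEvent. The RemoteEvent location, the key binding, and the "Control1" string are illustrative assumptions, not part of the library.

-- LocalScript (illustrative): fires the player's controls to the server for data harvesting.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local UserInputService = game:GetService("UserInputService")

local DataHarvestRemoteEvent = ReplicatedStorage:WaitForChild("DataHarvestRemoteEvent") -- Assumed location; create this RemoteEvent yourself.

UserInputService.InputBegan:Connect(function(input, gameProcessedEvent)
    if gameProcessedEvent then return end
    if (input.KeyCode == Enum.KeyCode.F) then
        DataHarvestRemoteEvent:FireServer("Control1") -- "Control1" maps to action 1 in convertControlToActionNumber().
    end
end)

On the server, you could then start one training session per player, for example with game:GetService("Players").PlayerAdded:Connect(startTrainingFromPlayer), and call runTrainedNPC() once enough data has been collected.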
NPC trained using reinforcement (I accidentally misread the question, but I’ll just leave it here).
local NPC = workspace.NPC
local DataPredict = require(game.ServerScriptService.DataPredict) -- Adjust this require path to wherever you placed the library.
local NeuralNetwork = DataPredict.Models.NeuralNetwork.new()
NeuralNetwork:addLayer(3, true) -- First input layer. Add bias too.
NeuralNetwork:addLayer(5, true) -- Hidden layer. Add bias too.
NeuralNetwork:addLayer(6, false) -- Final output layer. Value is six because of 6 actions.
NeuralNetwork:setClassesList({1, 2, 3, 4, 5, 6}) -- 6 different actions, so six classes.
local function moveRight()
end
local function moveLeft()
end
local function moveForward()
end
local function moveBackward()
end
local function moveJump()
end
local function attack()
end
local function checkIfIsCharacter(HitPart)
    local ParentModel = HitPart.Parent -- The model (e.g. a character) that contains the hit part.
    local Humanoid = ParentModel:FindFirstChild("Humanoid")
    if Humanoid then
        return 1
    else
        return 0
    end
end
local function getInputFromEyeWithRays()
    local materialEnumValue
    local distance
    local isCharacter
    local ForwardRaycast = workspace:Raycast(NPC.Head.Position, NPC.Head.CFrame.LookVector * 100) -- Cast from the NPC's "eye"; adjust the origin, direction and range to your setup.
    if ForwardRaycast then
        materialEnumValue = ForwardRaycast.Material.Value -- Use the enum's numeric value as a feature.
        distance = ForwardRaycast.Distance
        isCharacter = checkIfIsCharacter(ForwardRaycast.Instance)
    else
        materialEnumValue = 0
        distance = math.huge
        isCharacter = 0
    end
    local inputVector = {{1, materialEnumValue, distance, isCharacter}} -- A one-row matrix; the leading 1 is the bias term.
    return inputVector
end
local function performAction(actionNumber)
    if (actionNumber == 1) then
        attack()
    elseif (actionNumber == 2) then
        moveJump()
    else
        -- You can add more actions here.
    end
end
local function evaluateRealActionNumber(predictedActionNumber)
    local realActionNumber = 0
    --[[
        Put your own conditions here for when certain situations are met.
        For example, if the NPC takes a hit while fighting, then it should move back.
        Another one: if the NPC cannot move forward because something is blocking it, then it needs to jump.
    --]]
    return realActionNumber
end
local function run()
    local realActionNumber = 0
    local inputVector
    local predictedActionNumber
    local rewardValue = 0.3
    local punishValue = 0.5
    while true do
        inputVector = getInputFromEyeWithRays()
        predictedActionNumber = NeuralNetwork:reinforce(inputVector, realActionNumber, rewardValue, punishValue)
        performAction(predictedActionNumber)
        realActionNumber = evaluateRealActionNumber(predictedActionNumber)
        task.wait() -- Yield every iteration so the loop does not freeze the server.
    end
end
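If it helps, here is one illustrative way the placeholder evaluateRealActionNumber could be filled in, following the two examples in its comment. The health and obstacle checks, and the action number 4 for moveBackward(), are my own assumptions; action 2 maps to moveJump() in performAction().

-- Illustrative only; assumes the NPC model contains a Humanoid and a Head part.
local previousHealth = NPC.Humanoid.Health

local function evaluateRealActionNumber(predictedActionNumber)
    local realActionNumber = predictedActionNumber -- Default: treat the prediction as acceptable.
    local currentHealth = NPC.Humanoid.Health
    if (currentHealth < previousHealth) then
        realActionNumber = 4 -- Assumed action number for moveBackward(): back off after taking a hit.
    elseif workspace:Raycast(NPC.Head.Position, NPC.Head.CFrame.LookVector * 3) then
        realActionNumber = 2 -- Short ray ahead hit something: the path is blocked, so jump (action 2 = moveJump()).
    end
    previousHealth = currentHealth
    return realActionNumber
end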
Alternatively, you can use the QueuedReinforcementNeuralNetwork in AqwamCustomModels. However, you still need the while loop.
Looks very interesting, I’m gonna check it out.
Don’t forget to leave a like on the first post if you find this useful! I’d appreciate it.
Thank you so much! The first code seems like it tries to mimic the player’s playstyle. Interesting!
Hiya! This looks awesome and I can see great uses for this within the platform.
However, it could be improved quite a bit by including comments within your source code on GitHub. That way, it is much easier for people to follow your train of thought. Furthermore, I feel like the variable naming scheme is a bit more verbose than required. As mentioned previously, comments can be used instead to provide details about what data a variable holds.
Eh. I don’t think comments are necessary, since the majority of game developers do not have machine learning knowledge. Because of that, I doubt they would be interested in looking into the code.
As for the naming scheme, I kind of disagree with that. It provides clear and comprehensive descriptions of what each variable is. Even if others agree with you, the first reasoning still applies.
Hello again! Thank you for the pseudo-code, it really helped! However, there is one small issue: the gameplay is a little too complex for just condition checking; the function would be more than 200 lines long! So I am wondering, how would I make the neural network explore strategies on its own?
Kind of difficult to tell without knowing the details. May I have some more context, please?
Well, I have a swordfighting arena with two spawns: one spawns the neural network, the other spawns a hardcoded bot. I would like the neural network to try to find an optimal strategy to fight against the hardcoded bot.
Try the reinforcement learning code I provided, if you haven’t done so already. It would cut down quite a lot of code.
Yes, but it has this function:
local function evaluateRealActionNumber(predictedActionNumber)
    local realActionNumber = 0
    --[[
        Put your own conditions here for when certain situations are met.
        For example, if the NPC takes a hit while fighting, then it should move back.
        Another one: if the NPC cannot move forward because something is blocking it, then it needs to jump.
    --]]
    return realActionNumber
end
With complex gameplay, that means a lot of code. The kind of learning I’m looking for is similar to this, where an agent’s decisions in an environment depend on the reward function:
I see. Unfortunately, I don’t think I have a good solution for that.
I might, however, implement a new neural network function to take care of this issue. Are you okay with waiting for a while so that I can implement new features?
It is completely fine by me! I don’t mind waiting for at least a month.
To be fair, your point stands, but I’d love to review the code sometime soon (as I’m currently undergoing education in ML).
Regarding the variables, it just feels odd looking at extremely verbose naming in a repetitive manner, but I guess I’ll have to get used to that lmao.
Heads up. I have added QLearningNeuralNetwork, but haven’t completed the documentation (it will take a while). You can have a look at the functions’ parameters to understand what it does.
It is available in the Beta version only.
Edit: The documentation for Q-Learning Neural Network is completed.
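For anyone landing here later, a rough usage sketch might look like the following. It assumes QLearningNeuralNetwork shares the constructor, addLayer(), and setClassesList() calls of the NeuralNetwork model shown earlier and exposes a reinforce()-style call taking the environment vector and a reward value; treat every call below as an assumption and verify it against the official documentation. calculateReward() is a hypothetical helper.

-- Hypothetical sketch; verify all calls against the DataPredict documentation.
local QLearningNeuralNetwork = DataPredict.Models.QLearningNeuralNetwork

local Model = QLearningNeuralNetwork.new()

Model:addLayer(3, true) -- Assumed to mirror NeuralNetwork: input layer with bias.
Model:addLayer(5, true) -- Hidden layer with bias.
Model:addLayer(6, false) -- Output layer, one neuron per action.

Model:setClassesList({1, 2, 3, 4, 5, 6})

while true do
    local environmentVector = getInputFromEyeWithRays()
    local rewardValue = calculateReward() -- Hypothetical helper: e.g. positive on hitting the bot, negative on taking damage.
    local predictedActionNumber = Model:reinforce(environmentVector, rewardValue) -- Assumed reward-driven reinforce(); check the docs.
    performAction(predictedActionNumber)
    task.wait()
end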