DataPredict [Release 1.21] - General Purpose Machine Learning And Deep Learning Library (Learning AIs, Generative AIs, and more!)

It is impossible to train without a dataset. However, there are models that use a single data point and no output labels at all.

We call them “Reinforcement Learning” models.

There is already plenty of source code on the main post that somewhat covers all of the things you mentioned.

You can also refer to the “Reinforcement Learning” part of the tutorial, as it covers the things that are commonly found in my reinforcement learning models and that you need to use to train the AIs.

Since you are new to this, I recommend you stick to Expected SARSA and Deep Q-Learning (not the Double version) until you have further knowledge of reinforcement learning (e.g. the difference between on-policy and off-policy models).
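
For reference, here is a minimal Luau sketch (not DataPredict code; all values below are made-up placeholders) of the standard tabular update targets, which is where the on-policy/off-policy difference shows up:

-- Illustration only: textbook update targets for Q-Learning vs Expected SARSA.
local reward = 1
local gamma = 0.99                              -- discount factor
local nextQValues = {0.5, 0.2, 0.8}             -- Q(s', a') for each possible next action a'
local policyProbabilities = {0.2, 0.1, 0.7}     -- pi(a' | s') under the current policy

-- Q-Learning (off-policy): bootstrap from the best next action, regardless of the policy.
local qLearningTarget = reward + gamma * math.max(table.unpack(nextQValues))

-- Expected SARSA (on-policy): bootstrap from the expected value under the current policy.
local expectedNextValue = 0
for action, probability in ipairs(policyProbabilities) do
    expectedNextValue += probability * nextQValues[action]
end
local expectedSarsaTarget = reward + gamma * expectedNextValue

-- Either way, the tabular update is: Q[s][a] = Q[s][a] + learningRate * (target - Q[s][a])
print(qLearningTarget, expectedSarsaTarget)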

1 Like

In addition to that, you can also use a genetic algorithm, but the tutorials in the documentation don’t really cover it. You’ll have to look on YouTube for this. The first method is simpler.

1 Like

Good news, everyone! I have opened up the repository for pretrained neural network models! You can see the link on the main post!

If you wish to contribute, it is very important for you to read and sign the “Contributor License Agreement” in the README file.

2 Likes

Thank you very much for answering these questions.

I also have a couple of questions: How would I go about creating the environment, and how would I determine the inputs for the NPCs? Let’s say I wanted to make a hide-and-seek simulation similar to the one seen on Two Minute Papers: what inputs would I give the NPCs? How would they know their surroundings? Let’s say there are objects the NPC can interact with. Would I just not include those in the input and have the NPCs detect them from the raycasts that they fire, or would I provide the NPC with the object’s location? And if I did that, wouldn’t the number of inputs change? Would I give the NPC the locations of hiding spots, or would it be able to determine that through training?

I would also like to thank you for creating this module/library, since there aren’t many neural networking resources on Roblox. (There was another module by Kironte, but I like DataPredict more since it has more features and a tutorial on sword-fighting NPCs.)

You would need to use these inputs:

  • Raycasts from the NPC. Each raycast would provide:

    • Whether the enemy is facing towards the NPC (if applicable).

    • The distance between the NPC and the enemy/part to hide behind.

    • Part details are not necessary (e.g. colour, material, and so on).

  • A memory value that stores the time elapsed since the NPC was last seen by the enemy.

  • The NPC’s previous actions, stored by adding new input neurons for one-hot encoding (optional; I also didn’t see really good results in my experience making a walking NPC).

I’m not sure whether adding the NPC’s and the enemy’s speed as inputs would make a difference. A rough sketch of assembling these signals into an input vector is included below.
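
Here is that sketch in plain Luau (not DataPredict-specific; NUM_RAYS, lastSeenTime and previousAction are placeholder names you would adapt to your own setup):

-- Hypothetical helper: builds one flat input vector for the hider/seeker NPC.
local NUM_RAYS = 8
local MAX_DISTANCE = 100

local function buildInputVector(npcRootPart, lastSeenTime, previousAction, numberOfActions)
    local inputVector = {}

    -- One normalised distance per raycast, spread evenly around the NPC.
    for rayIndex = 1, NUM_RAYS do
        local angle = (rayIndex - 1) * (2 * math.pi / NUM_RAYS)
        local direction = Vector3.new(math.cos(angle), 0, math.sin(angle)) * MAX_DISTANCE
        local raycastResult = workspace:Raycast(npcRootPart.Position, direction)
        local distance = raycastResult and raycastResult.Distance or MAX_DISTANCE
        table.insert(inputVector, distance / MAX_DISTANCE)
    end

    -- Time since the NPC was last seen by the enemy (capped and normalised).
    table.insert(inputVector, math.min((os.clock() - lastSeenTime) / 60, 1))

    -- Optional one-hot encoding of the previous action.
    for actionIndex = 1, numberOfActions do
        table.insert(inputVector, actionIndex == previousAction and 1 or 0)
    end

    return inputVector
end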

Also, you’re welcome!

You would need a bunch of states like the ones available here, or something like this:

In this example, all of the states are tracked and judged to output a string. Something similar would be to assign a reward value to each of these actions and feed the states into the model, similar to his sword-fighting template where all the inputs are defined.

function cm.PlayerSentience(TextLabel,Player)
--local states={Damage=Player.Character.Humanoid:GetPropertyChangedSignal("Health"),
if not awareness then awareness=require(game.ReplicatedStorage.GlobalSpells.ChatbotAlgorithm.Awareness) end
local function Genericdialogue(v)
local dialoguetbl={}
--local v=tab
if v==0 then
v=false
end
if v==1 then
v=true 
end
if v==true or v==false then
-- Using a table of strings
dialoguetbl = {}
if v == true then
  dialoguetbl[1] = "is activated"
  dialoguetbl[2] = "is currently active"
  dialoguetbl[3] = "is ready to use"
  dialoguetbl[4] = "is functioning properly"
else
  dialoguetbl[1] = "is deactivated"
  dialoguetbl[2] = "is deactive"
  dialoguetbl[3] = "is not available"
  dialoguetbl[4] = "is no longer active."
end

end
print(v)

return dialoguetbl
end   
local function booldialogue(v,active,deactive)
local dialoguetbl={}
if v==0 then
v=false
end
if v==1 then
v=true 
end
if v==true or v==false then
if v==true then
dialoguetbl=active
else 
dialoguetbl=deactive
end
end
return dialoguetbl
end
local bigpower=75
local function rollstr(s)
return s[math.random(1,#s)]
end
local function getroot(instance)
    if instance ~= nil then
        if instance:FindFirstChild("HumanoidPart") ~= nil then
            return instance.HumanoidRootPart
        elseif instance:IsA("Folder") then
                if instance:FindFirstChildOfClass("Model").PrimaryPart~=nil then
            return instance:FindFirstChildOfClass("Model").PrimaryPart
            else 

return instance:FindFirstChildOfClass("Model"):FindFirstChild("Handle")
            end
        elseif instance:IsA("BasePart") then
            return instance
            
        elseif instance:IsA("Model") then
if instance.PrimaryPart then
            return instance.PrimaryPart
else 
return instance:FindFirstChildOfClass("BasePart")
end
        end
    end
end
local function judgeobjectdistance(v)
print(v)
--(magnitude, position, position2, maxDist)
local r= getroot(v)
if r then   
local dist, dir = awareness.judge.distance(( r.Position-Player.Character.HumanoidRootPart.Position).Magnitude, Player.Character.HumanoidRootPart.Position, r.Position, 80)
return awareness.judge.object(v, Player.Character) .. " is " .. dist .. " to the " .. dir
end
end
local judgepowerobsevations = {--I think my magic somewhat average. But I can keep improving!
    ["very weak"]={"could improve.","needs improvement.","could use some practice.", "has a lot of room for improvement.", "needs a lot of work."},
    ["weak"]={"could benefit from some training.","is novice at best.","appears quite amateur.","could use some guidance.", "is weak and I'm still learning the basics.", "needs more experience."},
    ["kind of weak"]={"not that good.","appears quite weak.","could be better.", "is not very impressive.", "needs more confidence.","is decent.", "is not bad.", "has some potential."},
    ["average"]={"is pretty good","is somewhat average. But I can keep improving!"},
    ["kind of strong"]={"is getting stronger!","is becoming strong!","could become very powerful if I keep training!","is showing promise.", "is getting better.", "has some skill."},
    ["strong"]={"is getting so strong.","is becoming the key to true power.","is really becoming powerful.","is impressive.", "is very good.", "has a lot of talent."},
    ["powerful"]={"is beginning to plateau. I've become so powerful.","is truly magnificent! My hard work has paid off!","is reaching its full potnetial!","is amazing.", "is outstanding.", "has mastered the art."}
}
local function linkjudgepower(s)
for i,v in judgepowerobsevations do if s[1]==i then
for t,o in v do 
table.insert(s,o)
end
end
end
return s
end
local function  styleroll(d,s,v,ljp)
local styrol=math.random(1,4)
if #s>0 then
if d=="My" 
and styrol==1 then
--My magic is low -> I have low magic
if ljp then d="I have" end
return d.." "..s[math.random(1,#s)].." "..v.."."-- I have very weak magic.
elseif styrol==2 then
d="I notice that my "
if ljp then s=linkjudgepower(s) end
s[1]="is "..s[1].."."
return d.." "..v.." "..s[math.random(1,#s)].."."-- I notice that my magic is really 
elseif styrol==3 then
if ljp then s=linkjudgepower(s) end
s[1]="is "..s[1].."."
--My magic is low -> I have low magic
d="I think my"
return d.." "..v.." "..s[math.random(1,#s)]..""--I think my magic could improve
else 
if ljp then s=linkjudgepower(s) end
s[1]="is "..s[1].."."
return d.." "..v.." "..s[math.random(1,#s)].."" -- My magic is very weak.
end 
end
return nil
end
local function dialoguetype(v,s,d,style,o)--v is key=flying s is table of judgements d is "My
-- 
if style==0 then
if d=="My" then 
return styleroll(d,s,v,o)
end 
elseif style==1 then
local npcText=judgeobjectdistance(o)

return d.." "..v.." "..npcText.."."
elseif style==2 then
return d.." "..s[math.random(1,#s)].." "..v.."."-- I currently am flying.
elseif style==3 then
s[1]="is "..s[1].."."
return d.." "..v.." "..s[math.random(1,#s)].."" 
end
end
local boostpower=17
local states={Combat=false,Chatting=false,Damaged=false,Walking=Player.PlayerGui.Bars.Walking}
local TOLERANCE=160
local spoint=Player.Character.HumanoidRootPart.Position
local function pointtolerance(Vec)
local TOLERANCE=TOLERANCE
local Vec=Vector3.new(math.floor(Vec.X/(TOLERANCE*2))*(TOLERANCE*2),math.floor(Vec.Y/(TOLERANCE*4))*(TOLERANCE*4),math.floor(Vec.Z/(TOLERANCE*2))*(TOLERANCE*2))
if Vec==spoint then
return nil
else spoint=Vec

return true
end
end

local connectiontbl={}
local streamofthoughts={}
local chatthrottle=false
--local InternalObjectState=Instance.new("StringValue")
--InternalObjectState:GetPropertyChangedSignal("Value"):Connect(function()

local timer=game.ReplicatedStorage.GlobalSpells.Tick
local tracks = {
    --Order = 1 is beginning 2 second 3rd end
---Levels 
["awareness"] = {
        Player.Levels.Xpos,--Player.PlayerGui.Bonus.STR,
        function(v)
            return  pointtolerance(Vector3.new(Player.Levels.Xpos,Player.Levels.Ypos,Player.Levels.Zpos))
        end,
        function(v, s, t)
            if Player.PlayerGui.EnemyGui.Target.Value==nil then
            local awaretbl=awareness.getsurroundingobjects(Player.Character.HumanoidRootPart,Player)
            local tblroll=cm.mathrandom(2,#awaretbl)
print(awaretbl)          
  print(tblroll)
            if tblroll~=1 then
            return awaretbl[tblroll]
            end           
            end
            return nil
        end,
      {writetime=-1000 ,throttle=60,Combat=false,Chatting=false}  
    },
 ["strength"] = {
        Player.Levels.Strength,--Player.PlayerGui.Bonus.STR,
        function(v)
            return { awareness.judge.power(v, bigpower)}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0,true)
        end,
      {writetime=-1000 ,throttle=60,Combat=false,Chatting=false}  
    },
   
["defence"] = {
        Player.Levels.Constitution,--Player.PlayerGui.Bonus.DF,
        function(v)
            return { awareness.judge.power(v, bigpower)}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0,true)
        end,
      {writetime=-1000 ,throttle=60,Combat=false,Chatting=false}  
    },
 ["magic"] = {
        Player.Levels.Magic,--Player.PlayerGui.Bonus.MAG,
        function(v)
            return { awareness.judge.power(v, bigpower)}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0,true)
        end,
      {writetime=-1000 ,throttle=60,Combat=false,Chatting=false}  
    },
    ["ranged ability"] = {
        Player.Levels.Range,--Player.PlayerGui.Bonus.RNG,
        function(v)
            return { awareness.judge.power(v, bigpower)}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0,true)
        end,
      {writetime=-1000 ,throttle=60,Combat=false,Chatting=false}  
    },
---Total Bonus
    ["strength boost"] = {
        Player.PlayerGui.Boost.STR,
        function(v)
            return { awareness.judge.power(v, boostpower)}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0)
        end,
      {writetime=-1000 ,throttle=200,Combat=false,Chatting=false}  
    },

    ["defence boost"] = {
       Player.PlayerGui.Boost.DF,
        function(v)
            return { awareness.judge.power(v, boostpower)}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0)
        end,
      {writetime=-1000 ,throttle=200,Combat=false,Chatting=false}  
    },
    ["magic boost"] = {
       Player.PlayerGui.Boost.MAG,
        function(v)
            return { awareness.judge.power(v, boostpower)}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0)
        end,
      {writetime=-1000 ,throttle=200,Combat=false,Chatting=false}  
    },
    ["ranged boost"] = {
        Player.PlayerGui.Boost.RNG,
        function(v)
            return { awareness.judge.power(v, boostpower)}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0)
        end,
      {writetime=-1000 ,throttle=200,Combat=false,Chatting=false}  
    },
   
-------
    ["Magic prayer"] = {
        Player.PlayerGui.Bonus.Magic,
        function(v)
            return Genericdialogue(v)
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 3)
        end,
      {writetime=-1000 ,throttle=600,Combat=true,Chatting=false}  
    },
    ["Ranged prayer"] = {
        Player.PlayerGui.Bonus.Ranged,
        function(v)
            return Genericdialogue(v)
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 3)
        end,
      {writetime=-1000 ,throttle=600,Combat=true,Chatting=false}  
    },
    ["Strength prayer"] = {
        Player.PlayerGui.Bonus.Strength,
        function(v)
            return Genericdialogue(v)
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 3)
        end,
      {writetime=-1000 ,throttle=600,Combat=true,Chatting=false}  
    },
    ["Dragon form"] = {
        Player.PlayerGui.Main.Form,
        function(v)
            return Genericdialogue(v)
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 3,false)
        end,
      {writetime=-1000 ,throttle=600,Combat=true,Chatting=false}  
    },
    ["Angel form"] = {
        Player.Character.Angel,
        function(v)
            return Genericdialogue(v)
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 3)
        end,
      {writetime=-1000 ,throttle=600,Combat=true,Chatting=false}  
    },
    ["health"] = {
        Player.Character.Humanoid,
        function(v)
            local judgepowerSizes = {0.1, 0.15, 0.25, 0.5, .65, .80, 1}
            local healthjudgeStrings = {
                {"very low"},
                {"kind of low"},
                {"pretty low"},
                {"okay"},
                {"pretty good"},
                {"amazing"},
                {"perfect"}
            }
            return {
                
                    awareness.judge.thing(v, Player.Character.Humanoid.MaxHealth, judgepowerSizes, healthjudgeStrings)
            }
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 0)
        end,
      {writetime=-1000 ,throttle=200,Combat=true,Chatting=false}  
    },
    ["exhausted"] = {
        Player.PlayerGui.ragdoll.variables.ragdolltrigger,
        function(v)
            local dialoguetbl
          --  if v == true and Player.PlayerGui.Dcounter.Value>95 then
         
if v == true and Player.PlayerGui.Dcounter.Value>95 then
  dialoguetbl = {
    "I'm so exhausted.",
    "I'm so deeply tired...",
    "I feel like I can barely move...",
    "I don't feel so well.",
    "I need some rest.",
    "I have no stamina left.",
    "I'm drained.",
    "I'm worn out."
  }
elseif v == false and states.Damaged==false then
  dialoguetbl = {
    "I feel better",
    "I feel alright.",
    "I feel refreshed.",
    "If I use up all my stamina I might collapse.",
    "I should be more careful not to get so exhausted.",
    "I feel a little better now.",
    "Ugh, I should be more aware to not do that.",
  }
elseif states.Damaged==false then
states.Damaged=true
task.delay(6,function() states.Damaged=false end)
  dialoguetbl = {
    "Ow! That really hurt!",
    "Ouch, I fell down.",
    "Youch!",
    "That stings!",
    "That hurts!",
    "That's painful!",
    "That's sore!",
    "That's awful!"
  }
end

            return dialoguetbl
        end,
        function(v, s, t)
            return rollstr(s)
        end,
      {writetime=-1000 ,throttle=24,Combat=true,Chatting=false}  
    },
    ["party member"] = {
        Player.PlayerGui.PartyGui.PartyS1.Subject,
        function(v)
            return {v.Name}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 1, t)
        end,
      {writetime=-1000 ,throttle=60,Combat=false,Chatting=true}  
    },
    ["pet"] = {
        Player.PlayerGui.PartyGui.PartyS2.Subject,
        function(v)
            return {v.Name}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "My", 1, t)
        end,
      {writetime=-1000 ,throttle=60,Combat=false,Chatting=false}  
    },
    ["flying"] = {
        Player.PlayerGui.Bars.Flying,
        function(v)
            local active = {"currently am", "am"}
            local deactive = {"am not"}
            return booldialogue(v, active, deactive)
        end,
        function(v, s, t)
            return dialoguetype(v, s, "I", 2)
        end,
      {writetime=-1000 ,throttle=600,Combat=false,Chatting=false}  
    },
    ["enemy"] = {
        Player.PlayerGui.EnemyGui.Target,
        function(v)
            return {v.Parent.Humanoid.DisplayName}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "A", 1,   Player.PlayerGui.EnemyGui.Target.Value.Parent) ..
                " " .. awareness.get.NPCDescription(Player.PlayerGui.EnemyGui.Target.Parent)
        end,
      {writetime=-1000 ,throttle=60,Combat=true,Chatting=false}  
    },
    ["stranger"] = {
        Player.PlayerGui.TalkGui.NPCTarget,
        function(v)
            return {v.Parent.Humanoid.DisplayName}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "I sense that a", 1, t.Parent)
         --..awareness.get.NPCDescription(Player.PlayerGui.EnemyGui.Target.Parent)
        end,
      {writetime=-1000 ,throttle=240,Combat=false,Chatting=false}  
    },
    ["some loot, a"] = {
        Player.PlayerGui.LootGui2.Loot,
        function(v)
            return {v.Parent.Parent.Name}
        end,
        function(v, s, t)
            return dialoguetype(v, s, "Their is", 1, t.Parent.Parent)
        end,
      {writetime=-1000 ,throttle=120,Combat=false,Chatting=false}  
    },
   ["time of day"] = {
         Player.PlayerGui.Musicplayer.DayNight,
        function(v)
            if v==1 then
          return {"night time","evening"}      
            elseif v==0 then
            return {"day time","dawn"}   
            end
            return 
        end,
        function(v, s, t)
            local part1={"It is currently ","Right now it's ","It appears to be "}
            --local part3= {}     
            return part1[cm.mathrandom(1,#part1)]..s[cm.mathrandom(1,#s)].."."
        end,
      {writetime=-1000 ,throttle=120,Combat=false,Chatting=false}  
    },

  --  [" some other loot, a"] = {
      --  Player.PlayerGui.LootGui.Loot,
      --  function(v)
      --      return {v.Parent.Parent.Name}
      --  end,
      --  function(v, s, t)
      --      return dialoguetype(v, s, "A", 1, t.Parent.Parent)
      --  end,
      --{writetime=-1000 ,throttle=120,Combat=false,Chatting=false}  
  --  }
}

local spoken={}

local maxmemory=100--only cache 100 entries to not repeat
local function repeathandler(str)
-- Returns true if the string was spoken recently; otherwise remembers it.
for _,previous in ipairs(spoken) do
if previous==str then
print("got repetitive output "..str)
return true
end
end
table.insert(spoken,str)
if #spoken>maxmemory then
table.remove(spoken,1)
end
return false
end
local cachetable={}
--end)
local function interpret(i)
if timer.Value>tracks[i][4].writetime+tracks[i][4].throttle and (tracks[i][4].Combat==states.Combat or states.Combat==false) and (tracks[i][4].Chatting==states.Chatting or states.Chatting==false) then
print(i)
local item
if tracks[i][1]:IsA("Humanoid") and #item>0 then
item=tracks[i][2](tracks[i][1].Health)--Get response
else 
item=tracks[i][2](tracks[i][1].Value)--get response
end
print(item)
if item~=nil or i=="awareness" then 
local datapoint
if tracks[i][1]:IsA("Humanoid") then
datapoint=tracks[i][3](i,item,tracks[i][1].Health)
else 
datapoint=tracks[i][3](i,item,tracks[i][1].Value)
end

print(datapoint)
if repeathandler(datapoint)==false and datapoint~=nil then
print("passed")
chatthrottle=true

task.delay(math.min(7,string.len(datapoint)/4),function() chatthrottle=false end)
tracks[i][4].writetime=timer.Value
table.insert(cachetable,datapoint)
end
end
end
end


local function awarenessloop()
task.spawn(function()
while Player.Character.Humanoid.Health>0 do 

task.wait(3)
if #cachetable~=0 then 
local cachecopy=cachetable
cachetable={}
print(cachecopy)
cm.displayString(TextLabel, table.concat(cachecopy," "),Player.Character,cm.getRGB("red"),false)
end
end
end)
end


local function process(i)
if  tracks[i][1]:IsA("Humanoid") or tracks[i][1].Value~=nil and tracks[i][1].Value~="" then
if i=="enemys" then
states.Combat=true
elseif i=="stranger" then
states.Chatting=true
end
interpret(i)
else 
if i=="enemys" then
states.Combat=false
elseif i=="stranger" then
states.Chatting=false
end
end
end

for i,v in tracks do
--print(tracks[i][1])
print(i)
if tracks[i][1]:IsA("Humanoid") then
tracks[i][1]:GetPropertyChangedSignal("Health"):Connect(function()
process(i)
end)
else
tracks[i][1]:GetPropertyChangedSignal("Value"):Connect(function()
process(i)
end)
end
end
 awarenessloop()
end

The basic premise in this example is to create a connection to a changing value, define a label, and create output.
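
Distilled down, that premise looks roughly like this (a hedged sketch with made-up labels and reward values, assuming the same Player object as in the snippet above):

-- Connect to a changing value, map the change to a label, and produce an output/reward.
local humanoid = Player.Character.Humanoid
local lastHealth = humanoid.Health

humanoid:GetPropertyChangedSignal("Health"):Connect(function()
    local newHealth = humanoid.Health
    local label, reward
    if newHealth < lastHealth then
        label, reward = "took damage", -1   -- losing health implies a negative reward
    else
        label, reward = "recovered", 1      -- regaining health implies a positive reward
    end
    lastHealth = newHealth
    print(label, reward)                    -- feed the reward into your learning model here
end)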

So, for example, losing health may imply a negative reward, while finding loot, attacking enemies, and winning may imply good behavior; you could then potentially generate your own training data from your inputs. The less complex the pattern, the less training you need. For a sense of scale, OpenAI reported: “In total, the current version of OpenAI Five has consumed 800 petaflop/s-days and experienced about 45,000 years of Dota self-play over 10 realtime months (up from about 10,000 years over 1.5 realtime months as of The International), for an average of 250 years of simulated experience per day.”
So defining as much logic ahead of time as possible could make your AI train faster.
Also, the reward function is key.
Choosing the right network and activation functions takes testing and intuition, or perhaps research.
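
Since the reward function is the key piece, a minimal sketch of the event-to-reward mapping described above might look like this (the event names and values are made up and would need tuning):

-- Hypothetical reward table; tune these values for your own game.
local rewardTable = {
    lostHealth = -0.5,
    defeatedEnemy = 1.0,
    foundLoot = 0.3,
    wonMatch = 2.0,
    idle = -0.01, -- small penalty to discourage doing nothing
}

local function getReward(eventName)
    return rewardTable[eventName] or 0
end

print(getReward("defeatedEnemy")) --> 1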

1 Like

I’m so lost and confused. :cry: :cry: :sob: :sob: :sob:.

1 Like

You need a way to feed inputs into your AI. The inputs can be anything, such as its movement function. Then define what constitutes a reward, like defeating an opponent. You should reference his sword-fighting template: in that example, the AIs are attempting to learn how to move from directional inputs and are rewarded for defeating their opponent.
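
As a hedged sketch (not DataPredict’s actual API; the model calls and helper functions are placeholders you would swap for the ones shown in the sword-fighting template), the overall loop is:

local ACTIONS = {"MoveForward", "MoveBackward", "TurnLeft", "TurnRight", "Attack"}

-- model, npc and the three helper functions are assumed to be supplied by your own code.
local function trainingLoop(model, npc, getEnvironmentVector, performAction, computeReward)
    while npc.Humanoid.Health > 0 do
        -- 1. Build the input vector from the NPC's sensors (raycasts, etc.).
        local environmentVector = getEnvironmentVector(npc)

        -- 2. Ask the model which action to take (placeholder call).
        local chosenActionIndex = model:predictAction(environmentVector)

        -- 3. Apply the action in the world.
        performAction(npc, ACTIONS[chosenActionIndex])

        -- 4. Measure what happened and turn it into a reward (e.g. +1 for defeating the opponent).
        local reward = computeReward(npc)

        -- 5. Feed the reward back so the model can learn (placeholder call).
        model:updateWithReward(environmentVector, chosenActionIndex, reward)

        task.wait(0.1)
    end
end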

1 Like

Alright, I think I have a basic understanding of the Q-learning he’s talking about.

So I’d make a “buildModel” function that builds the neural network model, and a “run” function that takes inputs and determines a reward/punishment to apply reinforcement learning. What I’m confused about is the inputs specifically. I’m making a hide-and-seek AI (so two NPCs, each with their own model). What I know is that I’m going to feed in the raycasts that the AI performs. I’m not sure what else to input, though. I don’t want to input the distance between the NPCs, since it’s supposed to be hide and seek and the AI should act like a human and “see” with its raycasts rather than just knowing where the other AI is, and I’m unsure how to do that.

2 Likes

Update Release Version 1.6:

  • New Models: REINFORCE and Dueling Q Learning.

  • New NeuralNetwork function: evolveLayerSize().

  • Fixed ActorCritic, AdvantageActorCritic and AsynchronousAdvantageActorCritic.

4 Likes

I’d like to share this example of how to create a ModuleScript, pass in a table of datasets, and write them to the module automatically. This would be useful for storing your vectors in a ModuleScript for fast execution and inference. I hope everyone understands the concept! The main thing to note is that this can only be done from the Studio command bar. It may need slight modifications for your use case.

function cm.GenerateCustomData(tablenames,modulename,databases)--generates a personality ModuleScript; can only be run from the command bar.
--ordered tablenames, ordered databases: databases[i] belongs to tablenames[i]
local c=Instance.new("ModuleScript")
local syntstr='personalities={person=function()'
for i,v in tablenames do
    syntstr=syntstr..v..'={"'..table.concat(databases[i],'","')..'"} '
end
syntstr=syntstr..' return {'..table.concat(tablenames,',')..'} end,} return personalities'
c.Source=syntstr
c.Name=modulename
c.Parent=game.ReplicatedStorage.GlobalSpells.ChatbotAlgorithm.Commands.SyntheticDatasets
--c.Source='personalities={person=function()Greetings={"'..table.concat(Greetings,'","')..'"},inquiry={"'..table.concat(inquiry,'","')..'"},'..'IDK={"'..table.concat(IDK,'","')..'"},'..'Database={"'..table.concat(Database,'","')..'"},'
--..'wisdom={"'..table.concat(wisdom,'","')..'"} return Greetings, inquiry, IDK, Database, wisdom end,} return personalities'    c.Parent=game.ReplicatedStorage.GlobalSpells.ChatbotAlgorithm.Personalities 
end

On second look, that function is designed for storing strings! Here is a modified function for storing tables of numbers.

function cm.StoreModelModule(tablenames,modulename,databases,directory)--stores tables of numbers in a ModuleScript; can only be run from the command bar.
--ordered tablenames, ordered databases: databases[i] belongs to tablenames[i]
local c=Instance.new("ModuleScript")
local syntstr='Model={Get=function()'
for i,v in tablenames do
    syntstr=syntstr..v..'={'..table.concat(databases[i],',')..'} '
end
syntstr=syntstr..' return {'..table.concat(tablenames,',')..'} end,} return Model'
c.Source=syntstr
c.Name=modulename
c.Parent=directory
end
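
For example, running it from the command bar might look like this (the table names, numbers and destination are made up; adjust the require path to wherever the `cm` module lives in your game):

local cm = require(game.ReplicatedStorage.GlobalSpells.ChatbotAlgorithm.CommandModule) -- hypothetical path
cm.StoreModelModule(
    {"Weights1", "Weights2"},               -- names of the tables written into the generated source
    "PretrainedModel",                      -- name of the new ModuleScript
    {{0.12, -0.3, 0.5}, {0.7, 0.01, -0.2}}, -- one table of numbers per name above
    game.ReplicatedStorage                  -- where to parent the generated ModuleScript
)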

I know you are on a break at the moment, but when you are back again, how can I do something like that with DataPredict?

ScriptOn on X: “Here’s my first 6 hours into neural nets! Cars use a 2-layer system (Input → Layer1 → Layer2 → Output) and genetic scoring to allow pattern recognition. Each generation uses the best-scoring cars from the last generation and modifies them slightly. #RobloxDev https://t.co/o1JiHoQFzj” / X (twitter.com)

It seems like it takes in multiple distances from raycasts. However, it doesn’t seem to be using reinforcement learning, but rather mutating the model parameters. That being said, I’ll cover both cases since you didn’t specify whether you want reinforcement learning or not.

Reinforcement Learning
  • Use distance from raycasts as inputs.

  • Give a reward based on how far the car stays from the edges, and also based on how far it travels from the starting point (a small sketch follows this list).

  • That’s pretty much it…
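
A small sketch of that reward idea (plain Luau; startPosition and the edge distance are placeholders you would compute from your own raycasts):

-- Hypothetical reward: encourage travelling far from the start while staying away from the edges.
local function computeCarReward(carRootPart, startPosition, distanceToNearestEdge)
    local travelled = (carRootPart.Position - startPosition).Magnitude
    local reward = travelled * 0.01             -- reward distance covered
    if distanceToNearestEdge < 5 then
        reward -= 1                             -- penalise getting too close to an edge
    end
    return reward
end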

Mutation
  • Use distance from raycasts as inputs.

  • Create some sort of array that tracks how far each of those cars goes from the starting point.

  • Start the car race competition.

  • Choose the car that travels the farthest.

  • Get the best car’s model parameters using the getModelParameters() function and make multiple copies of them.

  • For each copy of the model parameters (except the first one), add some small random values to the matrices inside it. The neural network’s model parameters are stored as a table of matrices (see the sketch after this list).

  • Do the race competition again and repeat the steps above until you get satisfactory results.
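
A hedged sketch of that mutation step, assuming (as described above) that getModelParameters() returns a table of matrices, i.e. a table of 2D arrays:

-- Adds small random noise to every weight in a copy of the model parameters.
local function mutateModelParameters(modelParameters, mutationStrength)
    local mutatedParameters = {}
    for matrixIndex, matrix in ipairs(modelParameters) do
        local mutatedMatrix = {}
        for rowIndex, row in ipairs(matrix) do
            local mutatedRow = {}
            for columnIndex, value in ipairs(row) do
                mutatedRow[columnIndex] = value + (math.random() - 0.5) * 2 * mutationStrength
            end
            mutatedMatrix[rowIndex] = mutatedRow
        end
        mutatedParameters[matrixIndex] = mutatedMatrix
    end
    return mutatedParameters
end

-- Usage (placeholder): keep the best car's parameters as-is, mutate the remaining copies.
-- local copyParameters = mutateModelParameters(bestCarModel:getModelParameters(), 0.05)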

2 Likes

Doesn’t this use a genetic algorithm or something like that?

2 Likes

Update Release 1.7:

  • Added setNumberOfIterationsPerCostCalculation() function to BaseModel. This allows you to skip over unnecessary cost calculation and go straight to the model parameters update.

  • Fixed the NeuralNetwork train() function, where it did not auto-generate model parameters when none were stored.

Important Notice:

  • This library will eventually get less frequent updates because I will be focusing on another deep learning library, DataPredict Neural.

  • DataPredict Neural will be able to perform similarly to TensorFlow and PyTorch, where there will be convolutional layers, sequence-to-sequence models and so on.

Not Important:

  • Look at those view numbers! It has reached over 11K views! Thanks for the support you all have given me during this library’s development!
5 Likes

Update Release 1.8:

  • Added ProximalPolicyOptimization and VanillaPolicyGradient reinforcement learning models.

  • Dropout for neural networks!

6 Likes

Update Release 1.9:

  • Added KNearestNeighbours model.

  • Added LearningRateStepDecay and LearningRateTimeDecay optimizers.

  • Added ModelDatasetCreator and ConfusionMatrixCreator.

4 Likes

I’ve been looking at your module for the past few days and I haven’t found a way to get a Vector3 value, or even any value aside from the ones I provide it using :setClassesList().

That’s not enough description. What exactly are you trying to do, and what model did you use?

Hey guys! I just uploaded a new video for the “Self-Learning Sword Fighting AIs”. The AIs’ actions are less erratic now and show some consistency. I also uploaded the updated code!

2 Likes