Thank you so much! The first code example seems like it tries to mimic the player’s playstyle, which is interesting!
Hiya! This looks awesome and I can see great uses for this within the platform.
However, it could be improved quite a bit by including comments in your source code on GitHub. That way, it is much easier for people to follow your train of thought. Furthermore, I feel like the variable naming scheme is more verbose than it needs to be. As mentioned previously, comments can be used instead to provide details about what data a variable holds.
Eh. I don’t think comments are necessary, since the majority of game developers do not have machine learning knowledge. Because of that, I doubt they would be interested in looking into the code.
As for the naming scheme, I kind of disagree with that. It provides a clear and comprehensive description of what each variable is. Even if others agree with you, the first reason still applies.
Hello again! Thank you for the pseudo-code; it really helped! However, there is one small issue: the gameplay is a little too complex for plain condition checking; the function would end up more than 200 lines long! So I am wondering: how would I make the neural network explore strategies on its own?
Kind of difficult to tell without knowing the details. May I have some more context, please?
Well, I have a sword-fighting arena with two spawns: one spawns the neural network, the other spawns a hardcoded bot. I would like the neural network to try to find an optimal strategy for fighting the hardcoded bot.
Try the reinforcement learning code I provided, if you haven’t done so already. It would cut down quite a lot of code.
Yes, but it has the function:

local function evaluateRealActionNumber(predictedActionNumber)
	local realActionNumber = 0
	--[[
		Set realActionNumber here based on your own conditions.
		For example, if the NPC takes a hit while fighting, then move back.
		Another one: if the NPC doesn't move forward because something is blocking it, then it needs to jump.
	--]]
	return realActionNumber
end
With complex gameplay, that would take a lot of code. The kind of learning I’m looking for is one where the agent’s decisions in an environment depend on a reward function.
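For illustration, the kind of reward function I have in mind would look something like this (the state fields and numbers here are just made up):

-- Hypothetical reward function for the sword-fighting arena.
-- The agent is never told which action is correct; it only receives
-- a score for the outcome of whatever it tried.
local function getReward(previousState, currentState)
	local reward = 0
	-- Reward dealing damage; punish taking damage.
	reward = reward + (previousState.enemyHealth - currentState.enemyHealth)
	reward = reward - (previousState.ownHealth - currentState.ownHealth)
	-- Large terminal rewards for winning or losing the round.
	if currentState.enemyHealth <= 0 then reward = reward + 100 end
	if currentState.ownHealth <= 0 then reward = reward - 100 end
	-- Small step penalty so the agent doesn't stall forever.
	reward = reward - 0.1
	return reward
end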
I see. Unfortunately, I don’t think I have a good solution for that.
I might, however, implement a new neural network function to take care of this issue. Are you okay with waiting for a while so that I can implement new features?
It is completely fine by me! I don’t mind waiting for at least a month.
To be fair, your point stands, but I’d love to review the code sometime soon (as I’m currently studying ML).
Regarding the variables, it just feels odd looking at extremely verbose names repeated over and over, but I guess I’ll have to get used to that lmao.
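For example (these names are just hypothetical, not from the library):

-- Verbose style: the name itself documents the contents.
local numberOfReinforcementsPerCurrentEpisode = 0

-- Shorter style: a comment carries the detail instead.
local episodeSteps = 0 -- number of reinforcements in the current episode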
Heads up: I have added QLearningNeuralNetwork, but haven’t completed the documentation yet (it will take a while). You can have a look at the functions’ parameters to understand what it does.
It is available in the Beta version only.
Edit: The documentation for Q-Learning Neural Network is completed.
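For anyone wondering what Q-learning actually does, here is a minimal tabular sketch of the core update rule (this is just the concept, not the library’s internal code):

local learningRate = 0.1   -- alpha: how fast estimates move toward new targets
local discountFactor = 0.9 -- gamma: how much future rewards matter
local qTable = {}          -- qTable[state][action] = estimated long-term value

local function getQ(state, action)
	qTable[state] = qTable[state] or {}
	return qTable[state][action] or 0
end

-- Core Q-learning update: nudge Q(s, a) toward the received reward
-- plus the discounted value of the best action in the next state.
local function updateQ(state, action, reward, nextState, actionList)
	local bestNextQ = -math.huge
	for _, nextAction in ipairs(actionList) do
		bestNextQ = math.max(bestNextQ, getQ(nextState, nextAction))
	end
	local target = reward + discountFactor * bestNextQ
	qTable[state][action] = getQ(state, action) + learningRate * (target - getQ(state, action))
end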
I don’t plan on implementing this (I don’t have a pay-to-win game), but would it be possible to use this for the following:
- Find out when a player is most likely to be willing to buy a product based on where they are, what they just won, what goal they just completed, etc.
- Use the AI to find out if certain aspects of players (such as how often they chat, what system they are on, or how good they are at the game) could be used to do the same thing as above.
- Find out what aspects of the game make people happy / more willing to spend Robux, by doing something as simple as prompting them (measuring happiness), or purchase-prompting them / noting when they complete a purchase (measuring willingness to spend Robux), and storing data about where they are in the game and what they are doing at the time.
These are obviously… morally questionable things to use the AI for (although I could definitely see some front-page games using something like this), but in a purely theoretical sense, what is your opinion?
And less related to the previous three:
- Train a bot to model a certain player, or, in a sword-fighting game, learn from the top 100 players (determined by kill-to-death ratio or something like that) to create a good sword-fighting bot.
- (On when a player is most likely to buy) Yes. All you need to do is train the model and extract the model parameters. Then you can interpret, from those parameters, which factors lead to certain predictions (see the sketch after this list).
- (On chat frequency, system, and skill) Yes to all except for “what system they are on”. Same explanation as above.
- (On what makes players happy / willing to spend Robux) Yes. Same explanation as above.
- (On modeling players for a sword-fighting bot) Yes. Code has been provided in this post, though that is if you wish to use the “Release 1.0” version. The “Beta 1.12.0” version may have slight variations in the NeuralNetwork functions.
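As a rough illustration of what I mean by interpreting the parameters (the feature names and weights below are made up, assuming a simple linear or logistic model):

-- Hypothetical trained weights for a purchase-likelihood model.
local featureNames = {"justWonMatch", "completedGoal", "inLobby", "minutesPlayed"}
local modelParameters = {2.1, 1.4, -0.3, 0.05}

-- For a linear or logistic model, a large positive weight pushes the
-- prediction toward "will buy"; a negative weight pushes it away.
-- Sorting by magnitude surfaces the most influential factors.
local indices = {1, 2, 3, 4}
table.sort(indices, function(a, b)
	return math.abs(modelParameters[a]) > math.abs(modelParameters[b])
end)
for _, i in ipairs(indices) do
	print(featureNames[i], modelParameters[i])
end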
Also, regarding my opinion on your question: I’m very neutral about it. As long as they don’t use AI to harm people, I don’t really care.
Release Version 1.1 is here! We now have new neural networks with reinforcement learning capabilities:
- Q-Learning Neural Network.
- SARSA Neural Network (a modified version of the Q-Learning Neural Network; see the sketch after this list).
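Roughly, the practical difference between the two (a conceptual sketch, not the library’s internal code): Q-learning updates toward the best possible next action, while SARSA updates toward the action the agent actually took next.

-- Q-learning target (off-policy): assumes the best next action gets taken.
local function qLearningTarget(reward, discountFactor, nextQValues)
	return reward + discountFactor * math.max(table.unpack(nextQValues))
end

-- SARSA target (on-policy): uses the action the agent actually chose next.
local function sarsaTarget(reward, discountFactor, nextQValues, nextActionIndex)
	return reward + discountFactor * nextQValues[nextActionIndex]
end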
There are also some changes to BaseModel and the classification models.
Here’s a tutorial on how to build neural networks with reinforcement learning capabilities.
Links:
Source Code + Beginner’s Guide To Building Neural Networks
Edit:
It seems like I forgot to mention how to initialize the featureVector. It is something like:
local featureVector = {{1, 2, 3}}
The outer array holds the rows; each inner array is one row, whose entries are the columns.
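So a batch with two rows (two samples), each with three columns (three feature values), would be:

local featureVector = {
	{1, 2, 3},
	{4, 5, 6},
}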
Release Version 1.2 / Beta 1.15.0 is here!
- Q-Learning and SARSA neural networks now have experience replay (see the sketch after this list).
- Model Parameters Merger is now available.
- SupportVectorMachineOneVsAll and LogisticRegressionOneVsAll have been replaced with the more general OneVsAll under “Others”.
- predict() functions can also return the original outputs when set.
- predict() functions now return rows of matrices instead of a single one.
- Bug fixes.
- Two in the “Others” category got upgrades:
  - ModelChecking → ModelChecker
  - GradientDescentModes → GradientDescentModifiers
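For anyone unfamiliar with experience replay: instead of learning only from the latest step, the model stores past (state, action, reward, next state) transitions in a buffer and trains on random samples from it. A conceptual sketch (not the library’s internal code):

local replayBuffer = {}
local maxBufferSize = 10000
local batchSize = 32

-- Store one transition, evicting the oldest when the buffer is full.
local function addExperience(state, action, reward, nextState)
	if #replayBuffer >= maxBufferSize then
		table.remove(replayBuffer, 1)
	end
	table.insert(replayBuffer, {state = state, action = action, reward = reward, nextState = nextState})
end

-- Sample a random mini-batch; training on shuffled past experiences
-- breaks the correlation between consecutive steps.
local function sampleBatch()
	local batch = {}
	for i = 1, math.min(batchSize, #replayBuffer) do
		batch[i] = replayBuffer[math.random(#replayBuffer)]
	end
	return batch
end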
You should make a video showcasing an AI that learns how to work with a realistic ragdoll. It would gather some interest.
Eh, I might try that if I can allocate time for it. However, I’m not sure where I can get the realistic ragdoll stuff. Any ideas?
What a coincidence! I’m also trying to create sword-fighting AIs but have had no luck. One thing I can definitely suggest, though, is looking into the self-play algorithm. From the self-play videos I’ve watched, I learned that an AI that plays against an opponent that is too difficult, e.g. a hard-coded AI, will fail to learn successfully. The AI needs to play against an opponent of similar skill level in order to learn. And what better opponent than itself!
I would also suggest learning the basics of reinforcement learning if you haven’t already. Once you’ve got the basics down, choose an algorithm that fits your problem and try to implement it.
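A rough sketch of the self-play idea, in case it helps (the model accessor and match routine here are hypothetical placeholders for whatever your project provides):

-- Self-play sketch: the opponent is a periodically frozen copy of the learner,
-- so its skill level grows alongside the learner's instead of starting too hard.
local function runSelfPlay(learner, playOneMatch, totalEpisodes, snapshotInterval)
	local opponentParameters = learner:getModelParameters() -- assumed accessor
	for episode = 1, totalEpisodes do
		if episode % snapshotInterval == 0 then
			-- Refresh the frozen opponent with the learner's latest parameters.
			opponentParameters = learner:getModelParameters()
		end
		-- The learner trains during the match; the opponent's copy stays fixed.
		playOneMatch(learner, opponentParameters)
	end
end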