Hi, I would like some feedback on my system to detect chat trolls in a furry/LGBT game I’m releasing. This system uses advanced AI to detect the intent of a statement. This is NOT just using string.match.
You can view it HERE (YouTube; I’m talking to someone, but their voice is cut out for privacy’s sake).
The message instance is there for debugging… in the actual game it won’t be shown to everyone.
And one last thing: you should make the report message appear only once, or give the reporting a long cooldown, so players won’t get spammed with these if it’s designed to inform everyone.
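For the cooldown, a minimal sketch in Luau, assuming the warning is broadcast from a server script; shouldBroadcastReport and the 60-second window are hypothetical, not taken from your video:

```lua
-- Minimal per-player cooldown sketch (all names hypothetical).
-- lastReport maps a UserId to the os.clock() time of the last
-- broadcast; repeats within REPORT_COOLDOWN seconds are suppressed.
local Players = game:GetService("Players")

local REPORT_COOLDOWN = 60 -- seconds; tune to taste
local lastReport = {} -- [userId] = os.clock() timestamp

local function shouldBroadcastReport(player)
	local now = os.clock()
	local last = lastReport[player.UserId]
	if last and now - last < REPORT_COOLDOWN then
		return false -- still cooling down, stay quiet
	end
	lastReport[player.UserId] = now
	return true
end

-- Drop the entry when the player leaves so the table doesn't grow.
Players.PlayerRemoving:Connect(function(player)
	lastReport[player.UserId] = nil
end)
```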
I’m skeptical of the use of the term artificial intelligence here… surely it’s some form of machine learning at most, if not just a dictionary of naughty words to tally a score from. Whether you’ve detected the bad text with AI or not, why not just hide the message instead of showing a warning? The bad message still goes to everyone playing. It’s not much additional work to write a custom chat that can sink text you deem naughty before it gets passed to Roblox’s filter.
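Something like this on the server, sketched with a hypothetical isNaughty stand-in for the detector and an assumed ChatEvent RemoteEvent; flagged text is dropped before it ever reaches TextService:FilterStringAsync:

```lua
-- Server-side sketch of a custom chat that sinks flagged text.
-- isNaughty is a stand-in for whatever detector you use, and
-- ChatEvent is an assumed RemoteEvent the client fires with raw
-- text. Only text that survives the check is handed to Roblox's
-- filter and then broadcast.
local Players = game:GetService("Players")
local TextService = game:GetService("TextService")
local ReplicatedStorage = game:GetService("ReplicatedStorage")

local chatEvent = ReplicatedStorage:WaitForChild("ChatEvent")

local function isNaughty(text)
	return false -- plug the detector in here
end

chatEvent.OnServerEvent:Connect(function(sender, text)
	if typeof(text) ~= "string" or isNaughty(text) then
		return -- sink it: the message is never broadcast
	end
	local ok, result = pcall(function()
		return TextService:FilterStringAsync(text, sender.UserId)
	end)
	if not ok then
		return -- filter call failed; never broadcast unfiltered text
	end
	for _, player in ipairs(Players:GetPlayers()) do
		local ok2, filtered = pcall(function()
			return result:GetChatForUserAsync(player.UserId)
		end)
		if ok2 then
			chatEvent:FireClient(player, sender.Name, filtered)
		end
	end
end)
```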
Like I said, that message is just for debugging. And no, it’s true AI. In short, it analyses the text for hate speech and just returns a positive or negative result. If it’s negative, the player gets a strike. After 3 strikes mods get notified, and after 10 strikes the player gets kicked.
Also, it loops the text back to the player, so only they can see it.
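The strike flow boils down to a counter per player; here is a rough sketch of it, with notifyMods and echoToSender as placeholders rather than the real implementation:

```lua
-- Rough sketch of the strike flow described above: each flagged
-- message adds a strike, mods are notified at 3 strikes, and the
-- player is kicked at 10. The flagged text is echoed back only
-- to the sender. notifyMods and echoToSender are placeholders.
local MOD_ALERT_STRIKES = 3
local KICK_STRIKES = 10

local strikes = {} -- [userId] = strike count

local function notifyMods(player, text) end -- placeholder
local function echoToSender(player, text) end -- placeholder: only the sender sees it

local function onFlaggedMessage(player, text)
	local count = (strikes[player.UserId] or 0) + 1
	strikes[player.UserId] = count

	echoToSender(player, text) -- loop the text back to the sender only

	if count >= KICK_STRIKES then
		player:Kick("Repeated hate speech")
	elseif count >= MOD_ALERT_STRIKES then
		notifyMods(player, text)
	end
end
```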