Feedback on "Advanced AI roblox troll detection"

Hi, I would like some feedback on my system to detect chat trolls in a furry/LGBT game I’m releasing. This system uses advanced AI to detect the intent of a statement. This is NOT just using string.match.

You can view it HERE - YouTube (I'm talking to someone, but their voice is cut out for privacy's sake)

The message instance is there for debugging… In the actual game it won't be shown to everyone like that.

(Accidentally deleted post earlier LOL)


It looks like it was made in 2012, but here's one thing I noticed in the video:

Are you using the old Message object? Whenever I see a word detected by your scripts, it's displayed with that deprecated feature.

The documentation states that you shouldn't use it in new work; you should use a TextLabel for this warning system instead.

And one last thing: if the warning is designed to inform everyone, you should make it report only once, or give the reporting a long cooldown, so players don't get flooded with these messages.
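The cooldown idea above amounts to a per-player debounce. A minimal, language-agnostic sketch (the actual game would use Luau; the name `should_report` and the 30-second value are illustrative, not from the original post):

```python
import time

REPORT_COOLDOWN = 30.0  # seconds between warnings per player (illustrative value)
_last_report = {}       # player id -> timestamp of that player's last warning

def should_report(player_id, now=None):
    """Return True only if this player hasn't triggered a warning recently."""
    now = time.monotonic() if now is None else now
    last = _last_report.get(player_id)
    if last is not None and now - last < REPORT_COOLDOWN:
        return False  # still inside the cooldown window; suppress the warning
    _last_report[player_id] = now
    return True
```

Each detection would call `should_report` first and skip the broadcast when it returns False, so repeat offenders only generate one visible warning per cooldown window.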

You said it doesn't just use string.match — what else are you using to create this system?

The power of Artificial Intelligence :slight_smile:

It sends the message to my (private) AI which is trained to detect trolls and hate speech!

It's just for the sake of debugging lol. In reality it will be there in secret, informing actual mods about the players.

The system you designed looks outdated. Every game I've seen uses auto-kick when players spam or type in caps lock.

I'm skeptical of the use of the term artificial intelligence here… surely it's some form of machine learning at most, if not just a dictionary of naughty words to tally up a score from. Whether you detected the bad text with AI or not, why not just hide the message instead of giving a warning? The bad message still goes to everyone playing. It's not too much additional work to write a custom chat with the ability to sink text you deem naughty before it gets passed to Roblox's filter.
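Sinking a flagged message before it's broadcast, as suggested above, is roughly this shape. This is a hypothetical sketch in Python rather than Luau, and `classify` is a stand-in for whatever detection backend is actually used (a model, a word list, etc.):

```python
def classify(text):
    """Placeholder detector; returns True if the text is flagged.
    (Hypothetical stand-in -- not the poster's actual AI backend.)"""
    return "badword" in text.lower()

def handle_chat(text):
    """Return the message to broadcast, or None to sink it entirely.

    Sinking happens before the text would be handed to the platform's
    own chat filter, so a flagged message never reaches other players."""
    if classify(text):
        return None  # drop the message instead of warning everyone about it
    return text
```

A custom chat loop would call `handle_chat` on every incoming message and only display the ones that come back non-None.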


Like I said, that message is only there for debugging. And no, it's true AI. In short, it analyses the text for hate speech and returns a positive or negative result. If it's negative, the player gets a strike. After 3 strikes the mods get notified, and after 10 strikes the player gets kicked.
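The strike thresholds described above (3 → notify mods, 10 → kick) reduce to a small tally. A sketch under the assumption that each negative classification adds exactly one strike (function and constant names are illustrative):

```python
MOD_ALERT_STRIKES = 3   # thresholds taken from the description above
KICK_STRIKES = 10

strikes = {}  # player id -> accumulated strike count

def record_strike(player_id):
    """Add one strike and return the moderation action to take, if any."""
    count = strikes.get(player_id, 0) + 1
    strikes[player_id] = count
    if count >= KICK_STRIKES:
        return "kick"
    if count >= MOD_ALERT_STRIKES:
        return "notify_mods"
    return None
```

Note that as written, mods are re-notified on every strike from 3 through 9; a real implementation would likely notify once per threshold crossing.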

Also, it loops the text back to the player so only they can see it.