Anti-Toxicism Script


This project has been discontinued. Please stop using the script.

Old post

About

This script blocks most forms of toxic comments. It works by using sentiment analysis, so it won’t be perfect and it can be bypassed fairly easily, but it should be good enough for some people. Also, I only released this because nothing like it was openly available on the DevForum.

This is only my second post, so please don’t go too hard on me.


Why an API?

So that I can update the wordset in real time and have it update for every server at once.
You can see how it works by sending a POST request to https://profess1onal.club/sentiment with the key: Message.
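The API’s internals aren’t published in this post, but a wordlist-based sentiment score could look roughly like this. This is a hypothetical sketch: the word weights, the wordset entries, and the function name are all made up.

```lua
-- Hypothetical sketch of wordlist-based sentiment scoring.
-- The real API's wordset and weights are not public; these are invented.
local wordset = {
	noob = -1,
	trash = -1,
	ez = -1,
	gg = 1,
	nice = 1,
}

local function scoreMessage(message)
	local total = 0
	-- Walk each alphabetic word in the lowercased message and sum its weight.
	for word in string.gmatch(string.lower(message), "%a+") do
		total = total + (wordset[word] or 0)
	end
	return total
end
```

A negative total would then map to the negative `score` field that the client script checks for.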


You have an obvious yet unfiltered word that you haven’t added in!

You can reply to this thread and I will add it.


Thanks to:

Synitx for helping to fix an exploit and Jeremy for telling me about the exploit.


Script:

Anti-Toxicity Filter - Roblox

Alternative:

local HttpService = game:GetService("HttpService")

-- Intercepts legacy chat messages server-side and scores them via the API.
game:GetService("ReplicatedStorage"):WaitForChild("DefaultChatSystemChatEvents").SayMessageRequest.OnServerEvent:Connect(function(plr, message, channel)
	local msg = tostring(message)
	-- pcall guards against HttpService itself erroring (e.g. HTTP requests disabled)
	local ok, request = pcall(HttpService.RequestAsync, HttpService, {
		Url = "https://profess1onal.club/sentiment",
		Method = "POST",
		Headers = {
			["Content-Type"] = "application/json"
		},
		Body = HttpService:JSONEncode({ Message = msg })
	})

	if ok and request.Success then
		local score = tonumber(HttpService:JSONDecode(request.Body).score)
		if score and score < 0 then -- negative sentiment means toxic
			plr:Kick("\nKicked for toxic behaviour.")
		end
	else
		warn("The toxicity filter's server is down.")
	end
end)

Testing endpoint (updates will be put here before release to the actual API):
https://profess1onal.club/sentiment-testing/

You can try the script without putting it in your game here:
Anti-Toxicity test - Roblox

5 Likes

I wouldn’t really use this, since it kicks players.

I’d rather automatically edit the message using ChatService (specifically RegisterFilterMessageFunction). I think that’s a better alternative to kicking them.

Like this:

-- Paste this example into a ModuleScript within the ChatModules folder.

local functionId = "editText"

local function editToxicMessage(speaker, messageObject, channelName)
	if messageObject.Message == "L" then -- If the player tries to say "L"
		messageObject.Message = "gg" -- Auto-edit it to "gg" before it's visible in the chat
	end
end

local function Run(ChatService)
	ChatService:RegisterFilterMessageFunction(functionId, editToxicMessage)
end

return Run
25 Likes

Fair enough, but I want to keep this server-sided (and I’ll probably change it to a mute).
And the API is public, so you can adapt it to your own needs.
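A mute could be layered on the same API response using the legacy Lua chat system. A rough sketch, assuming the legacy ChatService is in use; the 60-second duration and the function id are my own assumptions:

```lua
-- ModuleScript inside the ChatModules folder (legacy Lua chat system).
-- Mutes the speaker instead of kicking; the duration is an assumption.
local HttpService = game:GetService("HttpService")
local MUTE_SECONDS = 60

local function Run(ChatService)
	ChatService:RegisterFilterMessageFunction("muteToxicSpeakers", function(speakerName, messageObject, channelName)
		local ok, response = pcall(HttpService.RequestAsync, HttpService, {
			Url = "https://profess1onal.club/sentiment",
			Method = "POST",
			Headers = { ["Content-Type"] = "application/json" },
			Body = HttpService:JSONEncode({ Message = messageObject.Message }),
		})
		if ok and response.Success then
			local score = tonumber(HttpService:JSONDecode(response.Body).score)
			if score and score < 0 then
				local channel = ChatService:GetChannel(channelName)
				if channel then
					channel:MuteSpeaker(speakerName, "Muted for toxic behaviour.", MUTE_SECONDS)
				end
			end
		end
	end)
end

return Run
```

Note that yielding on an HTTP request inside a filter function delays the chat, which matches the slowdown mentioned later in this thread.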

I mean, his method is server-sided too.

3 Likes

True, but you’ll lose a lot more players when you kick them for being toxic.

Don’t get me wrong. Being toxic is bad. But kicking someone for being toxic is just as bad as having an anti-cheat that kicks people for exploiting, if not worse, since there are more toxic players on Roblox than exploiters.

Plus, people don’t have to exploit to be toxic. And I doubt most people who play the game would go out of their way to get around an anti-exploit or use Dark Dex just to say “ez” or “L”.

2 Likes

I have just changed the script to this, but the chat can be delayed if it’s handling a lot of requests, so chat messages will be sent more slowly.

I like the idea, though it could be executed just a little bit better.

I recommend ChatService for these things.

2 Likes

Btw, it’s not recommended for gameplay scripts yet, because it checks every single message.

1 Like

Looking through my visitor logs… yeah…

think you can guess from the image

2 Likes

I think it would be better if the toxic word was changed to something like [Content Deleted] instead of just “removed”, which looks like the player actually said “removed”.

Also, this is easily bypassable.
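Swapping the flagged message for a placeholder, as suggested above, is a one-line change inside a legacy-chat filter function. A sketch, where `isToxic` is a hypothetical stand-in for whatever check you use (e.g. the API call from the main script):

```lua
-- Inside a RegisterFilterMessageFunction callback: replace a flagged
-- message with a placeholder. isToxic is hypothetical; wire it to your check.
local function censorToxicMessage(speakerName, messageObject, channelName)
	if isToxic(messageObject.Message) then
		messageObject.Message = "[Content Deleted]"
	end
end
```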

And how is this easily bypassable? This is server-sided.

He meant bypassing it like this: instead of

“L”

use

“LL”

like that, not bypassing its security.
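One cheap mitigation for that particular trick is collapsing repeated characters before scoring. A sketch; note it also folds legitimate doubled letters (“good” becomes “god”), so the collapsed text should only be used for checking, not for display:

```lua
-- Collapse runs of the same letter so "LL" or "LLLL" score like "L".
-- %1 in the pattern is a backreference to the captured letter.
local function collapseRepeats(message)
	return (string.gsub(string.lower(message), "(%a)%1+", "%1"))
end

print(collapseRepeats("LLLL")) --> l
```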

5 Likes

As said in the post:

It is good enough to cut out some forms of toxicity, but nothing is perfect.

It seems to only target words and not phrases,
for example
“get rekt”
and
“get a life”
and
“you are garbage”
and
“you are annoying”

and then it incorrectly flags things like
“shut the door”
and
“take out the trash”

Otherwise it works pretty well.
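Phrase-level checks like the ones above could be bolted on with plain substring matching before the per-word pass. A sketch; the phrase list here is invented:

```lua
-- Hypothetical phrase blacklist checked with plain substring search.
local phrases = { "get rekt", "get a life", "you are garbage" }

local function containsToxicPhrase(message)
	local lowered = string.lower(message)
	for _, phrase in ipairs(phrases) do
		-- The fourth argument (true) makes find treat the phrase as plain
		-- text rather than a Lua pattern.
		if string.find(lowered, phrase, 1, true) then
			return true
		end
	end
	return false
end
```

Substring matching has its own false positives when a phrase appears inside an innocent sentence, so it narrows rather than solves the problem.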

2 Likes

I tested it with the message “I hate it” and it said it’s toxic ;-;

I’ve just removed the word “hate” from the wordset, thanks for reporting!

Based on the comments, I have come to the conclusion that something like this is almost impossible to make. Why? People can easily bypass this, so there’s no point in having it in your game. You also can’t filter out ‘toxic’ sentences, you can only filter words, which is a terrible idea unless you can detect the words being used within a sentence.

Kicking an exploiter is one of the most effective ways to stop exploits. Trying to block their exploits while keeping them in will easily allow them to figure out how to bypass the system. All exploiters who inject code into their clients should be punished, period. Meanwhile, people who are toxic are completely fine, and I’m worried that this will only exacerbate the problem as people bypass it and are encouraged to be even more toxic.

If it’s a false positive, then no, it would make you lose players instead. That’s why most anti-cheats don’t kick you but instead teleport you back to your old position, etc.
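That rollback idea looks roughly like this on Roblox. A sketch only; the 30-stud threshold and the half-second check rate are arbitrary assumptions:

```lua
-- Rubber-band a player back to their last accepted position instead of
-- kicking on a movement flag. Threshold and check rate are assumptions.
local Players = game:GetService("Players")
local MAX_STUDS_PER_CHECK = 30

Players.PlayerAdded:Connect(function(player)
	player.CharacterAdded:Connect(function(character)
		local root = character:WaitForChild("HumanoidRootPart")
		local lastGood = root.CFrame
		while character.Parent do
			task.wait(0.5)
			if (root.Position - lastGood.Position).Magnitude > MAX_STUDS_PER_CHECK then
				root.CFrame = lastGood -- snap back; no kick on a possible false positive
			else
				lastGood = root.CFrame -- accept the new position
			end
		end
	end)
end)
```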

1 Like