How to add custom censoring for inappropriate language [Tutorial]

In this tutorial, I will teach you how to add your own custom censoring for inappropriate language on top of Roblox's built-in chat filter.

The issue is that furries are able to get harassed because Roblox's built-in filter doesn't block the messages used to target them.

 _____     _             _       _ 
|_   _|   | |           (_)     | |
  | |_   _| |_ ___  _ __ _  __ _| |
  | | | | | __/ _ \| '__| |/ _` | |
  | | |_| | || (_) | |  | | (_| | |
  \_/\__,_|\__\___/|_|  |_|\__,_|_|
                                   
                                   

Add custom censoring for messages meant to harass furries
[1] Create a LocalScript
[2] Make sure the LocalScript is inside of **StarterCharacterScripts**
[3] Copy and paste this code into the LocalScript:
-- Words that, when paired with a mention of furries, flag a message as hateful
local meanWords = {"suck", "wrong", "unacceptable", "hate", "awful", "horrible", "terrible", "disastrous", "disgusting", "filthy", "unkempt", "annoying", "bad", "barbaric", "dirty", "dishonorable", "freak", "freaks", "freakish", "gross", "putrid", "odd", "trash", "weird", "areanimals", "dogs", "dumb"}
-- Phrases that are always censored, regardless of context ("no dad" becomes "nodad" once spaces are stripped)
local extras = {"fatherless", "nodad"}
-- Strips spaces and lowercases a message so "F u r r y" and "furry" compare equal
local function removeSpaces(msg)
	local result = ""
	for i,v in pairs(string.split(msg, " ")) do
		result = result .. v
	end
	return string.lower(result)
end


-- Finds the matching message in the legacy chat GUI and overwrites it with "#" characters
local function censorMessage(msg)
	local gui = game.Players.LocalPlayer.PlayerGui:WaitForChild("Chat").Frame.ChatChannelParentFrame.Frame_MessageLogDisplay.Scroller

	local noSpacesMsg = removeSpaces(msg)

	for i,v in pairs(gui:GetChildren()) do
		if v:IsA("Frame") then
			local noSpacesGui = removeSpaces(v.TextLabel.Text)
			if noSpacesMsg == noSpacesGui then
				local newMessage = "                               ";
				for i=string.len(msg),1,-1 do
					newMessage = newMessage .. "#"
				end
				v.TextLabel.Text = newMessage
			end
		end
	end
end


-- Listens to a player's chat and censors messages that pair a furry mention with a mean word
local function checkForHate(plr)

	local lastMessageContainedFurry = false

	plr.Chatted:Connect(function(msg)

		local containsFurry = false
		local flagged = false
		local extraFlagged = false

		-- From here on, work with the space-stripped, lowercased version of the message
		local msg = removeSpaces(msg)

		for i,v in pairs(meanWords) do
			if #string.split(msg, v) > 1 then
				flagged = true
			end
		end

		for i,v in pairs(extras) do
			if #string.split(msg,v) > 1 then
				extraFlagged = true
			end
		end

		if lastMessageContainedFurry and not flagged then
			lastMessageContainedFurry = false
		end

		-- Check for any variant of "furry"/"furries" in the stripped message
		if #string.split(msg, "furry") > 1 or #string.split(msg, "furries") > 1 or #string.split(msg, "furrys") > 1 or #string.split(msg, "furry's") > 1 then
			containsFurry = true
			lastMessageContainedFurry = true
		end

		if containsFurry and flagged then
			task.wait(.5)
			censorMessage(msg)
		elseif lastMessageContainedFurry and flagged then
			task.wait(.5)
			censorMessage(msg)
		elseif extraFlagged then
			task.wait(.5)
			censorMessage(msg)
		end	
	end)
end

-- Hook up players already in the game, plus anyone who joins later
for i,v in pairs(game.Players:GetPlayers()) do
	checkForHate(v)
end

game.Players.PlayerAdded:Connect(function(plr)
	checkForHate(plr)
end)

Before:

image

After:

image


Wait, this is also making a lot of words that aren't offensive get censored (awful is an example), but good tutorial.


It only censors it if it mentions furries and says awful. For example, if they say pineapples are awful it won't censor, but if they say furries are, it will. Saying furries are awful is categorizing an entire group of people as awful just because of their interests and hobbies, which is unethical.

Here is an example of it:
image


And what if someone says

furries
really
suck

?

Assuming I read the code correctly, lastMessageContainedFurry should reset to false when “really” is processed.

if lastMessageContainedFurry and not flagged then
	lastMessageContainedFurry = false
end

You should have a log of the last 5-7 messages and check if your flagged words and “furry”/“furries” are matched, but even so, I can see many false positives.
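A rough sketch of that log, reusing removeSpaces, meanWords, and censorMessage from the tutorial script (HISTORY_SIZE and makeHistoryChecker are made-up names, and this is just a sketch, not a drop-in fix):

-- Keep a short rolling log of each player's stripped messages and flag the combination
local HISTORY_SIZE = 5

local function makeHistoryChecker(plr)
	local history = {}

	plr.Chatted:Connect(function(msg)
		-- Remember the stripped message and drop the oldest entry once the log is full
		table.insert(history, removeSpaces(msg))
		if #history > HISTORY_SIZE then
			table.remove(history, 1)
		end

		-- Flag when any recent message mentions furries and any recent message has a mean word
		local mentionsFurry, containsMeanWord = false, false
		for _, stored in pairs(history) do
			if string.find(stored, "furr", 1, true) then
				mentionsFurry = true
			end
			for _, word in pairs(meanWords) do
				if string.find(stored, word, 1, true) then
					containsMeanWord = true
				end
			end
		end

		if mentionsFurry and containsMeanWord then
			censorMessage(msg)
		end
	end)
end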


The system is not foolproof, but it should hold up against most attempts. If you want, you can make it so that when someone sends a message, it doesn't get censored for them. If you don't censor the message for them, it will be sufficiently harder for them to bypass the chat, as they can't use trial and error to see what works.
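For example, a rough sketch of skipping the censor on the sender's own client (watchPlayer is a made-up name; checkForHate is the function from the tutorial script, and this would replace the two hookups at the bottom of it):

-- Only watch other players' chat; the sender keeps seeing their own messages uncensored,
-- so they can't easily trial-and-error the filter
local function watchPlayer(plr)
	if plr ~= game.Players.LocalPlayer then
		checkForHate(plr)
	end
end

for _, plr in pairs(game.Players:GetPlayers()) do
	watchPlayer(plr)
end

game.Players.PlayerAdded:Connect(watchPlayer)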

Why is “wrong” in the list of mean words?

“Furries are wrong”
“Being a furry is wrong”
“If you’re a furry, you’re wrong.”

Those are some examples of why it was added.

Do you have any ideas on the issue regarding false positives?

You could make an array to loop through and check words other than furry or furries, such as pronouns like ‘You’, ‘he’, ‘she’, plus races, nationalities, religions, etc.

This can prevent toxic messages like: “You suck at this noob” ¯\_(ツ)_/¯

Or it already does that.
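In case it doesn't, something like this could work, reusing removeSpaces and meanWords from the tutorial script (targetWords and isToxic are made-up names, and this is only a sketch):

-- Words that indicate the insult is aimed at someone or some group
local targetWords = {"you", "he", "she", "they", "furry", "furries"}

-- Returns true when a stripped message contains both a target word and a mean word
local function isToxic(strippedMsg)
	local hasTarget, hasMeanWord = false, false
	for _, target in pairs(targetWords) do
		if string.find(strippedMsg, target, 1, true) then
			hasTarget = true
		end
	end
	for _, word in pairs(meanWords) do
		if string.find(strippedMsg, word, 1, true) then
			hasMeanWord = true
		end
	end
	return hasTarget and hasMeanWord
end

isToxic(removeSpaces("You suck at this noob")) would then return true, though short words like "he" will substring-match a lot, so it still needs tuning.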


The reason I did furries specifically is that, for some reason, Roblox doesn't have preset filters for them, even though I believe it should. I think the filter as it currently stands is good for handling insults aimed at a single individual, but not at an entire group of people targeted just because of their interests. For example, I don't think it's fair to censor the words “You are wrong”, but I do think it's fair to censor “Furries are wrong.”

Words like ‘You’, ‘he’, and ‘she’ refer to single individuals. Roblox already censors saying offensive things based on race, nationality, and religion.

A quick solution could be concatenating the last few messages (if furry is found in one of them) and running it through an AI tone detector. If the results come out positive, flag the message.
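A very rough sketch of the buffering part, written as a server Script because HttpService requests can't be made from a LocalScript; getToneScore is only a stub standing in for whatever detector gets wired up, and the 0-to-1 scale and threshold are assumptions:

local Players = game:GetService("Players")

local BUFFER_SIZE = 4
local FLAG_THRESHOLD = 0.7 -- assumed scale: 0 = neutral, 1 = derogatory
local recentMessages = {}

local function getToneScore(text)
	-- Stub: replace with a call to a real tone-detection API
	return 0
end

local function watch(plr)
	recentMessages[plr] = {}
	plr.Chatted:Connect(function(msg)
		local buffer = recentMessages[plr]
		table.insert(buffer, msg)
		if #buffer > BUFFER_SIZE then
			table.remove(buffer, 1)
		end

		-- Only run the detector when the recent messages mention furries
		local combined = string.lower(table.concat(buffer, " "))
		if string.find(combined, "furr", 1, true) and getToneScore(combined) >= FLAG_THRESHOLD then
			print(plr.Name .. " sent a message flagged for tone") -- handle the flag however your game needs
		end
	end)
end

for _, plr in pairs(Players:GetPlayers()) do
	watch(plr)
end
Players.PlayerAdded:Connect(watch)
Players.PlayerRemoving:Connect(function(plr)
	recentMessages[plr] = nil
end)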

Wouldn't it cause a ton of false positives? I am basically “creating my own AI tone detector” right now. I think I have to make the script more engaged with human language. For example, instead of just checking if the message contains a hateful thing, I could check if the hateful thing is followed by furry or if furry is followed by the hateful thing.

Therefore:
“I hate furries” would get flagged
instead of
“Furries are cool but I hate pineapple.”
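A rough sketch of that word-order check, reusing meanWords from the tutorial script (isTargetedAtFurries and window are made-up names; it works on the raw message rather than the space-stripped one):

-- Returns true when a furry word and a mean word appear within `window` words of each other
local function isTargetedAtFurries(msg, window)
	window = window or 3
	local words = string.split(string.lower(msg), " ")

	local furryPositions, meanPositions = {}, {}
	for position, word in pairs(words) do
		if string.find(word, "furr", 1, true) then
			table.insert(furryPositions, position)
		end
		for _, meanWord in pairs(meanWords) do
			if string.find(word, meanWord, 1, true) then
				table.insert(meanPositions, position)
			end
		end
	end

	-- Only flag when a furry word and a mean word sit close together
	for _, furryPos in pairs(furryPositions) do
		for _, meanPos in pairs(meanPositions) do
			if math.abs(furryPos - meanPos) <= window then
				return true
			end
		end
	end
	return false
end

isTargetedAtFurries("I hate furries") would flag, while "Furries are cool but I hate pineapple" would not, since "furries" and "hate" end up more than three words apart.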

The title of this topic seems irrelevant. Why?

Also, this tutorial isn’t a tutorial if it only has 3 steps. This is more of a community resource.


You are if-checking, which is either a yes or no, and that can never outperform a tone detector. An AI tone detector is a machine learning model that can take any text and return a value from 0 to 1 (or multiple values in that range) based on how derogatory or overall “mean” the given text is.

https://www.google.com/search?q=ai+tone+detector

Make it so it supports languages other than English. For example, you filtered “wrong”, but if someone knows Arabic, they can say خطاء, which means “wrong” in Arabic.
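Since the script only does substring checks on the stripped message, supporting another language is mostly a matter of extending the word lists. A small sketch using the word from this post (Luau string comparisons are byte-based, so UTF-8 words work as plain substrings, and string.lower leaves them untouched):

-- Add non-English equivalents to the existing lists
local arabicWords = {"خطاء"} -- "wrong" in Arabic
for _, word in pairs(arabicWords) do
	table.insert(meanWords, word)
end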

I’m pretty sure we have a constitutional right to have those opinions.

It's true that in America you have freedom of speech; however, it's fair to censor offensive content. You are saying that, because you have freedom of speech, I am not allowed to stop you from treating others disrespectfully, when in reality the developer is allowed to moderate your behavior as they please.

It’s not fair to censor offensive content in America, because that counts as freedom of speech.

Censoring what you say in a video game is not a violation of your constitutional rights. I don't know why you care so much about calling some random furry fatherless in game, but it's not a violation of your constitutional rights for me to censor that.

For example, freedom of religion, assembly, press, petition, and speech are the First Amendment rights you have. The amendment only restricts the government, which means private businesses can censor any content they please.


Source: The First Amendment, Censorship, and Private Companies: What Does “Free Speech” Really Mean? - Carnegie Library of Pittsburgh

Roblox is a private business, and therefore censorship is allowed.

This is because my script adds text such as “fatherless” and “no dad” to Roblox's built-in chat censor, which doesn't include them originally, and you are protesting against my script. It's fair to assume the only reason you would protest against me adding those words to Roblox's chat censor is that you seek to use that language and do not want me censoring it.