Clarification on code flagged for safety review

Original announcement

Hey developers,

Recent discussions have brought up questions regarding Roblox policies around reading developers’ code. We want to be transparent about when, how, and why Roblox would ever read your scripts.

Before addressing those, we want to assert the following:

  • Your code is your own.
  • Your code is subject to the same kind of review as other assets if it gets flagged for Roblox to review.

We have a system that automatically checks code for inappropriate content. If code gets flagged, the script then gets sent to a specially-trained team for manual review. If any of the following are found in the flagged code or assets, you can be subject to moderation against your game and account:

  • discriminatory language
  • personally-identifiable information
  • real-life threats

Note that this list is not exhaustive; other inappropriate content could get your code flagged. Make sure to review our Terms of Use and Roblox Community Rules to ensure that everything in your code aligns with our rules!

We are always striving to make our moderation efforts as accurate and non-invasive as possible. However, since we’re all human, mistakes may happen. Fortunately, in the many months that this system has been in place, there has only been one incident of mistaken action against a game and developer, and it was quickly resolved. We will strive to promptly address any mistaken action against developers, and as we roll out this program we will adjust our processes as needed to remedy and prevent such incidents.

Developers should think of scripts as being subject to the same basic rules as the Developer Forum regarding language and decorum.

Furthermore, adding additional safety for the sake of your players in the form of an extra chat filter will not get you punished. We applaud you for wanting to go the extra mile to ensure your players have a safe experience! Just make sure the text you’re filtering is not visible to players (i.e. sent to the client) and all should be fine.
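An extra filter pass like the one described above can be sketched with Roblox’s own TextService. This is only a sketch under assumptions: the function name `filterForBroadcast` is illustrative, and error handling is deliberately conservative.

```lua
-- Hedged sketch of an extra server-side filter pass using TextService.
-- `filterForBroadcast` is an illustrative name, not an official API.
local TextService = game:GetService("TextService")

local function filterForBroadcast(text, fromUserId)
	-- FilterStringAsync can throw, so wrap it in pcall and fail closed.
	local ok, result = pcall(function()
		return TextService:FilterStringAsync(text, fromUserId)
	end)
	if not ok then
		return "" -- on failure, show nothing rather than unfiltered text
	end
	return result:GetNonChatStringForBroadcastAsync()
end
```

The key point from the announcement stands either way: do the filtering on the server, and only send the already-filtered result to clients.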

Developer Relations


This topic was automatically opened after 29 minutes.

This update looks cool, but it also makes me uneasy. While it is VERY cool to see automation being used to protect users, I do not want people reading my code unless I’ve open-sourced it.

A few questions:

  1. Who are these “specially-trained” individuals? Why should I trust them?
  2. How does said “specially-trained” team handle obfuscated code? What if the code was randomly generated and just happens to contain inappropriate references?
  3. Is this done on ALL code? Or just code I open source (models, or with team create)?
  4. What are the punishments? Is it the same as normally breaking the rules?
  5. What about old games and disabled scripts? This system was not in place when the code was written, and a wave of game deletions isn’t fair if the person was unaware at the time of what could happen because of how they wrote their code.
  6. How do you identify “personal” information and “real-life” threats using automation? This seems pretty broad and sounds almost impossible.
  7. Would there be a place we could have our “extra chat filter” reviewed to make sure it complies with regulations properly?

Regardless, keep doing what you’re doing! I assume ALL updates are meant to improve the quality of the platform, and that has held true! But now I’m starting to question it.


This is great and all, but one of the things that should really be added to the list you moderate is bullying, harassment, and off-site links that lead to adult content or even Discord. It would also be great if Roblox used this kind of system to detect the exploit modules and scripts that exploiters upload and then use to break into games; that would help cut back on exploiting on ROBLOX.

Would this count comments, or everything inside the script?

What else are you guys starting to enforce?


Does this extend to licenses?
I know that several developers include license information at the top of their scripts, and sometimes include their full names in those licenses (see open-source projects on GitHub for examples).


It appears all “non-ROBLOX-friendly” content will get removed when found, not just the ones explicitly listed.

This would be very cool! I would prefer it to be a widget of some sort: “Referenced modules” or something that lists all dynamically required modules, or just all modules in general.


This makes me really uneasy.

Basically, Roblox has the right to see all of our code (open- or closed-source) so that they can moderate the platform? I get why they would moderate to keep the platform safe; however, it’s really concerning.

Hopefully this automated moderation doesn’t mess up when moderating code. An infamous example is YouTube’s moderation system, which is flawed and can demonetize a channel even when its content doesn’t violate the ToS.


Does this flagging system detect inappropriate content in concatenated strings?
(e.g. "F" .. "…")
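For context, the kind of concatenation being asked about looks like this in Lua; the split word below is invented purely for illustration:

```lua
-- A naive scan over string literals sees only the fragments,
-- not the joined word, which only exists at runtime.
local fragment1 = "bad"
local fragment2 = "word"
local joined = fragment1 .. fragment2 -- becomes "badword" at runtime
```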

Also, how does this work in Team Create places? Is the owner held responsible for other people’s actions in scripts?


I appreciate the transparency, but…

Why do you need to scrape developers’ game code? If it’s private, then it’s hurting no one other than the developer.

I’m also worried about offsite links being found in my code, because I sometimes use URLs in conjunction with HttpService to do various things, such as to pull text from Pastebin.
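The Pastebin pattern mentioned above typically looks like the sketch below. The URL is a placeholder, not a real paste, and HTTP requests must be enabled for the game for this to run at all.

```lua
-- Hedged sketch: pulling text from an external URL with HttpService.
-- The URL below is a placeholder, not a real endpoint.
local HttpService = game:GetService("HttpService")

local ok, body = pcall(function()
	return HttpService:GetAsync("https://pastebin.com/raw/XXXXXXXX")
end)
if ok then
	print(body)
else
	warn("HTTP request failed: " .. tostring(body))
end
```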

I suppose I need to add obfuscation to the publishing pipeline, just for the sake of not getting moderated randomly?


Whatever, I guess, if it’s been around for a while without many issues, though it is a bit concerning.


Does this apply to actual code only, or does it include comments? Unless Roblox has issues, no player will ever see the comments in code, so it seems weird to enforce this on them.


So if I print “My name is Yoshikage Kira,” then am I going to be under review? What is considered identifiable information?


IIRC Lua literals are reduced during the compilation step? I’m not entirely sure about the Luau VM that ROBLOX employs, but this is done in vanilla 5.1.

Notice how the dumped functions for a and b are equal, while c is not equal to either. The 1 + 1 in a appears to have been reduced to 2, which is why they’re equal.
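A minimal reproduction of that check in vanilla Lua 5.1 (not Luau) might look like this; giving both chunks the same chunk name keeps their debug info identical, so only the compiled bytecode can differ:

```lua
-- Vanilla Lua 5.1 folds constant arithmetic at compile time.
local a = loadstring("return 1 + 1", "chunk")
local b = loadstring("return 2", "chunk")
local c = loadstring("return 1 + 2", "chunk")

-- `1 + 1` is folded to the constant 2, so a and b compile to the
-- same bytecode; c folds to 3 and therefore differs.
print(string.dump(a) == string.dump(b))
print(string.dump(a) == string.dump(c))
```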


This is an important question though, will they read both my source and compiled code?


It’s nice to see Roblox taking the verrry long time to address this issue. I’d just like clarification on some parts.

I assume this is a script filter which works similarly to the chat filter? But what exactly is the system flagging here? Is it the final code or raw code?

Specifically how do they review the flagged code? Do they look at the context or just review it very literally?

In what ways? Script context is a very personal and general concept. What adjustment are you specifically referring to? In addition to that and like what others have mentioned, there’s no accurate way to identify personal data and real-life threats.

Anyone could consider this personal data:

local Data = {
    ["Name"] = "Mike",
    ["Age"] = 17,
    ["Gender"] = "Male",
    ["Address"] = "2008, Blox Street, Robloxia",
    ["Job"] = "Programmer",
    ["Income"] = "500,000",
    ["Marital"] = "Single",
    ["Language"] = "English",
    ["Nationality"] = "Robloxian",
}

But this isn’t the exact context. It could be used in a roleplay game as a character preset.

How about this, is this a real-life threat?

local StringForRoblox = "Hello, \n I saw you the other day entering your house. Your address is 2008, Blox Street, Robloxia. I'm going to ask you to send me Robux, if you don't do this within 24 hours, I will kidnap you and your family. Your timer starts now!"

if Client.Name == "Roblox" then
    Remote:Fire(Client, StringForRoblox)
end

What if I used this for roleplay purposes? Would the moderator believe me that this isn’t an actual threat?

I write my code the way I want, regardless of whether it looks like it’s against the rules, and I shouldn’t have to worry about a moderator thinking otherwise.

I personally have no problem with my code getting reviewed, but what about cloud scripts that are fetched with HttpService and then executed with loadstring()? Are those filtered as well?
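The pattern being asked about roughly looks like the sketch below. It only works server-side with ServerScriptService.LoadStringEnabled turned on, and the URL is a placeholder, not a real host:

```lua
-- Hedged sketch of fetching and executing a "cloud script".
-- Requires LoadStringEnabled; the URL is a placeholder.
local HttpService = game:GetService("HttpService")

local source = HttpService:GetAsync("https://example.com/remote-script.lua")
local fn, err = loadstring(source)
if fn then
	fn()
else
	warn("failed to compile remote source: " .. tostring(err))
end
```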


What about those scripts where, if you put a free model into your game, the person who made the model can get into your game and insert bad content? That’s been a problem for a long time.


This is stupid. Who is going to read my code? Some exploiter?
I don’t think Roblox should be moderating our code if we are keeping it private.


I would support this change, but I have sensitive keys and such inside my scripts. I don’t want anyone to have access to anything like that, regardless of whether they’re hired by Roblox or not. That opens so many security issues.

Additionally, code can just be obfuscated. There is nothing that can’t be determined from simply joining the game.


Does this also include local variables? For example:

local Badword = "Private Name"


It means that if any of your code contains explicit material, it is subject to moderation, as in action taken against your account: a warning, the asset getting deleted, or, in general, your account getting deleted.

I do not suggest uploading a script with tons of blacklisted words to test this!


From what can be inferred from the post, though, Roblox is reviewing the source code. That makes my question valid. Obfuscation is a similar method that would bypass this system.


I support this indeed; I don’t have anything personal to hide in my code, and you shouldn’t either. Although this could definitely cause trouble for those who used free models, as if they were responsible.