We hear your feedback, and we want to provide a clearer explanation of how, when, and why code can be flagged as a safety concern, as well as what to expect during the review process.
This process was put in place to identify and prevent malicious activity on the platform and is intended to stop such activity without disrupting legitimate developers.
We’re looking to flag content that’s dangerous or harmful to our community, not to find swear words in scripts that would never be seen by players.
Code Privacy Concerns
One of the biggest concerns we observed was around code privacy and protecting personal keys.
Our automated review system looks for malicious behavior in code. In the rare event that the system flags code as a potential safety concern, a very small, specially trained team goes in-game as players to check it out.
In certain cases, parts of the game’s code may be manually reviewed by that team, who will examine exactly what the flagged code does. We have strict rules determining when developer code can be viewed, and this is done only in the context of platform safety concerns.
Most folks reading this will never have their code flagged by the system.
While we are not looking to punish anyone for using profanity in code that is never seen by players, we do take seriously content that, when shared, is dangerous or harmful to others in our community, such as making threats or posting someone else’s personally identifiable information without permission. We generally do not consider Team Create sessions to be shared content, since these sessions are private; however, if someone with access reports offensive content, we will investigate the author. Please refer to the Roblox Community Rules for more information on what is not allowed when content can be viewed by others.
We hope this better explains the policy, and we appreciate having this open dialogue with you. We’ll continue to follow up on developer feedback.