We’ve heard your feedback asking us to refine our content maturity and compliance policies, especially around moderation. Today, we’re providing an update on our moderation process to clarify how we review content maturity, check compliance requirements, and apply moderation actions. We want to reiterate that it’s very important to answer the Maturity & Compliance Questionnaire correctly and to keep it up to date with any changes to your experience, as we use these answers to provide accurate information to users, parents, and regulators. Our goal is to ensure that experiences on our platform carry accurate content maturity and compliance information so that users around the world can choose what type of content they want to interact with, and parents can choose what is appropriate for their children.
Regular Content Maturity and Compliance Moderation
We regularly review your experience’s Maturity & Compliance Questionnaire answers, in addition to abuse reports about inaccurate content maturity. If there is a discrepancy between your questionnaire answers and what our moderators find, your experience’s content maturity label may be removed, your experience will no longer be available to users under 13, and you will have to retake the questionnaire to receive a new content maturity label. When this happens, you will receive feedback via email or a message in your Roblox Messages inbox describing the violation the moderator found, to help you better understand our policies.
[New] Moderation Actions for Repeat Violations
If we find that your questionnaire answers are repeatedly incorrect, there will be further consequences for your experience or your account, depending on the severity of the violation. Repeatedly answering the questionnaire incorrectly can reduce the visibility of your experience on Roblox, and your account could be suspended. We understand that it may take a few tries to learn how to answer the questionnaire accurately, so more severe consequences will only be applied after repeated warnings and violations.
[New] Appeals for Content Maturity & Compliance Moderation Actions
If you believe any moderation action was applied to your experience by mistake, you can appeal the decision by following these steps:
After you confirm your contact information and device type:
1. Set the Type of help category to Moderation. A new dropdown menu will appear.
2. Set the Help Subcategory Type to I was wrongly moderated for other content I created.
3. In the input field, describe why the moderation action was a mistake. Make sure to mention ‘Content Maturity’ in your appeal.
4. Click the Continue button to submit your appeal.
These changes are now live. If your experience’s Maturity & Compliance Questionnaire answers are accurate and up to date, there is no action required. If you have any further questions or concerns, please share below.
Waiting for the "but Roblox won’t moderate X, Y, or Z" posts here.
This is actually a decent change; I’ve known the questionnaire team to give good feedback when a maturity label is incorrectly assigned. It’d also be nice to be able to explicitly set the age of an experience above what the content maturity label suggests.
If you don’t mind, I would greatly appreciate ways to control content maturity in-game.
Currently, if you want an experience to be available to a wide audience, you have to publish two of them:
A 13+ version and a 17+ version.
With a simple in-experience API, I could remove, replace, or tone down certain elements if the player in question is not 17+.
Some things in my experience need to stay 17+ to remain true to its vision, world-building, and so on.
But I’d like the option to "censor" things through Luau scripts so I can make the gameplay available to more players.
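A minimal sketch of what such an API could look like; note that AgePolicyService and its method are purely hypothetical names for illustration, since nothing like this exists in the engine today:

```lua
-- Hypothetical sketch only: "AgePolicyService" and IsPlayerSeventeenPlusAsync
-- do not exist in the engine today. This is what in-experience censoring
-- could look like if Roblox shipped such an API.
local Players = game:GetService("Players")
local AgePolicyService = game:GetService("AgePolicyService") -- hypothetical service

Players.PlayerAdded:Connect(function(player)
	local ok, isSeventeenPlus = pcall(function()
		return AgePolicyService:IsPlayerSeventeenPlusAsync(player) -- hypothetical method
	end)
	if ok and isSeventeenPlus then
		-- Serve the full 17+ content as-is.
	else
		-- Swap in toned-down assets: beer becomes apple juice, gore is muted, etc.
	end
end)
```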
Edit:
Also, on the whole "malicious users can misuse the information" concern: this is already somewhat possible, and I would assume Roblox has consequences for people who abuse the age system.
It’s already possible to know whether a user is 17+: only 17+ users can join age-restricted experiences, so you can award a badge (or write to a data store) from a 17+ place and read it elsewhere to know the user is age-verified.
But that workaround is ridiculous; an API would let us censor or allow specific content within the same experience without having to upload and maintain two versions.
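For reference, a rough sketch of that badge workaround using the real BadgeService API (the badge ID is a placeholder, and a separate 17+ place has to award the badge):

```lua
-- Sketch of the badge workaround described above, using the real BadgeService
-- API. A separate 17+ place awards the badge; since only 17+ users can join
-- that place, holding the badge implies the user is age-verified.
local BadgeService = game:GetService("BadgeService")
local Players = game:GetService("Players")

local SEVENTEEN_PLUS_BADGE_ID = 0 -- placeholder: ID of the badge your 17+ place awards

Players.PlayerAdded:Connect(function(player)
	local ok, hasBadge = pcall(function()
		return BadgeService:UserHasBadgeAsync(player.UserId, SEVENTEEN_PLUS_BADGE_ID)
	end)
	if ok and hasBadge then
		-- Player has previously joined the 17+ version; enable mature content.
	end
end)
```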
I would love this as well, but more so for the 9+ to 13+ ratings. Currently, things like dead bodies and blood effects will push your experience from 9+ to 13+. It would be really great if we could have an API to detect if a user is under 13 and limit the effects shown to them (in this case removing blood and dead bodies) while still allowing those users to be able to play your experience.
The problem with my game design is also that, for some things, blood and flesh are basically part of the gameplay itself.
Stuff like strategic dismemberment to disable certain enemy attacks, or using blood as a source of magic.
Beer I can easily replace with apple juice with just a few lines of code.
If the player isn’t 17+ I can just make the blood look more discolored and cartoonish.
But disabling blood is not always an option, especially with enemies that bleed acid.
Best I can do is tone it down so it’s more 13+ appropriate.
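To illustrate, "toning it down" can be as little as recoloring the existing effects. This sketch assumes the blood emitters are tagged "Blood" with CollectionService and that some isSeventeenPlus signal is available from whatever age API ends up existing:

```lua
-- Sketch: tone down blood for non-17+ players by recoloring tagged emitters.
-- Assumes emitters are tagged "Blood" via CollectionService, and that
-- isSeventeenPlus comes from whatever age signal is available.
-- Run from a LocalScript so only this player's view changes.
local CollectionService = game:GetService("CollectionService")

local function toneDownBlood(isSeventeenPlus: boolean)
	if isSeventeenPlus then
		return -- keep the realistic 17+ effects
	end
	for _, emitter in CollectionService:GetTagged("Blood") do
		if emitter:IsA("ParticleEmitter") then
			-- Discolored, cartoonish look instead of realistic red.
			emitter.Color = ColorSequence.new(Color3.fromRGB(70, 70, 80))
			emitter.Rate = emitter.Rate * 0.5 -- and less of it
		end
	end
end
```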
Maybe read up on the guidelines to try and find the most innocuous thing to put in your game to make it 13+, like putting a single poker chip on a table and saying “unplayable gambling content”. Should work but don’t take my word for it.
Hi there! We’re looking into tooling for devs to add minimum ages to their experience (in addition to the questionnaire). That way, you could receive a Minimal content maturity label but age-restrict to 13+, for example. Is this what you’re referring to?
This might actually be a good change that helps protect the platform a bit more than before, if it’s properly executed.
I still wish they’d just state the age ratings directly instead of these indirect terms, though (or show both the maturity label and the age rating at the same time).
Yes, adding a setting that lets us restrict the minimum age of our experiences (despite them being rated Minimal, for example) would work just as well!
I feel like it might be worthwhile to add a new subcategory specifically for reviewing content maturity violations. Is this only a temporary arrangement, or will a dedicated subcategory be added?
I removed all blood, weapons, and ragdolls from my game and made characters simply teleport to spawn if they moved during red light, green light.
They still revoked my maturity rating three times, even though I had correctly labeled it as All Ages, with no violence or anything.
I gave up, added it all back, and had to set it to [Maturity: Mild] Violence (Mild/Repeated).
If you’re going to start applying moderation to accounts and experiences, the least you can do is have the moderation team ACTUALLY play the game instead of clicking Deny on everything.
There were shooting games with repeated violence and blood and 20,000 concurrent players carrying an All Ages maturity rating with no repeated violence/blood listed, yet my game with around 100 players kept getting denied.
Like the others, I repeatedly get the wrong maturity rating from moderation. I’ve changed and even removed systems that would warrant a higher age rating, but I still receive the wrong rating.
I would delay this change until proper protocols and tools are available for developers to minimize false moderation.
I agree with the others: I highly, highly suggest an API to manage age-based content visibility so our experiences can be available to all users while the content each user sees stays appropriate for them.
Hi there, can you share more about what feedback you received? Every questionnaire rejection should be accompanied with feedback. Happy to discuss over DM if this is private information.
Mods do play through every experience they give feedback on.
If you see experiences where the maturity label is incorrect, please submit an abuse report, because that populates our moderation queue as well. Thank you!
FINALLY! Super good that you guys are taking steps to make the platform safer and to improve moderation. I do hope this is a start to a series of moderation improvements.
Seriously, this is a huge step in the right direction. Thank you for finally taking action! Let’s keep the ball rolling this year.
The content maturity label for social hangouts is a poor attempt at ensuring user safety. It doesn’t do what it promises and only stops more players from playing these experiences.
I maintain various dance experiences subject to a forced social hangout label, which continually hurts their retention and user growth. While I appreciate Roblox trying to ensure user safety, this is a half-baked way of doing it. I have successfully reported bad actors to Developer Relations before, and I work with multiple volunteer staff teams that manually review reports for the experiences I maintain. We come across many children who have access to voice chat and pass various PolicyService checks, likely because the average new user simply fakes their age at registration.
Instead of forcing a label that nukes user growth, please work more closely with the developers of these experiences. These experiences aren’t solely social hangouts; they’re also used to record short-form content for platforms such as TikTok and YouTube Shorts, which your official social media pages often use and endorse. You’re not advancing much user safety this way, just poorly executing another attempt at it.
Experiences such as R6 Dances, Emote Blox, TTD 3, and more are actively moderated by volunteer, well-managed staff teams that receive reports through in-game means and the respective community servers, using custom-built management tools. We handle safety issues much faster and more responsively than the built-in report tools Roblox offers.
If you’d like to ensure safety in these experiences, please work with the developers on a more fleshed-out PolicyService API so that users of all ages can enjoy social hangout experiences, even if limitations have to come into place. We already do a lot ourselves to mitigate extremely bad actors.
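For context, this is roughly what the existing PolicyService API gives us today: coarse, per-player flags rather than anything maturity-aware:

```lua
-- Today's real PolicyService surface: coarse per-player policy flags fetched
-- with GetPolicyInfoForPlayerAsync. A more fleshed-out API would presumably
-- extend this same per-player lookup.
local Players = game:GetService("Players")
local PolicyService = game:GetService("PolicyService")

Players.PlayerAdded:Connect(function(player)
	local ok, policy = pcall(function()
		return PolicyService:GetPolicyInfoForPlayerAsync(player)
	end)
	if ok then
		if not policy.IsContentSharingAllowed then
			-- Hide off-platform sharing features for this player.
		end
		if policy.ArePaidRandomItemsRestricted then
			-- Disable paid random-item (loot box) mechanics.
		end
	end
end)
```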