Hi Creators,
Today, we’re thrilled to introduce the new Safety Analytics dashboard. For the first time, creators will have access to insights on the frequency of abuse reports generated within their experiences. This will help you understand how your experience compares to others, and measure the impact of changes that you make.
You can find the Safety Dashboard for each of your experiences in the Creator Dashboard under Safety > Overview. The dashboard provides a top-level view of user-submitted abuse reports within your experience.
Here’s a look at what’s new:
1. Understand Abuse Report Submitters per 1,000 Playtime Hours
This chart shows the rate of abuse reports in your experience, normalized per 1,000 hours of playtime. You can see this chart if your experience has had 1,000+ daily playtime hours in the past week.
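To make the normalization concrete: assuming the value is computed as unique report submitters divided by playtime hours and multiplied by 1,000, an experience with 15 unique submitters across 5,000 playtime hours in a day would show 15 ÷ 5,000 × 1,000 = 3.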
- How to read it: This metric lets you monitor the frequency of abuse reports related to your experience (it includes all reports submitted in-game as well as experience reports submitted from the experience details page). An increase in this number means more abuse reports are being generated in your experience, which can be an early indicator of growing toxicity. You may need to investigate further (e.g., did reports spike when you introduced a new feature?) and take action before it becomes a larger issue. The 90th percentile benchmark is provided as a point of comparison: if your experience is above this value, it is in the top 10% of experiences receiving the most abuse reports.
For example, a spike in abuse reports on February 2nd, after the launch of your new custom avatar editors, could indicate that users are misusing the feature or creating combinations that violate community standards. Potential solutions include rolling back the new functionality, increasing moderator support, or providing more proactive education on policies and community standards.
Please note, this metric is primarily for your information. Roblox always reviews each abuse report on its own merit. However, if the abuse in your experience becomes pervasive, we may ask you to make changes to improve safety within your experience.
2. Pinpoint negative behavior with the abuse report category breakdown
To help you take more targeted action, this chart breaks down abuse reports by category. The “Other” category groups all categories outside the top 9. Categories are selected by the reporting users and reflect their understanding of each category; our moderation teams confirm both the category and whether the content violates our policies before taking action on an abuse report.
- How to read it: This breakdown helps you pinpoint the general types of negative behavior occurring in your experience so you can take more targeted action. For example, a high percentage in the “Romance or sexual” category might prompt you to review your in-game avatar editor tools or in-game chat systems. Or, if you see a high percentage in the “Cheating” category for an Obby game, you might want to design mandatory checkpoints before granting rewards or add custom events to flag players with extreme capabilities (one approach is sketched below).
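As a concrete illustration of the custom-event idea, here is a minimal server-side sketch that logs an analytics event whenever a character moves implausibly fast. The event name, the 100 studs-per-second threshold, and the one-second check interval are illustrative values you would tune for your game, and you would likely want to exclude legitimate teleports (e.g., respawns) before flagging.

```lua
-- Server Script: flags players whose characters move implausibly fast by
-- logging a custom analytics event you can then chart in your analytics.
local Players = game:GetService("Players")
local AnalyticsService = game:GetService("AnalyticsService")

local MAX_PLAUSIBLE_SPEED = 100 -- studs per second; tune for your movement mechanics
local CHECK_INTERVAL = 1 -- seconds between position checks

local lastPositions = {}

task.spawn(function()
	while true do
		task.wait(CHECK_INTERVAL)
		for _, player in Players:GetPlayers() do
			local root = player.Character and player.Character:FindFirstChild("HumanoidRootPart")
			if root then
				local last = lastPositions[player]
				if last then
					local speed = (root.Position - last).Magnitude / CHECK_INTERVAL
					if speed > MAX_PLAUSIBLE_SPEED then
						-- Appears in your analytics as a custom event named "ExtremeSpeedDetected"
						AnalyticsService:LogCustomEvent(player, "ExtremeSpeedDetected", speed)
					end
				end
				lastPositions[player] = root.Position
			end
		end
	end
end)

Players.PlayerRemoving:Connect(function(player)
	lastPositions[player] = nil
end)
```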
3. Filter by channel for targeted insights
You can filter both the abuse report trend chart and the abuse report category chart by channel to isolate reports related to a specific part of your experience:
- Experience: Direct reports about the experience itself (e.g., inappropriate content).
- Chat: Reports related to in-experience text chat.
- Voice: Reports related to in-experience voice chat.
- Audio: Reports related to audio assets used in the experience.
- Avatar: Reports related to user avatars, clothing, or accessories.
When you apply a channel filter, the benchmark will also update to show you a comparison relevant to that channel.
4. Stay proactive with automated safety insights
To help you stay ahead of potential issues, the dashboard will automatically show an insight if your rate of abuse report submitters rises above the 90th percentile benchmark, either for your overall experience or for a specific channel.
5. New safety documentation for creator tools
We’ve compiled new Safety documentation with an overview of safety tools you can use to maintain a safe environment in your experience. Here are a few tools you can use to improve moderation:
Manage Player Behavior:
- Ban API: Permanently remove disruptive users and help detect their alternate accounts.
- Kick API: Temporarily remove abusive users from a server.
- IsVerified API: Limit certain features (like ranked play or trading) to identity-verified users to discourage bad behavior.
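For example, here is a minimal server-side sketch combining these three APIs. The ban duration, reason strings, and the choice to gate trading behind verification are illustrative; adjust them to your own moderation policy.

```lua
-- Server Script: small moderation helpers built on the Ban, Kick, and IsVerified APIs.
local Players = game:GetService("Players")

-- Permanently ban a user across the experience, including detected alt accounts.
local function banUser(userId: number, publicReason: string)
	local ok, err = pcall(function()
		Players:BanAsync({
			UserIds = { userId },
			Duration = -1, -- -1 means permanent
			DisplayReason = publicReason, -- shown to the banned user
			PrivateReason = "Banned by moderation helper", -- visible to you only
			ExcludeAltAccounts = false, -- false = also ban detected alternate accounts
			ApplyToUniverse = true, -- apply to every place in this experience
		})
	end)
	if not ok then
		warn("BanAsync failed:", err)
	end
end

-- Temporarily remove an abusive user from the current server.
local function kickUser(player: Player, reason: string)
	player:Kick(reason)
end

-- Gate a sensitive feature (here, trading) behind identity verification.
local function canUseTrading(player: Player): boolean
	return player:IsVerified()
end
```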
Filter All User Text:
- Use TextService to filter all non-chat player-generated text, such as signs or pet names, to block inappropriate content and personal information.
- Use TextChatService to deliver messages for communication features.
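As an example of the TextService flow, here is a minimal sketch that filters a player-submitted pet name before it is shown to everyone. The function name and the pet-name use case are illustrative.

```lua
-- Server Script: filters a player-submitted pet name before displaying it to others.
local TextService = game:GetService("TextService")

local function getFilteredPetName(player: Player, rawName: string): string?
	-- FilterStringAsync can throw (e.g., if filtering is temporarily unavailable),
	-- so wrap it in pcall and reject the text if the call fails.
	local ok, result = pcall(function()
		return TextService:FilterStringAsync(rawName, player.UserId)
	end)
	if not ok then
		return nil
	end
	-- Use the broadcast variant because the name is visible to all players.
	local ok2, filteredName = pcall(function()
		return result:GetNonChatStringForBroadcastAsync()
	end)
	if not ok2 then
		return nil
	end
	return filteredName
end
```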
Ensure Policy Compliance:
- Use PolicyService to adapt your experience based on a player’s region or platform policies. You can use this for things like advertisements, paid random items (loot boxes), and social media links.
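For instance, here is a minimal sketch that reads the relevant policy flags before enabling paid random items or social links. The fail-closed fallback and the field names on the returned table are our own conventions, not part of the API.

```lua
-- Server Script: reads a player's policy info and exposes two simple feature flags.
local PolicyService = game:GetService("PolicyService")

local function getFeatureFlags(player: Player)
	local ok, policy = pcall(function()
		return PolicyService:GetPolicyInfoForPlayerAsync(player)
	end)
	if not ok then
		-- Fail closed: treat the player as fully restricted if the lookup fails.
		return { allowPaidRandomItems = false, allowSocialLinks = false }
	end
	return {
		allowPaidRandomItems = not policy.ArePaidRandomItemsRestricted,
		allowSocialLinks = #policy.AllowedExternalLinkReferences > 0,
	}
end
```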
We are continuing to develop new analytics for violating scenes and a new moderation API to help you moderate UGC tools. Let us know if you have any questions in the comments below!