A Solution to Chat Message Exploits That Ban Developers

Introduction:

Many individuals on the platform (such as developers, traders, influencers, or regular players) have been falsely banned in the past because unethical developers abused the in-game chat systems to send inappropriate messages on their behalf. In some, if not all, cases this has caused economic and psychological damage to those individuals, because their hard work vanished overnight.

How code messages are being sent:

For the legacy chat, this could happen from the client by firing the SayMessageRequest remote, and from the server by calling the SayMessage function of a Speaker object obtained through the ChatService module. Additionally, because the legacy chat does not run under the CoreGui, messages could also be injected directly as text UI elements, bypassing all of the chat modules entirely.
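To make this concrete, here is a minimal sketch of both spoofing paths, assuming the legacy chat's default instance layout (DefaultChatSystemChatEvents in ReplicatedStorage, ChatServiceRunner in ServerScriptService) and the default "All" channel; part (1) would live in a LocalScript and part (2) in a server Script:

```lua
-- (1) Client path: a LocalScript shipped with the game fires the legacy
-- chat remote directly; the message appears as if the local player typed it.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local chatEvents = ReplicatedStorage:WaitForChild("DefaultChatSystemChatEvents")
chatEvents:WaitForChild("SayMessageRequest"):FireServer("spoofed text", "All")

-- (2) Server path: a server Script requires the legacy ChatService module
-- and makes the targeted player's Speaker "say" a message in the All channel.
local ServerScriptService = game:GetService("ServerScriptService")
local ChatService = require(
	ServerScriptService:WaitForChild("ChatServiceRunner"):WaitForChild("ChatService")
)

local function spoofAs(targetPlayer: Player, text: string)
	local speaker = ChatService:GetSpeaker(targetPlayer.Name)
	if speaker then
		speaker:SayMessage(text, "All") -- shows up as if targetPlayer sent it
	end
end
```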

For the new chat, as far as I know, this can only be done from the client through the SendAsync function of TextChannels (since the new chat UI runs under the CoreGui, it can't be tampered with directly).
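For reference, this is roughly what that looks like from a LocalScript, assuming the default "RBXGeneral" channel that exists when CreateDefaultTextChannels is enabled:

```lua
-- LocalScript: sending a message through the new TextChatService.
-- Assumes the default "RBXGeneral" channel (CreateDefaultTextChannels enabled).
local TextChatService = game:GetService("TextChatService")
local generalChannel = TextChatService:WaitForChild("TextChannels"):WaitForChild("RBXGeneral")

-- SendAsync is client-only and attributes the message to the local player,
-- which is exactly what a malicious LocalScript can abuse.
generalChannel:SendAsync("spoofed text")
```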

So what a malicious developer would do is spoof messages using the methods above to get important individuals banned.

Solution:

The solution to this problem is complicated, especially for the old chat, because it does not run under the CoreGui. Also, the functionality mentioned above is important to keep, because many developers like to build custom chat systems or other ways for players to communicate.

So what I thought of is a way of marking every message as being sent either by a real client or by a script (client or server; the elevated-access client scripts that exploiters use don't count as real clients).

For this, we have to separate chat systems into authentic and custom. An authentic system is a chat system that can only be accessed through an API, works inside the CoreGui, and has a chat filter that can't be modified by developers. A custom system is one like the legacy chat, which can be directly modified and tampered with, either by sending fake messages or by manipulating the filter.

Based on this classification, the scenarios are the following (a rough sketch of the decision logic follows below):

  1. An inappropriate message was sent by the client directly (through an authentic system) - punish said individual on the platform.
  2. An inappropriate message was sent by a script (through an authentic system) - give the player the same punishment they would get on the website, but localized to the current experience only.
  3. An inappropriate unfiltered message was sent by a script (through a custom system) - terminate the experience until the issue is fixed. The client is not punished, because this wouldn't happen under normal conditions (fake filters fall under this category, and reported messages should be re-filtered to check whether the filter matches the real one).
  4. An inappropriate filtered message was sent by a script (through a custom system) - same as case 2, a localized in-game ban.

An inappropriate message sent by the client through a custom system isn't a separate case, because we can't detect whether it came from a script or not, so it's assumed to be a script.

Unless the legacy chat system is updated to qualify as an authentic system, from now on it will be considered a custom system, and experiences that use it will only result in in-game bans.
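To make the classification easier to follow, here is a purely hypothetical sketch of the moderation-side decision logic, written as Luau only for readability; none of these names or functions exist, it is just the four scenarios above expressed as code:

```lua
-- Hypothetical moderation-side logic; nothing here is a real Roblox API.
-- systemType: "authentic" | "custom"
-- sender: "client" | "script"
-- wasFiltered: whether the reported message passed through the real chat filter
local function judgeReportedMessage(systemType, sender, wasFiltered)
	if systemType == "authentic" then
		if sender == "client" then
			return "platform_punishment"  -- case 1
		else
			return "experience_local_ban" -- case 2
		end
	else -- custom system: the sender can't be trusted, so assume a script
		if not wasFiltered then
			return "terminate_experience" -- case 3 (includes fake filters)
		else
			return "experience_local_ban" -- case 4
		end
	end
end
```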

Implementation of an in-game ban system:

For this to work, Roblox needs to implement a new ban system that is only accessible by them and not by game developers. The system's purpose would be to give players in-game bans for inappropriate chat messages reported through the report feature, and the only way to be unbanned would be to contact Roblox support directly with an unban request for a false ban. That way, developers won't be able to keep offensive players within their game (because Roblox systems will be the ones judging), and experiences that are made specifically to ban users will only cause localized damage within that experience.

This system should not add actual punishments to a user's account (like taking away the ability to DevEx); it should give punishments identical to those the user would get on the site for that message (a 1-day ban, 3-day ban, 7-day ban, etc.). The "account deleted" punishment can be expressed as a permanent ban plus a data deletion request (but only for that specific experience). Also, the system should be inactive during Studio testing (so testing is easier and malicious plugins can be avoided).
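As a rough illustration only (again, nothing like this exists today), a ban record in such a Roblox-operated system might carry information along these lines:

```lua
-- Hypothetical shape of a Roblox-issued, experience-local ban record.
-- Purely illustrative; no such API or data structure exists.
local exampleBan = {
	userId = 12345678,            -- the punished account
	universeId = 987654321,       -- the ban is scoped to this experience only
	reason = "Inappropriate chat message",
	tier = "3_day",               -- mirrors site tiers: 1_day, 3_day, 7_day, permanent
	expiresAt = os.time() + 3 * 24 * 60 * 60,
	deleteExperienceData = false, -- true only for the "account deleted" equivalent
	appliesInStudio = false,      -- system is inactive during Studio testing
}
```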

Edge Cases:

  1. The user to be banned is one of the experience's developers (who has the developer badge and access to Studio edit) - in this case, the user can't receive a punishment worse than a 7-day ban (such as a month-long or permanent ban), and the ban is not applied during Studio edit sessions. This only concerns in-game bans from their own game, not platform-wide bans.

Notifying the punished player:

If a player gets banned from an experience due to an offensive message, they should receive a message on the website, identical to a regular ban message (with the offending message, reason, etc.), with the difference that it references the experience in which they were banned and has an accept button to continue to the website. A new player menu should be introduced listing all the current in-game punishments on their account and the time left until each expires (or text saying the ban is permanent). When a user tries to join an experience from which they're banned, they should be kicked before even loading in, with a new default ban message (instead of "You have been kicked from the experience") that contains the reason, the offending message, and the time until the ban expires (or that it's permanent if it doesn't). Also, the play button could be disabled/grayed out on the website, mentioning that the user has been banned from the experience.

Opt-out:

Experiences can "opt out" of the system described above by eliminating everything that can trigger it: use an authentic chat system for 100% of play time, and never send messages on a player's behalf through scripts/code unless the message is guaranteed to be safe. Doing so also eliminates the main issue this post is trying to solve.
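For the script-message half of that requirement, the safe path with today's APIs is to run any script-generated text through TextService filtering on the server before anyone sees it. A minimal sketch, with the display/remote wiring left up to the experience:

```lua
-- Server Script: filter script-generated text before it is ever shown to players.
-- TextService:FilterStringAsync and GetChatForUserAsync are existing APIs; how the
-- filtered result is displayed (e.g. via a RemoteEvent to clients) is up to the game.
local TextService = game:GetService("TextService")

local function getFilteredMessage(text: string, fromUserId: number, toUserId: number)
	local ok, result = pcall(function()
		return TextService:FilterStringAsync(text, fromUserId)
	end)
	if not ok then
		return nil -- filtering failed; do not show the raw text
	end

	local ok2, filtered = pcall(function()
		return result:GetChatForUserAsync(toUserId)
	end)
	return ok2 and filtered or nil
end
```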

Lastly, everything related to chat reports should be moved to the server, so that an exploiter who can tamper with the CoreGui can't pretend that they received an inappropriate message from another player.


I think this would be a great system!

But for in-game bans:

Players may think that the developer of the experience banned them, and may go against the developer or their games in the future. When working with other developers, I've seen their games spam-botted with fake dislikes because of in-game bans and other punishments. Unless there is a system put in place to prevent this, it may backfire on developers. If this idea is implemented, it should be considered as an optional feature for experiences (especially for games that rely on the current chat system). That said, I would like to use a feature like this in my experience.

This can be avoided if the kick message isn't the default "You have been kicked from the experience: reason" and it clearly states that it's a ban from Roblox due to inappropriate chat messages, not from the game's developers.

This unfortunately isn’t an option because the entire point of the post is to safeguard accounts from malicious devs. Roblox will have to ensure backward compatibility with older games.

However, a way developers can “disable” the system is by using the new chat system and not using SendAsync in any of their scripts unless they know the message within it is 100% safe for said user.

Then it would make more sense to stick with banning the user altogether.

No, it wouldn't, because that completely ignores the point of the post: safeguarding accounts from bad actors.

Sorry, I misunderstood that. I just think that Roblox should have less lenient solutions to these exploits so the player isn’t encouraged to exploit in the future.

Didn’t ROBLOX fix this after the cross woods incident?
