Detecting Exploiters Idea


With the recent ban wave (although it was reverted), it is clear that Roblox is still trying to deter the ridiculous number of exploiters on the platform, so here’s an idea for detecting them.

Most if not all top games have some sort of exploit detection system in place, and it’s unlikely that they detect falsely (or else they wouldn’t be popular, since a lot of potential players would be banned). When one of these top games detects an exploiter, it could report the player to a database through some kind of method (maybe a new Service?). If a player is reported by 40+% of the games they play, it can be concluded that the player is almost certainly an exploiter and should be logged for the next ban wave. Obviously, 40% is just an arbitrary threshold and can be adjusted. Only reports from games with a large number of players would be recorded, to prevent troll developers from falsely logging people.
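As a sketch of what the platform-side check might look like, in plain Lua. Everything here (the function name, the data shape, the threshold handling) is invented for illustration; no such service exists:

```lua
-- Hypothetical sketch of the aggregation logic. `reports` maps each
-- qualifying game the player has played to whether that game reported
-- them as an exploiter.
local THRESHOLD = 0.4 -- arbitrary, as noted above

local function shouldFlagForBanWave(reports)
    local played, reported = 0, 0
    for _, wasReported in pairs(reports) do
        played = played + 1
        if wasReported then
            reported = reported + 1
        end
    end
    if played == 0 then
        return false
    end
    return reported / played >= THRESHOLD
end

-- A player reported by 4 of the 10 popular games they play:
print(shouldFlagForBanWave({
    GameA = true, GameB = true, GameC = true, GameD = true,
    GameE = false, GameF = false, GameG = false, GameH = false,
    GameI = false, GameJ = false,
})) -- 4/10 = 0.4 >= 0.4, so this prints true
```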

Perks of this method:

  1. False detection is very unlikely especially if the threshold is calibrated well.
  2. Effective at stopping serial exploiters (since they exploit in a larger percentage of games that they play).
  3. Does not rely on detection of the exploit program directly which can often be difficult and is a constant battle.

Example: Bob has played 10 different popular games in the past month. Of these 10, he’s reported and banned by Phantom Forces, Adopt Me, Arsenal, and Jailbreak. It is safe to conclude that he is an exploiter and should be included in the next ban wave.


  1. Top games report exploiters that they detect to a database
  2. If a player is reported by enough games, they are IP/hardware banned on Roblox

Exploiters will, for the most part, stick to games they know and “like”. If an account gets banned from one of them, they’ll just use another account and carry on.

Hacky “detection” methods shouldn’t be relied upon 100% for simple game bans, let alone IP banning a user. It’s unreliable, and even one false positive is one too many. In addition, not all top games have incredibly good exploit detection, if any. This means that those games can, and will, falsely report users.

In addition to all that, this opens a large hole for abuse. Many “top game developers” all in a group could easily get any random user banned merely because they didn’t like them or similar. It could be used for blackmail and other malicious activities.

Overall, this is unreliable, unsafe and just really wouldn’t work out at all. I can see where you’re coming from, but it just wouldn’t work out the way everyone would want it to. Leave the detection up to Roblox and just make sure your events are secure.


There are many simple exploits that can easily be detected upon injection with 100% accuracy. I am sure top games have these implemented. Aside from these simple exploits, there are of course exploit detections that vary from game to game, and admittedly they’re not 100% accurate. But this is why there’s a threshold in place. If all bans were accurate, then a report from a single game would be all that it takes.

This is a very unlikely situation, and again, this is why we have a threshold. Additionally, even if this did happen, think about how many exploiters could be caught by this method. The guaranteed benefits FAR outweigh this very unlikely scenario.


I really don’t think this would happen. This would be the equivalent of saying the US government could agree to make taxes 90%, and because of this risk they shouldn’t be able to change our taxes. It could happen, but the chance of this ever happening is so unlikely that it’s not worth considering as an outcome at all.

To be honest, the system @PhoenixSigns suggested is a pretty solid one compared to the many, many exploit detection ideas that have been posted.


I find 2 major flaws in this:

  1. What if the number is too big or too small?
  2. What defines the game as one they “play”?

Say the threshold is set to 50%. What if a player only plays two games and gets falsely detected by one of them? Since they only play 2 games, being detected in 1 automatically meets the threshold.

You could say an easy solution is to add a minimum number of games or such, which is where I see the 2nd issue: what if an exploiter joins 200 random places to dilute their detection ratio?

That’s easily solved by setting a lower limit on the number of games played.

First of all, it’s easy to factor in play time. Second, this wears down exploiters’ patience and deters the use of exploits. No single solution can stop exploits completely but every obstacle contributes to hindering exploiters.
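A play-time filter of the kind described could be sketched in plain Lua. The cutoffs, names, and data shape below are arbitrary assumptions, not a real design:

```lua
-- Hypothetical eligibility filter: only count games the player has actually
-- "played" (enough session time), and require a minimum number of them
-- before any detection ratio is computed.
local MIN_PLAYTIME_HOURS = 2 -- arbitrary cutoff per game
local MIN_GAMES_PLAYED = 5   -- arbitrary minimum sample size

local function countsTowardRatio(playtimeHours)
    return playtimeHours >= MIN_PLAYTIME_HOURS
end

local function hasEnoughData(playtimes)
    local n = 0
    for _, hours in pairs(playtimes) do
        if countsTowardRatio(hours) then
            n = n + 1
        end
    end
    return n >= MIN_GAMES_PLAYED
end

-- Joining 200 random places for a minute each adds nothing to the
-- denominator, so it can't dilute the ratio:
print(hasEnoughData({ GameA = 0.02, GameB = 0.01, GameC = 5 })) -- prints false
```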


This seems very high-effort to implement. Why not just have Roblox engineers focus on patching the exploits or improving platform-wide detection instead? (they already do this, based on the last ban wave) No need to arbitrarily trust developers with this. Developers shouldn’t have to add complicated anti-exploit measures to their games, since exploiting is a platform-wide issue (all games use the Roblox engine).


Yeah, no.

A few things:

  1. There is no “perfect” anti-exploit detection. If there were, Roblox would roll it out to every game (and then it’d get mitigated by exploiters anyway). Most big games don’t have to rely on crude anti-exploit measures to keep their games enjoyable for everyone, and a lot of those that do have a poor success rate / too many false positives.

  2. Popular game ≠ Trustworthy game - we shouldn’t weight a ban report just by how many players a game has. A game’s popularity is unrelated to (and in some cases inversely proportional to) the credibility and integrity of its developers.

  3. “Perfect calibration” of these kinds of systems is impossible. Somewhat comparable systems like VAC (Valve Anti-Cheat) have clear standards and a reviewed process: only a few games have it enabled, and those that do must adhere strictly to the guidance put forth by Valve. Allowing game developers to decide the reporting threshold on a whim would lead to disaster, and Roblox dictating the threshold would make the idea pointless anyway.

  4. Hardware / IP bans are terrible methods of banning people. It’s the equivalent of nuking the block because of one bad apartment. People can change IPs, and other people later assigned that IP by their carrier-grade NAT or DHCP will be banned for no reason. The same goes for shared IPs, such as those at some colleges or apartment buildings.

Banning exploiters isn’t a sustainable or particularly effective way of getting rid of them. Roblox did the ban wave to send a message, not to actually make a dent into the number of exploiters. Your game should rely on industry best security practices as the only effective way of reducing exploiters.

Patching exploits is a constant cat and mouse game. This is a detection method that is effective without constant updates.

In theory, developers should just be able to add one extra line before banning the exploiter such as:
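(The service and method names below are purely hypothetical; no such API exists today.)

```lua
-- Hypothetical: report the player to the shared database, then ban as usual.
game:GetService("ExploitReportService"):ReportExploiter(player)
player:Kick("Exploiting detected")
```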


High effort on the platform side – not on the side of developers who already have “anti-exploits” in their games.

  1. As stated previously, this is not meant to be a standalone “perfect” solution that’s going to magically exterminate all exploiters. This is just a simple detection method meant to catch a percentage of exploiters.

  2. I agree that not every game can be trusted. This is why there’s a threshold so no one game controls whether or not a player is banned.

  3. Like I said in the example, if a player is banned from Phantom Forces, Adopt Me, Jailbreak, and Arsenal, do you really think the player is innocent? If you want to play it safe, make the threshold higher. It’ll detect fewer exploiters, but it’ll still catch some.

  4. Then ban with whatever method Roblox currently uses for its ban waves.

We don’t have access to many of the game industry’s tools and third-party anti-cheat software like BattlEye, making it almost impossible for us to secure our games to the level of game companies outside of Roblox.

I don’t think high effort should be an excuse to not implement anti-exploit measures. Exploiters are probably one of the biggest problems facing Roblox. They are way too common and completely ruin the game experience of other players.

Yep, that’s my point: Roblox should focus on preventing the exploit and adding global detection mechanisms on game servers/clients. That way this idea can be implemented without needing to trust developers with firing a signal first, and it’s a more platform-ready way of tackling exploits. No need for them to waste effort on this when the effectiveness is not proven.


BattlEye is not a best practice. It’s heavily criticised, which is why basically nobody uses it (plus its licensing is ridiculous).

Most games have nothing near the anti-injection, anti-tampering, or built-in abstraction that Roblox already provides, so you don’t have to worry about any of that. All you have to worry about is the final stage, at the application layer. The reason they’re so successful is because:

  1. They don’t care if a few exploiters jump higher than everyone else, since the overall damage is minor.
  2. They focus on robust, theoretically secure server-side solutions. If your game needs this to be “secure”, you’re approaching security wrong.

As @buildthomas has stated (and I think you might be misinterpreting what he’s saying) Roblox shouldn’t be wasting resources on methods that the industry has proven don’t work. Instead, they should focus on what does work - giving developers the tools to make their own choices, without arbitrary systems that share bans across games.

We shouldn’t be exposing bans to additional human error - and how exactly would these bans be appealed if they’re accidental? This just won’t work, and even if it worked as you intend it’d have no noticeable effect.

This is the same method that plenty of private module systems have used - shared ban tables. There is plenty of evidence that proves these systems just lead to exploiters creating new accounts, and legitimate users having little / no recourse.


Does Roblox even need to monitor individual users exploiting at all? I have seen it as my own responsibility for years. Not only because Roblox has, for years, entirely abandoned (or otherwise failed in) their endeavors to combat exploiting, but because it’s a challenge that any developer creating a public-facing server application must face.

Think about web development: when you create a web application, your users can absolutely modify everything about their client. You are forced to validate any requests or input your server receives from clients. This will never change.

And, ultimately, I don’t think that it will ever change on Roblox either. Roblox is notoriously unreliable when it comes to preventing exploiting. Even if they suddenly started doing so efficiently, I don’t think that any experienced developer would actually trust them to continue doing so consistently.

Exploiters are fairly limited when interacting with a server that has been programmed well. Roblox vulnerabilities aside (things that allow exploiters to crash servers or make replicated changes that are outside of the developer’s control), the only things they can do are alter their character’s position and send requests to the server. Character movement can never be validated by Roblox, as they don’t know what might happen in our games, but it can be validated by us as individual game developers. Requests sent to our servers are easy to validate, and it is good practice to do so in all applications, regardless of whether or not Roblox happens to sometimes provide an additional layer of security.
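As a minimal sketch of that kind of server-side request validation in a Roblox server Script (the remote name, the Coins attribute, and the price table are all made up for this example):

```lua
-- Server script: never trust client-reported values; re-derive or check
-- everything against server-owned state.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local buyItem = ReplicatedStorage:WaitForChild("BuyItem") -- a RemoteEvent

local PRICES = { Sword = 100, Shield = 250 } -- server-owned data

buyItem.OnServerEvent:Connect(function(player, itemName)
    -- Validate the request itself, not anything the client claims about it.
    local price = PRICES[itemName]
    if price == nil then
        return -- unknown item: ignore (or log) the request
    end
    local coins = player:GetAttribute("Coins") or 0
    if coins < price then
        return -- can't afford it; an honest client wouldn't send this
    end
    player:SetAttribute("Coins", coins - price)
    -- ...grant the item server-side...
end)
```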

TL;DR: Roblox game developers should secure their own games and not try to rely on Roblox to do it for them. You can do a better job of it than Roblox ever can, and the practices you’ll implement doing it are no different than what you would do to secure a server outside of Roblox.