Developers are not equipped to deal with exploiters

Most exploits are done through a tool that is actually exploiting the Roblox client. I thought this was a given: once you have a tool that allows you to run arbitrary code on your client, it’s far easier to exploit the game. One client is used for millions of experiences. Successfully exploit the client that loads the experience and you can potentially start exploiting millions of experiences by extension.

Large games are just the most targeted because they’re large games, but if you have a tool that can exploit the client like that, you can do just about anything in any game on this platform. Hence why multiple developers have the same baseline issue. Variations are common because each game is different but, in reality, those variations are individual scripts being used in conjunction with one tool.

Here’s a cool example of what we can relate this to: it’s pretty similar to how jailbreaking works on iOS devices. One person finds an exploit on the device and successfully gets arbitrary code execution working, then pushes that to people who want it. Soon after, you have an entire ecosystem based on that one exploit. Once that exploit gets patched, most people abandon it. That’s why you don’t see jailbreaks as often anymore: it’s become extremely hard to find an exploit.

It’s pretty much a 1:1 with how this place works. One person makes a tool (we all know the name but, I won’t say it) that allows arbitrary code execution on the client. Other people who know how to script start using that tool to develop an exploit. The exploit finds its way to the game’s player base as a script that runs on that tool. Kids go wild.

If you trace it all the way back to how the tool works, it’s using a clever vulnerability in Roblox’s client and not your actual game. It’s literally, in a smart way, using Roblox to exploit your game. Without it, the frequency of this exploiting would go down, because the learning curve to reverse engineer an exploit like that is high.

That in essence is what I meant when I said this:

A game engine is a game engine. It provides you abstractions to graphically and logically formulate a game. This is a given. However, that’s not what I meant. This is what I meant: it’s a known fact that the freedom, and by extension the ecosystem, of a traditional engine is far easier to adapt to than Roblox, because you have very low-level abstractions and you, as a developer, can control how your game is compiled before it reaches any end user. This means you can use any third-party solution with these abstractions. One example is that you can make proper use of a kernel-level anti-cheat, i.e. the ability to see tasks that are running at the same time as your game.

Common ones are BattlEye, Easy Anti-Cheat, etc., which are levels above what Roblox provides.

Point is though, Roblox wants to handle a large bulk of this themselves and leave the rest that’s not as severe (inherently subjective) to developers. While that’s a normal concept, it starts to be tiresome when the line of what Roblox or the developer should be doing in this scenario is blurred.

Most places it’s not. A corporation or entity makes a game engine. The people using it are responsible for what happens. Unfortunately, this is a concept that cannot properly survive on this platform. And constantly finding some genius way to overcomplicate “protecting remotes”, bans, custom ban systems or whatever is a bandaid fix if the tool can still run. These [ideas suggested] are not inherently bad, since they would be very useful, but they don’t actually solve the root problem. It’s limited damage control.

This right here is my point.


Fundamentally, the exploiter has full control of their client. They could decide not to use the Roblox client and just create their own client to do the exploiting (there are actually mass-like-bots and mass-chat-bots that have done this in the past!).

This means that no matter what Roblox does, the developer will never get out of their responsibility to use server-side validation on remote traffic. This is not going away. Similarly, Roblox has the same responsibilities to validate its own systems that are black boxes for developers.
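To make the server-side validation point concrete, here’s a minimal sketch of what it can look like in practice. The remote name, range limit and cooldown values here are hypothetical examples, not anything from a real game; the point is only that the server decides what’s plausible and never trusts what the client claims.

```lua
-- Minimal sketch of server-side validation for a combat remote.
-- All names and thresholds below are hypothetical examples.
local MAX_RANGE = 30 -- studs; reject hits beyond plausible weapon reach
local COOLDOWN = 0.5 -- seconds between attacks per player

-- Pure validation logic: the server checks what the client claims
-- against what is physically plausible.
local function isHitValid(distance, timeSinceLastAttack)
	if distance > MAX_RANGE then
		return false -- client claims a hit it couldn't have made
	end
	if timeSinceLastAttack < COOLDOWN then
		return false -- firing faster than the weapon allows
	end
	return true
end

-- Roblox wiring (server Script), assuming a RemoteEvent named "AttackRemote":
-- local lastAttack = {}
-- AttackRemote.OnServerEvent:Connect(function(player, targetPart)
-- 	local character = player.Character
-- 	if not character or not character.PrimaryPart then return end
-- 	local distance = (character.PrimaryPart.Position - targetPart.Position).Magnitude
-- 	local now = os.clock()
-- 	if isHitValid(distance, now - (lastAttack[player.UserId] or 0)) then
-- 		lastAttack[player.UserId] = now
-- 		-- apply damage here, server-side
-- 	end
-- end)
```

An exploiter can still fire the remote with whatever arguments they like; the difference is that the server now rejects anything outside the game’s own rules.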

I did point out in my previous post which issues specifically Roblox needs to address (safeguards for physics, let devs write their own checks for weird physics behavior, or more levers for this) and I don’t feel like repeating myself here.

TL;DR: you need to change your perception: the “bandaid fix” is anything you do on the client, not what is validated against/on the server side. That’s the real fix. These exploiters have enough time to waste that they will work around whatever client-side measure is in place.


A lot of this is in the context of a kid, to be quite frank. A kid who’s just running a script they found online on some tool that does the hard work for them. You can probably mitigate the damage those scripts are doing, but you can never actually get rid of the tool because, as a developer, it’s simply not in your capacity. So, once a really smart person rewrites the exploit, you’re back to the same root problem.

That’s kinda what this thread is really echoing.

I don’t disagree with what you’re saying, but there’s definitely a lot more that Roblox can be doing on their side to make it harder for these tools to survive, and saying “the exploiter has full access to the client” is a bit of a catch-all.

Roblox is not an open-source platform where you know exactly what’s happening the second you get it. It’s a binary. Reverse engineering a binary (even one that has client checks) is hard.

Being able to run code arbitrarily on any game using a singular or a combination of tools is purely a Roblox-issue. It’s stops being just the developer when the same tool can be used across millions of experiences without major changes.


It is hard, but:

This still applies no matter how many changes Roblox makes to the binary to stop client-side injection. It doesn’t matter that the people exploiting are kids: it only takes one clever person in the exploiting community to open up the floodgates again.

Totally agree that they need to focus on comprehensive and sustainable fixes that target their black box client and systems.


I feel that the post I made back in February of 2018 would hold some valuable discussion around various types of bans and such:

How there is still no sort of system for this is baffling to me.


On July 1st we pushed a sizable update to our game and transitioned from paid access to free to play. In the one month we’ve been F2P we’ve racked up:

  • 6 million visits with an average player count in the ~2500 range (higher at launch)
  • 2,964 moderator issued bans (~40% assisted by user reports)
  • 25,342 automatic anti-cheat bans at a rate of ~817 per day

Keep in mind that these stats represent our first month after a major release. We expected them to be high. Trying to sample parts of our data to get a strong guess at our monthly ban count moving forwards still gives us a huge number of bans.

It is exhausting and demoralizing dealing with exploiters all the time. The enjoyment I get out of developing games on Roblox doesn’t even start to outweigh the burnout my moderation teams and I suffer through. It’s so much larger an issue than just an anti-cheat problem, and it never feels like any meaningful progress is made by Roblox to assist developers with these issues.

All the security work in the world won’t make up for the fact that I can’t even keep a person out of my game let alone report them to the platform they’re breaking the rules on. I’m tired of kicking individual accounts after they join. I want to ban a person from my game because I caught them cheating, and I don’t want them coming back or even being able to press play. I want these people to have real consequences for their actions. I want the effort I put into my anti-cheat to not be totally undermined by a social platform that only has one developer facing moderation tool (with outdated docs and no tutorial for making the ban system the docs refer to).

I understand that there is no actual way to keep people from getting around a ban, but quite literally anything is going to be better than the current nothing we have to work with. Ban evasion getting harder and harder to pull off implies fewer and fewer people ban evading. As a developer, I’d much prefer to manage a lake instead of an ocean.

There is often a lot of general advice thrown around in anti-cheat and exploit conversations. The sage “Don’t trust the client” tip, secure your remotes, do physics checks, so on and so forth. It’s all great advice and well worth following, but it doesn’t change the fact that we are limited in our ability to actually combat problems with our experience. I’m not advocating for increased trust levels for developers, I’m just stating that this is a problem developers are stuck with. We have a by-design disadvantage when it comes to dealing with exploiting problems.

If developers are going to be hand-held when it comes to what we can and can’t do with this platform, then Roblox needs to have a far bigger presence in assisting with serious exploiting issues that developers cannot properly deal with.


I wonder whatever happened to the community management tooling that was announced at RDC21? I feel like this could be a great starting point given how prominently groups are used today for hosting communities and experiences. I don’t think it’s enough on its own, but a large majority of creators on the platform would already benefit significantly from this alone.

There is already an option to restrict play access to group members. Although it does restrict a number of options, like having private servers, for whatever reason, group bans would still provide a good barrier to joining the group and, by extension, the experience(s) hosted in said group. Open Cloud APIs could help facilitate this process by issuing a group ban if a user is banned in-experience. We’d still have to build our own tools in this case, but at least we would actually have a means of issuing a real consequence to a bad faith actor.

Like I mentioned in my previous post here and the above post, having a way to apply real consequences for bad behaviour would be a saving grace for myself, my team and my communities. It could really change the notion that developers don’t do enough to combat exploiting in our experiences (we do, but players don’t know that Roblox holds our hands and then proceeds not to give us even adequate assistance with a very real problem they acknowledge).

Additionally, I like the analogy above about dealing with a lake rather than an ocean – I couldn’t think of a better way to phrase it. I hate having to think about exploiters constantly and butcher so many core features in my experience because some people relish ruining others’ entertainment. I hate that there isn’t clearer priority given to this issue - and by priority, I’m not talking about what Roblox already does, but rather what they don’t. Roblox can only do so much with the client. We can’t be robbed of a pivotal management aspect of our experiences and be left stranded with no tools besides applying our own knowledge to the “review your own code” advice that’s simply regurgitated ad nauseam on anti-exploit threads.


I am with you up until I saw “User with the kick lower-ranked member permission will also be able to ban users from a group”.

If you’ve ever had a group in the last decade or so, you know kick and ban are very different things. Merging them into a “catch-all” is dumb.

If someone in your community were given that permission, they could hit a few endpoints and ban every user from the group.

Keep ban and kick as separate permissions. This way it’s easier to have some sort of damage control when things go wrong. I can think of a bunch of groups that would be hit by this and people actively abusing the feature.

This still doesn’t really affect this post much, though. A similar system would work; it just needs to be easy enough for developers to employ in their games.

It could be as simple as a :ban(userid) that gets sent to an internal datastore and developers can have the option to unban by clearing an entry out.
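As a rough illustration of how little API surface that would need, here’s a sketch of a developer-side ban list. The module shape and store name are made up; the in-memory table keeps the logic self-contained, and the real persistence would go through `DataStoreService` (shown in comments).

```lua
-- Sketch of a developer-side ban list. The record shape is hypothetical;
-- persistence in a real game would use DataStoreService (commented below).
local BanList = {}
local banned = {} -- [userId] = reason

function BanList.ban(userId, reason)
	banned[userId] = reason or "No reason given"
	-- In Roblox: DataStoreService:GetDataStore("Bans"):SetAsync(tostring(userId), reason)
end

function BanList.unban(userId)
	banned[userId] = nil -- clearing the entry is the unban
	-- In Roblox: DataStoreService:GetDataStore("Bans"):RemoveAsync(tostring(userId))
end

function BanList.isBanned(userId)
	return banned[userId] ~= nil
end

-- Server wiring (hypothetical): kick banned users the moment they join.
-- Players.PlayerAdded:Connect(function(player)
-- 	if BanList.isBanned(player.UserId) then
-- 		player:Kick("You are banned from this experience.")
-- 	end
-- end)
```

This is exactly the kind of boilerplate nearly every large game rewrites from scratch today, which is why a built-in `:ban(userId)` would be so welcome.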

As far as the exploiting problem goes, this will help mitigate some people, but the majority of kids simply don’t care and will make a new account. Very few people exploit on a main account. Most do it on an alt.


As developers, we don’t have any tooling to deal with bad actors, period. If someone gets reported using the Roblox report system in my game, I would like to know who the reported user is, when they were reported, and the reason. I am not comfortable bringing my users to third party chat websites to report users because the majority of my player base is under the age of 13.


There are games on Steam that have to connect to an anti-cheat application; if it detects another application injecting into or assisting the game, it won’t let you play. Roblox could just embed this kind of system inside their application, and it’d all be fine.

I feel like another issue is exploiters being able to delete body parts, or anything in their character, and have it replicate to the server. That’s extremely scary for games that aren’t too popular, since it can break lots of server scripts. I also wish that body objects didn’t replicate to the server, so that exploiters couldn’t fly, and it’d just be jump power and speed hacks to deal with.

This all can be fixed by an application detection system to see if there’s something out of Roblox communicating with them.
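In the meantime, one developer-side mitigation for the deleted-body-parts problem above is a server-side integrity check: since those deletions do replicate, the server can diff the character against what should exist. This is only a sketch, and the required-parts list here is a partial example, not a full rig.

```lua
-- Sketch: detect server-visible character tampering by diffing the
-- parts a character should have against what is actually present.
-- The required-parts list is a partial example, not a full rig.
local REQUIRED_PARTS = { "Head", "HumanoidRootPart", "UpperTorso" }

-- Pure logic: given a set like { Head = true, ... }, return the
-- names of required parts that are missing.
local function missingParts(presentParts)
	local missing = {}
	for _, name in ipairs(REQUIRED_PARTS) do
		if not presentParts[name] then
			table.insert(missing, name)
		end
	end
	return missing
end

-- Roblox wiring (server Script), run periodically or on ChildRemoved:
-- local present = {}
-- for _, child in ipairs(character:GetChildren()) do
-- 	present[child.Name] = true
-- end
-- if #missingParts(present) > 0 then
-- 	player:Kick("Character integrity check failed")
-- end
```

It’s damage control rather than a fix, for the reasons discussed above, but it keeps broken characters from silently crashing server scripts.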


Roblox has some very in-depth alt detection code where it tries to pull as many unique identifiers as it can from your machine to figure out who’s playing a given game and whether it’s an alt.

It would be really powerful if developers could use this alt detection with a Ban method, which bans the account and all associated alts from a game.

Obviously, it wouldn’t be perfect, but it’s up to the developer’s discretion on whether to use it or not, and it would probably work more times than it fails given how in-depth it is.

Moderation even outside of exploiting is ridiculous. If your game gets targeted by spam bots, even though it’s likely coming from the same IP address or machine, we physically cannot do anything to stop the accounts from joining.


It’s a bit tone deaf to reply to a post that’s more about moderation tooling to handle cheaters than about the technical aspect of anti-exploiting. The issues you mention are separate topics worth separate feature requests or, if possible, solutions that can be implemented developer-side.

I feel like the problem isn’t as simple as “install this and that and all will be fine”. No such catch-all exists, be it technical or customer support/moderation related. Anything done on the application side is client side. What we’re asking for is tooling to deal with cheaters themselves and their ability to play and ruin our experiences without meaningful consequence.


As a Roblox developer, it is currently too hard to moderate your game. There’s a huge hole around banning, verifications for banning, and unbanning. :Kick() isn’t enough; you can’t kick or ban players from outside the game, or while they’re offline. You can use a table full of user IDs, but those get bulky and are hard work to maintain, since I get something like 20 exploiter reports every day.

If Roblox is able to address this issue, it would improve my development experience because I would be able to hire moderators extremely easily, with just a simple feature: the Game Dashboard.

There are a few other solutions I must discuss before this, and that’s a couple new functions: The :Ban() and the :BanOutOfGame() function. They do exactly what you’d think they’d do: ban them if they’re in-game or ban them if they’re out of game.

Now, when either of these new ban functions runs, it sends a message to the Game Dashboard saying something like, “Moderator rubixxman banned HackerAlt123 for fly exploiting.” There would also be an option to revoke these bans.

In case a moderator is going berserk, you can demote them and choose to kick or ban them from the game (both would kick them, but banning would not let them come back).

When trying to join the game while banned, you’d be met with a custom kick message set in the Game Dashboard that could say something like, “Appeal in the Discord server”.

I almost used a suspicious Discord webhook to hook up ban messages to my Discord server, since Discord blocks Roblox requests, but I realized that Roblox should add a feature where the messages go to THEM.

Now for some backstory

All my moderators joined a clan that the demoted community manager had made, and I’m left with only two moderators (including myself, though the remaining moderator and I hardly ban people). If Roblox were to add this, it would be a complete lifesaver since I’d be able to hire moderators again. In the meantime, I cannot, since they could be teaming with that dangerous clan and just backstab me while I’m sleeping, and I’d probably wake up with no players in-game. My game currently has a concurrent player average of 400, yet exploiters are everywhere.

So please Roblox, if the Game Dashboard were added, this would LITERALLY SAVE MY LIFE!!! PLS ROBLOX LISTEN!!!


To prevent API bloat

Ban should take a UserId and then work dynamically based on the player’s state.

The reason a Ban method would be useful is that Roblox could tie it into their alt detection system and prevent any alts of the same user entering the same game.

The fact that this doesn’t exist is baffling, when it’s got more positives than negatives.


Sadly, there are tools to mask your IP address and device, so there’s no foolproof way to stop alts. I believe it should be harder to make alts in some way, but all I thought of was mandating an email. I guess people spam-create emails anyway, so that would just put extra friction on everyone. I genuinely thought that emails were hard to make, but nope, I guess not lol.


There are no tools that “block your IP address”. There are VPNs, which are limited. IP bans work in practice; I utilize them frequently in non-Roblox games. VPNs have limited IPs. When someone is dedicated enough, they exhaust them. When they’re not, they’re not playing your game. Plus, the barrier of using a VPN already eliminates more than zero people (in fact, a lot more than zero), which gives it added value.


Yes, but I have a sneaking suspicion that Roblox needs another type of “injector” of its own to play its games. I suspect this because, when you open Task Manager, there are two Roblox applications open, so my logic is that one of them is some sort of injector and the other is the game.

So if they banned injections outright, they might also be banning their own injector. Again, this is a guess, but I don’t see any other reason.

Which Roblox already has, in the form of a DLL injection detection system (DLL injection is required for exploits). I’m not sure how it works, but it seems like Roblox manually gives it the DLL file hashes of known exploits every update, which makes it very easy for exploits to bypass.

According to the latest release of rbxfpsunlocker, it is related in some way to security, presumably to DLL injection specifically:

// Roblox has a security daemon process that runs under the same name as the client (as of 3/2/22 update). Don't unlock it.



While there can be some workarounds for players being able to re-enter the game after they were already kicked, like code that checks whether a player is banned and keeps them out of the actual playable server until they’re clear, or sticking them on a “You are banned for …” screen, a feature that prevents banned players from joining at all could really be useful and doesn’t seem even a bit hard for Roblox to implement. Currently you have to write so much code for something that might even annoy honest players, and that could easily be replaced with something as simple as BanService:BanPlayer(userId, releaseDate, reason) and BanService:UnbanPlayer(userId).
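The timed-ban logic behind that proposed API is small. Here’s a sketch of what a hypothetical BanService:BanPlayer(userId, releaseDate, reason) would have to track internally; the record shape and function names are made up for illustration.

```lua
-- Sketch of the timed-ban logic behind a hypothetical
-- BanService:BanPlayer(userId, releaseDate, reason) API.
-- The record shape and names are made up for illustration.
local bans = {} -- [userId] = { releaseDate = unixTime, reason = string }

local function banPlayer(userId, releaseDate, reason)
	bans[userId] = { releaseDate = releaseDate, reason = reason }
end

local function unbanPlayer(userId)
	bans[userId] = nil
end

-- Returns false, or true plus the reason, given the current unix time.
local function isBanned(userId, now)
	local record = bans[userId]
	if record == nil then
		return false
	end
	if now >= record.releaseDate then
		bans[userId] = nil -- ban expired; clean up the record
		return false
	end
	return true, record.reason
end

-- In Roblox, Players.PlayerAdded could call isBanned(player.UserId, os.time())
-- and :Kick() with the reason before the player reaches the playable server.
```

If Roblox ran this check at matchmaking time rather than in our server scripts, banned players would never even get to press play, which is the whole ask of this thread.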