Developers are not equipped to deal with exploiters

But how would said account bans be organised, and how would they be managed?
A built-in ban function would lack many of the features custom ban systems can have. Besides, it could give users a false sense of security: they don’t know how it works, and they may think that using alts isn’t possible.

Also, such a function wouldn’t really have much use anyway, as it can be replicated with :Kick() in a much better way.


There is no custom ban system that users can implement that will work against alts. Roblox doesn’t expose IPs or HWID information to developers. All ban systems in Roblox are account-based.

The point is to make it easier for people without a lot of code experience to keep the things they build pleasant. Think about how easy it is to implement a GUI reset button - it’s 3-4 lines of code. Literally “if button clicked, kill player”. It could be that easy to ban a problem user. Instead, you have to:

  1. Call Kick() on that user.
  2. Set up a datastore to keep that user’s UserId.
  3. Every time a user joins, check against that datastore to see if they’re banned.
  4. If they are banned, call Kick() on them again.
  5. Implement a way for moderators to interact with this custom function, which might be chat, a GUI, whatever.
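For reference, steps 1–4 of that checklist look roughly like this in practice (a minimal sketch; the datastore name and key scheme are made up for illustration):

```lua
-- Minimal sketch of a hand-rolled ban system (server-side).
-- "Bans_v1" and the "user_" key scheme are illustrative, not an official API.
local Players = game:GetService("Players")
local DataStoreService = game:GetService("DataStoreService")

local banStore = DataStoreService:GetDataStore("Bans_v1")

local function banUser(player, reason)
    -- Step 2: persist the user's UserId so the ban survives rejoins
    local ok, err = pcall(function()
        banStore:SetAsync("user_" .. player.UserId, reason or "Banned")
    end)
    if not ok then
        warn("Failed to save ban:", err)
    end
    -- Step 1: remove them from the current server
    player:Kick(reason or "You have been banned from this experience.")
end

-- Steps 3-4: re-check the datastore on every join
Players.PlayerAdded:Connect(function(player)
    local ok, reason = pcall(function()
        return banStore:GetAsync("user_" .. player.UserId)
    end)
    if ok and reason then
        player:Kick(reason)
    end
end)
```

Step 5 (a moderator-facing chat command or GUI) still has to be built on top of this, which is exactly the boilerplate a built-in Ban() would remove.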

If there was a built-in Ban command, you could open the console, type game.Players.Jerk:Ban(), and that would be it. You could include an optional message just like with Kick(), and Roblox could even monitor how often an account gets banned from games as part of moderation.


Bit of a bump since I didn’t see this mentioned (correct me if I’m wrong): besides making it a lot easier for new devs and smaller studios to handle exploiters, a first-party ban system could also allow for easy banning across multiple games at once if they’re owned by the same group or person.

What would be really useful would be an API like:

Player:Ban(banMessage, banDuration, offenceType, banFromAllCreatorsExperiences)

Something like this would allow developers to easily ban users permanently or for a set time, log an offence type (e.g. exploiting, cheating, offensive language, abusive behavior, etc.), and choose whether to ban them from just this game or all games published by the game’s creator.
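Hypothetical usage of the proposed signature might look like this (everything here is illustrative; no such method exists today, and the duration/offence-type semantics are assumptions):

```lua
-- Hypothetical Player:Ban(...) as proposed above; the method itself,
-- the offence-type string, and the cross-experience flag are all made up.
local jerk = game.Players:FindFirstChild("Jerk")
if jerk then
    jerk:Ban(
        "Banned for exploiting",  -- banMessage, shown to the user like Kick()'s
        7 * 24 * 60 * 60,         -- banDuration in seconds (nil = permanent)
        "Exploiting",             -- offenceType, logged for moderation patterns
        true                      -- banFromAllCreatorsExperiences
    )
end
```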

An interface on the website to ban users would also be really nice for banning reported users that weren’t caught by automated systems.

An interesting aspect that could be useful for Roblox is that they’d end up with a crowd-sourced report/moderation system where it’d be easy to see if a user has a pattern of being banned from multiple games for things like exploiting or abusing other players. I’d be wary of using this to automatically ban users for the most part but it could be useful for catching the obvious exploiters.
For example, if multiple decent-sized games banned a user for exploiting within a short time period, the account could be flagged so someone can look into it.


Major support for this.

Giving us the tools to make it as easy as possible to tailor an anti-cheat to our game would be great, since a broad solution likely doesn’t exist.

Things like:

  • Ban APIs (as suggested above by @EndorsedModel)
  • Maximum walkspeed (enforcing that a player cannot move faster than their WalkSpeed or teleport)
  • Easy tools for game moderators to assign bans

Right now we have to build fairly complex moderation tools ourselves to let people perform bans, undo bans, etc. We usually build these tools for our own moderators to use since we want to spend our time developing the game. Having a built-in dashboard to manage this for our games would be fantastic, so we don’t need to build one for each game.


Exploiters have brought my game down from 7,000 players to 2,000, and eventually down to 1,000. I tried to create an anti-cheat, but it was banning innocent players. I have given up on anti-cheats: the ones I could make that would actually be effective would be too laggy and would still probably ban innocents, and the client-side ones just get deleted by the exploiters. So it’s an ongoing issue in my games, and there’s nothing I can do about it. My group wall is full of nothing but exploit reports; I can never do a question of the day or any other fun community activity because all I see is exploit reports. It’s very sad and humiliating, but there’s nothing I can really do about these issues.


This is kind of out of range, but would it be a good idea (or even possible) for Roblox to fire an event whenever it detects a player teleporting or an instant change of position? It would make it very easy to combat teleporters.
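Without such an event, a rough server-side approximation is to watch how far each character moves per heartbeat (a sketch only; the thresholds are arbitrary and would false-positive on legitimate teleports, so flag rather than auto-ban):

```lua
-- Server-side teleport heuristic: flag characters that move further per
-- heartbeat than their movement speed could plausibly allow.
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local lastPositions = {}

RunService.Heartbeat:Connect(function(dt)
    for _, player in ipairs(Players:GetPlayers()) do
        local root = player.Character
            and player.Character:FindFirstChild("HumanoidRootPart")
        if root then
            local last = lastPositions[player]
            if last then
                local travelled = (root.Position - last).Magnitude
                -- Generous slack for lag spikes, falling, and vehicles
                if travelled > math.max(150 * dt, 50) then
                    warn(player.Name, "moved", travelled, "studs in one heartbeat")
                    -- Flag or rubber-band here rather than instantly banning
                end
            end
            lastPositions[player] = root.Position
        end
    end
end)

Players.PlayerRemoving:Connect(function(player)
    lastPositions[player] = nil
end)
```

Any check like this also needs to whitelist your game’s own legitimate teleports, which is part of why a built-in event would be nicer.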


Being able to read the hardware on the client doesn’t help when the exploiter controls the client.

No other game engine provides anti-exploit tooling out of the box. It has to be programmed in via DRM or watchdog software provided under a paid license, or you may get some rudimentary support from the cloud platforms you’re working with, but you can bet on that being just as leaky as (most likely much leakier than) what Roblox provides.

FWIW, Roblox frequently ships updates that attempt to break exploit tooling. The problem is that the exploiting tools quickly adapt to the fixes. This is because, ultimately, the exploiter controls the Roblox client, since it runs on their device.

The dev needs to stop the exploiting by securing their remotes. For physics, the story is different, and Roblox should certainly provide more configurable controls for developers here (e.g. safeguards against common cases like teleporting or speed-hacking, or against forced physics ownership), but this is not as straightforward as it seems, because many developers have features in their games that legitimately rely on teleporting, high movement speed, and the like.
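“Securing your remotes” concretely means treating every RemoteEvent argument as attacker-controlled and enforcing the rules on the server. A minimal sketch (the remote name, range, cooldown, and damage value are all illustrative, not from any real game):

```lua
-- Server-side validation of a hypothetical "DealDamage" RemoteEvent.
-- Never trust the client's claims about range, rate, or damage.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local dealDamage = ReplicatedStorage:WaitForChild("DealDamage") -- RemoteEvent

local MAX_RANGE = 30 -- studs; illustrative gameplay rule
local COOLDOWN = 0.5 -- seconds between attacks
local lastAttack = {}

dealDamage.OnServerEvent:Connect(function(player, target)
    -- Type-check everything the client sent
    if typeof(target) ~= "Instance" or not target:IsA("Model") then return end
    local targetHumanoid = target:FindFirstChildOfClass("Humanoid")
    local myRoot = player.Character
        and player.Character:FindFirstChild("HumanoidRootPart")
    local targetRoot = target:FindFirstChild("HumanoidRootPart")
    if not (targetHumanoid and myRoot and targetRoot) then return end

    -- Enforce rate and range on the server, not the client
    local now = os.clock()
    if lastAttack[player] and now - lastAttack[player] < COOLDOWN then return end
    if (myRoot.Position - targetRoot.Position).Magnitude > MAX_RANGE then return end

    lastAttack[player] = now
    targetHumanoid:TakeDamage(10) -- the server decides the damage, not the client
end)
```

The pattern is the same for any remote: validate types, then validate game rules, and never execute the client’s instructions verbatim.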

False – a large number of top games have exploits specifically crafted for them. Some of the most popular games may even have dedicated clients that exploit vulnerabilities the developer introduced in how they (don’t) validate remote traffic.

This kind of grandstanding is not an appropriate way to give feedback and detracts from the topic. There’s almost nothing in here that is actually useful towards the issue and comes across mostly as self-gratification.

Let’s stick to the real issue and take the off-topic “big bad corp” appeal elsewhere. As above, Roblox does ship fixes against exploits, so your statement about them supposedly not caring about it falls completely flat.


Well, so far so good for Chickynoid. Hopefully sometime soon Roblox will introduce their own version of server-authoritative player movement, but until then Chickynoid works pretty well!

Maybe consider using Chickynoid if your game can get away with supporting it?


Most exploits are done through a tool that is actually exploiting the Roblox client. I thought this was a given, since once you have a tool that allows you to run arbitrary code on your client, it’s far easier to exploit the game. One client is used for millions of experiences. Successfully exploit the client that loads the experience and you can potentially start exploiting millions of experiences by extension.

Large games are just the most targeted because they’re large games, but if you have a tool that can exploit the client like that, you can do just about anything in any game on this platform. Hence why multiple developers have the same baseline issue. Variation is common because each game is different, but in reality those variations are individual scripts being used in conjunction with a tool.

Here’s a good analogy: it’s pretty similar to how jailbreaking works on iOS devices. One person finds an exploit on the device and successfully gets arbitrary code execution working, then pushes it out to the people who want it. Soon after, you have an entire ecosystem based on that one exploit. Once that exploit gets patched, most people abandon it. That’s why jailbreaks aren’t as common anymore: it’s become extremely hard to find an exploit.

It’s pretty much a 1:1 with how this place works. One person makes a tool (we all know the name, but I won’t say it) that allows arbitrary code execution on the client. Other people who know how to script start using that tool to develop exploits. The exploit finds its way to the game’s player base as a script that runs on that tool. Kids go wild.

If you trace it all the way back to how the tool works, it’s using a clever vulnerability in Roblox’s client, not your actual game. It is, in a smart way, literally using Roblox to exploit your game. Without it, the frequency of exploiting would go down, because the learning curve to reverse-engineer an exploit like that is high.

That in essence is what I meant when I said this:

A game engine is a game engine. It provides you abstractions to graphically and logically formulate a game. This is a given. However, that’s not what I meant. What I meant is this: on other engines, the freedom (and by extension the ecosystem) is far easier to adapt than on Roblox, because you have very low-level abstractions and you, as a developer, control how your game is compiled before it reaches any end user. This means you can use any third-party solution with those abstractions. One example is being able to make proper use of a kernel-level anti-cheat, i.e. one with the ability to see tasks that are running at the same time as your game.

Common ones are BattlEye, Easy Anti-Cheat, etc., which are levels above what Roblox provides.

Point is though, Roblox wants to handle a large bulk of this themselves and leave the rest that’s not as severe (inherently subjective) to developers. While that’s a normal concept, it starts to be tiresome when the line of what Roblox or the developer should be doing in this scenario is blurred.

In most places it’s not. A corporation or entity makes a game engine; the people using it are responsible for what happens. Unfortunately, this is a concept that cannot properly survive on this platform. And constantly finding some genius way to overcomplicate “protecting remotes”, bans, custom ban systems, or whatever is a bandaid fix if the tool can still run. The ideas suggested aren’t inherently bad (they would be very useful), but they don’t actually solve the root problem. It’s limited damage control.

This right here is my point.


Fundamentally, the exploiter has full control of their client. They could decide not to use the Roblox client and just create their own client to do the exploiting (there are actually mass-like-bots and mass-chat-bots that have done this in the past!).

This means that no matter what Roblox does, the developer will never get out of their responsibility to use server-side validation on remote traffic. This is not going away. Similarly, Roblox has the same responsibilities to validate its own systems that are black boxes for developers.

I did point out in my previous post which issues specifically Roblox needs to address (safeguards for physics, let devs write their own checks for weird physics behavior, or more levers for this) and I don’t feel like repeating myself here.

TL;DR: you need to change your perception. The “bandaid fix” is anything you do on the client, not what is validated on the server side; that’s the real fix. These exploiters have enough time to waste that they will work around whatever client-side measure is in place.


A lot of this is in the context of a kid, to be quite frank. A kid who’s just running a script they found online on some tool that does the hard work for them. You can probably mitigate the damage that scripts do, but you can never actually get rid of the tool, because as a developer it’s simply not in your capacity. So, once a really smart person rewrites the exploit, you’re going back to the same root problem.

That’s kinda what this thread is really echoing.

I don’t disagree with what you’re saying, but there’s definitely a lot more that Roblox can be doing on their side to make it harder for these tools to survive, and saying “the exploiter has full access to the client” is a bit of a catch-all.

Roblox is not an open-source platform where you know exactly what’s happening the second you get it. It’s a binary. Reverse engineering a binary (even one that has client checks) is hard.

Being able to run arbitrary code in any game using a single tool, or a combination of tools, is purely a Roblox issue. It stops being just the developer’s problem when the same tool can be used across millions of experiences without major changes.


It is hard, but:

This still applies no matter how many changes Roblox makes to the binary to stop client-side injection. It doesn’t matter that the people exploiting are kids: it only takes 1 clever person in the exploiting community to open up the flood gates again.

Totally agree that they need to focus on comprehensive and sustainable fixes that target their black box client and systems.


I feel that the post I made back in February of 2018 holds some valuable discussion around various types of bans and such:

How there is still no sort of system for this is baffling to me.


On July 1st we pushed a sizable update to our game and transitioned from paid access to free to play. In the one month we’ve been F2P we’ve racked up:

  • 6 million visits with an average player count in the ~2500 range (higher at launch)
  • 2,964 moderator issued bans (~40% assisted by user reports)
  • 25,342 automatic anti-cheat bans at a rate of ~817 per day

Keep in mind that these stats represent our first month after a major release. We expected them to be high. Trying to sample parts of our data to get a strong guess at our monthly ban count moving forwards still gives us a huge number of bans.

It is exhausting and demoralizing dealing with exploiters all the time. The enjoyment I get out of developing games on Roblox doesn’t even start to outweigh the burnout myself and my moderation teams suffer through. It’s so much larger of an issue than just an anti-cheat problem and it never feels like any meaningful progress is made by Roblox to assist developers with these issues.

All the security work in the world won’t make up for the fact that I can’t even keep a person out of my game, let alone report them to the platform whose rules they’re breaking. I’m tired of kicking individual accounts after they join. I want to ban a person from my game because I caught them cheating, and I don’t want them coming back or even being able to press play. I want these people to have real consequences for their actions. I want the effort I put into my anti-cheat to not be totally undermined by a social platform that only has one developer-facing moderation tool (with outdated docs and no tutorial for making the ban system the docs refer to).

I understand that there is no actual way to keep people from getting around a ban, but quite literally anything is going to be better than the current nothing we have to work with. Ban evasion getting harder and harder to pull off means fewer and fewer people ban evading. As a developer, I’d much prefer to manage a lake instead of an ocean.

There is often a lot of general advice thrown around in anti-cheat and exploit conversations: the sage “don’t trust the client” tip, securing your remotes, doing physics checks, and so on. It’s all great advice and well worth following, but it doesn’t change the fact that we are limited in our ability to actually combat problems within our experiences. I’m not advocating for increased trust levels for developers; I’m just stating that this is a problem developers are stuck with. We have a by-design disadvantage when it comes to dealing with exploiting problems.

If developers are going to be hand-held when it comes to what we can and can’t do with this platform, then Roblox needs to have a far bigger presence in assisting with serious exploiting issues that developers cannot properly deal with.


I wonder whatever happened to the community management tooling that was announced at RDC21? I feel like this could be a great starting point given how prominently groups are used today for hosting communities and experiences. I don’t think it’s enough on its own, but a large majority of creators on the platform would already benefit significantly from this alone.

There is already an option to restrict play access to group members. Although it does restrict a number of options, like having private servers, for whatever reason, group bans would still provide a good barrier to joining the group and, by extension, the experience(s) hosted in said group. Open Cloud APIs could help facilitate this process by issuing a group ban when a user is banned in-experience. We would still have to build our own tools in this case, but at least we would actually have a means of issuing a real consequence to a bad-faith actor.

Like I mentioned in my previous post here and the post above, having a way to impose real consequences for bad behaviour would be a saving grace for myself, my team, and my communities. It could really change the notion that developers don’t do enough to combat exploiting in our experiences (we do, but players don’t know that Roblox holds our hands and then proceeds not to give us adequate assistance with a very real problem it acknowledges).

Additionally, I like the analogy above about dealing with a lake rather than an ocean; I couldn’t think of a better way to phrase it. I hate having to think about exploiters constantly and butcher so many core features in my experience because some people relish ruining others’ entertainment. I hate that there isn’t clearer priority on this issue - and by priority, I’m not talking about what Roblox already does, but rather what they don’t. Roblox can only do so much with the client. We can’t be robbed of a pivotal management aspect of our experiences and be left stranded with no tools besides applying our own knowledge to the “review your own code” advice that’s regurgitated ad nauseam on anti-exploit threads.


I am with you up until I saw “User with the kick lower-ranked member permission will also be able to ban users from a group”.

If you’ve ever run a group in the last decade or so, you know that kick and ban are very different things. Seeing them merged into a “catch-all” is dumb.

If someone in your community were given that permission, they could hit a few endpoints and ban every user from the group.

Keep ban and kick as separate permissions. That way it’s easier to have some sort of damage control when things go wrong. I can think of a bunch of groups that would be hit by this, with people actively abusing the feature.

Still doesn’t really positively affect this post, though. A similar system would work; it just needs to be easy enough for developers to employ in their games.

It could be as simple as a :ban(userid) that gets sent to an internal datastore, with developers having the option to unban by clearing an entry out.

As far as the exploiting problem goes, this will help mitigate some people, but the majority of kids simply don’t care and will make a new account. Very few people exploit on a main account; most do it on an alt.


As developers, we don’t have any tooling to deal with bad actors, period. If someone gets reported using the Roblox report system in my game, I would like to know who the reported user is, when they were reported, and the reason. I am not comfortable bringing my users to third party chat websites to report users because the majority of my player base is under the age of 13.


There are games on Steam that have to connect to an anti-cheat application: if it detects some sort of injection or another application assisting the player, it won’t let them play. Roblox could just embed this kind of system inside their own application, and it would all be fine.

I feel like another issue is exploiters being able to delete body parts, or anything in their character, and have it replicate to the server. That’s extremely scary for games that aren’t too popular, since it can break lots of server scripts. I also wish that body objects didn’t replicate to the server, so that exploiters couldn’t fly and it would just be jump-power and speed hacks to deal with.

This could all be addressed by an application-detection system that checks whether something outside of Roblox is communicating with the client.
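While developers can’t prevent the removal from replicating, they can at least detect the deleted-body-part case on the server. A small sketch (logging only; kicking on this signal alone is risky, since deaths and resets also remove parts):

```lua
-- Server-side watcher: log when limbs vanish from a live character.
local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(player)
    player.CharacterAdded:Connect(function(character)
        character.ChildRemoved:Connect(function(child)
            local humanoid = character:FindFirstChildOfClass("Humanoid")
            -- Only suspicious if a limb disappears while the player is alive
            if child:IsA("BasePart") and humanoid and humanoid.Health > 0 then
                warn(player.Name, "lost body part", child.Name, "while alive")
            end
        end)
    end)
end)
```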


Roblox has some very in-depth alt-detection code, where it tries to pull as many unique identifiers from your machine as possible to figure out who’s playing a given game and whether it’s an alt.

It would be really powerful if developers could use this alt detection with a Ban method, which bans the account and all associated alts from a game.

Obviously, it wouldn’t be perfect, but it would be up to the developer’s discretion whether to use it, and it would probably work more often than it fails given how in-depth it is.

Moderation even outside of exploiting is ridiculous. If your game gets targeted by spam bots, even though they’re likely coming from the same IP address or machine, we physically cannot do anything to stop the accounts from joining.


It’s a bit tone-deaf to reply to a post that has more to do with moderation tooling for handling cheaters than with the technical aspects of anti-exploiting. The issues you mention are separate topics worth separate feature requests or, if possible, solutions that can be implemented developer-side.

I feel like the problem isn’t as simple as “install this and that and all will be fine”. No such catch-all exists, be it technical or customer-support/moderation related. Anything done on the application side is client-side. What we’re asking for is tooling to deal with cheaters themselves and their ability to play and ruin our experiences without meaningful consequence.