Hello developers!
You may have read one of my old articles on this topic with a similar title:
Network Optimization - Preventing High Latency & Reducing Lag - Resources / Community Tutorials - DevForum | Roblox
This is my rebooted version of that now fairly out-of-date article. Over time, the way Roblox transmits remotes has changed quite a bit and now works very differently than it used to. Because of this, most of the content in my old article is no longer accurate. This post is meant to have the latest information on network optimization.
This article is based on my own testing and prior knowledge of Roblox's data transfer. These are things I have a lot of experience with and principles I follow all of the time, but not everything here will be perfectly accurate, and behavior can change a lot as time goes on. Some assumptions about how Roblox's data queue works may be incorrect; however, since this is based on testing, the "what to do and why" sort of information will still be pretty accurate.
Remote data transfer
Remotes transmit requests at variable rates. Remote data is transmitted up to 60 times per second; however, most mass remote requests are coalesced into one big request. The throughput limit of remotes is completely dependent on the network speed of the server and client and is highly variable, but it still respects Roblox's global throughput limit per player.
IMPORTANT: Unit Discrepancy
In this article I use these units:
- kbi - kilobits (This "should" be kb)
- kb - kilobytes (This "should" be kB)
The limit for this data was previously listed in my article and on the devhub as 50 kb/s (50 kilobits/s) for any player. That is one eighth of 50 kB/s (50 kilobytes/s), which is the actual data limit as shown through testing (50 kilobits/s is only 6.25 kilobytes/s). This confusion came from the fact that kB is often improperly written as kb. Additionally, Roblox's network debugger lists "KB/s", which is again unclear and technically improper, since capitalization matters. Roblox's network tab uses kilobytes per second (kB).
The proper notation would be kb vs kB; however, this is unfortunately not well respected by anyone (I personally disagree with these units being written this way and don't respect it myself). For clarity, in this article I will use kbi to refer to kilobits and kb to refer to kilobytes, as this is how I prefer to write my units.
How much data can Roblox transfer?
Roblox has a soft transfer limit of 50kb/s between the server and a player's client; that's 1024 * 50 = 51,200 bytes per second. A Roblox server has no limit to how much data it can send or receive globally, but you are prevented from sending or receiving more than this limit for any given player. It's important that you stay well below this data limit; otherwise, you will slow down (or even halt!) all replication throughput. Since data is effectively queued, the connection to the client is not lost, so your ping can climb well above the 30 second timeout, even minutes behind. Sending too close to this limit is a bad idea, since this limit encompasses all data throughput, not just remotes.
What can go wrong if you send too much data?
Here is an example of what can go wrong. Let's say you constantly send 60kb/s via remotes. Roblox will coalesce many of your requests into one bigger one, which usually has the effect of allowing Roblox to send its own data for a few frames before some of your data is sent.
In this case, Roblox is likely only sending 50kb of your data every few seconds, meaning most of your data is going to the queue. That means most of the queue is taken up by your data, and, due to how Roblox prioritizes packets in the queue, the queue can eventually get big enough that Roblox's own data isn't really being prioritized anymore, since there is a lot of old remote data from several seconds ago that Roblox sees as data that needs to be sent sooner.
This can result in measured pings alone reaching over 100,000 ms after only a few minutes. But that's only measured ping; it doesn't include any information about how data is prioritized. That means, even though measured ping is 100,000 ms, which is over a minute, the effective ping for things like replication could be ten, twenty, or thirty minutes, making your game completely unplayable in only a few minutes.
How can you manage your data throughput better?
I would recommend first giving yourself a goal limit. I would say 25kb/s is a reasonable hard limit to give yourself. This reserves half of Roblox's data throughput for your own code and half for Roblox & replication. It's okay if you occasionally go above your limit in rare cases, or even above the 50kb/s limit, but you never want to stay above your limit for more than a few seconds. Going over the 50kb/s limit will gradually increase ping as more and more of the data throughput is allocated to your remotes and more and more replication data is queued up.
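To help stay under that goal, here is a minimal sketch of a self-imposed, per-player send budget on the server. The estimateBytes helper and its constants are assumptions for illustration only, since Roblox does not expose the exact serialized size of a remote payload.

```lua
-- Minimal sketch of a self-imposed, per-player send budget (server side).
-- estimateBytes and its constants are rough assumptions for illustration.
local BUDGET_BYTES_PER_SECOND = 25 * 1024 -- the 25kb/s goal from above

local budgets = {} -- [Player] = { bytes = number, windowStart = number }

local function estimateBytes(payload: string): number
	-- Assumption: payload length plus a small fixed per-request overhead.
	return #payload + 16
end

local function trySend(remote: RemoteEvent, player: Player, payload: string): boolean
	local now = os.clock()
	local entry = budgets[player]

	if entry == nil or now - entry.windowStart >= 1 then
		entry = { bytes = 0, windowStart = now }
		budgets[player] = entry -- (clear this entry when the player leaves)
	end

	local cost = estimateBytes(payload)
	if entry.bytes + cost > BUDGET_BYTES_PER_SECOND then
		return false -- over budget for this second; defer or drop the update
	end

	entry.bytes += cost
	remote:FireClient(player, payload)
	return true
end
```

Anything trySend rejects can be queued and retried on a later frame, or dropped entirely if the update is no longer relevant by then.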
You should not only take into account remote requests, but also property changes. If you change more than a few properties at once or change more than a few instances at once, you should unparent the target instance(s) first with as few .Parent sets as possible, set your properties, and then reparent them. This turns what could be hundreds or thousands of replicated property changes into a few property changes (or rather, ancestry changes) and a few instances being sent.
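Here is a minimal sketch of that pattern, assuming a hypothetical model whose descendant parts you want to recolor:

```lua
-- Sketch: batch many property changes behind a single unparent/reparent.
-- "model" is a hypothetical container holding the parts you want to change.
local function recolorParts(model: Model, color: Color3)
	local originalParent = model.Parent
	model.Parent = nil -- one ancestry change instead of many replicated edits

	for _, descendant in model:GetDescendants() do
		if descendant:IsA("BasePart") then
			descendant.Color = color -- edited while the model is out of the DataModel
		end
	end

	model.Parent = originalParent -- the finished result replicates in one go
end
```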
For example, let's say your game has an entity system with coins that spawn in a folder in the workspace called Coins, and that you clear the coins on your map at the end of a round. What you should not do is loop over each coin and delete it individually. Instead, you should :Destroy() your Coins folder and create a new, empty one.
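In code, that round cleanup might look something like this sketch (the Coins folder name comes from the example above):

```lua
-- Sketch: clear every coin by destroying the whole folder, then recreate it.
local function clearCoins()
	local oldCoins = workspace:FindFirstChild("Coins")
	if oldCoins then
		oldCoins:Destroy() -- one removal instead of one per coin
	end

	local newCoins = Instance.new("Folder")
	newCoins.Name = "Coins"
	newCoins.Parent = workspace
end
```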
Continuing the example, let's say you want your coins to spin or change color. What you should not do is have the server apply this behavior. Instead, you should have the client do the property setting and simply have the server occasionally tell the client "hey, here's a list of coins and what colors you should make them in the future." Even better would be to simply keep this behavior entirely on the client.
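A sketch of what the server half could look like, assuming a hypothetical RemoteEvent named CoinColors in ReplicatedStorage and the Coins folder from the example above:

```lua
-- Sketch (server Script): send one batched message instead of recoloring
-- every coin from the server. CoinColors is a hypothetical RemoteEvent.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local coinColorsRemote = ReplicatedStorage:WaitForChild("CoinColors")

local function broadcastCoinColors()
	local assignments = {}
	for _, coin in workspace.Coins:GetChildren() do
		assignments[#assignments + 1] = {
			coin = coin,
			color = Color3.fromRGB(math.random(0, 255), math.random(0, 255), math.random(0, 255)),
		}
	end
	coinColorsRemote:FireAllClients(assignments) -- one remote call, every coin
end
```

A LocalScript would then connect to CoinColors.OnClientEvent, set each coin's color, and run any spinning animation locally, so none of that per-frame work has to replicate.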
Doing all of this might require restructuring your game's code, or even rewriting how entities work in your game. But there is unfortunately no way around it. This is similar to having good game security: if your game is designed without security in mind and you want to improve security in the future, it could require large changes to how your game works.
This isn't just about ping!
Good network practices can also massively improve the FPS and general performance of your players' clients. This is because processing incoming network data is expensive, and can be extremely performance heavy in mass quantities. Processing one big packet is easier than processing one thousand small packets, since there is a small CPU, network, and memory overhead for every packet sent. 10,000 packets (also counting things that might have been combined into one packet) times an overhead of 0.01 each is 100, but 1 packet times an overhead of 0.01 is still only 0.01.
This is exactly why Roblox coalesces your remote requests into one big request every second or so. It might increase perceived ping a little, but a lot less data is sent, a lot less CPU is used on the server and client, a lot less memory is required, and generally a lot less of everything is needed. You should take inspiration from this property of data transfer wherever you can.
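You can apply the same coalescing idea to your own traffic. Below is a sketch of a LocalScript that queues small updates and flushes them as one remote call per interval; the PlayerActions remote name and the one-second interval are assumptions for illustration.

```lua
-- Sketch (LocalScript): queue small updates and flush them as one request.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local RunService = game:GetService("RunService")

local actionsRemote = ReplicatedStorage:WaitForChild("PlayerActions") -- hypothetical RemoteEvent

local queuedActions = {}
local lastFlush = os.clock()

-- Call this from gameplay code instead of firing the remote directly.
local function queueAction(action)
	queuedActions[#queuedActions + 1] = action
end

RunService.Heartbeat:Connect(function()
	if os.clock() - lastFlush < 1 or #queuedActions == 0 then
		return
	end
	lastFlush = os.clock()

	actionsRemote:FireServer(queuedActions) -- one request instead of many
	queuedActions = {}
end)
```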
Conclusion & Special note on instance-based terrain, voxel & entity systems
Terrain and entities are both cases where you expect to potentially be making many thousands of changes. Often, and in the case of terrain, always, you will find that parenting things after you do the work is surprisingly fast. For example, let's say you generate some voxel-style terrain. If you parent each voxel to nil and, when you're done generating terrain, parent each voxel to the workspace, you will get better performance than if you parented each voxel to the workspace immediately. On top of that, if you parent each voxel to a folder parented to nil, and when you're done generating the terrain parent that folder to the workspace, you might see close to a 100x speedup!
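A sketch of the folder approach, with arbitrary sizes and positions purely for illustration:

```lua
-- Sketch: build every voxel under a folder that is not in the DataModel,
-- then parent the folder once at the end.
local function generateTerrain(voxelCount: number): Folder
	local container = Instance.new("Folder")
	container.Name = "GeneratedTerrain"
	-- container.Parent stays nil while we work

	for i = 1, voxelCount do
		local voxel = Instance.new("Part")
		voxel.Anchored = true
		voxel.Size = Vector3.new(4, 4, 4)
		voxel.Position = Vector3.new((i % 100) * 4, 0, math.floor(i / 100) * 4)
		voxel.Parent = container -- cheap: the container is not in workspace yet
	end

	container.Parent = workspace -- one parent set publishes the whole batch
	return container
end
```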
This is, again, due to the property of overhead described above. There is almost always overhead to having a large quantity of things, even when you least expect it. You can always expect to see better, or at least equal, performance by coalescing things together into bigger chunks. You'll never see worse overall performance.
The caveat to this is that if your chunks are too big, you'll see a lot of stuttering, which can be more distracting than overall performance being a little low. For example, say you're getting 100 FPS in a game, but every second you get a lag spike that takes you down to 1 FPS. This can be a lot more distracting, and a lot less enjoyable, than a stable 50 FPS.
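One way to strike that balance is to split big batches into smaller chunks and yield between them. This is only a sketch; the chunk size of 200 is an arbitrary assumption you would tune for your game.

```lua
-- Sketch: spread a large batch of reparents across frames to avoid one big spike.
local CHUNK_SIZE = 200 -- arbitrary; tune for your game

local function parentInChunks(parts: { BasePart }, destination: Instance)
	for index, part in parts do
		part.Parent = destination
		if index % CHUNK_SIZE == 0 then
			task.wait() -- yield a frame so the work is spread out
		end
	end
end
```

This keeps each frame's workload small, at the cost of the whole batch taking a few frames to finish.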
So, the takeaway is that you should do as much as you can while still maintaining balance. The more you practice balancing these things, the better you will become at it, and you might find that adapting your entire style in favor of these behaviors also makes it easier to develop performant games, reducing your overall time cost since you'll spend less time going back and optimizing.