Network Optimization (2019 & 2020) - Preventing High Latency & Reducing Lag

This article is out of date!

You can find the new version of this article here, which is a bit more future proof and is more accurate to how Roblox currently works. It also addresses some incorrect or inaccurate information from this article.

Hello developers! :smile: I feel like I haven’t really been seeing much content related to remotes, network throttling, etc., and I’ve both encountered and observed a lot of issues with remote data across multiple games. I thought I’d share what I know from my own experimentation and what I have seen. I apologize if some information is not fully accurate. Please correct me if I’ve gotten something wrong, and if you have information you think should be added, definitely say so!

Remote Data Transfer

Remotes transfer their data at a rate of 30 times per second (30 tps). They can transfer about 50 KB/s (kilobytes per second) per player, which works out to roughly 1706 bytes per network frame. This is the soft limit of data transfer across remotes, and hitting this limit will cause data to be queued. If this queue backs up, it can artificially inflate what I call “perceived ping.” Perceived ping is essentially how responsive your remotes appear. If you have too much data queuing up, actions appear to happen slower and slower, eventually taking seconds or more.

Preventing Remote Queuing

Preventing data queuing is a super important topic for all games. I’ve seen many games that run into data queuing issues that look like server lag even though the server is running at a full 60 Hz. Preventing this is thankfully pretty simple: just don’t send too much data.

First, make sure you aren’t making too many remote requests. You can technically make more than 30 requests per second and still be fine, since data can be batched into one network frame. What matters is that you stay under the ~1706 byte limit within one network frame. Unfortunately, there are no tools available to measure how much data you’re transferring across your remotes, so it’s important that you estimate accurately enough to stay under this limit.

Gauging Remote Data Size & Staying Under the Limit

Usually it’s not important to try and find the size of data accurately. We can use a few simple rules to help us stay under the limit:

  1. Do not make too many requests. Limit frequent or per-frame requests to very small amounts of data, such as a few numbers or short strings.
  2. Only send large tables once. After you’ve sent your table, you can sync values as they are changed within the table. For example, if the player has a list of items they own, you should only send this table once at the start of the game, and then tell the client to add or remove certain entries.
  3. Create item/object registries. You should give certain objects and items unique identifiers or names which the client can use to identify them. You can send these items in one big list, or keep them in shared modules between the client and server.
  4. Do not send a lot of stuff at once unless the player has just joined. Try to spread out lots of information over a few seconds if you need to.
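As a sketch of rule 2, the server could send the full table once on join and only small deltas afterwards. The RemoteEvent name `InventoryChanged` and the inventory structure here are hypothetical examples, not anything built into Roblox:

```lua
-- Server-side sketch: send the full table once, then only deltas.
local Players = game:GetService("Players")
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local InventoryChanged = ReplicatedStorage:WaitForChild("InventoryChanged")

local inventories = {}

Players.PlayerAdded:Connect(function(player)
	inventories[player] = { "Sword", "Shield" }
	-- One large transfer, only on join:
	InventoryChanged:FireClient(player, "full", inventories[player])
end)

local function addItem(player, itemName)
	table.insert(inventories[player], itemName)
	-- Small delta instead of resending the whole table:
	InventoryChanged:FireClient(player, "add", itemName)
end
```

The client keeps its own copy of the table and applies `"add"`/`"remove"` messages as they arrive.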

Extra (estimating actual data size)

For those who want to gauge data size as accurately as possible while staying safe, we can make some loose assumptions about how Roblox transfers data types. We can expect numbers to be transmitted as 64-bit floats (8 bytes) and strings to be sent as a list of bytes. We can assume tables and arrays use the format commonly seen in other areas of Roblox: 4 bytes for the length, then 1 + keyLength + valueSize bytes per entry (for arrays, the key length is zero). We’ll want to give some extra leeway for other information that might be included, so we can give each value, as well as the overall request, a few extra bytes (e.g. 2).
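Those assumptions can be turned into a rough estimator. The byte counts below are loose guesses per the paragraph above, not Roblox’s actual wire format:

```lua
-- Rough, pessimistic size estimate based on the assumptions above.
local PER_VALUE_OVERHEAD = 2 -- arbitrary leeway per value

local function estimateSize(value)
	local t = typeof(value)
	if t == "number" then
		return 8 + PER_VALUE_OVERHEAD -- assume a 64-bit float
	elseif t == "string" then
		return #value + PER_VALUE_OVERHEAD -- assume one byte per character
	elseif t == "boolean" then
		return 1 + PER_VALUE_OVERHEAD
	elseif t == "table" then
		local total = 4 -- assumed length prefix
		for key, entry in pairs(value) do
			local keyLength = (type(key) == "string") and #key or 0 -- zero for array indices
			total += 1 + keyLength + estimateSize(entry)
		end
		return total + PER_VALUE_OVERHEAD
	end
	return PER_VALUE_OVERHEAD -- unknown type: count only the overhead
end

print(estimateSize({ health = 100, name = "Rig" }))
```

If the estimate for one network frame approaches ~1706 bytes, split the payload across frames.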

Replication Lag

Replication lag, similar to remote queuing, causes a ton of FPS lag, latency, or both. This frequently occurs in games which build lots of terrain at once, such as Eclipsis, which to this day struggles with replication lag. Replication lag can be caused by mass ancestry or property changes, mass deletion or creation of instances, and other operations as well.

Mitigating Replication Lag

Don’t Initialize Properties After Parenting

This means don’t use the second (parent) argument of Instance.new, and don’t set a lot of properties after you’ve parented instances. That argument is actually deprecated for this very reason. Setting a lot of properties on a replicable instance can cause a ton of replication lag, since each property change is individually replicated.

Instances which are parented to nil, Cameras, ServerStorage, ServerScriptService, and other locations which do not replicate to the client also do not send property and ancestry changes to the client. This is important because it prevents the client from being overloaded with too many property changes before you’re done creating your objects.
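In practice that means configuring everything before the instance is parented anywhere replicated. A minimal sketch:

```lua
-- Build and fully configure the instance while it has no parent,
-- so property changes aren't replicated one by one.
local part = Instance.new("Part") -- note: no second (parent) argument
part.Size = Vector3.new(4, 1, 4)
part.Anchored = true
part.Color = Color3.fromRGB(120, 120, 120)
part.CFrame = CFrame.new(0, 10, 0)
-- Parent last: the instance replicates once, with all properties set.
part.Parent = workspace
```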

Group Your Instances

When creating, or even deleting, objects such as terrain, buildings, etc., you should group many objects under container instances (such as Models or Folders) before parenting. Parenting a lot of objects individually can cause a lot of replication lag, just as mentioned above. Generally it’s important that you balance the number of groups you’re parenting against the number of instances within them.

Yielding (e.g. waiting for RunService.Heartbeat) between groups is an extremely effective way of preventing crashes and freezes when replicating a ton of instances. Sending too many groups, or groups that are too large, will freeze the client or cause high latency. Grouping instances effectively prevents the player from experiencing extremely low FPS (which slows down the rate at which replication data is processed, causing even more lag in the process) and prevents high latency from too much data being transferred at once.
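Combining both ideas, batched parenting with a Heartbeat yield between batches might look like this. The batch size of 50 is an arbitrary example to tune, not a recommended value:

```lua
-- Sketch: parent parts in batches, yielding one Heartbeat per batch.
local RunService = game:GetService("RunService")
local BATCH_SIZE = 50 -- arbitrary; balance group count vs. group size

local function parentInBatches(parts, destination)
	for i = 1, #parts, BATCH_SIZE do
		local group = Instance.new("Model")
		for j = i, math.min(i + BATCH_SIZE - 1, #parts) do
			parts[j].Parent = group -- not replicated yet: group has no parent
		end
		group.Parent = destination -- the whole batch replicates together
		RunService.Heartbeat:Wait() -- let the client catch up between batches
	end
end
```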


Can you elaborate here? I’m not sure what you’re referring to.

Wouldn’t the client then be able to modify this table and change the UID(s) to confuse the server?


The wiki used another notation (KB/sec), so I’m assuming they meant kilobytes per second.

Could you show what experiments you have done and how we can replicate them?

If you could show us where you got this data, it would help a lot!

MicroProfiler might include some labels about packets.
sala can likely tell the data sizes for you.


Thankfully the rate of Heartbeat is easy to verify. You can press Shift+F5 to view the client heartbeat, and the server heartbeat is shown in the dev console under server jobs.

I may actually have made a mistake when testing the 30 tps rate. I used a RemoteFunction and measured the time difference from both the client and server. The thing is, it must return its result before it stops yielding, so this may actually have taken two network frames (which run at 60 tps). This technically does put you at a limit of 30 tps for remote functions, but I’ll need to test RemoteEvents to be sure.
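For anyone wanting to reproduce this, one way to measure a RemoteEvent round trip is to stamp the send time and have the client fire it straight back. `PingEvent` is a hypothetical RemoteEvent in ReplicatedStorage, and a matching client LocalScript is assumed to echo it:

```lua
-- Server-side sketch of a round-trip measurement.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local PingEvent = ReplicatedStorage:WaitForChild("PingEvent")

-- The client's LocalScript is assumed to do:
--   PingEvent.OnClientEvent:Connect(function(sentAt)
--       PingEvent:FireServer(sentAt)
--   end)

PingEvent.OnServerEvent:Connect(function(player, sentAt)
	-- sentAt was stamped with the server's clock, so the difference
	-- is the full client round trip as seen by the server.
	print(("round trip: %.1f ms"):format((os.clock() - sentAt) * 1000))
end)

local function ping(player)
	PingEvent:FireClient(player, os.clock())
end
```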

Additionally, with the last bit of information there, I do make some assumptions based on what staff have said previously. This is based on the fact that 1. grouping instances causes less replication lag, which probably means the whole instance is serialized, and 2. changing properties mirrors this effect. An alternative explanation could be that property changes are just really slow, but I don’t necessarily think this is why.

In terms of experimentation, I’ve done a lot to measure how the speed of remotes changes over time. I’ve basically sent really large tables with a bunch of random numbers. Generally, from what I’ve found, remotes will get slower and slower the longer you send a large amount of data, which is due to queuing. I’ve also done some tests to see how laggy different replication types are. Generally, parenting super large instances in one group causes really bad but really short lag, and parenting a lot of instances individually causes very bad FPS lag over a long period of time (which is why remote transfer rates can get really slow as well).

Also you’re correct that the project linked probably does have accurate data size. I’ll look into that more as it seems like a pretty interesting project.


Sorry to bump this, but I heard that encoding tables with HttpService and then sending them through remote events reduces the amount of transfer. Is this true?


The short answer is yes and no.

As far as I know, that shouldn’t usually be the case. When you send a table over a remote normally, it gets serialized, and depending on how it gets serialized, it can take up a larger or smaller amount of space. That means values like numbers get serialized compactly (similar to what the new string.pack/string.unpack functions do).

JSON is not a particularly compact storage format (which is why people often write special encodings for data they save in datastores), and it can inflate size because certain characters get escaped with extra characters.

If you really want the best space efficiency when sending stuff over remotes, you should create your own encoding and decoding. Afaik, strings are the most compact way to do this, and for special types such as instances you could, for example just pass them in the order they’re referenced and then keep some sort of type identifier or tag for what stuff needs to get referenced. Then on your decoding end you can just fill in each reference and “unpack” your tagged data.

(If you wanted that’d even allow you to send very basic metatables, probably not functions though, although, it’s certainly possible)
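As a small sketch of such a custom encoding, string.pack/string.unpack can squeeze a position into 12 bytes (three 32-bit floats) instead of three full doubles plus per-value overhead. The precision loss from using 32-bit floats is the trade-off here:

```lua
-- Sketch of a custom binary encoding using string.pack/string.unpack.
-- "<fff" = little-endian, three 32-bit floats (12 bytes total).
local function encodePosition(position)
	return string.pack("<fff", position.X, position.Y, position.Z)
end

local function decodePosition(data)
	local x, y, z = string.unpack("<fff", data)
	return Vector3.new(x, y, z)
end
```

You would send the resulting string over the remote and decode it on the other side.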


How expensive is sending instances via a remote event? I know it sends a reference to the instance you fire, but is that expensive?

No, that shouldn’t be the case. Instances replicating to the client certainly are expensive, but sending an instance reference over a remote is probably closer to sending a short string in terms of efficiency (it might even be a tiny bit faster).


Perhaps those sections require an update

Credits to @Corecii

It seems like the limits / capabilities of Remotes have been increased
