At what point does sending large data cause lag?

I currently have this setup:

  1. Client sends data to the server via a RemoteEvent
  2. Server checks the data size
  3. Server validates the data
    3.1 If the data size is larger than expected, the server won’t send out the data.
    3.2 If the data is below the max size limit, the server will send out the data.
  4. Clients receive the data
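
For context, here’s a minimal sketch of what I mean for steps 2–4 on the server; the RemoteEvent name, the size limit, and the validate function are just placeholders, not my actual code:

    local ReplicatedStorage = game:GetService("ReplicatedStorage")
    local HttpService = game:GetService("HttpService")
    local Players = game:GetService("Players")

    local DataEvent = ReplicatedStorage:WaitForChild("DataEvent") -- placeholder RemoteEvent
    local MAX_DATA_SIZE = 1000 -- placeholder character limit for the encoded payload

    local function validate(data)
        -- game-specific sanity checks go here (types, ranges, etc.)
        return typeof(data) == "table"
    end

    DataEvent.OnServerEvent:Connect(function(sender, data)
        -- 2. check the data size (rough estimate via the encoded length)
        local ok, encoded = pcall(HttpService.JSONEncode, HttpService, data)
        if not ok or #encoded > MAX_DATA_SIZE then
            return -- 3.1: too large (or not encodable), don't send it out
        end

        -- 3. validate the contents
        if not validate(data) then
            return
        end

        -- 3.2 / 4. send the data out to the other clients
        for _, player in ipairs(Players:GetPlayers()) do
            if player ~= sender then
                DataEvent:FireClient(player, data)
            end
        end
    end)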

However, I see a potential issue between #1 and #2: if an exploiter sends a huge amount of data via the RemoteEvent and the server processes that data, will the server lag, or will the remote event stall? I’m pretty sure the clients that will receive the data won’t get the lag, since the server still has to authenticate the data first. I think of this like downloading a huge file off the internet: most of the bandwidth gets allocated to that process and everything else slows down.

I wouldn’t think the exploiter would be able to send enough data to cause the server to lag. The exploiter could, however, spam the server with data, firing the remote event multiple times per second. In that situation, the part that would cause the server to lag would be your data validation, so you should add a rate limit to make sure that clients are not spamming data.
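
Something as simple as a per-player counter between steps 1 and 2 would do; a minimal sketch, where the limits and names are arbitrary:

    -- a very simple fixed-window rate limit; the numbers are arbitrary
    local MAX_REQUESTS_PER_WINDOW = 5
    local WINDOW_SECONDS = 1

    local requestCounts = {} -- [player] = {count = n, windowStart = t}

    local function isRateLimited(player)
        local now = os.clock()
        local entry = requestCounts[player]
        if not entry or now - entry.windowStart >= WINDOW_SECONDS then
            requestCounts[player] = {count = 1, windowStart = now}
            return false
        end
        entry.count += 1
        return entry.count > MAX_REQUESTS_PER_WINDOW
    end

    -- call isRateLimited(sender) at the top of OnServerEvent and return early
    -- if it comes back true; clear requestCounts[player] on PlayerRemoving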

I ran some tests with this to see how much the server would stall when a remote event was fired. I started by sending “ack” as a test, and as expected, there was basically no change:
[screenshot: time between server updates]
Up next, I generated a 30,000 character long string of random numbers and sent that. Still pretty much no difference:
[screenshot: time between server updates]
Finally, I tried generating half a million numbers in a string and sending that. It’s worth noting that I crashed Studio just trying to do this, and when it came to actually sending, the result was…
Nothing. The client crashed every time just generating the text, so it never sent at all. I think you’re in the clear.
EDIT: I managed to send 300,000 characters. It took almost a minute to generate, but after it stabilized, I sent the text and:
[screenshot: time between server updates]
Nothing. You’re definitely in the clear.

May I ask what those numbers represent? I currently use #HttpService:JSONEncode(Data) to check the data size, which is probably not the right method.

Oh, right. The numbers are the time (in seconds) between updates on the server. 0.033 is normal.
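
If you want to reproduce the numbers, something along these lines works; the RemoteEvent name is a placeholder, and Heartbeat already hands you the time since the last step:

    -- server Script: print the time between simulation steps
    local RunService = game:GetService("RunService")

    RunService.Heartbeat:Connect(function(deltaTime)
        print(string.format("%.3f", deltaTime))
    end)

    -- LocalScript: build a long string of random digits and fire it
    local ReplicatedStorage = game:GetService("ReplicatedStorage")
    local DataEvent = ReplicatedStorage:WaitForChild("DataEvent") -- placeholder RemoteEvent

    local digits = table.create(30000)
    for i = 1, 30000 do
        digits[i] = math.random(0, 9)
    end
    DataEvent:FireServer(table.concat(digits))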

There shouldn’t be any problems. Roblox wouldn’t be so careless as to let something like this happen, and if they were, it would have affected multiple games by now.

The string limit is 250k characters, and tables have size limits, as do remote objects. I would assume that even if an exploiter sent the maximum amount of data they could, it wouldn’t be enough to bother the server.

Why don’t you use a RemoteFunction instead?

A RemoteFunction actually does the work of sending data to the server and expecting it back on the client again. I’m pretty sure the client would lag if it tried to send an excessive amount of data (it would have to be an Alienware to even store that). Have you ever seen an exploiter send an excessive amount of data through a RemoteEvent and successfully crash the server?

Wiki’s Description About One Limitation


Bandwidth is the limiting factor when sending the data.

For most devices and connections, a Roblox server can only send and receive about 50 KB/sec of data to each client, not including physics updates. That said, it is highly recommended to maintain a lower value during normal gameplay as sudden spikes in data transfer can cause lag and an overall subpar user experience.

Every time a remote event is fired or a remote function invoked, a packet of data is sent over the network between the server and client. This packet is counted towards the 50 KB/sec limit. If too many packets are sent (remote event or function used often), or too much data is sent per packet (lots of large arguments to the event or function), the latency of the connected clients can be adversely affected.
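
As a rough back-of-the-envelope check against that 50 KB/sec figure (the payload and fire rate below are made-up numbers, just to show the idea):

    local HttpService = game:GetService("HttpService")

    -- made-up example payload and fire rate
    local payload = {x = 12.5, y = 3.2, z = -7.8, action = "wave"}
    local bytesPerFire = #HttpService:JSONEncode(payload) -- rough size estimate
    local firesPerSecond = 10

    print(bytesPerFire * firesPerSecond, "bytes/sec out of a ~50,000 bytes/sec budget")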

I forgot to mention that I will not send the data back to the client who sent it, but to the other clients in the server.

Well, I’ve been told not to just forward the data sent from a client to all the other clients, so I must do checks on the server (I learned that the hard way, too).

Oh, then I did not notice “clients”. Perhaps rewording it to “all other clients” would be easier to interpret.

Sanity checks. Sanity checks. Always do that before sending.

Clients can send 60 KB/s through remotes (not sure if this limit is per remote or overall). I do believe there is some kind of silent rate limit done on the backend to prevent remotes from being spammed. That being said, your communication model looks relatively fine.

If you’re concerned about spam from an exploiter, you can implement your own rate limiting. One method is the “leaky bucket” rate limit. This adds a step between 1 and 2 where the server checks the “bucket”: a client can send a maximum of n requests, and each of them (not all of them - each) decays after s seconds. Another method is just a standard limit of n requests per second (or minute).
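
A minimal sketch of that first method, with n, s, and the names being placeholders:

    local MAX_REQUESTS = 10 -- n
    local DECAY_SECONDS = 5 -- s

    local buckets = {} -- [player] = array of request timestamps

    local function allowRequest(player)
        local now = os.clock()
        local bucket = buckets[player] or {}

        -- keep only the timestamps that haven't decayed yet
        local fresh = {}
        for _, t in ipairs(bucket) do
            if now - t < DECAY_SECONDS then
                table.insert(fresh, t)
            end
        end

        if #fresh >= MAX_REQUESTS then
            buckets[player] = fresh
            return false -- bucket is full, reject this request
        end

        table.insert(fresh, now)
        buckets[player] = fresh
        return true
    end

    -- this becomes the new step between 1 and 2: if allowRequest(sender) is
    -- false, drop the request; clear buckets[player] on PlayerRemoving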

The size of the data you send becomes a general issue at some point, exploiter or not. At that point, you’ll want to start considering what data you’re sending, how, why, and whether there’s a better alternative. That being said, remotes can handle more than you think; you should be fine, all things considered.
