Hi,
In a game I work on, we have noticed a massive surge in "Total Data KB/s" on the server side (sometimes reaching up to 40 MB/s), correlated with replication/streaming tags in the server-side MicroProfiler. Alongside those, we see a new tag we're unfamiliar with, SysEventLoop.runOnce(), under the "RbxTransport" group; it's present in 100% of our frames and we couldn't find any documentation for it.
We noticed this issue after our most recent game update. However, we have been unable to establish any correlation between the update itself and the issue; instead, we suspect that the increased player count from update hype may be part of the problem. That said, we have seen the problem appear on certain occasions even in low-player-count servers (as low as 8 players), so we can't be sure.
On the client side, the "In Data" counter in the Shift + F3 menu has also increased, reaching up to 200 KB/s at times, well above our usual baseline.
The game affected is our SCP: Site-81 Roleplay.
I’ve attached a series of server-side microprofiler logs in the private content section.
Thank you for the report, sorry to hear you're having issues with unexpectedly high send/recv. You mentioned you noticed this after an update to your game; what was in the update? If you compare bandwidth usage between game versions in Studio, do you see a difference? And lastly, have you tried reverting to the previous version to see if that fixes the issue?
The update was pretty large - it was a major update that included a variety of additions, notably new in-game mechanics and important bugfixes/changes. Personally, I don’t believe these contribute at all to the issue; however, I’ll include the update log along with the other things I’ll send over.
We are currently on version 0.3.0, with the previous version being 0.2.1. We created a temporary place in the game that has version 0.2.1 inside of it, and requested some players join it to assist with data collection.
Although the issues weren't as consistent as in the main game, that can likely be explained by player count: the test had ~20 players, whereas a full server holds 50. Regardless, the problem shouldn't occur at any player count. We made sure to check RemoteEvent incoming data as well as the ScriptProfiler and can confirm that nothing in either appears abnormal: incoming data from RemoteEvents averaged 5 KB/s, and the ScriptProfiler showed minimal computation time for scripts.
I’ve been able to capture a number of microprofiler logs, all of which I’ll attach in the private message of this bug report. Similar results came up.
Thank you for the additional details. Just to confirm, when was the first date (and time if you know) that you noticed this? And have you figured out any way to reliably reproduce this? I.e. does it occur when a player joins, when they move to specific part of the map, etc?
We noticed this just before the update released, and especially after it released, with the sudden rise in player count.
Version 0.3.0-alpha was released on February 21st, 2026, at 5:18 PM UTC. We first noticed the issue during the period between Feb 19th and Feb 21st.
Unfortunately I was not able to find a consistent way to repro this… it sort of just happens in general. Sometimes, even in a server with 1 player (just myself), I can experience increased network usage for no clear reason. However, it's a lot more common in fuller servers.
Hi there! In internal metrics, we are seeing that the RCC claims it is sending ~33KB/s of MeshParts per player (after join, during play) per connection.
Do you expect this MeshPart bandwidth usage? Maybe you guys added some new models, or changed the Streaming radius or something like that?
MeshParts shouldn’t be using that much, no.
The building team did make some changes to rooms, replacing some repeated Parts with MeshParts (all with the same MeshId, which should improve performance due to caching, iirc?), but nothing major enough to cause the immense network load we're experiencing.
As for streaming settings, those have remained constant since the release of the game.
The building team is currently working on reviewing all MeshParts to ensure they have proper CollisionFidelity, RenderFidelity, etc. to try and minimize MeshPart lag.
But I don’t understand how the server is sending out 33KB/s of them per player after join. I’d expect larger traffic on join, but not that much during gameplay.
Most zones are 500x500 studs, and StreamingMinRadius = 200 and StreamingTargetRadius = 500.
RenderFidelity / CollisionFidelity / other settings were tweaked and published to the game as part of update 0.3.1. The performance impact on the server was negligible; the issue remains.
@YAYAYGPPR Prior to making my own report, I wanted to validate something: this has been an issue for you and your game for around a week, and mainly correlates with player count?
If so, I have experienced very similar issues. For the past week I have experienced horrendous ping spikes: ping will go up to around 1,000 ms for a moment, then go back down.
This seems to be relatively new. It seems I cannot attach profiling dumps, but what you're saying aligns with what I have also been experiencing.
There doesn't appear to be any outlandish data usage, or anything like that.
I can make another report if needed, including profiling; I'm just not entirely sure how I'd go about capturing the data you've presented via the profiler, as I'm not super adept with it.
Touching on the conversation that took place about StreamingEnabled: this is occurring for me without StreamingEnabled on. Additionally, we've made no changes to our maps in these past weeks, yet the issue remains.
The issue we’ve experienced is more closely related to massive data amounts being sent out from the server to the clients, coinciding with server heartbeat drops.
Yeah sounds fair, since your issue, despite maybe being related, has its own uniqueness.
Most of my profile data was sent via the private message that bug reports have, so it's visible exclusively to Roblox engineers, as it includes specific server data. You can do the same for yours: capture MicroProfiler logs that coincide with your issue, zip them into a .zip file, and include that in your bug report.
Our game uses StreamingEnabled. I've been considering disabling it and instead writing a script that culls Models that are too far away, to test whether it's a streaming issue, but I haven't had the time.
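For anyone curious what I mean by a culling script, here's a rough, untested sketch of the idea: the server periodically reparents distant Models out of the Workspace so they stop replicating, which crudely mimics streaming. The folder name "Zones", the distance threshold, and the update interval are all placeholders for illustration, not our actual setup.

```lua
-- Hedged sketch (not our production code): server-side distance culling
-- as a stand-in for StreamingEnabled. Assumes cullable models live under
-- a workspace folder named "Zones" (hypothetical name).
local Players = game:GetService("Players")
local ServerStorage = game:GetService("ServerStorage")

local CULL_DISTANCE = 600 -- studs; tune per zone size

local zonesFolder = workspace:WaitForChild("Zones")
local storage = Instance.new("Folder")
storage.Name = "CulledModels"
storage.Parent = ServerStorage

-- Remember each model's pivot so we can decide visibility after it
-- has been moved out of the Workspace.
local tracked = {}
for _, model in zonesFolder:GetChildren() do
	if model:IsA("Model") then
		tracked[model] = model:GetPivot().Position
	end
end

local function anyPlayerNear(position)
	for _, player in Players:GetPlayers() do
		local character = player.Character
		local root = character and character:FindFirstChild("HumanoidRootPart")
		if root and (root.Position - position).Magnitude < CULL_DISTANCE then
			return true
		end
	end
	return false
end

task.spawn(function()
	while true do
		for model, position in tracked do
			if anyPlayerNear(position) then
				model.Parent = zonesFolder -- replicate to everyone
			else
				model.Parent = storage -- stop replicating entirely
			end
		end
		task.wait(2) -- coarse interval to keep server cost low
	end
end)
```

Note this is all-or-nothing per server (unlike real streaming, which is per-client), so it's only useful as a diagnostic to see whether the bandwidth tracks with how much of the map is replicated.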
@GroupyClumpy / @asdfiji123
Hi, have there been any updates on the matter? I’ve provided additional logs in the private message associated with this report.
Please let me know if any additional information is necessary.
We've also noticed this on our game - also starting around Feb 19th/20th. We had 1-2 publishes during this period but they were minuscule in nature - literally just removing an unused ModuleScript.
Not sure if this is a side effect of streaming, as the main thing bringing this to our attention is the uptick in reports of streaming being much slower. We initially weren't sure whether this report was related; however, it matches several things we are seeing (with respect to the bandwidth, SysEventLoop.runOnce() and the timeframe).
SysEventLoop.runOnce() appears to occur in games without StreamingEnabled as well; refer to my earlier screenshot of the profiler, which was taken in a game without StreamingEnabled.
If either of you two do decide to disable StreamingEnabled, and notice better results, please let me know! My issue could be something different.
We could not find anything else from the game causing this issue.
The game itself is too big for us to disable streaming so we cannot do that as a temporary fix.
Was linked here by @NodeSupport; our game is also experiencing SysEventLoop.runOnce() during our community events. While we formerly handled 250+ players in one server flawlessly, we are now having to split into two 60v60 player servers and still experience absurd lag. I can also confirm this occurred around Feb 19th, despite the game not receiving an update around that time.
Hi @YAYAYGPPR, thanks for the report and the microprofiles. A couple of questions about the game:
I see nav mesh generation taking extremely long in a few of the traces (212 ms in one) - is this expected on the server side? This could be taking compute time away from other work and causing the spikes.
I also see a few chunks of streaming work taking a very long time in the latest profile. Does your game have any large Atomic or Persistent models anywhere, in terms of the number of instances in the models? If so, making those models smaller or Nonatomic can prevent lag spikes from streaming processing and replicating them.
Is there anything else unusual about the game from a streaming perspective?
No. Usually there’s only ~4 pathfinding NPCs that actively target players (thus repeatedly re-computing paths, but that shouldn’t consume too much server performance). Occasionally there are more (up to 10) but very rarely.
I don’t understand why streaming would take up so much work on the server. We have some persistent models, none of which are large. We could reduce the number of persistent models, but I don’t think they are anywhere near large enough to cause the immense load we’ve been seeing. We have already done multiple optimization steps.
No, I don't think so. But to elaborate on our game: it is separated into zones, all of which are distinct and accessible via teleports. There are four of these main zones (roughly 400 by 400 studs) and a couple smaller ones. All characters use the default streaming mode, and I cannot come up with anything that should be impacting the server this much.
We are also experiencing critical performance issues on Case Unboxing
Incident
We had several complaints about server performance, and during routine investigation we could not find anything related to our Luau stack (neither memory runaways nor script performance) that would indicate an issue on our end. Our MicroProfiler is completely flat when these lag spikes happen.
When this does happen, we notice quite a few things:
Physics replication completely freezes for 4-5 seconds
Humanoid replication completely freezes for the same time
Developer Console diagnostics graphs stop updating, and when they resume there is a discontinuity in the graph, meaning the line is not connected. This happens to all graphs, regardless of what they measure.
In our analytics, we noticed a jump from under 1 ms to over 1,000 ms at 8AM Pacific on 3/10. The metric is named Allocate_Bandwidth_And_Run_Senders. As far as I understand, this is not something we can control with our own code.
FWIW, we only have 4 moving, tumbling, unanchored parts in our map. Their network ownership is explicitly on the server for security purposes. Everything else is anchored.
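For clarity, pinning ownership to the server is roughly the following (the "TumblingParts" folder name is a placeholder, not our actual hierarchy):

```lua
-- Sketch of how the unanchored parts are pinned to server ownership.
-- Passing nil to SetNetworkOwner gives the server physics authority,
-- so clients cannot simulate (or spoof) these parts' movement.
local folder = workspace:WaitForChild("TumblingParts") -- hypothetical name

for _, part in folder:GetDescendants() do
	if part:IsA("BasePart") and not part.Anchored then
		part:SetNetworkOwner(nil)
	end
end
```

The point being: with only 4 server-owned moving parts, physics replication traffic should be tiny, which is why the bandwidth metric jump stood out to us.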
Microprofiler shows SysEventLoop.runOnce at >471ms
This is the biggest red flag and what led me to search for this post. When the issue happens, we see SysEventLoop.runOnce go from 2 ms all the way to 471 ms. There seems to be a direct correlation between the lag and this specific timer blowing up in frame time.