Are you able to create/share a repro place? Does this happen everywhere in your experience or only in certain areas?
Is this new behavior?
You mention you don’t use atomic models but I think I see model removals in your image, which is very puzzling. Having a repro place would help us investigate this.
Unfortunately, I don’t have a repro place. If you’d like, I can provide steps to reproduce the issue in my experience.
I’ve double-checked and run a script checking for models with ModelStreamingMode set to Atomic, and there are zero. I went with this approach because I assumed streaming models in piecemeal would be more performant than streaming them in as a batch. However, it seems like when I drive around the map, massive amounts of instances are streamed in/out at once, and there are major performance problems.
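For reference, the check I ran was roughly this shape (a minimal sketch; scanning workspace descendants is an assumption and my actual script may differ):

```lua
-- Minimal sketch of the Atomic-model check; scanning workspace descendants
-- is an assumption, the real script may differ
local atomicCount = 0
for _, instance in ipairs(workspace:GetDescendants()) do
	if instance:IsA("Model") and instance.ModelStreamingMode == Enum.ModelStreamingMode.Atomic then
		atomicCount += 1
		warn("Atomic model:", instance:GetFullName())
	end
end
print("Atomic models found:", atomicCount)
```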
I’ll also mention that I do have CollectionService tags applied to some models that are being streamed in and out. I run a couple (at most, usually) of IsA and FindFirstChild calls (through the t library) on an instance’s hierarchy to determine whether the model is “complete”. This is part of my safe streaming system, which makes sure that streamed models are accessible via dot notation.
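As a rough illustration of that completeness check (a minimal sketch; the tag and child names are hypothetical placeholders, and my real code routes the checks through t):

```lua
local CollectionService = game:GetService("CollectionService")

-- Hypothetical completeness check; the tag and child names are placeholders,
-- and the real checks go through the t library
local function isComplete(model: Instance): boolean
	return model:IsA("Model")
		and model:FindFirstChild("Body") ~= nil
		and model:FindFirstChild("InteractPrompt") ~= nil
end

CollectionService:GetInstanceAddedSignal("streamedModel"):Connect(function(instance)
	if isComplete(instance) then
		-- Safe to access children via dot notation from here on
	end
end)
```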
I don’t think there’s a performance problem with my code, and if there is something wrong with it, that’s not reflected in the microprofiler at all; almost all of my streaming in/out lag is attributed to the engine streaming instances in/out, terrain voxel writing (especially laggy as well, but maybe that belongs in a different post), etc.
I forgot that I had switched it to LowMemory mode on the first game link as a temporary solution. I’ve changed it back to Opportunistic (so you can give that another try), but it might help more to test the second game link, as it’s a newer version of my game.
I’m sorry in advance because this might be a lot of steps:
Get in the car, hit Y to start the engine, Space to disengage the parking brake, and Shift to put it in gear.
Drive around at high speeds.
Extra steps to ensure that my streaming-related code isn’t running. Perform them after spawning the car, or else you won’t be able to get in:
Hit F4 to open the ECS debugger.
In the list of systems, right-click these systems to disable them:
interactionRegistry
watchStreamTag
guardModels
rain (will improve performance)
Close the debugger by hitting F4 and continue to drive around at high speeds.
Here’s a video of me driving after performing all of these steps. The part/model removals are more pronounced when driving out of large towns, as all of the parts get streamed out.
I’ve also recorded a video of me driving on the first game link. There is some user code here that I’m not able to deal with easily right now, but the frame time shouldn’t be anywhere near this high.
We also see large chunks (sometimes quite large) of time taken to write or delete Terrain.
Are you using tags a lot in your experience? What percentage of instances that are subject to stream out do you have tags on?
Looking at the microprofiles, it looks like a lot of the removal time might be spent triggering tag removals and the scripts that respond to them.
In the last video you shared, it looks like a script called “Loader” is being invoked on stream out?
I don’t have a script named Loader in my experience! This Loader microprofiler label is something I’ve seen in several of my games, and I have no clue what it is or where it’s from. Are you able to double-check that it’s not something on Roblox’s end? I’ve searched both the Explorer and the Find All window and don’t see it.
Are you using tags a lot in your experience? What percentage of instances that are subject to stream out do you have tags on?
We have about 8000 entities simulated in the ECS and something to the tune of 100000 parts in the entire level. Every entity has a tag called “stream”, which is what lets the client know it’s a server entity it needs to account for. We don’t tag anything unnecessarily; these tagged entities might be lights, doors, glass panes, etc., all things that are important for the game to know about. So… 8%?
Even if you are disabling the scripts that respond to tags, I would still recommend not tagging things you don’t need to be informed about. If an instance with a tag is streamed out, there will be some overhead to inform CollectionService and respond to the removal, even if there are no scripts actively listening for the tag in question.
I don’t think I’ve seen that Loader script in microprofiles for other experiences, but I’ll do some research. I’m not aware of anything on our end that would behave like that.
I’ll also note that I don’t run code directly when these tags are added. Because systems in an ECS run every frame, we use an API that collects the events given to us by a connection, and then we run code on each one inside the system. So any heavy lifting I’m doing in response to these tags would still occur on Heartbeat, in the system that’s disabled.
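The pattern looks roughly like this (a minimal sketch; the real code goes through our ECS event-collection utility, and watchStreamTag here stands in for the actual system body):

```lua
local CollectionService = game:GetService("CollectionService")

-- The connection itself does no real work; it only queues the event
local pendingRemovals = {}
CollectionService:GetInstanceRemovedSignal("stream"):Connect(function(instance)
	table.insert(pendingRemovals, instance)
end)

-- A per-frame system (watchStreamTag in my case) drains the queue on Heartbeat,
-- so disabling the system removes all of the heavy lifting from the frame
local function watchStreamTag()
	for _, instance in ipairs(pendingRemovals) do
		-- Unregister the entity, clean up references, etc.
	end
	table.clear(pendingRemovals)
end
```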
If an instance with a tag is streamed out there will be some overhead to inform CollectionService and respond to the removal
It’s surprising to me that there’s this much overhead, if that’s what you think the problem mainly is. When these systems are disabled, the connections are disconnected. This might be naive, but inserting into or removing from a hashmap (or similar structure) to keep track of tag → instance mappings shouldn’t be that slow on the engine side…
The microprofiler generally tells me that “part/model removals” is taking a long time, and there are some CollectionService blips, but it’s also not telling the full story, because there’s a very large amount of unaccounted-for frame time.
And yeah, the “loader” thing is super weird and I’d love to know what that is. Maybe it’s a microprofiler issue, not sure. I’ve had tons of issues spanning many projects where the microprofiler labels hallucinate.
I’m hoping to find a solution here that doesn’t involve removing tags from instances, whether it’s a performance improvement on Roblox’s end (ideally), streaming related tips for performance, or both.
I’ve made a configuration change that should reduce the occurrence of long frames when parts are being streamed out. Can you do some more testing and see if the performance is improved?
I’m still seeing large spikes from “Replicator ProcessPackets” (~21ms) and many “meshTask”s (~10ms). In the same frame, I also have a 6ms “updateTerrainPerform” label in the render thread under Perform.
Upon startup, we use a content loading screen before spawning the player in; this runs in ReplicatedFirst. Now this error suddenly appears and prevents us from testing anything, since the check never returns true and the loading screen gets stuck at 99% loaded.
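For context, the loading screen is roughly this shape (a simplified sketch; the asset list and the percentage display are assumptions about the real code):

```lua
-- Simplified sketch of the ReplicatedFirst loader; the asset list and the
-- percentage UI update are assumptions about the real code
local ContentProvider = game:GetService("ContentProvider")
local ReplicatedFirst = game:GetService("ReplicatedFirst")

ReplicatedFirst:RemoveDefaultLoadingScreen()

local assets = workspace:GetDescendants()
for index, asset in ipairs(assets) do
	ContentProvider:PreloadAsync({ asset })
	local percent = math.floor(index / #assets * 100)
	-- The loading UI is updated with `percent` here; the error above leaves it
	-- stuck at 99% and the completion check never returns true
end
```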
Sorry, the change I made is specific to the stream-out behavior. If you are concerned about the other long scopes, you should probably file separate bug reports for those.
@unmiss I’m going to close this ticket because I believe we’ve fixed the issue related to stream out performance.
If you are concerned with the spikes in the other scopes like “Replicator ProcessPackets” I encourage you to file separate reports for the different issues. Different teams own different parts of the engine, so it is best to be able to assign specific issues (with repro steps) to the responsible teams.