Massive increase in client crash rate in the last 24 hours without updating our game

Thanks for clarifying, and thank you both @newtonmetre and @LazerPengu for the support, it is massively appreciated!

3 Likes

Hi, so did the workaround work? Or did you guys find another way around it?

1 Like

This is a problem that is currently affecting nUSA's new game Boulder County, which crashes players within 1-15 minutes of joining.

3 Likes

Yup, the workaround works. We still get rare crashes, but after our first pass at implementing the workaround, crashing stopped for the large majority of players.

2 Likes

I have no idea if this is related to this particular issue, but my game has recently started leaking large amounts of memory out of nowhere. I removed all installed plugins and even deleted everything in the game except a SpawnLocation, and there is still 300 MB of untracked memory that keeps increasing. This has resulted in increased crash rates on all mobile devices, and my player count has dropped drastically.

This is the place file I used, in case anyone is interested. The issue only seems to affect my game; if I publish this file to another place, it uses much less memory. Very confusing.
memory-leak-test.rbxl (30.2 KB)

1 Like

Adding onto this: I also went back around two months to an older version from when the game wasn't leaking memory, and when I tested it, it was leaking extreme amounts of memory and using much more memory than it did previously.
I hope this issue is fixed soon.

1 Like

Unfortunately, I think this is likely a different issue. All of the memory leaked due to this bug should be categorized under signals or one of the Lua-related categories, not untracked.

Some background information: memory allocations are categorized by tagging code scopes with a specific category. For example, when we enter the signal code, we add a scope that tags allocations as instance/signal. When a memory allocation occurs, we inspect the current scope to decide what to tag it as; if the scope has no tag, the allocation is categorized under default.

Untracked memory is even more difficult to pin down, because the fact that it doesn't appear under a specific tag or under default means it did not go through our memory allocator at all. The common case is allocations performed by libraries that Roblox uses (e.g. the system graphics drivers), which means untracked memory is highly variable across platforms and hardware configurations. (Side note: you may wonder how we can measure it at all if we cannot track it. Answer: untracked memory is whatever the operating system reports our process is using, minus the total of all tracked memory.)
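
If it helps, here is a minimal Luau monitoring sketch (the specific tags to watch are just my suggestion) that logs a few tagged categories alongside the total via the Stats service. If Signals or LuaHeap climbs steadily, you are most likely hitting this leak; if only the total climbs while the tags stay flat, the growth is probably untracked.

```lua
-- Logs a few memory categories every 30 seconds so you can see where growth lands.
local Stats = game:GetService("Stats")

local TAGS = {
	Enum.DeveloperMemoryTag.Signals,
	Enum.DeveloperMemoryTag.LuaHeap,
	Enum.DeveloperMemoryTag.Instances,
}

while true do
	local parts = { string.format("total=%.1f MB", Stats:GetTotalMemoryUsageMb()) }
	for _, tag in ipairs(TAGS) do
		table.insert(parts, string.format("%s=%.1f MB", tag.Name, Stats:GetMemoryUsageMbForTag(tag)))
	end
	print("[mem] " .. table.concat(parts, " | "))
	task.wait(30)
end
```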

Increases in untracked memory can be very difficult to narrow down. To properly investigate this issue, I suggest you file a separate engine bug report about the increase in untracked memory so that we have all of the information available to investigate.

2 Likes

Is this issue occurring for all instance signals, or for particular signals?

EDIT: Nevermind, just saw the explanation in a post above!

This issue happens with all object types and all events. It occurs whenever an event handler destroys the object its event belongs to.
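
As a rough Luau illustration of the shape to look for in your own code (the Part/Touched combination and the deferred Destroy are just stand-ins; the actual workaround posted earlier in the thread may be structured differently):

```lua
-- Any instance and any event will do; a Part with a Touched handler is only an example.
local part = Instance.new("Part")
part.Parent = workspace

-- Leaky pattern: the handler destroys the very object the event belongs to.
part.Touched:Connect(function()
	part:Destroy()
end)

-- One way to restructure it: defer the Destroy so it runs after the handler
-- has returned instead of from inside it. (A sketch, not necessarily the
-- exact workaround used by the original poster.)
part.Touched:Connect(function()
	task.defer(function()
		part:Destroy()
	end)
end)
```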

My game (Clashes in Taloqan) has also had a massive increase in server crash reports since the 22nd, and I believe we're also affected by this issue, as we've observed strange server-side memory "leaks" from connections that shouldn't be happening.

Okay, that makes more sense. Something additional I forgot to add: in the old version of my game that I tested, LuaHeap was constantly growing, which was not the case when that version was released, so I believe this issue is affecting my game as well. The extremely high UntrackedMemory is most likely just an issue with my device.

I took a look at the server crash reports for your game. The sample size is not great, but I can confirm that most recent crashes are out-of-memory errors. However, it appears that these issues ramped up somewhere between Nov 22 - 25, whereas this leak originally started with the Nov 9 release (Nov 8 for the server).

This memory leak does affect the server as well as the client, but I suspect that there is another factor in play. Have you deployed an updated version of the game somewhere around Nov 22 - 25? If not, there is likely a different bug somewhere in the engine. If you have deployed a new version recently, you can attempt to narrow down what changes you may have made that increase memory usage. (In particular, if you added any additional places where Instances are destroyed by event handlers, it might be worth inserting the workaround to see if that significantly improves the situation.)

2 Likes

Thank you for such a detailed writeup. Not every engineer takes the time to write out detailed reports, so this sort of information is very useful.

8 Likes

Has it been determined whether or not this is an engine issue? One of my games saw a historic drop in retention on Nov 22-23. I can provide more information privately if need be.

1 Like

I'm not aware of any issues starting on the 22nd-23rd. The issue in this thread started happening around Nov 9. If you believe there's another engine bug that started happening around the 22nd, please create a new bug report thread so that we can investigate it. Thank you!

Good news, everyone!

We deployed an engine fix for this leak in release 555, and the fix was enabled about 30 minutes ago. This issue will be fully resolved once all players upgrade to client release 555. Once all of your users have upgraded, you can remove the workaround code.

To be safe, wait for the official release notes for 555, or possibly even for release 556, before removing the workaround. Some platforms take longer to upgrade.
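
If you would rather not track the rollout by hand, one option is to gate the workaround on the engine release the client is running (a sketch on my part, assuming the version() string keeps its usual "0.<release>.x.y" shape):

```lua
-- Only apply the workaround on engines older than release 555.
local FIXED_RELEASE = 555

local function needsWorkaround()
	local release = tonumber(string.split(version(), ".")[2])
	-- If parsing ever fails, err on the side of keeping the workaround.
	return release == nil or release < FIXED_RELEASE
end

if needsWorkaround() then
	-- existing workaround code goes here
end
```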

8 Likes

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.