Server Memory Allocation Inaccurate

I’m fairly confident, unless I missed something, that server memory allocation isn’t being handled properly for our servers, which average roughly ~100 players.

Our servers seem to be crashing at 8 GB of memory; with our high player counts, memory can easily rack up in our complex game. This is following the formula and update posted in late 2024.

Per that formula, we should be getting roughly 6.4 GB + 100 MB * ~105 peak players = ~17 GB of server memory. I also wonder whether the CPU allocation (although stated for certain experiences) works.

Expected behavior

Server memory to follow the formula stated (i.e., more server memory).

3 Likes

Hey @Dev_Badge2. Could you provide the place id in question? It’d help with the investigation.

For context: in the 6.4GB + 100MB * peak_players formula, peak_players refers to the maximum number of players this specific RCC instance has seen since its start. Might it be the case that your servers run out of memory before that many players join?
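To make the semantics above concrete, here is a minimal sketch of how the limit described in that formula would grow with the instance's peak occupancy. The constant names and the function are illustrative assumptions, not Roblox's actual implementation:

```python
# Sketch of the posted formula: limit = 6.4 GB + 100 MB per peak player.
# Names/units are assumptions for illustration only.

BASE_GB = 6.4        # base allocation
PER_PLAYER_GB = 0.1  # 100 MB per player

def memory_limit_gb(peak_players: int) -> float:
    """Memory limit for an RCC instance given its peak concurrent players."""
    return BASE_GB + PER_PLAYER_GB * peak_players

# peak_players is the maximum seen since the instance started,
# so the limit only ever grows as players join.
peak = 0
for players_now in [10, 40, 85, 60, 105]:
    peak = max(peak, players_now)

print(round(memory_limit_gb(peak), 1))  # 6.4 + 0.1 * 105 = 16.9 GB
```

Note that the limit never shrinks when players leave, since it tracks the running peak rather than current occupancy.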

1 Like

Sure thing, I’ll send the placeID, along with whatever else through a message.

1 Like

Thanks for providing the necessary details. I checked the telemetry and didn’t find many OOM terminations for the place id you shared. I also verified that the memory limit was recalculated correctly for instances of this place, keeping up with the number of players as they join. If you still encounter problems, additional context would be helpful. For instance, if you see an instance terminated, any details that would help identify it would be appreciated, such as the date and time of start and/or termination, known player ids who joined the server, etc.

Hey, I am also experiencing this issue as highlighted in my thread: Servers crashing despite being below the server memory limit

The issue still appears to be recurring.

Our most recent server crashed at approximately 1:17 PM CST 7/7/2025
The server had reached a peak player count of at least 85 players before hard-crashing

The server crashed roughly within the same memory range as referenced in Hyperant’s bug report.


2 Likes

@Dev_Badge2
I pulled up logs for that RCC instance. There was a crash; however, there’s no evidence that it was due to memory consumption, as the memory limit was correctly updated. The bad news is that it’s not easy to determine why it crashed. I’ll see if I can get more clarity on that.

@Hyperant
Feel free to send me the place id in question. I’ll pull up telemetry for your place as well. Any details (like crash date/time and player occupancy at that moment) would help the investigation.

1 Like

One more server crash you could take a look at.

Same PlaceID
9:13 PM CST (7/8/2025)



2 Likes

Game ID: 2498556598
Place ID: 7230977870

I deactivated the server limit on this particular server just a few minutes ago and, sure enough, it crashed at around 24 players at just below 8,100 MB of server memory.
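As a back-of-the-envelope check (assuming the posted formula applies), a crash at that point sits well below what the formula predicts for 24 players, which is consistent with some fixed ~8 GB cap being hit instead:

```python
# Comparing the reported crash against the posted formula.
# Observed values are taken from the report above; the conclusion is a
# plausible reading, not a confirmed diagnosis.

base_gb = 6.4
per_player_gb = 0.1      # 100 MB per player
peak_players = 24        # peak reported for the crashed instance

expected_limit_gb = base_gb + per_player_gb * peak_players
observed_crash_gb = 8.1  # "just below 8,100 MB"

print(expected_limit_gb)  # 8.8 GB expected under the formula
print(observed_crash_gb < expected_limit_gb)  # True: crashed below the formula limit
```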

2 Likes

Thanks to both of you for the additional context.

It turned out that, in some cases, a different tool limited memory consumption at another level, which is why I didn’t see any issues at the engine level. I deployed a configuration change that should have addressed that (only for RCC instances started after 3 PM PST today). Please let me know if you still see the problem with new instances.

Is there any chance your fix is causing this issue?

It began around 3 PM PST, maybe a little later.

1 Like

@bvetterdays
I can see this error started to occur at around 4:50 PM PST. I’m looking into it and have also notified the on-call team.

2 Likes

It would appear that solved the issue!

2 Likes

It was a separate issue and should be solved by now.

1 Like

Our servers are doing well now too, thank you for your assistance.

1 Like

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.