I don’t know how to reproduce this issue. It is seemingly random and does not seem to respect documented datastore budgets.
Expected Behavior
I expect my datastore code to function as it did 24 hours ago, with no rate limiting when I am not hitting the documented limits.
Actual Behavior
Datastore write requests are failing frequently even though the appropriate datastore budget is well above 0.
Error:
502: API Services rejected request with error. HTTP 503 (Adaptive Concurrency Limits Enforced)
“Adaptive Concurrency Limits Enforced” is new and undocumented. This began happening very frequently November 23rd, 2022 (yesterday). Our players are experiencing data rollbacks due to the frequency of this happening, despite not hitting any documented limits.
Issue Area: Engine
Issue Type: Performance
Impact: Very High
Frequency: Often
Date First Experienced: 2022-11-23 19:11:00 (-07:00)
I am also experiencing this in my game; it gives me that error when trying to load a player's data, which never happened before.
I hope this can be fixed, or at least that we are told what the new limits are; it's hard to keep up when there's no documentation about them. I thought I was the only one experiencing it since there wasn't anything about this issue here, but I saw that some other games, such as Grand Piece Online, were having the same issue.
Hey @Fm_Trick, I've forwarded this to the team that manages datastores. Could you give a rough impression of how many players this is affecting in your game over a given time span?
I am not entirely sure, as I'm currently relying on a rate-limited proxy to receive these save-fail messages. I'm seeing 5-25 messages with this error every 10 minutes (with about 8k concurrents at the time); however, the interval is suspiciously exactly 10 minutes, so I suspect that's the rate limiting I mentioned and the real number is much higher. I have received an increase in reports of progress loss / data rollbacks since this issue started happening.
Also experiencing this issue. I have not seen any clear data loss yet in my game with about 300 concurrent players; however, datastore requests are being turned directly into the queue, which I worry could have devastating effects if too many requests pile up.
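One way to avoid relying on the engine's internal retry queue is to retry writes explicitly, so each failure is surfaced rather than silently queued. A minimal sketch, assuming a store named "PlayerData"; the helper name `safeSet` and the attempt count are illustrative, not an official API:

```lua
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData")

local MAX_ATTEMPTS = 5

-- Hypothetical helper: wraps SetAsync in pcall and retries with
-- exponential backoff, logging each failure instead of hiding it.
local function safeSet(key, value)
	for attempt = 1, MAX_ATTEMPTS do
		local ok, err = pcall(function()
			store:SetAsync(key, value)
		end)
		if ok then
			return true
		end
		warn(("SetAsync failed (attempt %d/%d): %s"):format(attempt, MAX_ATTEMPTS, tostring(err)))
		task.wait(2 ^ attempt) -- back off before retrying
	end
	return false -- caller can alert or re-schedule instead of losing data
end
```

Surfacing the boolean result also makes it easier to count real failure rates, which is useful when reporting issues like this one.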
This happens in my group game as well, with around 20-25 concurrent players; failures have been logged 50 times today. It's not that frequent, but data loss can happen and we don't want that. (Note: before this issue, datastore errors happened zero to one time per day.)
Are you using DataStore2? One of our games uses it with ordered backups enabled and it’s the only one getting these errors along with data loss issues.
Sorry about the inconvenience. This issue was caused by some bad machines that we have since fixed. We are also going to clarify the error message to make clear that this is not the developer’s fault, should this issue reoccur in the future.
This doesn’t really answer the question and I believe this reply is very inadequate. We need to know what limit we are hitting so we can address the issue.
What are we supposed to do with this? In your second message you said you will make the error clearer, while still leaving us clueless in the meantime.
Many of my experiences are going through extreme data loss due to these errors currently. The game is totally broken in this state and there seems to be nothing we can do to address it due to a lack of clarity.
This is not solved… the issue is still happening in experiences, causing data issues and even contributing to player stats being reset.
We do reads on initial updates in our place to validate that the data being stored is fresh and not the result of a Roblox server having a stale cache, and this issue is causing game saves to fail.
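The read-back validation described above can be sketched as follows. This is a hedged illustration, not the poster's actual code: it assumes a store named "PlayerData" and that each payload carries a `version` field to compare against; real data would need a deeper comparison.

```lua
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData")

-- Illustrative helper: write, then re-read the same key and compare a
-- version field to detect a stale cache or a dropped write.
local function writeAndVerify(key, value)
	local wroteOk, writeErr = pcall(function()
		store:SetAsync(key, value)
	end)
	if not wroteOk then
		warn("Write failed: " .. tostring(writeErr))
		return false
	end
	local readOk, stored = pcall(function()
		return store:GetAsync(key)
	end)
	-- If the read fails or returns older data, treat the save as suspect.
	return readOk and stored ~= nil and stored.version == value.version
end
```

Note that the validation read itself consumes request budget, so under the errors reported in this thread it doubles the chances of a request being rejected.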
As a follow-on to my previous post showing the in-game errors: this is clearly not an experience issue and is definitely a problem with Roblox servers, since I get these errors when doing a simple data store query from Studio to check whether a player's data is still intact, and my last lookup was only 15-20 seconds prior.
This is still an issue. Our game just saves on player leave and loads player data on join; there is no way we should be running into this issue. It is majorly affecting our user experience.
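The load-on-join / save-on-leave pattern described above is about the simplest datastore usage possible, roughly like this. A minimal sketch with errors surfaced via pcall; the store name, key scheme, and session table are illustrative, and a production script would also retry and use `game:BindToClose` to save on shutdown:

```lua
local Players = game:GetService("Players")
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData")

local sessionData = {} -- in-memory data per UserId while the player is online

Players.PlayerAdded:Connect(function(player)
	local ok, result = pcall(function()
		return store:GetAsync("player_" .. player.UserId)
	end)
	if ok then
		sessionData[player.UserId] = result or {}
	else
		warn("Load failed for " .. player.Name .. ": " .. tostring(result))
	end
end)

Players.PlayerRemoving:Connect(function(player)
	local data = sessionData[player.UserId]
	if data then
		local ok, err = pcall(function()
			store:SetAsync("player_" .. player.UserId, data)
		end)
		if not ok then
			warn("Save failed for " .. player.Name .. ": " .. tostring(err))
		end
		sessionData[player.UserId] = nil
	end
end)
```

Even a setup this minimal stays far below the documented per-player request budgets, which is why the errors in this thread point at the service rather than at developer code.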