Removal of 6s Cooldown for Data Stores

Hi creators,

We are excited to announce that the 6-second (6s) limit interval to write to the same Data Store key has been removed! This applies to both standard and ordered Data Stores.

Previously, writes to the same key of a data store within 6s on the same server would be added to a queue, or rejected if the queue was full. This mechanism was designed to protect the backend system, but it also created challenges that your scripts had to work around.

As we continue to improve the reliability and scalability of Data Stores, this mechanism has become outdated, so we have removed it to simplify your experience.

Note: Data Store write access is still subject to the per-server limit of 60 + NumPlayers × 10 requests per minute. We recommend adding backoff and retry logic when throttled.
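The recommended backoff-and-retry pattern can be sketched as follows. This is a minimal illustration, not an official API: the `setWithRetry` helper, the `"PlayerData"` store name, and the retry parameters are all placeholders you would tune for your experience.

```lua
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData") -- illustrative store name

-- Retry SetAsync with exponential backoff when a request is throttled or fails.
local function setWithRetry(key, value, maxAttempts)
	maxAttempts = maxAttempts or 5
	local backoff = 1
	for attempt = 1, maxAttempts do
		local ok, err = pcall(function()
			store:SetAsync(key, value)
		end)
		if ok then
			return true
		end
		warn(("SetAsync attempt %d failed: %s"):format(attempt, tostring(err)))
		task.wait(backoff)
		backoff = backoff * 2 -- double the wait before the next attempt
	end
	return false
end
```

Wrapping the call in `pcall` is important because throttled or failed Data Store requests raise errors rather than returning a status.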

Please let us know if you have any feedback. We’ll keep improving our services to better serve your needs.

Thank you,
The Roblox Creator Services Team

Update on 7/10/2023

Per key limit

  • To prevent hotspot issues (i.e. one universe consuming all the resources of a backend server), we have per-key throughput limits for Data Stores, but we missed documenting them. Here are the details:
    • Write: 4 MB per minute (so that you can write the maximum sized object every minute)
    • Read: 25 MB per minute
    • The throughput is rounded up to the nearest 1 KB for each request. For example, if you write 800 bytes and 1.2 KB in two write requests, it will be counted as 3 KB of throughput in total (1 KB and 2 KB for the two calls respectively).
  • These limits have existed for years, but they were hard to hit due to the 6s cooldown. Now that the restriction has been removed, it is much easier to be throttled by them. We may adjust the limits (most likely upwards) in the future based on system capacity.
  • We have updated our documentation with these limits.
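The rounding rule above can be sketched in plain Lua. The `countedKB` helper is illustrative only, showing how each request's size is rounded up to the next whole kilobyte before being counted against the per-key throughput limits:

```lua
-- Each request's payload is rounded up to the nearest 1 KB for accounting.
local function countedKB(bytes)
	return math.ceil(bytes / 1024)
end

print(countedKB(800))  -- 1: an 800-byte write counts as 1 KB
print(countedKB(1229)) -- 2: a 1.2 KB write counts as 2 KB
-- Together these two writes consume 3 KB of the 4 MB/min write limit.
```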


Awesome! Data stores just got so much better. There have been quite a lot of issues with data in games lately, so this is a great improvement.

Keep up the great work :slightly_smiling_face:


I didn’t even know this limit existed, but for those whom this change benefits, have fun with it.


So does this mean that the warning itself was basically removed, but it continues to do the same thing? Or will the new request just immediately override the previous write request?

The message “We recommend adding backoff and retry logic when throttled.” — does that mean we must implement a similar system to keep the data store from being throttled, or implement a queue system ourselves?

The good thing is I won’t be seeing that ugly warning message as often just because my game auto-saved players’ data and they left within those 6 seconds, triggering the warning. Thanks for that!


will DataStore2 be affected with this update?


Previously, if you wrote to the same key within 6s, the request wasn’t immediately sent to the backend service but was queued in the game server. With the new change, the request will immediately go out to the backend as long as you are within the server’s write quota. Basically, you can update your key faster but need to be careful about hitting the server limit. Is the explanation clear to you?
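Since requests now go straight to the backend, you can check the server’s remaining write budget before sending, using the standard `GetRequestBudgetForRequestType` API. This is a sketch; the one-second poll interval is an arbitrary choice:

```lua
local DataStoreService = game:GetService("DataStoreService")

-- Yield until the server has write budget left before issuing a SetAsync.
local function waitForWriteBudget()
	while DataStoreService:GetRequestBudgetForRequestType(
		Enum.DataStoreRequestType.SetIncrementAsync
	) <= 0 do
		task.wait(1)
	end
end
```

Calling `waitForWriteBudget()` before each write keeps your script inside the per-server quota instead of relying on requests being rejected and retried.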


Yay! Can’t wait to write inefficient code

In all seriousness, GG Roblox! I remember my game had issues because of limit before.


Wow, this will make things easier. I never encountered it before, but I guess now I never will :))


There is currently no way of getting the budgets of the new V2 APIs

Any update on this?


Oh my- I was literally just going to search for a feature request that addresses exactly this.

This is an absolute godsend for my projects, thank you Roblox!

I was previously relying on dodgy MemoryStoreService back-ups to determine when a key may have been written to recently, to ensure two servers didn’t queue a request to the same key. This was the only issue I’ve ever had with DataStoreService, and it has slowed down development time dramatically for me.

However, @dragonknightflies I do have one question: what happens if two servers attempt to UpdateAsync at the same time? Will the backend reject one request because the value is outdated, or will I encounter data loss? And will the second request count toward my quota?

Also, on the topic of removing limits from DataStoreService, any chance we could see the ability to disable GetAsync caching?


This is a pretty awesome change, will help with my development since I would run into the issue at times.


Here’s a post explaining what happens if the parameter passed to UpdateAsync is outdated before the save finishes. The game server that submitted its save request later has its UpdateAsync callback re-evaluated with the new value until it saves properly. The new documentation for UpdateAsync also details this.
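For reference, the transform-function pattern that makes this safe looks roughly like this. The key name and the coin logic are purely illustrative:

```lua
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData") -- illustrative store name

-- If another server wrote first, the callback is re-invoked with the
-- latest value, so the increment is never applied to stale data.
store:UpdateAsync("coins_12345", function(currentValue)
	local coins = currentValue or 0
	return coins + 10
	-- Returning nil instead would cancel the write entirely.
end)
```

Because the callback may run more than once, it should be free of side effects and compute the new value purely from the value it receives.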


Yes, I’m well aware that an attempt to ‘fix’ the outdated data will be made; however, I’m unsure whether such retries count as extra requests against my DataStore limits.


Thank you so much!
Working with Data Stores and MemoryStore for cross-server session handling was annoyingly frustrating because I could never really trust MemoryStore to hold the most truthful data value. I can now work with my own queue code without being limited for 6 seconds globally.


Interesting update, wasn’t really expecting this one. More than anything I think this will be a nice quality-of-life update, for example if a game has an auto-save and someone leaves right after it. Cool nonetheless.


so basically, this has improved data stores but we still need to implement “retry” methods? Nice I guess :smiley:


This just made my day. Thank you for listening!!!


Unexpected but very welcome QoL change. Thank you!


Such a welcome change!! :tada: For years this has plagued me and caused so much confusion as to why so many data store requests were being put into a queue, making me think I was pushing the rate limits too hard, yet it turns out after all this time, there has been a 6 second write cooldown for the same key!!!

This occasionally happened since auto-saves and saving on player leave can fall within that same timeframe, so servers that had been up for a while would queue quite a few of them. I’ve implemented custom queues for this since, but I’m sure it’d be even more complex if I had to write to the same key from multiple servers.