So does this mean the warning itself was removed but the system still behaves the same way? Or will a new request now immediately override the previous write request?
Does the message “We recommend adding backoff and retry logic when throttled.” mean that we must implement something similar ourselves to keep the data store from being throttled, or even build our own queue system?
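For what it’s worth, the recommended backoff doesn’t require a full queue system; a small retry wrapper around the write call is usually enough. A minimal Luau sketch (the store name and helper function here are illustrative, not an official API):

```lua
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData") -- example store name

-- Hypothetical helper: retries a write with exponential backoff if it fails
-- (for example, because the server's write quota was exhausted).
local function setWithBackoff(key, value, maxAttempts)
	maxAttempts = maxAttempts or 5
	local delaySeconds = 1
	for attempt = 1, maxAttempts do
		local ok, err = pcall(function()
			store:SetAsync(key, value)
		end)
		if ok then
			return true
		end
		warn(("Write attempt %d for key %s failed: %s"):format(attempt, key, tostring(err)))
		task.wait(delaySeconds)
		delaySeconds = delaySeconds * 2 -- back off: 1s, 2s, 4s, ...
	end
	return false
end
```

The pcall is what catches the throttle error; doubling the wait between attempts keeps a burst of failed writes from hammering the quota further.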
The good thing is I won’t be seeing that ugly warning message so often just because my game auto-saved a player’s data and they left within those 6 seconds, triggering the warning. Thanks for that!
Previously, if you wrote to the same key within 6 seconds, the request wasn’t immediately sent to the backend service but was queued on the game server. With the new change, the request goes out to the backend immediately, as long as you are within the server’s write quota. Basically, you can update your key faster but need to be careful not to hit the server limit. Is the explanation clear to you?
Oh my- I was literally just going to search for a feature request that addresses exactly this.
This is an absolute godsend for my projects, thank you Roblox!
I was previously relying on dodgy MemoryStoreService backups to determine when a key may have been written to recently, so as to ensure two servers didn’t queue a request to the same key. This was the only issue I’ve ever had with DataStoreService, and it has slowed down development time dramatically for me.
However, @dragonknightflies, I do have one question: what happens if two servers attempt to UpdateAsync at the same time? Will the backend reject one request because its value is outdated, will I encounter data loss, and will the second request count against my quota?
Also, on the topic of removing limits from DatastoreService, any chance we could see the ability to disable GetAsync caching?
Here’s a post explaining what happens if the parameter passed to UpdateAsync is outdated before the save finishes. The game server that submitted its save request later has its UpdateAsync callback re-evaluated with the newer value until the save goes through. The new documentation for UpdateAsync also covers this.
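In practice this means the transform function you pass to UpdateAsync can run more than once, so it should compute the new value purely from the value it receives and avoid side effects. A short Luau sketch (the store name and key are examples, not from the announcement):

```lua
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData") -- example store name

-- The callback may be re-invoked with a fresher value if another server
-- wrote to the key first, so it must be safe to run multiple times.
local ok, err = pcall(function()
	store:UpdateAsync("coins_123", function(currentValue)
		local coins = currentValue or 0
		return coins + 50 -- apply the change relative to the latest value
	end)
end)
if not ok then
	warn("UpdateAsync failed: " .. tostring(err))
end
```

Because the increment is expressed relative to whatever value the backend hands back, two servers adding coins concurrently won’t overwrite each other’s changes.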
Yes, I’m well aware that an attempt to ‘fix’ the outdated data will be made; however, I’m unsure whether such retries count as extra requests against my DataStore limits.
Thank you so much!
Working with DataStores and MemoryStore for cross-server session handling was frustratingly annoying because I could never really trust MemoryStore to hold the most truthful data value. I can now work with my own queue code without being limited by a global 6-second cooldown.
Interesting update, wasn’t really expecting this one. More than anything I think this will be a nice quality-of-life update, for example if a game has an auto-save and someone leaves right after the auto-save. Cool nonetheless.
Such a welcome change!! For years this has plagued me and caused so much confusion as to why so many data store requests were being put into a queue, making me think I was pushing the rate limits too hard, yet it turns out after all this time, there has been a 6 second write cooldown for the same key!!!
This occasionally happened since auto-saves and saving on player leave can fall within that same window, so servers that had been up for a while would accumulate quite a few queued requests. I’ve implemented custom queues for this since, but I’m sure it’d be even more complex if I had to write to the same key from multiple servers.
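A custom queue of that kind can be sketched roughly as follows: writes to the same key from one server are serialized so a later save always lands after an earlier one. Everything here (table names, helper function) is illustrative, not any particular implementation:

```lua
-- Hypothetical per-key write queue for a single server.
local pending = {} -- key -> list of values waiting to be written
local busy = {}    -- key -> true while a write loop is running for that key

local function enqueueWrite(store, key, value)
	pending[key] = pending[key] or {}
	table.insert(pending[key], value)
	if busy[key] then
		return -- an existing loop will pick this value up
	end
	busy[key] = true
	task.spawn(function()
		-- Drain the queue in order; each SetAsync is wrapped in pcall so
		-- one failed write doesn't stall the rest of the queue.
		while #pending[key] > 0 do
			local nextValue = table.remove(pending[key], 1)
			pcall(function()
				store:SetAsync(key, nextValue)
			end)
		end
		busy[key] = nil
	end)
end
```

As the post notes, this only serializes writes within one server; coordinating the same key across multiple servers would need something extra, such as UpdateAsync or a MemoryStore-based lock.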
Excellent improvement to the DataStore Service. I can see this update fixing a plethora of issues across a great number of experiences, especially ones with trading features. Thank you guys for implementing the change.
Data stores are better than before, and that’s a great update in my opinion, but does this now mean players’ data is prevented from being lost?