Datastore throttling question

When a datastore request is throttled, it runs once the request count drops back below the limit. But if the server shuts down before the throttled requests have a chance to run, what happens? Are those throttled requests never processed, or do they get processed after the server ends?

The reason I ask is that sometimes, when someone purchases a developer product for cash (which uses the datastore), the request gets throttled.


You can bind a function to server close that keeps waiting until all the requests have gone through.

If all the requests still don’t go through after 30 seconds (very unlikely unless you’re using DataStores inefficiently), then they’re lost. But you probably shouldn’t be hitting the DataStore limits anyway.
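A minimal sketch of what the first reply describes, assuming you track in-flight requests yourself with a counter (`pendingRequests` is an illustrative name, not a Roblox API):

```lua
-- Sketch: hold the server open on shutdown until pending saves finish.
-- Increment `pendingRequests` before each SetAsync and decrement it
-- after the call returns; this counter is an assumption of this example.
local pendingRequests = 0

game:BindToClose(function()
	local start = os.clock()
	-- BindToClose gives the server up to 30 seconds before it is killed.
	while pendingRequests > 0 and os.clock() - start < 30 do
		task.wait(0.5)
	end
end)
```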


Yup, I need to prevent datastore writes within 6 seconds of the previous one. It only really throttles when people have lots of points to spend.

It sounds like you’re using the datastores inefficiently if you’re hitting the throttling limit a lot.

For my own projects, I keep the data cached serverside as a table, and only save/load the data when the player joins or leaves, on occasional autosaves, and when the server closes. As long as you save data when the player leaves or the server closes, you should be good. The autosaves are for good measure.
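The caching pattern described above might look roughly like this; the store name, data shape, and autosave interval are all illustrative, not the poster’s actual code:

```lua
-- Sketch: serverside cache as the single source of truth during play.
local Players = game:GetService("Players")
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData") -- name is an assumption

local cache = {} -- [userId] = data table, edited freely while the player plays

Players.PlayerAdded:Connect(function(player)
	local ok, data = pcall(function()
		return store:GetAsync(tostring(player.UserId))
	end)
	cache[player.UserId] = (ok and data) or {points = 0} -- default shape is illustrative
end)

local function save(userId)
	local data = cache[userId]
	if data then
		pcall(function()
			store:SetAsync(tostring(userId), data)
		end)
	end
end

Players.PlayerRemoving:Connect(function(player)
	save(player.UserId)
	cache[player.UserId] = nil
end)

game:BindToClose(function()
	for userId in pairs(cache) do
		save(userId)
	end
end)

-- Occasional autosave for good measure
task.spawn(function()
	while true do
		task.wait(120) -- interval is arbitrary
		for userId in pairs(cache) do
			save(userId)
		end
	end
end)
```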


Is it even documented anywhere what actually happens when that orange warning message appears in the output?
Does the function call simply keep on yielding until the request goes through, just notifying you in the output it’s waiting longer?

It would be really bad if it silently returned, acting like the request was successful while the request is actually queued up internally and executed at some unpredictable time (or not at all)…
That could much more easily lead to out-of-order execution when a player joins another server where the datastore API is not being throttled. Or it could discard critical data completely, even though the API acted like the request succeeded, as stated in the OP…

Very good question!
The closest thing to documentation on this is the “Data store manual” on the Wiki. It does not specifically cover what happens to throttled requests, but the closest statement you’ll find is the following: “If a game requests the same key too many times, then its requests can be throttled or even error”.

I have poked a few of the staff to hopefully enlighten us on this.
I think - regardless of what happens to throttled requests - that good and efficient management of data requests is crucial to a solid system. If you’re hitting the limits a lot, you should restructure how you’re requesting and saving your data.

The big vague point here is definitely what “being throttled” means: does the call just yield longer, or does it pretend it succeeded? I’d rather have it error every time…

Of course, hitting the limit all the time is no good either. However, if for whatever reason you hit the limit (e.g. a spike happens, all players buy a ton of dev products right after a bunch of autosaves occur…), it should still result in predictable behavior, and requests should never be able to silently fail.


I have a few ideas of what I could do.

For currency purchases, I would restrict writes to one every 6 seconds because of the write limit. The cached data would be changed immediately and then saved to the datastore.

All the other processes with currency would involve the server side cached data values which would change and be saved on leave.
The player could purchase a dev product and leave within 6 seconds, possibly causing the server to close while the write is throttled, but I doubt the data would change much in those 6 seconds.
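The write restriction described above could be sketched like this: purchases update the cache immediately, while the actual DataStore write for a key is limited to one per 6 seconds. All names here are illustrative assumptions:

```lua
-- Sketch: rate-limit SetAsync per key; the cache absorbs intermediate changes.
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData") -- name is an assumption

local cache = {}     -- [userId] = player data table (authoritative copy)
local lastWrite = {} -- [userId] = os.clock() time of the last SetAsync
local WRITE_COOLDOWN = 6

local function requestSave(userId)
	local now = os.clock()
	if lastWrite[userId] and now - lastWrite[userId] < WRITE_COOLDOWN then
		-- Too soon: the change stays in the cache and is picked up by the
		-- next save (autosave, player leaving, or server close).
		return
	end
	lastWrite[userId] = now
	pcall(function()
		store:SetAsync(tostring(userId), cache[userId])
	end)
end
```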

Quick question: is it a bad idea to leave data cached within the player object as value objects instead of a serverside table like @Ravenshield suggested?

You can do both. If the object is visible to the clients, it can be exploited more easily as well. Not saying it will be exploited, or that data in a table cannot be exploited. The point is that if it’s visible to the client, they know sooner what to try to adjust by abusing Remote objects and so on.

You can use both by always saving new data to the table and updating the value objects from it; or just serialize the folder of value objects, but then you must keep my aforementioned scenario in mind.
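A small sketch of the first option above, where the serverside table stays the source of truth and value objects only mirror it for display; `cache` and the leaderstats layout are assumptions of this example:

```lua
-- Sketch: write to the table first, then mirror to the value object.
local cache = {} -- [userId] = {points = ...}, the authoritative copy

local function setPoints(player, newPoints)
	cache[player.UserId].points = newPoints -- saved from here, never from the object
	local leaderstats = player:FindFirstChild("leaderstats")
	local pointsValue = leaderstats and leaderstats:FindFirstChild("Points")
	if pointsValue then
		pointsValue.Value = newPoints -- display copy only; never read back
	end
end
```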


Ok, I will make these changes. Thanks for the help everyone.


Decided to test this warning: apparently the datastore call yields if the same key is requested/set/updated within 6 seconds of the previous request. It doesn’t return until the operation actually succeeds, and it doesn’t secretly shove the operation into a background queue, which is a good thing.

As a result, that warning shouldn’t ever cause unexpected data loss; it just yields the function for a few additional seconds.
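The observed behavior can be made visible with a quick timing check like the one below; `store`, `key`, and `value` are placeholders for your own datastore and data:

```lua
-- Rough illustration: a throttled call just yields longer before returning.
local t0 = os.clock()
local ok = pcall(function()
	store:SetAsync(key, value) -- warns and yields past 6s when throttled
end)
print(("SetAsync took %.1f s, success = %s"):format(os.clock() - t0, tostring(ok)))
```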