Details on DataStoreService for Advanced Developers

I noticed an inaccuracy: in the post you state that the documentation is wrong about the 50-character limit for keys. I don’t know if this was true in the past, but it doesn’t seem to be anymore.

[image]

In this image, you can see the error happens on line 14 and not 13.

1 Like

You can query this directly by the looks of it.

print(settings():GetFVariable("DataStoreMaxValueSize"))
4194304

Which makes sense - that’s 4MB. The real limit is probably that minus one.

4 Likes

Yep looks like they fixed this. Edited the post.

3 Likes

The write cooldown is documented as 6 seconds; however, when I try to use UpdateAsync every 6 seconds, I get a warning.

Code in question:

while os.clock() - Start < TIME_BEFORE_FORCE_STEAL do
   print('trying')

   local LoadFuture = LoadProfileData(profileStore.DataStore, key)

   wait(6)

   if not LoadFuture:isResolved() then
      LoadFuture:await()
   end
end

It’s hard to see what’s happening here because your code sample is not self-contained, but you probably don’t want to wait exactly 6 seconds between requests. The time taken by network requests and by wait(6) varies slightly, so with this approach you could be hitting the server a few tens of milliseconds too early.

As far as I understand, the write cooldown timer starts when a datastore call completes, but is checked when the next call is invoked, so in practice you need to wait more than 6 seconds. Because you’re doing this with a future, your wait(6) measures the span between the start times of two requests, not the span between one request ending and the next starting.

3 Likes

If I waited for the call to be complete + 6 seconds, would that work?

I think that should work, because then you’re sure the time between one request ending and the next starting is at least 6 seconds (wait(6) guarantees a wait of >= 6 seconds).
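For anyone reading along, the pattern being described could be sketched like this. The store name, key, and transform are placeholders for illustration, not taken from the original code:

```lua
-- Sketch: wait for the write to COMPLETE, then wait 6 more seconds,
-- so the gap between one request ending and the next starting is >= 6 s.
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("ProfileStore") -- assumed name

local WRITE_COOLDOWN = 6

local function updateThenCooldown(key, transform)
	-- UpdateAsync yields until the request completes (or errors).
	local ok, err = pcall(function()
		store:UpdateAsync(key, transform)
	end)
	-- Only start the 6-second timer AFTER completion, never in parallel
	-- with the in-flight request.
	task.wait(WRITE_COOLDOWN)
	return ok, err
end
```

This trades a bit of throughput (each cycle now takes request time plus 6 seconds) for never tripping the cooldown.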

3 Likes

Thanks a lot for this comprehensive material.
One question: the 6-second write cooldown — does it have its own queue? Does it ever throw?
BTW, the read cache issue is very important to me. Has it not been fixed since your OP?
Ok, that’s three questions…

I believe there are queues for every kind of operation, so write cooldown violations go into the same queue as out-of-budget writes (SetIncrementAsync).

If your queue is full (30 requests max) it can throw, yes.
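To illustrate the failure mode, here is a minimal defensive sketch. The store name, retry count, and backoff curve are assumptions, not from the thread — the point is just that a write can error when the throttle queue overflows, so it should be wrapped in pcall:

```lua
-- Sketch (assumed behavior): when a per-operation throttle queue is full
-- (~30 requests), the call can throw instead of being queued, so wrap
-- datastore writes in pcall and retry with backoff.
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("PlayerData") -- assumed name

local function setWithRetry(key, value, maxAttempts)
	for attempt = 1, maxAttempts do
		local ok, err = pcall(function()
			store:SetAsync(key, value)
		end)
		if ok then
			return true
		end
		warn(("SetAsync attempt %d failed: %s"):format(attempt, tostring(err)))
		task.wait(2 ^ attempt) -- exponential backoff before retrying
	end
	return false
end
```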

I believe that was fixed. Let me update the thread accordingly.

3 Likes

I don’t think these limits are correct. With 10 players I am measuring 1,380 requests per minute for Set & Get, which means the rate is actually what this user posted: Request Limits on Data Store Errors and Limits page are not accurate

3 * ( 60 + ( 40 * 10 ) ) = 1,380.
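The arithmetic above can be checked with a small helper (plain Lua; the function name and the "three request classes" framing are just illustrative):

```lua
-- Budget per minute for one request class: a base amount plus an amount
-- per player, matching the formula 60 + 40 * playerCount from the thread.
local function budgetPerMinute(basePerMinute, perPlayer, playerCount)
	return basePerMinute + perPlayer * playerCount
end

-- Three request classes observed at this rate, with 10 players:
local total = 3 * budgetPerMinute(60, 40, 10)
print(total) --> 1380
```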

1 Like

I’m not fully convinced these rates are intended, see: Request Limits on Data Store Errors and Limits page are not accurate - #5 by buildthomas

If they do end up being intended and the official documentation is updated to reflect it, I will update this post.

The rate calculation is still the same; it’s just that the multiplier seems to be 40 instead of 10 currently.

2 Likes

It’s possible they haven’t reset it back. It might still be good to put a little note somewhere on your post that it is currently 40 per player, just so others know. It can always be removed if they decide to reset it. I will try to get in contact with someone and find out whether it’s permanent.

1 Like

The note already exists (your reply). :slightly_smiling_face:

I can update it in the main post here once I know more, I don’t want to accidentally mislead anyone into thinking they can design something that requires many more req/s than they will actually be permitted to long-term.

1 Like

I came across your post while making a game that’s datastore limit sensitive, and this was very helpful, thank you!
Sorry to revive this topic, but it would be helpful for anyone coming across it nowadays, as I did.

And from my testing, I can confirm the limits @RoyallyFlushed mentioned are still in effect, so they are probably intentional.

And it appears that not only has the per-player limit increased, but also the per-player rate at which the budget refills.
From my testing on GetAsync, I found that:
The base rate is still 60 per minute
The per-player rate is 40 per minute

2 Likes

Updated with these rates.

2 Likes

If a write call errors, does it still activate the write cooldown?

Sorry for the late reply – in my testing only successful requests seem to affect the write cooldown for a given key. This is reflected in the MockDataStoreService implementation.

2 Likes

Is there any sort of budget when contention occurs with an UpdateAsync call involving many servers for the same key?

The rest of my post is why I’m asking the question.

If you’re wondering, I’m creating a seasonal leaderboard using MemoryStoreService, and I’m worried about hitting some back-end queue (or perhaps an array of requests that is randomly chosen from) designed for UpdateAsync contention. It shouldn’t ever hit any of the other limits (the keys are sharded and cached to reduce the number of MemoryStoreService calls), but this one is worrisome since it is mostly unavoidable even when the other limits are not reached.

The most worrisome part is the end of a season, because the code fires an UpdateAsync in every reserved server currently running a game in order to get the most up-to-date scores. The lobby servers wait 30 seconds for all of the updates to complete, then they all use UpdateAsync() to check whether the season has already been updated, and update the leaderboards if no other server has performed the season change yet.

The only problem is that the reserved servers all fire the UpdateAsync at the same time and could potentially fill up some contention queue on the back-end. I suppose I could give the reserved servers some random time to update within those 30 seconds to spread out the requests, but if there is a hard limit then I may well get dropped requests and data loss, especially if the limit is very low.
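The jitter idea described above could be sketched roughly like this. The sorted-map name, the expiration, and the keep-the-higher-score transform are assumptions for illustration, not details from the game in question:

```lua
-- Sketch: spread simultaneous end-of-season UpdateAsync calls across the
-- 30-second window with random jitter, to avoid a thundering herd on one key.
local MemoryStoreService = game:GetService("MemoryStoreService")
local map = MemoryStoreService:GetSortedMap("SeasonScores") -- assumed name

local WINDOW = 30 -- seconds the lobby servers wait

local function submitFinalScore(key, score)
	-- Each reserved server sleeps a random fraction of the window first,
	-- leaving some headroom before the lobby servers' deadline.
	task.wait(math.random() * (WINDOW - 5))
	local ok, err = pcall(function()
		map:UpdateAsync(key, function(old)
			if old == nil or score > old then
				return score
			end
			return nil -- abort the update; the stored value is already higher
		end, 24 * 60 * 60) -- expiration in seconds (assumed: one day)
	end)
	return ok, err
end
```

Returning nil from the transform cancels the write, so losing servers back off instead of adding contention.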

Your question is solely related to MemoryStoreService and not about DataStoreService, right? This post is only about datastores and does not consider memory stores; they are completely different services with different under-the-hood implementations and different properties w.r.t. persistence and consistency.

Generally speaking, MemoryStoreService seems to be implemented as a layer around a caching cluster (most likely Redis under the hood is my guess) and so it does not suffer from a “write cooldown” like datastores as much. However no system scales infinitely with the number of requests against a single key, so you should try to avoid creating “hot key” situations in your datastores/memorystore usage design as much as possible.
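The key-sharding idea mentioned earlier in the thread could look roughly like this (shard count and the key-naming scheme are assumptions, purely for illustration):

```lua
-- Sketch: spread writes for one logical key across N sub-keys so no single
-- key becomes "hot"; readers scan all shards and aggregate.
local NUM_SHARDS = 8

-- Pick a shard at random for a write.
local function shardKey(baseKey)
	return ("%s/shard-%d"):format(baseKey, math.random(0, NUM_SHARDS - 1))
end

-- Enumerate every shard of a logical key, for reads.
local function allShardKeys(baseKey)
	local keys = {}
	for i = 0, NUM_SHARDS - 1 do
		table.insert(keys, ("%s/shard-%d"):format(baseKey, i))
	end
	return keys
end
```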

2 Likes

I do apologize for being outside the scope of this resource, but this post was the closest information I could find regarding my question.

Thank you for the quick reply as well as answering an out-of-place question.

It should be okay, because the reserved servers carry a cached copy of the leaderboard’s last-place score that is checked before an UpdateAsync is ever called. Only players whose current score would rank on the leaderboard will ever trigger an UpdateAsync from the reserved servers. That should be a maximum of 200 calls, with an expectation of only a few, but I’ll spread them out over some time just to be extra safe.

1 Like