DataStoreService.AvailableRequests

Basically, have an int value of how many requests are available at that moment. It would update automatically and could be used to check whether any data store requests are available. This would be useful for queue systems; I currently have one, but it's not accurate, and it has to wait 60 seconds to be sure more requests are available.

Possibly even do the same for all other request types, since there isn't an easy way to know those limits either.

It should be fairly simple, since the server already keeps track of how many requests you've sent so you don't go over the limit.

Yes please

This isn't particularly useful. No matter what, you'll have to wrap your requests in a pcall (or similar) so you can retry if they fail.

This ultimately won’t change how your queue works though.

local queue = {
keyToUpdate = 5;
key2ToUpdate = 6;
}

Current:

while wait(interval) do
    for keyName, newValue in next, queue do
        local success = pcall(function()
            datastore:SetAsync(keyName, newValue)
        end)
        if not success then break end
    end
end

After change:

while wait(interval) do
    for keyName, newValue in next, queue do
        if datastoreservice.AvailableRequests == 0 then break end
        datastore:SetAsync(keyName, newValue)
    end
end

I don’t see the benefit

Note also that you should be wrapping the call in a pcall anyway, because the request can fail for other reasons, so in your example it makes even less sense to have this property.

You don't seem to understand the problem. Roblox doesn't automatically regulate data store requests, and it also doesn't provide a means to look up throttling limits programmatically. Any API that has throttling does one of these two things, and here Roblox does neither.

If I want to write my own request regulator right now, I must have my script mirror this page's arbitrarily defined limits table. And when Roblox decides to change the limits, everything breaks because someone didn't keep the page up to date.
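
For illustration, this is the kind of hand-copied limit that approach forces (the 60 + players × 10 per-minute formula is an assumption taken from that limits table at the time of writing):

local Players = game:GetService("Players")

-- Copied by hand from the wiki's limits table (assumed formula); if Roblox
-- changes the table, this number is silently wrong.
local function requestsPerMinute()
    return 60 + #Players:GetPlayers() * 10
end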

pls explain what a request regulator is and why an update queue isn’t favorable / can’t be used instead of a request regulator

The point is that this feature would be helpful. If some people find it helpful for saving data, is there a reason not to have it?

“why not”

- It takes work to add it
- If it's not needed, it's just API bloat
- In the professional programming world, you don't add something because "why not"
- From a popular programming book that a staff member once linked in a thread, "The Clean Coder": "Everything starts out with -100 points. You have to have enough reason to get it above 0 to implement it."

[quote] - If it's not needed, it's just API bloat
- In the professional programming world, you don't add something because "why not" [/quote]

Lots of people would find it useful.

Seeing as I have barely ever run into issues with reaching the DataStore cap, I don’t see any use.

"Lots of people would find it useful. "

You have yet to explain how

Ok, I can sort of agree. However, DataStoreService already checks whether you're over the limit, so this could easily be added as a single int value that's updated every time a data store request is made; since the service has already checked the limit, it could update the count as soon as the request completes.

Let's say the request limit is 60 per minute.
Let's say I make 100 requests over 10 seconds.
60 of them go through, while the remaining 40 fail.
Since the limit was reached within 10 seconds, we now have to wait the remaining 50 seconds of the minute before we can make the other 40 requests.

This is an extreme example, but what a regulator would do is spread these requests out so that the limit is never exceeded. A simple regulator would queue up requests and dequeue them evenly across the time limitation (i.e. 1 per second). A smart regulator would know that requests generally occur in short bursts, reducing the spread for batches of requests and increasing it for single requests, while never exceeding the limit.
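
A minimal sketch of the simple regulator described above, assuming a flat per-minute budget (REQUESTS_PER_MINUTE is a hypothetical stand-in for whatever the real limit is; that number is exactly what the proposed property would expose):

local REQUESTS_PER_MINUTE = 60 -- hypothetical budget; Roblox gives no way to query it
local pending = {}             -- FIFO queue of functions, each performing one request

local function enqueue(request)
    table.insert(pending, request)
end

-- Dequeue evenly: at most one request every (60 / REQUESTS_PER_MINUTE) seconds,
-- so the per-minute budget can never be exceeded.
spawn(function()
    while wait(60 / REQUESTS_PER_MINUTE) do
        local request = table.remove(pending, 1)
        if request then
            local success, err = pcall(request)
            if not success then
                warn("DataStore request failed: " .. tostring(err))
                table.insert(pending, request) -- requeue and retry later
            end
        end
    end
end)

Callers would wrap each SetAsync/UpdateAsync call in a function and pass it to enqueue instead of hitting the DataStore directly.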

For an example of rate-limit information, look at https://api.github.com, which returns the following headers:

[tt]x-ratelimit-limit[/tt]: The total rate limit.
[tt]x-ratelimit-remaining[/tt]: The number of remaining requests.
[tt]x-ratelimit-reset[/tt]: A timestamp indicating when the remaining requests will be reset.
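
Purely for illustration (this is GitHub's API, not anything Roblox provides), a server script could read those headers with HttpService, assuming HTTP requests are enabled for the place:

local HttpService = game:GetService("HttpService")

local response = HttpService:RequestAsync({
    Url = "https://api.github.com",
    Method = "GET",
})

-- Compare header names case-insensitively, since casing can vary.
for name, value in pairs(response.Headers) do
    local lower = string.lower(name)
    if lower == "x-ratelimit-limit"
        or lower == "x-ratelimit-remaining"
        or lower == "x-ratelimit-reset" then
        print(lower, "=", value)
    end
end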

The point of this is that it’s a basic component of a throttled API that Roblox has failed to provide. It would be forgivable if Roblox automatically regulated requests, but they don’t do that. They leave it up to the user to implement, but they don’t provide the tools needed for it to be implemented stably and correctly.

Why can’t you use what I posted earlier? You don’t need to know the request limits for that to work.

The example you posted doesn't allow for a request regulator. It only lets you know once you've reached the request limit, and at that point it's too late.

You do need to know them if you want your data to be propagated as quickly as possible while also being stable. You can’t assume that rate limits will never change (there’s even a warning on the wiki page). Personally, I don’t want to deal with a mass of errors and/or delayed requests, followed by hours of debugging, followed by some engineer saying “oh btw we changed the throttling rates have fun lol”.

The queue system I posted earlier propagates data stably and quickly; it's not some hacked-together monster that breaks.

Generally, most data you'd need to write to the DataStore doesn't need to be written immediately. You can easily get away with a minute-long queue delay, and if you really need to save values quickly (probably inventories when a player leaves the game, where they could potentially hop into another server), you can just use PlayerRemoving to trigger the queue, or have a priority queue that iterates much more frequently. There's nothing wrong with the data cache + update queue that I've been using, and I've never needed to know the DataStore limits.
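
For reference, a minimal sketch of that cache + update-queue pattern with a PlayerRemoving flush (the store name, key format, and 60-second interval are assumptions, not the exact code being described):

local Players = game:GetService("Players")
local DataStoreService = game:GetService("DataStoreService")
local datastore = DataStoreService:GetDataStore("PlayerData")

local cache = {} -- [userId] = latest data to persist
local dirty = {} -- [userId] = true when the cache differs from the DataStore

local function save(userId)
    local success = pcall(function()
        datastore:SetAsync("player_" .. userId, cache[userId])
    end)
    if success then
        dirty[userId] = nil
    end
end

-- Slow queue: flush dirty entries once a minute.
spawn(function()
    while wait(60) do
        for userId in pairs(dirty) do
            save(userId)
        end
    end
end)

-- Priority path: save immediately when a player leaves, so another server
-- can read fresh data if they hop.
Players.PlayerRemoving:Connect(function(player)
    if dirty[player.UserId] then
        save(player.UserId)
    end
end)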

ROBLOX’s philosophy regarding data store limits is to give the developer an absurdly high limit, so high that if they reach the limit, they are definitely doing something wrong.

Are any of you actually running into the data store limits? I suspect that this is a theoretical problem rather than practical one.