Warning from DataStore when 6 second limit was removed?

This is my code that is responsible for updating DataStore data from an external Database, I’m getting a warning and it confuses me because I thought the updating cooldown limit that’d trigger these warnings was removed. What am I missing here?

while task.wait(30) do
	GlobalDataStore:UpdateAsync("Database", function(oldValue)
		oldValue = oldValue or data
		_G.Database = oldValue

		G.Functions.Moderate()

		local lastUpdate = oldValue.LastUpdate or 0
		if os.time() - lastUpdate >= 60 then
			oldValue.LastUpdate = os.time()

			task.spawn(function()
				local _, data = Database:GetAsync("")
				G.Functions.Try(function()
					data.LastUpdate = oldValue.LastUpdate
					GlobalDataStore:SetAsync("Database", data)
				end)
			end)

			return oldValue
		end
	end)
end

The Try function, for context: if the function inside it fails, it retries with a minimum cooldown of 5 seconds (the cooldown increases with the number of retries).
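For anyone reading along, a retry wrapper like that might look roughly like this (a minimal sketch; the actual Try implementation isn't shown in the thread, so the name, signature, and backoff curve here are all assumptions):

```lua
-- Hypothetical sketch of a Try-style retry helper (not the author's actual code).
-- Retries the given function with a cooldown that starts at 5 seconds and
-- grows with each failed attempt.
local function Try(callback, maxRetries)
	maxRetries = maxRetries or 5
	local cooldown = 5 -- minimum cooldown in seconds

	for attempt = 1, maxRetries do
		local ok, result = pcall(callback)
		if ok then
			return result
		end
		warn(("Attempt %d failed: %s"):format(attempt, tostring(result)))
		task.wait(cooldown)
		cooldown += 5 -- assumed linear growth; could also be exponential
	end
end
```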


That warning happened because you’re exceeding the per-server rate limit and requests are being throttled. This snippet doesn’t seem to come close to reaching the limit, though. Does G.Functions.Moderate() do anything funky with datastores?

Something else is fishy here.
What you’re doing is copying whatever Database:GetAsync("") is, updating its LastUpdate field, and putting that into GlobalDataStore’s “Database” key. Why are you doing cross-key operations? Datastore entries already possess read-only metadata that includes the last moment a key was updated, so you don’t need to use a custom solution. If 60 seconds had elapsed since LastUpdate, your script spawns a new thread to set the “Database” key, but also returns the oldValue. You end up writing stale data and immediately after you write data taken from Database:GetAsync(""). That would’ve triggered the 6 second cooldown if it still existed, but the cooldown being lifted isn’t a reason to start making pairs of writes to the same key. What is this code supposed to do?


Moderate has nothing to do with datastores. What I’m doing is taking data from an external database and updating the experience’s Datastore with this data to act as a cache and prevent calling the external database and exceeding a limit. And I didn’t know that there is a solution with metadata so thanks for that! Question is how would I get that data exactly? I didn’t understand the last part you were talking about, kinda lost you there


Use request budgets
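For example, you can check how much budget is left before each write and yield until it recovers (a sketch using `DataStoreService:GetRequestBudgetForRequestType`; the 1-second poll interval is an arbitrary choice):

```lua
local DataStoreService = game:GetService("DataStoreService")

-- Yields until there is budget left for the given request type,
-- so the call afterwards won't be throttled and emit warnings.
local function waitForBudget(requestType)
	while DataStoreService:GetRequestBudgetForRequestType(requestType) <= 0 do
		task.wait(1)
	end
end

waitForBudget(Enum.DataStoreRequestType.UpdateAsync)
GlobalDataStore:UpdateAsync("Database", function(oldValue)
	-- ...
	return oldValue
end)
```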





I’m assuming Database:GetAsync("") comes from the external database? I understand now. The last bit I talked about was about the fact you return in UpdateAsync and also spawn a thread that does SetAsync so two write requests are made whenever it detects out of date information. I’d suggest only running this on one server at a time so this external database isn’t overloaded by multiple roblox servers trying to update their own copy of data within a short time frame.

Whenever you GetAsync or UpdateAsync, a DataStoreKeyInfo instance is provided as a second returned value. It holds a read-only UpdatedTime property that you can use for your own purposes. There’s also other information and two methods to get developer-given metadata and user ids but that’s not necessary right now. Here’s an example of getting that information:

local expiryMilliseconds = 24 * 60 * 60 * 1000 -- 24 hours
local value, keyInfo = someDatastore:GetAsync("someKey")
local updatedTime = keyInfo.UpdatedTime -- milliseconds since epoch
local thisMoment = DateTime.now().UnixTimestampMillis
if (thisMoment - updatedTime >= expiryMilliseconds) then
	-- data is out of date, fix that
end

Making sure that you check the budget before making a request like NotReal suggested is a very safe way to avoid being throttled.


You’re exactly right about updating the Roblox datastore from the external database once across the whole experience every x minutes; that’s what I tried to do. But to achieve this I have to use UpdateAsync, no? That’s the only function that lets me make sure it’s only done once, because of the queue. I’d have to change something in the data so the next UpdateAsync after it knows not to call the external database and just cancels until, in my case, 1 minute passes.


I have no reservations against UpdateAsync, but the way you’re using it feels like you’re working against it: stale data is returned, then overwritten with the intended data from a spawned thread (which you made so you could yield while fetching the external data). Unfortunately that jeopardizes the ordered-queue protection you’re striving for, since the spawned write completes after the UpdateAsync is finalized.

Now if only one server was delegated this task, it could just wait 60 seconds between each update. There’d be only one moment to care about out-of-date information: the first cycle of this loop. No UpdateAsync ordered queue assurances needed. Just sparse SetAsync calls.


Well, how would I go about it? How do I get only one server in the whole experience to do that? Also, I didn’t understand how the ordered queue is jeopardised by what I’m doing right now.


You said you used UpdateAsync because of a “queue” and I’m assuming you meant the fact that while you’re doing an UpdateAsync call, no read/writes are done between the read/write that UpdateAsync does. None of that matters if only one server is doing the work.

There’s a couple ways of designing a system where one server does an important job until it’s emptied and another server is selected. I don’t want to push you to one method of accomplishing this because programming always presents dozens of ways to accomplish any task. You can probably figure out a solution that works best for you. If I had to do this, I’d probably rely on the fact servers can notice that the datastore key is stale and then participate in a MessagingService topic exchange with other servers to determine which should take on the task. That decision would be based on the player count or which server’s randomized JobId is sorted lowest/highest. If a server is shutting down, it could immediately send an alert through MessagingService so a new server can be delegated ASAP. Memory stores are also pretty fast and have sorting/ordering capabilities which might make those server picking decisions a lot simpler.
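As one illustration of the MessagingService idea (a rough sketch, not a complete election protocol: the topic name, the 5-second window, and the lowest-JobId rule are all assumptions, and you’d still need to handle the chosen server shutting down):

```lua
local MessagingService = game:GetService("MessagingService")

local TOPIC = "DatabaseUpdater" -- assumed topic name
local knownJobIds = { [game.JobId] = true }

-- Every server announces its JobId; whichever server holds the
-- lowest JobId after a short window elects itself as the updater.
MessagingService:SubscribeAsync(TOPIC, function(message)
	knownJobIds[message.Data] = true
end)

MessagingService:PublishAsync(TOPIC, game.JobId)
task.wait(5) -- give other servers a moment to announce themselves

local lowest = game.JobId
for jobId in pairs(knownJobIds) do
	if jobId < lowest then
		lowest = jobId
	end
end

if lowest == game.JobId then
	-- this server takes on the external-database polling task
end
```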


Can’t I just use UpdateAsync for this, but with a different approach? As you said, UpdateAsync doesn’t allow reads or writes until it’s finished, which lets me fetch the external database only once per minute for the whole experience; that’s basically having one server do the task. But it also lets me use UpdateAsync as a read call, cancelling it once we know the current data, and update _G.Database accordingly every 30 seconds on all servers. I don’t see why I’d use MessagingService or any other method when UpdateAsync gets both jobs done.

local updateInterval = 30
script:SetAttribute("UpdateInterval", updateInterval)

local data
while true do
	GlobalDataStore:UpdateAsync("Database", function(oldValue)
		oldValue = oldValue or {}
		_G.Database = oldValue
		G.Functions.Moderate()

		if script:GetAttribute("UpdateInterval") == updateInterval then
			local lastUpdate = oldValue.LastUpdate or 0
			if os.time() - lastUpdate >= updateInterval * 2 then
				oldValue.LastUpdate = os.time()
				script:SetAttribute("UpdateInterval", nil)

				task.spawn(function()
					_, data = Database:GetAsync("")
					data.LastUpdate = oldValue.LastUpdate
					_G.Database = data
					script:SetAttribute("UpdateInterval", 0)
				end)

				return oldValue
			end
		else
			script:SetAttribute("UpdateInterval", updateInterval)
			return data
		end
	end)

	local currentInterval = script:GetAttribute("UpdateInterval")
	if not currentInterval then
		script:GetAttributeChangedSignal("UpdateInterval"):Wait()
	end

	task.wait(script:GetAttribute("UpdateInterval"))
end

Looks like doing another UpdateAsync to write the data once it’s ready, after the initial UpdateAsync fetches it from the external database, fixes this issue; the code above is what I did.

Would still like to hear from you about this solution incase I’m missing something. @deleteables

Has the warning gone away? If that’s what you’re aiming for, then it’s good enough. There’s still a possibility that two or more servers simultaneously attempt to update the datastore and hit the rate limit, but it’s not the end of the world.

Yeah, the warnings went away, but honestly this is odd; the 6-second limit that usually gave that warning was removed according to Roblox, yet this still happens. :man_shrugging:

It shouldn’t be saving that many times. Saving when the player leaves the game should be good enough. If you really want it to save every now and then throughout the game, make it save every 5-10 minutes.

Bro, this is an external database that’s used for bans, not player data.

oooooooooh. I just skim-read it. No need to be rude. Nothing in the topic said “ban database”.

Wasn’t rude, but you should read before commenting; you can’t comment solely based on what you read in the title of a post.

I read it just now and I can’t seem to find anything saying it’s a “ban database”.

You still commented without reading: there was nothing here about saving data when a player leaves the game; it’s not even on topic. The sole focus was on getting data from an external database and having it work with a caching system built on a Roblox datastore.

Okay, but reading the title, “data store when 6 second limit was removed”, that normally means people are requesting too much. I only tried to help, but if you don’t want help from me, I won’t help.