How to have a more accurate wait in a loop

EDIT: replace wait() with task.wait().

Also, raw RenderStepped faces the same issue. It’s just a matter of drift.
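For reference, here is a sketch of what the drift-corrected loop below looks like with the newer task.wait() and os.clock() APIs swapped in (my own adaptation, not the original author's code; assumes the standard Roblox task library):

```lua
-- Sketch only: drift-corrected loop using task.wait() and os.clock().
local waitTime = 1
local startTime = os.clock()
for i = 1, 240 do
	-- How far we have drifted ahead of the ideal schedule so far.
	local drift = os.clock() - startTime - (i - 1) * waitTime
	-- Shorten the next delay by the accumulated drift.
	task.wait(waitTime - drift)
end
```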

If you have ever used wait() in a loop such as the one below:

for i = 1, 240 do
	wait(1)
end

you may be surprised to learn that wait() isn’t as trusty or accurate as you might think. In fact, if you attach some code to print out how long the loop took based on tick()…

local startTime = tick()
for i = 1, 240 do
	wait(1)
end
print(tick() - startTime)

Running this code will reveal that your loop actually took over 240 seconds; in my testing it came out to 241.8 seconds. Yikes! If you have a timer that runs in this manner for hours, you can see how it can quickly drift. The solution is to implement a time delta into your loop. See as follows:

local waitTime = 1 -- How long you want the delay in between the loop
local timeOff = 0 -- This variable is used as the delta, and starts at zero since we are at the start of the loop.
local startTime = tick()
for i = 1, 240, waitTime do 
	wait(waitTime - timeOff)
	timeOff = tick() - startTime - i
end

Our waiting function now corrects itself back toward the time it should be at, based upon the actual amount of time that has passed. Note where the waitTime variable is used. If you prefer a while loop, here’s an example of how to implement it there:

local startTime = tick()
local expectedDelay = 0
local waitTime = 1
local timeOff = 0
while true do
	wait(waitTime - timeOff)
	expectedDelay += waitTime
	timeOff = tick() - startTime - expectedDelay
end

This solution combats the inaccuracy of wait(): it sacrifices a little short-term accuracy (±0.003 seconds in my testing) but makes your functions run far more accurately in the long term. For things such as server timers, cooldown timers, and so on, it may be worth implementing a solution like this to make your timing much more accurate.

11 Likes

Something like this just seems too convoluted and bloated to use. I prefer just writing my own custom wait as so:

local RunService = game:GetService("RunService")
local function Wait(seconds) 
	local Heartbeat = RunService.Heartbeat
	local StartTime = tick()
	repeat Heartbeat:Wait() until tick() - StartTime >= seconds
end

Credit to @buildthomas for the above code.

Also, from what I’ve heard, tick() isn’t too accurate as a benchmark itself (I realize I use it in the provided function above), even when used in the same environment (due to users running different operating systems, etc.). os.clock() is a better alternative. That said, unless you need the utmost precision, tick() is fine to use in most applications.
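If you did want to follow that advice, the custom Wait above with os.clock() swapped in might look like this (my own sketch, not tested in production):

```lua
local RunService = game:GetService("RunService")

-- Same Heartbeat-based Wait, but measured with os.clock() instead of tick().
local function Wait(seconds)
	local startTime = os.clock()
	repeat RunService.Heartbeat:Wait() until os.clock() - startTime >= seconds
	return os.clock() - startTime -- actual time waited, for callers that care
end
```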

14 Likes

Bit of a late answer to this (sorry). I disagree with using this method unless absolutely required, for the sole reason that running a conditional check every Heartbeat across an entire game can, at least theoretically, harm performance. Plus, the inaccuracy of wait() isn’t that severe when wait() is only called once; it’s the slight extra time per call that compounds into issues in loops that call wait() repeatedly.

wait doesn’t really have a guarantee on when it will resume. If you’re unlucky and there are a ton of tasks that need to regularly be resumed, you could go over the per-frame quota and your wait() might take a second rather than ~0.03 seconds.

If you need accurate wait you should use a timer that runs on each frame like Heartbeat. I know you mean well but it doesn’t seem like you actually experienced the deficits of using wait as an accurate timer in practice. Checking Heartbeat every frame is a very minor workload compared to the UX impact it could otherwise lead to if suddenly some of your game actions are heavily delayed because the clients/server are resuming many different Lua threads at once.

You obviously don’t need to do this for all Lua threads that you run. Just do it for the ones where having an accurate timer matters.
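As an illustration of the frame-driven approach, a timer along these lines (a sketch of mine, not code from this thread; names are my own) accumulates the frame delta passed by Heartbeat, so a single connection can drive a repeating action without resuming a separate thread and without compounding error:

```lua
local RunService = game:GetService("RunService")

-- Fire `callback` every `interval` seconds, driven by Heartbeat.
-- Leftover time carries over between frames, so error doesn't accumulate.
local function startTimer(interval, callback)
	local accumulated = 0
	return RunService.Heartbeat:Connect(function(dt)
		accumulated += dt
		while accumulated >= interval do
			accumulated -= interval
			callback()
		end
	end)
end
```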

7 Likes

I actually came back to this problem after someone mentioning your comment to me. I performed a test by using my solution versus a slightly changed version of the code posted by UltraMariner.

This was the code used:

RunService = game:GetService("RunService")

local function Wait(seconds) 
	local Heartbeat = RunService.Heartbeat
	local StartTime = tick()
	repeat Heartbeat:Wait() until tick() - StartTime >= seconds
	local timeWaited = tick() - StartTime
	wait(2) -- so that we don't cause lag by mass print
	print(timeWaited)
end

local function DeltaWait(seconds)
	local waitTime = 1 -- How long you want the delay in between the loop
	local timeOff = 0 -- This variable is used as the delta, and starts at zero since we are at the start of the loop.
	local startTime = tick()
	for i = 1, seconds, waitTime do 
		wait(waitTime - timeOff)
		timeOff = tick() - startTime - i
	end
	local timeWaited = tick() - startTime
	wait(2) -- so that we don't cause lag by mass print
	print(timeWaited)
end

for i = 1, 1000 do
	coroutine.wrap(DeltaWait)(30)
end

Upon the conclusion of both functions, the output in each case was a large list of numbers that invariably equalled 30.005xxx. This would suggest that there is no delay in the resuming of the threads. Additionally, while it may not be shown here, Script Performance for the Heartbeat method versus my method shows drastically higher usage for the Heartbeat method. Granted, the Heartbeat method is totally fine in 99.9% of cases (if you are running wait() over 1000 times at the same time, your code is likely doing something wrong), but assuming that Heartbeat is less costly may be incorrect. This test doesn’t fully represent real-world usage of an implementation like mine, and results may change with other actively running processes. However, the pure pausing/resuming of threads shows no output in Script Performance.

  • Performance tab of Heartbeat method, yikes! (screenshot omitted)

Please test timing/threading on clients, not in Studio (preferably weak clients, like low-end mobile devices, and/or the game server). Testing in Studio is not likely to give accurate results on topics like this.

“Activity” and “Rate” in the Script Performance tabs are bad metrics to use to gauge performance, you should not be using these for performance analysis. In my experience this window gives wildly inaccurate results and there have been numerous bug reports on these two metrics. Obviously, a small piece of code that runs every frame / 60 times a second is not going to top off “19%” of your available processing power, so you could have interpreted yourself this is a misleading metric.

That being said:

You’re measuring a different property than what I’m discussing. You have a timeOff that you calculate for the wait(x) solution that you adjust the wait time with. My comment above is only applicable for short wait times where you don’t adjust the time that you’re off-count (especially for single waits).

For long wait times, my comment is not relevant because the relative error will be lower. In the case where I ran into issues, I was running combat effects/animations that had to feel smooth and had to be exactly 0.x seconds to achieve the UX I needed.

As mentioned, on top of that I’m not sure if firing off 1000 threads at the same time is enough to trip the throttling of resuming threads in Studio, or if the throttling applies at all when you’re running it in Studio.

One aspect you also do not measure is how much the individual wait(x)s overshoot.
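Measuring that overshoot could be as simple as the following (a quick sketch of mine; the exact overshoot will vary per run and per machine, so I don't show expected output):

```lua
-- Measure how much longer a single wait(x) takes than requested.
local requested = 0.1
local before = tick()
wait(requested)
local overshoot = (tick() - before) - requested
print(("wait(%f) overshot by %f seconds"):format(requested, overshoot))
```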

1 Like

I misunderstood what the claim you were making about the timers were. The solution I have provided (how to have a more accurate wait in a loop) was intended for longer timers where wait() would be called many times in a habitual manner, instead of precise single wait() accuracy. I didn’t really intend to claim that my solution works for short usages of wait() where extreme precision is required, because that isn’t what it’s meant for.

1 Like

Please do not use tick(); use os.clock() instead. tick() is going to be deprecated soon and is unreliable. os.clock() is very precise and should be used when you want to measure a precise time difference.

2 Likes