EDIT: replace wait() with task.wait().
Also, raw RenderStepped faces the same issue. It’s just a matter of drift.
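For the RenderStepped case, here's a minimal sketch of the idea (assuming a LocalScript, since RenderStepped only fires on the client): derive elapsed time from a clock instead of summing the per-frame deltas, and you can measure the drift directly.

local RunService = game:GetService("RunService")

local startTime = os.clock()
local summedDelta = 0

RunService.RenderStepped:Connect(function(deltaTime)
    summedDelta += deltaTime -- Summing deltas lets error accumulate
    -- The gap between the summed deltas and the real clock is the drift
    print(summedDelta - (os.clock() - startTime))
end)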
If you have ever used wait() in a loop such as the one below:
for i = 1, 240 do
    wait(1)
end
you may be surprised to learn that your wait() isn't as trusty or accurate as you may have thought. In fact, if you attach some code to print out the time the loop took based on tick()…
local startTime = tick()
for i = 1, 240 do
    wait(1)
end
print(tick() - startTime)
Running code such as this will reveal that your loop actually took over 240 seconds; my testing left it at 241.8 seconds. Yikes! That's small for a single run, but if you have a timer that runs in this manner for hours, you can see how quickly that drift adds up. The solution is to implement a time delta into your loop. See as follows:
local waitTime = 1 -- How long you want the delay between iterations
local timeOff = 0 -- The delta; starts at zero since no time has passed yet
local startTime = tick()
for i = waitTime, 240, waitTime do -- i tracks the elapsed time we expect at the end of each iteration
    wait(waitTime - timeOff) -- Shorten the wait by however far behind we've drifted
    timeOff = tick() - startTime - i -- Actual elapsed time minus expected elapsed time
end
Our waiting loop is now correcting itself back towards the time it should be at, based upon the actual amount of time that has passed. Note where the waitTime variable is used. If you fancy a more "while"-styled loop, here's an example of how to implement it there:
local startTime = tick()
local expectedDelay = 0 -- How much time should have passed so far
local waitTime = 1
local timeOff = 0
while true do
    wait(waitTime - timeOff)
    expectedDelay += waitTime
    timeOff = tick() - startTime - expectedDelay -- Actual elapsed minus expected elapsed
end
This solution combats the inaccuracy of wait(): each individual wait() still varies a little (±0.003 seconds from my testing), but your loop will run far more accurately in the long term. For things such as server timers, cooldown timers, etc., it may be worth implementing a solution like this to make your timing much more accurate.
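Per the EDIT note up top, here's the while-loop version sketched with task.wait() in place of wait(). I've also swapped tick() for os.clock() as a higher-resolution clock; that swap is my own assumption, not part of the original method, and the logic is otherwise unchanged.

local waitTime = 1
local expectedDelay = 0
local timeOff = 0
local startTime = os.clock() -- Assumed stand-in for tick()

while true do
    task.wait(waitTime - timeOff)
    expectedDelay += waitTime
    -- Drift = actual elapsed time minus the time that should have elapsed
    timeOff = os.clock() - startTime - expectedDelay
end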