Why is the legacy wait() function more accurate than task.wait()?

So basically, after hearing about task.wait(), the new alternative to wait(), I compared the accuracy of the two.

Legacy wait:

local start = tick()

wait(10)

print("default: "..tostring(tick()-start))

Task wait:

local start = tick()

task.wait(10)

print("task: "..tostring(tick()-start))

However, the results:

Legacy wait:
(screenshot of output)

Task wait:
(screenshot of output)

The legacy wait function came out more accurate than task.wait() even after I ran the test multiple times. Why does this happen? Did I misread something? From what I understand, task.wait() pauses the thread until the duration has elapsed and the next Heartbeat event fires. Please help.

I think it is because of script delay

Hello ItzMeZeus_IGotHacked!

This is not an accurate way to test how long they yield. Both functions return the actual time yielded as their first return value, 'TimeYielded.'

You can test their speed using the following:

print(wait()) --number
print(task.wait()) --smaller number

I would like to know why my method of testing the accuracy doesn't work.

Also, it still doesn't work.
(screenshot of output)
0.27 comes from wait(), while 3.33 comes from task.wait().

Try separate scripts for each of them.

task.wait() without any arguments yields for one frame, while wait() is inconsistent and yields longer (at least 1/30 of a second). wait() may look more accurate at first, but as more players join and the server becomes laggy or exhausted, task.wait() will hold up far better, while wait() won't be nearly as accurate as it was at the start.
Also, for benchmarks you shouldn't use tick(); try os.clock() instead, which uses CPU ticks and is far more precise. But in this case, as others mentioned, both functions return the time yielded, so you can just do:

local t1 = wait(1)
local t2 = task.wait(1)
print(string.format('\nwait: %f\ntask.wait: %f', t1, t2))

Your method is inaccurate, as it uses tick(), which is not meant to be used for benchmarking.

Use os.clock() instead which, as the documentation states, is more precise:

Returns the amount of CPU time used by Lua in seconds. This value has high precision, about 1 microsecond, and is intended for use in benchmarking.

I tested it and both results were close to each other; nothing really worth noting.
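For reference, a minimal sketch of the os.clock()-based test I mean (Roblox Luau; the 10-second durations mirror the original post, and each block is assumed to run in its own Script so one yield doesn't delay the start of the other):

-- Script 1: measure how long the legacy wait() actually yields.
local start = os.clock()
wait(10)
print(("wait: %f"):format(os.clock() - start))

-- Script 2: measure how long task.wait() actually yields.
local start = os.clock()
task.wait(10)
print(("task.wait: %f"):format(os.clock() - start))

In both cases the printed number minus 10 is the overshoot; comparing those overshoots across several runs is a fairer test than a single tick()-based measurement.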


Sorry, but what's benchmarking? I don't quite understand.

Does this return how long it took my CPU to run the whole Lua code?