You can’t compare a time function and a concept!
The act of preventing something from happening until some cooldown is up is called a debounce (originating from electrical switches bouncing, creating multiple signals).
As for whether task.wait() or a timestamp comparison is better, it depends on your case: os.clock() and other time functions are typically used when you want to be able to change something dynamically, whilst for simpler static debounces a task.wait() will suffice and be more performant.
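To illustrate the difference, here's a minimal sketch of both styles (the handler names are hypothetical, and the 3-second cooldown is just an example value):

```lua
-- Style 1: task.wait() debounce -- simple, fixed cooldown
local busy = false
local function onActivatedSimple()
	if busy then return end
	busy = true
	print("Action ran")
	task.wait(3) -- fixed 3-second cooldown
	busy = false
end

-- Style 2: os.clock() debounce -- the cooldown can be changed at runtime
local lastUsed = 0
local cooldown = 3 -- can be modified dynamically (e.g. by a buff)
local function onActivatedTimed()
	if os.clock() - lastUsed < cooldown then return end
	lastUsed = os.clock()
	print("Action ran")
end
```

The second style never yields, which also makes it safer inside event handlers that shouldn't block.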
In general, I use tick() only when necessary for my needs. You can usually accomplish the same thing using either one, but there are actually some behaviors that are exclusive to each technique. Debounces are really nice if you might have a delay you don't know in advance, while tick() is really nice if you need stopwatch-like behavior.
Hey, as some said, these 2 concepts have different use cases:
Tick/Clock Based:
This should be used when you want to prevent a user from using something for a certain amount of time. For example, if you have a button in your game that grants money but should only be clickable every 10 seconds, then you'll use a time-based debounce.
Tick/Clock Based TLDR: Use if you want to limit an action in time, a cooldown
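A minimal sketch of that money-button cooldown (here `button`, `player`, and `giveMoney` are hypothetical placeholders for your own instances and reward logic):

```lua
local COOLDOWN = 10 -- seconds between allowed clicks
local lastClick = 0

button.Activated:Connect(function()
	local now = os.clock()
	if now - lastClick < COOLDOWN then
		return -- still on cooldown, ignore the click
	end
	lastClick = now
	giveMoney(player, 50) -- hypothetical reward function
end)
```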
Debounce based:
This should be used when you want to prevent a user from duplicating an action. For example, if a player clicks a button to "buy a pet", you'd use a debounce because the player shouldn't trigger the purchase twice, and a time-based debounce would not make sense in this scenario.
Debounce based TLDR: Use when you want to prevent duplicate triggers in rapid succession
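A sketch of that purchase debounce, assuming a hypothetical `buyPet` call that may yield (e.g. a RemoteFunction or DataStore write); `buyButton`, `player`, and `petId` are placeholders:

```lua
local purchasing = false

buyButton.Activated:Connect(function()
	if purchasing then return end -- a second click while the first is in flight is ignored
	purchasing = true
	buyPet(player, petId) -- hypothetical; may yield while the purchase processes
	purchasing = false    -- re-enable once the action completes, however long it took
end)
```

Note there's no fixed time here: the lockout lasts exactly as long as the action itself.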
I want to point out that if you want to reset a cooldown when using a clock-based debounce, you can just set the stored timestamp to tick() - cooldownTime.
If you have, for example, a buff that reduces cooldown times, or a debuff that increases cooldown times, clock-based debounces are also extremely convenient since you can change the times very easily.
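Both tricks in one sketch (using os.clock() in place of tick(), per the reply below; the multiplier and function names are hypothetical):

```lua
local baseCooldown = 10
local lastUsed = os.clock()

-- Reset trick: backdate the timestamp so the very next check passes
local function resetCooldown()
	lastUsed = os.clock() - baseCooldown
end

-- Buff/debuff trick: scale the compared value instead of rewriting logic
local cooldownMultiplier = 1 -- e.g. 0.5 with a haste buff, 2 with a slow debuff
local function canUse()
	return os.clock() - lastUsed >= baseCooldown * cooldownMultiplier
end
```

Neither of these is practical with a task.wait()-style debounce, since the wait duration is fixed once the wait starts.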
Also, it’s probably best not to use tick() at all anymore, (see my previous reply).
As I said, it can be up to a second late because it returns a whole-second metric: 1.9 s elapsed will still read as 1 until it reaches 2. The same applies to time().
Theory and practice vary. I've never had any issue running tick() in over 8 years of coding. If anything, os.time() should be preferred over time() when replacing tick() to keep the same behaviour, since both return elapsed time since the epoch.
(Could be wrong, but this inaccuracy could potentially be even higher after yielding.)
Even if it is fine for debounces, I still wouldn't use it if it's being officially discouraged and there are alternatives that do the same thing in practice. It also appears slightly inaccurate after some print testing.
That being said, you probably wouldn’t notice if a cooldown was off by a second anyways, but it’s still inconvenient. It’s probably rare to be off by a full second, or even half, but you’d still be best off using the more consistent method.
Oddly, you could say both…
You can use tick() or similar to know when the cooldown ends, and debounce to block repeated calls.
So a cooldown implementation usually combines both: tick() or similar, and a debounce.
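A sketch of that combined pattern, assuming a hypothetical `performAction` that may yield:

```lua
local COOLDOWN = 5
local lastUsed = 0
local handling = false

local function onUse()
	if handling then return end -- debounce: block duplicate triggers while we're busy
	if os.clock() - lastUsed < COOLDOWN then return end -- timestamp: enforce the cooldown window
	handling = true
	lastUsed = os.clock()
	performAction() -- hypothetical; may yield
	handling = false
end
```

The boolean guards against re-entry during the action itself, while the timestamp enforces the cooldown between completed uses; each covers a gap the other leaves open.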
Hmm that’s embarrassing lol, I’ve always thought tick() was truly a second-based metric.
The inaccuracy you pointed out isn't really inaccuracy though; it's just normal code-execution time (a Lua → C++ → Lua round trip, which takes up to a couple hundred nanoseconds).
Yesnt. If the post you linked is accurate, time() is only updated every RunService.Stepped, and your print() statement runs in the same "step", which leads to the same value being displayed.
EDIT: Actually, I'm not so sure the more I think about it… the precision is still very high. If it were updated every Step, wouldn't there be some kind of pattern to the last digits? (My game is heavy server-side and Stepped can be slow, so that might be why?)
It seems to just be random whether tick() or time() is faster with the yield, but strangely enough, tick() is often faster. I also noticed a… strange oddity with time() after the yield when using a script with no replication lag, but it works just fine from the command bar.
It might just be my code, since I’m pretty tired right now. Tell me if you notice any errors.