Love this module, but sometimes I face the problem of it waiting infinitely… Do you know what causes this?
Do you have a repro script for this? Otherwise, I’m unsure what the issue would be.
EDIT:
Found the issue!
If you have two wait calls right next to each other (e.g. `print(Wait(1), Wait(1))`), this module would first resume the yielded thread and then remove the already-unyielded thread from the priority queue. What happens in some cases is that a new thread was added to the priority queue before the unyielded thread was removed, disrupting the array’s order and messing up `#t`, because getting the length of a table such as this:

```lua
{
	[2] = ...,
	[3] = ...
}
```

would return 0, because that’s actually a dictionary and not an array, since the first index is `2` and not `1`.
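A quick standalone demonstration of that length-operator behavior (plain Lua, not the module’s code):

```lua
local t = {}
t[2] = "B"
t[3] = "C"

-- The # operator only counts a contiguous sequence starting at index 1.
-- With no t[1] there is no such sequence, so #t comes out as 0 here
-- (strictly, # is only defined up to a "border" when a table has holes).
print(#t) --> 0
```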
Alright, so apparently Roblox deprecated `elapsedTime`, which this module uses. What’s especially weird is the message it provides: `elapsedTime` returns the time the Roblox instance has been running for, whilst `os.clock` returns CPU time, which are two different things. I won’t be switching from `elapsedTime` to `os.clock` for now, because even Roblox’s `wait` uses `elapsedTime` internally, as indicated by this script:
```lua
print(select(2, wait(1)), elapsedTime())
```
This would output two nearly identical values. Obviously there is a margin of error, but the point is clear.
Another reason I won’t be switching to `os.clock` is that running `os.clock` and `elapsedTime` in Studio returns two completely different values.
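You can check the discrepancy yourself with a one-liner from the command bar (exact numbers will vary per machine):

```lua
-- elapsedTime() is time since the Roblox instance started;
-- os.clock() is CPU time used by the process.
print(("elapsedTime: %.2f | os.clock: %.2f"):format(elapsedTime(), os.clock()))
```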
I’m unsure why Roblox decided to deprecate `elapsedTime` without even updating the API reference to state it’s been deprecated, but I won’t be switching from `elapsedTime` over to `os.clock` for now.
Update [1.0.2]
- Removed and cleaned up source once more, removing functions that were no longer used or only used once. Also switched from `table.insert(Table, FindBestSpot(Time), Value)` to `Table[FindBestSpot(Time)] = Value`.
- Fixed an issue where, if you had two wait calls right next to each other (e.g. `print(Wait(1), Wait(1))`), this module would first resume the yielded thread and then remove the already-unyielded thread from the priority queue. What happened in some cases is that a new thread was added to the priority queue before the unyielded thread was removed, disrupting the array’s order and thus messing up `#t`, because getting the length of a table such as this:

  ```lua
  {
  	[2] = ...,
  	[3] = ...
  }
  ```

  would return 0, because that’s actually a dictionary and not an array, since the first index is `2` and not `1`.
Same person, same issue~
It still yields infinitely sometimes. Although the problem could be in my own code, I’m pretty confident it isn’t, as my problem was fixed once I used the normal wait instead of your custom one. Any ideas on what the problem could be?
[This has been addressed in the new update.]
Update [1.0.3]
- I have no idea what I was thinking, using `t[i] = v` rather than `table.insert(t, i, v)`: `t[i] = v` wouldn’t shift any elements above `i` up by one, and would thus replace the yielded thread currently stored at that index. This is what caused the seemingly random infinite yields (a short illustration follows this list). Sorry @Sweet_Smoothies and others for the issues caused.
- Fixed an issue with stack overflows; I’ve switched to a much more performant `while true do` loop rather than a recursive function when updating a yielded thread.
- Source is now more compact and clean.
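Here’s the difference in a standalone sketch (plain Lua, not the module’s source):

```lua
local t = {"A", "B", "C"}

-- Direct assignment clobbers whatever already lives at that index:
t[2] = "X"
print(table.concat(t, ", ")) --> A, X, C   ("B" is gone)

-- table.insert shifts the existing elements up by one instead:
local u = {"A", "B", "C"}
table.insert(u, 2, "X")
print(table.concat(u, ", ")) --> A, X, B, C   (nothing is lost)
```

In the module’s case the clobbered value was a yielded thread, so it could never be resumed again; hence the infinite yields.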
Update [1.0.4]
I am pleased to announce that this module now uses binary heaps, probably the best solution when it comes to speed. A binary heap averages a whopping O(1) insertion time and O(1) find-min time, and most notably it has an O(log n) delete-min time! This is a huge improvement over the previous version of the module. Thank you CntKillMe and Jiramide for helping me understand how to properly implement binary heaps.
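For anyone curious, here’s a minimal sketch of a binary min-heap keyed on resume time (illustrative only; the `Heap` name and `resumeTime` field are my shorthand, not the module’s actual source):

```lua
local Heap = {}

-- Add a node, then sift it up until its parent resumes no later than it does.
function Heap.Insert(heap, node)
	heap[#heap + 1] = node
	local i = #heap
	while i > 1 do
		local parent = math.floor(i / 2)
		if heap[i].resumeTime >= heap[parent].resumeTime then break end
		heap[i], heap[parent] = heap[parent], heap[i]
		i = parent
	end
end

-- Remove and return the root (the soonest resume), then sift the
-- replacement down by swapping with its smaller child.
function Heap.PopMin(heap)
	local min = heap[1]
	heap[1] = heap[#heap]
	heap[#heap] = nil
	local i = 1
	while true do
		local left, right = i * 2, i * 2 + 1
		local smallest = i
		if heap[left] and heap[left].resumeTime < heap[smallest].resumeTime then
			smallest = left
		end
		if heap[right] and heap[right].resumeTime < heap[smallest].resumeTime then
			smallest = right
		end
		if smallest == i then break end
		heap[i], heap[smallest] = heap[smallest], heap[i]
		i = smallest
	end
	return min
end
```

The root is always the yield that should resume soonest, which is what makes find-min O(1) and delete-min O(log n).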
I’ve also switched from `elapsedTime` to `os.clock`; since Roblox deprecated `elapsedTime` and said to use `os.clock` instead, I’ll be doing that. The second return parameter has been renamed accordingly from `elapsedTime` to `CPU Time`, since that is what `os.clock` returns.
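Assuming the module keeps the same call signature as Roblox’s `wait` (which the benchmark further down suggests), the rename looks like this to a caller; the module path is just an example:

```lua
local Wait = require(script.ModuleScript) -- example path, adjust to yours

-- First return: how long we actually yielded.
-- Second return: os.clock() at resume (previously elapsedTime()).
local delta, cpuTime = Wait(1)
print(delta, cpuTime)
```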
Update [1.0.4.5]
- Fixed issues where, if you started several yields at once, the script might not be able to keep up.
Hey all,
I’ve updated the thread with an explanation of why this module is actually important and good to use. I’ll copy what I appended onto the thread into this post:
Why should I use this rather than the default wait?
The main concern in a lot of Roblox games is that they use the `wait` function. Like, a lot. This is worse than you may think. Firstly, this can clog up the task scheduler very easily and deteriorate the state of your game very quickly. @Maximum_ADHD has demonstrated this issue, as seen here:
Yeah… Pretty bad.
So, how does this module address this? Simple! This module manages and sorts your yields to make sure there’s always only one wait running at once. This makes it practically impossible to clog up the task scheduler, saving your game from all the nasty issues.
TL;DR: This module makes sure your game always has only one wait running at once, so it won’t die from a flooded task scheduler.
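Conceptually, that single running wait looks something like this rough sketch (it reuses the min-heap helper sketched under Update [1.0.4] above; the names are assumptions, not the module’s actual source):

```lua
local RunService = game:GetService("RunService")

local heap = {} -- pending yields; the soonest resume time sits at the root

local function CustomWait(seconds)
	local now = os.clock()
	Heap.Insert(heap, {
		startTime = now,
		resumeTime = now + (seconds or 0),
		thread = coroutine.running(), -- assumes the caller is inside a coroutine
	})
	-- Suspend until the loop below resumes us; the values passed to
	-- coroutine.resume become our return values.
	return coroutine.yield()
end

-- The one "wait" that is actually running: a single Heartbeat connection
-- resumes every thread whose time is up.
RunService.Heartbeat:Connect(function()
	while heap[1] and heap[1].resumeTime <= os.clock() do
		local node = Heap.PopMin(heap)
		coroutine.resume(node.thread, os.clock() - node.startTime, os.clock())
	end
end)
```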
Hey there. I tested your benchmark (server and client) and I get `Num "failed": 15999` with your module and `Num "failed": 16000` with Roblox’s `wait()`.
This is an issue with hardware differences. I’m unsure regarding your computer specs, but my computer is pretty good and as such gets through the benchmark’s workload faster, which leads to different results.
I have an i7-6700 processor and 8 GB of RAM. Is that not fast enough? I’m sure most Roblox players don’t even have high-end devices lol.
I’m unsure at this point. Using this code in a Script in ServerScriptService to benchmark gives me a result of 0 failed on this custom wait, and 8302 failed on Roblox’s wait. To benchmark, replace `BenchmarkFunc = CustomWait` with `BenchmarkFunc = wait`.
`require`ing works fine for me, but you need to add a `wait(3)` before the benchmark; the game hasn’t fully loaded if you run the benchmark directly after the `require`, which I assume is the issue.
This code runs fine and provides the same results as previously, consistently.
```lua
local BenchmarkFunc = require(script.ModuleScript)

wait(3) -- let the game finish loading before benchmarking

local NumFailed = 0
for i = 1, 16000 do
	coroutine.wrap(function()
		local d = BenchmarkFunc(1)
		-- Count the yield as "failed" if it overshot 1s by more than 4%.
		if d > 1.04 then
			NumFailed += 1
		end
	end)()
end

wait(1.1)
print('Num "failed":', NumFailed)
```
Well, I am going to use this. I didn’t know about this issue; thanks for creating an alternative wait function for us!
Yeah, I’m getting inconsistent results, but I guess everyone’s mileage may vary. Maybe my computer is just too slow now. Thanks anyway.
Update [1.0.5]
- Fixed some issues where the heap would sometimes not properly sort all active yields based on their priority (i.e. time left to yield).
- Source is now hosted on my GitHub; it’s more trustworthy there and also easier for me to update.
I left some commented-out debug statements in the source; you can mess around with them if you want to, for whatever reason.
I’m back! This time the problem is the Wait not waiting the proper amount of time, usually waiting longer than requested. Although I’m not advanced enough to understand how this works, I think the problem relates to coroutines? Putting the wait function in a coroutine seems to be what causes the inaccurate wait times.
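Roughly the pattern I mean, as a minimal repro sketch (the `CustomWait` name and module location are placeholders for however you require it):

```lua
local CustomWait = require(game.ReplicatedStorage.CustomWait) -- placeholder path

coroutine.wrap(function()
	local delta = CustomWait(1)
	print(delta) -- for me this often prints noticeably more than 1
end)()
```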