When calling coroutine.yield() immediately after a wait(), the yield fails to suspend the coroutine.
This works:
coroutine.wrap(function()
    print(1) -- prints
    coroutine.yield()
    print(2) -- does not print. expected
end)()
This does not:
coroutine.wrap(function()
    print(1) -- prints
    wait(1)
    coroutine.yield()
    print(2) -- also prints. unexpected
end)()
This is easily reproducible 100% of the time, both via the command bar and from a script.
There’s an extremely hacky workaround for this problem, which seems to be fairly reliable (I’ve looped it thousands of times and it worked correctly every time):
coroutine.wrap(function()
    print(1) -- prints
    wait(1)
    game:GetService("RunService").Stepped:Wait()
    coroutine.yield()
    print(2) -- does not print. expected
end)()
Using it in any kind of production code would likely be asking for trouble, but it’s there if you need it. This may also give a hint into what’s going on internally that prevents coroutine.yield() from doing its job.
For context and a use case for this: I have some combat logic with wait() calls to time everything. If players leave the match early, this logic is still running, so I was planning to conditionally yield indefinitely to stop it from executing. Given this limitation, I’ll likely have to rely on erroring out instead.
I have some combat logic that waits often so it can send some info to the client, have them display it, then resume and hide the info (which is arguably a bad practice). My game matches players together to put them in their own match to battle, which is where all this yielding occurs.
It’s a card game, so a simple example of how this works is (see the sketch after this list):
1. Players get in a match together.
2. A player plays a card.
3. That card is then displayed to both players (via the server).
4. After a short delay, the displayed card is hidden.
5. The server then tells clients to play animations, waits an arbitrary amount of time until the animations are done, then continues.
6. Etc.
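A minimal sketch of what this server-side flow might look like (the RemoteEvent names and the AnimationLength field are made up for illustration, not the poster’s actual code):

local ReplicatedStorage = game:GetService("ReplicatedStorage")
local displayCard = ReplicatedStorage.DisplayCard -- hypothetical RemoteEvent
local hideCard = ReplicatedStorage.HideCard       -- hypothetical RemoteEvent
local playAnims = ReplicatedStorage.PlayAnims     -- hypothetical RemoteEvent

local function onCardPlayed(card)
    displayCard:FireAllClients(card) -- show the played card to both players
    wait(2)                          -- leave it on screen briefly
    hideCard:FireAllClients()        -- hide it again
    playAnims:FireAllClients(card)   -- tell clients to play animations
    wait(card.AnimationLength)       -- hypothetical duration; wait out the animations
    -- ...continue with the next step of the combat logic...
end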
Problem is, what I use to modify player stats and other game state doesn’t take into account which match the player is a part of. So if players forfeit, or one disconnects and the match ends, and a player immediately gets into another match, the old combat logic will still be running due to all the yielding. This in turn will affect their stats in the next match over.
My solution to this was to wrap it all in a coroutine and yield indefinitely (I imagine it gets garbage collected, but I’m not certain), or to error out of it and catch the error in a pcall.
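A rough sketch of the error-and-pcall version (matchIsOver() is a hypothetical stand-in for however the game tracks match state):

local function matchIsOver() -- hypothetical; replace with real match-state tracking
    return false
end

local ok, err = pcall(function()
    while true do
        if matchIsOver() then
            error("match ended") -- bail out of the combat logic entirely
        end
        -- ...one step of combat logic, with its wait() calls...
        wait(1)
    end
end)
if not ok then
    print("combat logic stopped:", err)
end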
So I ran into this problem, and what I think happens is that Roblox has two types of threads: normal Lua ones and Roblox’s custom ones.
Roblox added wait(), and I think it only works with its own threads, while coroutine.yield() doesn’t work with Roblox’s threads. When you call wait(), it converts a normal Lua thread into a Roblox thread.
The workaround I used is to call wait(2^127).
This acts identically to coroutine.yield(), and you can call coroutine.resume() on the thread that is yielded in the wait() and it will resume as expected.
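A minimal sketch of that pattern, assuming (per the above) that an externally resumed wait() behaves like a resumed coroutine.yield():

local thread = coroutine.create(function()
    print("before the long wait")
    wait(2^127) -- effectively yields forever, but the thread stays resumable
    print("resumed externally")
end)
coroutine.resume(thread) -- runs until the wait(2^127)

-- Later, from whatever code decides the match is over:
coroutine.resume(thread) -- resumes past the wait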
The only caveat is that I don’t think Roblox removes the yielded thread from its task scheduler queue. You can observe this by doing wait(5) and resuming the thread yourself: after 5 seconds, when wait() tries to resume it, it will throw an error. However, it isn’t that significant, because I think Roblox uses a BST implementation for the task scheduler (I might be wrong), and in any case it’s very fast since it’s on the C/C++ side.
Sorry if there’s bad grammar, etc. I wrote this in a hurry.
If you can put this combat logic in a single function, you could return out of it, possibly providing a success value (e.g. true for succeeded, false for player-left-the-game).
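For example, a sketch of that shape (combatIsOver() and playerStillInMatch() are hypothetical helpers):

local function combatIsOver() return false end       -- hypothetical
local function playerStillInMatch() return true end  -- hypothetical

local function runCombat()
    while not combatIsOver() do
        if not playerStillInMatch() then
            return false -- player left the game; abort
        end
        -- ...combat step with its wait() calls...
        wait(1)
    end
    return true -- combat ran to completion
end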
You could also create a BindableEvent and call :Wait() on its Event. If I recall correctly, this should be garbage-collected properly, so it won’t leak.
Ultimately, you shouldn’t be yielding indefinitely to stop execution anyways. That’s what errors, return, and break are for.
If you had a normal use for coroutines, you could use my Cord module. If you’re just pausing execution forever, you might as well just use BindableEvents, as that’s how Cord works.
I don’t think there’s any actual difference between the two – at least not one that’s that deep. I’m pretty sure the only difference is that “roblox threads” are in the thread scheduler and “normal threads” are not. wait puts the current thread in the thread scheduler with a time to resume, then yields the thread.
My guess would be that most Roblox yielding operations take the thread out of the scheduler when they yield, so it doesn’t get auto-resumed. coroutine.yield obviously doesn’t do this, and changing its behavior now could break existing code.
Just guesses though.
I don’t think this ever gets cleaned up, so this would “leak” and stay in the thread scheduler forever.
Something like this should work better than wait(2^127):
local function yield()
    local event = Instance.new("BindableEvent")
    event.Event:Wait() -- suspends this thread without leaving a timed resume in the task scheduler
    event:Destroy()
end
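Used inside a coroutine, it might look like this (a sketch that assumes externally resuming the thread makes :Wait() return, as the next lines discuss):

local thread = coroutine.create(function()
    print("combat paused")
    yield() -- the function above; suspends until resumed externally
    print("combat resumed")
end)
coroutine.resume(thread) -- runs until yield()

-- Later:
coroutine.resume(thread) -- resumes past the BindableEvent wait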
If the coroutine is never resumed, it definitely gets garbage collected.
If the coroutine is resumed then the event should get garbage collected.
This implementation still loses the ability to pass arguments out of the coroutine with coroutine.yield(...) and pass arguments in with coroutine.resume(...), but at least it lets you yield until resumed externally. If you need argument-passing, you can use BindableEvents or something like Cord.
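For reference, a sketch of hand-rolled argument passing with a raw BindableEvent (this is not Cord’s actual API): :Wait() returns whatever arguments were passed to :Fire().

local resumeSignal = Instance.new("BindableEvent")

coroutine.wrap(function()
    print("waiting for a value")
    local value = resumeSignal.Event:Wait() -- returns the arguments from :Fire()
    print("got", value)
end)()

resumeSignal:Fire(42) -- resumes the waiting thread; it prints "got 42"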
This behavior will have no doubt been ingrained in Roblox for years now, so a change would be very unlikely at this point. I’ll likely end up just erroring the thread (since it’s already encapsulated in an event listener) to get it to stop, as what I’m attempting to do with coroutines is very hacky.
You’re correct that there is no special “roblox thread” notion in the engine. There are simply the threads in the “to-be-resumed at X time” queue and “all the other threads” (which will naturally get GCed by Lua when there are no more references to them).
Anything which yields in the Roblox sense of “waiting” is simply suspending the thread and putting it in the to-be-resumed queue with a time to be resumed at. Those functions don’t care at all how the thread got created, they put it in the queue all the same.
Then when the engine resumes a thread in the queue, it looks at the yield result to know where to put it back in the queue. So when you coroutine.yield() something that returned from waiting, the engine doesn’t get anything back… so it just sticks it in the to-be-resumed queue “as soon as possible”, which happens to be after all of the threads currently scheduled for the frame, but before it hands control back to the engine. This could be used to “do something last after everything else” in a given frame, though that’s a very bad idea now that BindToRenderStepped with priority is available. – Apparently this is not the behavior anymore.
It’s worth having a read of the following reply to a thread:
I am very keen on changing the current behavior so that we can use yield anywhere and have it work as you would expect, without the Roblox scheduler interrupting it under any circumstance.
It’s worth noting that I’ve referred to threads as if there are two types, but Stravant is correct in what he is saying about the implementation behind the scenes.
Oh sorry, I guess I should check that stuff still works that way before I say how it works, given how long it’s been.
What do you do now to break the ChildAdded lock (if you want to remove an Instance that was parented during the same frame)? That was how I handled that behavior back in the day (on object added, yield the current thread with coroutine.yield(), then remove the object).
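For reference, a sketch of that old pattern (it relies on the end-of-frame rescheduling behavior described earlier, which apparently no longer applies):

-- Historical sketch only; the "Unwanted" name is a hypothetical filter.
workspace.ChildAdded:Connect(function(child)
    if child.Name == "Unwanted" then
        coroutine.yield() -- old behavior: rescheduled to run after everything else this frame
        child:Destroy()   -- remove it once the ChildAdded lock has been released
    end
end)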