What's the difference between the two? They both make and run new threads, right?
There really isn’t a huge difference between the two as far as I know.
However, spawn() has a built-in wait() before the code within it executes, whereas coroutine execution is immediate. To provide an example:
```lua
spawn(function()
	print("lol")
end)
print("Hello there") -- This will print first

-- Coroutines --
coroutine.resume(coroutine.create(function()
	print("lol") -- This will print first
end))
print("Hello there")
```
However, you can use either without any harm, though if you're achieving things through threading, in some cases it will be better to use a coroutine over spawn(). For one, coroutines can be created without being resumed right away:
```lua
local a = coroutine.create(function()
	print("Hello world!")
end)
wait(2)
coroutine.resume(a) -- Will print 'Hello world!'
```
And if you use coroutine.wrap(), you can pass arguments and parameters as well, which in certain cases can be very useful:
```lua
local a = coroutine.wrap(function(str)
	print(str)
end)
a("lol") -- Will print 'lol' to the output
```
Error handling with coroutines is a huge pain. I highly recommend using delay to avoid having to deal with this. Alternatively, use @Quenty's fastSpawn, which spawns immediately, without the delay @Wingz_P mentioned.
The reason for spawn delaying is that it queues up the function to run on the next step of the task scheduler (which is essentially one frame). Quenty's fastSpawn uses a fun "hack": firing a BindableEvent executes the new thread immediately while still getting correct error handling / stack traces.
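For reference, the BindableEvent trick looks roughly like this. This is a minimal sketch of the pattern, not Quenty's exact implementation; the function name and argument handling here are illustrative:

```lua
-- Sketch of a BindableEvent-based fastSpawn: connecting a handler and
-- firing the event runs the handler immediately in a fresh thread,
-- with normal error reporting and stack traces.
local function fastSpawn(callback, ...)
	local event = Instance.new("BindableEvent")
	local args = table.pack(...)
	event.Event:Connect(function()
		callback(table.unpack(args, 1, args.n))
	end)
	event:Fire()
	event:Destroy()
end

fastSpawn(function(msg)
	print(msg) -- runs immediately, unlike spawn()
end, "Hello from fastSpawn")
```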
From PiL: https://www.lua.org/pil/9.1.html

coroutine.resume returns a boolean saying whether the resume was successful, plus an error message if there was an error, so you can treat resuming a coroutine much like a pcall.
Take this coroutine for example:
```lua
local co = coroutine.create(function(a)
	wait(1)
	if invalid.table then
		print("error")
	end
end)

while true do
	local s, msg = coroutine.resume(co, 4)
	if not s then
		error(msg)
	end
	wait(4)
end
```
This will error after one second with this message and an appropriate stack trace printed in the console:
attempt to index global ‘invalid’ (a nil value)
Therefore, coroutines shouldn't be any more of a pain to handle than a pcall.
It gets much more complicated when you try to retrieve the actual stack trace. In large projects it becomes incredibly difficult to know exactly where errors are coming from. The stack trace/traceback is incredibly valuable, and it is essentially lost with coroutines unless you do some hacky maneuvers in your code.
Wouldn't coroutine.wrap be a good alternative? It propagates errors normally, and is easier to use:
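A small sketch of what "propagates errors normally" means here: an error inside a wrapped coroutine is re-raised at the call site, so ordinary pcall handling works on it.

```lua
local f = coroutine.wrap(function()
	error("something broke")
end)

-- The error surfaces at the call site, so a plain pcall catches it,
-- unlike coroutine.resume, which returns a status boolean instead.
local ok, err = pcall(f)
print(ok, err)
```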
You still lose stack info overall, though.
Yeah, debugging asynchronous state is actually one of the most painful things, so preserving that stack information is really important. Roblox also has weird error ownership in certain cases.
I think it's worth noting that since spawn() creates a Roblox thread, it will take at least 1/30th of a second before it can run, because the task scheduler unfortunately runs at 30 Hz. This means that in general-purpose applications you should avoid it.
Yup. Spawn() is the same as delay(0).
You can pass the coroutine as the first parameter to debug.traceback. This will get you the full stacktrace to the error.
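A quick sketch of that, assuming the standard Luau debug.traceback signature that accepts a thread as its first argument:

```lua
local co = coroutine.create(function()
	error("boom")
end)

local ok, msg = coroutine.resume(co)
if not ok then
	-- Passing the dead coroutine to debug.traceback recovers the
	-- stack trace from inside that coroutine, which resume alone loses.
	warn(msg .. "\n" .. debug.traceback(co))
end
```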
I'm not sure about the statement above on 'debugging asynchronous state'. Since Lua threads are coroutines (not real threads), then as per my understanding race conditions do not exist, as no more than one coroutine is executing at once (assuming appropriate yielding).
This has the upside of easier-to-write code (you don't have to watch your locks as much), but the downside of poorly leveraging multiple cores, which real threads can be moved to (à la Lua Lanes).
If you need code to execute immediately, why not just execute it in the current ‘thread’? I would say coroutines / spawns are used for code which can be executed at some time in the near future (or delay, if you want to schedule)
Interesting, Roblox encourages use of coroutine as well:

- The fewer scripts using `wait` at any given time, the better.
- Avoid using `while wait() do ... end` or `while true do wait() end` constructs, as these aren't guaranteed to run exactly every frame or gameplay step. Use events like `Heartbeat` instead, as these strictly adhere to the core task scheduler loop.
- Similarly, avoid using `delay`, as it uses the same internal mechanics as `wait`. Uses of `spawn` are generally better served with the coroutine library:
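The replacement the docs are pointing at looks roughly like this; `doWork` here is just a placeholder name for whatever the spawned function does:

```lua
-- Instead of queuing through the task scheduler:
spawn(function()
	doWork()
end)

-- ...the coroutine library runs the thread immediately:
coroutine.wrap(function()
	doWork()
end)()
```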
I'm not sure I understand the desire to avoid spawn() and wait(). My guess is that the scheduling logic for spawned threads is space/time complex.
Great article, must read - Task Scheduler
Here’s a good thread to review, which has Roblox Staff on it.
Looks like multicore is on the roadmap - Faster Lua VM Released
We’re also looking into a way to unlock access to multiple cores. As I mentioned during my RDC talk (which you’re all welcome to watch! https://www.rdcglobal.com/video-stream-gallery/lua-as-fast-as-possible-rdc-2019 ), we think we have a design that will allow you to run Lua code on multiple threads safely and performantly, which could unlock performance for some specific usecases that just isn’t achievable right now.
Apologies for [probably] necroposting.
You use them to take advantage of the ability to run code concurrently. You don't see the parent process halt when you call fork() in C, for example. You might want the parent process to continue working on something while the child process starts and does something else, without impeding the parent process's progress through its own program.
One example I can give is this radio system I made that lets a player call an artillery strike on a given location. The client can request an artillery strike by asking the server for one via RemoteFunction. On the server, I’ll do a check to make sure the call is valid. If it’s not, then it’ll return false, but otherwise, it’ll start the code that handles the entirety of the artillery strike and return true. This boolean is used by the client to know whether a call was successfully executed or not, and it is expected to return ASAP as the client has GUI elements and other things to deal with based on what result it gets back.
If I let the main process/“thread” handle it all, then the client would not be getting a response from the server until after the artillery strike concludes, because it will have to run all the code for the artillery strike first before being able to return anything to the client. (Obviously, there are workarounds available if the main process absolutely must run the artillery strike code, but those are avoidable here.)
In contrast, if I use a coroutine or spawn(), I can create a child process to handle the execution of the artillery strike while the parent process continues and returns the result to the client [practically] immediately, since in this case it’s not being held up by the burden of running all the code related to the artillery strike.
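The pattern described above can be sketched like this. All the names here (`requestStrike`, `isValidCall`, `runArtilleryStrike`) are hypothetical placeholders for the poster's actual RemoteFunction and handlers:

```lua
-- Validate the request, hand off the long-running work to a new
-- thread, and return to the client immediately.
local requestStrike = game.ServerStorage.RequestStrike -- a RemoteFunction

requestStrike.OnServerInvoke = function(player, position)
	if not isValidCall(player, position) then
		return false
	end
	coroutine.wrap(function()
		runArtilleryStrike(position) -- may take many seconds
	end)()
	return true -- the client gets its answer right away
end
```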
Another example (although if you end up doing this in an actual project then you might want to rethink what you’re doing) is having a significant number of tasks to complete when a player leaves the game. What happens if the server closes? One thing you could do is have a function bound using BindToClose(), and this function iterates over every player and performs whatever tasks it needs to.
Remember that any functions running while a game tries to shut down have a time limit before the server says "time's up", halts all execution, and shuts down. If you have a lot of players in the server and a lot of tasks to do for each player, then letting the main process iterate over all the players and perform all the tasks could mean that some tasks aren't performed for the players at the end of the queue if everything isn't done quickly enough.
If you use a coroutine/spawn(), you could have the main thread iterate over each player and create a new child process that performs the tasks for that player. The main thread will finish quickly, since all it's doing is iterating over a table that's at most 100 entries large, and the tasks for each player will run concurrently, so multiple players are dealt with at once instead of all of them being handled one by one in a queue.
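A minimal sketch of that shutdown pattern, where `saveDataFor` is an illustrative placeholder for the per-player cleanup work:

```lua
-- Fan the per-player work out into separate threads so no player at
-- the end of the queue is starved by the shutdown time limit.
game:BindToClose(function()
	for _, player in ipairs(game:GetService("Players"):GetPlayers()) do
		coroutine.wrap(function()
			saveDataFor(player)
		end)()
	end
	-- Note: BindToClose returning ends the grace period, so in practice
	-- you'd also wait here until every per-player thread reports done.
end)
```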