Full Release of Parallel Luau V1

Yeah, there’s going to be more documentation / tutorials on this feature in the future. Additionally, while right now it’s difficult to communicate between Actors realistically (the feature works best when you can split isolated units of work into actors), we’re working on additional features for message passing and sharing data that will make this much easier / practical. So effectively what we have released is “parallel scripting V1”, with V2 already being worked on.

It’s included in the release and works the same way as described previously. It looks like the documentation for this is currently missing; as noted above, we’re working on backfilling the documentation for this feature.


Thanks for the request! You’re right, there’s no reason for GetPlayers to be marked as Unsafe. Unfortunately we have to manually audit and annotate each function that can be used in parallel, and we missed this - we’ll fix it!


Absolutely awesome system! I’ve been using it for a long time in my game, and I’m glad it’s evolved into something that can be officially used in-experience.
My main qualm with this, though, is that despite its promising functionality, the learning curve seems to be overly steep, and even now I’m not sure how it works. I was able to get hold of Elttob’s code, which allows you to directly run modules in parallel, and which I found to be a much easier system than what’s currently offered with Actors. For reference, here is my modified version of the Parallel module:

local RunService = game:GetService("RunService")

local Parallel = {}

local workers = {}
local waitQueue = {}

local numRunning = 0

local workerFolder = Instance.new("Folder")
workerFolder.Name = "__ParallelWorkers"
workerFolder.Parent = game:GetService("Players").LocalPlayer.PlayerScripts

local actorTemplate = script.ClientWorker

local function getFreeWorker()
	for _, worker in ipairs(workers) do
		if worker.available then
			return worker
		end
	end

	-- No idle worker: clone a fresh Actor and require its entry point
	local newActor = actorTemplate:Clone()
	newActor.Parent = workerFolder

	local newWorker = {
		run = require(newActor.WorkerMain),
		available = true,
	}

	table.insert(workers, newWorker)
	return newWorker
end

function Parallel.runParallel(functionModule, ...)
	local worker = getFreeWorker()
	numRunning += 1
	worker.available = false

	worker.run(functionModule, ...)

	worker.available = true
	numRunning -= 1

	-- Last job finished: wake everything blocked in waitForThreads
	if numRunning == 0 then
		for _, thread in next, waitQueue do
			task.spawn(thread)
		end
		table.clear(waitQueue)
	end
end

function Parallel.waitForThreads()
	if numRunning ~= 0 then
		table.insert(waitQueue, coroutine.running())
		coroutine.yield()
	end
end

return Parallel

Thanks to it, I was able to very easily set up my code to run in parallel just by separating the code I was interested in parallelizing into a separate module and calling Parallel.runParallel(Module).
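For anyone curious what that looks like in practice, here’s a minimal sketch; the `HeavyWork` ModuleScript, its location, and its contents are made up for illustration:

```lua
-- HeavyWork (ModuleScript): returns the function a worker should run
return function(from, to)
	local sum = 0
	for i = from, to do
		sum += math.sqrt(i) -- stand-in for expensive, isolated work
	end
	print("partial sum:", sum)
end

-- Caller (LocalScript, sibling of the Parallel module):
local Parallel = require(script.Parent.Parallel)

Parallel.runParallel(script.Parent.HeavyWork, 1, 500_000)
Parallel.runParallel(script.Parent.HeavyWork, 500_001, 1_000_000)
Parallel.waitForThreads() -- yields until every worker is idle again
```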


Yup, we’re aware of these limitations (in addition to the aforementioned sparsity of documentation, which is being worked on!). Briefly, there are two modes in which the system can be used:

  • Each isolated entity, like an NPC / car / etc., gets an Actor with a bunch of scripts inside it that can subscribe to parallel events / use task.synchronize / etc. The use of Actors is important here because they delineate DataModel access, which is going to be important in the future once we start allowing changing DataModel in parallel.

  • A script isn’t really related to a specific subtree of the DataModel and just needs to perform some work independently; however, due to the existing interface, it needs to use the Actors which can be awkward compared to the first category of use cases.

We’re looking into whether we can solve the second use case more ergonomically, but it is a little difficult to solve it “perfectly”, so we’re hoping that in the meantime helper libraries can be used for such ad-hoc parallelism.
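To illustrate the first mode, a per-entity script under an Actor might look roughly like this sketch (the NPC layout and the `computeTarget` helper are hypothetical):

```lua
-- Script placed inside an Actor; one Actor per NPC
local RunService = game:GetService("RunService")

local npc = script:GetActor().Parent -- assumes the Actor sits inside the NPC model

RunService.Heartbeat:ConnectParallel(function(deltaTime)
	-- Runs in parallel with other Actors: read-only DataModel access only
	local target = computeTarget(npc, deltaTime) -- hypothetical pure helper

	task.synchronize()
	-- Back in the serial phase: writing to the DataModel is allowed again
	npc.Humanoid:MoveTo(target)
end)
```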


I’m not sure we ever promised this?

I remember things incorrectly, my bad - it wasn’t from RDC. I’m going on a secondary source, someone who had a conversation with you about this, and it went something like:

“Parallel Luau’s current implementation isn’t the full picture. Eventually you should be able to spawn parallel threads in the same way that you spawn coroutines, traditional module architecture is something we want to accommodate. Cloning scripts and parenting to special instances is just how it is for now.”

What I think the problem is with Actors right now is that they are coupled to the DataModel, as you mentioned. I believe Actors in their current implementation have a valid use case and should stay as they are; however, I want a different way to create threads in a pure Luau environment that are isolated from the main thread’s memory and communicate via message passing. Unlike Actors, they would not create an associated instance in the DataModel, since managing instances can be annoying.


Ah, right - that was mentioned in a few replies above. The big question there is what’s the API, we’re exploring a few options but they have a variety of ergonomics issues.

It’s still not going to imply arbitrary access to shared objects, and it would imply more significant DM restrictions - in that sense you can already implement a library that takes the management pain away (see replies above), it’s a question of whether we can implement something notably better, or at least something that’s very cleanly defined.


Is there some way to run parallel code that is a function rather than a separate module, as you would in a language like C#? It’s a bit jarring having to split every piece of parallelized code into separate modules, rather than something like this:

static void Main(string[] args)
{
    Thread t1 = new Thread(SomeMethod1) { Name = "Thread1" };
    Thread t2 = new Thread(SomeMethod2) { Name = "Thread2" };
    Thread t3 = new Thread(SomeMethod3) { Name = "Thread3" };

    t1.Start();
    t2.Start();
    t3.Start();
}

Since each Actor runs a separate Luau VM and has its own memory, it doesn’t seem possible to inject/pass a function into a thread like in the example above. Syntax like this would make parallel Luau far more practical, at least to me and everyone I’ve spoken to.


I believe that’s what the discussion above is about:

Completely disregarding the blatant technical challenges and efficiency issues this potentially poses, I would actually quite like something sort of like task.synchronize and task.desynchronize for this.

	task.parallelize() -- Move this lua thread to run in parallel (in its own thread)
	-- Now we can call desynchronize/synchronize and do parallel stuff
	task.deparallelize() -- Move this lua thread back into the main thread

But if this were possible, it may even make sense to hijack task.synchronize and task.desynchronize in the first place and have them behave differently on the main actor, splitting off into their own unique threads (kind of like creating a new actor and then spawning the thread inside it).



I did some experimentation, and there is a bit of delay/overhead when running task.desynchronize() and task.synchronize() that is good to be aware of so you don’t overuse them. I guess it has to do with the execution phases.

Example code:

-- `targetWhitelist`, a list of candidate Models, is assumed to be defined elsewhere
local sumMS = 0
local count = 0

local function distance(m1, m2)
	if not m1.PrimaryPart or not m2.PrimaryPart then
		return math.huge
	end

	return (m1.PrimaryPart.Position - m2.PrimaryPart.Position).Magnitude
end

local function getClosestTarget(char: Model)
	local start = tick()
	local closestDistance = math.huge
	local closestTarget = nil

	if not char.PrimaryPart then
		return closestTarget, closestDistance
	end

	for _, target in pairs(targetWhitelist) do
		if not target.PrimaryPart then
			continue
		end

		local distance = distance(target, char)

		if distance < closestDistance then
			local combatHealth = target:FindFirstChild("SC_CombatTarget")

			if combatHealth and not combatHealth.IsDead.Value then
				closestDistance = distance
				closestTarget = target
			end
		end
	end

	sumMS += (tick() - start) * 1000
	count += 1
	print("Average time:", sumMS / count, "ms")
	return closestTarget, closestDistance
end


  • Run each check normally, avg. time: 0.042ms
  • Run each check in parallel, avg. time: 47ms

My guess is that the work itself might run faster in parallel, but that switching between the serial and parallel execution phases adds enough overhead to swamp the gains for a function this small.
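One way to isolate the phase-switch cost from the work itself would be a bare round-trip measurement like this sketch (it must run in a script under an Actor, and the numbers will depend on frame scheduling):

```lua
-- Measure desynchronize/synchronize round trips with no work in between.
local iterations = 100
local start = os.clock()

for _ = 1, iterations do
	task.desynchronize() -- hop to the parallel execution phase
	task.synchronize()   -- hop back to the serial phase
end

local elapsedMS = (os.clock() - start) * 1000
print(("~%.3f ms per round trip"):format(elapsedMS / iterations))
```

Since each hop can only be serviced at specific points in the frame, part of what this measures is scheduling latency rather than raw CPU cost, which would be consistent with the numbers above.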

Would be nice to hear about this from someone on Roblox. :)


Also - something I cannot find any information about is whether Lua variables and tables are thread-safe.

Is this expected to include functions? If so, will the functions be able to read variables defined outside of their declaration, in the code they are declared in, like this?

local a = 1
function test()
	print(a) -- reads the upvalue `a` from the enclosing scope
end

Allowing functions to be sent to the threads would allow for a more standard multithreading syntax, like in C#.

If the variable is a primitive or a table, it can be sent to the thread via a signal. However, the value sent is a copy, not a reference, meaning that writes to it will not propagate back to the sender.
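A small sketch of what that copy semantics means in practice (the BindableEvent named `Channel` and its location are illustrative):

```lua
-- Sender (script under Actor A):
local data = { hits = 0 }
workspace.Channel:Fire(data) -- the receiver gets a copy of this table
data.hits = 1                -- mutating the original afterwards changes
                             -- nothing on the receiving side

-- Receiver (script under Actor B):
workspace.Channel.Event:Connect(function(received)
	print(received.hits) -- 0: the copy was taken at Fire time
end)
```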

Idk, it’s just a random assumption, but I think you can move the task.synchronize / task.desynchronize calls outside of the iteration to minimize the delay?

As for my reaction to the post itself: I have been following the development of Parallel Luau since its announcement back in 2020 or so, and I’m very excited to see it finally make its first major release :D Though, I do agree with the others above, and with those in other Parallel Luau related posts, that the current implementation of Actors is a bit hard to use, especially with single-script architecture or something akin to it, due to the lack of support for easily sharing state among the VMs (yes, BindableEvents and even value objects / attributes can be used, but it feels weird to do so).
But this question has already been answered above, so honestly all I can say is that I look forward to the further development of this feature so it becomes easier to use, allowing us to make full use of modern CPUs!

Wouldn’t that qualify as “local safe” since the entire script itself is only bound to one actor (through the datamodel thing)?

That does appear to be the case in some of my own testing; variables and functions outside of a :ConnectParallel callback can be accessed with no problem.
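For example, something like this appears to work, since everything involved lives in the same script and therefore the same Luau VM:

```lua
local RunService = game:GetService("RunService")

local counter = 0 -- upvalue declared outside the parallel callback

local function bump()
	counter += 1
end

RunService.Heartbeat:ConnectParallel(function()
	bump() -- both the upvalue and the helper are reachable here
end)
```

This stays safe because each Actor runs its own VM, so no other Actor can share `counter`; only code in this script touches it.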


I’m not sure what you mean. I’m talking about passing functions from one Luau VM and running them in another. Currently, this is impossible, as it gives the error “Attempt to load a function from a different Lua VM”, which makes sense since the VM doesn’t have access to the memory that contains the function.


My original reply was based on the example you provided which talked about accessing variables directly outside of the parallel function’s scope, rather than from a different Lua container


Oh. Yeah, the reason I mentioned it was that for it to work, the function would have to have memory access to the variables it references. Even if Roblox finds a way to pass a function from one thread to another, they would then need a way for that function to use variables stored in another VM, complicating the issue.


I think I’m not understanding something. What is the right way of having an Actor calculate a bunch of data (say, populate a table), and then send that data back to the “main thread”?

I got the actor running, but I couldn’t send the table it generated back into the main thread. What is the way to achieve this?


I believe using events (BindableEvents) should work? I haven’t tested it myself yet, but it sounds like it would work after you re-synchronize the VM.
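Under that assumption, the round trip could be sketched like this (the BindableEvent named `Results` and the instance paths are illustrative, not a confirmed pattern):

```lua
-- Script under the Actor:
task.desynchronize() -- do the heavy computation in parallel
local results = {}
for i = 1, 1000 do
	results[i] = i * i -- stand-in for real work
end
task.synchronize() -- return to the serial phase before firing the event
script.Parent.Results:Fire(results) -- the listener receives a copy

-- Script on the main thread:
workspace.Worker.Results.Event:Connect(function(results)
	print(#results) -- table computed in parallel, delivered serially
end)
```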


Is it possible for pathfinding to be made thread-safe? Pathfinding is one of the most expensive Roblox APIs and would greatly benefit from being safe, unlocking many new game ideas that are currently impossible (e.g. hundreds of NPCs accurately pathfinding to a single player).
It would be a massive lost opportunity to keep it unsafe.