Yes.
Yes.
Thanks for the feedback! SharedTables are definitely slower than built-in (non-shared) Luau tables. We've measured about a 10x performance difference for various usage patterns. Some of the overhead (about 1/3 of it) is due to fundamental synchronization costs. The remaining overhead is largely the cost of moving data between the Luau VM and the "shared state" (for example, strings need to be copied into the Luau VM when they are accessed). A lot of optimizations that make built-in Luau tables fast can't be used when data is shared across Luau VMs (or, at least, many of them require substantial work).
That said, there are definitely opportunities for reducing some of the overhead, and that's something that we intend to look at over time. For the moment, we believe there are a lot of scenarios where the current performance characteristics are "good enough." We're very interested in seeing how people start using these new features so that we can target future improvements where they'd be most valuable. We appreciate any and all feedback in this area.
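As a rough way to see the difference locally, a micro-benchmark along these lines can compare writes to a plain table against writes to a SharedTable (a minimal sketch, not an official benchmark; the loop count and measured ratio are illustrative only):

local N = 1_000_000

local plain = {}
local shared = SharedTable.new()

-- Time writes to a regular (per-VM) Luau table.
local t0 = os.clock()
for i = 1, N do
    plain[i] = i
end
local plainTime = os.clock() - t0

-- Time the same writes to a SharedTable.
t0 = os.clock()
for i = 1, N do
    shared[i] = i
end
local sharedTime = os.clock() - t0

print(("plain: %.3fs, shared: %.3fs (~%.1fx slower)"):format(plainTime, sharedTime, sharedTime / plainTime))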
This sounds great and all, but I'm curious if there are any use cases for SharedTable outside Parallel Luau, and whether it is advisable or not. It sure is global state, and everyone knows how bad unmanaged global state can be, but then again this might improve (or not) cross-script communication. (Unless, of course, I'm being an idiot and this feature is only accessible in Parallel Luau.)
Thanks; we appreciate this feedback. The current restrictions on key types are a compromise: we expect these types of keys to be the most commonly used and most useful, and they were the easiest to implement efficient support for.
Due to the "shared" nature of SharedTables, there are certain types of keys that we cannot support. For example, we could not support using a Luau table as a SharedTable key. But there are certainly other types of keys that we could consider supporting in the future if there is demand for them and if we are able to implement support efficiently. For the present, we expect that even with support only for integer and string keys, there are a lot of use cases that will benefit from this feature.
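As a quick illustration of the current key restrictions described above (a minimal sketch):

local st = SharedTable.new()

st[1] = "value"       -- integer keys are supported
st.playerName = "Ana" -- string keys are supported

-- Table keys are not supported and will raise an error:
-- st[{}] = "value"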
Vector3 keys would be my next request; also, Instances as keys would be hyper useful.
You can, but you're better off using ModuleScripts (or even _G if you like to play dangerously) for sharing environments within the same execution context. As mentioned above, there are performance implications with SharedTable, and it should only be used for Parallel Luau because there are no other feasible options there.
One of the properties that I would like to be read-safe is this:
Our experience uses a lot of mesh deformation (the post above tells more about it). We use this a lot in combination with raycasting and shapecasting, and knowing the positions of the bones is useful in Parallel Luau.
There aren't even plans for WriteVoxels? Are there plans for any kind of creating or modifying parts or terrain? Not being able to do those things is the biggest drawback of Parallel Luau.
Ahh yes, I don't think I read over that part, so thanks for the heads-up.
I'm having a hard time understanding how I can use shared tables across multiple actors.
If I had 8 actors add values to a table in parallel and the next parallel function needs to use the finished table, would a shared table wait for it to be filled up first, or would the data being processed be incomplete?
If not, what would I have to do to make the function run with the filled table on all 8 actors while running in the same step?
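One possible pattern to make this concrete (a minimal sketch; the topic name, the "completedActors" key, and the use of SharedTable.increment and the Actor messaging API are illustrative assumptions): each actor writes its values into the shared table and atomically bumps a completion counter, and the consumer waits until the counter reaches the number of actors before reading the table.

-- Worker script placed under each of the 8 Actors (sketch):
local actor = script:GetActor()

actor:BindToMessageParallel("FillTable", function(st, startIndex, count)
    for i = startIndex, startIndex + count - 1 do
        st[i] = i * i -- per-actor work
    end
    -- Atomically record that this actor has finished.
    SharedTable.increment(st, "completedActors", 1)
end)

-- Coordinator script (sketch): create the table, hand it to the actors,
-- then wait until all 8 actors have reported completion.
local st = SharedTable.new()
st.completedActors = 0
-- ...send the "FillTable" message to each of the 8 actors here...
while st.completedActors < 8 do
    task.wait()
end
-- The table is now fully filled and safe to read.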
Hi @DevSersponge, we are definitely lacking good tutorials for Parallel Luau. I have written some additional sample code that should be released soon here:
Parallel Luau | Documentation - Roblox Creator Hub
That sample code will demonstrate terrain generation using the Actor Messaging APIs for communication.
We are also strongly considering some tutorials or sample places to better demonstrate how to use Parallel Luau. We don't want to promise anything yet, but hopefully we will be releasing more content to help soon.
The messaging API is mostly intended to provide a more convenient way to send messages. For example, it avoids the need to create a BindableEvent instance in order to send arguments. We also hope it will make communication more consistent; for example, if developers used BindableEvents instead, it is likely that there would be many different conventions for where events are placed in the DM.
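For example, a minimal sketch of the flow (the topic name, arguments, and the workspace.MyActor path are illustrative assumptions):

-- Inside a script parented under an Actor: bind a handler that runs in parallel.
local actor = script:GetActor()
actor:BindToMessageParallel("Compute", function(regionId, seed)
    print("computing region", regionId, "with seed", seed)
end)

-- From another script: send arguments directly to that actor,
-- with no BindableEvent instance required.
workspace.MyActor:SendMessage("Compute", 42, 1337)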
Is there a performance benefit? Is serialization faster?
Having an explicit messaging API may also give us an obvious place for future extension or performance optimization.
I don't believe there are any performance advantages to using the API today, but there are some possible optimizations we could implement that may make the Messaging API faster.
Can you send complicated tables with tables as indices, or arrays which contain nil values?
Both the Messaging API and BindableEvents share the same restrictions on what arguments can be passed. Because the data being passed may be sent from one Luau VM to a different Luau VM, certain types of data can't be shared safely.
A problem with the new data structure, SharedTable, is that you cannot atomically update multiple keys. Additionally, nested SharedTables are not synchronized with each other when cloned. This means that even when using the clone method, obtaining a consistent view of the entire structure becomes unreliable. These limitations restrict the flexibility of updating desired fields, particularly when they do not reside within the same sub-table.
All of this indicates that the current design of SharedTable almost never justifies inter-system parallelization, as the limitations significantly compromise the usability and flexibility of systems that utilize SharedTables.
I hope that these limitations and challenges associated with the SharedTable data structure can be addressed in future iterations.
Thanks for the response! Parallel Luau is a great tool, and hopefully with these new code samples and sample places we'll get a lot more people from varying levels of scripting knowledge to use it. Can't wait for it!
This is pretty much what I thought, which is why I was confused when reading this in the original post:
task.defer(), task.delay(), and task.wait() now resume in the same context (serial or parallel) that they were called in.
Wouldn't that allow a single desynchronization to persist through multiple frames? What happens if you have a script that makes a connection to some event and then desynchronizes and waits?
For example:
game:GetService("RunService").Heartbeat:Connect(function()
    -- Do I ever happen? If so, in parallel or serial?
end)

task.desynchronize()

while true do
    task.wait()
    -- According to the post, we're still parallel here
end
I think it's a good initial release, as others have said; however, there are a few questions I have:
Is it ever going to be possible to write to Instance properties in parallel? AFAIK there are no properties that can be written to in parallel (unless they're under Actor instances possibly? I thought I read that somewhere but could be wrong).
Is there a reason SharedTable methods canāt be namecalled?
local st = SharedTable.new()
local st2 = st:Clone()
seems much more consistent with Roblox syntax than
local st = SharedTable.new()
local st2 = SharedTable.clone(st)
Other than that I think everything else is addressed nicely.
I think the idea was to keep it consistent with native Lua tables. But then it's inconsistent with the naming convention, as everything Roblox-related uses CamelCase but native Lua functions never capitalize.
SharedTable.clone(st)
table.clone(t)
SharedTable.clear(st)
table.clear(t)
SharedTable.isFrozen(st) --this is inconsistent
table.isfrozen(t) --the F is lowercase
Is there a reason SharedTable methods can't be namecalled?
Our Luau type system experts had concerns about using namecall here, because it substantially hinders our ability to do type inference. E.g.,
local st = SharedTable.new()
st.Clone = 1
After this, when you refer to Clone, are you referring to the property Clone that you added, or to the Clone method? Correctly performing type inference here is somewhere between very difficult and impossible. We thus made the methods "static."
Wouldn't that allow a single desynchronization to persist through multiple frames? What happens if you have a script that makes a connection to some event and then desynchronizes and waits?
To annotate your example (I've also added a ConnectParallel call):
game:GetService("RunService").Heartbeat:Connect(function()
    -- This function will be called in the serial context
end)

game:GetService("RunService").Heartbeat:ConnectParallel(function()
    -- This function will be called in the parallel context
end)

task.desynchronize()

while true do
    task.wait()
    -- This code will be run in the parallel context
end
Effectively, this script will be "resumed" three times each frame:
- task.wait() will be resumed in the parallel context
- the Heartbeat:ConnectParallel callback will be called in the parallel context
- the Heartbeat:Connect callback will be called in the serial context

[The first of these is the behavioral change that we are making; today, the continuation after task.wait() is resumed in the serial context.]
This is definitely a bit tricky; we're working on some additional samples to help clarify the behavior.
A problem with the new data structure, SharedTable, is that you cannot atomically update multiple keys.
This is something that we may look at supporting in the future.
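In the meantime, atomic updates to a single key are possible with SharedTable.update, where available (a minimal sketch; the key names are illustrative):

local st = SharedTable.new()
st.coins = 0
st.gems = 0

-- Atomically read-modify-write a single key.
SharedTable.update(st, "coins", function(current)
    return (current or 0) + 10
end)

-- There is currently no way to update "coins" and "gems" together as one
-- atomic operation.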
nested SharedTables are not synchronized with each other when cloned. This means that even when using the clone method, obtaining a consistent view of the entire structure becomes unreliable
This was an intentional design decision. In order to provide a consistent view across multiple SharedTables during deep cloning, we would need to implement some form of global locking. This would negatively impact performance not just of the deep clone operation, but also most of the other operations supported by SharedTable.
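To illustrate the trade-off (a minimal sketch, assuming the two-argument form of SharedTable.clone with deep = true):

local inner = SharedTable.new()
inner.value = 1

local outer = SharedTable.new()
outer.child = inner

-- Deep clone: outer and each nested SharedTable are cloned individually,
-- not as a single atomic snapshot.
local snapshot = SharedTable.clone(outer, true)

-- Another VM could modify inner.value while the clone is in progress, so
-- snapshot.child may not be consistent with the rest of the snapshot.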