[Beta] Deferred Lua Event Handling

Do you have any idea what the performance implications might look like? Will this impact, for example, an immediate wait or RunService.Heartbeat:Wait() on init?

Is there a reason this is in Workspace and not ServerScriptService? That seems like a more appropriate place for the property.

12 Likes

Question: Let’s assume I have a script ScriptA that fires a BindableEvent BindableEventA and a script ScriptB that listens to BindableEventA. Given the FastSpawn example, I assume that this will now have a slight delay. But if ScriptB calls a second BindableEvent, called BindableEventB that script ScriptC listens to, when will ScriptC run? Will it wait for the next invocation point? Does this mean that latency will keep stacking up?

I have a framework where individual components communicate with each other through the use of BindableEvents. Sometimes a series of multiple components are involved that all execute one after another. It is very important for me that splitting code into multiple components will not cause significant gaps due to latency.

15 Likes

Question: does this mean that custom wait implementations which rely on Heartbeat or Stepped won’t work well anymore?

7 Likes

If the event never fires, this will never return. In the example, dispatch is just some function that should (but may not) cause the event to fire.


Event handlers are resumed in the order that the event was fired. If event A fires before RenderStepped then it’ll be called first.


Nope, these will be unaffected.


We process the queue of deferred event handlers until it is empty. If you have an event handler which triggers another event, that event’s handler will be added to the back of the queue and will run in the same invocation point. It is worth noting that a re-entrancy limit of 10 still applies to deferred threads.
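Since a live example here would depend on Roblox engine internals, here is a language-agnostic sketch in Python (all names invented for illustration) of the behavior described above: handlers fired by other handlers join the back of the queue and still run in the same invocation point, but a chain that keeps re-firing past the re-entrancy limit of 10 gets cut off.

```python
from collections import deque

REENTRANCY_LIMIT = 10  # assumed to match the limit described above

queue = deque()  # pending (handler, depth) pairs for this invocation point
order = []       # records execution order for demonstration

def fire(handler, depth=0):
    """Defer a handler to the queue instead of running it immediately."""
    if depth >= REENTRANCY_LIMIT:
        order.append("dropped at depth %d" % depth)
        return
    queue.append((handler, depth))

def run_invocation_point():
    """Drain the queue completely, as the engine does per invocation point."""
    while queue:
        handler, depth = queue.popleft()
        handler(depth)

def chain(depth):
    order.append(depth)
    fire(chain, depth + 1)  # a handler firing another event re-queues it

fire(chain)
run_invocation_point()
print(order)  # depths 0..9, then the drop marker
```

Note that the whole chain still completes within one call to `run_invocation_point` — deferral reorders work within the invocation point rather than spreading it across frames.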

17 Likes

The reason for the bindable event fastSpawn implementation is that it preserves stack traces for debugging purposes.

Coroutines don’t provide the same level of error-tracing information; they just spit out an error message, which doesn’t give a lot of context if the function can be called from many different places.

We switch between the two implementations for live games and studio-debugging modes, but now this update will basically kill off the debugging-oriented implementation.

32 Likes

Aren’t stack traces passed through coroutine.resume/wrap now?


10 Likes

Looks like not even Roblox is completely ready for this change.
With deferred events enabled, opening the dev console produces this nice little error.
It’s probably reasonable to assume it’s going to take a while for this change to completely phase in.
Do we have an estimated timeframe on when Roblox is going to phase out immediate behavior?

25 Likes

This is a really good change and I actually made a custom behavior like this for some of my scripts. They won’t be needed anymore but at least it’ll be in the engine.

By the way, is this the end of debounces?

5 Likes

I was just about to make a comment about this: I’ve had no issues at all debugging coroutines ever since the February 2021 updates. In light of this, I see no reason to keep spamming BindableEvent objects as a fast-spawn solution.

3 Likes

It’s still an issue: coroutine.wrap doesn’t concatenate the trace when it propagates the error.

9 Likes

Not quite sure how this affects debounces (as far as I’m aware, anyway): delaying the firing of an event to later in the frame doesn’t stop it from being fired multiple times, no?
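Right — deferring delivery doesn’t coalesce fires. A tiny Python sketch (invented names, simulating a deferred queue) shows that two fires before the invocation point still run the handler twice, so a debounce flag is still needed:

```python
from collections import deque

queue = deque()
hits = []
busy = False  # classic debounce flag

def on_touched(part):
    global busy
    if busy:          # debounce still required: both fires are delivered
        return
    busy = True
    hits.append(part)

def fire(part):
    queue.append(part)  # deferred: handler runs later, not immediately

# Two fires happen before the invocation point processes anything
fire("A")
fire("B")

# Invocation point: drain the queue
while queue:
    on_touched(queue.popleft())

print(hits)  # only "A" survives the debounce, though both fires arrived
```

In other words, deferral changes *when* handlers run within the frame, not *how many times* they run.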

4 Likes

So wait, then I’m missing something. Does this mean events are like spawn() now in terms of threading?

3 Likes

Just retested the coroutine approach, and I remembered why I personally kept the bindable event implementation.

Clicking the red output error message opens up the script where the coroutine was created, not the actual module script that errored.

This becomes really tedious to debug when we have a ton of module scripts to actively work on and a core “Event” handler module script that keeps getting opened unnecessarily.

Comparison between fastSpawn and Bindable output behaviors:

29 Likes

I assume deferred thread calls are still queued, so they will fire in the order they were called.

8 Likes

What’s this delay, exactly? Is it tied to work cycles? The post says “invocation point”, which isn’t really descriptive for me. If it works like that, isn’t there a possibility of two events running at the same time?

7 Likes

Edit: Oh neat, this may not be a problem.

This seems to solve the problem of signal re-entrancy when firing. As long as events fired in the data model are OK, we should be fine.


I’m sorry, but please do not release this change as-is. This will destroy responsiveness in all of my experiences, and (I assume) many others’ experiences. The code will still function, but this change may introduce significant latency (read: multiple frames).

The root of the issue is that I, and many others, chain signals together that flow across our data models. If anyone has code like this, this change introduces latency into core data models.

This change breaks how I, and many other developers, program, forcing me to add latency into my game. I use Roblox’s data model as the source of truth. We gave a programming talk at RDC about this: 5 Powerful Code Patterns Behind Top Roblox Games. We use signals such as:

  1. Attributes
  2. CollectionService
  3. ChildAdded/ChildRemoved
  4. PropertyChanged events

I also use signals for internal models that don’t use Roblox as a source of truth.

By doing this, we introduce unavoidable latency into our data model updates. For example, consider this scenario:

  1. I instantiate a new object into the game, tag with CollectionService
  2. Event fires (later now), we set some properties and then we maybe instantiate 2-3 other objects with tags. Finally, we parent these.
  3. These child added events fire (later now), and so we do this again
  4. We repeat a few more times, and now stuff loads in over 3-4 frames, instead of one.

Another example is where I recursively replicate properties downwards in my virtual data model using signals. This will now occur over multiple frames, missing first-rate responsiveness. This is a super common pattern.

  1. Listen to input
  2. Abstract input into a data model and fire off the event (like setting a bool value, or having a custom signal)
  3. Listen to this abstraction instead of the true input event.

By making this change, we introduce a round of latency between this input and the response to the user. In more complicated data models, we may introduce even FURTHER rounds of latency, leading to multi-frame delays. In something like VR, this is deadly.

It will be very hard to audit all of the places where I am using signals. :Connect occurs 1923 times across 910 scripts in my experiences. These experiences serve thousands of concurrent users, and my code is used in many more places.

Please consider an alternative behavior. I’m willing to sit down and chat about this, because this is a truly disruptive and breaking change for us.

55 Likes

They will still be handled in the same invocation point before general engine execution continues. You can think of an invocation point like this:

while #deferredInvocations > 0 do
    local nextInvocation = table.remove(deferredInvocations, 1)
    nextInvocation:Run() --> this may end up adding more invocations to the list
end

In that way, this change should never add frames of latency, only change at what point within a given frame stuff happens. In fact, the reason it fully exhausts the invocation queue before continuing is to avoid the exact kind of frame latency that you’re worried about.

34 Likes

For people who are confused: before, the event firing order was messed up. Now the engine makes sure the order is right, producing more predictable results and avoiding nasty, chaotic ordering bugs that are hard to track down.

That comes with a really tiny delay that you won’t notice at all.

16 Likes

I didn’t realize that many of the issues with coroutine.wrap have been fixed, so that’s nice. It’s going to be annoying to have to write a custom Signal wrapper now, though, because this affects a lot of libraries I wrote in the past that used to be standalone and use Bindables, but now need an extra implementation in them (not to mention users of said libraries are going to have them break).

One example I can think of is Rocrastinate, although there are quite a number of existing Lua libraries with similar behavior, which will now behave unpredictably due to this change.

I would strongly encourage considering making this off by default, or having some way for existing code to be backwards-compatible by default.

I use Bindables all the time for state management and in-game events, among other things, and I need listeners to run immediately. Now I have to switch to a custom implementation.

11 Likes

Replica and Rocrastinate—both public libraries that have been used by many other people—are both affected like this, and will now have unpredictable behavior. Now I need to fix both libraries, and then people using my code need to update their forked versions of these libraries or else their games will break. Not how I wanted to spend my Thursday.

9 Likes