Deferred Engine Events: Rollout Update

We made some changes but without more information it is difficult to know if they resolved your issue. Could you include an example of something that’s not working as you expect?

It's a continuous process: there are some performance benefits we can provide now, but many others we can't deliver until all experiences are using deferred events.

Event ordering is preserved; however, event execution is deferred until later in the frame. Even though the event was triggered, the function you passed to Connect, Once, or Wait won't run immediately.

We process the deferral queue until it's empty, so any events triggered during processing are resumed at the end of the current resumption cycle rather than the next.
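For example, a minimal sketch using a plain BindableEvent (the handler and print text are illustrative, not real API behavior beyond what's described above):

local bindable = Instance.new("BindableEvent")

bindable.Event:Connect(function()
    print("handler")
end)

bindable:Fire()
print("after Fire")

-- Immediate behavior prints "handler" then "after Fire".
-- Deferred behavior prints "after Fire" first; "handler" runs later
-- in the frame, at the next resumption point.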

Plugins are a tricky one to solve. It's not easy to do one thing for plugins and another for scripts in an experience. Our hope is that most plugins can be updated to support deferred events. If any creators are interested in doing this but are finding it difficult for one reason or another, feel free to reach out to me and I'll be happy to help.

3 Likes

Oh, then I love this change. I'm sure waiting a tick or two isn't going to cause a massive yield or slowdown in code, and you said it improves performance, so this is amazing; we don't even need to change anything.

In fact I am going to enable this update right now in my game.

1 Like

So, if I understand it correctly, this means that after all the coroutines that were yielded by the Task Scheduler are resumed, then come the events?

Hypothetical Example:
Coroutine 1 → Triggers 2 Events
Coroutine 2 → Triggers 3 Events
Coroutine 3 → Triggers 1 Event
The 6 events are then triggered after everything else has resumed and finished.

So it's basically another ordering?

1 Like

Maybe it should only be applied to new games?

Edit: just read the reply on that, nevermind!

1 Like

I would have to ask, though: what table is so big that you have to check execution times and adjust on the fly? If you already know it can cause long execution times due to its size, there is no need to write checks and set up elaborate ways to pause and resume its work. If you know that the slowest targeted device for your experience takes, say, over 2 seconds to iterate through this table, freezing the client for that time while it works, why not work the problem from the bottom up instead of the top-down approach you are taking? If you optimize this code to run flawlessly on the slowest targeted device, there is no need to optimize further, as the extra speed will carry easily to faster devices.

1 Like

I asked this question in the original thread, but never got an answer and would still appreciate if @WallsAreForClimbing or @tnavarts could give an explanation:

I got a few replies from users telling me how I could fix the issue, but my concern has always been with understanding why this change would cause those errors in the first place, which I still do not. The second example in my previous post, where ChildAdded fires but a descendant is seemingly missing from the child, is particularly concerning and makes me feel like I am missing something about how this actually changes things.

6 Likes

Imagine a table containing 300 item names (strings); you then iterate through it, cloning and setting up a frame for each item and parenting it to its respective scrolling frame. This can easily freeze up the player, so I do it slowly.

The execution time is lower on high-end devices, meaning those devices will finish the iteration faster than slower ones.
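The pattern I'm describing can be sketched like this (`itemNames`, `template`, `scrollingFrame`, and the 4 ms budget are placeholder names and values, not my actual code):

local FRAME_BUDGET = 0.004 -- seconds of work per frame (assumed budget)
local started = os.clock()

for _, itemName in ipairs(itemNames) do
    local frame = template:Clone()
    frame.Name = itemName
    frame.Parent = scrollingFrame

    -- Once we exceed the per-frame budget, yield so the rest of the
    -- game can run, then reset the clock. Slow devices yield more often.
    if os.clock() - started > FRAME_BUDGET then
        task.wait()
        started = os.clock()
    end
end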


Another example where I use this is to update zones in my game. I iterate slowly through the regions calling GetPartBoundsInBox, whose speed depends on the number of parts and the size of the region, so I must account for the execution time, yield, and let other coroutines run.


I optimize as much as I can, and for these tasks that require a lot of objects to be updated all at once, there must be some way to pause and let the rest of the game keep working. If you have a better solution, let me know (Parallel Luau doesn't allow changes to Instances).

1 Like

With a few exceptions, most threads enter the same queue and are resumed in that order, so you may find that things are actually interleaved. For example:

bindable.Event:Connect(print)

bindable:Fire("1")

task.defer(function ()
  print("2")
  bindable:Fire("3")
end)

bindable:Fire("4")

Will output 1, 2, 4, 3

6 Likes

Okay, so to clarify a bit further: some people, like me, have clean-up functions that execute when an object is destroyed, like so:

script.Parent.Changed:Connect(function()
    if not script.Parent.Parent then
        -- Clean-up function
    end
end)

I sometimes have script signals or other external objects that are NOT parented or connected to the object being destroyed.

Therefore, I must run some code to manually clean up any external objects, bindable events, etc. that would otherwise remain and cause a memory leak.

But if I am not mistaken, the NEW event system causes the Changed, AncestryChanged, and other signals to disconnect BEFORE they can fire, so the clean-up code never executes.

So, is there a way around this?
Having to design a whole janitor system from scratch would be very tedious and time-consuming.

Is the .Destroying event signal supposed to be a replacement so I don’t have to change the way clean-up code is executed?

3 Likes

Even though these signals will be disconnected, the invocations should be queued before that happens, so they should still execute. If that is not the case, please let me know and I'll make sure we look into it, as it would be a bug.

Also, to add some clarity to this:

We actually introduced two types of event disconnection to the engine to support this. If an instance is destroyed, pending invocations will not be dropped even though the signals will be disconnected; they are only dropped if the Disconnect method is called. We did this intentionally to handle the case you're describing.
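A sketch of the two behaviors, assuming plain BindableEvents under deferred signal behavior (based on the description above):

-- Case 1: Destroy() disconnects the signal, but an invocation that
-- was already queued is NOT dropped, so the handler still runs.
local bindable = Instance.new("BindableEvent")
bindable.Event:Connect(print)
bindable:Fire("still runs")
bindable:Destroy()

-- Case 2: an explicit Disconnect() DOES drop pending invocations,
-- so this handler never runs.
local bindable2 = Instance.new("BindableEvent")
local connection = bindable2.Event:Connect(print)
bindable2:Fire("never runs")
connection:Disconnect()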

4 Likes

Why not just build one clone per Heartbeat so the list fills at 60 fps? Then you won't have to worry about execution time: the fastest client builds 60 clones per second, and slower devices build at whatever rate they can manage while still yielding to the rest of the code elsewhere. Sure, it's not instant if a client is fast enough to have built the entire list in one frame, but would a player really care if a long list of frames came in at 60 per second instead of all 300 at once? Perfection is the enemy of good enough. :wink:
Sorry to side-track your discussion, but I run into the same issue a lot when dealing with AI code, which is its own nightmare in Roblox :melting_face: when it comes to timing and keeping things fast.
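A sketch of what I mean (reusing the same placeholder `itemNames`/`template`/`scrollingFrame` names; not a drop-in implementation):

local RunService = game:GetService("RunService")

for _, itemName in ipairs(itemNames) do
    local frame = template:Clone()
    frame.Name = itemName
    frame.Parent = scrollingFrame

    -- At most one clone per rendered frame; fast and slow devices
    -- each build at their own frame rate without ever freezing.
    RunService.Heartbeat:Wait()
end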

2 Likes

This change is good, but I can definitely see it confusing new programmers starting on Roblox. Not only does it break things that were very common practice before, but you also now have to understand concepts that can be confusing and might take a while to learn.

1 Like

I am the creator of Polybattle and scripted the entire game myself. Currently, I have been working on a project with a team full-time for over 2 years. I have several years of scripting experience on the platform and before.

I'm dreading the day when this becomes a reality for all games. I cannot wrap my head around the fact that certain parts of the code essentially become deferred when execution order is key. I rely on immediate behavior: for example, remote events paired with changes to instance properties now run in parallel with each other. BindableEvent is also deferred. The .Destroying event is now useless, so why does the .PlayerRemoving event still work the same as before? There is plenty of new logic I would need to introduce. I've been trying since the last post about deferred engine events. While Roblox has introduced game-breaking changes before, and it would take some time to adjust, this is on another level. It seems impossible for a project of the size I'm working on.

I can only plead for the old behavior to be kept supported. If not, at least keep AncestryDeferred forever. I love this platform, but this is not the way to keep creators on the platform, and this affects the most ambitious games the most.

I’d like to quote qwertyexpert’s post in a previous thread, which explains my concerns more:

This fundamentally changes Roblox’s event handling model and breaks any code that expects to be notified immediately when an event fires. Basically, most code that uses events.

Once your callback is called, it’s already too late. The caller has already sped way ahead of you and done other things, and you have to account for way more edge cases now. It is now impossible to use events to ensure that something is acted upon in a timely manner; you may as well use spawn.

Changes in Instance hierarchy are particularly dangerous examples of this, as scripts might want to know when an Instance they control is reparented/etc and act on it quickly, but now scripts that execute before the callback can see invalid/unintended states.

Things like the camera input being off by one frame appear out of thin air as a side effect of this. By the time scripts are notified of the events corresponding to user input, it’s already too late to change things in response before the next frame.

The fix is to notify scripts faster, which is what Roblox already does and has done since the beginning of time. I do not see why it's required to break every script on Roblox that relies on events, even if you're going to do it over the course of a few years. You cannot expect every experience on Roblox to migrate, especially the ones that aren't being maintained anymore, or whose developers have long left the platform. This will leave all the hidden gems of Roblox in an incredibly broken state.

If this is needed for Parallel Luau, keep the old system and simply use this new one only for events that are fired in parallel (so basically, events that are fired outside the Actor by code running desynchronized inside it). This will keep every existing script on Roblox working, while preparing for the future of new code running in parallel. Breaking so many things on Roblox outside of Parallel Luau is not required.

Parallel Luau is a beta feature and has no compatibility guarantees yet, and making that change to allow it to function is perfectly acceptable because running Luau in parallel is a new concept that requires some care. It’s expected that it’ll be different than regular scripting.

However forcing this change on existing code and the entire non-parallel Roblox ecosystem is not acceptable for me. I highly recommend that you backpedal a bit on this change and consider confining it only to actually concurrent systems like Parallel Luau. This is detrimental to my code and many others’ outside of parallel execution.

Firing an event is inviting other code to act on it instantly, that’s what events are for. Events should not be queued unless there is reason for not being able to act on it instantly, like in the case of Parallel Luau.

3 Likes

Do you mind sharing what needed to be fixed in Knit for this to work? I have a game using Knit, and when I enabled this feature, nothing involving Knit seemed to break. I feel like there should be a list of modules/frameworks that will break with this change, because I foresee a ton of eventual posts asking “why did my game break??” in response to it.

player.CharacterRemoving:Connect(function(character)
    print(character.Humanoid.SeatPart)
end)

I was using code similar to this to check which seat a player was sitting in when they left the game. It doesn't work in deferred mode. Working around it requires a CharacterAdded event plus a property-changed signal to track the SeatPart state: a total of 3 event connections to do something that was previously achievable with 1.
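A sketch of that workaround (the extra connections just cache the last known SeatPart so it is still available when CharacterRemoving fires):

local lastSeatPart -- remembered while the character is alive

player.CharacterAdded:Connect(function(character)
    local humanoid = character:WaitForChild("Humanoid")
    lastSeatPart = humanoid.SeatPart
    humanoid:GetPropertyChangedSignal("SeatPart"):Connect(function()
        lastSeatPart = humanoid.SeatPart
    end)
end)

player.CharacterRemoving:Connect(function()
    print(lastSeatPart) -- the seat they were in when removed
end)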

2 Likes

It doesn't wait a tick or two; it just changes when the connection gets executed within the tick.

Yeah, you're right, I misread that. So it works the same as always from the outside; this update is so cool, it's basically free performance optimization from Roblox's end.

2 Likes

Yes, and not only performance but also time spent writing code, because now when the code executes, it's guaranteed to execute safely.

Specifically, this was an edge case when using lots of remote properties with Knit services.

Should be fixed by updating to the latest version of Knit

This update will heavily break my game; it is causing code to error out in the most unexplainable ways. My events are now returning nil at random intervals, and the suggestions on how to fix the issues are not helping.

If anything, I would HEAVILY advise the team to check how these deferred systems behave in streaming-enabled places, as actions performed when a player spawns in locally are breaking.

For example, when a person spawns, my code would collect which hats they have on and make a list of them. Before deferred events, everything worked amazingly. Now everything breaks, and I can't see any logic as to why.

No one asked for this. No one was clamoring for this. And once the “Immediate” setting can no longer be used, my game will have to shut down.

1 Like