I have to ask, though: what table is so big that you have to check execution times and adjust on the fly? If you already know it can cause long execution times due to its size, there is no need to write checks and set up elaborate ways to pause and resume its work. If you know that the slowest targeted device for your experience takes, say, over 2 seconds to iterate through this table, freezing the client for that time while it works, why not work the problem from the bottom up instead of the top-down approach you are taking? If you optimize this code to run flawlessly on the slowest targeted device, there is no need to optimize it upwards, as the extra speed will carry over easily to faster devices.
I asked this question in the original thread, but never got an answer and would still appreciate if @WallsAreForClimbing or @tnavarts could give an explanation:
I got a few replies from users telling me how I could fix the issue, but really my concern has always been with understanding why this change would cause those errors in the first place - which I still do not. The second example in my previous post, where ChildAdded is fired but a descendant is seemingly missing from the Child is particularly concerning and makes me feel like I am missing something about how this actually changes things.
Imagine a table containing 300 item names (strings). As you iterate through it, you clone and set up a frame for each item and parent it to its respective scrolling frame. This can easily freeze the client, so I do it slowly.
The execution time is lower on high-end devices, meaning those devices will finish the iteration faster than slower ones.
Another example where I use this is updating zones in my game. I iterate slowly through the regions calling GetPartsBoundInBox, whose speed depends on the number of parts and the size of the region, so I must account for the execution time to yield and let other coroutines run.
I optimize as much as I can, but for tasks that require a lot of objects to be updated all at once, there must be something in the way that can pause and let other parts of the game run. If you have a better solution, let me know (Parallel Luau doesn’t allow changes to Instances).
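For context, the pattern I mean — iterating a large table and yielding whenever a time budget is exceeded — looks roughly like this (a sketch; the budget value and `processItem` callback are illustrative, not exact code from my game):

```lua
-- Sketch: iterate a large table, yielding whenever a per-step time
-- budget is exceeded so other coroutines get a chance to run.
-- TIME_BUDGET and processItem are illustrative assumptions.
local TIME_BUDGET = 0.005 -- seconds of work allowed before yielding

local function processAll(items, processItem)
	local startTime = os.clock()
	for _, item in ipairs(items) do
		processItem(item)
		if os.clock() - startTime > TIME_BUDGET then
			task.wait() -- yield until the next Heartbeat
			startTime = os.clock()
		end
	end
end
```

On a fast device the budget is rarely exceeded and the loop finishes quickly; on a slow device it yields often, which is exactly the adaptive behavior described above.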
With a few exceptions most threads enter the same queue and are resumed in that order so you may find that things are actually interleaved. For example:
```lua
bindable.Event:Connect(print)
bindable:Fire("1")
task.defer(function()
	print("2")
	bindable:Fire("3")
end)
bindable:Fire("4")
```
Will output 1, 2, 4, 3
Okay so to clarify it a bit further.
Some people like me have clean-up functions that execute when an object is destroyed, like so:
```lua
script.Parent.Changed:Connect(function()
	if not script.Parent.Parent then
		-- Clean-up function
	end
end)
```
I may sometimes have script signals or other external objects that are NOT parented or connected to the object being destroyed. Therefore, I must run some code to manually clean up any external objects, bindable events, etc. that would otherwise remain and cause a memory leak.
But if I am not mistaken, the NEW event system causes Changed, AncestryChanged, and other signals to disconnect BEFORE they can fire, therefore never executing the clean-up code.
So, is there a way around this?
Having to design a whole janitor system from scratch would be very tedious and time consuming.
Is the .Destroying event signal supposed to be a replacement so I don’t have to change the way clean-up code is executed?
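For clarity, the kind of external clean-up I mean would look roughly like this if rewritten around .Destroying (a sketch; `externalConnections` is an illustrative name, not my actual code):

```lua
-- Sketch: clean up external objects when the instance is destroyed,
-- using the Destroying signal instead of watching Parent changes.
local externalConnections = {} -- connections not tied to the instance itself

script.Parent.Destroying:Connect(function()
	for _, connection in ipairs(externalConnections) do
		connection:Disconnect()
	end
	table.clear(externalConnections)
end)
```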
Even though these signals will be disconnected, the invocations should be queued before that happens, so they should still be executed. If that is not the case, please let me know and I’ll make sure we look into it, as it would be a bug.
Also, to add some clarity to this:
We actually introduced two types of event disconnection to the engine to support this. If an instance is destroyed, pending invocations will not be dropped even though the signals will be disconnected. They are only dropped if the disconnect method is called. We did this intentionally to handle the case you’re describing.
Why not just use a Heartbeat connection for each clone so that the list builds at 60 FPS? Then you won’t have to worry about execution time: the fastest client will build 60 clones per second, and slower devices will just build at whatever rate they can manage while still yielding to the rest of the code elsewhere. Sure, it’s not instant if a client is fast enough to have built the entire list in one frame, but is anyone actually bothered if a long list of frames comes in at 60 per second instead of 300 all at once? Would a player really care? Perfection is the enemy of good enough.
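A rough sketch of what I mean (the names `itemNames`, `template`, and `container` are placeholders):

```lua
-- Sketch: build at most one frame per Heartbeat so the client never freezes.
local RunService = game:GetService("RunService")

local function buildList(itemNames, template, container)
	for _, name in ipairs(itemNames) do
		local frame = template:Clone()
		frame.Name = name
		frame.Parent = container
		RunService.Heartbeat:Wait() -- yield until the next frame
	end
end
```

Every device builds at its own frame rate, and the loop yields automatically, with no execution-time bookkeeping needed.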
Sorry to side-track your discussion, but I run into the same issue a lot when dealing with AI code, which is its own nightmare in Roblox when it comes to timing and keeping things fast.
This change is good, but I can definitely see it confusing new programmers starting on Roblox, especially since not only is this breaking things that were very common practice before, but now you also have to understand concepts that can be confusing and might take a while to grasp.
I am the creator of Polybattle and scripted the entire game myself. Currently, I have been working on a project with a team full-time for over 2 years. I have several years of scripting experience on the platform and before.
I’m dreading the day when this becomes a reality for all games. I cannot wrap my head around the fact that certain parts of the code essentially become deferred when the execution order is key. I rely on immediate behavior; for example, RemoteEvents with changes to instance properties now run in parallel to each other. BindableEvents are also deferred. The .Destroying event is now useless, so why does the .PlayerRemoving event work the same as before? There is plenty of new logic I would need to introduce. I’ve been trying since the last post about deferred engine events. While Roblox has introduced game-breaking changes before, and it would take some time to adjust, this is on another level. It seems impossible for a project of the size I’m working on.
I can only plead for the old behavior to remain supported. If not, at least keep AncestryDeferred forever. I love this platform, but this is not the way to keep creators on it, and it affects the most ambitious games the most.
I’d like to quote qwertyexpert’s post in a previous thread, which explains my concerns more:
This fundamentally changes Roblox’s event handling model and breaks any code that expects to be notified immediately when an event fires. Basically, most code that uses events.
Once your callback is called, it’s already too late. The caller has already sped way ahead of you and done other things, and you have to account for way more edge cases now. It is now impossible to use events to ensure that something is acted upon in a timely manner; you may as well use spawn.
Changes in Instance hierarchy are particularly dangerous examples of this, as scripts might want to know when an Instance they control is reparented/etc and act on it quickly, but now scripts that execute before the callback can see invalid/unintended states.
Things like the camera input being off by one frame appear out of thin air as a side effect of this. By the time scripts are notified of the events corresponding to user input, it’s already too late to change things in response before the next frame.
The fix is to notify scripts faster, which is what Roblox already does and has done since the beginning of time. I do not see why it’s necessary to break every script on Roblox that relies on events, even if you’re going to do it over the course of a few years. You cannot expect every experience on Roblox to migrate, especially those that aren’t being maintained anymore, or whose developers have long left the platform. This will leave all the hidden gems of Roblox in an incredibly broken state.
If this is needed for Parallel Luau, keep the old system and simply use this new one only for events that are fired in parallel (so basically, events that are fired outside the Actor by code running desynchronized inside it). This will keep every existing script on Roblox working, while preparing for the future of new code running in parallel. Breaking so many things on Roblox outside of Parallel Luau is not required.
Parallel Luau is a beta feature and has no compatibility guarantees yet, and making that change to allow it to function is perfectly acceptable because running Luau in parallel is a new concept that requires some care. It’s expected that it’ll be different than regular scripting.
However forcing this change on existing code and the entire non-parallel Roblox ecosystem is not acceptable for me. I highly recommend that you backpedal a bit on this change and consider confining it only to actually concurrent systems like Parallel Luau. This is detrimental to my code and many others’ outside of parallel execution.
Firing an event is inviting other code to act on it instantly, that’s what events are for. Events should not be queued unless there is reason for not being able to act on it instantly, like in the case of Parallel Luau.
Do you mind sharing what needed to be fixed in Knit for this to work? I have a game using Knit, and when I enabled this feature, nothing involving Knit seemed to break. I feel like there should be a list of modules/frameworks that will break with this change, because I foresee a ton of eventual posts asking “why did my game break??” in response.
```lua
player.CharacterRemoving:Connect(function(character)
	print(character.Humanoid.SeatPart)
end)
```
I was using code similar to this to check which seat a player was sitting in when they left the game. It doesn’t work in deferred mode. Working around it would require also using a CharacterAdded event and a GetPropertyChangedSignal connection to keep track of the SeatPart state: a total of 3 event connections to do something previously achievable with 1.
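The three-connection workaround would look something like this (a sketch, untested; variable names are illustrative):

```lua
-- Sketch: track SeatPart manually so its last value is still known
-- by the time CharacterRemoving runs in deferred mode.
local lastSeatPart = nil

player.CharacterAdded:Connect(function(character)
	local humanoid = character:WaitForChild("Humanoid")
	lastSeatPart = humanoid.SeatPart
	humanoid:GetPropertyChangedSignal("SeatPart"):Connect(function()
		if humanoid.SeatPart then
			lastSeatPart = humanoid.SeatPart -- remember the last occupied seat
		end
	end)
end)

player.CharacterRemoving:Connect(function()
	print(lastSeatPart)
end)
```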
It doesn’t wait a tick or two; it just changes when the connection gets executed within the tick.
Yeah, you’re right, I misread that. So it works the same as always from the outside. This update is so cool, dude, it’s basically free performance optimization from Roblox’s end.
Yes, and not only performance; it also saves time writing code, because now when the code is executed, it’s guaranteed to execute safely.
Specifically, this was an edge case when using lots of remote properties with Knit services.
Should be fixed by updating to the latest version of Knit
This update will heavily break my game, as it is causing code to error out in the most inexplicable ways. My events are now returning nil at random intervals, and the suggestions on how to fix the issues are not helping.
If anything, I would HEAVILY advise the team to see how these deferred systems behave in places with StreamingEnabled, as actions performed when a player spawns in locally are breaking.
For example, when a person spawns, my code would collect which hats they have on and make a list of them. Before deferred events, everything worked amazingly. Now everything breaks, and I can’t see any logic as to why.
No one asked for this. No one was clamoring for this. And once the “Immediate” setting can no longer be used, my game will have to shut down.
Toggled this on for our game.
Was a bit scared by replies here, but pretty much everything worked exactly as before. I’m not sure if I’m just ‘lucky’ to not be using breaking/unstable coding patterns or if the people above me are slightly exaggerating the issue.
I recommend you try it for your own experience and see if it works or not.
If you’re experiencing issues with deferred events then I would encourage you to submit a bug report or create a post in #help-and-feedback:scripting-support to ask for help. If you tag me or share it with me over message I will happily take a look to help you diagnose the issue.
Hi, this is a great update and it’s nice to see Roblox improving performance and security across the board. Kudos to you and your team for the good work. Although there is still a very large problem with this update.
Documentation.
In the first post of the Deferred Engine Events thread on April 7th, this line is stated:
We will also be updating our documentation on scheduling so keep an eye out for that in the coming weeks.
It has been 8 months, the update is rolling out, and almost none of the documentation on create.roblox.com/docs mentions this new behavior or how it works. It has taken me searching the Developer Forum for multiple threads about deferred events and reading hundreds of comments to fully understand some of the potential footguns I may run into from switching my game’s SignalBehavior from Immediate to Deferred.
Most of the information about deferred events can be gleaned from three threads specifically:
November 2023: https://devforum.roblox.com/t/deferred-engine-events-rollout-update/2723113
April 2023: https://devforum.roblox.com/t/deferred-engine-events/2276564
May 2021: https://devforum.roblox.com/t/beta-deferred-lua-event-handling/1240569
Ideally, developers should be able to go to the documentation to learn how events are deferred and how that affects the usage of BindableEvents, Changed connections, etc. They should not have to search across three different threads spanning multiple years (which makes them slightly hard to even find) to understand how events work.
Almost none of the behavior or inner workings of events mentioned in those three threads appears on any Roblox documentation page. It’s not mentioned on the BindableEvent page, the RBXScriptConnection page, the Custom Events and Callbacks page, or any other page (that I can find, at least).
Here is a great example of something that is not documented at all: Disconnect dropping all pending events associated with the connection.
Hard - Disconnect from the event immediately and drop all pending events associated with the connection.
I ran into a specific problem because of this recently: I connected to BindableEvent.Event, fired it, and then a different module of mine disconnected the returned RBXScriptConnection in the same frame. I knew all three of these happened in the same frame, but I was still quite confused about why the single event fire wasn’t resolving, and went on a goose chase thinking my module had improper logic somehow. It may be that pending events were always dropped, but previously at least this single fired event would have resolved, so the dropping behavior wasn’t obvious. Perhaps it should be obvious now given the “deferred” terminology, but it really is one of those easy traps that new developers could fall into, with not much documentation to help them.
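A minimal version of the trap looks like this (assuming SignalBehavior is set to Deferred):

```lua
-- Sketch: in deferred mode, Disconnect() drops pending invocations.
local bindable = Instance.new("BindableEvent")
local connection = bindable.Event:Connect(function()
	print("handled") -- never runs: the queued invocation is dropped below
end)

bindable:Fire() -- invocation is queued for the deferred stage, not run now
connection:Disconnect() -- drops the still-pending invocation
```

In Immediate mode the handler would already have run by the time Disconnect is called, which is why the change is so easy to miss.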
So yeah. It would be nice if we got updated official documentation on all the intricacies of how events work on create.roblox.com/docs, as mentioned in the April 7th post.
How is the Destroying event useless? I have been writing deferred-compatible code for over a year, and I have never found the Destroying event to be “useless” in deferred mode; it works exactly as I would expect. When something is destroyed, it fires at the end of the tick during the engine’s deferred stage, which is exactly what I would want it to do in deferred mode.
Deferred events just fire at the end of the frame. It isn’t as if the order in which events fire has become random or arbitrary; events still fire in the expected order. They just fire at a different time in the cycle, grouped together.
The main benefit of event deferral is that changes can be (and often already are) grouped into the deferred stage. It is like a final event point at the very end of the tick. In fact, I have made intentional optimizations that take advantage of the engine’s ability to reduce work in deferred code. If you make changes to model bounding boxes in the deferred step, e.g. using manual joint destruction or :BreakJoints(), you will find that the engine does a lot less work, with the former being by far the fastest.
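One way to push such changes into the deferred step is with task.defer (a sketch of the idea; `model` is a placeholder, and whether this actually reduces work in your case would need profiling):

```lua
-- Sketch: batch manual joint destruction into the deferred stage.
-- `model` is a hypothetical reference to the model being modified.
task.defer(function()
	for _, descendant in ipairs(model:GetDescendants()) do
		if descendant:IsA("JointInstance") then
			descendant:Destroy()
		end
	end
end)
```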
I am also not sure why you would ever need or want BindableEvents to fire immediately. With the way I conceptualize events, I just expect to throw them out and have them be processed whenever, because that’s what I think of when I hear “event.” I am not really sure in what other case the order could actually be that important, unless events aren’t being used like events.
I personally haven’t ever really run into any issues unless I’m doing something weird. I’ve had a few minor issues in cases where I fired events inside events and expected them to make immediate changes to some data or variable, but then I wasn’t really using events as events anyway, and replacing them with a different abstraction was always the solution.