I think there is also a problem with the Players.PlayerAdded event: it simply doesn’t fire when the game launches in a Studio test.
Okay, I just found another, much more performant and much more elegant way to do FastSpawn: use BindableFunctions instead of BindableEvents!
EDIT: Updated to work with nested calls, since BindableFunctions can’t invoke themselves during an OnInvoke callback:
--!strict
local FAST_SPAWN_BINDABLE = Instance.new('BindableFunction')
local FAST_SPAWN_CALLER = function(cb: () -> ())
	cb()
end :: any
local LOCK_IS_INVOKING = false
FAST_SPAWN_BINDABLE.OnInvoke = FAST_SPAWN_CALLER

local function FastSpawn(func: () -> ())
	if LOCK_IS_INVOKING then
		-- We're already inside an OnInvoke callback; a BindableFunction can't
		-- invoke itself re-entrantly, so use a fresh one for the nested call.
		local nestedCallFastSpawnBindable = Instance.new('BindableFunction')
		nestedCallFastSpawnBindable.OnInvoke = FAST_SPAWN_CALLER
		coroutine.resume(coroutine.create(function()
			nestedCallFastSpawnBindable:Invoke(func)
		end))
	else
		LOCK_IS_INVOKING = true
		coroutine.resume(coroutine.create(function()
			FAST_SPAWN_BINDABLE:Invoke(func)
		end))
		LOCK_IS_INVOKING = false
	end
end

return FastSpawn
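A minimal usage sketch of the module above (the require path is a placeholder for wherever you store the ModuleScript):

```lua
local FastSpawn = require(script.Parent.FastSpawn)

FastSpawn(function()
	-- Runs immediately on a fresh stack via the BindableFunction, so an
	-- error here should still produce a clickable stack trace.
	print('spawned')
end)
```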
Stack trace (Works just like old FastSpawn):
Overhead (The “SlowCodeAfterFastSpawn” label is due to external code, so the actual overhead is only a small percentage of what’s showing up here):
Overhead: 0.036 - 0.029 = 0.007 ms for a 0.029 ms loop:
game:GetService('RunService').Heartbeat:Connect(function()
	for i = 1, 10 do
		debug.profilebegin('FastSpawnOverhead')
		FastSpawn(function()
			debug.profilebegin('SlowCodeAfterFastSpawn')
			for i = 1, 1000 do
				math.random()
			end
			debug.profileend()
		end)
		debug.profileend()
	end
end)
This is a nice update, I think. The only thing it might break for me is that in some places I use a value instance’s changed signal plus TweenService to tween models, because there’s currently no direct way to do it:
local TweenService = game:GetService('TweenService')

local val = Instance.new('CFrameValue')
val.Value = x0
val:GetPropertyChangedSignal('Value'):Connect(function()
	model:PivotTo(val.Value)
end)
-- Tween the proxy value toward the target CFrame (x1); the connection
-- above moves the model on every change.
TweenService:Create(val, TweenInfo.new(1), {Value = x1}):Play()
One immediate issue I’m finding with this behavior is anything using Event:Wait(). If that event is triggered more than once during the deferral period, only the first result makes it to the Wait() call. I first discovered this with some Touched events, but then found it happens in basically any circumstance. Consider the contrived example below:
local bindable_event = Instance.new("BindableEvent")

local function spam_event()
	bindable_event:Fire("a")
	bindable_event:Fire("b")
	bindable_event:Fire("c")
	bindable_event:Fire("d")
	bindable_event:Fire("e")
end

spawn(spam_event)

while true do
	print(bindable_event.Event:Wait())
end
In this block, “a” is the only result that ever prints; the rest of the events are entirely ignored. This sinking behavior is not present when using Event:Connect(), but it also does not occur with Immediate signal behavior. Is this an intended change?
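A possible workaround, sketched under the assumption that Connect() still receives every firing under deferred behavior: buffer firings into a queue instead of relying on Wait(). The waitForNext helper is my own name, not a Roblox API:

```lua
local bindable_event = Instance.new("BindableEvent")

-- Buffer every firing so none are lost while this thread yields.
local queue = {}
bindable_event.Event:Connect(function(value)
	table.insert(queue, value)
end)

-- Hypothetical helper: yields until a buffered value is available,
-- then pops and returns the oldest one.
local function waitForNext()
	while #queue == 0 do
		wait()
	end
	return table.remove(queue, 1)
end

spawn(function()
	bindable_event:Fire("a")
	bindable_event:Fire("b")
end)
while true do
	print(waitForNext()) -- sees every firing, in order
end
```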
What I believe is happening here is that you’re firing five events in quick succession, and due to how Roblox’s task scheduler handles yields, the deferred events could be preventing any future :Wait() calls from being handled.
Without deferred signals, they’d bunny-hop between each other: one firing would release a yield, that gets handled, and then the thread that fired it is resumed.
Obviously that’s just speculation.
Fun fact:
This change causes Roblox character sounds on the client to load one frame later than usual, along with a lot of other undefined behavior in the CoreScripts themselves.
Also, a certain test case in the Roact codebase will now break.
Probably the others just get dropped; try switching back to the old behavior and see if the result is the same.
Did you test? You can’t be sure it’ll break just because it uses bindable events.
With Immediate behavior, the results print a, b, c, d, e.
This technically doesn’t achieve the same thing as the traditional FastSpawn method. Since you wrap the invoke in a coroutine.resume, it eats up the stack trace, and you can no longer click the error to see the origin.
Then these unknown “invocation points” are a bit less frequent than I expected. I’d still want to hear from staff exactly how these work and how frequent they are.
The ChatScripts also use bindables in unorthodox ways. The main example is using them to connect to CoreScriptChatConnections or whatever the key is called.
But the whole ChatScript API needs a rewrite to conform to Roblox’s modern style rules.
The BindableFunction is what allows the stack trace to show up; this is why people use FastSpawn in the first place.
You may be right; I tested again and it works. It didn’t work the first time for some reason, maybe I misclicked?
Will this affect Parallel Luau in any way? To be honest, I don’t fully understand how it works, so it could be completely unrelated.
Can we get documentation on “redundant” events? If a value changes, then changes again before the event fires, the first change won’t fire an event, right? What else has this behavior?
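To illustrate the question being asked (this is a sketch of the scenario, not a statement of the documented behavior):

```lua
local value = Instance.new('IntValue')
value.Changed:Connect(function(new)
	print('Changed fired with', new)
end)

-- Both writes happen before the deferred handlers run at the next
-- resumption point. The question: does the listener see 1 and then 2,
-- or is the first change treated as "redundant" and dropped?
value.Value = 1
value.Value = 2
```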
From what I understand, it sounds like they’re trying to reduce the per-frame performance cost by spreading event handling across the task scheduler, rather than running everything in a very short window between frames.
This technically helps with performance and, in theory, will help prevent servers from crashing and let them hold more players, but games that handle a lot of action and expect events to be fast are going to suffer the most, which sucks. There’s also the issue of unexpected behavior, especially for developers who are still used to their old scripting habits.
Although I think I’m probably completely wrong: there are a lot of points in these posts that are hard to understand, and going into more detail would keep us from getting spooked about having to go through thousands and thousands of lines of code and retest everything, in case we’re thinking about this all wrong.
It’s really hard to comprehend how good this announcement is supposed to be when you mention optimization & delay in the same post.
I don’t know everything yet, but I’m severely against this change. Yes, there are edge cases where performance may dip when a ton of events fire, but it’s more important to keep things like this snappy. I say this because you mentioned specifically that this will instead be tied to the scheduler.
Patterns like wait() are already discouraged because they add initial and unnecessary delays to your code’s execution; adding more of this sort of artificial lag is only going to make things worse for tight code.
It’s kind of uncomfortable to know that if the scheduler is lagging or throttling (most front-page games have plenty of lag on the scheduler), you’re going to see visible delay in things like property changes and a myriad of other practical uses for events, just like you already can with wait() patterns. This is going to impact many games.
Edit: just read that it’s toggleable lol
If events are hooked up via Event:Connect(), they should all process as intended, in the correct order, just not at the exact moment they occurred.
I think Roblox has severely underestimated the impact that this change will have on so many different types of code. This isn’t just a minor change that will only affect a few people, this is literally changing the execution order of all event-based code that has ever been written on the Roblox platform, and this change needs to be treated with a lot more weight.
This is bound to break/alter so many things, and it is extremely difficult to narrow down every single piece of code that will be affected in some way. Even in cases where it won’t necessarily cause errors, this will introduce so much undefined and untested behavior in many existing games.
I have spent so much time writing the logic for my existing games and making sure that it all runs exactly the way I intended it to. But with this, Roblox would just be throwing a wrench into all of that and saying “well, tough, now you need to revisit all of your code and debug it all over again.”
There needs to be a way for us to maintain backwards compatibility. Roblox cannot force such a broad change that will have so many unintended consequences, placing the burden on us to audit and fix all of our existing code that we’ve ever written on the platform.