So I found an issue when making my own GoodSignal-style signal class, and I think it might be an issue with GoodSignal as well, considering it uses the same method: when using task.spawn to resume a :Wait(), and following the wait with disconnecting all connections, there's a chance you will encounter an error. Replacing task.spawn with task.defer would fix this.
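To illustrate the kind of failure I mean, here is a hedged sketch (the module path and the Wait internals are assumptions based on the usual GoodSignal-style implementation, not the exact code):

```lua
-- Hypothetical repro: a GoodSignal-style Wait resumes the waiting thread with
-- task.spawn from inside Fire's handler loop, so everything after Wait() runs
-- while Fire is still iterating the handler list.
local Signal = require(script.Parent.Signal) -- path is an assumption

local signal = Signal.new()

task.spawn(function()
	signal:Wait()
	-- Resumed synchronously by task.spawn: Fire may still be mid-iteration,
	-- and this mutates the handler list out from under it.
	signal:DisconnectAll()
end)

signal:Fire() -- can error, depending on how the list iteration is written
```

With task.defer instead, the waiting thread would only resume after Fire finished iterating, so the mutation-during-iteration can't happen.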
What's the specific scenario?
Changing to task.defer would be a change in behavior. The intent is that firing the signal synchronously invokes the handlers, which would not happen with defer.
If you need defer for a specific handler, you can always add a defer inside the handler.
Hi, I recently made a pull request on the GitHub repo based on a new feature coming soon:
Would love it if you could check it out: Add Signal:Once by Baileyeatspizza · Pull Request #5 · stravant/goodsignal (github.com)
Updated the post, gists, model, library, GitHub repo etc. with a couple of updates:
- Added a Once call matching the new API added to RBXScriptSignal in v531.
- Removed the error when calling Disconnect multiple times. The approach of throwing an error in that case is incompatible with the Once API, so I have to admit defeat on that one. Most people didn't want the error anyway, and this shouldn't create backwards compatibility problems.
- Added the fix avoiding leaking the first set of arguments passed to Fire in a given session. This potentially comes with a small performance penalty, but only under very specific circumstances (if you both have a large number of event handlers connected and they also frequently yield).
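As a quick reference for the new call, a minimal usage sketch (the constructor and require path are assumptions matching the rest of the module):

```lua
local Signal = require(script.Parent.GoodSignal) -- path is an assumption

local signal = Signal.new()

-- Once behaves like Connect, but the handler disconnects itself after the
-- first time it runs, mirroring RBXScriptSignal:Once.
signal:Once(function(value)
	print("fired once with", value)
end)

signal:Fire("a") -- handler runs, then the connection is gone
signal:Fire("b") -- nothing happens
```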
Hey there! I had a look through your module and it is absolutely wonderful! Though I have some questions about how the script actually works, as I'm extremely confused.
Specifically in these blocks of code:
-- The currently idle thread to run the next handler on
local freeRunnerThread = nil
-- Function which acquires the currently idle handler runner thread, runs the
-- function fn on it, and then releases the thread, returning it to being the
-- currently idle one.
-- If there was a currently idle runner thread already, that's okay, that old
-- one will just get thrown and eventually GCed.
local function acquireRunnerThreadAndCallEventHandler(fn, ...)
	local acquiredRunnerThread = freeRunnerThread
	freeRunnerThread = nil
	fn(...)
	-- The handler finished running, this runner thread is free again.
	freeRunnerThread = acquiredRunnerThread
end
-- Coroutine runner that we create coroutines of. The coroutine can be
-- repeatedly resumed with functions to run followed by the argument to run
-- them with.
local function runEventHandlerInFreeThread()
	-- Note: We cannot use the initial set of arguments passed to
	-- runEventHandlerInFreeThread for a call to the handler, because those
	-- arguments would stay on the stack for the duration of the thread's
	-- existence, temporarily leaking references. Without access to raw bytecode
	-- there's no way for us to clear the "..." references from the stack.
	while true do
		acquireRunnerThreadAndCallEventHandler(coroutine.yield())
	end
end
-- in the :Fire() method
if connection.isConnected then
	if not freeRunnerThread then
		freeRunnerThread = coroutine.create(runEventHandlerInFreeThread)
		-- Get the freeRunnerThread to the first yield
		coroutine.resume(freeRunnerThread)
	end
	task.spawn(freeRunnerThread, connection.callback, ...)
end
I have read your detailed comments meticulously quite a few times now and I still struggle to comprehend this. I guess I'm just too far out of my depth to even partially grasp it. Anyway, when I removed that code the performance got significantly worse, and I'm curious as to why it reduces the "lag" so much. I'm eager to hear your explanation!
Appreciate your effort profusely!
A picture is worth a thousand words:
Basically, we need at least one thread for each handler being run at the same time. Normally only one handler is being run at a time, but if a handler yields (the blue wait), there's now a chance for another :Fire() call to happen (green), and the handlers for that one have to be run on a new thread, because the original thread is occupied waiting on the yielding (blue) handler.
The magic happens at the end of acquireRunnerThreadAndCallEventHandler, where we set freeRunnerThread back to the currently running thread; that's what lets the same thread be re-used once it's no longer needed by a yielding handler.
And the performance comes from that same thread being re-used, because creating new threads is expensive.
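The reuse pattern can be sketched in plain Lua, stripped of the Roblox task library (the names mirror the module, but this standalone version is illustrative only):

```lua
-- Standalone sketch of runner-thread reuse: the same coroutine serves every
-- non-yielding handler, so the cost of coroutine.create is only paid once.
local freeRunnerThread = nil

local function acquireRunnerThreadAndCallEventHandler(fn, ...)
	local acquiredRunnerThread = freeRunnerThread
	freeRunnerThread = nil
	fn(...)
	-- Handler finished without yielding: release the thread for reuse.
	freeRunnerThread = acquiredRunnerThread
end

local function runEventHandlerInFreeThread()
	while true do
		acquireRunnerThreadAndCallEventHandler(coroutine.yield())
	end
end

local function fire(fn, ...)
	if not freeRunnerThread then
		freeRunnerThread = coroutine.create(runEventHandlerInFreeThread)
		coroutine.resume(freeRunnerThread) -- run up to the first yield
	end
	coroutine.resume(freeRunnerThread, fn, ...)
end

local firstThread
fire(function() firstThread = coroutine.running() end)
fire(function() assert(coroutine.running() == firstThread) end) -- same thread reused
```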
Unfortunately I can't really support correct operation of ConnectParallel; all I could do is pretty much just directly wrap the existing call, and I wouldn't even be able to pass tables by reference, because you can't share a table across parallel boundaries.
You might be able to support that through task.desynchronize and task.synchronize?
Actually yeah, you're probably right; at the moment I could use those.
But I'd rather wait until Parallel Luau has had more time to mature, lest simply calling desynchronize isn't sufficient anymore or something like that.
The Connect function seems to resume on a separate thread from the one it was fired in, so that when the thread is suspended by Roblox internally (like when a script is destroyed), the function continues to run. How can you turn this behavior into a function?
The code below is an awful and inefficient wrapper for your code which achieves the behavior, but it is obviously not practical:
--// Variables
local signal = require(game.ReplicatedStorage.modules.signal)
local mainSignal = signal.new()
--// Module
local lockedThread = {}
mainSignal:Connect(function(func: any, ...)
	func(...)
end)

function lockedThread.spawn(func: any, ...)
	mainSignal:Fire(func, ...)
end

-- Creates a new signal when called, and waits for a resulting thread object from task.delay
function lockedThread.delay(duration: number, func: any, ...)
	local delaySignal = signal.new()
	local indices = 0
	local result = nil
	-- Create connection and immediately fire signal
	delaySignal:Once(function(...)
		result = task.delay(duration, func, ...)
	end)
	delaySignal:Fire(...)
	-- Wait for result, then clean up
	while not result and indices < 6 do
		print("waiting for thread result")
		indices += 1
		task.wait()
	end
	if not result then
		print("created unsafe thread")
		result = task.delay(duration, func, ...)
	end
	delaySignal:DisconnectAll() -- probably redundant
	return result
end

return lockedThread
How can I create a function which works like this, without the awful inefficiency?
Edit: So, the observed behavior only happens in some scenarios? I'm terribly confused and have been losing sleep over this one. Is there even a way to stop a thread from being cancelled when it's inside a script that is destroyed? You know this stuff better than I do.
Unfortunately it's not possible to make it immediately stop calling the handler when the original connector goes away, as Lua isn't privy to the information about exactly when threads get uninitialized on the engine side.
However, it is possible to make it stop calling the handler in the exact same way as an RBXScriptSignal, just slightly later: the next time a garbage collection cycle happens. Here's the simplest implementation:
function Signal:ConnectWeak(fn)
	local weakConnecterHolder = setmetatable({coroutine.running()}, {__mode = "v"})
	local connection
	connection = self:Connect(function(...)
		if weakConnecterHolder[1] then
			fn(...)
		else
			connection:Disconnect()
		end
	end)
	return connection
end
Bummer. Eventually I tried my method with bindables; that didn't work either. I was slightly surprised by that though; wouldn't you expect a method within an instance to run in a different thread/environment than the script it was called in? My guess is that it's because the Connect function is a method of an RBXScriptSignal, which is probably a form of userdata.
It's a bit difficult to understand this part; what's a use case for ConnectWeak? I'm not familiar with weak references. To my knowledge, it's creating a weak table for the running thread (coroutine.running()) and if the reference is broken (through GC? not too sure), the connection gets disconnected. When is a weak reference broken? Thanks
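The weak-reference mechanics can be demonstrated in plain Lua, independent of Roblox (the names here are illustrative):

```lua
-- A weak-valued table ({__mode = "v"}) holds its values without keeping them
-- alive: once the last strong reference to a value is gone, a garbage
-- collection cycle clears the entry, and reading it yields nil.
local holder = setmetatable({}, {__mode = "v"})

local obj = {}
holder[1] = obj          -- weak reference: does not prevent collection
assert(holder[1] == obj) -- still reachable while obj holds a strong reference

obj = nil                -- drop the only strong reference
collectgarbage("collect")

assert(holder[1] == nil) -- the weak reference has been "broken" by the GC
```

In ConnectWeak the held value is the connecting thread, so once the engine no longer references that (terminated) thread, a GC cycle clears the holder and the wrapper disconnects on the following fire.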
Ah, there are actually two behaviors at play here, I think I misunderstood which you were talking about.
- Built-in RBXScriptSignals have a behavior where they disconnect a handler function when the thread which connected that handler gets terminated.
- Any threads you create remember their original source, and when a script gets de-parented it terminates the entire tree of threads spawned from its original thread. This includes event handlers. Unfortunately it doesn't actually terminate the coroutines, just the C++ side threads, so there's no way to detect whether a given coroutine has been terminated in that way; the Lua side threads are still in a suspended state, so other code can run them again.
So, it's not really about being the same or a different thread. Because the event handlers always run in a different thread than the caller, even for built-in RBXScriptSignals, it's rather the special behavior linking threads to the thread that spawned them coming into play.
I'm not aware of any way to detect script-deparent termination of the threads, which is what would let us handle this. The best you can do right now as far as I know is this:
function Signal:ConnectWithContext(fn)
	local localFreeRunnerThread = nil
	local function localAcquireRunnerThreadAndCallEventHandler(...)
		local acquiredRunnerThread = localFreeRunnerThread
		localFreeRunnerThread = nil
		fn(...)
		localFreeRunnerThread = acquiredRunnerThread
	end
	local function localRunEventHandlerInFreeThread()
		while true do
			localAcquireRunnerThreadAndCallEventHandler(coroutine.yield())
		end
	end
	-- Need the coroutine.wrap so that the new runner threads are made in the context
	-- of the original connecting thread and terminated along with it.
	local makeRunnerThreadFn = coroutine.wrap(function()
		while true do
			localFreeRunnerThread = coroutine.create(localRunEventHandlerInFreeThread)
			coroutine.resume(localFreeRunnerThread)
			coroutine.yield()
		end
	end)
	return self:Connect(function(...)
		if not localFreeRunnerThread then
			makeRunnerThreadFn()
		end
		task.spawn(localFreeRunnerThread, ...)
	end)
end
This will terminate any currently running event handlers when the connecting script gets deparented; however, it won't stop subsequent event handlers from getting run afterwards. We'd need the aforementioned way to detect when a thread has been terminated due to script deparenting to make it 100% work.
No worries. Just for good measure, let me reiterate with an example:
→ A Part in the Workspace
→ A Script in the Part
→ Part is interacted with by a character (touch, click, etc)
(no yield) → Do something to said character after a delay (via task.delay)
→ Destroy Part
→ Suspended delay thread never resumes now
Obviously with this example, the best solution is just modularization. But the scope of the issue becomes much larger when you add layers.
→ Part in Workspace
→ Script in Part
→ Part interacts with character
→ Delay via a timer class
| → Create timer
| → Store timer in table
| (no yield) → Delay for any duration (via task.delay) to clean it up & remove it from the table
→ Destroy Part
→ Memory leak, as task.delay cannot resume and remove it from the table
There is no way to properly secure the timer module other than by using costly loops every Heartbeat. The only other way to avoid the issue is to use a fully modularized framework (or to specifically modularize the scripts which call this function, but that's very inconsistent), which is definitely not for everyone. Even then, how can you guarantee that there are no sneaky edge cases?
Anyway, thank you a lot for the help. I'm not sure what I'm going to do just yet, but I'll figure it out eventually.
Hi, love your work. I'm having an issue with the linked implementation of GoodSignal:
Calling :Connect on a newly instanced Signal object throws the error ReplicatedStorage.Signal:111: Attempt to get Signal::_handlerListHead (not a valid member).
The referenced lines of the current implementation are:
-- Make signal strict
setmetatable(Signal, {
	__index = function(tb, key)
		error(("Attempt to get Signal::%s (not a valid member)"):format(tostring(key)), 2)
	end,
	__newindex = function(tb, key, value)
		error(("Attempt to set Signal::%s (not a valid member)"):format(tostring(key)), 2)
	end
})
It assumes that any attempt to get or set a key in the Signal object with a corresponding nil value is invalid, but Signal._handlerListHead is meant to have nil as a valid value, so in some cases __index/__newindex is called even though a valid key was indexed.
Here's a workaround:
-- Make signal strict
setmetatable(Signal, {
	__index = function(tb, key)
		if key == "_handlerListHead" then return end
		error(("Attempt to get Signal::%s (not a valid member)"):format(tostring(key)), 2)
	end,
	__newindex = function(tb, key, value)
		if key == "_handlerListHead" then return end
		error(("Attempt to set Signal::%s (not a valid member)"):format(tostring(key)), 2)
	end
})
Did you modify your implementation in some way?
If you look at the code, _handlerListHead / _next are set to false instead of nil when no value is present, for exactly that reason (and because it's more efficient for the number of members in a table not to change over its lifetime through things getting changed from nil to non-nil).
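To see why the false sentinel sidesteps the strict metatable, here is a standalone sketch (simplified; not the module's actual code):

```lua
-- If a key holds false (rather than nil), rawget finds a value, so the
-- strict __index metamethod is never consulted for it.
local Signal = {}
Signal.__index = Signal
setmetatable(Signal, {
	__index = function(_, key)
		error(("Attempt to get Signal::%s (not a valid member)"):format(tostring(key)), 2)
	end,
})

function Signal.new()
	-- false, not nil: the key exists from the start and never disappears.
	return setmetatable({ _handlerListHead = false }, Signal)
end

local s = Signal.new()
assert(s._handlerListHead == false)              -- no error: the key is present
assert(not pcall(function() return s.bogus end)) -- unknown members still throw
```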
Ah, yeah I must have, because I can't get it to break again.
Sorry about that.
Could you possibly add :Defer() and :Delay()?
I made a PR to add ConnectParallel. The method I am using should work, since ConnectParallel is just using Connect with task.desynchronize.
But here's a type you can use in your open-source packages:
type ScriptConnection = {
	Disconnect: (self: ScriptConnection) -> ();
	Connected: boolean;
}

type ScriptSignal<T...> = {
	Connect: (self: ScriptSignal<T...>, func: (T...) -> ()) -> (ScriptConnection);
	Wait: (self: ScriptSignal<T...>) -> (T...);
	Once: (self: ScriptSignal<T...>, func: (T...) -> ()) -> (ScriptConnection);
	ConnectParallel: (self: ScriptSignal<T...>, func: (T...) -> ()) -> (ScriptConnection);
}
Example Usage:
local MySignal = Signal.new() :: ScriptSignal<...>
Just remember to add parallel support along with changing Connected from private to public.
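The <...> in the example above is a placeholder pack; a concrete instantiation supplies real types (number here is just an illustrative choice, and only the members declared in the type are available):

```lua
-- Hypothetical concrete instantiation of the generic signal type:
local MySignal = Signal.new() :: ScriptSignal<number>

local connection = MySignal:Connect(function(value: number)
	print("received", value)
end)

connection:Disconnect()
```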