I would like to know how this performs if a connection is listening for a long period of time. Does it perform worse than a regular BindableEvent? If so, that would render this module useless. I would also like to know the pros and cons of this module compared to regular BindableEvents and other event-based modules.
A pro that this module has over regular BindableEvents is that it does not require an instantiated BindableEvent. It allows you to listen to events without the overhead of an instance.
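For illustration, here is roughly what that difference looks like in use (a sketch only: the RBXEvent path and the .new(), :Connect(), :Fire() API are assumed from the benchmark code further down this thread):
-- BindableEvent approach: an Instance has to be created.
local bindable = Instance.new("BindableEvent")
bindable.Event:Connect(function(value)
	print("bindable:", value)
end)
bindable:Fire(123)

-- Module approach: no Instance is created at all.
local RBXEvent = require(game:GetService("ReplicatedStorage").RBXEvent)
local signal = RBXEvent.new()
signal:Connect(function(value)
	print("module:", value)
end)
signal:Fire(123)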
Regarding hard statistics, I ran a benchmark comparing my event module against BindableEvents and ended up with this:
The benchmarking code I wrote is shown here:
local RBXEvent = require(game:GetService("ReplicatedStorage").RBXEvent);
local BindableEvent = script.Event;
local Event = RBXEvent.new();
local RBXEvent_fire_time;
local Bindable_fire_time;
Event:Connect(function()
print("RBXEvent Fire Delay: "..os.clock()-RBXEvent_fire_time);
end)
BindableEvent.Event:Connect(function()
print("BindableEvent Fire Delay: "..os.clock()-Bindable_fire_time);
end)
RBXEvent_fire_time = os.clock();
Event:Fire();
Bindable_fire_time = os.clock();
BindableEvent:Fire();
As you can see, my module fired with a delay of about 2.0999e-6 seconds, while the BindableEvent was slower at about 5.3000e-6 seconds.
I repeated the test several times and the results seemed to stay consistent, with BindableEvents having worse performance than my library.
This should also be tested against other signalling libraries, but I have not done so yet.
Can you test performance with one connection that has a wait(1) inside, fire it 100,000 times, and see how long that takes compared to other libraries?
Like quenty’s (can’t find a link tbh),
or mine? Mine uses coroutines, but it suffers from some problems when connecting and disconnecting too often, as brought up in my topic. It is faster, but I would still go with Bindables myself if I'm working with heavy use.
Also can we get a GitHub?
That’s actually not a problem with coroutine-based solutions, at least. The function you connect is just kept there and gets run in a coroutine when the signal is fired. It won’t get worse over time; it might just take a bit more memory. I am not sure about Bindables though.
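For what it's worth, the pattern being described is roughly this (a minimal sketch, not any particular library's source):
-- Connect just stores the function; Fire runs each stored function in its
-- own coroutine. A long-lived connection only costs the memory of the
-- stored function, not ongoing CPU time.
local handlers = {}

local function connect(fn)
	table.insert(handlers, fn)
end

local function fire(...)
	for _, fn in ipairs(handlers) do
		coroutine.wrap(fn)(...)
	end
end

connect(function(message)
	print("got:", message)
end)

fire("hello")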
Hi, I performed these tests repeatedly in a loop, about a million iterations.
The results are in:
My library is the most performant of the 3 (RBXEvent, FastSignal, and regular BindableEvents), with a total delay of about 0.20312390604522 seconds for 1 million fires of the event.
In contrast, FastSignal, your library, was the slowest of the 3, with a total delay of about 0.69016789540183 seconds for 1 million fires of the event.
BindableEvents sit in the middle between both of our speeds, clocking in at around 0.48147069837432 seconds for 1 million fires of the event.
The benchmarking code that I have reworked to account for the 3 libraries is shown here:
-- Setup: the module under test, FastSignal, and a BindableEvent instance.
local RBXEvent = require(game:GetService("ReplicatedStorage").RBXEvent);
local FastSignal = require(game:GetService("ReplicatedStorage").Signal);
local BindableEvent = script.Event;

local Event_RBXEvent = RBXEvent.new();
local Event_FastSignal = FastSignal.new();

-- Accumulated fire-to-handler delays for each implementation.
local RBXEvent_fire_time_total = 0;
local FastSignal_fire_time_total = 0;
local Bindable_fire_time_total = 0;

local RBXEvent_fire_time;
local FastSignal_fire_time;
local Bindable_fire_time;

Event_RBXEvent:Connect(function()
	RBXEvent_fire_time_total = RBXEvent_fire_time_total + (os.clock()-RBXEvent_fire_time);
	--print("RBXEvent Fire Delay: "..os.clock()-RBXEvent_fire_time);
end)

Event_FastSignal:Connect(function()
	FastSignal_fire_time_total = FastSignal_fire_time_total + (os.clock()-FastSignal_fire_time);
end)

BindableEvent.Event:Connect(function()
	Bindable_fire_time_total = Bindable_fire_time_total + (os.clock()-Bindable_fire_time);
	--print("BindableEvent Fire Delay: "..os.clock()-Bindable_fire_time);
end)

-- Fire each implementation one million times, timestamping just before each fire.
for i = 1, 1000000 do
	RBXEvent_fire_time = os.clock();
	Event_RBXEvent:Fire();

	FastSignal_fire_time = os.clock();
	Event_FastSignal:Fire();

	Bindable_fire_time = os.clock();
	BindableEvent:Fire();
end

print("RBXEvent: "..RBXEvent_fire_time_total);
print("FastSignal: "..FastSignal_fire_time_total);
print("BindableEvent: "..Bindable_fire_time_total);
Your computer is really good
Mine crashed with 1 million times when I tested it lol
You might wanna look into testing with a wait(1) like I said, since I have seen BindableEvents (and honestly any signal API) get really slow with them, with Bindables in particular just dying and showing their weakness.
The benchmarking you did compares the time it took for a connection to be fired and how long it took to actually run; my testing was different, so the different results make sense.
Also, I think this is the script I used for testing when I did mine:
wait(10) --\\ wait until game is loaded i guess. I always do this for benchmarks.
for _, SignalObj in ipairs(game.ReplicatedStorage.Signals:GetChildren()) do
local Signal = require(SignalObj).new()
Signal:Connect(function()
wait(1)
end)
local start = os.clock()
for _ = 1, 250000 do
Signal:Fire()
end
print(SignalObj.Name..":", os.clock() - start)
wait(3) --// wait is bad, so just make everything fair for the next contestants lol
end
This is compatible with basically any API you throw at it, so it’s cool
You’re right. I commented out the Bindable test so it only runs both of our event APIs, and the times dropped a bit for both:
(BindableEvent 0 because we’re not firing it)
My library is the most performant of the 3 (RBXEvent, FastSignal, and regular BindableEvents), with a total delay of about 0.20312390604522 seconds for 1 million fires of the event.
Those differences you see are extremely negligible; your module isn’t “performant”. On the other hand, this module has literally no useful purpose: there have been many custom RBXScriptSignal implementations for us to use, so why use this one instead?
It allows you to listen to events without the overhead of an instance.
This is definitely an excuse; creating a Bindable instance and working with it is no hard thing, it just requires a few lines of code.
I created this module for use in my game and wanted to share a resource with the community. It was not intended to be more performant than BindableEvents and other libraries, but others asked, so I provided a benchmark script and its results, which happened to show faster times at a small scale.
The differences are small, I agree, but it was pointed out to me that if it were slower than BindableEvents then it would not have any use case in-game, which is why I performed the tests, and it turned out to be faster. The creator of another signalling library then asked me to benchmark against his library, and I did so at his request. Additionally, the delay of a single fire compounds when events are fired very frequently, as in the firing test. In most cases the difference is negligible, and I would agree with you.
For my use case this likely would have been sufficient, but I wanted it to function without using the DataModel, which is why I wrote this library in the first place, just for use in my game.
I don’t have any issue if someone doesn’t use my library, but if you want to avoid the hassle of BindableEvents, you may want to use it.
Thank you for the feedback.
RBXEvent Feature Addition
Event:Wait()
The Wait() method exists on the RBXScriptSignal datatype but had not yet been implemented in RBXEvent. I have now added this method to the module. It yields the calling thread until the event it is called on is fired.
The module seems to be fully feature complete compared to the RBXScriptSignal datatype as of this latest release.
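For illustration, usage would look something like this (a sketch assuming the same RBXEvent API used in the benchmarks above; task.delay is only there to simulate a later fire):
local RBXEvent = require(game:GetService("ReplicatedStorage").RBXEvent)

local Event = RBXEvent.new()

-- Simulate something firing the event a couple of seconds from now.
task.delay(2, function()
	Event:Fire()
end)

print("waiting for the event...")
Event:Wait() -- yields this thread until the event is fired
print("event fired, thread resumed")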
Ok. I got my hands on the source code. And I do not really like it.
First of all, I also didn’t know this, but… you still did it like the worst way possible.
You can improve on :Wait() by using a technique like the one I used in FastSignal: anything you pass to coroutine.resume is returned by coroutine.yield.
This is how you’re currently doing it:
Even if you used RunService.Heartbeat it would be better, but doing it the same way as FastSignal (and pretty much any other API) is just better.
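For context, the resume/yield technique being described looks roughly like this (a minimal sketch with illustrative internals, not FastSignal's or RBXEvent's actual source):
local Signal = {}
Signal.__index = Signal

function Signal.new()
	return setmetatable({ _handlers = {}, _waiting = {} }, Signal)
end

function Signal:Connect(handler)
	table.insert(self._handlers, handler)
end

function Signal:Wait()
	-- Park the calling thread; whatever Fire passes to coroutine.resume
	-- comes back here as the return values of coroutine.yield.
	table.insert(self._waiting, coroutine.running())
	return coroutine.yield()
end

function Signal:Fire(...)
	-- Run each connected handler in its own coroutine.
	for _, handler in ipairs(self._handlers) do
		coroutine.wrap(handler)(...)
	end

	-- Resume every thread parked in Wait, handing it the fired arguments.
	local waiting = self._waiting
	self._waiting = {}
	for _, thread in ipairs(waiting) do
		coroutine.resume(thread, ...)
	end
end

return Signal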
Also like??? Use coroutines? I can’t even benchmark it.
After making it use coroutines, I got it to work, and mine still won.
This technique actually does look better. I will switch the :Wait() function over to this method soon.
The reason mine had better event-firing performance was the absence of coroutines, which take a little longer because a new thread has to be started. However, not using coroutines can be an issue if you have many connected events, especially if those handlers take a long time to run to completion. Therefore, I will take the small dip in performance and implement coroutines in the :FireListener() method, even if performance drops minimally.
Thank you for the feedback.
EDIT: The coroutine update has been published, @LucasTutoriaisSaimo. Your benchmark tests of my library should show a dip in performance due to the coroutines, but now if you have a computationally intensive event handler, other events will not be put on hold waiting for it.
EDIT 2: Fixed a bug with the coroutines where arguments weren’t being passed (I forgot to forward the arguments to the variadic coroutine function).
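For what it's worth, the fix described in EDIT 2 presumably looks something like this; :FireListener is the method named above, but the body here is an assumption, not the module's published source:
local RBXEvent = {} -- stand-in for the module's class table (assumed)

-- Assumed sketch: run the listener in a fresh coroutine and forward Fire's
-- arguments into the variadic coroutine function so they reach the listener.
function RBXEvent:FireListener(listener, ...)
	coroutine.wrap(function(...)
		listener(...)
	end)(...)
end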
I tested the performance of this module against my custom signal module, and it turns out this module is about 3-4x slower. Now that there is nothing special about its performance, this module still has no useful purpose; it just replicates the same behavior as BindableEvents.
Benchmark:
-- Module paths below are assumed: "event" is this module, "Signal" is my signal module.
local event = require(game:GetService("ReplicatedStorage").RBXEvent)
local Signal = require(game:GetService("ReplicatedStorage").Signal)

local eventNew = event.new()
local fast = Signal.new()

local before = os.clock()
for i = 1, 1000000 do
	fast:Fire()
end
print("fast " .. os.clock() - before)

before = os.clock() -- reset so the second measurement doesn't include the first loop
for i = 1, 1000000 do
	eventNew:Fire()
end
print("event " .. os.clock() - before)
fast is my signal module
Result:
Mine also uses coroutines, so “using coroutines causes a dip in performance” isn’t a valid explanation for this module being so much slower.
Yes. Before we implemented the coroutines in the latest update it was faster, but spawning a new thread seems to take a toll on my module’s performance. While both modules utilize coroutines, there is likely something in mine I have yet to optimize.
However, as you said earlier, the differences are negligible, unless you are firing events hundreds of thousands of times or more at once.
However, as you said earlier, the differences are negligible, unless you are firing events hundreds of thousands of times or more at once.
That difference you see is clearly not negligible.
Yes. Before we implemented the coroutines in the latest update it was faster, but spawning a new thread seems to take a toll on my module’s performance. While both modules utilize coroutines, there is likely something in mine I have yet to optimize.
Just because your module has been shown to be slow doesn’t mean you should give out the invalid point that you have something yet to optimize. Other than performance, this module has no useful purpose. Most experienced scripters won’t look for negligible performance boosts; they look for usefulness, as I previously stated.
Well, in his defense, signal APIs don’t have much more they can add.
Then why make a signal module when they don’t have much more to add? One can easily do the job with Bindables. For example, I would consider my signal module, since it uses BindableFunctions and supports returning values, as well as handling the edge case of a BindableFunction yielding if no callback is associated with OnInvoke.
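To illustrate that edge case: BindableFunction:Invoke() yields indefinitely if nothing has been assigned to OnInvoke, so a wrapper has to track whether a callback exists before invoking. A minimal sketch (illustrative only, not the poster's actual module):
local Signal = {}
Signal.__index = Signal

function Signal.new()
	return setmetatable({
		_bindable = Instance.new("BindableFunction"),
		_hasCallback = false,
	}, Signal)
end

function Signal:Connect(handler)
	-- A BindableFunction only supports a single OnInvoke callback.
	self._bindable.OnInvoke = handler
	self._hasCallback = true
end

function Signal:Fire(...)
	-- Invoke() with no OnInvoke callback assigned would yield forever, so skip it.
	if not self._hasCallback then
		return
	end
	-- Because this is a BindableFunction, the handler's return values come back.
	return self._bindable:Invoke(...)
end

return Signal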
Learning, and understanding performance-dependent code, pretty much.
I was testing some other APIs here again, messing with stuff; I needed to use a signal API and was testing some out, and I (in an actual game) had MAJOR issues with memory leaking from your library.
I was confused about why memory usage was so high; after removing specifically yours, I had pretty much no issues.
One of the things I think might be the cause is the fact that you don’t have a :Destroy() function for a Signal. Nada. You should really add one, even if it just disconnects everything. When disconnecting a function, something I also do in FastSignal is clear the connection’s reference to the signal it was created from. That seems to have helped.
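Something along these lines is probably what's being suggested (illustrative names and internals, not FastSignal's actual source): dropping the references in both directions lets the handlers, and anything they capture, be garbage collected.
local Signal = {}
Signal.__index = Signal

local Connection = {}
Connection.__index = Connection

function Signal.new()
	return setmetatable({ _connections = {} }, Signal)
end

function Signal:Connect(handler)
	local connection = setmetatable({ _signal = self, _handler = handler }, Connection)
	table.insert(self._connections, connection)
	return connection
end

function Connection:Disconnect()
	local signal = self._signal
	if signal then
		-- Remove this connection from the signal's list...
		for index, other in ipairs(signal._connections) do
			if other == self then
				table.remove(signal._connections, index)
				break
			end
		end
	end
	-- ...and clear the back-references so neither object keeps the other alive.
	self._signal = nil
	self._handler = nil
end

function Signal:Destroy()
	-- Disconnect everything so the signal can be dropped without leaking handlers.
	for index = #self._connections, 1, -1 do
		local connection = self._connections[index]
		connection._signal = nil
		connection._handler = nil
		self._connections[index] = nil
	end
end

return Signal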
For reference, it didn’t take long until I had 2000MB of untracked memory!
This is of course because I’m creating a lot of signals at once, but this shouldn’t be an issue!
I have a loop that keeps testing these APIs and retrying, etc.
This is what it looked like without yours after some time running, maybe 8 minutes.
And this is with your library not too long after launching a new server. About 30 seconds after the server opening.
And this is 90 seconds after the server opening.
Note: the second image I sent you was from about 30 seconds in-game! With the first one, I stayed for like 5 minutes, and no issue!
Even though this is not a library many use, I would highly recommend you fix this.