Sure. Add yield support and then use the newest version of Signal+.
Did I add support for this? Well, yes, of course, I agree with you; it's basically a kludge right now. You're right, I'm going to do it now!
Version 2
- Significantly improved performance
- Asynchronous signal dispatching (thread pool)
lol, I just summed up 120 lines of new code in 2 bullet points (although it took me a long time to write)
Hi, are you using version 63 of simplesignal?
Yes, it's version 63.
I'll run my own benchmarks real quick just to see if it's legit.
I just hit Ctrl+Z in the benchmark code.
Ahh yeah, there was an oversight causing the module to repeatedly preallocate threads. I'll benchmark it with a patched version really quick.
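For context, the usual shape of that kind of fix is to build the runner-thread pool once, at construction, instead of on every Fire. Here's a minimal sketch with made-up names (POOL_SIZE, _freeThreads, runnerLoop); it is not the actual SignalPlus source:

-- Minimal sketch of "preallocate the thread pool once" (hypothetical names).
local POOL_SIZE = 16

local Signal = {}
Signal.__index = Signal

-- Each pooled thread loops forever: wait for work, run it, put itself back.
local function runnerLoop(freeThreads)
	while true do
		local fn, arg = coroutine.yield()
		fn(arg)
		table.insert(freeThreads, coroutine.running())
	end
end

function Signal.new()
	local self = setmetatable({ _callbacks = {}, _freeThreads = {} }, Signal)
	-- The oversight being discussed: doing this loop inside Fire means the
	-- pool gets re-preallocated on every dispatch. Doing it once here (or
	-- behind a "created" flag) avoids that.
	for _ = 1, POOL_SIZE do
		local thread = coroutine.create(runnerLoop)
		coroutine.resume(thread, self._freeThreads) -- park it at the yield
		table.insert(self._freeThreads, thread)
	end
	return self
end

function Signal:Connect(fn)
	table.insert(self._callbacks, fn)
end

function Signal:Fire(arg)
	for _, fn in self._callbacks do
		local thread = table.remove(self._freeThreads)
		if thread then
			task.spawn(thread, fn, arg) -- reuse a parked pool thread
		else
			task.spawn(fn, arg) -- pool exhausted: fall back to a fresh thread
		end
	end
end

return Signal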
It's great that you've found the problem! Fix your module soon and let me test it; we'll have to redo the benchmarks.
Hi, unfortunately something came up and I have to sleep right now. The patch is done, but I haven't benchmarked it yet. I'll send it tomorrow.
I thought I had already commented on this module and added my two cents; anyway, even though you preallocate, it's still unwise to copy the buffer repeatedly on connect when the size isn't enough (because you haven't made it grow properly). Minor, but worth noting.
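For reference, the usual fix is geometric growth: double the capacity when it runs out, so a copy happens only O(log n) times across n connects rather than on every connect. A rough sketch using Luau's buffer library, with hypothetical field names (_slots, SLOT_SIZE):

-- Rough sketch of geometric buffer growth (double on overflow) so connects
-- don't copy the whole buffer every time capacity runs out.
-- Assumes the buffer starts with a non-zero capacity.
local SLOT_SIZE = 4 -- hypothetical: 4 bytes per stored callback id

local function ensureCapacity(self, neededSlots)
	local neededBytes = neededSlots * SLOT_SIZE
	if neededBytes <= buffer.len(self._slots) then
		return -- still fits, no copy
	end
	local newSize = buffer.len(self._slots)
	repeat
		newSize *= 2 -- grow geometrically instead of one slot at a time
	until newSize >= neededBytes
	local bigger = buffer.create(newSize)
	buffer.copy(bigger, 0, self._slots) -- one copy per doubling, not per connect
	self._slots = bigger
end

-- Tiny demo with a mock object: 100 "connects" trigger only 5 copies.
local sig = { _slots = buffer.create(4 * SLOT_SIZE), _count = 0 }
for i = 1, 100 do
	ensureCapacity(sig, i)
	buffer.writeu32(sig._slots, (i - 1) * SLOT_SIZE, i)
	sig._count = i
end
print(buffer.len(sig._slots)) -- 512 bytes after the doublings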
Hi there.
The benchmarks are not quite right visually.
But don’t worry, it’s not your fault! It’s BoatBomber’s fault.
There have been many bugs in BoatBomber’s Benchmarker plugin for a while, even though it’s paid.
I have just recently reported the issues, which he says will be looked into as soon as he has time.
I’m obviously not allowed to send even a modified version of the plugin, since it is not free.
Just know that one of the biggest issues is that the flamechart (the chart you highlight in your benchmarks) isn't accurate. Look at the numbers, not the length of the horizontal bars: for example, in your 1000 Connections (no fires) benchmark it looks like SignalX is ~10% faster than Signal+, but in reality they're the same speed.
Version 2.0.1
- Added MIT license
I have edited the module slightly to improve it; however, I don't own the BoatBomber Benchmarker plugin, so I ran a simple benchmark of my own comparing my edited version vs the original vs SignalPlus. This only covers the 1000-instance creation time, where I was able to get a 33% (avg) reduction; a rough sketch of the harness is below the log.
22:28:03.402 Starting benchmark: 1000 instances x 5 trials for SignalX, NewX, and SignalPlus... - Server - Script:26
22:28:03.418 Trial 1: SignalX = 11210.30 µs | NewX = 4515.40 µs | SignalPlus = 16.00 µs - Server - Script:59
22:28:03.770 Trial 2: SignalX = 3782.70 µs | NewX = 3927.10 µs | SignalPlus = 17.60 µs - Server - Script:59
22:28:03.893 Trial 3: SignalX = 3712.20 µs | NewX = 3741.60 µs | SignalPlus = 17.10 µs - Server - Script:59
22:28:04.020 Trial 4: SignalX = 3676.40 µs | NewX = 3692.90 µs | SignalPlus = 18.40 µs - Server - Script:59
22:28:04.130 Trial 5: SignalX = 3733.90 µs | NewX = 3660.10 µs | SignalPlus = 15.00 µs - Server - Script:59
22:28:04.131 --------------------------------------------------------- - Server - Script:92
22:28:04.131 -- BENCHMARK RESULTS -- - Server - Script:93
22:28:04.131 -- Iterations per Module per Trial: 1000 - Server - Script:94
22:28:04.131 -- Number of Trials: 5 - Server - Script:95
22:28:04.131 --------------------------------------------------------- - Server - Script:96
22:28:04.131 [SignalX] Average Total (1000 iter): 5223.10 µs - Server - Script:99
22:28:04.131 [SignalX] Average Per Instance: 5.2231 µs - Server - Script:100
22:28:04.131 --- - Server - Script:101
22:28:04.131 [NewX] Average Total (1000 iter): 3907.42 µs - Server - Script:104
22:28:04.131 [NewX] Average Per Instance: 3.9074 µs - Server - Script:105
22:28:04.131 --- - Server - Script:106
22:28:04.131 [SignalPlus] Average Total (1000 iter): 16.82 µs - Server - Script:109
22:28:04.131 [SignalPlus] Average Per Instance: 0.0168 µs - Server - Script:110
22:28:04.131 --------------------------------------------------------- - Server - Script:111
22:28:04.131 -- Comparison Ranking (Fastest First) -- - Server - Script:128
22:28:04.132 1. SignalPlus: 16.82 µs total (0.0168 µs per instance) - Server - Script:130
22:28:04.132 2. NewX: 3907.42 µs total (3.9074 µs per instance) - Server - Script:130
22:28:04.132 -> 3890.60 µs (23130.80%) slower than SignalPlus - Server - Script:144
22:28:04.132 3. SignalX: 5223.10 µs total (5.2231 µs per instance) - Server - Script:130
22:28:04.132 -> 1315.68 µs (33.67%) slower than NewX - Server - Script:144
22:28:04.132 --------------------------------------------------------- - Server - Script:152
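For anyone who can't open the place file, the harness is roughly the shape below. The module paths and constructor calls (SignalX.new, NewX.new, SignalPlus.new) are assumptions for the sketch, and the Script:NN line numbers in the log won't correspond to it:

-- Rough shape of the creation-time benchmark above (paths and constructor
-- names are placeholders; the real harness lives in the attached .rbxl).
local SignalX = require(script.SignalX)
local NewX = require(script.NewX)
local SignalPlus = require(script.SignalPlus)

local ITERATIONS = 1000
local TRIALS = 5

local function timeCreation(constructor)
	local start = os.clock()
	for _ = 1, ITERATIONS do
		constructor()
	end
	return (os.clock() - start) * 1e6 -- microseconds
end

local totals = { SignalX = 0, NewX = 0, SignalPlus = 0 }
for trial = 1, TRIALS do
	local a = timeCreation(function() return SignalX.new() end)
	local b = timeCreation(function() return NewX.new() end)
	local c = timeCreation(function() return SignalPlus.new() end)
	totals.SignalX += a
	totals.NewX += b
	totals.SignalPlus += c
	print(string.format("Trial %d: SignalX = %.2f µs | NewX = %.2f µs | SignalPlus = %.2f µs", trial, a, b, c))
end

for name, total in totals do
	print(string.format("[%s] Average Total (%d iter): %.2f µs", name, ITERATIONS, total / TRIALS))
end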
SignalX.rbxl (70.2 KB)
Open the place file to check the benchmarks. I also added some extra stuff to the module; however, don't take it as a fully working module. I just wanted to try to optimise some things, so do your own testing and debugging, etc. I added comments in the code as well.
Those are some of the changes I made (a rough sketch of the free-list idea is just below this list):
- Callback ID Management (Memory)
- Extra Thread Creation (Performance & Memory)
- Buffer Resizing (Performance)
- Finding Empty Slots (Minor Optimization)
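The Callback ID Management and Finding Empty Slots points are essentially the free-list pattern; here's a hedged sketch of the general idea (not the exact code shipped in the .rbxl):

-- General free-list idea for callback slots: disconnected ids go onto a
-- stack and get reused, so Connect never scans for an empty slot and ids
-- never grow without bound.
local Callbacks = {}
Callbacks.__index = Callbacks

function Callbacks.new()
	return setmetatable({ _slots = {}, _free = {}, _nextId = 1 }, Callbacks)
end

function Callbacks:add(fn)
	-- Pop a recycled id if one exists, otherwise mint a new one.
	local id = table.remove(self._free)
	if id == nil then
		id = self._nextId
		self._nextId += 1
	end
	self._slots[id] = fn
	return id
end

function Callbacks:remove(id)
	if self._slots[id] ~= nil then
		self._slots[id] = nil
		table.insert(self._free, id) -- recycle instead of leaking the id
	end
end

-- Usage
local cbs = Callbacks.new()
local a = cbs:add(print)
cbs:remove(a)
print(cbs:add(print) == a) --> true, the slot was reused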
Version 2.0.2
- Fixed a bug with passing only 1 argument
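A bug like that is usually a vararg-handling issue. One safe pattern, shown here as a sketch rather than as what 2.0.2 actually does, is table.pack/table.unpack with the explicit count, which also survives nil arguments:

-- Sketch of vararg forwarding that handles 1 argument (and nils) correctly.
-- `#args` would miscount trailing nils; args.n from table.pack does not.
local function fireAll(callbacks, ...)
	local args = table.pack(...)
	for _, fn in callbacks do
		task.spawn(function()
			fn(table.unpack(args, 1, args.n))
		end)
	end
end

-- fireAll({ print }, 2)         --> 2
-- fireAll({ print }, 1, nil, 3) --> 1 nil 3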
Hi, while benchmarking your module against my own, I found a fatal bug in yours that pretty much stops it from executing callbacks after they yield:
--!optimize 2
--!strict
-- yield test
local BadSignal = require(script.BadSignal)
local SignalX = require(script.SignalX)

local loops = 100
local Signal1 = BadSignal:new(1) -- 1 tells the module to dynamically allocate slots
local Signal2 = SignalX.new() -- empty constructor does the same as above

local t = os.clock()
for i = 1, loops do
	Signal1:Push(true, function(b: any) -- true because the callback yields
		local a = i * b :: number
		task.wait()
		print("pTest1")
	end)
end
print("Bad signal connect: ", os.clock() - t)

t = os.clock()
for i = 1, loops do
	Signal2:Connect(function(b: any)
		local a = i * b :: number
		task.wait()
		print("pTest2")
	end :: any)
end
print("SignalX connect: ", os.clock() - t)

t = os.clock()
for i = 1, 10 do
	Signal1:Fire(2)
end
print("Bad signal fire: ", os.clock() - t)

t = os.clock()
for i = 1, 10 do
	Signal2:Fire(2)
end
print("SignalX fire: ", os.clock() - t)
In theory the output should contain 1000 pTest1 prints and 1000 pTest2 prints, but in my tests only the 1000 pTest1 prints show up.
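I can't see SignalX's internals, but the usual cause of "nothing runs after a yield" is that Fire executes handlers on a shared thread (or a pooled runner that never gets handed back after yielding), so the first task.wait() stalls everything behind it. A small, module-independent illustration of the difference:

-- General illustration (not SignalX's actual internals): if Fire runs
-- handlers on a shared thread, the first task.wait() stalls everything
-- behind it; detaching each handler onto its own thread avoids that.
local handlers = {}
for i = 1, 5 do
	table.insert(handlers, function()
		task.wait()
		print("handler", i)
	end)
end

-- Naive: the caller is stuck at the first yield and handlers trickle out one
-- per frame. If a shared runner thread is never returned to the pool after
-- yielding, later handlers never run at all.
local function fireShared()
	for _, fn in handlers do
		fn()
	end
end

-- Yield-safe: every handler gets its own thread, so one yielding handler
-- cannot starve the others.
local function fireDetached()
	for _, fn in handlers do
		task.spawn(fn)
	end
end

fireDetached() -- all five "handler n" lines print on the next frame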
I also tested mine with non-yielding callbacks, and once again, sometimes some of the functions don't even fire:
--!optimize 2
--!strict
-- no yield test
local BadSignal = require(script.BadSignal)
local SignalX = require(script.SignalX)

local loops = 100
local Signal1 = BadSignal:new(1) -- 1 tells the module to dynamically allocate slots
local Signal2 = SignalX.new() -- empty constructor does the same as above

local t = os.clock()
for i = 1, loops do
	Signal1:Push(false, function(b: any) -- false because no callbacks yield
		local a = i * b :: number
		print("pTest1")
	end)
end
print("Bad signal connect: ", os.clock() - t)

t = os.clock()
for i = 1, loops do
	Signal2:Connect(function(b: any)
		local a = i * b :: number
		print("pTest2")
	end :: any)
end
print("SignalX connect: ", os.clock() - t)

t = os.clock()
for i = 1, 10 do
	Signal1:Fire(2)
end
print("Bad signal fire: ", os.clock() - t)

t = os.clock()
for i = 1, 10 do
	Signal2:Fire(2)
end
print("SignalX fire: ", os.clock() - t)
Bad signal connect: 0.0000092999980552122
SignalX connect: 0.00009100000170292333
▶ pTest1 (x1000)
Bad signal fire: 0.10646739999356214
▶ pTest2 (x600)
SignalX fire: 0.08270960000663763
Even in the cases where all of the functions do fire, it loses to Bad Signal:
Bad signal connect: 0.000005299996701069176
SignalX connect: 0.00010699999984353781
▶ pTest1 (x1000)
Bad signal fire: 0.09846180000022287
▶ pTest2 (x1000)
SignalX fire: 0.10529619999579154
(Note that those are single runs; results vary, but most of the time Bad Signal wins.)
Hi, thanks for the tests! I think it would be fairer to test this with the capacity specified in my module as well, since you also pass arguments into your object.
I also ran a couple of tests and never saw it print only 600 times. I will try to reproduce this bug and fix it.
Okay, let's use 16 PoolThreads and a capacity of 1028.
Although your module wins by about 30 percent on Connect, it's much slower than mine on Fire.
Perhaps you're wondering why everything is so fast? I removed the print and added a counter instead, because print is an expensive operation; I just checked that the counter is ~1000 after 1000 one-by-one operations.
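For completeness, the counter approach looks roughly like this (the SignalPlus module path and Connect/Fire calls are assumptions for the sketch):

-- Counting instead of printing keeps expensive output out of the timed loop;
-- the counter is checked once at the end.
local SignalPlus = require(script.SignalPlus) -- placeholder path

local counter = 0
local signal = SignalPlus.new()

for _ = 1, 100 do
	signal:Connect(function()
		counter += 1
	end)
end

local start = os.clock()
for _ = 1, 10 do
	signal:Fire()
end
local elapsed = os.clock() - start

task.wait() -- let any deferred/asynchronous handlers finish before reading the counter
print("fire time:", elapsed, "counter:", counter) -- counter should be roughly 1000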
You're missing the point; your module is actually broken. Sometimes it allocates automatically, sometimes it doesn't. What I posted above is a bug report.