For proper documentation there is this post instead of in-script comments; in the script itself it is kept short, without in-depth explanation. I made it pretty clear that this module is meant for people who have already used BindableEvent.
What do you mean by “cherry-picked”? The results are real; it is a benchmark, and I provided you the code I used for it. It runs the benchmark 20 times and takes the average result. You can validate the legitimacy of my benchmarks by running your own with the code I provided.
Read the title of this post once again, please; it’s the whole point of this module to be faster.
I was going to consider switching to this one since I actually care about nanoseconds of speed because my whole game’s structure is event driven, so it made sense to use this for a thing that has a ton of events.
The resource is good, but based on the developer’s attitude I don’t want to use it anymore, although to be fair I also think some people are bashing this guy for no reason, just for publishing a signal library.
You haven’t seen what people have told me under this section, bro; luckily it’s already been cleaned up by moderation.
I’m not pushing you to do something; you are free, like everyone else.
If you want, you can still use BadSignal, which the normal signal was derived from. It is a lot different from other modules, but it’s pretty much the fastest thing out there.
These benchmarks aren’t good. Please, if you are going to show benchmarks, use Benchmarker or a decent implementation of benchmarking. This code just takes the average: it doesn’t remove outliers, doesn’t report a median, etc. It also doesn’t help that your benchmarks are really confusing and could use simplifying.
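For illustration, a minimal sketch of a fairer harness (my own example, hypothetical and not Benchmarker itself): take many samples, report the median, and trim the extreme tails so outliers can’t skew a single mean.

	-- Minimal median/trimmed-mean benchmark sketch (hypothetical helper).
	local function benchmark(name: string, fn: () -> (), samples: number?)
		local n = samples or 100
		local times = table.create(n)
		for i = 1, n do
			local start = os.clock()
			fn()
			times[i] = os.clock() - start
		end
		table.sort(times)
		local median = times[math.ceil(n / 2)]
		-- Trimmed mean: ignore the fastest/slowest 10% as outliers.
		local lo, hi = math.floor(n * 0.1) + 1, math.ceil(n * 0.9)
		local sum = 0
		for i = lo, hi do
			sum += times[i]
		end
		print(string.format("%s: median %.6fs, trimmed mean %.6fs", name, median, sum / (hi - lo + 1)))
	end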
If you’re going to flex performance, do it properly; even then the performance gain is negligible, and you also unnecessarily localize warn, which isn’t something that will boost your performance at all in Luau. There is also an unnecessary thread allocation; it would probably have been cheaper to create a slightly heavier table representing the connection internally instead.
In Roblox, we are more focused on the memory performance than the runtime performance anyway.
This is why we have opted for metatables, despite the greater performance one can achieve by cloning a stub, as has already been said in more than one post on the DevForum by me and others. What is important is reasoning about why we do these things so we can improve upon them; simply creating a new library will not aid much…
Regardless, we opt for memory because OOM == Crash == Losing Players. If using a metatable will prevent 20% of the user base from an OOM crash, then I’d rather lose 1.2 ms when a function is run 100,000 times than lose 20% of my user base. The benefit for something that will occasionally dispatch once or twice, contributing virtually picoseconds of execution time, is negligible. I may be a performance junkie, but even I still pass on the most ‘performant’ concepts when they’re not particularly worthwhile.
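To make the trade-off concrete, here is a minimal, hypothetical sketch of the two construction styles being discussed: metatable-based objects share one method table, while stub-cloning copies the methods into every instance, trading memory for marginally faster calls.

	-- Metatable style: every connection shares Connection's methods (one table total).
	local Connection = {}
	Connection.__index = Connection

	function Connection.new(fn)
		return setmetatable({ fn = fn, connected = true }, Connection)
	end

	function Connection:Disconnect()
		self.connected = false
	end

	-- Stub-cloning style: each instance carries its own copy of every method,
	-- avoiding the __index lookup at the cost of extra memory per connection.
	local function newConnectionStub(fn)
		local self = { fn = fn, connected = true }
		function self.Disconnect()
			self.connected = false
		end
		return self
	end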
Making more signal modules is very much useless; this is not the JavaScript community, where a new hot framework comes along every week. This is ROBLOX; we don’t want the “framework beef” of JS to come to Roblox in the form of “signal library beef”.
Honestly, I’m going to wait until I get home before drafting a reply, because your reaction is nothing short of mind-boggling and disappointing. At no point have I ever tried to clown on you, given that I provided you honest and real feedback.
The JS mention is simply a comparison between the situation of frameworks in JS and signal libraries here, bridging two completely unrelated things to provide impact and entertain the reader of my post. It has nothing to do with this library specifically, and more to do with the entire environment of DevForum signal libraries.
If you’re going to crash out, do not do it in such a way in which:
you attack someone who didn’t do anything against you
you fail to respond to the comments properly and professionally, and instead try to make the other party look completely incompetent while lying by omission.
Carefully read your own code, and understand once and for all that strict type checking has no real runtime effect. Native Code Generation is not yet enabled on the client or on RCC, so making libraries for it shouldn’t be the primary focus.
Regardless of that, having exchanged messages with you before in other threads, I expected a much more professional, proper response than the clown writing and complete debauchery I got. I’m disappointed; I expected more than this.
Native code generation HAS been enabled on the client, actually. Also, typechecking can boost performance on non-native code as well, because it helps script analysis a bit.
You did attack me, though, with baseless call-outs about “localizing globals” and such, without doing any research prior to that.
It’s useless talking to you about everything else, because you are a production-based developer; it’s just pointless talking to you.
I believe that you are in the wrong chat for that; here we discuss development resources, not “charisma” or “eloquence”.
Also, lower your ego.
Nope. You said it was enabled on the client, not Studio. It will take a while for them to roll it out to all platforms on the client due to the sheer complexity of translating to native code across different architectures, for example ARM, Intel, Apple Silicon, etc.
That doesn’t matter anyway; just have all your code typechecked to be safe in the future.
Also, once again: even without native code, it does help script analysis a bit.
So what? Let it be then; it’s not like it hurts runtime or anything, plus it helps with writing the code, and it works if you use this module inside natively generated code without causing errors.
That’s literally just coping right now; 3 people can’t win a fight against a single person (me).
People seem to argue here that the performance benefit is negligible and the memory costs outweigh it. I say “θ.”
Because in reality most of the memory expense literally comes from Workspace and ReplicatedStorage rather than the script environment. In most of my cases, my entire Lua script environment is only about a kilobyte, even for huge scripts. That amount of memory is matched by what, like 20 parts? Devices handle 20k-instance games, and unless you don’t know about ModuleScripts, you’re going to use at most around 200 scripts in your game.
Also, regarding typing: it matters even for interpreted code. There are a lot of mechanisms underlying Luau’s optimizations, and most of them rely on knowing the type of a variable. Take the vector library: if not typed properly, it would be slow to read values from, because vector.x can look like table access, and if Luau doesn’t know the type of the variable it’s accessing, it first has to check whether it’s a table or a vector, which introduces runtime overhead. Typing it as a vector removes that overhead, providing direct access to the vector’s x value without invoking that type check. And even if there were no interpreter type optimizations, having a strictly typed module is a lot better than constantly looking back at its documentation.
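As a rough sketch of what that looks like (my own illustration, assuming the native vector library with vector.create is available in your environment):

	--!strict
	-- With the annotation, Luau knows v is a vector and can emit a direct
	-- field access for v.x / v.y / v.z.
	local function lengthSquaredTyped(v: vector): number
		return v.x * v.x + v.y * v.y + v.z * v.z
	end

	-- Without the annotation, each field access must first check at runtime
	-- whether v is a table or a vector.
	local function lengthSquaredUntyped(v)
		return v.x * v.x + v.y * v.y + v.z * v.z
	end

	print(lengthSquaredTyped(vector.create(1, 2, 3))) --> 14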
You needn’t buy boatbomber’s plugin; you can produce an accurate benchmark without it; you simply have to research how to do so. boatbomber’s plugin is simply a useful, simple and practical tool that I personally use to test performance. Your test is flawed: in some cases it combined more than one thing and named it ‘Creation’. The benchmark then isn’t focused on creation at all (see your last code block, where you named a test ‘Creation’).
This is wrong on two points. If you are testing creation, first of all, you don’t also fire the signal with a preallocation of 99999; that wastes far too much time simply allocating an unrealistic, bizarre, out-of-this-world number of slots in the table. As for the following Fire call, it is testing the performance of firing… one connection. This is not a proper benchmark, even if you had used boatbomber’s plugin.
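For the record, a hypothetical Benchmarker-style module that keeps the two concerns separate might look like this (‘Signal’ stands in for whichever library is under test; the connection count is an arbitrary, more realistic figure):

	-- Hypothetical sketch: 'Creation' measures construction only,
	-- 'Fire' measures dispatch to a pre-built set of connections.
	local Signal = require(script.Signal)

	return {
		ParameterGenerator = function()
			-- Give the Fire test a signal with a realistic number of connections.
			local signal = Signal.new()
			for _ = 1, 8 do
				signal:Connect(function() end)
			end
			return signal
		end,
		Functions = {
			["Creation"] = function(Profiler, _signal)
				for _ = 1, 1000 do
					Signal.new() -- creation only: no preallocation, no Fire
				end
			end,
			["Fire"] = function(Profiler, signal)
				for _ = 1, 1000 do
					signal:Fire() -- dispatch to the pre-made connections only
				end
			end,
		},
	}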
Care to check that I did do testing? Many times, using the proper tools and testing the proper thing. I am not a JS developer at all, and there are no tables there anyway, only prototype inheritance, but that is another story. I tested it: localising warn here emits an opcode that simply fetches a captured upvalue in every function that uses it. Aside from unnecessarily creating an upvalue, which already hurts you slightly, you also completely disregard the existence of GETIMPORT, something I assume people to be aware of, all the more because you read my OOP optimisation post, which was a follow-up to my previous post focusing on DUPCLOSURE and GETIMPORT. However, since you like to disregard anything that isn’t right in front of you, I will bring a benchmark image and its source code.
GETGLOBAL will also disable DUPCLOSURE; however, I have disregarded that difference here, as it would affect the GETUPVALUE benchmark as well.
Test Sample
-- REMOVE THE LINE BELOW IF YOU WANT TO ENABLE GETIMPORT OPTIMIZATIONS!
--!optimize 0

--[[
	This file is for use by Benchmarker (https://boatbomber.itch.io/benchmarker)

	|WARNING| THIS RUNS IN YOUR REAL ENVIRONMENT. |WARNING|
--]]

-- Localize pcall once so the GETUPVALUE case fetches a captured upvalue.
local _pcall = pcall

return {
	ParameterGenerator = function()
		return
	end,

	Functions = {
		["GETGLOBAL"] = function(Profiler)
			for i = 0, 10000 do
				-- Global access: compiles to GETGLOBAL at -O0 (GETIMPORT at -O1+).
				pcall(function() end)
			end
		end,

		["GETUPVALUE"] = function(Profiler)
			for i = 0, 10000 do
				-- Localized access: compiles to GETUPVALUE.
				_pcall(function() end)
			end
		end,
	},
}
You also showed that you have no idea about type checking, as the behaviour warn and print display is OK. That is a packed generic, specced in the Luau RFCs if I recall properly; they document that T... is simply a generic interpretation of a vararg, where any of the elements in the vararg can take any type,
which is exactly what you denoted as a ‘mistake’. The lowercase a is simply the Luau typechecker inferring a generic pack. any... is not recommended, because it discards type information entirely, while T... does not.
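A minimal sketch of such a packed generic (my own example, not the module’s code):

	-- T... lets the vararg keep per-element type information,
	-- the way warn/print-style signatures are modelled.
	local function passthrough<T...>(...: T...): T...
		return ...
	end

	local a, b = passthrough(1, "two") -- a: number, b: string
	print(a, b)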
Please, instead of calling this a mistake and then saying I’m ‘wrong’, read properly.
This makes absolutely no sense; however, I will assume this is about my criticism of the :Once implementation you are using. A thread, also known internally in Luau’s C code as a Lua State, is approximately 128 bytes in structure size alone. That disregards the fact that the Luau stack and the CallInfo list the thread uses for existing in the call stack are allocated with it: roughly 40 * 8 bytes for the CallInfo list (the BASIC_CI_SIZE macro), plus 16 bytes for each stack element allocated on the Luau heap, of which roughly 2 * LUA_MINSTACK are allocated, which is roughly 40 * 16 bytes in memory.
A table alone costs just 48 bytes by itself, already smaller than a thread, and anything it may allocate on top is a metatable (which already exists in memory as a separate table altogether), the array part at 16 bytes per slot, and the node part at 32 bytes per slot. Assuming you use the table with an array part, you can probably save 100 bytes or so. So no, it is more efficient to use a table than to create a coroutine for :Once, thanks.
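If you want to eyeball the difference yourself, a rough, hypothetical comparison in Studio could look like this (Roblox only exposes gcinfo()/collectgarbage("count"), so a full GC can’t be forced; treat the numbers as ballpark only):

	-- Compare heap growth from N coroutines vs N small connection tables.
	local N = 100_000

	local before = gcinfo() -- KB currently in use by the Luau heap
	local threads = table.create(N)
	for i = 1, N do
		threads[i] = coroutine.create(function() end) -- one full Lua state per :Once
	end
	print("threads: ~", gcinfo() - before, "KB")

	before = gcinfo()
	local tables = table.create(N)
	for i = 1, N do
		tables[i] = { fn = print, once = true } -- small connection table instead
	end
	print("tables: ~", gcinfo() - before, "KB")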
It is also shocking that you yourself, or perhaps moderation, removed the post you originally wrote, of which I sadly have no copy, so I can’t really continue writing my reply to it. I will move on to another topic: your follow-up.
No, Roblox themselves have been very clear that this is not true. Native code generation has never been enabled on the client for release purposes, if it was ever enabled for any other reason. No code on the client or in RCC (which, if you do not know, is the Roblox Cloud Compute, basically what truly runs your Roblox game server) uses native codegen, because there are challenges that aren’t solved yet for the feature, and you trying to attack me with that is mind-boggling.
That is not an attack; it is a fact. You simply do not do proper research or testing before saying something out loud, and then claim everyone else is incorrect. And no, I’m not a production-based developer.
We discuss development resources, yes. Which thread gets liked and receives attention is something else. If your post is of bad quality, or you decide to disregard feedback and commentary and, most of all, simply attack those who are actually trying to aid you and do nothing but try to help the library move forward, that is amusing. I’m not the one with an ego; the only one with one is you, who thinks the entire world is against you and that you are the only winner and sole truth-giver.
This, again, is not true; it is just enabled in Studio because Studio is a testing environment. Even there you can disable native code generation by disabling the beta feature, so no, it is not released; it is still opt-in and for testing purposes.
You are not even sure of your own claims.
Regardless of all the previous chatter.
Your signal module is fine. I gave you feedback on what could be done to improve it and, simply put, was met with complete and utter anger, which is not something I would expect in a forum for developers, by developers. We are here mostly to help each other, and when someone tries to lend you a hand, you simply try to bite it off, claiming I’m “against you”, that “I am evil”, and that I’m in ‘denial’ and on ‘copium’, all of which is nothing short of what you showed in your previous message. At no point have I been disrespectful to you before, and this is as low as I’m willing to stoop in responding to it.
Cheers, enjoy your ego, I suppose. I will refuse to help you in any future posts or resources you make, because you are unlikely to accept feedback, regardless of origin.