You seem to be commenting without knowing why this matters. Please look into how hash tables are used, and why it is somewhat wasteful to rely on them for something as specific as accessing tables such as Classes and Modules in an object that manages classes and modules, since that is only a convention.
I agree that Luau keeps getting better, but I want to point out that hashing is the fastest known approach for lookups like this and is used by most programming languages, so it makes no sense to dismiss this as a micro-optimization while the rest of the world tries to optimize everything, however small. I am presenting a very specific approach for the OOP use case: the convention of using self.Classes is useful, but it adds overhead, and this approach addresses that cost.
This might be useful for this specific case, yes.
If you have just a simple class with only a few methods, then don’t make the __index metamethod a function; instead, point it directly at the table you’re going to index:
local Class = {}
Class.__index = Class
--\\ Faster;
function Class:__index(index)
    return Class[index]
end
--\\ Slower;
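For completeness, a minimal sketch of the faster pattern in use (the class and method names here are illustrative, not from the original post). Both forms resolve lookups the same way; the table form just avoids a function call per miss:

```lua
-- Table-form __index: the VM follows the metatable pointer directly.
local Class = {}
Class.__index = Class

function Class.new(value)
    -- New instances delegate missing keys to Class via __index.
    return setmetatable({ value = value }, Class)
end

function Class:GetValue()
    return self.value
end

local obj = Class.new(42)
print(obj:GetValue()) --> 42
```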
Your trick might help in your specific edge case where there are sub-tables, though I wouldn’t be that impressed if the difference is small; there is already a lot of optimization applied to this kind of thing.
Also, I will say your method of benchmarking is flawed, because the order in which you execute code changes the results. Run one candidate at a time and take an average for each instead. That’s how I do it; it might not be the best way, but it is certainly better than that.
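A minimal sketch of that approach, with illustrative names: time one candidate per run and average over many iterations, so neither measurement benefits from the other’s warm-up.

```lua
-- Illustrative helper: average wall-clock time of f over n runs.
-- Run it for one candidate, restart the session, then run the other,
-- keeping the two measurements independent.
local function averageTime(f, n)
    local total = 0
    for _ = 1, n do
        local start = os.clock()
        f()
        total = total + (os.clock() - start)
    end
    return total / n
end

-- Example: measure one candidate in isolation.
print(averageTime(function()
    local t = {}
    t.x = 1
end, 1000))
```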
you are mentioning that it is a micro-optimization when the rest of the world is trying to optimize anything however small
Anecdote? I don’t know where you’re getting that from; it’s commonly accepted in software design that micro-optimization is a waste of time when there is no tangible performance benefit, because time == money. If this were a matter of 15 milliseconds rather than 15 nanoseconds, then heck yes. Otherwise this is, yes, a waste of time to implement.
One way to benchmark that I’ve seen, which provides really good accuracy, is to stop using a game script entirely and put the code you want in the command bar: run it, wait a little, then change it to the other thing you’re benchmarking and run that. That way the two results are completely independent from each other and in no way related.
I prefer just using wait(10) before running code, and essentially still using a game script.
I’ve noticed the command bar can be 2x slower in my testing, so I don’t use it.
First decent comment. In effect, I am almost shouting that it only helps in specific cases; check AeroGameFramework and you will see the specific case I am referring to…
About the benchmark, you went a bit too far; that is a very superficial criticism, because you don’t know the method I am using to run it.
Method:
do
    local units = {
        ['seconds'] = 1,
        ['milliseconds'] = 1000,
        ['microseconds'] = 1000000,
        ['nanoseconds'] = 1000000000
    }

    -- Runs f(...) n times and prints the average execution time in the given unit.
    function benchmark(unit, n, f, ...)
        local elapsed = 0
        local multiplier = units[unit]
        for i = 1, n do
            local now = os.clock()
            f(...)
            elapsed = elapsed + (os.clock() - now)
        end
        print(string.format('Hash\'s BM\nAVG execution Time: %.5f %s\n\n', (elapsed / n) * multiplier, unit))
    end
end
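As a self-contained sketch of how a method like the one above can be driven (the unit table here is simplified and the measured table lookup is illustrative, not from the original post):

```lua
-- Simplified sketch of the benchmarking method: time n calls of f and
-- report the average in the requested unit.
local units = { nanos = 1e9, micros = 1e6 }

local function benchmark(unit, n, f)
    local elapsed = 0
    for _ = 1, n do
        local start = os.clock()
        f()
        elapsed = elapsed + (os.clock() - start)
    end
    return (elapsed / n) * units[unit]
end

-- Measure a plain hash-part lookup (the table and field are illustrative).
local t = { Classes = {}, Modules = {} }
local avg = benchmark('nanos', 100000, function()
    local _ = t.Classes
end)
print(string.format('AVG execution time: %.5f nanos', avg))
```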
And they are also two different codes.
I can confirm this, the command console usually adds time to the benchmark.
Everything is run on the same VM. If anything it’s worse; I still recommend just testing it like that instead.
It doesn’t actually change the order in which you’re running code; they’re still essentially running in a way where one benefits from the “warm-up” of the code that ran before.
Though your making it run a bunch of tests instead of only one might help make that less important.
Code will run faster after some other code has run in almost every language, if not all.
It would be more accurate to say micro-optimising; it’s 15 nanoseconds. The point of writing code the first way is to be clear and concise. It’s not worth sacrificing readability for a negligible speed improvement, especially since the idiomatic form will most likely see further gains as Luau development progresses.
It’s interesting, but RealExoctic is correct that it’s a micro-optimisation. There was no need to reply passive-aggressively when they were correct. Even setting speed aside, there are other factors like readability and necessity.
Well, speaking of readability: I am about to create an OOP manager similar to AeroGameFramework, and readability will practically be left behind, because it is not necessary to read the controller. That makes me think speed must be fundamental to a reliable framework.
Definitely a nice thing to be aware of. Although, I’ll side with @Gojinhan and others who have mentioned a few issues with this in practice. LuaU will keep receiving improvements to keep these low level optimizations out of the way of the developer so that one can be more efficient. If internally this was of concern it would have most definitely been already addressed; we’re falling under the category of premature optimization otherwise.
Remember this famous quote from Donald Knuth: “Premature optimization is the root of all evil”.
This is absolutely useless; the hash part of the table having to resize makes no measurable difference to performance. Your benchmark is literally a clear example of this; it’s just a pointless micro-optimization.
Those who know a little about how the method for looking up string keys in a table (hashing) works will know that it can take some time. That’s why I had an idea to work around it and make the execution of classes and objects in a game faster.
That’s not how a “hash” works in the way you described; please stop citing false information about something you aren’t sure of. This is about table resizing in the hash / array part, which isn’t slow in Luau thanks to Luau’s new VM. Even in Lua, there is only a slightly noticeable difference when a table is resized many times in quick succession, and even that isn’t enough to justify optimizing.
How a table resize works is that slots are allocated in powers of 2 whenever there aren’t enough slots for a new value, in either the array or the hash part. In your case, the hash part.
However, Lua isn’t smart about this and considers the table below a dictionary rather than an array:
local t = {[1] = 1}
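The power-of-two growth described above can be modelled in plain Lua (this models only the allocation rule, not Lua’s actual C internals; the function name is illustrative):

```lua
-- Model of the resize rule: when n entries no longer fit, capacity grows
-- to the next power of two that can hold them.
local function capacityFor(n)
    local cap = 1
    while cap < n do
        cap = cap * 2
    end
    return cap
end

print(capacityFor(3)) --> 4
print(capacityFor(9)) --> 16
```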
Your title is also very misleading. If it’s about readability, say “readability”, not “optimizing”.
In other words, this is non-idiomatic; while it may be faster now, it most likely won’t be later. The opposite, actually.
also this is a micro micro optimization
Actually, it is in fact a silly micro-optimization. The new Luau VM is far more optimized than Lua’s. Even in Lua, table resizing is a micro-optimization: there is only a slightly noticeable performance difference (when a lot of resizing is done to the same or different tables), which still isn’t enough to justify optimizing. @Gojinhan
AeroGameFramework is less of an OOP manager and more of a game framework that controls flow MVC style. OOP is just a small part of AGF and that’s in terms of being able to store classes in your modules folder. Most of the time you won’t need OOP in your project.
A game framework should never leave readability behind: controllers are meant to be read, because you need to write them to begin with. Keeping things readable helps you maintain the codebase, and it helps when collaborating because other developers can clearly see what you’re doing. There are times when developing for speed is not appropriate, and it’s better to settle on something that’s fast, efficient and organised; AGF hits those targets well enough.
If you’re looking for a system where your hash trick may be more applicable, you may be interested in middleclass. That looks about roughly what you tried to write as well.
there is ONLY a slight noticeable difference when a table is resized quickly a lot of times but the difference still isn’t enough to consider optimizing.
table.create() exists exactly because extraneous amounts of memory allocation are non-ideal. So I guess it must be somehow worth optimizing if the engineers added it to the API.
Actually, it is infact a silly micro optimization
I never disagreed with that; in fact, I think you misread my post. While strikethrough can mean a redacted statement, in this case it marks a statement that is less relevant, or intended to be ‘sneaky’, something like that. I gave context for this meaning by saying “the opposite actually.” If I had meant it as a redacted statement, then why was that part left unstruck?
Anyway, enough about my intent; back to you.
Even in Lua, table resizing is micro optimization
Again, it was considerable enough at the time for string.concat to be implemented, according to PIL 5.0 anyway.
That’s correct in Lua, but only if you have a lot of table resizing. You should be talking about Lua 5.1 rather than 5.0. This doesn’t really matter, so it isn’t really worth talking about anyway.
table.create is completely redundant in Luau for performance; it just initialises the table properly so that it won’t resize often. The new Luau VM is extremely optimised.
This got a little out of control, guys. Let me tell you that I studied the benchmark a bit more, and I noticed that a certain percentage of runs yields 0 nanoseconds in both cases: 50% for the trick and 67% for the hash.
Still, you could say that under stress the condition trick would help a lot; I mean using it in timed waits, for example a repeat until loop, or some specific gigantic load.
Note that this is going to be my last post here, as I don’t like to clutter up topics, and I would rather explain this here than in PMs, since other people may also get the chance to learn.
An array that Lua internally does not treat as a dictionary is one with no explicit keys:
local t = {1, 2, 3}
Meanwhile, a table (even if it’s an array) with explicit keys, like the one below, is seen as a dictionary, because Lua isn’t smart about this, so it allocates about 4 slots in the hash part rather than the array part.
local t = {[1] = 1, [2] = 2, [3] = 3 }
This is mentioned by one of Lua’s creators himself, Roberto. I’ve linked the article below; just scroll down to the section where they discuss tables. Note that most of the performance tips are useless in Luau.
The relevant quote is in the article’s section on tables. Article: https://www.lua.org/gems/sample.pdf
If there is something from Lua’s internals that you could post, we wouldn’t need a roundabout discussion.
Here’s a summary of how tables are implemented (Lua 5.1): Lua tables are associative arrays and can be used like arrays or dictionaries. Lua doesn’t need to, and can’t, tell the difference, because there is none; they’re all, in some way, associative arrays, and there’s a reason you can still index arrays by their hash part: they still use it.
Tables are in fact associative arrays. However, the array part is implemented differently from the dictionary (hash) part, and this is where your confusion comes from.
The primary difference is that the array part is just a C array internally, whereas the hash part is a hash table, itself an array internally, that stores buckets (a bucket holds the entries whose keys hash to that slot; inside each bucket is a linked list of nodes). Here’s an example of what I mean:
local dict = {
    Bo = 1
}
print(dict.Bo) --> 1
Internally, dict.Bo is performed something similar to this:
- The Bo key is first hashed (note that the hash function just returns the hash already stored inside that string object; since strings are interned, their hash is precomputed as well), and a linked list is stored at a key (the hash of the Bo string object) inside the array.
- The linked list consists of nodes, and each node is defined like this: {next = ..., key = ..., value = ...}.
- Lua loops through the nodes in that linked list and checks whether the key we indexed is the same as node.key; if so, it returns its value via node.value. This is, for the most part, O(1) time complexity assuming there are NO hash collisions involved (hash collisions append new nodes to that specific linked list, increasing lookup time).
Looking at the source for how Lua internally gets a string key’s value inside a dictionary: first, the hash of the key is taken (it isn’t calculated, just returned): n = hashstr(t, key). Lua then gets the bucket and searches all the nodes in it, checking whether a node’s key member is the same as the key we indexed; if so, it returns the value of that node. If the key isn’t the same, it goes to the next node via n = gnext(n), and finally returns nil if it has searched all the nodes.
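The lookup steps above can be sketched as a toy model in plain Lua (chained buckets of {next, key, value} nodes; the hash function here is a simplified stand-in, not Lua’s real string hash, and all names are illustrative):

```lua
-- Toy model of a hash part: an array of buckets, each a linked list of
-- {next = ..., key = ..., value = ...} nodes, as described above.
local NUM_BUCKETS = 8

local function hash(key) -- stand-in for Lua's real precomputed string hash
    local h = 0
    for i = 1, #key do
        h = (h * 31 + key:byte(i)) % NUM_BUCKETS
    end
    return h + 1
end

local buckets = {}

local function insert(key, value)
    local b = hash(key)
    -- Prepend a node to the bucket's chain (collisions grow the chain).
    buckets[b] = { next = buckets[b], key = key, value = value }
end

local function lookup(key)
    local node = buckets[hash(key)]
    while node do -- walk the chain; O(1) when there are no collisions
        if node.key == key then
            return node.value
        end
        node = node.next
    end
    return nil -- searched all nodes, like the real implementation
end

insert("Bo", 1)
print(lookup("Bo")) --> 1
```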
Meanwhile a search function for numbers:
See? Now that wasn’t so hard! If this had been your first post in response, we would have had no off-topic debate on this thread, and we could have gotten on with our day without spending so many hours thinking about it!
Thank you for actually providing substance for your post this time around, so that we can actually have a chance to learn and correct potential misconceptions we may have been harbouring (assuming all this information is accurate - I myself, admittedly, am not well versed in Lua internals). This could’ve gone much better.
+1 for the follow-up.
I did flag myself, but I’m going to go ahead and delete some previous posts to avoid clutter in advance because I am clearly wrong and this is also a good moment for me to really practice what I preach and not speak on things I am not well versed on.