Luau supposedly has inline caching, as stated here, but when I benchmark to test it, I don't see the optimization taking effect.
Results:
Code:
local Class = {}
Class.__index = Class

function Class.new()
    return setmetatable({ bool = true }, Class)
end

function Class:Method()
    self.bool = not self.bool
end

-- Localized copy of the constructor, so the "cached call" case skips the field lookup on Class
local new = Class.new

return {
    ParameterGenerator = function()
        return
    end,
    Functions = {
        ["normal call"] = function(Profiler)
            -- count is defined elsewhere in the benchmark module (not shown here)
            for _ = 1, count do
                local c = Class.new()
            end
        end,
        ["cached call"] = function(Profiler)
            for _ = 1, count do
                local c = new()
            end
        end,
    },
}
We can clearly see that the cached new function runs much faster than Class.new, which would suggest that inline caching is doing absolutely nothing here?
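To spell out what each case actually does per iteration (this is just my annotated reading of the code above, nothing new being measured):

    local c1 = Class.new() -- "normal call": fetch the "new" field from the Class table every iteration, then call it
    local c2 = new()       -- "cached call": the function was already fetched into a local up front, so only the call remains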
I went ahead and tested the global access chains optimization, which is the same concept as inline caching (syntactically speaking, at least), and it works as advertised.
Results:
Code:
-- Localized copies of the table library functions used by the "localized" case;
-- these locals are implied by the code below but were not shown in the snippet
local create = table.create
local insert = table.insert
local clone = table.clone
local clear = table.clear

return {
    ParameterGenerator = function()
        return
    end,
    Functions = {
        ["inline"] = function(Profiler)
            Profiler.Begin("Create")
            local t = table.create(count / 2)
            Profiler.End()

            Profiler.Begin("Insert")
            for _ = 1, count do
                table.insert(t, true)
            end
            Profiler.End()

            Profiler.Begin("Clone")
            local clone = table.clone(t)
            Profiler.End()

            Profiler.Begin("Clear")
            table.clear(t)
            table.clear(clone)
            Profiler.End()
        end,
        ["localized"] = function(Profiler)
            Profiler.Begin("Create")
            local t = create(count / 2)
            Profiler.End()

            Profiler.Begin("Insert")
            for _ = 1, count do
                insert(t, true)
            end
            Profiler.End()

            Profiler.Begin("Clone")
            local clone = clone(t)
            Profiler.End()

            Profiler.Begin("Clear")
            clear(t)
            clear(clone)
            Profiler.End()
        end,
    },
}
As we can see from that benchmark, caching a global field by localizing it is unnecessary, since it is already cached for us by that optimization.
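To make the "global access chain" wording concrete, a chain here is just a builtin reached through the global table, like table.insert below (a minimal sketch, assuming the default optimization level and that the table global is never reassigned):

    local t = {}
    for i = 1, 10 do
        -- With optimizations on, Luau resolves the table.insert chain when the script is compiled,
        -- so the loop body does not re-do the "table" -> "insert" lookups on every iteration
        table.insert(t, i)
    end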
So what is happening in the first benchmark? Is that not an example of what inline caching is supposed to optimize, or am I misunderstanding it?
All benchmarks were performed with the Benchmarker Plugin.