Ever since I started making games on Roblox, I've always had some deep questions in my mind: Is cloning more efficient than Instance.new? Is multiplication faster than division? And many more. So one day I finally decided I was going to answer all of these questions, and I created this benchmark class:
```lua
local Benchmark = {}
Benchmark.__index = Benchmark

function Benchmark.new(benchmarkTask: () -> ())
	local self = setmetatable({}, Benchmark)
	self.task = benchmarkTask
	return self
end

function Benchmark.Execute(self: any, count: number): number
	local startTime = os.clock()
	local myTask = self.task
	for i = 1, count do
		myTask()
	end
	return os.clock() - startTime
end

return Benchmark
```
Use case:
```lua
local Benchmark = require(Path.To.Benchmark)

local myBenchmark = Benchmark.new(function()
	print("Doing task!")
end)

local result = myBenchmark:Execute(100)
print(result)
```
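For instance, one of the questions above (multiplication vs. division) could be answered with this class. Here is a hypothetical, self-contained sketch; the class is inlined (without the Luau type annotations) so it also runs in plain Lua:

```lua
-- Minimal inline copy of the Benchmark class so this sketch is self-contained;
-- in an actual place you would require() the module instead.
local Benchmark = {}
Benchmark.__index = Benchmark

function Benchmark.new(benchmarkTask)
	local self = setmetatable({}, Benchmark)
	self.task = benchmarkTask
	return self
end

function Benchmark:Execute(count)
	local startTime = os.clock()
	for i = 1, count do
		self.task()
	end
	return os.clock() - startTime
end

-- Benchmark multiplication against division over the same workload.
local x = 0
local mulBench = Benchmark.new(function() x = 42 * 0.5 end)
local divBench = Benchmark.new(function() x = 42 / 2 end)

local mulTime = mulBench:Execute(1000000)
local divTime = divBench:Execute(1000000)
print(("multiply: %.4fs, divide: %.4fs"):format(mulTime, divTime))
```

Note that with such a cheap task, the loop overhead dominates; the comparison only becomes meaningful once the timings are well above the timer's resolution.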
I want to make sure I'm getting correct results from it, so my only question is:
Are there any flaws?
I mean, it's good, but here are a few things to consider, along with a revised version:
- Warm-up Phase: To avoid cold-start issues (like JIT warm-up times or initial memory allocation), it might be beneficial to include a warm-up phase where the task is run a few times before actual timing begins.
- Garbage Collection: Lua's garbage collector can introduce noise into your measurements. You might want to control for this by collecting garbage before starting the benchmark, or disabling it during the benchmark (and then re-enabling it afterwards).
- Resolution and Accuracy: os.clock() might not have sufficient resolution for very fast operations. If possible, consider using a higher-resolution timer if the environment provides one.
- Task Complexity: Ensure the task you're benchmarking is complex enough that its execution time is measurable. Very fast tasks can result in measurement noise dominating the actual task execution time.
- Environment Variability: Run your benchmarks multiple times and take the average to mitigate the effects of transient system state changes (like CPU frequency scaling, background processes, etc.).
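That last point can be sketched with a small helper. This is hypothetical (not part of the module); it takes any function that returns an elapsed time and averages it over several runs:

```lua
-- Hypothetical helper: repeat a timed run several times and average the
-- results to smooth out transient system noise (CPU frequency scaling,
-- background processes, etc.).
local function averageTime(runOnce, runs)
	local total = 0
	for _ = 1, runs do
		total = total + runOnce()
	end
	return total / runs
end

-- Usage with the Benchmark class would look like:
-- local avg = averageTime(function() return myBenchmark:Execute(100000) end, 5)
```

Some benchmarking setups report the minimum of the runs instead of the mean, since noise only ever adds time; either way, a single run is rarely trustworthy.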
Here’s a revised version of your benchmarking class incorporating some of these ideas:
```lua
local Benchmark = {}
Benchmark.__index = Benchmark

function Benchmark.new(benchmarkTask: () -> ())
	local self = setmetatable({}, Benchmark)
	self.task = benchmarkTask
	return self
end

function Benchmark.Execute(self: any, count: number): number
	local myTask = self.task

	-- Warm-up phase
	for i = 1, 100 do
		myTask()
	end

	-- Note: Roblox's Luau sandbox only supports collectgarbage("count");
	-- the "collect"/"stop"/"restart" options below apply to standard Lua.
	-- Collect garbage before starting the benchmark
	collectgarbage("collect")
	-- Pause garbage collection for the benchmark duration
	-- (collectgarbage("stop") returns a number, not a boolean,
	-- so we simply restart unconditionally afterwards)
	collectgarbage("stop")

	local startTime = os.clock()
	for i = 1, count do
		myTask()
	end
	local endTime = os.clock()

	-- Re-enable garbage collection
	collectgarbage("restart")

	return endTime - startTime
end

return Benchmark
```
Explanation of Changes:
- Warm-up Phase: The task is run 100 times before the timing begins to account for JIT and initial setup times.
- Garbage Collection: Garbage is collected before starting the benchmark to minimize its impact. Garbage collection is also paused during the benchmark to avoid interruptions, and re-enabled afterwards.
Use case example:
```lua
local Benchmark = require(Path.To.Benchmark)

local myBenchmark = Benchmark.new(function()
	print("Doing task!")
end)

local result = myBenchmark:Execute(100)
print(result)
```
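Since Execute returns the total elapsed time, dividing by the iteration count gives the average cost per call, which is usually the number you actually want when comparing two approaches. A small sketch (not part of the module; totalTime is a made-up example value standing in for a real Execute result):

```lua
-- The total time depends on the iteration count; the per-call average is
-- what makes two benchmarks comparable.
local count = 100000
local totalTime = 0.25 -- example value; in practice: myBenchmark:Execute(count)
local perCall = totalTime / count
print(("average per call: %.9f seconds"):format(perCall))
```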
If you need anything else, I'm here.