Benchmarker Plugin - Compare function speeds with graphs, percentiles, and more!

Just to make sure I understand: I’m supposed to report here, right?

I believe you misunderstood me. By “displays” I meant that the flamechart shows the values as text, but the chart itself doesn’t visually represent those values correctly.

Here’s an example:

[flamechart screenshot]
As you can see, the difference in bar sizes between TweenPlus and TweenService is nowhere near the 48μs difference in their 50th percentiles. Why does this happen? Because the flamechart is not using the 50th percentile — and what it is actually using I can’t tell from the source code, because that part is hard to read.
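For reference, here is a minimal Luau sketch (not the plugin’s actual code) of how bar widths could be derived directly from the 50th-percentile values; the result table layout and the field name Percentile50 are assumptions for illustration:

```lua
-- Hypothetical results keyed by function name; values in microseconds,
-- chosen so the 50th percentiles differ by 48us as in the example above.
local results = {
	TweenPlus = { Percentile50 = 52 },
	TweenService = { Percentile50 = 100 },
}

local total = 0
for _, result in pairs(results) do
	total += result.Percentile50
end

for name, result in pairs(results) do
	-- Each bar's relative width is its share of the summed 50th percentiles,
	-- so a 48us difference shows up as a proportional size difference.
	local widthScale = result.Percentile50 / total
	print(name, widthScale)
end
```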

I don’t think reproduction steps are necessary, because this is a general issue, though it’s only clearly visible in some scenarios.

Oh sorry, I wasn’t aware. I thought all updates were posted here, so that’s why.
I appreciate you maintaining it.

The plugin has proven useful, but it’s missing a lot of polish.

Additionally, I have two super simple feature requests, which I have already added manually, but would love official support for:

An Iterations parameter.

This is a parameter, just like Functions, BeforeEach etc.

What it allows you to do is control the number of iterations of the for-loop.
By default (in the current official version of Benchmarker) it is 1,000 iterations. I’d like to be able to control this, as there are a few scenarios where you’d want to increase or decrease the iterations for better accuracy or faster runs, without influencing the actual function run-time with e.g. a manual for-loop.

The parameter is of course of type number.
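To make the request concrete, here is a sketch of what a benchmark module could look like with the proposed Iterations field. The surrounding Functions/BeforeEach layout follows the description above; the exact field names the plugin expects may differ:

```lua
-- Hypothetical benchmark module with the proposed Iterations parameter.
return {
	Iterations = 10000, -- proposed: override the default 1,000 loop iterations

	BeforeEach = function()
		-- set up any state the functions need; excluded from the measurement
	end,

	Functions = {
		["TweenService"] = function()
			-- code to benchmark
		end,
		["TweenPlus"] = function()
			-- code to benchmark
		end,
	},
}
```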

A Runs parameter

This is a parameter, just like Functions, BeforeEach etc.

What it allows you to do is run each function multiple times.

It doesn’t work quite like a manual for-loop inside the functions, though.
This is because it runs the BeforeEach before every function run, without including it in the measurement. If you instead wrote a manual loop inside each function, with some BeforeEach-style setup before the code you actually want to measure, that setup logic (and the loop itself) would become part of each function’s measurement.

Additionally, it’s a time-saving feature, because you don’t have to manually create for-loops inside of every single function.

The parameter is of course of type number.
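A rough sketch of how the plugin could handle this internally; the runBenchmark function, the Runs field, and the use of os.clock for timing are illustrative assumptions rather than the plugin’s actual implementation:

```lua
-- Hypothetical runner loop for the proposed Runs parameter.
-- BeforeEach executes before every run, but only the function itself is timed,
-- which a manual for-loop inside the function cannot achieve.
local function runBenchmark(benchmark)
	local runs = benchmark.Runs or 1
	local timings = {}

	for name, func in pairs(benchmark.Functions) do
		timings[name] = {}
		for run = 1, runs do
			if benchmark.BeforeEach then
				benchmark.BeforeEach() -- setup, not measured
			end

			local start = os.clock()
			func() -- only this call is measured
			timings[name][run] = os.clock() - start
		end
	end

	return timings
end
```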



If you have any questions, please let me know!

Here’s another small but annoying bug with the flamechart:

The flamechart code looks up an unsanitized key in a table of sanitized keys, so for any key containing a ., like in the example image, the lookup fails and the calculated order value falls back to keyCount because of the `or keyCount` fallback. This order value is used to calculate the color, which results in all functions whose names contain . getting the exact same color, just like in the example image.
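To illustrate, here is a hypothetical reconstruction of that pattern (not the plugin’s actual source); the key names and the sanitization rule are made up for the example:

```lua
-- Hypothetical reconstruction of the described lookup bug.
local keys = { "TweenService", "module.TweenPlus" }
local keyCount = #keys

-- The keys get sanitized (here, "." is stripped) before being stored...
local sanitizedKeys = {}
for index, key in ipairs(keys) do
	sanitizedKeys[index] = string.gsub(key, "%.", "")
end

-- ...but the lookup uses the raw, unsanitized key, so any key containing "."
-- never matches and falls through to the `or keyCount` default.
for _, key in ipairs(keys) do
	local order = table.find(sanitizedKeys, key) or keyCount
	-- order feeds the color calculation, so every "."-named function
	-- ends up with the same order and therefore the same color.
	print(key, order)
end
```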

Oh, and another very small issue:

[graph screenshot]

As you can see, parts of the blue graph overlap the red one while other parts don’t.
I believe the ZIndex is effectively random. You should group each graph’s individual lines so that they all share the same ZIndex, as in the sketch below.
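Something along these lines, assuming each graph exposes its line segments as GuiObjects (the graphs table and Segments field are illustrative assumptions):

```lua
-- Hypothetical fix: give all segments of one graph the same ZIndex so one graph
-- consistently renders above or below the other instead of interleaving randomly.
for graphIndex, graph in ipairs(graphs) do
	for _, segment in ipairs(graph.Segments) do
		segment.ZIndex = graphIndex
	end
end
```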

Thank you for the reports and requests. :+1: I will take a deeper look into these when I have a chance.

Iterations and similar parameters were actually in older versions of Benchmarker, but I removed them because people were shooting themselves in the foot and getting unreliable results with improper configs. I felt it best to enforce reasonable standards so that results shared around would be trustworthy and not heavily dependent on user configuration.
I quite like the Runs idea; it seems like a great QoL improvement for a common pattern.
