Graph Outliers setting, setting descriptions, plus some other things
When graphing a distribution, it’s generally more useful to drop the statistical outliers from the graph. If a single call happened to take 50x longer than the rest, we don’t want to zoom the graph out to show that one call, because then our relevant data points are too small to analyze visually! By default, the plugin drops those outliers.
However, I realize that could be counter-productive in certain use cases. To empower those users, I’ve introduced another setting, Graph Outliers, which toggles whether the plugin should graph the outliers or not.
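For context, the plugin doesn’t document the exact rule it uses to decide what counts as an outlier, but a common choice is the 1.5×IQR fence. Here’s a minimal sketch of that idea (dropOutliers is a hypothetical name, not the plugin’s API, and the quartile picks are crude):

local function dropOutliers(samples)
    -- Sort a copy so the original dataset is untouched
    local sorted = {}
    for i, v in ipairs(samples) do
        sorted[i] = v
    end
    table.sort(sorted)
    local n = #sorted
    -- Crude quartile picks; a real implementation would interpolate
    local q1 = sorted[math.max(1, math.floor(n * 0.25))]
    local q3 = sorted[math.max(1, math.ceil(n * 0.75))]
    local fence = 1.5 * (q3 - q1)
    local kept = {}
    for _, v in ipairs(sorted) do
        if v >= q1 - fence and v <= q3 + fence then
            table.insert(kept, v)
        end
    end
    return kept
end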
The settings tab will now display a short description when you hover over a setting, so that you know what it does without having to tinker and guess (or read my posts).
The summary tab used to show more decimal places than it actually measured (down below a single microsecond), leaving trailing zeros all over. That was dumb, so I’ve fixed it.
I also fixed a couple minor mistakes that I noticed in the UI.
Added a visibility toggle to each function so that you can test a lot of functions at once without cluttering your graph.
I also restructured the GUI internals to make this feature possible.
In my quest to make the microprofiler obsolete… I’m working on making this plugin have a microprofiler tool built in. Yup. Don’t know if I’ll succeed, but I’m trying!
I have a prototype that works but needs more polish and testing.
Setting a line to invisible hid its GUI, but didn’t actually make the graph recalculate its bounds.
I’ve redone visibility so that if a line is not visible, the graph will totally ignore that line and zoom in on the ones that are visible.
It’s also more efficient now, because it doesn’t waste time calculating invisible lines.
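Conceptually, the bounds pass now looks something like this (a sketch of the idea, not the plugin’s actual internals; computeBounds and the line fields are assumed names):

local function computeBounds(lines)
    local minY, maxY = math.huge, -math.huge
    for _, line in ipairs(lines) do
        -- Invisible lines are skipped entirely: no bounds, no math
        if line.Visible then
            for _, y in ipairs(line.Points) do
                minY = math.min(minY, y)
                maxY = math.max(maxY, y)
            end
        end
    end
    return minY, maxY
end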
Not gonna deploy on a Friday, but I do think it’s ready (will publish Sunday night, most likely).
I’ve got the profiler working! Now we have a histogram and a flame chart! Pretty freaking sweet.
With this, we no longer need the microprofiler or any other benchmarker tools!
(Image displays the profiler breakdown of table.create() vs {}, showing that table.create() takes much longer to initialize the array but can then populate it far faster, making table.create() the faster method overall.)
This update adds a Profiler to the plugin, so you can break your function into individually timed sections for more in-depth benchmarking.
When you don’t specifically profile your functions, the results still act as a useful bar graph, similar to Validark’s popular benchmark module. When you use the Profiler, that bar graph doubles as a flame chart to give you a detailed breakdown of your function. Hovering over a chunk gives you the full statistics for that particular section.
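As an illustration, here’s a minimal .bench file that splits a function into two labeled sections. The label names are made up; Profiler.Begin/Profiler.End are the calls from the plugin’s .bench template:

return {
    ParameterGenerator = function()
        return math.random(1, 1000)
    end;
    Functions = {
        ["table.create"] = function(Profiler, n)
            Profiler.Begin("allocate")
            local t = table.create(n)
            Profiler.End()
            Profiler.Begin("populate")
            for i = 1, n do
                t[i] = i
            end
            Profiler.End()
        end;
    };
}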
Also remade the demo/tutorial video to make sure it’s got all the latest features.
I forgot a break in one of my loops, which caused the wrong error message to be displayed when your tests are faulty. If one of your functions fails due to an error on your end, the plugin now displays your error for you.
This bug made it incorrectly tell you that you forgot a Profiler.End(), sorry about that!
Whoops! I never use light theme, so I didn’t catch that. Thanks for catching it!
I’m going to use light theme for a while and try to improve in all areas.
Update: V 4.1
After a lot of testing, tweaking, and just plain trying to use it:
I’ve removed the colors yellow and yellow-green. They cause eye strain.
Light theme uses pastel colors and dark theme uses vibrant colors.
Each color is actually slightly different depending on its background in order to add contrast, but is still recognizable as the same color with the same function.
Users of this plugin often deal with timings below a single millisecond, which means reading the results involves a lot of annoying decimals to keep track of.
This update makes the plugin automatically pick between microseconds and milliseconds based on your data set, so all displayed measurements are easier to read and remember. It also adjusts the number of digits shown, so you don’t have “000” at the end of every microsecond measurement.
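As a sketch of the idea (pickUnit is a hypothetical helper, not the plugin’s code), one unit is chosen for the whole data set from its median, and the digit count follows the unit:

local function pickUnit(samples)
    -- Sort a copy to find the median without touching the original data
    local sorted = {}
    for i, v in ipairs(samples) do
        sorted[i] = v
    end
    table.sort(sorted)
    local median = sorted[math.ceil(#sorted / 2)]
    -- One unit for the whole data set, chosen from its median
    if median < 1e-3 then
        return function(seconds)
            return string.format("%.1f µs", seconds * 1e6)
        end
    end
    return function(seconds)
        return string.format("%.3f ms", seconds * 1e3)
    end
end

local format = pickUnit({0.0000305, 0.0000251, 0.0000282})
print(format(0.0000282)) --> 28.2 µs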
This plugin is marketed toward power users and advanced developers, so I priced it with that in mind. For that target audience, the plugin costs less than $3 USD and provides powerful tools to help them optimize their game to earn back far more than three bucks.
End Condition Setting - Run for a set amount of time OR for a set number of function calls.
(and other minor improvements)
This feature was requested by @zeuxcg and @pobammer quite a while ago. Sorry for the long delay!
Up until now, the benchmark tests would call each function X times. This lets you gather precisely sized datasets. However, some users would rather run the benchmark for X seconds and gather an arbitrary number of datapoints. This is generally more user-friendly, since every test takes the same set time, so slow functions won’t make your tests take longer; their tests will simply gather fewer datapoints in order to stay within your time constraint.
Therefore, I’ve added a setting to allow you to pick which test behavior you prefer!
One behavioral note for Run Time: the test might take a few milliseconds longer than your set time, because I make sure each function is run the same number of times (so the test may go overtime to finish the current round of calls) in order to ensure a balanced and accurate result.
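In sketch form, the behavior amounts to this (runTimed is hypothetical and the plugin’s real loop also records timings, but the balancing idea is the same):

local function runTimed(functions, budgetSeconds)
    local deadline = os.clock() + budgetSeconds
    local rounds = 0
    repeat
        -- Every function gets called once per round, so call counts stay
        -- balanced even if the last round runs slightly past the deadline
        for _, fn in ipairs(functions) do
            fn()
        end
        rounds = rounds + 1
    until os.clock() >= deadline
    return rounds
end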
Improved protection behavior - Mark test modules with a “.bench” suffix in their name, rather than a CollectionService tag.
This makes it much easier to use this plugin with Rojo and GitHub, while still protecting you from accidentally running an unintended module and causing issues. Requested by @Kampfkarren.
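In other words, a module is picked up as a test when its name ends in “.bench”, roughly like this (isBenchModule is an assumed name, not the plugin’s source):

local function isBenchModule(instance)
    return instance:IsA("ModuleScript")
        and instance.Name:match("%.bench$") ~= nil
end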
With icons from the incredible @Elttob, V 6.0 brings a much cleaner and more compact menu interface, replacing the ugly long text buttons.
Library
As you may have noticed, there’s a new section in the menu. The Library serves two purposes.
Firstly, it provides you with commonly used tests so you don’t have to write them yourself. Secondly, these files act as examples that demonstrate how to use Benchmarker and how to write your .bench files.
If you would like to contribute to the Library, open a pull request here.
Profiler Improvements
Using the Profiler for a more detailed breakdown is very useful. However, if using the Profiler altered your test results, it wouldn’t be much good. I spent some time optimizing it down to the microsecond level.
As you can see in the image below, the same function with 5 labels (2 of them nested) actually ran 0.3 microseconds faster. That’s pretty impressive, if I do say so myself. It means the Profiler had no significant impact at all!
In addition to this performance squeeze, the Profiler now displays the “dark matter” of a function under a label called [UNTRACKED]. If you wrap a portion of your function in a Profiler.Begin - Profiler.End pair but leave the rest of the function unprofiled, the rest of the function will be put into the [UNTRACKED] label.
If your entire function has profiling, then the [UNTRACKED] label represents the overhead of the Profiler itself (which should usually be well below 2 microseconds).
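For example, in this sketch only the fill loop is labeled, so the allocation before it and the sum after it fall under [UNTRACKED]:

return {
    ParameterGenerator = function()
        return 1000
    end;
    Functions = {
        ["Partially profiled"] = function(Profiler, n)
            local t = {} -- not inside any label: counted as [UNTRACKED]
            Profiler.Begin("fill")
            for i = 1, n do
                t[i] = i
            end
            Profiler.End()
            local sum = 0 -- after the last End(): also [UNTRACKED]
            for _, v in ipairs(t) do
                sum = sum + v
            end
        end;
    };
}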
Minor improvements
I made various little tweaks throughout, such as displaying which .bench file is currently running. Nothing worth explicitly noting, just general improvements to UX and speed.
Running into a bit of a problem with running a test. The plugin is just stuck at “Running Tests.bench” without any sort of error returned. Chances are I’m doing something wrong, just not sure where.
Here’s my Tests.bench:
--[[
|WARNING| THESE TESTS RUN IN YOUR REAL ENVIRONMENT. |WARNING|
If your tests alter a DataStore, it will actually alter your DataStore.
This is useful in allowing your tests to move Parts around in the workspace or something,
but with great power comes great responsibility. Don't mess up your stuff!
---------------------------------------------------------------------
Documentation and Change Log:
https://devforum.roblox.com/t/benchmarker-plugin-compare-function-speeds-with-graphs-percentiles-and-more/829912/1
--------------------------------------------------------------------]]
return {
    ParameterGenerator = function()
        -- This function is called before running your function (outside the timer)
        -- and the return(s) are passed into your function arguments (after the Profiler). This sample
        -- will pass the function a random number, but you can make it pass
        -- arrays, Vector3s, or anything else you want to test your function on.
        return require(game.ServerStorage.Await)
    end;

    Functions = {
        ["Await"] = function(Profiler, Await) -- You can change "Sample A" to a descriptive name for your function
            -- The first argument passed is always our Profiler tool, so you can put
            -- Profiler.Begin("UNIQUE_LABEL_NAME") ... Profiler.End() around portions of your code
            -- to break your function into labels that are viewable under the results
            -- histogram graph to see what parts of your function take the most time.

            -- Your code here
            Await(0.05)
        end;

        ["Sample B"] = function(Profiler, RandomNumber)
            wait(0.05)
        end;

        -- You can add as many functions as you like!
    };
}
Here’s ServerStorage.Await:
local RunService = game:GetService("RunService")

local BinaryHeap = {}

function BinaryHeap.insert(value, data)
    local insertPos = #BinaryHeap + 1
    BinaryHeap[insertPos] = {
        value = value,
        data = data
    }
    -- Bubble the new entry up until the min-heap property holds
    while insertPos > 1 and BinaryHeap[insertPos].value < BinaryHeap[math.floor(insertPos / 2)].value do
        BinaryHeap[insertPos], BinaryHeap[math.floor(insertPos / 2)] = BinaryHeap[math.floor(insertPos / 2)], BinaryHeap[insertPos]
        insertPos = math.floor(insertPos / 2)
    end
end

function BinaryHeap.extract()
    if #BinaryHeap < 2 then
        BinaryHeap[1] = nil
        return
    end
    BinaryHeap[1] = table.remove(BinaryHeap)
    local pos = 1
    -- Sift the new root down until the min-heap property is restored
    while 2 * pos <= #BinaryHeap do
        local smallerChild = 2 * pos
        if smallerChild + 1 <= #BinaryHeap and BinaryHeap[smallerChild + 1].value < BinaryHeap[smallerChild].value then
            smallerChild = smallerChild + 1
        end
        if BinaryHeap[pos].value <= BinaryHeap[smallerChild].value then
            break
        end
        BinaryHeap[pos], BinaryHeap[smallerChild] = BinaryHeap[smallerChild], BinaryHeap[pos]
        pos = smallerChild
    end
end

local CPUTime = os.clock()

RunService.Stepped:Connect(function()
    CPUTime = os.clock()
    local PrioritizedThread = BinaryHeap[1]
    while PrioritizedThread do
        PrioritizedThread = PrioritizedThread.data
        local YieldTime = CPUTime - PrioritizedThread[2]
        if PrioritizedThread[3] - YieldTime <= 0 then
            BinaryHeap.extract()
            coroutine.resume(PrioritizedThread[1], YieldTime)
            PrioritizedThread = BinaryHeap[1]
        else
            PrioritizedThread = nil
        end
    end
end)

return function(Time)
    BinaryHeap.insert(Time or 0, {coroutine.running(), os.clock(), Time or 0})
    return coroutine.yield(), CPUTime
end
Might be worth pointing out that I have the settings set to run for two seconds instead of a set number of calls. Again, I’m almost certain I’m doing something wrong here and it isn’t the plugin’s fault.
Roblox decided to moderate the add button.
Uploaded again, hopefully they let this one through. I’ll be updating the plugin with the new asset id if it works.
Sorry for the inconvenience.
You don’t need to require your module every single time the functions are called; just define it at the top of your bench script and use it in whatever function you want.
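For example, a sketch of your file with the require hoisted to the top:

local Await = require(game.ServerStorage.Await) -- required once, up front

return {
    ParameterGenerator = function()
        -- Nothing to generate; Await is already in scope
    end;
    Functions = {
        ["Await"] = function(Profiler)
            Await(0.05)
        end;
        ["Sample B"] = function(Profiler)
            wait(0.05)
        end;
    };
}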
The actual problem is that your Await function doesn’t work in Edit mode, and is therefore yielding indefinitely. You made it rely on Stepped, which doesn’t run during Edit since there’s no physics simulation being stepped! I switched from RunService.Stepped to RunService.Heartbeat, which fixed the issue completely.
I also changed if PrioritizedThread[3] - YieldTime <= 0 then to if YieldTime >= PrioritizedThread[3] then. Checking whether a subtraction comes out at or below zero is a roundabout way of asking whether one value is larger than the other. This wasn’t the issue, but it was needlessly confusing, so I changed it.
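Applied to the module above, the changed section reads:

RunService.Heartbeat:Connect(function() -- was RunService.Stepped
    CPUTime = os.clock()
    local PrioritizedThread = BinaryHeap[1]
    while PrioritizedThread do
        PrioritizedThread = PrioritizedThread.data
        local YieldTime = CPUTime - PrioritizedThread[2]
        if YieldTime >= PrioritizedThread[3] then -- was PrioritizedThread[3] - YieldTime <= 0
            BinaryHeap.extract()
            coroutine.resume(PrioritizedThread[1], YieldTime)
            PrioritizedThread = BinaryHeap[1]
        else
            PrioritizedThread = nil
        end
    end
end)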