What are "big O" and "little o" notation? (Beginner-friendly explanation/guide)

Hello :smile:

I’m writing a few resources related to optimization because I want to share some of my knowledge and experience on the topic. I haven’t seen any resources on these notations, or on how they apply to Roblox or Luau, so here we are.

So what exactly are the two “O notations”?

Big O and little o notation are really just used to describe how “complex” something is: how it scales based on different parameters (not the true cost in time or memory, just how it grows). These notations can describe time, memory, or just about anything else, and they show up in both math and programming.

These two notations are actually pretty simple (and very similar). Typically you’ll see the variable n used, which represents how many “things” there are. In programming you’ll usually see big O notation; little o is more common in math. The reason is really just that what little o notation describes doesn’t come up nearly as often in programming; it’s more applicable to math and pure-number situations.

Big O & little o

Big O notation says that the overall cost will be at most whatever is inside of it (times some constant factor). For example, O(n) means that the complexity or cost will be at most proportional to n. Little o notation, e.g. o(n), means that the complexity grows strictly slower than n, and will always end up less than it.

In other words, O(n) means that the complexity is less than or equal to n, and o(n) means that the complexity grows strictly slower than n. This also means that little o implies big O (if o(n) is true, O(n) is too), but not the other way around.
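
One way to get a feel for this is to count the operations a loop performs. A sketch (the operation counts here are made up for illustration, not real VM costs):

```lua
-- Count "operations" for a simple loop to see why constants don't matter
local function countOps(n)
	local ops = 0
	for i = 1, n do
		ops = ops + 2 -- pretend each iteration does two steps of work
	end
	return ops + 3 -- pretend there's a fixed setup cost too
end

print(countOps(10))  -- 23
print(countOps(100)) -- 203
-- The count is 2n + 3, which is O(n): the 2 and the 3 stop mattering as n grows
```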

If something is complexity O(1), it means that it always costs a consistent amount of time no matter what the input is. A great example of something which is O(1) in Lua is assigning a variable:

local a -- Somewhere

a = b -- O(1) in time, setting a costs the same
a = 123
a = "abc"

It’s good to note that while setting the variable might be O(1), that does not mean that creating the value you set it to is also O(1). This is a totally possible example: a = somethingNotO1()
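
For example, using Luau’s `table.create` (which also appears later in this post), the assignment itself is O(1) but building the value on the right-hand side is not:

```lua
local n = 1000
local a

a = 5                  -- O(1): assigning a number
a = table.create(n, 0) -- The assignment is O(1), but creating a table of n zeros is O(n)
```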

A few examples of different code complexity

To keep it simple, I’ll go over a few quick examples.

Example A: Reading a variable (just like writing)

doSomethingWith(someVariable) -- Reading someVariable is O(1)

Example B: For loop

-- O(n) complexity
for i=1, n do
	somethingO1()
end

A simple for loop can be described as having O(n) complexity on its own.

Example C: Two nested for loops

-- O(n^2) complexity
for i=1, n do -- O(n) on its own
	for j=1, n do -- O(n) on its own
		somethingO1() -- O(1)
	end
end

This is a good demonstration of how two simpler things can make something have more complexity.

A simple combination rule for big O

A simple way to figure out complexity when you “combine” two things like above is just to multiply the insides.
For example, doing something O(n) an O(n^2) number of times makes it O(n * n^2), aka O(n^3).

-- Together, this code is O(n * 1), aka O(n)
for i=1, n do -- This is O(n) on its own
	somethingO1() -- O(1)
end
-- Together, this code is O(n * n), aka O(n^2)
for i=1, n do -- This is O(n) on its own
	somethingOn() -- O(n)
end
-- Together, this code is O(n * n^(1/2)) which is O(n^(1 + 1/2)), aka O(n^1.5)
for i=1, n do -- This code is O(n) on its own
	somethingOsqrt_n() -- O(sqrt(n)), or, if you are familiar with taking any root of a number, O(n^(1/2))
end
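
To convince yourself of the multiplication rule, you can literally count how many times the inner work runs:

```lua
-- Count how many times the innermost body executes in two nested O(n) loops
local function countInnerCalls(n)
	local count = 0
	for i = 1, n do       -- runs n times
		for j = 1, n do   -- runs n times per outer iteration
			count = count + 1
		end
	end
	return count
end

print(countInnerCalls(4))  -- 16, i.e. n * n
print(countInnerCalls(10)) -- 100
```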

Some memory complexity examples

local someValue = 123 -- O(1), just a single value
-- O(n) memory complexity
-- (Also, the same multiplication rule above applies, each item is O(1) and the final complexity is O(n * 1))
local tab = {1, 2, 3, 4, 5, ..., n}
-- O(n^2) memory complexity
-- (A good example of the multiplication rule again)
local tab = table.create(n) -- A table that has n elements
for i=1, n do
	tab[i] = table.create(n, 0) -- Another table that has n elements, all filled with zero
end

-- Now we have a 2D table, in a "square" shape (there are O(n^2) elements in the table)
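
You can check the O(n^2) claim by counting the values that actually get stored:

```lua
local n = 8
local tab = table.create(n)
for i = 1, n do
	tab[i] = table.create(n, 0)
end

-- Count every stored value to confirm the O(n^2) growth
local total = 0
for i = 1, n do
	total = total + #tab[i]
end
print(total) -- 64, i.e. n * n
```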

Summary

These two notations are basically just a way to describe how complex things get as you change a parameter, like the number of items in a table or the number of times you loop over something. The biggest form of optimization is usually just reducing complexity, for example going from O(n^2) to O(n), by taking shortcuts and generally changing how an algorithm does something.
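
As a concrete example of that kind of optimization, here are two ways to check a list for duplicates (the function names are my own):

```lua
-- O(n^2): compare every pair of elements
local function hasDuplicateSlow(list)
	for i = 1, #list do
		for j = i + 1, #list do
			if list[i] == list[j] then
				return true
			end
		end
	end
	return false
end

-- O(n): remember what we've already seen in a hash table
local function hasDuplicateFast(list)
	local seen = {}
	for _, value in ipairs(list) do
		if seen[value] then
			return true
		end
		seen[value] = true
	end
	return false
end

print(hasDuplicateSlow({1, 2, 3, 2})) -- true
print(hasDuplicateFast({1, 2, 3, 2})) -- true
print(hasDuplicateFast({1, 2, 3}))    -- false
```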

Lastly, I hope you found this helpful! Thanks for taking the time to read this, and I look forward to critique from the rest of the devforum. :smile:


TL;DR for this: O-notation describes how the time and space complexity of your loops/execution grows in a programming context.

For people who want to identify the O-complexity of more convoluted loops:

  • To calculate it (for those who want more specifics on how the number is found), find all the operations performed toward a goal (i.e. one for loop counts as n; another nested within it counts as n^2).
  • Once you’ve listed all the factors, remove all coefficients (the constant numbers in terms like 2n). Also be aware that a standalone constant just counts as 1 (i.e. O(1)).
  • Find the highest-order term. Refer to the image to determine the highest order.
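
The coefficient-dropping step can be seen numerically (the operation count below is made up for illustration):

```lua
-- Suppose some code performs 3n^2 + 5n + 10 operations (a made-up count)
local function cost(n)
	return 3 * n^2 + 5 * n + 10
end

-- Dividing by the highest-order term shows it dominates as n grows:
print(cost(10) / 10^2)     -- 3.6
print(cost(1000) / 1000^2) -- roughly 3.005
-- So the whole thing is O(n^2): coefficients and lower-order terms fall away
```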

For additional references, you should also look up example resources where O-notation is used! For instance, sorting algorithms!
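
As a quick taste of that, Lua’s built-in table.sort is a comparison sort, and comparison sorts generally run in O(n log n) time on average (the exact behavior depends on the implementation):

```lua
local list = {5, 3, 1, 4, 2}

-- Sorting n items with a comparison sort is typically O(n log n) on average
table.sort(list)

print(table.concat(list, ", ")) -- 1, 2, 3, 4, 5
```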
