Hello! My name is Milkeles, and in this tutorial I’ll show you a way to find the maximum subarray in an array of elements. Along the way, I’ll try to help you build a basic understanding of time complexity, a concept that is crucial in programming and game development. I understand this topic has likely been covered countless times before, so I apologize to whoever may have posted about it first; the purpose of this tutorial is to provide an alternative explanation of the concept.
KADANE’S ALGORITHM AND BASIC TIME COMPLEXITY

Who’s this tutorial for?
Although this tutorial may be helpful to almost everyone, you’re expected to at least know the basics of scripting in Lua, especially loops, before you dive in. The topic of this tutorial is not something crucial; in fact, you’ll likely never use this exact algorithm in game development. What you will need to develop games, however, is decent problem-solving skill, and the purpose of this tutorial is to help you improve yours a little. We will also cover the basics of a very important concept in programming: time complexity. For those who decide to stay regardless of what I just said, have fun.
What’s a subarray?
A subarray is an array within another array. A subarray is not necessarily smaller than the original array. For example, a subarray of [1, 2, 3, 4, 5] could be [2, 3, 4], but it could also be the whole array [1, 2, 3, 4, 5], or even a single element, such as [3]. However, it cannot be [2, 4]: the elements need to be contiguous (neighboring).
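To make “contiguous” concrete, here’s a small sketch (the array and variable names are my own example, not from the tutorial) that lists every subarray of {1, 2, 3} by picking a start index i and an end index j:

```lua
local arr = {1, 2, 3}
-- Every contiguous subarray is defined by a start index i and an end index j.
for i = 1, #arr do
	for j = i, #arr do
		local parts = {}
		for k = i, j do
			parts[#parts + 1] = arr[k]
		end
		print("[" .. table.concat(parts, ", ") .. "]")
	end
end
-- Prints: [1], [1, 2], [1, 2, 3], [2], [2, 3], [3]
```

Notice that an array of n elements has n × (n + 1) / 2 subarrays — 6 for n = 3 — which already hints at why checking all of them gets expensive.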
What’s the problem?
Let’s take a similar example: if we want to find the maximum subarray of [1, -3, 2, 1, -1], it would be [2, 1], because 2 + 1 = 3 and there is no way to form any other subarray with a larger sum. It is easy to tell by eye, but how can we do this with code?
Brute-force solution.
Naturally, most people would simply try to compute the sums of all possible subarrays with a nested loop:
local arr = {1, -3, 2, 1, -1}
local maxSum = -math.huge -- The max subarray may have a negative sum, so start as low as possible.
for i = 1, #arr do
	local currentSum = 0
	for j = i, #arr do
		-- currentSum is now the sum of the subarray arr[i..j].
		currentSum = currentSum + arr[j]
		if currentSum > maxSum then
			maxSum = currentSum
		end
	end
end
print(maxSum) --> 3
What this does is check the sum of every possible subarray: [1], [1, -3], [1, -3, 2], [1, -3, 2, 1], [1, -3, 2, 1, -1], [-3], [-3, 2], and so on. This solution WILL work, and for small arrays like this one it will be very fast: we only need around n² = 25 basic operations, and the computer can do that instantly. However, if our array had 10,000 elements, we would have to perform on the order of 100,000,000 operations, and while a computer can still churn through that, if we want our code to be as fast as possible, this is not the best way to solve the problem.
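The exact operation count depends on how you write the loops; what matters is that it grows like n². Here’s a small sketch (the counting function is mine, not part of the tutorial) that counts how many times the inner loop runs when every start/end pair is visited once:

```lua
-- Count the inner-loop iterations of the brute-force search for an array
-- of size n: every pair (i, j) with i <= j is visited exactly once,
-- which is n * (n + 1) / 2 — quadratic growth.
local function bruteForceSteps(n)
	local steps = 0
	for i = 1, n do
		for j = i, n do
			steps = steps + 1
		end
	end
	return steps
end

print(bruteForceSteps(5))    --> 15
print(bruteForceSteps(100))  --> 5050
print(bruteForceSteps(1000)) --> 500500
```

Going from 100 elements to 1,000 multiplies the input by 10 but the work by roughly 100 — that is the quadratic growth we’ll name precisely below.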
Note: Below I’ll cover only the basics of time complexity; if you’re already familiar with the concept and mainly interested in the algorithm, you may skip that part.
 What is time complexity?
Estimating how long our code takes to run, and improving it, is extremely important in games: performance is often crucial, yet we often have to use many loops, especially nested ones, and other heavy operations. Estimating how fast a piece of code is can be challenging and difficult to comprehend, so we will not go into detail in this tutorial; we’ll only scratch the surface, so that you have a basic understanding and can hopefully see why the brute-force solution above is not optimal.
So, what is time complexity after all? When we measure the time complexity of code, we don’t actually measure it in units of time, such as milliseconds or seconds, because our computers perform many operations behind the scenes; those operations affect our measurements significantly, and every time we measure how long our code takes to run, the result will be different. Of course, we could measure it several times and calculate the average, but that’s time-consuming and often inaccurate as well. Instead of time units, we use so-called “steps”, where each step is a single operation our code has to perform. For example, the following code takes a total of 3 steps:
a = 1
b = 2
print(a + b)
The steps are: initialize a with 1, initialize b with 2, print the sum of a and b. This code will always take that many steps to compute, therefore its time complexity is constant. There are three different notations for time complexity, often informally described as average (Θ), best (Ω), and worst (O). We usually care about the worst possible case, but knowing the other two is helpful in many cases too. Our code above always takes 3 operations; it can’t get any better or worse, so we say its time complexity is O(3) (“Big O of 3”) = Θ(3) = Ω(3). Since constant factors don’t matter for how the cost grows, constant time is usually just written O(1).
That’s constant time complexity. If our code has constant time complexity, that’s great, and there’s nothing to worry about, but we cannot always write code that way…
 Linear Time Complexity
Imagine you have a list of things to do and each takes you a certain amount of time. Linear time complexity means that the time it takes to process the list grows proportionally with the size of the list: if you have twice as many items, it will take roughly twice as much time. In a loop, if you go through each item in a list exactly once, the loop has linear time complexity. For example, if we have an array of n numbers and we want to print each number, that will take us n steps, therefore the time complexity of our loop is O(n); we don’t care (or know) about the exact count of elements, we just call it n. Linear time complexity is also good: not as fast as constant time, but it will practically never choke our computer as long as the data itself is something the computer can handle. That is not the case with nested loops, however:
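For instance, a single pass that sums an array touches each element exactly once, so the work grows in lockstep with the array’s length (this snippet is my own illustration, not from the tutorial):

```lua
local numbers = {4, 8, 15, 16, 23, 42}
local total = 0
-- One addition per element: n elements => n steps => O(n).
for i = 1, #numbers do
	total = total + numbers[i]
end
print(total) --> 108
```

Double the array and you double the additions — that proportionality is exactly what O(n) captures.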

Quadratic (polynomial) time complexity
If we have a loop that runs n times, and inside it another one that runs m times, the second loop runs m times for every iteration of the first, therefore the complexity of our code is O(n×m). If both loops run n times, we get O(n^2), because n×n = n^2. (Note that this is called quadratic, or more generally polynomial, time; it is sometimes mislabeled “exponential”, but true exponential growth, like O(2^n), is far worse.) O(n^2) is still fine for small or medium inputs, but you have to understand that it’s N TIMES WORSE than linear time, making it very slow for large inputs. And if we put a third loop inside the second, we get O(n^3), which is catastrophic. For comparison, in the time that O(n) code can process 1 million pieces of data, O(n^2) code handles roughly 10K, and O(n^3) code only about 100. It gets worse and worse from there, and that’s why we should avoid nested loops whenever we can. (Although sometimes we can’t: some problems, such as the Towers of Hanoi, genuinely require exponential time.)
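As a concrete illustration (my own example, not from the tutorial): comparing every element with every other element forces a nested loop, and the number of comparisons grows quadratically:

```lua
-- Count the ordered pairs (i, j) with i ~= j: for n elements this is
-- n * (n - 1) comparisons, i.e. O(n^2) work.
local items = {"a", "b", "c", "d"}
local comparisons = 0
for i = 1, #items do
	for j = 1, #items do
		if i ~= j then
			comparisons = comparisons + 1
		end
	end
end
print(comparisons) --> 12
```

With 4 items that’s only 12 comparisons, but with 10,000 items it’s nearly 100 million — the same blow-up we saw with the brute-force subarray search.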
Logarithmic time complexity
This kind of time complexity is more difficult to comprehend; we won’t actually need it for our original problem, but it is well worth knowing. Imagine you want to find a particular word in a dictionary. If you skim through the pages one by one, it will take you a long time (O(n) operations, to be exact). However, words in a dictionary are ordered alphabetically, so you can open it in the middle, tell which half your word is in, then open the middle of that half, then again, and again, until you eventually find the page with the word you’re looking for. Essentially, you’re eliminating half of the problem each time, and that’s what logarithmic time is. O(log n) IS BETTER THAN O(n) AND WORSE THAN O(1).
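The dictionary trick above is exactly binary search. Here’s a minimal sketch over a sorted Lua array (the function name and example data are my own assumptions):

```lua
-- Binary search: each comparison halves the remaining range, so a sorted
-- array of n elements needs only about log2(n) comparisons => O(log n).
local function binarySearch(sorted, target)
	local low, high = 1, #sorted
	while low <= high do
		local mid = math.floor((low + high) / 2)
		if sorted[mid] == target then
			return mid
		elseif sorted[mid] < target then
			low = mid + 1  -- target must be in the right half
		else
			high = mid - 1 -- target must be in the left half
		end
	end
	return nil -- not found
end

local primes = {2, 3, 5, 7, 11, 13, 17, 19}
print(binarySearch(primes, 11)) --> 5
print(binarySearch(primes, 4))  --> nil
```

Note that this only works because the array is sorted — that ordering is what lets us throw away half of the data at every step.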
Here’s a graph including those and more kinds of time complexity that I did not cover but you can check if you’re more curious:
(I found it online, sorry for the watermark)
Keep in mind I explained only the basics of time complexity, and I focused on worst-case complexity because it is the most important; the goal is to give you only a basic understanding of the concept. If you’d like to learn more, there’s also a similar way to measure the amount of memory your code uses, called space complexity, and there are cases when the best, worst, and average time complexity are not the same for a single piece of code. You may also be interested in different algorithms and data structures: how fast they are, how they’re implemented, what the advantages and disadvantages of using each one are, and so on. I unfortunately will not cover that in this tutorial, sorry. (What are some data structures you know in Lua?)
 Kadane’s Algorithm
Let’s return to our original problem. In case you forgot what it was, because my “short detour” turned out to be longer than I anticipated, we were trying to find the largest subarray sum in an array. We did that by checking the sum of all possible subarrays and finding the largest one by comparing them, but as we mentioned, that is not an optimal way to do it. If you understood what I wrote about time complexity, you can now easily tell the complexity of our brute-force solution: O(n^2). Good job if you guessed that right before reading the answer! You now also know why that is not an optimal solution, so let’s try to write a better one together!
Let’s say we’re taking a similar approach, we’ll be looking at each index in the array and want to find the max subarray ending at that index:
[1, -3, 2, 1, -1]
If we look at the third index, and we know the max subarray ending at the previous index, then according to Kadane’s algorithm, the max subarray ending at the current index is either the element at the current index alone, or the element at the current index COMBINED with the max subarray ending at the previous index. That allows us to ignore all other possible subarrays ending at that index and compare only those two. Let’s try to write that down:
Starting from the first element, our max subarray would be [1] => maxSum = 1
[-3] and [1, -3] < 1 => maxSum = 1
[2] > 1 => maxSum = 2
[1] < 2 but [2, 1] > 2 => maxSum = 3
[-1] and [2, 1, -1] < 3 => maxSum = 3
This only took us five steps, while the brute-force method took roughly n² = 25. You can probably already tell how much faster this algorithm is and predict its time complexity, right? I’ll leave the code and the answer below, but before taking a peek, try to write them yourself! One cannot learn to code by reading alone; it requires hands-on experience.
Thank you for your time, I hope you enjoyed this tutorial.
ANSWERS
 Write code for Kadane’s algorithm:
local array = { 1, -3, 2, 1, -1 } -- An example; you can use anything else.
-- Take the first element as both the overall max and the current max subarray.
local max = array[1]
local currentMax = array[1]
-- Start from the second element because we already handled the first.
for i = 2, #array do
	-- Either start a new subarray at the current element,
	-- or extend the previous max subarray with it.
	currentMax = math.max(array[i], currentMax + array[i])
	-- Replace the overall max if the current one is larger.
	if currentMax > max then
		max = currentMax
	end
end
print(max) --> 3
 What’s the time complexity of Kadane’s Algorithm?
Answer: O(n)