Big O notation (Constant and Linear)

Kevin Huang
7 min read · Jan 3, 2021
Has nothing to do with Big O notation, but it was the first thing that popped into my head when I saw the term.

A few weeks ago I joined a webinar about algorithms, a topic I don't have much experience or knowledge with. Going into the webinar, I had some idea of what it was going to be about. I'd seen examples of search/sort algorithms before, and I knew I was not going to come out of it understanding everything that was taught. You might ask: why attend a webinar when you have no idea what's going on in it? Well, even though I knew going in blind would be confusing, I went anyway. During the entire webinar I was behind and didn't know what kind of questions I should be asking, but one thing was clear: I knew what I needed to study up on, and the first thing I researched was one of the first terms the instructor brought up in his slides: Big O.

At first I thought, oh cool, I've seen that anime before, but I quickly realized this is not the Big O I know of. However, I knew this was something I needed to familiarize myself with. What is Big O notation, and what the hell are time and space complexity? Before we get into what Big O notation is, we need to know what algorithms are.

What is an algorithm?

Algorithm: a sequence of steps or instructions to solve a clearly defined problem.

The code we write in class is also an algorithm, since in our code we are defining every step of our program's behavior. As programmers we are constantly solving problems, whether it's a bug in our code or the efficiency of the program we are writing. Throughout my couple of months at Flatiron, I'm constantly asking myself: "Is this the best way I can write this? Can I make this more efficient? Do I absolutely need to create this extra variable in this function?" 90% of the time I'm looking for ways to make my program perform its best and wondering what type of algorithm I can use to make it run faster. This is where Big O notation comes into play.

Big O notation

OK, so what exactly is Big O notation? Big O notation is used to describe the performance, or time/space complexity, of an algorithm. It typically describes the worst-case scenario and can be used to check whether the algorithm we are using is the best-performing one as the input grows. It is a convenient way to describe how fast a function's running time (or memory use) grows.

O(N) (Linear Time)

O(N) describes an algorithm where the performance grows linearly, in direct proportion to the size of the input data. If you can imagine a linear graph (I'll show an example below), this is basically what O(N) describes. As the input gets bigger, the running time grows with it.
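For a quick illustration (a common example, not one from the webinar): a linear search that checks every element of an array is O(N), because in the worst case the loop runs once per element. The function name here is just illustrative.

// Linear search: in the worst case (target not found) the loop runs array.length times, so O(N).
function linearSearch(array, target) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) {
      return i; // found it
    }
  }
  return -1; // checked every element without finding the target
}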

O(1) (Constant Time)

O(1) describes an algorithm that will always execute in the same amount of time regardless of the size of the input data.
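A common example (again, just an illustration with a made-up function name): looking up an element at a known index does the same amount of work no matter how big the array is.

// Index lookup: one step whether the array has 10 elements or 10 million, so O(1).
function firstElement(array) {
  return array[0];
}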

Translating Big O to code

What better way to show how Big O helps us determine which algorithm is faster and more efficient than with some code? Consider the function below:
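Here is the function (the same one that gets broken down line by line later in the post):

// Adds the numbers from 1 up to n, e.g. addNumber(4) => 1 + 2 + 3 + 4 = 10.
function addNumber(n) {
  let results = 0;
  for (let i = 1; i <= n; i++) {
    results = results + i;
  }
  return results;
}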

In the function above, we take a number n as a parameter and add up every number from 1 to n, since we declared i as 1 in our loop. So if n = 4, the function runs 1 + 2 + 3 + 4 and the output should be 10.
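A quick way to see the timing for yourself is to wrap the calls in console.time; the exact milliseconds will vary by machine and JavaScript engine, but the pattern holds.

// Timing addNumber at a few input sizes: the elapsed time grows roughly in proportion to n.
[1000, 1000000, 100000000].forEach((n) => {
  console.time(`addNumber(${n})`);
  addNumber(n);
  console.timeEnd(`addNumber(${n})`);
});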

If we take a look at the timing results, the time it takes to run the function increases as the input value gets higher. If we were to put this on a graph, it would look something like this:

Linear Time

Now let's take a look at how we can achieve the same output as the function above by using a formula attributed to Carl Gauss: (n / 2) * (first number + last number).
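Here is that version of the function (it is the same one examined again in the breakdown further down):

// Gauss's formula: the sum 1 + 2 + ... + n equals (n / 2) * (1 + n).
function addNumberTwo(n) {
  return (n / 2) * (n + 1);
}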

The code above outputs the same results as addNumber does.

Let's take a look at the timing results for addNumberTwo:
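Again, a minimal sketch of how you could measure this yourself (actual numbers depend on your machine):

// Timing addNumberTwo at the same input sizes: the elapsed time stays roughly flat
// no matter how large n gets.
[1000, 1000000, 100000000].forEach((n) => {
  console.time(`addNumberTwo(${n})`);
  addNumberTwo(n);
  console.timeEnd(`addNumberTwo(${n})`);
});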

As you can see, the timings barely change as the input grows. If we were to put the time results on a graph, it would look like this:

Constant Time

If we take a look at the graph, we can see that even as the input increases, the running time stays constant. No matter what value you pass in, the time it takes will remain the same.

The point of using Big O notation is not the actual time the code takes, but to get an idea of how the algorithm's work grows with its input. Judging by the two Big O notations covered above (linear and constant), constant time complexity will always be the ideal outcome; however, not all problems can be solved in constant time. For example, if your input is an array, it's very rare that you will be able to achieve constant time unless you are given the index of the element you need.

How do we determine which Big O notation we are using?

To see what kind of Big O notation we are dealing with, we'll need to break down the number of times each line of code runs. I will try my best to explain this. Let's take the first function and go through it line by line to see how many times each line runs.

function addNumber(n) {
  let results = 0;               // runs 1x
  for (let i = 1; i <= n; i++) { // runs 1x
    results = results + i;       // runs n times (given the input)
  }
  return results;                // runs 1x
}

We can write this as T (time) = 1 + 1 + 1 + 1n (where n is the value of the input), and we can simplify it down to T = 1n + 3. Remember, we're not looking at raw performance numbers; our goal is to figure out what kind of Big O notation this is. It doesn't matter if it's T = 1n + 1,000, because in the end we still have linear time complexity. With this example, we can rewrite it as T = a*n + b. From here, we need to figure out the fastest-growing term, and in this case it is a*n, since as n grows, a*n grows with it. And since the constants are not important to us, we can drop them, leaving us with T = n, which equals O(N).
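One way to see the n part of that count concretely is to count how many times the loop body actually runs. This instrumented copy (a hypothetical helper, not from the original post) shows the count growing in step with n:

// A hypothetical instrumented copy of addNumber that counts how many times the
// loop body runs, to show the count grows linearly with n.
function addNumberCounted(n) {
  let results = 0;
  let loopRuns = 0;
  for (let i = 1; i <= n; i++) {
    results = results + i;
    loopRuns++;
  }
  console.log(`n = ${n}: loop body ran ${loopRuns} times`);
  return results;
}

addNumberCounted(10);   // loop body ran 10 times
addNumberCounted(1000); // loop body ran 1000 times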

Let's take a look at the second function:

function addNumberTwo(n) {
  return (n / 2) * (n + 1); // runs 1x
}

Using the same approach as above, we can write this as T = 1, and as you can guess, there is no simplifying required. So in this case, T = 1, which is O(1). But what if we had another line of code in this function? It would be written as T = 1 + 1 = 2. Our goal is to find the fastest-growing term, and since there is no term in T = 2 that grows with n, it would still be considered O(1).
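For instance, a two-line variation (a hypothetical rewrite, just for illustration) still runs a fixed number of steps, so it stays O(1):

// A hypothetical two-line version: T = 1 + 1 = 2, which is still constant time, O(1).
function addNumberThree(n) {
  const lastTerm = n + 1;    // runs 1x
  return (n / 2) * lastTerm; // runs 1x
}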

Note: using array.map or any other built-in method does not mean it's constant time, because .map has its own logic inside it. You will need to find out how the method actually runs.
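A quick way to convince yourself (a generic sketch, not from the post): the callback passed to .map runs once per element, so the work grows with the array's length.

// The callback passed to .map runs once per element, so .map does O(N) work.
let calls = 0;
const doubled = [10, 20, 30, 40].map((value) => {
  calls++;
  return value * 2;
});
console.log(doubled); // [20, 40, 60, 80]
console.log(calls);   // 4, one call per element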

So what does this all mean?

As we progress further into the field of programming, we will be handling bigger algorithms and bigger data. As programmers, our job is not only to get our programs to work, but to make them as efficient and as dynamic as possible.
