Joy Marcotte

What is Big-O Notation? Time Complexity of Algorithms

Have you seen scary-looking notations like O(n) or O(n log n) when learning to program a sorting algorithm? What do they mean, and why are they important? In this article, we’ll answer these questions by looking at time complexities in algorithms.


What is time complexity?


Depending on how an algorithm works, the time it takes to run grows at different rates as the input gets larger. Time complexity is the measure of this growth, expressed using notations such as Big-O, Big-Omega, and Big-Theta (we’ll focus on the first one for now).


What is Big-O notation?


The Big-O notation measures the worst-case complexity of an algorithm with respect to its input size. It is an important tool in analyzing and comparing different ways to solve a problem. 


But what does the function in parentheses mean? The letter “n” here represents the size of the input data. Big-O describes how the algorithm’s running time grows as a function of n, as n tends towards infinity.


For example, say we want to run a linear search to find the position of the number 9 on the following list:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]


As the list has 10 elements, n = 10 in this case. A linear search iterates through the list element by element until it reaches the number 9. If each iteration takes 0.1 milliseconds, this algorithm will take 1 millisecond.


Now let’s look at this list:

[0, 1, 2, 3… 997, 998, 999]


In this case, n = 1000, and finding the number 999 takes 100 milliseconds. As n grows, this algorithm’s running time is directly proportional to n, giving it an O(n) time complexity.
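The linear search described above can be sketched in a few lines of Python. The function name and the return value of -1 for a missing element are conventions chosen for this illustration, not part of the original article:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent.

    In the worst case (target is last or missing), every one of the
    n elements is checked, which is why this is O(n) time.
    """
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search(list(range(10)), 9))  # prints 9
```

With the second list of 1000 elements, the same function performs 1000 comparisons to find 999, which is the proportional growth the article describes.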


Different algorithms will contain nested loops or divide-and-conquer mechanisms, affecting their time complexities.


Here is a graph of different time complexities and how they increase with respect to n.



Now, we’ll discuss some typical time complexities and algorithms with that complexity.


O(n) - Linear Time


These algorithms have a time complexity that is directly proportional to n. They usually contain unnested loops, making the loop run longer as it has to iterate through more items. The linear search we covered above has an O(n) time complexity.


O(n^2) - Quadratic Time


Algorithms with an O(n^2) time complexity contain two nested loops (a loop within a loop), such as a bubble sort. They are comparatively slow, as their running time grows quadratically rather than linearly as n increases.
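As a sketch of how the nested loops produce O(n^2) behaviour, here is a minimal bubble sort. The early-exit details of optimized variants are omitted to keep the two-loop structure visible:

```python
def bubble_sort(items):
    """Sort a list in place using bubble sort and return it.

    The outer loop runs n times and the inner loop up to n - 1 times,
    so roughly n * n comparisons happen in the worst case: O(n^2).
    """
    n = len(items)
    for i in range(n):
        # After each pass, the largest remaining value has "bubbled"
        # to the end, so the inner loop can stop i elements earlier.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # prints [1, 2, 4, 5, 8]
```

Doubling n roughly quadruples the number of comparisons, which is why quadratic algorithms become impractical on large inputs.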


O(log n) - Logarithmic Time


This time complexity is relatively fast, as the graph above shows: such algorithms often use a divide-and-conquer approach that drastically reduces the amount of data to be processed at each step. Binary search is an example of this, making it much more efficient than a linear search on sorted data.
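A binary search over a sorted list can be sketched as follows; the `lo`/`hi` variable names are just conventions for this illustration:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Each iteration halves the remaining search range, so at most
    about log2(n) comparisons are needed: O(log n) time.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1  # target must lie in the upper half
        else:
            hi = mid - 1  # target must lie in the lower half
    return -1

print(binary_search(list(range(1000)), 999))  # prints 999
```

On the 1000-element list from earlier, this finds 999 in about 10 comparisons instead of 1000, illustrating the gap between O(log n) and O(n).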


If you’re interested: further reading


Now that you understand Big-O notation, here are some sources to delve deeper into rarer time complexities like O(n!), space complexity, and even the Big-Omega and Big-Theta notations!


More Time Complexities:


Big-Omega notation:



Space Complexity:



Works Cited


GeeksforGeeks (2023). Worst average and best case analysis of algorithms. [online] GeeksforGeeks. Available at: https://www.geeksforgeeks.org/worst-average-and-best-case-analysis-of-algorithms/


Huang, S. (2022). What is Big O Notation Explained: Space and Time Complexity. [online] freeCodeCamp. Available at: https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/


Kaiser, C. (2022). Big O Time Complexity: What it is and Why it Matters For Your Code. [online] Medium. Available at: https://levelup.gitconnected.com/big-o-time-complexity-what-it-is-and-why-it-matters-for-your-code-6c08dd97ad59?gi=5b5d23a92794


Mejia, A. (2019). 8 time complexities that every programmer should know. [online] AdrianMejia. Available at: https://adrianmejia.com/most-popular-algorithms-time-complexity-every-programmer-should-know-free-online-tutorial-course/#O-n-Linear-tim
