The Basics of Big O Notation

Big O notation is mainly used to classify algorithms by how their running time or space requirements grow relative to the size of the input.  It characterizes functions according to their growth rates as the amount of data grows.

Efficiency is definitely a top priority in the world of computer science.  As programmers, we should worry about whether or not our programs can handle large amounts of data within a reasonable amount of time.  That means paying attention to resources: CPU (time), memory, disk, and network usage.

Performance refers to how much time and memory the program actually uses when it runs.  The code itself is mostly responsible for this, but we must also factor in the machine it runs on, the compiler, and some other miscellaneous variables. An algorithm’s complexity, by contrast, describes how its resource use grows as we input more data.  Complexity can affect the program’s performance, but the performance does not affect the program’s complexity.

Common Orders

O(1)

With an order of 1, it is safe to assume that the running time of the function does not change, even if the amount of data changes.

This is also referred to as a constant function.  If the code runs solely off of simple or basic statements, it will be O(1).  Examples include accessing an array element by its index, or looking up a key in a hash table (on average).
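
To make that concrete, here’s a minimal sketch in Python (the function names and sample data are just my own, for illustration):

```python
def get_first_item(items):
    # Indexing into a list is O(1): one step, no matter how long the list is.
    return items[0]

def lookup_price(prices, name):
    # An average-case hash table (dict) lookup is also O(1).
    return prices[name]

print(get_first_item([10, 20, 30]))                           # 10
print(lookup_price({"donut": 1.25, "coffee": 2.0}, "donut"))  # 1.25
```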

In this example, the x axis represents the amount of input data, and the y axis represents the time it takes to run the function.

[Graph: O(1). Courtesy of Nathan Long.]

O(N)

If the running time changes in direct relation to the size of the data, then the program is considered to be O(N).

This is a linear function.  It shows up in programs that loop over the data with a sequence of simple statements: the total time for the statements inside the loop is O(1), but you repeat them N times.  O(1) * N = O(N).  (An if-then-else statement on its own stays constant time, since only one of its branches executes.)

Some examples include a best-case run of bubble sort (one pass over an already-sorted array, with an early-exit check), or a worst-case run of sequential (linear) search.
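
Here’s a quick Python sketch of that worst-case sequential search (a toy example of my own):

```python
def sequential_search(items, target):
    # In the worst case the target is missing, so we touch all N items:
    # O(1) work per item, repeated N times = O(N).
    for item in items:
        if item == target:
            return True
    return False

print(sequential_search([4, 8, 15, 16, 23, 42], 23))  # True
print(sequential_search([4, 8, 15, 16, 23, 42], 99))  # False, after checking all N items
```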

[Graph: O(N). Courtesy of Nathan Long.]

O(Nᵃ)

You can usually figure that an algorithm’s running time will grow in relation to the square of the size of the data whenever there are nested iterations (for loops inside for loops).

This is referred to as a quadratic function.  Some examples include a best-case run of selection sort, average-case runs of bubble and insertion sorts, and a worst-case run of the quicksort algorithm.

Here, a is an arbitrary exponent. If you have one for loop nested inside another for loop in your code, the order will be O(N²).  One more for loop nested inside of that would make it O(N³), and so forth.  As long as the exponent is a constant, we consider it a polynomial function.
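
Here’s what those nested loops look like in practice; this duplicate-checker is a made-up example, not from any particular library:

```python
def has_duplicate(items):
    # A loop nested inside a loop over the same N items:
    # roughly N * N comparisons = O(N²).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate([3, 1, 4, 1, 5]))  # True
print(has_duplicate([3, 1, 4, 5, 9]))  # False
```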

This image showcases an O(N²) function:

[Graph: O(N²). Courtesy of Nathan Long.]

O(log N)

Now, we get into logarithmic functions.  With O(log N), the running time rises quickly on small amounts of data and then flattens out as the amount of data grows, because each step of the algorithm cuts the remaining work down by a constant fraction.  As someone stated on Stack Overflow, “time goes up linearly while n goes up exponentially.”  Well said.

An example of O(log N) would be searching a balanced binary search tree, or a worst-case run of binary search on a sorted array.
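
Here’s a minimal binary search in Python to show why: each pass throws away half of what’s left (again, a toy sketch of my own):

```python
def binary_search(sorted_items, target):
    # Each pass cuts the remaining search range in half, so doubling
    # the amount of data only adds one more step: O(log N).
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid          # found it
        elif sorted_items[mid] < target:
            low = mid + 1       # target is in the upper half
        else:
            high = mid - 1      # target is in the lower half
    return -1                   # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # 5
print(binary_search([2, 5, 8, 12, 16, 23, 38], 7))   # -1
```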

Think of the law of diminishing returns: I can eat an entire bag of powdered donuts in one sitting.  At first, I eat the donuts quickly, and enjoy them.  After a while, my chewing slows, I’m not enjoying them as much, the taste grows flat, etc. It looks like this:

[Graph: O(log N). Courtesy of gamedev.net.]


Some More Resources That Helped Me:

[Graph: a comparison of the common orders’ growth rates.]

xoxo,
maryn

