The Basics of Big O Notation

Big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. It characterizes functions by their growth rates relative to the size of the data. Efficiency is a central concern in computer science, and as programmers we should care about whether our algorithms will remain efficient as their inputs grow large.
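
As a minimal sketch of what these growth rates look like in practice, the hypothetical Python functions below illustrate constant, linear, and quadratic time (the function names and examples are illustrative, not taken from the original post):

```python
def first_item(items):
    # O(1): constant time -- the cost does not depend on len(items).
    return items[0]

def contains(items, target):
    # O(n): linear time -- in the worst case every element is inspected once.
    for item in items:
        if item == target:
            return True
    return False

def has_duplicate(items):
    # O(n^2): quadratic time -- the nested loops compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Doubling the input size leaves first_item's cost unchanged, roughly doubles the work done by contains, and roughly quadruples the work done by has_duplicate, which is exactly the kind of comparison Big O notation is meant to capture.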