Do you know how efficient your code is? Do you really know how your code behaves as the number of data elements increases? If you have no clue, keep reading.

## The Big-O Notation

In the infancy of computer science, scientists became interested in measuring the efficiency of algorithms. They came up with a notation called Big-O notation, which is used in complexity theory, mathematics, and of course computer science. Basically, this notation measures the efficiency of an algorithm: how much CPU time, memory, and other resources it consumes.

Computer scientists wanted to know how an operation or algorithm is affected as the data space grows. For example, imagine you have two arrays: one contains 10 elements, the other 10,000. They asked how operations such as accessing, removing, or searching for an element affect the time the operation takes to execute.

## O(1) Notation

They found that certain operations always take the same amount of time, no matter the number of elements. For example, accessing element number 5 takes the same amount of time in both arrays. Scientists categorized these operations as O(1) ("order of 1").
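As a minimal sketch in Python (the function name here is illustrative), indexing into a list is a constant-time operation regardless of the list's length:

```python
def get_fifth(items):
    """Access element number 5 by index.

    Index lookup takes the same amount of time whether the
    list holds 10 elements or 10,000: O(1).
    """
    return items[4]

small = list(range(10))
large = list(range(10_000))

# Both lookups complete in constant time.
print(get_fifth(small))  # 4
print(get_fifth(large))  # 4
```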

## O(N) Notation

Later, computer scientists identified algorithms they named O(N) ("order of N"). "For" loops fall into this category. Basically, O(N) states that as the number of elements increases, the time to complete the operation increases in direct proportion. For example, a "for" loop that iterates over 10 elements finishes sooner than a "for" loop that iterates over 10,000 elements.
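A linear search is the classic O(N) "for" loop; this sketch (names are illustrative) scans every element until it finds the target, so doubling the input roughly doubles the work:

```python
def linear_search(items, target):
    """Scan the list front to back until the target is found: O(N).

    In the worst case every element must be examined, so the time
    grows in direct proportion to the number of elements.
    """
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # target not present

print(linear_search([3, 1, 4, 1, 5], 4))  # index 2
```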

## O(N²) Notation

The next class of algorithms scientists described is O(N²). This notation states that the time to complete the algorithm grows with the square of the number of elements: doubling the input roughly quadruples the running time. A good example of these algorithms is a pair of nested "for" loops. For every single iteration of the outer "for" loop, a complete pass over every element in the inner "for" loop must be executed.
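A duplicate check written with nested loops is a simple O(N²) sketch (the function name is illustrative): the outer loop runs N times and the inner loop compares against the remaining elements each time:

```python
def has_duplicate(items):
    """Compare every element against every other element: O(N^2).

    For each outer iteration, the inner loop walks the rest of
    the list, so doubling the input roughly quadruples the
    number of comparisons.
    """
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate([1, 2, 3, 2]))  # True
print(has_duplicate([1, 2, 3]))     # False
```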

## O(log N) Notation

Finally, they found a category they called O(log N) ("order of log of N"). In simple words, these algorithms cut the remaining work in half at every step, so the running time grows only logarithmically with the number of elements. The most notable example is binary search, as performed on a sorted array or a balanced Binary Search Tree.
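Binary search over a sorted list shows the halving at work; this sketch (names are illustrative) discards half of the remaining range on every comparison, so 10,000 elements need only about 14 steps:

```python
def binary_search(sorted_items, target):
    """Halve the search space on every step: O(log N).

    Requires the input to be sorted. Each comparison discards
    half of the remaining candidates.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target is in the upper half
        else:
            high = mid - 1  # target is in the lower half
    return -1  # target not present

print(binary_search(list(range(0, 100, 2)), 42))  # index 21
```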

Searches are better done with Binary Search Trees than with plain arrays. Searching a list of 10,000 names takes a long time if you perform a linear search over an array, which is O(N), but since searching a balanced Binary Search Tree is an O(log N) operation, it takes only a fraction of that time to find a name.

So what types of algorithms are you using in your applications? Are you doing linear searches over arrays where you should be doing binary searches with a Binary Search Tree? Are you brute-force sorting 10,000 elements when you should be using a heapsort instead?