## The Big O Notation Every Computer Science Professional Must Understand

**Big O** notation is a pivotal concept in computer science, particularly in the realm of algorithm analysis. It offers a high-level understanding of how an algorithm’s performance scales as the size of inputs increases.

In this blog post, I aim to explain the purpose of Big O notation, show how to understand and use it, and illustrate the most common Big O classes with C# code examples.

I have also included a good external article that explains the pure mathematics behind Big O, so please read that too to build a solid foundation on this topic.

### Purpose of Big O Notation

The primary purpose of Big O notation is to provide a language for comparing the computational complexity of algorithms in terms of time (execution time) and space (memory usage) as a function of input size (n). It helps in:

- **Predicting Performance:** Understanding how changes in input size affect an algorithm’s running time or memory requirements.
- **Comparing Algorithms:** Offering a means to compare the efficiency of different algorithms for the same task.
- **Scalability Analysis:** Identifying potential bottlenecks and scalability issues in software design.

### Understanding Big O Notation

Big O notation describes the upper bound of an algorithm’s running time or space requirement in terms of the size of the input data. It abstracts away constants and lower-order terms to focus on the dominant term that has the most significant effect on the growth rate.

**Key Concepts**

- **O(1) – Constant Time:** Indicates that the algorithm’s performance is constant and does not change with the size of the input data.
- **O(log n) – Logarithmic Time:** Denotes that the algorithm’s execution time grows logarithmically as the input size increases: each increase in input size yields a smaller relative increase in execution time, scaling far better than linear or quadratic complexities.
- **O(n) – Linear Time:** Denotes that the algorithm’s execution time increases linearly with the input size.
- **O(n log n) – “Linearithmic” Time:** Reflects algorithms whose running time grows as the product of a linear and a logarithmic factor of the input size.
- **O(n^2) – Quadratic Time:** Suggests that the algorithm’s execution time is proportional to the square of the input size.
- **O(2^n) – Exponential Time:** Indicates algorithms whose execution time doubles with each addition to the input data set.
- **O(n!) – Factorial Time:** Represents algorithms that grow with the factorial of the input size, typically seen in algorithms that generate all permutations of a set.

### Using Big O Notation

To use Big O notation effectively, analyze the algorithm’s structure, focusing on loops, recursive calls, and other constructs that contribute to the growth rate of the algorithm’s time or space complexity as the input size increases.
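The analysis described above can be sketched on a hypothetical function of my own (not from any library): count the work each construct does, then keep only the dominant term.

```csharp
using System;

public class ComplexityAnalysis
{
    // A hypothetical function used only to illustrate structural analysis.
    public static int CountPairsAndSum(int[] array)
    {
        var total = 0;

        // A single loop runs n times -> O(n).
        foreach (var value in array)
        {
            total += value;
        }

        // Two nested loops run n * n times -> O(n^2).
        var pairs = 0;
        for (var i = 0; i < array.Length; i++)
        {
            for (var j = 0; j < array.Length; j++)
            {
                pairs++;
            }
        }

        // Total work is O(n + n^2); the dominant term wins, so this is O(n^2).
        return total + pairs;
    }

    public static void Main()
    {
        // For n = 3: sum is 6, pair count is 9, so the function returns 15.
        Console.WriteLine(CountPairsAndSum(new[] { 1, 2, 3 }));
    }
}
```

Note how the constant factors and the lower-order O(n) term are discarded: only the fastest-growing term matters as n becomes large.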

### Examples with C# Code

#### O(1) – Constant Time

```
void PrintFirstElement(int[] array)
{
    if (array.Length > 0)
    {
        Console.WriteLine(array[0]);
    }
}
```

This function prints the first element of an array. No matter the size of the array, it only accesses the first element, demonstrating constant time complexity.

#### O(log n) – Logarithmic Time

```
void BinarySearch(int[] array, int target)
{
    var left = 0;
    var right = array.Length - 1;
    while (left <= right)
    {
        var mid = left + (right - left) / 2;
        if (array[mid] == target)
        {
            Console.WriteLine("Found at " + mid);
            return;
        }
        if (array[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    Console.WriteLine("Not found");
}
```

Binary search cuts the problem size in half with each step, making it a classic example of logarithmic time complexity.

#### O(n) – Linear Time

```
void PrintAllElements(int[] array)
{
    foreach (var element in array)
    {
        Console.WriteLine(element);
    }
}
```

This function iterates through each element of the array to print it, demonstrating linear time complexity: the running time grows in direct proportion to the size of the array.

#### O(n log n) – Linearithmic Time

```
Array.Sort(array); // C#'s Array.Sort typically uses an algorithm that operates in O(n log n) time complexity.
```
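To see where the n log n comes from, here is a minimal merge sort sketch of my own (the class and helper names are assumptions, not framework APIs): the array is halved log n times, and each level of recursion does O(n) merging work.

```csharp
using System;
using System.Linq;

public class MergeSortDemo
{
    // Recursively splits the array in half (log n levels of recursion).
    public static int[] MergeSort(int[] array)
    {
        if (array.Length <= 1) return array;

        var mid = array.Length / 2;
        var left = MergeSort(array.Take(mid).ToArray());
        var right = MergeSort(array.Skip(mid).ToArray());
        return Merge(left, right);
    }

    // Merges two sorted arrays in linear time.
    private static int[] Merge(int[] left, int[] right)
    {
        var result = new int[left.Length + right.Length];
        int i = 0, j = 0, k = 0;
        while (i < left.Length && j < right.Length)
        {
            result[k++] = left[i] <= right[j] ? left[i++] : right[j++];
        }
        while (i < left.Length) result[k++] = left[i++];
        while (j < right.Length) result[k++] = right[j++];
        return result;
    }

    public static void Main()
    {
        Console.WriteLine(string.Join(", ", MergeSort(new[] { 5, 2, 4, 1, 3 }))); // 1, 2, 3, 4, 5
    }
}
```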

Sorting algorithms like **merge sort** and **quicksort**, under typical conditions, have O(n log n) complexity.

#### O(n^2) – Quadratic Time

```
void PrintAllPairs(int[] array)
{
    for (var outer = 0; outer < array.Length; outer++)
    {
        for (var inner = 0; inner < array.Length; inner++)
        {
            Console.WriteLine($"{array[outer]}, {array[inner]}");
        }
    }
}
```

This nested loop prints all possible pairs of elements in the array, demonstrating quadratic time complexity as the number of operations increases quadratically with the size of the input.
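#### O(2^n) – Exponential Time

The key-concepts list above also mentions O(2^n) exponential time, so here is a minimal sketch (my own example, not tied to any library): the classic naive recursive Fibonacci.

```csharp
using System;

public class ExponentialDemo
{
    // Each call spawns two further calls, so the call tree roughly doubles
    // with each increment of n -> O(2^n) time.
    public static long Fibonacci(int n)
    {
        if (n <= 1) return n;
        return Fibonacci(n - 1) + Fibonacci(n - 2);
    }

    public static void Main()
    {
        Console.WriteLine(Fibonacci(10)); // 55
    }
}
```

Memoizing the intermediate results would bring this down to O(n), which is why the naive recursive form is usually avoided in practice.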

#### O(n!) – Factorial Time

```
using System;
using System.Collections.Generic;

public class Program
{
    public static void Main()
    {
        var numbers = new[] { 1, 2, 3 };
        var permutations = GeneratePermutations(numbers);
        foreach (var permutation in permutations)
        {
            Console.WriteLine(string.Join(", ", permutation));
        }
    }

    public static List<List<int>> GeneratePermutations(int[] numbers)
    {
        var length = numbers.Length;
        var resultList = new List<List<int>>();
        Permute(numbers, 0, length - 1, resultList);
        return resultList;
    }

    private static void Permute(int[] numbers, int start, int end, List<List<int>> result)
    {
        if (start == end)
        {
            var permutation = new List<int>(numbers);
            result.Add(permutation);
        }
        else
        {
            for (var counter = start; counter <= end; counter++)
            {
                Swap(ref numbers[start], ref numbers[counter]);
                Permute(numbers, start + 1, end, result);
                Swap(ref numbers[start], ref numbers[counter]); // backtrack
            }
        }
    }

    private static void Swap(ref int startRef, ref int swapRef)
    {
        var temp = startRef;
        startRef = swapRef;
        swapRef = temp;
    }
}
```

The factorial growth of the algorithm’s execution time, and of the number of permutations generated, with respect to the input size *n* is what categorizes this algorithm as **O(n!)**. This growth rate means the algorithm becomes impractically slow even for relatively small inputs (e.g., an array of length 10 has 10! = 3,628,800 permutations), underscoring the explosive increase in work associated with factorial-time algorithms.

### Conclusion

Understanding and applying Big O notation is crucial for evaluating and improving the performance of algorithms and software applications. By using the examples and explanations provided, developers can better anticipate the behavior of their code under various conditions and make informed decisions to optimize efficiency and scalability.

### Additional Resources (Similar and related articles)

Have fun!