Algorithm: The Order of Algorithms

Definitions and Basic Concepts

  • The order of an algorithm measures its efficiency: the number of operations it requires relative to the size of its input.
  • It is expressed using Big O notation, which compares algorithms by the fastest-growing term of their running-time functions.
  • Big O notation describes an asymptotic upper bound, and is most commonly quoted for the worst-case running time.
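
For reference, this is the standard textbook definition of Big O (a generic definition, not something specific to this text), written in LaTeX:

    % f is O(g) when, beyond some threshold n_0, f is bounded above
    % by a constant multiple of g.
    f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0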

Common Classes of Algorithm Complexity

  • O(1), or constant time complexity, means the algorithm takes the same amount of time to execute regardless of input size. Operations like accessing an element of an array by index, or inserting a node at the head of a linked list, fall in this category. (Code sketches for several of these classes follow the list.)
  • O(log n), or logarithmic time complexity, means the execution time grows logarithmically with input size. Binary search on sorted data is the classic example.
  • O(n), or linear time complexity, means the execution time grows linearly with the input size. Simple searching algorithms such as linear search, and any single pass over the data, have linear time complexity.
  • O(n log n), or linearithmic time complexity, means the running time grows in proportion to n multiplied by log n, so doubling the input slightly more than doubles the work. The efficient comparison-based sorting algorithms mergesort and heapsort fall in this category, as does quicksort on average (its worst case is O(n²)).
  • O(n²), or quadratic time complexity, means the time taken grows with the square of the input size. The simple sorting algorithms bubble sort, selection sort, and insertion sort are examples.
  • O(n³), or cubic time complexity, means the running time grows with the cube of the input size. An example is the naive multiplication of two n × n matrices.
  • O(2^n), or exponential time complexity, means the running time roughly doubles with each element added to the input. Such algorithms can solve complex problems, but quickly become unsustainable as the data size grows. The classic example is the naive recursive calculation of Fibonacci numbers.
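
To make the first three classes concrete, here is a minimal sketch in Python (the helper names are illustrative, not taken from any particular library):

    def get_first(items):
        # O(1): indexing a Python list costs the same regardless of its length.
        return items[0]

    def binary_search(sorted_items, target):
        # O(log n): each comparison halves the remaining search range.
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    def linear_search(items, target):
        # O(n): in the worst case every element is examined once.
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1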
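For O(n log n), a textbook mergesort shows where the two factors come from: the recursion is about log n levels deep, and each level does O(n) merging work. A simple sketch, not an optimised implementation:

    def merge_sort(items):
        # O(n log n): log n levels of splitting, O(n) merge work per level.
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Merge the two sorted halves in a single linear pass.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged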
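For the quadratic and cubic classes, the nesting depth of the loops mirrors the exponent: bubble sort uses two nested loops, and naive multiplication of n × n matrices uses three. Again, minimal sketches rather than production code:

    def bubble_sort(items):
        # O(n^2): two nested passes over the data.
        items = list(items)
        n = len(items)
        for i in range(n):
            for j in range(n - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    def matmul_naive(a, b):
        # O(n^3): three nested loops over n x n matrices.
        n = len(a)
        c = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    c[i][j] += a[i][k] * b[k][j]
        return c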
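Finally, the naive recursive Fibonacci mentioned above: each call spawns two further calls, so the call tree grows exponentially with n (strictly the base is about 1.6, but it is conventionally classed with the exponential algorithms):

    def fib_naive(n):
        # Exponential: fib_naive(n) calls itself twice, so the number of
        # calls roughly doubles for each increase in n.
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

Caching previously computed values (memoisation) reduces the same recurrence to O(n), which is why recursive Fibonacci often appears as both a cautionary and a well-behaved example.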

Important Points

  • The lower bound of an algorithm’s time complexity is written Omega (Ω), and a tight bound, where the upper and lower bounds coincide, is written Theta (Θ). Note that Θ denotes a tight bound, not the average case, although the two are often conflated. Together with Big O, these form the asymptotic notations, providing a comprehensive vocabulary for analysing algorithm efficiency. (Formal definitions are sketched after this list.)
  • When analysing and comparing algorithms, we are generally interested in their asymptotic behaviour as the input size approaches infinity. This masks constant factors and lower-order terms, which matter less for large inputs.
  • Remember, the best algorithm in practice isn’t necessarily the one with the best worst-case complexity (the lowest Big O). Other factors, such as ease of implementation, actual time and space requirements for realistically sized inputs, and the nature of the input data, can be more decisive.
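
For completeness, here are the standard definitions of the other two notations, in the same style as the Big O definition above (again, generic textbook definitions rather than anything specific to this text):

    % Omega: f is bounded below by a constant multiple of g beyond n_0.
    f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0
    % Theta: f is bounded both above and below by constant multiples of g.
    f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))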