Big O Notation


Constant & Logarithmic Time

O(1) - Constant Time

Operations that take the same amount of time regardless of input size. The gold standard for algorithm efficiency, representing instant access or computation.

Key Features
  • Execution time remains constant
  • Most efficient complexity class
  • Common in hash table lookups
  • Array index access
Common Examples
  • Array index access
  • Hash table lookup
  • Stack push/pop
  • Math calculations
  • Variable assignment
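A minimal sketch of two of these constant-time operations in Python (function names are illustrative):

```python
def get_first(items):
    """Array index access: one memory lookup, O(1) for any list length."""
    return items[0]


def has_key(table, key):
    """Hash table membership test: average-case O(1)."""
    return key in table
```

Both run in the same time whether the container holds ten elements or ten million.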
O(log n) - Logarithmic Time

Algorithms that divide the problem space in half with each iteration. Extremely efficient for large datasets, commonly seen in search algorithms on sorted data.

Key Features
  • Divides problem space by half each step
  • Scales extremely well with large inputs
  • Common in binary search
  • Tree-based operations
Common Examples
  • Binary search
  • Balanced BST operations
  • Binary heap insert/delete
  • Finding GCD (Euclidean algorithm)
  • Exponential search
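Binary search is the canonical O(log n) algorithm: each comparison discards half of the remaining sorted range. A minimal iterative sketch:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    The search window [lo, hi] halves on every iteration,
    so at most ~log2(n) comparisons are made.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1
```

For a million-element list this needs at most about 20 comparisons.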

Linear & Linearithmic Time

O(n) - Linear Time

Performance scales directly with input size. Each element is typically processed once. Most basic iterative algorithms fall into this category.

Key Features
  • Single pass through data
  • Scales linearly with input
  • Common in simple iterations
  • Often unavoidable for complete processing
Common Examples
  • Array traversal
  • Linear search
  • Finding min/max
  • Counting elements
  • Single loop iterations
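Finding the maximum is a typical single-pass O(n) operation: every element must be inspected exactly once, and no fewer inspections can guarantee a correct answer. A minimal sketch:

```python
def find_max(items):
    """Return the largest element; one pass, O(n) time, O(1) extra space."""
    best = items[0]
    for x in items[1:]:   # exactly n - 1 comparisons
        if x > best:
            best = x
    return best
```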
O(n log n) - Linearithmic Time

The complexity of the most efficient comparison-based sorting algorithms. Represents divide-and-conquer approaches that do O(n) work at each of O(log n) levels of subdivision, processing all elements while repeatedly halving the problem.

Key Features
  • Optimal for comparison sorting
  • Divide-and-conquer pattern
  • Better than quadratic, worse than linear
  • Common in efficient sorting
Common Examples
  • Merge sort
  • Quick sort (average case)
  • Heap sort
  • Timsort
  • Fast Fourier Transform
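Merge sort illustrates the pattern directly: log n levels of halving, with a linear-time merge at each level. A compact sketch that returns a new sorted list:

```python
def merge_sort(items):
    """Stable divide-and-conquer sort: O(n log n) time, O(n) extra space."""
    if len(items) <= 1:          # base case: already sorted
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # sort each half recursively
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```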

Polynomial Time

O(n²) - Quadratic Time

Performance grows quadratically with input size, typically from nested iterations. Common in brute-force solutions and simple sorting algorithms. Becomes inefficient for large inputs.

Key Features
  • Nested loop pattern
  • Scales poorly for large inputs
  • Common in brute-force solutions
  • Often can be optimized
Common Examples
  • Bubble sort
  • Selection sort
  • Insertion sort
  • Nested loop operations
  • Comparing all pairs
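Bubble sort is the textbook example of the nested-loop pattern: the inner loop runs up to n - 1 times for each of the outer loop's n passes, giving O(n²) comparisons. A minimal sketch:

```python
def bubble_sort(items):
    """O(n^2) sort: repeatedly swap adjacent out-of-order pairs."""
    a = list(items)              # work on a copy
    n = len(a)
    for i in range(n):           # outer loop: n passes
        for j in range(n - 1 - i):   # inner loop: shrinks each pass
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```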
O(n³) - Cubic Time

Triple nested loops or similar structures. Rarely practical for large datasets. Common in matrix operations and certain geometric algorithms.

Key Features
  • Triple nested structure
  • Very poor scalability
  • Limited practical use
  • Matrix multiplication
Common Examples
  • Matrix multiplication (naive)
  • Floyd-Warshall algorithm
  • Triple nested loops
  • All-triplets comparison
  • Certain dynamic programming problems
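Naive matrix multiplication is the standard cubic example: for two n x n matrices, three nested loops each run n times. A minimal sketch using lists of lists:

```python
def mat_mul(a, b):
    """Naive n x n matrix product: three nested loops, O(n^3) multiplications."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):            # each row of a
        for j in range(n):        # each column of b
            for k in range(n):    # dot product of row i and column j
                c[i][j] += a[i][k] * b[k][j]
    return c
```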

Exponential & Factorial Time

O(2ⁿ) - Exponential Time

Doubles with each additional input element. Common in recursive algorithms that branch at each step. Quickly becomes impractical even for moderate input sizes.

Key Features
  • Doubles with each element
  • Recursive branching
  • Impractical for n > 30
  • Brute-force combinatorial problems
Common Examples
  • Fibonacci (naive recursive)
  • Power set generation
  • Subset sum (brute force)
  • Tower of Hanoi
  • Recursive backtracking
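Power set generation makes the 2ⁿ growth concrete: each element is either included or excluded, so each additional element doubles the number of subsets. A minimal recursive sketch:

```python
def subsets(items):
    """All subsets of a list: the result has exactly 2^n entries."""
    if not items:
        return [[]]              # the empty set has one subset: itself
    rest = subsets(items[1:])    # subsets that exclude the first element
    # Pairing each with a copy that includes the first element doubles the count.
    return rest + [[items[0]] + s for s in rest]
```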
O(n!) - Factorial Time

The worst common complexity class. Growth is catastrophically fast. Appears in algorithms that generate all permutations. Only feasible for very small inputs (n < 10).

Key Features
  • Catastrophically poor scaling
  • Only for tiny inputs (n < 10)
  • All permutations
  • Usually indicates need for optimization
Common Examples
  • Generating all permutations
  • Traveling salesman (brute force)
  • Solving n-queens (naive)
  • All orderings
  • Brute-force optimization
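Generating every permutation shows why O(n!) is only feasible for tiny inputs: there are n choices for the first position, n - 1 for the second, and so on. A minimal recursive sketch:

```python
def permutations(items):
    """All orderings of a list: exactly n! results for n distinct elements."""
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, x in enumerate(items):
        # Fix x in front, then permute everything else.
        for p in permutations(items[:i] + items[i + 1:]):
            result.append([x] + p)
    return result
```

At n = 10 this already produces 3,628,800 orderings; the standard library's `itertools.permutations` does the same job lazily.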

Space Complexity

O(1) - Constant Space

Uses a fixed amount of memory regardless of input size. Achieves space efficiency through in-place operations and a constant number of variables.

Key Features
  • Fixed memory usage
  • In-place algorithms
  • Most space-efficient
  • Common optimization goal
Common Examples
  • In-place array reversal
  • Two-pointer technique
  • Iterative algorithms with few variables
  • Bit manipulation
  • Constant variable storage
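In-place reversal with two pointers combines the first two examples: the input is modified directly and only two index variables are allocated, no matter how long the list is. A minimal sketch:

```python
def reverse_in_place(items):
    """Reverse a list in place: O(n) time, O(1) extra space."""
    i, j = 0, len(items) - 1     # the only extra storage: two indices
    while i < j:
        items[i], items[j] = items[j], items[i]
        i += 1
        j -= 1
    return items
```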
O(n) - Linear Space

Memory usage grows proportionally with input size. Common when creating auxiliary data structures or storing intermediate results. Often necessary for optimal time complexity.

Key Features
  • Scales with input size
  • Common for auxiliary structures
  • Time-space tradeoff
  • Hash tables, arrays
Common Examples
  • Hash table storage
  • Auxiliary array in merge sort
  • Recursion call stack
  • Dynamic programming table
  • Result array storage
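Counting element frequencies illustrates the time-space tradeoff: a hash table keyed by element uses O(n) extra memory in the worst case (all elements distinct), but turns repeated lookups into O(1) operations. A minimal sketch:

```python
def count_elements(items):
    """Frequency count via a hash table: O(n) time, up to O(n) extra space."""
    counts = {}
    for x in items:                         # one pass over the input
        counts[x] = counts.get(x, 0) + 1    # O(1) average per update
    return counts
```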
O(log n) - Logarithmic Space

Memory usage grows logarithmically with input size. Typically seen in recursive algorithms that divide the problem, where the recursion depth is logarithmic.

Key Features
  • Logarithmic memory growth
  • Recursive call stack
  • Divide-and-conquer
  • Very space-efficient
Common Examples
  • Binary search (recursive)
  • Balanced BST operations
  • Quick sort recursion stack
  • Binary heap operations
  • Divide-and-conquer recursion
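Recursive binary search shows where logarithmic space comes from: the algorithm stores nothing itself, but each halving step adds one frame to the call stack, so the stack depth is O(log n). A minimal sketch:

```python
def binary_search_rec(sorted_items, target, lo=0, hi=None):
    """Recursive binary search: O(log n) time, O(log n) call-stack space."""
    if hi is None:
        hi = len(sorted_items) - 1
    if lo > hi:
        return -1                # range is empty: not found
    mid = (lo + hi) // 2
    if sorted_items[mid] == target:
        return mid
    if sorted_items[mid] < target:
        return binary_search_rec(sorted_items, target, mid + 1, hi)
    return binary_search_rec(sorted_items, target, lo, mid - 1)
```

The iterative version of the same algorithm needs only O(1) space, which is why it is usually preferred in practice.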
O(n²) - Quadratic Space

Space grows quadratically with input. Common in adjacency matrices for graphs and certain dynamic programming solutions. Can be prohibitive for large inputs.

Key Features
  • Quadratic memory growth
  • 2D arrays and matrices
  • Graph adjacency matrices
  • Some DP solutions
Common Examples
  • Adjacency matrix for graphs
  • 2D DP tables
  • Distance matrix (all pairs)
  • Matrix storage
  • Grid-based algorithms
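An adjacency matrix is the standard quadratic-space structure: it allocates an n x n grid up front, regardless of how many edges the graph actually has. A minimal sketch for an undirected graph:

```python
def adjacency_matrix(n, edges):
    """Build an n x n adjacency matrix: O(n^2) space even for sparse graphs."""
    m = [[0] * n for _ in range(n)]   # n^2 cells allocated immediately
    for u, v in edges:
        m[u][v] = 1
        m[v][u] = 1                   # undirected: mirror each edge
    return m
```

For sparse graphs, an adjacency list brings storage down to O(n + e), which is the usual remedy when O(n²) space is prohibitive.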