Big O Notation
Constant & Logarithmic Time
Operations that take the same amount of time regardless of input size, written O(1). The gold standard for algorithm efficiency: a fixed number of steps no matter how large the input.
- Execution time remains constant
- Most efficient complexity class
- Common in hash table lookups
- Array index access
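A minimal Python sketch of constant-time operations (the function names are illustrative): the cost of a dict lookup or a list index does not grow with the size of the container.

```python
def constant_time_lookup(table, key):
    """Hash-table lookup: average O(1), regardless of len(table)."""
    return table.get(key)

def constant_time_index(items, i):
    """Array index access: O(1), regardless of len(items)."""
    return items[i]
```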
Algorithms that divide the problem space in half with each iteration run in O(log n) time. Extremely efficient for large datasets, commonly seen in search algorithms on sorted data.
- Divides problem space by half each step
- Scales extremely well with large inputs
- Common in binary search
- Tree-based operations
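Binary search is the canonical O(log n) example; a minimal sketch:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining range, so the loop runs
    at most O(log n) times.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```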
Linear & Linearithmic Time
Performance scales directly with input size: O(n). Each element is typically processed once. Most basic iterative algorithms fall into this category.
- Single pass through data
- Scales linearly with input
- Common in simple iterations
- Often unavoidable for complete processing
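A simple O(n) sketch: finding the maximum requires one pass that touches every element exactly once (the function name is illustrative).

```python
def find_max(items):
    """Single pass through the data: O(n) time."""
    best = items[0]  # assumes a non-empty list
    for x in items[1:]:
        if x > best:
            best = x
    return best
```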
O(n log n) is the complexity of the most efficient comparison-based sorting algorithms. It represents divide-and-conquer approaches that process all n elements at each of the logarithmically many levels of subdivision.
- Optimal for comparison sorting
- Divide-and-conquer pattern
- Better than quadratic, worse than linear
- Common in efficient sorting
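Merge sort illustrates the O(n log n) divide-and-conquer pattern: log n levels of splitting, with O(n) merging work per level. A minimal sketch:

```python
def merge_sort(items):
    """O(n log n): recursively split, then merge sorted halves."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```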
Polynomial Time
Performance grows quadratically with input size, O(n^2), typically from nested iterations. Common in brute-force solutions and simple sorting algorithms. Becomes inefficient for large inputs.
- Nested loop pattern
- Scales poorly for large inputs
- Common in brute-force solutions
- Often can be optimized
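A typical O(n^2) brute-force sketch (the function name is illustrative): two nested loops compare every pair of elements.

```python
def has_duplicate_pair(items):
    """Brute-force pair comparison: O(n^2) in the worst case.

    Can be optimized to O(n) time with a hash set, at the cost
    of O(n) extra space.
    """
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```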
Triple nested loops or similar structures run in O(n^3) time. Rarely practical for large datasets. Common in matrix operations and certain geometric algorithms.
- Triple nested structure
- Very poor scalability
- Limited practical use
- Matrix multiplication
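Naive square-matrix multiplication is the standard O(n^3) example: three nested loops over n indices each.

```python
def matmul(a, b):
    """Naive n x n matrix multiplication: O(n^3) scalar operations."""
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result
```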
Exponential & Factorial Time
Work doubles with each additional input element: O(2^n). Common in recursive algorithms that branch at each step. Quickly becomes impractical even for moderate input sizes.
- Doubles with each element
- Recursive branching
- Impractical for n > 30
- Brute-force combinatorial problems
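Generating every subset is a concrete O(2^n) example: each element is either included or excluded, so the output (and the work) doubles per element. A minimal sketch:

```python
def all_subsets(items):
    """Recursive branching: 2^n subsets, so O(2^n) time and output."""
    if not items:
        return [[]]
    # Subsets either exclude items[0] (rest) or include it.
    rest = all_subsets(items[1:])
    return rest + [[items[0]] + s for s in rest]
```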
O(n!) is the worst complexity class commonly encountered. Growth is catastrophically fast. Appears in algorithms that generate all permutations. Only feasible for very small inputs (n < 10).
- Catastrophically poor scaling
- Only for tiny inputs (n < 10)
- All permutations
- Usually indicates need for optimization
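A minimal sketch of the O(n!) pattern: producing every ordering of the input yields n! results, so the work grows factorially (in practice, `itertools.permutations` does the same job).

```python
def all_permutations(items):
    """Generates every ordering: n! results, hence O(n!) time."""
    if len(items) <= 1:
        return [list(items)]
    perms = []
    for i, x in enumerate(items):
        # Fix x in front, permute the remaining elements.
        for rest in all_permutations(items[:i] + items[i + 1:]):
            perms.append([x] + rest)
    return perms
```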
Space Complexity
Uses a fixed amount of memory regardless of input size: O(1) space. Achieved through in-place operations that use only a constant number of variables.
- Fixed memory usage
- In-place algorithms
- Most space-efficient
- Common optimization goal
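In-place reversal is a simple O(1)-space sketch: the input is modified directly, and the only extra memory is two index variables.

```python
def reverse_in_place(items):
    """Reverses a list using O(1) extra space: just two indices."""
    lo, hi = 0, len(items) - 1
    while lo < hi:
        items[lo], items[hi] = items[hi], items[lo]
        lo += 1
        hi -= 1
    return items
```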
Memory usage grows proportionally with input size: O(n) space. Common when creating auxiliary data structures or storing intermediate results. Often necessary to achieve optimal time complexity.
- Scales with input size
- Common for auxiliary structures
- Time-space tradeoff
- Hash tables, arrays
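The classic two-sum problem illustrates the time-space tradeoff (the function name is illustrative): an auxiliary hash table using O(n) space buys O(n) time in place of the O(n^2) brute-force pair search.

```python
def two_sum(items, target):
    """Finds indices of two elements summing to target, or None.

    The `seen` hash table holds up to n entries: O(n) space
    in exchange for a single O(n) pass.
    """
    seen = {}
    for i, x in enumerate(items):
        if target - x in seen:
            return (seen[target - x], i)
        seen[x] = i
    return None
```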
Memory usage grows logarithmically with input size: O(log n) space. Typically seen in recursive algorithms that divide the problem, where the recursion depth (and hence the call stack) is logarithmic.
- Logarithmic memory growth
- Recursive call stack
- Divide-and-conquer
- Very space-efficient
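A recursive binary search sketches O(log n) space: each call halves the range, so the call stack holds at most O(log n) frames.

```python
def binary_search_recursive(sorted_items, target, lo=0, hi=None):
    """Recursion depth is O(log n), so the call stack uses O(log n) space."""
    if hi is None:
        hi = len(sorted_items) - 1
    if lo > hi:
        return -1
    mid = (lo + hi) // 2
    if sorted_items[mid] == target:
        return mid
    if sorted_items[mid] < target:
        return binary_search_recursive(sorted_items, target, mid + 1, hi)
    return binary_search_recursive(sorted_items, target, lo, mid - 1)
```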
Space grows quadratically with input: O(n^2). Common in adjacency matrices for graphs and certain dynamic programming solutions. Can be prohibitive for large inputs.
- Quadratic memory growth
- 2D arrays and matrices
- Graph adjacency matrices
- Some DP solutions
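An adjacency matrix is the standard O(n^2)-space example (the function name is illustrative): it allocates an n x n grid even when the graph has few edges.

```python
def adjacency_matrix(num_vertices, edges):
    """Builds an n x n matrix for an undirected graph: O(n^2) space,
    regardless of how many edges are actually present."""
    matrix = [[0] * num_vertices for _ in range(num_vertices)]
    for u, v in edges:
        matrix[u][v] = 1
        matrix[v][u] = 1  # mirror entry: the graph is undirected
    return matrix
```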
