Algorithm Complexity
Learn to analyze algorithm efficiency using asymptotic notation. Understand Big-O, Big-Omega, and Big-Theta to compare and classify algorithms by their growth rates.
- Understand the purpose of asymptotic analysis
- Apply Big-O notation for upper bounds
- Apply Big-Omega notation for lower bounds
- Use Big-Theta for tight bounds
- Analyze common algorithm patterns
- Compare complexity classes
1. Introduction to Algorithm Analysis
An algorithm is a finite sequence of precise instructions for solving a problem or performing a computation.
Algorithm analysis studies the resources (time, space) an algorithm requires as a function of input size.
Why Asymptotic Analysis?
- Exact running time depends on hardware, implementation details
- We care about scalability: how does time grow with input size?
- Asymptotic notation ignores constants and focuses on growth rate
- Allows fair comparison between algorithms
Consider two algorithms whose running times (in basic operations) are:
- Algorithm A: T(n) = 1000n
- Algorithm B: T(n) = n²
For small n, A seems worse. But for large n:
- n = 100: A takes 100,000; B takes 10,000
- n = 10,000: A takes 10,000,000; B takes 100,000,000
Algorithm A (linear) eventually beats B (quadratic) for large inputs.
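A quick numeric check makes the crossover visible. This sketch assumes, consistent with the figures above, that A costs 1000n operations and B costs n²:

```python
def cost_a(n):
    """Hypothetical linear algorithm: 1000n basic operations."""
    return 1000 * n

def cost_b(n):
    """Hypothetical quadratic algorithm: n^2 basic operations."""
    return n * n

for n in (100, 1_000, 10_000):
    print(n, cost_a(n), cost_b(n))
# The costs meet at n = 1,000; beyond that, A is cheaper for every n.
```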
2. Big-O Notation (Upper Bound)
Let f(n) and g(n) be nonnegative functions of n. We say f(n) = O(g(n)) if there exist positive constants c and n₀ such that:
f(n) ≤ c·g(n) for all n ≥ n₀
This means f(n) grows at most as fast as g(n) asymptotically.
Show: 3n² + 5n + 7 = O(n²)
We need to find c and n₀ such that 3n² + 5n + 7 ≤ c·n² for n ≥ n₀.
For n ≥ 1: 5n ≤ 5n² and 7 ≤ 7n²
So: 3n² + 5n + 7 ≤ 3n² + 5n² + 7n² = 15n²
Choose c = 15 and n₀ = 1. ✓
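The chosen constants can be sanity-checked numerically. A sketch, assuming the bounded function is 3n² + 5n + 7 (the function that makes c = 3 + 5 + 7 = 15 work):

```python
def f(n):
    # Assumed example function: f(n) = 3n^2 + 5n + 7
    return 3 * n * n + 5 * n + 7

c, n0 = 15, 1
# Check f(n) <= c * n^2 over a large range of n >= n0
assert all(f(n) <= c * n * n for n in range(n0, 10_000))
print("f(n) <= 15 n^2 holds for all tested n >= 1")
```

A numeric check like this can never prove the bound, but it catches wrongly chosen constants immediately.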
- Constant factor: c·f(n) = O(f(n)) for any constant c > 0
- Sum rule: O(f(n)) + O(g(n)) = O(max(f(n), g(n)))
- Product rule: O(f(n)) · O(g(n)) = O(f(n)·g(n))
- Transitivity: If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n))
3. Big-Omega and Big-Theta
f(n) = Ω(g(n)) if there exist positive constants c and n₀ such that:
f(n) ≥ c·g(n) for all n ≥ n₀
This means f(n) grows at least as fast as g(n) asymptotically.
f(n) = Θ(g(n)) if f(n) = O(g(n)) AND f(n) = Ω(g(n)).
Equivalently, there exist positive constants c₁, c₂, n₀ such that:
c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀
Summary of Notations:
| Notation | Meaning | Analogy |
|---|---|---|
| O(g(n)) | Upper bound | ≤ |
| Ω(g(n)) | Lower bound | ≥ |
| Θ(g(n)) | Tight bound | = |
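The two-sided Θ sandwich can be illustrated numerically. A sketch, assuming the earlier example function 3n² + 5n + 7 with constants c₁ = 3, c₂ = 15, n₀ = 1:

```python
def f(n):
    # Assumed example function: f(n) = 3n^2 + 5n + 7
    return 3 * n * n + 5 * n + 7

c1, c2, n0 = 3, 15, 1
# Check c1 * n^2 <= f(n) <= c2 * n^2, i.e. f(n) is Theta(n^2)
for n in range(n0, 5_000):
    assert c1 * n * n <= f(n) <= c2 * n * n
print("3n^2 + 5n + 7 is sandwiched between 3n^2 and 15n^2 for n >= 1")
```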
4. Common Complexity Classes
Common complexity classes in increasing order:
O(1) - Constant:
Array access, hash table lookup (average)
O(log n) - Logarithmic:
Binary search, balanced tree operations
O(n) - Linear:
Linear search, single loop through array
O(n log n) - Linearithmic:
Merge sort, quicksort (average), heapsort
O(n²) - Quadratic:
Bubble sort, insertion sort, nested loops
O(2ⁿ) - Exponential:
Naive recursive Fibonacci, subset enumeration
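Tabulating these classes for a few input sizes makes the ordering concrete. A sketch using exact operation counts rather than measured times:

```python
import math

for n in (10, 100, 1000):
    print(f"n={n:5d}  log n={math.log2(n):6.1f}  "
          f"n log n={n * math.log2(n):10.0f}  n^2={n * n:9d}  "
          f"2^n has {len(str(2 ** n))} decimal digits")
```

Even at n = 1000, n² is a manageable million, while 2ⁿ has over 300 digits — which is why exponential algorithms are usable only for very small inputs.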
5. Analyzing Algorithms
```
LinearSearch(A, x):        # n = length of A
    for i = 1 to n:
        if A[i] == x: return i
    return -1
```
Analysis:
- Best case: O(1) - found at first position
- Worst case: O(n) - not found or at last position
- Average case: O(n) - expected n/2 comparisons
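The pseudocode translates directly to Python (0-based indexing, returning -1 when x is absent):

```python
def linear_search(A, x):
    """Return the index of x in A, or -1 if absent. O(n) worst case."""
    for i, value in enumerate(A):
        if value == x:
            return i   # best case: O(1) when x is at the front
    return -1          # worst case: O(n), scanned every element

assert linear_search([4, 2, 7, 1], 7) == 2
assert linear_search([4, 2, 7, 1], 9) == -1
```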
```
BinarySearch(A, x, low, high):     # A must be sorted
    if low > high: return -1
    mid = floor((low + high) / 2)
    if A[mid] == x: return mid
    if A[mid] > x:
        return BinarySearch(A, x, low, mid - 1)
    else:
        return BinarySearch(A, x, mid + 1, high)
```
Analysis:
Each step halves the search space, so after at most about log₂ n halvings the range is empty. Worst case: O(log n).
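An equivalent Python version (0-based indices, floor division for the midpoint, A sorted):

```python
def binary_search(A, x, low, high):
    """Search sorted A[low..high] for x; return its index or -1. O(log n)."""
    if low > high:
        return -1
    mid = (low + high) // 2   # floor division keeps mid an integer
    if A[mid] == x:
        return mid
    if A[mid] > x:
        return binary_search(A, x, low, mid - 1)
    return binary_search(A, x, mid + 1, high)

A = [1, 3, 5, 7, 9, 11]
assert binary_search(A, 7, 0, len(A) - 1) == 3
assert binary_search(A, 4, 0, len(A) - 1) == -1
```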
Common Patterns:
- Single loop over n elements → O(n)
- Nested loops (both over n) → O(n²)
- Divide in half each step → O(log n)
- Divide and conquer with linear merge → O(n log n)
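Counting iterations directly confirms the first three patterns. A small sketch with hypothetical counters:

```python
n = 64

single = sum(1 for _ in range(n))                     # single loop: n iterations
nested = sum(1 for _ in range(n) for _ in range(n))   # nested loops: n^2 iterations

halving = 0
k = n
while k > 1:        # halving loop: log2(n) iterations
    k //= 2
    halving += 1

print(single, nested, halving)  # 64 4096 6
```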
Frequently Asked Questions
Why do we ignore constants in Big-O?
Constants depend on implementation and hardware. What matters for scalability is the growth rate as input size increases.
An algorithm whose time grows like n will always eventually beat one whose time grows like n², regardless of constants, because n² grows unboundedly faster than n.
When should I use Big-O vs Big-Theta?
Big-O: When you want an upper bound (worst-case guarantee).
Big-Theta: When you want to precisely characterize growth rate.
In practice, Big-O is more commonly used because we often care about worst-case behavior.
How do I analyze recursive algorithms?
Write a recurrence relation, then solve it:
- Substitution method
- Recursion tree method
- Master theorem (for divide-and-conquer)
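For example, merge sort's recurrence T(n) = 2T(n/2) + n falls under the Master theorem and solves to Θ(n log n). Unrolling the recurrence numerically matches: this sketch assumes T(1) = 1 and n a power of two, in which case the solution is exactly n·log₂ n + n.

```python
import math

def T(n):
    """Unroll T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for n in (2, 8, 64, 1024):
    print(n, T(n), int(n * math.log2(n) + n))  # unrolled value vs. closed form
```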
What's the difference between time and space complexity?
Time complexity: How running time grows with input size.
Space complexity: How memory usage grows with input size.
Often there's a trade-off: algorithms can use more space to save time, or vice versa.
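Memoized Fibonacci is the classic example of this trade: O(n) extra space for a cache buys a drop from exponential to linear time (the naive version is the O(2ⁿ) example from section 4):

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential time O(2^n), no extra storage beyond the call stack."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Linear time: each subproblem is solved once and cached in O(n) space."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(10) == fib_memo(10) == 55
assert fib_memo(200) == fib_memo(199) + fib_memo(198)  # instant; naive would never finish
```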
What complexity is considered 'good'?
It depends on context, but general guidelines:
- O(log n), O(n): Excellent - scales well
- O(n log n): Good - acceptable for most purposes
- O(n²): Okay for small inputs
- O(2ⁿ), O(n!): Only for very small inputs