Divide & Conquer (4.1, 4.2, 4.7; 4.3–4.5)
Pattern
- Split problem into subproblems; solve recursively; combine.
- Typical recurrence: T(n) = a T(n/b) + f(n).
Master Theorem (basic)
- Let n^{log_b a} be the pivot term.
- Case 1: If f(n) = O(n^{log_b a - \epsilon}) ⇒ T(n)=Θ(n^{log_b a}).
- Case 2: If f(n) = Θ(n^{log_b a} log^k n) ⇒ T(n)=Θ(n^{log_b a} log^{k+1} n).
- Case 3: If f(n) = Ω(n^{log_b a + \epsilon}) and regularity holds ⇒ T(n)=Θ(f(n)).
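The three cases on some standard recurrences (worked examples; ε and the regularity constant c are the ones from the statements above):

```latex
% Case 1: T(n) = 9T(n/3) + n:  a=9,\ b=3,\ n^{\log_3 9} = n^2,
%   f(n) = n = O(n^{2-\epsilon}) \Rightarrow T(n) = \Theta(n^2).
% Case 2: T(n) = 2T(n/2) + n:  n^{\log_2 2} = n,\ f(n) = \Theta(n \log^0 n)
%   \Rightarrow T(n) = \Theta(n \log n).
% Case 3: T(n) = T(n/2) + n:  n^{\log_2 1} = 1,\ f(n) = n = \Omega(n^{0+\epsilon}),
%   regularity: a f(n/b) = n/2 \le c\,n \ (c = 1/2) \Rightarrow T(n) = \Theta(n).
```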
Substitution & Recursion Trees
- Guess a bound; prove by induction (tighten as needed).
- Tree sum = cost per level × levels; verify with base & top level costs.
Classics
- MergeSort: T(n)=2T(n/2)+Θ(n)=Θ(n log n).
- Binary Search: T(n)=T(n/2)+Θ(1)=Θ(log n).
- Strassen (idea): 7 recursive multiplications instead of 8 ⇒ T(n)=7T(n/2)+Θ(n²)=Θ(n^{log_2 7}); trades multiplications for extra additions.
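A minimal MergeSort sketch matching the T(n)=2T(n/2)+Θ(n) recurrence (function name is mine):

```python
def merge_sort(a):
    """Sort a list; split in half, recurse, merge: Theta(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge: repeatedly take the smaller head element (Theta(n) total work).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```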
Quicksort (7.1–7.4, 8.1)
- In‑place partition, average Θ(n log n), worst Θ(n^2).
- Randomized pivot gives expected Θ(n log n).
- Partition correctness invariant: elements < p | p | elements ≥ p.
- Recurse on the smaller side, loop on the larger (tail‑recursion elimination) ⇒ O(log n) stack worst case.
- Median‑of‑three helps in practice; not asymptotics.
Stability & In‑place
- Standard quicksort is unstable; in‑place partition uses O(1) extra space.
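A sketch of randomized Lomuto partition plus quicksort with the smaller-side recursion above (names are mine); the partition establishes the [< p | p | ≥ p] invariant:

```python
import random

def partition(a, lo, hi):
    """Lomuto partition around a random pivot; returns the pivot's final index.
    Afterwards: a[lo..p-1] < a[p] <= a[p+1..hi]."""
    r = random.randint(lo, hi)
    a[r], a[hi] = a[hi], a[r]          # move random pivot to the end
    p, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < p:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        # Recurse on the smaller side, loop on the larger:
        # bounds stack depth by O(log n).
        if p - lo < hi - p:
            quicksort(a, lo, p - 1); lo = p + 1
        else:
            quicksort(a, p + 1, hi); hi = p - 1
```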
Order Statistics (9.1–9.3)
- Randomized‑Select: expected Θ(n) by quickselect.
- Deterministic Select (Median‑of‑Medians): worst‑case Θ(n).
- Use to find median, quantiles; applied in selection & partitioning.
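An iterative Randomized-Select (quickselect) sketch, partitioning in place on a copy (function name is mine):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed), expected Theta(n).
    One Lomuto partition per round; only the side containing k survives."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        r = random.randint(lo, hi)
        a[r], a[hi] = a[hi], a[r]
        p, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < p:
                a[i], a[j] = a[j], a[i]; i += 1
        a[i], a[hi] = a[hi], a[i]
        if k == i:
            return a[i]
        elif k < i:
            hi = i - 1                 # k-th smallest is left of the pivot
        else:
            lo = i + 1                 # ... or right of it
    return a[lo]
```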
Dynamic Programming (14.1–14.5)
Recipe
- Optimal substructure + overlapping subproblems.
- Define subproblem, recurrence, order, base, reconstruct solution.
Classics
- Rod Cutting: R[n]=max_{i} (p_i + R[n-i]).
- Matrix‑Chain: m[i,j]=min_k\{m[i,k]+m[k+1,j]+d_{i-1}d_k d_j\}.
- LCS: c[i,j]=c[i-1,j-1]+1 if match; else max of left/up.
- Knapsack 0/1: dp[w]=max(dp[w], dp[w-w_i]+v_i) in decreasing w.
Tips
- Space optimization via rolling arrays when only previous row/col needed.
- Beware integer overflow in path counts; use mod as needed.
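The 0/1 knapsack recurrence above, with the rolling-array space optimization from the tips (function name is mine):

```python
def knapsack_01(items, W):
    """items: list of (weight, value); W: capacity. 1-D DP over capacities.
    Iterating w in DECREASING order ensures each item is used at most once."""
    dp = [0] * (W + 1)
    for wi, vi in items:
        for w in range(W, wi - 1, -1):
            dp[w] = max(dp[w], dp[w - wi] + vi)
    return dp[W]
```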
Greedy (15.1–15.3)
- Greedy‑choice property + optimal substructure.
- Proofs: exchange argument; matroid/greedy‑stays‑ahead.
- Examples: activity selection (earliest finish), Huffman coding, interval partitioning.
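The earliest-finish greedy for activity selection, as a sketch (names are mine):

```python
def select_activities(intervals):
    """Greedy by earliest finish time: returns a maximum-size set of
    mutually compatible [start, finish) intervals."""
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(intervals, key=lambda it: it[1]):
        if s >= last_finish:           # compatible with everything chosen
            chosen.append((s, f))
            last_finish = f
    return chosen
```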
Amortized Analysis (16.1–16.4)
- Aggregate: total cost over n ops.
- Accounting: overcharge cheap ops, use credit to pay for expensive ones.
- Potential method: \hat c_i = c_i + \Phi(D_i)-\Phi(D_{i-1}).
- Examples: stack ops, binary counter, dynamic array resizing.
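The binary-counter example, instrumented to show the aggregate bound (function name is mine): n increments cost fewer than 2n bit flips in total, i.e., O(1) amortized.

```python
def binary_counter_flips(n):
    """Total bit flips when incrementing a binary counter n times from 0.
    Aggregate analysis: total < 2n, so amortized O(1) per increment."""
    bits, total = [0] * 64, 0
    for _ in range(n):
        i = 0
        while bits[i] == 1:            # flip trailing 1s back to 0
            bits[i] = 0
            total += 1
            i += 1
        bits[i] = 1                    # flip the lowest 0 to 1
        total += 1
    return total
```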
Disjoint Sets (19.1)
- Union‑Find with union by rank/size + path compression.
- Amortized time per op: α(n) (inverse Ackermann).
- Applications: Kruskal, connected components.
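A Union-Find sketch with union by size and path compression (class and method names are mine):

```python
class DSU:
    """Disjoint-set forest; amortized O(alpha(n)) per operation."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False               # already in the same set
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra            # attach smaller tree under larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True
```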
Elementary Graph Algos (20.1–20.5)
- Representations: adjacency list (Θ(V+E)), matrix (Θ(V^2)).
- BFS (unweighted SSSP): levels; parent tree; queue; Θ(V+E).
- DFS: discovery/finish times; tree/back/forward/cross edges; topological sort.
- Strongly Connected Components (Kosaraju): DFS on G for finish times; then DFS on G^T in decreasing finish order; each resulting tree is one SCC.
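A BFS sketch for the unweighted SSSP case above (function name is mine); the first visit to a vertex is along a shortest path:

```python
from collections import deque

def bfs(adj, s):
    """Unweighted SSSP from s on an adjacency-list graph (dict of lists).
    Returns (dist, parent); unreachable vertices keep dist = None."""
    dist = {v: None for v in adj}
    parent = {v: None for v in adj}
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:        # first visit = shortest level
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)
    return dist, parent
```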
Minimum Spanning Trees (21.1–21.2)
- Cut property: lightest edge crossing any cut that respects the partial MST is safe.
- Cycle property: heaviest edge on any cycle is not in some MST.
- Kruskal: sort edges; add if it doesn’t form a cycle (use DSU). Θ(E log V).
- Prim: grow tree from a root using priority queue. Θ(E log V) (binary heap).
- Uniqueness: distinct weights ⇒ unique MST.
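A compact Kruskal sketch with an inline DSU (names are mine); each accepted edge is safe by the cut property:

```python
def kruskal(n, edges):
    """edges: (w, u, v) tuples over vertices 0..n-1.
    Returns (total weight, list of MST edges)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    total, mst = 0, []
    for w, u, v in sorted(edges):           # sorting dominates: O(E log E)
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: no cycle
            parent[ru] = rv
            total += w
            mst.append((u, v, w))
    return total, mst
```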
Single‑Source Shortest Paths (22.1–22.4)
- Non‑negative weights: Dijkstra with PQ (Θ(E log V)).
- General weights (no negative cycles): Bellman‑Ford, relax edges V-1 times; detect negative cycle.
- Unweighted: BFS.
- Correctness: triangle inequality via relaxation invariants.
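A Dijkstra sketch using a lazy-deletion binary heap instead of decrease-key (function name is mine):

```python
import heapq

def dijkstra(adj, s):
    """Non-negative weights only. adj: {u: [(v, w), ...]}.
    Stale heap entries are skipped rather than deleted; O(E log V)."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                   # stale entry: u already settled
        for v, w in adj.get(u, []):
            nd = d + w                 # relaxation step
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```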
All‑Pairs Shortest Paths (22.5, 23.1–23.2)
- Floyd–Warshall: DP on intermediate vertices k; Θ(V^3).
- Repeated squaring of the min‑plus matrix product: Θ(V³ log V).
- Johnson’s algorithm: reweight with potentials (Bellman‑Ford) + run Dijkstra from each vertex.
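The Floyd–Warshall DP as a sketch (function name is mine); k indexes the largest intermediate vertex allowed so far:

```python
def floyd_warshall(dist):
    """dist: n x n matrix, dist[i][i] = 0 and float('inf') for no edge.
    In-place DP over intermediate vertices k; Theta(V^3)."""
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Either avoid k, or route i -> k -> j.
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```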
Maximum Flow (24.1–24.3)
- Residual graph; augmenting paths; flow conservation; capacity constraints.
- Ford–Fulkerson: augment until none exists; value increases each time.
- Edmonds–Karp: always augment along a BFS shortest path ⇒ O(VE) augmentations, O(VE^2) total.
- Max‑Flow Min‑Cut Theorem: flow value = minimum s‑t cut capacity.
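An Edmonds–Karp sketch on a nested-dict capacity graph (names and representation are mine):

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """cap: {u: {v: capacity}}. BFS augmenting paths on the residual graph
    until none remains; returns the max-flow value (= min-cut capacity)."""
    # Residual capacities, including zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:           # BFS for a shortest path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                        # no augmenting path: done
        bottleneck, v = float("inf"), t        # min residual cap on the path
        while parent[v] is not None:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:           # augment along the path
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck
```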
Linear Programming (29.1–29.3)
- Standard form: maximize c^T x subject to Ax \le b, x\ge 0.
- Feasible region is a convex polytope; opt at a vertex (if bounded).
- Simplex: efficient in practice; exponential worst‑case.
- Ellipsoid/interior‑point: polynomial‑time theoretical guarantees.
- Duality: weak/strong duality; complementary slackness conditions.
String Matching (32.1–32.2)
- Naïve: Θ((n-m+1)m) worst.
- KMP: prefix‑function (π) / failure function; Θ(n+m).
- Rabin–Karp: rolling hash; expected linear for many patterns; collision‑aware.
Build π for pattern P (0-indexed):
  π[0] = 0
  for i = 1..m-1:
    k = π[i-1]
    while k > 0 and P[k] ≠ P[i]: k = π[k-1]
    if P[k] == P[i]: k += 1
    π[i] = k
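The π construction above plus the matching scan, as runnable Python (function name is mine; assumes a non-empty pattern):

```python
def kmp_search(T, P):
    """Return all start indices where non-empty pattern P occurs in T; O(n+m)."""
    # Prefix function: pi[i] = length of the longest proper prefix of
    # P[:i+1] that is also a suffix of it.
    m = len(P)
    pi = [0] * m
    for i in range(1, m):
        k = pi[i - 1]
        while k > 0 and P[k] != P[i]:
            k = pi[k - 1]
        if P[k] == P[i]:
            k += 1
        pi[i] = k
    # Scan the text, reusing pi so matched characters are never re-examined.
    hits, k = [], 0
    for i, c in enumerate(T):
        while k > 0 and P[k] != c:
            k = pi[k - 1]
        if P[k] == c:
            k += 1
        if k == m:
            hits.append(i - m + 1)
            k = pi[k - 1]              # allow overlapping occurrences
    return hits
```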
NP‑Completeness (34.1–34.5)
- Classes: P ⊆ NP; NP = problems verifiable in polytime given a certificate.
- NP‑hard: at least as hard as every NP problem; NP‑complete: NP‑hard ∩ NP.
- Reductions: poly‑time many‑one reductions preserve membership.
- Cook–Levin: SAT is NP‑complete ⇒ many reductions from SAT/3‑SAT.
- Canonical NP‑complete: 3‑SAT, CLIQUE, VERTEX‑COVER, INDEPENDENT‑SET, HAMILTONIAN‑CYCLE, PARTITION, SUBSET‑SUM, 3‑COLOR.
- Proving NP‑C: (1) in NP; (2) reduce known NP‑C to it.
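"Verifiable in polytime given a certificate" made concrete: a toy SUBSET-SUM verifier (names are mine). Checking a proposed subset is easy; finding one is the NP-hard part.

```python
def verify_subset_sum(nums, target, certificate):
    """Polynomial-time verifier: certificate is a list of indices claimed
    to select a subset of nums summing to target."""
    return (len(set(certificate)) == len(certificate)      # no index reused
            and all(0 <= i < len(nums) for i in certificate)
            and sum(nums[i] for i in certificate) == target)
```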
Handy Tables
Sorting (comparison‑based)
| Algorithm | Best | Avg | Worst | Stable | In‑place |
|---|---|---|---|---|---|
| MergeSort | n log n | n log n | n log n | Yes | No |
| HeapSort | n log n | n log n | n log n | No | Yes |
| QuickSort | n log n | n log n | n² | No | Yes |
Graph SSSP & APSP
| Setting | Algorithm | Time |
|---|---|---|
| Unweighted | BFS | Θ(V+E) |
| Non‑negative | Dijkstra (heap) | Θ(E log V) |
| General weights | Bellman‑Ford | Θ(VE) |
| APSP dense | Floyd‑Warshall | Θ(V³) |
| APSP sparse | Johnson | Θ(VE log V) |
Proof/Correctness Nuggets
- Loop invariants: init, maintain, terminate.
- Greedy: exchange argument; show greedy choice can be swapped into an optimal solution.
- Flows: cut capacity is an upper bound on any s‑t flow; equality when no augmenting path exists.
- MST: cut & cycle properties justify Kruskal/Prim selections.
- DP optimal substructure: show optimal breaks into optimal subsolutions.
Common Recurrences (ready‑to‑use)
- T(n)=T(n/2)+c ⇒ Θ(log n).
- T(n)=2T(n/2)+cn ⇒ Θ(n log n).
- T(n)=T(n-1)+c ⇒ Θ(n).
- T(n)=T(n-1)+T(n-2)+c ⇒ exponential.
- T(n)=aT(n/b)+n^k: compare a vs b^k (Master).
Relaxation Patterns (SSSP)
// Edge (u,v,w)
if d[v] > d[u] + w:
    d[v] = d[u] + w; parent[v] = u
Invariants: d[v] is an upper bound; after algorithm, shortest‑path distances hold (assuming no negative cycles).
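Bellman–Ford is exactly this relaxation applied V-1 times, plus one extra pass for negative-cycle detection (function name is mine):

```python
def bellman_ford(n, edges, s):
    """edges: (u, v, w) over vertices 0..n-1. Returns the distance list,
    or None if a negative cycle is reachable from s."""
    INF = float("inf")
    d = [INF] * n
    d[s] = 0
    for _ in range(n - 1):             # V-1 rounds of relaxing every edge
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:              # any further improvement = neg cycle
        if d[u] + w < d[v]:
            return None
    return d
```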
Huffman & Prefix Codes
- Combine two least‑frequent symbols; build tree bottom‑up.
- Optimality via greedy‑choice + optimal substructure (exchange).
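A Huffman sketch with `heapq` (names and the heap-entry layout are mine); the tie-breaker counter keeps tuples comparable:

```python
import heapq

def huffman_codes(freq):
    """freq: {symbol: weight}. Repeatedly merge the two lightest trees;
    returns {symbol: bitstring}. Assumes at least two symbols."""
    # Heap entries: (weight, tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]
```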
Tips & Pitfalls
- Check preconditions (e.g., non‑negative weights for Dijkstra).
- Amortized ≠ average over inputs; it’s average over operations.
- Reductions direction: known NP‑C ≤p target problem.
- Graph traversals: watch for off‑by‑one in time stamps and parent init.