For ⌈lg n⌉!:
lg(⌈lg n⌉!)
= Θ(⌈lg n⌉ · lg ⌈lg n⌉)
= Θ(lg n · lg lg n)
= ω(lg n).
Since lg(⌈lg n⌉!) grows strictly faster than lg n, it cannot satisfy
lg(f(n)) = O(lg n). Therefore, ⌈lg n⌉! is not polynomially bounded.
For ⌈lg lg n⌉!:
lg(⌈lg lg n⌉!)
= Θ(⌈lg lg n⌉ · lg ⌈lg lg n⌉)
= Θ(lg lg n · lg lg lg n)
= o(lg n).
Because lg(⌈lg lg n⌉!) grows strictly slower than lg n, we have
lg(⌈lg lg n⌉!) = O(lg n). Hence, ⌈lg lg n⌉! is polynomially bounded.
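As a numeric sanity check (an illustrative sketch, not part of the proof; n = 2^64 is an arbitrary test point), both quantities can be compared against lg n directly:

```python
import math

def lg(x):
    """Base-2 logarithm."""
    return math.log2(x)

def lg_factorial(m):
    """lg(m!) computed via the log-gamma function to avoid huge integers."""
    return math.lgamma(m + 1) / math.log(2)

n = 2 ** 64
lg_n = lg(n)                                # 64
a = lg_factorial(math.ceil(lg(n)))          # lg(ceil(lg n)!) = Theta(lg n * lg lg n)
b = lg_factorial(math.ceil(lg(lg(n))))      # lg(ceil(lg lg n)!) = o(lg n)

print(a > lg_n)   # True: ceil(lg n)! is not polynomially bounded
print(b < lg_n)   # True: ceil(lg lg n)! is polynomially bounded
```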
Recursion Tree
| Level | recursion tree nodes (costs) | work per level |
|---|---|---|
| 0 | \(cn\) | \(cn\) |
| 1 | \(c\alpha n,\; c(1-\alpha)n\) | \(c\alpha n + c(1-\alpha)n = cn\) |
| 2 | \(c\alpha^2 n,\; c\alpha(1-\alpha)n,\; c(1-\alpha)\alpha n,\; c(1-\alpha)^2 n\) | \(cn\) |
| ... | ... | \(cn\) |
Recursion Tree Analysis
Level 0: one node of size n, cost cn.
Level 1: two nodes of sizes αn and (1−α)n, total cost cαn + c(1−α)n = cn.
Level i: at most 2^i nodes whose sizes sum to n, total cost cn.
The recursion stops when subproblem sizes reach Θ(1). Since each level shrinks the largest piece by a factor max(α,1−α)<1, the depth is
L = Θ(log n).
Summing across all levels:
\[ T(n) = \sum_{i=0}^{\log_{1/(1-\alpha)} n} cn = \left( \sum_{i=1}^{\log_{1/(1-\alpha)} n} cn \right) + cn = cn \cdot \log_{1/(1-\alpha)} n + cn = \Omega\bigl(n \log_{1/(1-\alpha)} n\bigr) = \Omega(n \lg n) \]
Each of the \(\Theta(\log n)\) levels costs \(\Theta(n)\), so \[ T(n) = \underbrace{c n}_{\text{level 0}} + \underbrace{c n}_{\text{level 1}} + \cdots + \underbrace{c n}_{\text{level }h-1} = c n \cdot h = \Theta(n \log n). \]
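The key invariant, that the node sizes on every level still partition n so each level costs cn, can be confirmed with a quick simulation (an illustrative sketch; α = 0.3 and n = 10^6 are arbitrary choices):

```python
alpha = 0.3          # any constant with 0 < alpha < 1 (assumed for illustration)
n = 1_000_000.0

level = [n]          # level 0: one node of size n
for _ in range(10):  # expand ten levels of the recursion tree
    level = [s * f for s in level for f in (alpha, 1 - alpha)]
    # the pieces always partition n, so the per-level work is c * n
    assert abs(sum(level) - n) < 1e-3
print(len(level))    # 1024 nodes at level 10, sizes still summing to n
```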
Worst case: the strictly decreasing array A = [n, n−1, n−2, …, 2, 1]. Every partition then yields:
a left sub-array of size 0,
a right sub-array of size n−1.
Recurrence Expansion
\[ \begin{aligned} T(n) &= T(n-1) + c\,n \\[4pt] &= \bigl[T(n-2) + c\,(n-1)\bigr] + c\,n \\[4pt] &= T(n-2) + c\,(n-1+n) \\[4pt] &\;\;\vdots \\[4pt] &= T(1) + c\sum_{k=2}^{n} k \\[4pt] &= \Theta(1) + c\left(\frac{n(n+1)}{2} - 1\right) \\[4pt] &= \Theta(n^2). \end{aligned} \]
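The closed form can be double-checked by unrolling the recurrence numerically (c = 1 and T(1) = 1 are arbitrary choices for this sketch):

```python
def T(n, c=1):
    """Unroll T(n) = T(n-1) + c*n down to the base case T(1) = 1."""
    total = 1                      # T(1)
    for k in range(2, n + 1):      # add the c*k cost of each level
        total += c * k
    return total

# closed form: T(n) = 1 + c*(n*(n+1)/2 - 1), i.e. Theta(n^2)
for m in (10, 100, 1000):
    assert T(m) == 1 + m * (m + 1) // 2 - 1
print(T(1000))   # 500500, roughly n^2 / 2
```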
Every comparison sort can be represented as a binary decision tree:
each internal node = one comparison,
each root to leaf path = the answers to those comparisons for one input,
each leaf = the final permutation the algorithm outputs.
Total possible inputs of length n = n! .
Suppose a comparison sort runs in linear time, i.e. within c·n comparisons, on at least n!/2 of them.
Then at least n!/2 leaves must lie within the first c·n levels of the tree, but a binary tree of depth c·n has at most 2^{c·n} leaves. Using Stirling's approximation lg(n!) = Θ(n lg n):
lg(n!/2) ≤ c · n
Θ(n lg n) ≤ c · n
impossible for large n
Answer: No comparison sort can be linear on at least half of all inputs.
Now suppose the algorithm is linear on a 1/n fraction of the inputs, i.e. on n!/n = (n − 1)! of them:
lg((n-1)!) ≤ c · n
Θ(n lg n) ≤ c · n
same contradiction
Answer: impossible.
Now allow only n!/2^n quick inputs:
lg(n!/2^n) = lg(n!) − n ≤ c · n
Θ(n lg n) − n ≤ c · n
Θ(n lg n) still dominates
Answer: impossible.
No comparison sort can run in linear time on a 1/2, 1/n, or even 1/2^n fraction of all inputs.
Even that vanishingly small fraction of the n! possible permutations still demands Θ(n lg n) bits of information, hence Θ(n lg n) comparisons.
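All three contradictions can be observed numerically; in this sketch c = 10 and n = 10000 are arbitrary test values, and lg(m!) is computed via the log-gamma function:

```python
import math

def lg_factorial(m):
    """lg(m!) via the log-gamma function, avoiding huge integers."""
    return math.lgamma(m + 1) / math.log(2)

c, n = 10, 10_000          # arbitrary linear-time constant and input size

lg_half    = lg_factorial(n) - 1      # lg(n!/2):   the 1/2 fraction
lg_frac_n  = lg_factorial(n - 1)      # lg(n!/n):   the 1/n fraction
lg_frac_2n = lg_factorial(n) - n      # lg(n!/2^n): the 1/2^n fraction

# every candidate leaf count exceeds the 2^(c*n) leaves reachable within
# c*n comparisons, i.e. its logarithm exceeds c*n
print(lg_half > c * n, lg_frac_n > c * n, lg_frac_2n > c * n)
```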
SELECTION(A, k)
    m ← BLACK-BOX(A)                       // median of A, found in O(n)
    IF k = ⌈n/2⌉ RETURN m
    DIVIDE(A)                              // partition around m into A1 (< m) and A2 (> m)
    IF k < ⌈n/2⌉ RETURN SELECTION(A1, k)
    ELSE RETURN SELECTION(A2, k − ⌈n/2⌉)
END SELECTION
In each call on an array of size n:
T(n) ≤ cn + T(n/2)
≤ c(n + n/2 + n/4 + n/8 + ⋯) + T(1)
≤ 2cn + O(1)
= O(n)
The recurrence \[ T(n) = T\bigl(\lceil n/2\rceil\bigr) + \Theta(n) \] solves to \[ T(n) = \Theta(n). \]
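The telescoping bound is easy to verify by unrolling T(n) = n + T(n/2) directly (taking c = 1, T(1) = 1, and integer halving as simplifying assumptions):

```python
def T(n):
    """Unroll T(n) = n + T(n // 2) down to T(1) = 1."""
    return 1 if n <= 1 else n + T(n // 2)

# for powers of two the geometric sum telescopes to exactly 2n - 1 <= 2n
for k in range(1, 15):
    n = 2 ** k
    assert T(n) == 2 * n - 1
print(T(1024))   # 2047, which is below 2 * 1024
```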
Try every parenthesization = Make a list of every possible way that could insert parentheses, then evaluate each one.
RECURSIVE‑MATRIX‑CHAIN = Pick the first place that might cut the chain, then recursively do the same thing on the left piece and on the right piece.
Example:
| n | number of complete parenthesizations | RECURSIVE‑MATRIX‑CHAIN |
|---|---|---|
| 2 | 1 | 1 |
| 3 | 2 | 3 |
| 4 | 5 | 7 |
| 5 | 14 | 17 |
| 6 | 42 | 41 |
| 7 | 132 | 99 |
The growth rates differ: listing all parenthesizations takes ≈ \( 4^{n}\) steps, while RECURSIVE-MATRIX-CHAIN takes ≈ \(3^{n} \) steps.
Answer: Running RECURSIVE-MATRIX-CHAIN is more efficient than enumerating all the ways of parenthesizing, because \(3^{n} \ll 4^{n}\) for large \(n\).
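The middle column of the table can be reproduced from the standard recurrence P(1) = 1, P(n) = Σ_{k=1}^{n−1} P(k)·P(n−k) for the number of complete parenthesizations (the shifted Catalan numbers); a short sketch:

```python
def parenthesizations(n):
    """P(n): number of complete parenthesizations of a chain of n matrices.
    P(1) = 1 and P(n) = sum over split points k of P(k) * P(n - k)."""
    P = [0] * (n + 1)
    P[1] = 1
    for m in range(2, n + 1):
        P[m] = sum(P[k] * P[m - k] for k in range(1, m))
    return P[n]

print([parenthesizations(n) for n in range(2, 8)])  # [1, 2, 5, 14, 42, 132]
```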
X and Y are the input sequences, with m = |X|, n = |Y|, and s = min{m, n}. If m > n, we first swap the two strings so that s = m.
store only the current and previous row, swapping them after each j pass
function LCS_2row(X[1..s], Y[1..ℓ]):
    prev[0..s] ← 0
    curr[0..s] ← 0
    for j ← 1 to ℓ do
        curr[0] ← 0
        for i ← 1 to s do
            if X[i] = Y[j] then
                curr[i] ← prev[i-1] + 1
            else
                curr[i] ← max(prev[i], curr[i-1])
        swap(prev, curr)
    return prev[s]
Space Analysis: two arrays of length s+1 plus loop indices → 2·min{m,n} + O(1).
recycle a single row; carry the diagonal value in a scalar topleft
function LCS_1row(X[1..s], Y[1..ℓ]):
c[0..s] ← 0
for j ← 1 to ℓ do
topleft ← 0
for i ← 1 to s do
save ← c[i]
if X[i] = Y[j] then
c[i] ← topleft + 1
else
c[i] ← max(c[i], c[i‑1])
topleft ← save
return c[s]
Space Analysis: one length‑s+1 array plus a few scalars → min{m,n} + O(1).
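The one-row pseudocode translates almost line for line into Python; this sketch assumes 0-indexed strings and returns only the LCS length:

```python
def lcs_length_one_row(X, Y):
    """LCS length using a single row of min(|X|, |Y|) + 1 entries."""
    if len(X) > len(Y):            # keep the shorter string on the rows
        X, Y = Y, X
    s = len(X)
    c = [0] * (s + 1)              # single reusable row
    for y in Y:
        topleft = 0                # carries the diagonal value c[i-1] of the previous column
        for i in range(1, s + 1):
            save = c[i]
            if X[i - 1] == y:
                c[i] = topleft + 1
            else:
                c[i] = max(c[i], c[i - 1])
            topleft = save
    return c[s]

print(lcs_length_one_row("ABCBDAB", "BDCABA"))   # 4, e.g. "BCBA"
```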
Greedy algorithm rule: Assign the next activity to the hall that becomes free soonest provided it is free by the activity’s start time. If no hall is free, open a new hall.
Order all activities by nondecreasing start time.

function SCHEDULE(A):
sort A by start time
H ← empty min-heap of (finish, hall_id)
hall_count ← 0
assignment ← {}
for (start, finish) in A:
if H ≠ ∅ and H.min.finish ≤ start:
(old_finish, h) ← H.extract_min()
assignment[(start, finish)] ← h
else:
hall_count ← hall_count + 1
h ← hall_count
assignment[(start, finish)] ← h
H.insert((finish, h))
return hall_count, assignment
Explanation:
When an activity starts, picking the hall that frees up earliest preserves maximal flexibility for future activities. This greedy method is optimal for minimising the number of lecture halls.
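A direct Python rendering of SCHEDULE using the standard heapq module (hall numbering starting at 1 follows the pseudocode):

```python
import heapq

def schedule(activities):
    """Assign (start, finish) activities to the fewest lecture halls,
    reusing the hall that frees up earliest whenever one is available."""
    free = []                                  # min-heap of (finish, hall_id)
    hall_count = 0
    assignment = {}
    for start, finish in sorted(activities):   # nondecreasing start time
        if free and free[0][0] <= start:       # earliest-finishing hall is free
            _, h = heapq.heappop(free)
        else:                                  # no hall free: open a new one
            hall_count += 1
            h = hall_count
        assignment[(start, finish)] = h
        heapq.heappush(free, (finish, h))
    return hall_count, assignment

halls, _ = schedule([(0, 10), (5, 15), (12, 20)])
print(halls)   # 2: the third activity reuses the hall freed at time 10
```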
RESET makes the entire counter appear zero simply by doing epoch ← epoch + 1;
Function INCREMENT()
i ← 0
loop
if tag[i] ≠ epoch or bit[i] = 0 then
bit[i] ← 1
tag[i] ← epoch
return
else
bit[i] ← 0
i ← i + 1
Function RESET()
epoch ← epoch + 1
By adding a single integer epoch and tagging each bit with the epoch in which it was last written, we convert RESET, which would otherwise scan all k bits, into an O(1) operation while keeping INCREMENT at amortised O(1). Hence any mix of n INCREMENTs and RESETs finishes in O(n) time.
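A compact Python sketch of the tagged counter; the value method is our own addition for testing (a bit counts only if its tag matches the current epoch), and we assume k is large enough that the counter never overflows:

```python
class ResettableCounter:
    def __init__(self, k):
        self.bit = [0] * k
        self.tag = [0] * k     # epoch in which each bit was last written
        self.epoch = 1         # start above the initial tags

    def increment(self):
        i = 0
        while True:
            # a stale tag means the bit is logically 0 in this epoch
            if self.tag[i] != self.epoch or self.bit[i] == 0:
                self.bit[i] = 1
                self.tag[i] = self.epoch
                return
            self.bit[i] = 0    # carry: clear this bit and move up
            i += 1

    def reset(self):
        self.epoch += 1        # O(1): every tag instantly becomes stale

    def value(self):
        """Testing helper: interpret only bits written in the current epoch."""
        return sum(b << i
                   for i, (b, t) in enumerate(zip(self.bit, self.tag))
                   if t == self.epoch)
```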