Q1. Is the function ⌈lg 𝑛⌉! polynomially bounded? Is the function ⌈lg lg 𝑛⌉! polynomially bounded? Prove or disprove it.

Recall that f(n) is polynomially bounded iff lg f(n) = O(lg n), and that lg(m!) = Θ(m lg m) by Stirling's approximation.

For ⌈lg n⌉!:

lg(⌈lg n⌉!)
= Θ(⌈lg n⌉ · lg ⌈lg n⌉)
= Θ(lg n · lg lg n)
= ω(lg n).

Since lg(⌈lg n⌉!) grows strictly faster than lg n, it cannot satisfy lg(f(n)) = O(lg n). Therefore, ⌈lg n⌉! is not polynomially bounded.

Answer: ⌈lg n⌉! is not polynomially bounded.

For ⌈lg lg n⌉!:

lg(⌈lg lg n⌉!)
= Θ(⌈lg lg n⌉ · lg ⌈lg lg n⌉)
= Θ(lg lg n · lg lg lg n)
= o(lg n).

Because lg(⌈lg lg n⌉!) grows strictly slower than lg n, we have lg(⌈lg lg n⌉!) = O(lg n). Hence, ⌈lg lg n⌉! is polynomially bounded.

Answer: ⌈lg lg n⌉! is polynomially bounded.
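As a numeric sanity check (not a proof), the two ratios lg(⌈lg n⌉!)/lg n and lg(⌈lg lg n⌉!)/lg n can be tabulated directly. The sketch below parameterizes by L = lg n so that astronomically large n never has to be represented, and uses lgamma to compute lg(m!):

```python
from math import ceil, lgamma, log, log2

def lg_factorial(m):
    """lg(m!) via the log-gamma function: lgamma(m + 1) / ln 2."""
    return lgamma(m + 1) / log(2)

# Parameterize by L = lg n.
def ratio_lg(L):
    """lg(ceil(lg n)!) / lg n -- should grow without bound."""
    return lg_factorial(ceil(L)) / L

def ratio_lglg(L):
    """lg(ceil(lg lg n)!) / lg n -- should tend to 0."""
    return lg_factorial(ceil(log2(L))) / L

for L in (10, 100, 1000):
    print(L, round(ratio_lg(L), 2), round(ratio_lglg(L), 4))
```

The first ratio keeps climbing (so lg(⌈lg n⌉!) ≠ O(lg n)) while the second shrinks toward zero, matching the two conclusions above.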

Q2. Use a recursion tree to justify a good guess for the solution to the recurrence \(T(n) = T(\alpha n) + T((1-\alpha) n) + \Theta(n)\), where \(\alpha\) is a constant in the range \(0 < \alpha < 1\).

Recursion Tree

Level   subproblem sizes   work
0       \(n\)   \(c\,n\)
1       \(\alpha n,\; (1-\alpha)n\)   \(c\alpha n + c(1-\alpha)n = c n\)
2       \(\alpha^2 n,\; \alpha(1-\alpha)n,\; (1-\alpha)\alpha n,\; (1-\alpha)^2 n\)   \(c n\)
⋯       ⋯   \(c n\)

Recursion Tree Analysis

  1. Level 0: one node of size n, cost = c n.
  2. Level 1: two nodes of sizes αn and (1−α)n, total cost c·αn + c·(1−α)n = c n.
  3. Level 2: four nodes sizes sum to n, cost = c n.
  4. Level i: at most \(2^i\) nodes whose sizes sum to at most n, so the total cost of any level is at most c n.

The recursion stops when subproblem sizes reach Θ(1). Each edge shrinks a subproblem by a factor of either α or 1−α, so the shortest root-to-leaf path has length \(\log_{1/\min(\alpha,1-\alpha)} n\) and the longest has length \(\log_{1/\max(\alpha,1-\alpha)} n\). Since α is a constant, the depth is

L = Θ(log n).

Summing across all levels (assume without loss of generality that \(\alpha \le 1/2\)): every level down to depth \(\log_{1/\alpha} n\) is complete and contributes exactly \(cn\), while no level contributes more than \(cn\) and the tree ends by depth \(\log_{1/(1-\alpha)} n\). Hence

\[ T(n) \;\ge\; \sum_{i=0}^{\log_{1/\alpha} n} cn \;=\; \Omega(n \lg n), \qquad T(n) \;\le\; \sum_{i=0}^{\log_{1/(1-\alpha)} n} cn \;=\; O(n \lg n). \]
  

Each of the \(\Theta(\log n)\) levels costs \(\Theta(n)\), so \[ T(n) = \underbrace{c n}_{\text{level 0}} + \underbrace{c n}_{\text{level 1}} + \cdots + \underbrace{c n}_{\text{level }h-1} = c n \cdot h = \Theta(n \log n). \]

Answer:\[ T(n) = \Theta\bigl(n \log n\bigr). \]
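The guess can also be checked numerically by evaluating an integer version of the recurrence and comparing it to \(n \lg n\). In this sketch the constant α = 0.3 and the base case T(n) = 1 for n < 2 are arbitrary assumptions:

```python
from functools import lru_cache
from math import log2

ALPHA = 0.3   # arbitrary choice of the constant alpha in (0, 1)

@lru_cache(maxsize=None)
def T(n):
    """Integer version of T(n) = T(alpha*n) + T((1-alpha)*n) + n,
    with T(n) = 1 for n < 2 as the base case."""
    if n < 2:
        return 1
    left = max(1, int(ALPHA * n))   # size of the alpha-branch subproblem
    return T(left) + T(n - left) + n

for k in (10, 12, 14):
    n = 2 ** k
    print(n, round(T(n) / (n * log2(n)), 3))   # ratio stays bounded
```

The ratio T(n)/(n lg n) settles into a narrow band as n grows, consistent with the Θ(n log n) guess.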

3. Show that the running time of QUICKSORT is \(\Theta(n^2)\) when the array A contains distinct elements and is sorted in decreasing order.

For the decreasing array A = [n, n−1, n−2, …, 2, 1], the pivot A[n] = 1 is the smallest element, so the first PARTITION call splits the array as unevenly as possible:

left subarray size is 0
right subarray size is n−1

\[ T(n) \;=\; T(n-1) \;+\; T(0) \;+\; \Theta(n) \;=\; T(n-1) \;+\; c\,n, \]

Recurrence Expansion

\[ \begin{aligned} T(n) &= T(n-1) + c\,n \\[6pt] &= \bigl[T(n-2) + c\,(n-1)\bigr] + c\,n \\[4pt] &= T(n-2) + c\,\bigl((n-1) + n\bigr) \\[4pt] &= \cdots \\[4pt] &= T(0) + c\sum_{k=1}^{n} k \\[4pt] &= \Theta(1) + c\,\frac{n(n+1)}{2} \\[4pt] &= \Theta(n^2). \end{aligned} \]

Answer: the running time in this worst case ordering is \(\displaystyle T(n) = \Theta(n^2)\).
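The quadratic comparison count can be observed directly by instrumenting PARTITION. This sketch uses the Lomuto scheme with the last element as pivot (one common implementation; the exact count depends on the partition scheme), for which a strictly decreasing input of size n costs exactly n(n−1)/2 comparisons:

```python
import sys

sys.setrecursionlimit(10000)

comparisons = 0

def partition(A, p, r):
    """Lomuto partition with A[r] as pivot; counts element comparisons."""
    global comparisons
    x = A[r]
    i = p - 1
    for j in range(p, r):
        comparisons += 1
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)

n = 100
A = list(range(n, 0, -1))              # strictly decreasing input
quicksort(A, 0, n - 1)
print(comparisons, n * (n - 1) // 2)   # both equal 4950
```

Every recursive call strips off one element, so the comparison counts are (n−1) + (n−2) + ⋯ + 1 = n(n−1)/2, i.e. Θ(n²).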

4. Show that there is no comparison sort whose running time is linear for at least half of the n! inputs of length n. What about a fraction of 1/n of the inputs of length n? What about a fraction \(1/2^n\)?

Every comparison sort can be represented as a binary decision tree:

each internal node = one comparison,
each root to leaf path = the answers to those comparisons for one input,
each leaf = the final permutation the algorithm outputs.

Case 1: half of all inputs.

There are n! possible inputs of length n. Suppose the sort is linear, i.e. makes at most c·n comparisons, on at least n!/2 of them. Each such input must reach its own leaf within the first c·n levels of the tree, so the tree needs at least n!/2 leaves at depth ≤ c·n, i.e. 2^{c·n} ≥ n!/2. Taking logarithms and using Stirling's approximation lg(n!) = Θ(n lg n):

lg(n!/2) = lg(n!) − 1  ≤  c·n
Θ(n lg n)              ≤  c·n
 impossible for large n

Answer: No comparison sort can be linear on at least half of all inputs.

Case 2: a 1/n fraction.

Now the algorithm may be linear on at least n!/n = (n − 1)! inputs, so the same argument requires

lg((n−1)!)  ≤  c·n
Θ(n lg n)   ≤  c·n
 the same contradiction, since lg((n−1)!) = Θ((n−1) lg(n−1)) = Θ(n lg n)

Answer: impossible.

Case 3: a \(1/2^n\) fraction.

Allow only n!/2^n quick inputs:

lg(n!/2^n) = lg(n!) − n  ≤  c·n
Θ(n lg n)                ≤  (c+1)·n
Θ(n lg n) still dominates

Answer: impossible.

Final Answer:

No comparison sort can run in linear time on even a 1/2, 1/n, or \(1/2^n\) fraction of the inputs.

Even the vanishingly small fraction \(n!/2^n\) of the n! possible permutations contains \(2^{\Theta(n \lg n)}\) inputs, and distinguishing them demands Θ(n lg n) comparisons.
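A small numeric illustration of the third case: the quantity (lg(n!) − n)/n keeps growing, so no constant c can satisfy lg(n!) − n ≤ c·n for all n. The sketch uses lgamma to stand in for lg(n!):

```python
from math import lgamma, log

def lg_factorial(n):
    """lg(n!) computed as lgamma(n + 1) / ln 2."""
    return lgamma(n + 1) / log(2)

def ratio(n):
    """(lg(n!) - n) / n -- would need to be <= c for a linear sort."""
    return (lg_factorial(n) - n) / n

for n in (2**5, 2**10, 2**20):
    print(n, round(ratio(n), 2))
```

The ratio grows like lg n, which is exactly why Θ(n lg n) dominates every linear bound.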

5. You have a “black-box” worst-case linear-time median subroutine. Give a simple, linear-time algorithm that solves the selection problem for an arbitrary order statistic.

Algorithm: SELECTION(A, k)

 SELECTION(A, k)
     if |A| = 1 return A[1]
     x ← BLACK-BOX-MEDIAN(A)
     partition A, in Θ(n) time, into A1 (elements < x) and A2 (elements > x)
     q ← |A1| + 1
     if k = q return x
     if k < q return SELECTION(A1, k)
     else return SELECTION(A2, k − q)
 END SELECTION

Time Complexity Analysis

Each call on an array of size n does Θ(n) work (the median call plus the partition) and recurses on at most ⌈n/2⌉ elements:

     T(n) ≤ T(n/2) + c·n
          ≤ c(n + n/2 + n/4 + n/8 + ⋯) + T(1)
          ≤ 2c·n + Θ(1)
          = O(n)

The recurrence is \[ T(n) = T\bigl(\lceil n/2\rceil\bigr) + \Theta(n), \] which solves to a linear bound. Answer: \[ T(n) = \Theta(n). \]
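A minimal Python sketch of the algorithm, with sorting standing in for the black-box linear-time median (it returns the same value, the lower median, just not in linear time):

```python
def median(A):
    """Stand-in for the worst-case linear-time black box: returns the
    lower median of A (sorting is O(n log n), but the value is the same)."""
    return sorted(A)[(len(A) - 1) // 2]

def selection(A, k):
    """Return the k-th smallest element of A (1-indexed)."""
    if len(A) == 1:
        return A[0]
    x = median(A)
    less = [a for a in A if a < x]       # partition around the median: Θ(n)
    greater = [a for a in A if a > x]
    equal = len(A) - len(less) - len(greater)
    if k <= len(less):
        return selection(less, k)
    if k <= len(less) + equal:
        return x
    return selection(greater, k - len(less) - equal)

B = [9, 3, 7, 1, 5]
print([selection(B, k) for k in range(1, 6)])   # [1, 3, 5, 7, 9]
```

Because the recursion always discards at least half of the elements, swapping in a genuine linear-time median makes every call cost Θ(n), giving the T(n) = T(⌈n/2⌉) + Θ(n) recurrence above.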

6. Which is a more efficient way to determine the optimal number of multiplications in a matrix-chain multiplication problem: enumerating all the ways of parenthesizing the product and computing the number of multiplications for each, or running RECURSIVE-MATRIX-CHAIN? Justify your answer.

Try every parenthesization = list every possible way to insert parentheses, then compute the multiplication count for each.

RECURSIVE‑MATRIX‑CHAIN = Pick the first place that might cut the chain, then recursively do the same thing on the left piece and on the right piece.

Example:

n    complete parenthesizations    RECURSIVE-MATRIX-CHAIN
2    1                             1
3    2                             3
4    5                             7
5    14                            17
6    42                            41
7    132                           99

The growth rates differ: the number of complete parenthesizations is \(\Omega(4^{n}/n^{3/2})\) (the Catalan numbers, roughly \(4^n\) steps to enumerate), while RECURSIVE-MATRIX-CHAIN makes \(O(3^{n})\) recursive calls.

Answer: Running RECURSIVE-MATRIX-CHAIN is more efficient than enumerating all the ways of parenthesizing, because \(3^{n} \ll 4^{n}\) for large \(n\).
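Both columns can be regenerated in a few lines. Note that the exact call count depends on the counting convention; the convention sketched below (count every invocation, including those on length-1 subchains) solves to exactly \(3^{n-1}\), which is the \(3^n\)-type growth cited above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parenthesizations(n):
    """P(n): number of complete parenthesizations of an n-matrix chain
    (the Catalan numbers, P(n) = C_{n-1})."""
    if n == 1:
        return 1
    return sum(parenthesizations(k) * parenthesizations(n - k)
               for k in range(1, n))

def rmc_calls(n):
    """Invocations made by an unmemoized RECURSIVE-MATRIX-CHAIN on a
    chain of n matrices: each call on a subchain of length > 1 recurses
    on both sides of every split point. Solves to 3^(n-1)."""
    if n == 1:
        return 1
    return 1 + sum(rmc_calls(k) + rmc_calls(n - k) for k in range(1, n))

for n in range(2, 8):
    print(n, parenthesizations(n), rmc_calls(n))
```

The first column reproduces the Catalan values 1, 2, 5, 14, 42, 132; the second grows by a factor of 3 per matrix, far slower than the \(4^n\)-type growth of enumeration.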

7. Show how to compute the length of an LCS using only 2·min{m, n} entries in the c table plus O(1) additional space. Then show how to do the same thing, but using min{m, n} entries plus O(1) additional space.

X and Y are the input sequences.
m = |X|, n = |Y|, s = min{m,n}.
If m > n, first swap the two sequences so that X is the shorter one; then s = min{m, n} and ℓ = max{m, n}.

Algorithm 1, Two Row DP

store only the current and previous row, swapping them after each j pass

function LCS_2row(X[1..s], Y[1..ℓ]):
    prev[0..s] ← 0
    curr[0..s] ← 0
    for j ← 1 to ℓ do
        curr[0] ← 0
        for i ← 1 to s do
            if X[i] = Y[j] then
                curr[i] ← prev[i−1] + 1
            else
                curr[i] ← max(prev[i], curr[i−1])
        swap(prev, curr)
    return prev[s]

Space Analysis: two arrays of length s+1 plus loop indices → 2·min{m,n} + O(1).

Algorithm 2, One Row DP

recycle a single row; carry the diagonal value in a scalar topleft

function LCS_1row(X[1..s], Y[1..ℓ]):
    c[0..s] ← 0                           
    for j ← 1 to ℓ do
        topleft ← 0                      
        for i ← 1 to s do
            save ← c[i]                  
            if X[i] = Y[j] then
                c[i] ← topleft + 1
            else
                c[i] ← max(c[i], c[i‑1])
            topleft ← save               
    return c[s]

Space Analysis: one length‑s+1 array plus a few scalars → min{m,n} + O(1).
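The one-row scheme translates directly to Python; a sketch:

```python
def lcs_length(X, Y):
    """LCS length using one row of min(|X|,|Y|)+1 entries plus O(1) scalars."""
    if len(X) > len(Y):
        X, Y = Y, X          # make X the shorter sequence
    s = len(X)
    c = [0] * (s + 1)        # the single recycled row
    for y in Y:
        topleft = 0          # carries the diagonal value c[i-1] of the old row
        for i in range(1, s + 1):
            save = c[i]      # old c[i] becomes the next diagonal
            if X[i - 1] == y:
                c[i] = topleft + 1
            else:
                c[i] = max(c[i], c[i - 1])
            topleft = save
    return c[s]

print(lcs_length("ABCBDAB", "BDCABA"))   # the classic CLRS example pair
```

Only the array c and the scalars topleft and save are ever written, matching the min{m,n} + O(1) space bound.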

8. You are given a set of activities to schedule among a large number of lecture halls, where any activity can take place in any lecture hall. You wish to schedule all the activities using as few lecture halls as possible. Give an efficient greedy algorithm to determine which activity should use which lecture hall.

Greedy algorithm rule: Assign the next activity to the hall that becomes free soonest provided it is free by the activity’s start time. If no hall is free, open a new hall.

Sort all activities by nondecreasing start time, and maintain a min-heap keyed by the finish time of the last activity scheduled in each lecture hall.
function SCHEDULE(A):                
    sort A by start time                              

    H ← empty min-heap of (finish, hall_id)           
    hall_count ← 0
    assignment ← {}                                   

    for (start, finish) in A:                         
        if H ≠ ∅ and H.min.finish ≤ start:            
            (old_finish, h) ← H.extract_min()
            assignment[(start, finish)] ← h
        else:
            hall_count ← hall_count + 1               
            h ← hall_count
            assignment[(start, finish)] ← h
        H.insert((finish, h))

    return hall_count, assignment

Explanation:

When an activity starts, picking the hall that frees up earliest preserves maximal flexibility for future activities
no later algorithm can place this activity in a hall that frees up sooner.
At any moment, the algorithm uses exactly as many halls as activities currently overlap. Hence it never exceeds and therefore matches the theoretical minimum.

Operation                              Cost
Sort by start time                     \(O(n \log n)\)
Heap inserts/extracts (\(n\) of each)  \(O(n \log n)\)
Total                                  \(O(n \log n)\) time, \(O(n)\) space for heap + output

This greedy method is optimal for minimising the number of lecture halls.
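The pseudocode above maps directly onto Python's heapq; a sketch assuming activities are (start, finish) pairs and that a hall finishing at time t is free for an activity starting at time t:

```python
import heapq

def schedule(activities):
    """Greedily assign (start, finish) activities to numbered halls,
    reusing the hall that frees up soonest whenever possible."""
    H = []                    # min-heap of (finish_time, hall_id)
    hall_count = 0
    assignment = {}
    for start, finish in sorted(activities):   # nondecreasing start time
        if H and H[0][0] <= start:             # soonest-freed hall is free
            _, h = heapq.heappop(H)
        else:                                  # every hall busy: open one
            hall_count += 1
            h = hall_count
        assignment[(start, finish)] = h
        heapq.heappush(H, (finish, h))
    return hall_count, assignment

halls, assignment = schedule([(0, 3), (1, 4), (2, 5), (4, 6)])
print(halls)          # 3: three activities overlap at time 2
print(assignment)
```

In the example, (0,3), (1,4), and (2,5) all overlap at time 2, forcing three halls, and (4,6) reuses the hall freed at time 3.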

9. You wish not only to increment a counter but also to reset it to 0 (i.e., make all bits in it 0). Counting the time to examine or modify a bit as \(\Theta(1)\), show how to implement a counter as an array of bits so that any sequence of n INCREMENT and RESET operations takes O(n) time on an initially zero counter.

bit[i] = the physical bit value.
tag[i] = the epoch in which bit[i] was last written.
epoch = the current global epoch integer, initially 0.

A bit is logically 1 iff tag[i] = epoch and bit[i] = 1. If tag[i] ≠ epoch, the bit is treated as 0, even though the old physical value is still stored.

RESET makes the entire counter appear zero in Θ(1) time simply by executing epoch ← epoch + 1.

Function INCREMENT()
    i ← 0
    loop               
        if tag[i] ≠ epoch or bit[i] = 0 then    
            bit[i] ← 1
            tag[i] ← epoch
            return
        else                                    
            bit[i] ← 0         
            i ← i + 1
Function RESET()
    epoch ← epoch + 1   

By adding a single integer epoch and tagging each bit with the epoch in which it was last written, we convert RESET, which would otherwise scan all k bits, into an O(1) operation while keeping INCREMENT at amortized O(1): within each epoch the usual aggregate argument applies, since every bit an INCREMENT sets to 1 pays for the single time a later carry clears it. So any mix of n INCREMENT and RESET operations finishes in O(n) time.
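A sketch of the tagged counter in Python; the fixed width k = 32 is an arbitrary assumption for the demo:

```python
class EpochCounter:
    """k-bit binary counter with O(1) RESET via epoch tagging."""
    def __init__(self, k=32):
        self.bit = [0] * k
        self.tag = [0] * k    # epoch in which bit[i] was last written
        self.epoch = 0

    def _logical(self, i):
        # A bit counts as 1 only if it was written in the current epoch.
        return self.bit[i] if self.tag[i] == self.epoch else 0

    def increment(self):
        i = 0
        while self._logical(i) == 1:   # carry over the trailing 1-bits
            self.bit[i] = 0
            self.tag[i] = self.epoch
            i += 1
        self.bit[i] = 1
        self.tag[i] = self.epoch

    def reset(self):
        self.epoch += 1                # all bits become logically 0

    def value(self):
        return sum(self._logical(i) << i for i in range(len(self.bit)))

c = EpochCounter()
for _ in range(5):
    c.increment()
print(c.value())    # 5
c.reset()
print(c.value())    # 0
for _ in range(3):
    c.increment()
print(c.value())    # 3
```

Stale bits left over from an earlier epoch are simply overwritten the next time INCREMENT touches them, so no cleanup pass is ever needed.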