Data Structures and Algorithm Analysis in C (second edition) Solutions Manual

Mark Allen Weiss
Florida International University

Preface

Included in this manual are answers to most of the exercises in the textbook Data Structures and Algorithm Analysis in C, second edition, published by Addison-Wesley. These answers reflect the state of the book in the first printing. Specifically omitted are likely programming assignments and any question whose solution is pointed to by a reference at the end of the chapter. Solutions vary in degree of completeness; generally, minor details are left to the reader. For clarity, programs are meant to be pseudo-C rather than completely perfect code. Errors can be reported to [email protected]. Thanks to Grigori Schwarz and Brian Harvey for pointing out errors in previous incarnations of this manual.

Table of Contents

1. Chapter 1: Introduction
2. Chapter 2: Algorithm Analysis
3. Chapter 3: Lists, Stacks, and Queues
4. Chapter 4: Trees
5. Chapter 5: Hashing
6. Chapter 6: Priority Queues (Heaps)
7. Chapter 7: Sorting
8. Chapter 8: The Disjoint Set ADT
9. Chapter 9: Graph Algorithms
10. Chapter 10: Algorithm Design Techniques
11. Chapter 11: Amortized Analysis
12. Chapter 12: Advanced Data Structures and Implementation


Chapter 1: Introduction

1.3

Because of round-off errors, it is customary to specify the number of decimal places that should be included in the output and round up accordingly. Otherwise, numbers come out looking strange. We assume error checks have already been performed; the routine Separate is left to the reader. Code is shown in Fig. 1.1.

1.4

The general way to do this is to write a procedure with heading

void ProcessFile( const char *FileName );

which opens FileName, does whatever processing is needed, and then closes it. If a line of the form

#include SomeFile

is detected, then the call ProcessFile( SomeFile ); is made recursively. Self-referential includes can be detected by keeping a list of files for which a call to ProcessFile has not yet terminated, and checking this list before making a new call to ProcessFile.
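As an illustrative sketch only (not the manual's solution): the fixed-size in-progress list, the echoing of non-include lines, and the line format assumed by sscanf are all simplifying assumptions.
_______________________________________________________________________________
#include <stdio.h>
#include <string.h>

#define MAX_OPEN 64

static const char *InProgress[ MAX_OPEN ];  /* Files whose calls have not yet terminated */
static int NumInProgress = 0;

void ProcessFile( const char *FileName )
{
    FILE *Fp;
    char Line[ 1024 ], Included[ 1024 ];
    int i;

    for( i = 0; i < NumInProgress; i++ )
        if( strcmp( InProgress[ i ], FileName ) == 0 )
            return;    /* Self-referential include; do not recurse forever */

    if( NumInProgress == MAX_OPEN || ( Fp = fopen( FileName, "r" ) ) == NULL )
        return;
    InProgress[ NumInProgress++ ] = FileName;

    while( fgets( Line, sizeof Line, Fp ) != NULL )
        if( sscanf( Line, "#include %1023s", Included ) == 1 )
            ProcessFile( Included );    /* Recursive call for the included file */
        else
            fputs( Line, stdout );      /* The "processing" here is just echoing */

    fclose( Fp );
    NumInProgress--;
}
_______________________________________________________________________________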

1.5

(a) The proof is by induction. The theorem is clearly true for 0 < X ≤ 1, since it is true for X = 1, and for X < 1, log X is negative. It is also easy to see that the theorem holds for 1 < X ≤ 2, since it is true for X = 2, and for X < 2, log X is at most 1. Suppose the theorem is true for p < X ≤ 2p (where p is a positive integer), and consider any 2p < Y ≤ 4p (p ≥ 1). Then log Y = 1 + log(Y/2) < 1 + Y/2 < Y/2 + Y/2 ≤ Y, where the first inequality follows by the inductive hypothesis.
(b) Let 2^X = A. Then A^B = (2^X)^B = 2^{XB}. Thus log A^B = XB. Since X = log A, the theorem is proved.

1.6

(a) The sum is 4/3 and follows directly from the formula.
(b) S = 1/4 + 2/4² + 3/4³ + ... ; 4S = 1 + 2/4 + 3/4² + ... . Subtracting the first equation from the second gives 3S = 1 + 1/4 + 1/4² + ... . By part (a), 3S = 4/3, so S = 4/9.
(c) S = 1/4 + 4/4² + 9/4³ + ... ; 4S = 1 + 4/4 + 9/4² + 16/4³ + ... . Subtracting the first equation from the second gives 3S = 1 + 3/4 + 5/4² + 7/4³ + ... . Rewriting, we get 3S = 2 Σ_{i=0}^{∞} i/4^i + Σ_{i=0}^{∞} 1/4^i. Thus 3S = 2(4/9) + 4/3 = 20/9. Thus S = 20/27.
(d) Let S_N = Σ_{i=0}^{∞} i^N/4^i. Follow the same method as in parts (a)-(c) to obtain a formula for S_N in terms of S_{N−1}, S_{N−2}, ..., S_0 and solve the recurrence. Solving the recurrence is very difficult.


_______________________________________________________________________________
double RoundUp( double N, int DecPlaces )
{
    int i;
    double AmountToAdd = 0.5;

    for( i = 0; i < DecPlaces; i++ )
        AmountToAdd /= 10;
    return N + AmountToAdd;
}

void PrintFractionPart( double FractionPart, int DecPlaces )
{
    int i, ADigit;

    for( i = 0; i < DecPlaces; i++ )
    {
        FractionPart *= 10;
        ADigit = IntPart( FractionPart );
        PrintDigit( ADigit );
        FractionPart = DecPart( FractionPart );
    }
}

void PrintReal( double N, int DecPlaces )
{
    int IntegerPart;
    double FractionPart;

    if( N < 0 )
    {
        putchar( '-' );
        N = -N;
    }
    N = RoundUp( N, DecPlaces );
    IntegerPart = IntPart( N );
    FractionPart = DecPart( N );
    PrintOut( IntegerPart );    /* Using routine in text */
    if( DecPlaces > 0 )
        putchar( '.' );
    PrintFractionPart( FractionPart, DecPlaces );
}
Fig. 1.1.
_______________________________________________________________________________

1.7

Σ_{i=⌊N/2⌋}^{N} 1/i = Σ_{i=1}^{N} 1/i − Σ_{i=1}^{⌊N/2⌋−1} 1/i ≈ ln N − ln(N/2) ≈ ln 2.

1.8

2^4 = 16 ≡ 1 (mod 5). (2^4)^25 ≡ 1^25 (mod 5). Thus 2^100 ≡ 1 (mod 5).

1.9

(a) Proof is by induction. The statement is clearly true for N = 1 and N = 2. Assume true for N = 1, 2, ..., k. Then

Σ_{i=1}^{k+1} F_i = Σ_{i=1}^{k} F_i + F_{k+1}.

By the induction hypothesis, the value of the sum on the right is F_{k+2} − 2 + F_{k+1} = F_{k+3} − 2, where the latter equality follows from the definition of the Fibonacci numbers. This proves the claim for N = k + 1, and hence for all N.
(b) As in the text, the proof is by induction. Observe that φ + 1 = φ². This implies that φ⁻¹ + φ⁻² = 1. For N = 1 and N = 2, the statement is true. Assume the claim is true for N = 1, 2, ..., k. By the definition, F_{k+1} = F_k + F_{k−1}, and we can use the inductive hypothesis on the right-hand side, obtaining

F_{k+1} < φ^k + φ^{k−1} = φ⁻¹φ^{k+1} + φ⁻²φ^{k+1} = (φ⁻¹ + φ⁻²)φ^{k+1} = φ^{k+1}

proving the theorem.
(c) See any of the advanced math references at the end of the chapter. The derivation involves the use of generating functions.

1.10

(a) Σ_{i=1}^{N} (2i−1) = 2 Σ_{i=1}^{N} i − Σ_{i=1}^{N} 1 = N(N+1) − N = N².
(b) The easiest way to prove this is by induction. The case N = 1 is trivial. Otherwise,

Σ_{i=1}^{N+1} i³ = (N+1)³ + Σ_{i=1}^{N} i³
              = (N+1)³ + N²(N+1)²/4
              = (N+1)²[ N²/4 + (N+1) ]
              = (N+1)²(N² + 4N + 4)/4
              = (N+1)²(N+2)²/2²
              = [ (N+1)(N+2)/2 ]²
              = ( Σ_{i=1}^{N+1} i )²


Chapter 2: Algorithm Analysis

2.1

2/N, 37, √N, N, N log log N, N log N, N log(N²), N log² N, N^1.5, N², N² log N, N³, 2^{N/2}, 2^N. N log N and N log(N²) grow at the same rate.

2.2

(a) True.
(b) False. A counterexample is T1(N) = 2N, T2(N) = N, and f(N) = N.
(c) False. A counterexample is T1(N) = N², T2(N) = N, and f(N) = N².
(d) False. The same counterexample as in part (c) applies.

2.3

We claim that N log N is the slower growing function. To see this, suppose otherwise. Then N^{ε/√(log N)} would grow slower than log N. Taking logs of both sides, we find that, under this assumption, (ε/√(log N)) log N grows slower than log log N. But the first expression simplifies to ε√(log N). If L = log N, then we are claiming that ε√L grows slower than log L, or equivalently, that ε²L grows slower than log² L. But we know that log² L = o(L), so the original assumption is false, proving the claim.

2.4

Clearly, log^{k1} N = o(log^{k2} N) if k1 < k2, so we need to worry only about positive integers. The claim is clearly true for k = 0 and k = 1. Suppose it is true for k < i. Then, by L'Hospital's rule,

lim_{N→∞} (log^i N)/N = lim_{N→∞} i (log^{i−1} N)/N

The second limit is zero by the inductive hypothesis, proving the claim.

2.5

Let f(N) = 1 when N is even, and N when N is odd. Likewise, let g(N) = 1 when N is odd, and N when N is even. Then the ratio f(N)/g(N) oscillates between 0 and ∞.

2.6

For all these programs, the following analysis will agree with a simulation:
(I) The running time is O(N).
(II) The running time is O(N²).
(III) The running time is O(N³).
(IV) The running time is O(N²).
(V) j can be as large as i², which could be as large as N². k can be as large as j, which is N². The running time is thus proportional to N·N²·N², which is O(N⁵).
(VI) The if statement is executed at most N³ times, by previous arguments, but it is true only O(N²) times (because it is true exactly i times for each i). Thus the innermost loop is only executed O(N²) times. Each time through, it takes O(j²) = O(N²) time, for a total of O(N⁴). This is an example where multiplying loop sizes can occasionally give an overestimate.

2.7

(a) It should be clear that all algorithms generate only legal permutations. The first two algorithms have tests to guarantee no duplicates; the third algorithm works by shuffling an array that initially has no duplicates, so none can occur. It is also clear that the first two algorithms are completely random, and that each permutation is equally likely. The third algorithm, due to R. Floyd, is not as obvious; the correctness can be proved by induction.

-4-

See J. Bentley, "Programming Pearls," Communications of the ACM 30 (1987), 754-757.

Note that if the second line of algorithm 3 is replaced with the statement

Swap( A[i], A[ RandInt( 0, N-1 ) ] );

then not all permutations are equally likely. To see this, notice that for N = 3, there are 27 equally likely ways of performing the three swaps, depending on the three random integers. Since there are only 6 permutations, and 6 does not evenly divide 27, each permutation cannot possibly be equally represented.

(b) For the first algorithm, the time to decide if a random number to be placed in A[i] has not been used earlier is O(i). The expected number of random numbers that need to be tried is N/(N − i). This is obtained as follows: i of the N numbers would be duplicates. Thus the probability of success is (N − i)/N, and the expected number of independent trials is N/(N − i). The time bound is thus

Σ_{i=0}^{N−1} iN/(N − i) < N² Σ_{i=0}^{N−1} 1/(N − i) = N² Σ_{j=1}^{N} 1/j = O(N² log N)

The second algorithm saves a factor of i for each random number, and thus reduces the time bound to O(N log N) on average. The third algorithm is clearly linear.
(c, d) The running times should agree with the preceding analysis if the machine has enough memory. If not, the third algorithm will not seem linear because of a drastic increase for large N.
(e) The worst-case running time of algorithms I and II cannot be bounded because there is always a finite probability that the program will not terminate by some given time T. The algorithm does, however, terminate with probability 1. The worst-case running time of the third algorithm is linear; its running time does not depend on the sequence of random numbers.
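For concreteness, here is a sketch of the third (shuffling) algorithm in the form discussed above; RandInt and its simple rand()-based stand-in are our own assumptions, not code from the text.
_______________________________________________________________________________
#include <stdlib.h>

/* Uniform random integer in the closed interval [ Low, Upper ]. */
/* The rand()-based implementation is a simple (slightly biased) stand-in. */
static int RandInt( int Low, int Upper )
{
    return Low + rand( ) % ( Upper - Low + 1 );
}

static void Swap( int *X, int *Y )
{
    int Tmp = *X; *X = *Y; *Y = Tmp;
}

void RandomPermutation( int A[ ], int N )
{
    int i;

    for( i = 0; i < N; i++ )
        A[ i ] = i + 1;          /* Start with a permutation: no duplicates possible */
    for( i = 1; i < N; i++ )
        Swap( &A[ i ], &A[ RandInt( 0, i ) ] );  /* Swap with a random earlier slot */
}
_______________________________________________________________________________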

2.8

Algorithm 1 would take about 5 days for N = 10,000, 14.2 years for N = 100,000, and 140 centuries for N = 1,000,000. Algorithm 2 would take about 3 hours for N = 100,000 and about 2 weeks for N = 1,000,000. Algorithm 3 would use 1½ minutes for N = 1,000,000. These calculations assume a machine with enough memory to hold the array. Algorithm 4 solves a problem of size 1,000,000 in 3 seconds.

2.9

(a) O(N²). (b) O(N log N).

2.10 (c) The algorithm is linear.

2.11 Use a variation of binary search to get an O(log N) solution (assuming the array is preread).

2.13 (a) Test to see if N is an odd number (or 2) and is not divisible by 3, 5, 7, ..., √N.
(b) O(√N), assuming that all divisions count for one unit of time.
(c) B = O(log N).
(d) O(2^{B/2}).
(e) If a 20-bit number can be tested in time T, then a 40-bit number would require about T² time.
(f) B is the better measure because it more accurately represents the size of the input.
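A minimal sketch of the trial-division test from 2.13(a), treating N as an int and counting each % as one division:
_______________________________________________________________________________
int IsPrime( int N )
{
    int i;

    if( N < 2 )
        return 0;
    if( N % 2 == 0 )
        return N == 2;                 /* 2 is the only even prime */
    for( i = 3; i * i <= N; i += 2 )   /* Odd trial divisors up to sqrt(N) */
        if( N % i == 0 )
            return 0;
    return 1;
}
_______________________________________________________________________________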


2.14 The running time is proportional to N times the sum of the reciprocals of the primes less than N. This is O(N log log N). See Knuth, Volume 2, page 394.

2.15 Compute X², X⁴, X⁸, X^10, X^20, X^40, X^60, and X^62.

2.16 Maintain an array PowersOfX that can be filled in a for loop. The array will contain X, X², X⁴, up to X^{2^⌊log N⌋}. The binary representation of N (which can be obtained by testing even or odd and then dividing by 2, until all bits are examined) can be used to multiply the appropriate entries of the array.
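A sketch of the 2.16 idea (ours, assuming exact, non-overflowing integer arithmetic); PowersOfX is filled lazily as the bits of N are examined:
_______________________________________________________________________________
long Pow( long X, int N )
{
    long PowersOfX[ 32 ];    /* PowersOfX[ i ] holds X^(2^i) */
    long Result = 1;
    int i = 0;

    PowersOfX[ 0 ] = X;
    while( N > 0 )
    {
        if( N & 1 )                  /* This bit of N's binary representation is set */
            Result *= PowersOfX[ i ];
        N >>= 1;
        if( N > 0 )                  /* Fill the next entry only if it will be needed */
        {
            PowersOfX[ i + 1 ] = PowersOfX[ i ] * PowersOfX[ i ];
            i++;
        }
    }
    return Result;
}
_______________________________________________________________________________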

2.17 For N = 0 or N = 1, the number of multiplies is zero. If b(N) is the number of ones in the binary representation of N, then if N > 1, the number of multiplies used is ⌊log N⌋ + b(N) − 1.

2.18 (a) A. (b) B. (c) The information given is not sufficient to determine an answer. We have only worst-case bounds. (d) Yes.

2.19 (a) Recursion is unnecessary if there are two or fewer elements.
(b) One way to do this is to note that if the first N−1 elements have a majority, then the last element cannot change this. Otherwise, the last element could be a majority. Thus if N is odd, ignore the last element. Run the algorithm as before. If no majority element emerges, then return the Nth element as a candidate.
(c) The running time is O(N), and satisfies T(N) = T(N/2) + O(N).
(d) One copy of the original needs to be saved. After this, the B array, and indeed the recursion, can be avoided by placing each B_i in the A array. The difference is that the original recursive strategy implies that O(log N) arrays are used; this guarantees only two copies.

2.20 Otherwise, we could perform operations in parallel by cleverly encoding several integers into one. For instance, if A = 001, B = 101, C = 111, D = 100, we could add A and B at the same time as C and D by adding 00A00C + 00B00D. We could extend this to add N pairs of numbers at once in unit cost.

2.22 No. If Low = 1, High = 2, then Mid = 1, and the recursive call does not make progress.

2.24 No. As in Exercise 2.22, no progress is made.


Chapter 3: Lists, Stacks, and Queues

3.2

The comments for Exercise 3.4 regarding the amount of abstractness used apply here. The running time of the procedure in Fig. 3.1 is O(L + P).
_______________________________________________________________________________
void PrintLots( List L, List P )
{
    int Counter;
    Position Lpos, Ppos;

    Lpos = First( L );
    Ppos = First( P );
    Counter = 1;
    while( Lpos != NULL && Ppos != NULL )
    {
        if( Ppos->Element == Counter++ )
        {
            printf( "%? ", Lpos->Element );
            Ppos = Next( Ppos, P );
        }
        Lpos = Next( Lpos, L );
    }
}
Fig. 3.1.
_______________________________________________________________________________

3.3

(a) For singly linked lists, the code is shown in Fig. 3.2.

_______________________________________________________________________________
/* BeforeP is the cell before the two adjacent cells that are to be swapped. */
/* Error checks are omitted for clarity. */

void SwapWithNext( Position BeforeP, List L )
{
    Position P, AfterP;

    P = BeforeP->Next;
    AfterP = P->Next;    /* Both P and AfterP assumed not NULL. */

    P->Next = AfterP->Next;
    BeforeP->Next = AfterP;
    AfterP->Next = P;
}
Fig. 3.2.
_______________________________________________________________________________

(b) For doubly linked lists, the code is shown in Fig. 3.3.
_______________________________________________________________________________
/* P and AfterP are cells to be switched. Error checks as before. */

void SwapWithNext( Position P, List L )
{
    Position BeforeP, AfterP;

    BeforeP = P->Prev;
    AfterP = P->Next;

    P->Next = AfterP->Next;
    BeforeP->Next = AfterP;
    AfterP->Next = P;
    P->Next->Prev = P;
    P->Prev = AfterP;
    AfterP->Prev = BeforeP;
}
Fig. 3.3.
_______________________________________________________________________________

3.4

Intersect is shown below.

_______________________________________________________________________________
/* This code can be made more abstract by using operations such as      */
/* Retrieve and IsPastEnd to replace L1Pos->Element and L1Pos != NULL.   */
/* We have avoided this because these operations were not rigorously defined. */

List Intersect( List L1, List L2 )
{
    List Result;
    Position L1Pos, L2Pos, ResultPos;

    L1Pos = First( L1 );
    L2Pos = First( L2 );
    Result = MakeEmpty( NULL );
    ResultPos = First( Result );
    while( L1Pos != NULL && L2Pos != NULL )
    {
        if( L1Pos->Element < L2Pos->Element )
            L1Pos = Next( L1Pos, L1 );
        else if( L1Pos->Element > L2Pos->Element )
            L2Pos = Next( L2Pos, L2 );
        else
        {
            Insert( L1Pos->Element, Result, ResultPos );
            L1Pos = Next( L1Pos, L1 );
            L2Pos = Next( L2Pos, L2 );
            ResultPos = Next( ResultPos, Result );
        }
    }
    return Result;
}
_______________________________________________________________________________

3.5

Fig. 3.4 contains the code for Union.

3.7

(a) One algorithm is to keep the result in a sorted (by exponent) linked list. Each of the MN multiplies requires a search of the linked list for duplicates. Since the size of the linked list is O(MN), the total running time is O(M²N²).
(b) The bound can be improved by multiplying one term by the entire other polynomial, and then using the equivalent of the procedure in Exercise 3.2 to insert the entire sequence. Then each sequence takes O(MN), but there are only M of them, giving a time bound of O(M²N).
(c) An O(MN log MN) solution is possible by computing all MN pairs and then sorting by exponent using any algorithm in Chapter 7. It is then easy to merge duplicates afterward.
(d) The choice of algorithm depends on the relative values of M and N. If they are close, then the solution in part (c) is better. If one polynomial is very small, then the solution in part (b) is better.
_______________________________________________________________________________
List Union( List L1, List L2 )
{
    List Result;
    ElementType InsertElement;
    Position L1Pos, L2Pos, ResultPos;

    L1Pos = First( L1 );
    L2Pos = First( L2 );
    Result = MakeEmpty( NULL );
    ResultPos = First( Result );
    while( L1Pos != NULL && L2Pos != NULL )
    {
        if( L1Pos->Element < L2Pos->Element )
        {
            InsertElement = L1Pos->Element;
            L1Pos = Next( L1Pos, L1 );
        }
        else if( L1Pos->Element > L2Pos->Element )
        {
            InsertElement = L2Pos->Element;
            L2Pos = Next( L2Pos, L2 );
        }
        else
        {
            InsertElement = L1Pos->Element;
            L1Pos = Next( L1Pos, L1 );
            L2Pos = Next( L2Pos, L2 );
        }
        Insert( InsertElement, Result, ResultPos );
        ResultPos = Next( ResultPos, Result );
    }
    /* Flush out the remaining list */
    while( L1Pos != NULL )
    {
        Insert( L1Pos->Element, Result, ResultPos );
        L1Pos = Next( L1Pos, L1 );
        ResultPos = Next( ResultPos, Result );
    }
    while( L2Pos != NULL )
    {
        Insert( L2Pos->Element, Result, ResultPos );
        L2Pos = Next( L2Pos, L2 );
        ResultPos = Next( ResultPos, Result );
    }
    return Result;
}
Fig. 3.4.
_______________________________________________________________________________

3.8

One can use the Pow function in Chapter 2, adapted for polynomial multiplication. If P is small, a standard method that uses O(P) multiplies instead of O(log P) might be better because the multiplies would involve a large number with a small number, which is good for the multiplication routine in part (b).

3.10

This is a standard programming project. The algorithm can be sped up by setting M' = M mod N, so that the hot potato never goes around the circle more than once, and then, if M' > N/2, passing the potato appropriately in the alternative direction. This requires a doubly linked list. The worst-case running time is clearly O(N min(M, N)), although when these heuristics are used, and M and N are comparable, the algorithm might be significantly faster. If M = 1, the algorithm is clearly linear. The VAX/VMS C compiler's memory management routines do poorly with the particular pattern of frees in this case, causing O(N log N) behavior.

3.12

Reversal of a singly linked list can be done nonrecursively by using a stack, but this requires O(N) extra space. The solution in Fig. 3.5 is similar to strategies employed in garbage collection algorithms. At the top of the while loop, the list from the start to PreviousPos is already reversed, whereas the rest of the list, from CurrentPos to the end, is normal. This algorithm uses only constant extra space.
_______________________________________________________________________________
/* Assuming no header and L is not empty. */

List ReverseList( List L )
{
    Position CurrentPos, NextPos, PreviousPos;

    PreviousPos = NULL;
    CurrentPos = L;
    NextPos = L->Next;
    while( NextPos != NULL )
    {
        CurrentPos->Next = PreviousPos;
        PreviousPos = CurrentPos;
        CurrentPos = NextPos;
        NextPos = NextPos->Next;
    }
    CurrentPos->Next = PreviousPos;
    return CurrentPos;
}
Fig. 3.5.
_______________________________________________________________________________

3.15

(a) The code is shown in Fig. 3.6.
(b) See Fig. 3.7.
(c) This follows from well-known statistical theorems. See Sleator and Tarjan's paper in the Chapter 11 references.

3.16

(c) Delete takes O(N) and is in two nested for loops each of size N, giving an obvious O(N³) bound. A better bound of O(N²) is obtained by noting that only N elements can be deleted from a list of size N, hence O(N²) is spent performing deletes. The remainder of the routine is O(N²), so the bound follows.
(d) O(N²).

_______________________________________________________________________________
/* Array implementation, starting at slot 1 */

Position Find( ElementType X, List L )
{
    int i, Where;

    Where = 0;
    for( i = 1; i < L.SizeOfList; i++ )
        if( X == L[ i ].Element )
        {
            Where = i;
            break;
        }

    if( Where )  /* Move to front. */
    {
        for( i = Where; i > 1; i-- )
            L[ i ].Element = L[ i - 1 ].Element;
        L[ 1 ].Element = X;
        return 1;
    }
    else
        return 0;  /* Not found. */
}
Fig. 3.6.
_______________________________________________________________________________

(e) Sort the list, and make a scan to remove duplicates (which must now be adjacent).

3.17

(a) The advantages are that it is simpler to code, and there is a possible savings if deleted keys are subsequently reinserted (in the same place). The disadvantage is that it uses more space, because each cell needs an extra bit (which is typically a byte), and unused cells are not freed.

3.21

Two stacks can be implemented in an array by having one grow from the low end of the array up, and the other from the high end down.

3.22

(a) Let E be our extended stack. We will implement E with two stacks. One stack, which we'll call S, is used to keep track of the Push and Pop operations, and the other, M, keeps track of the minimum. To implement Push(X,E), we perform Push(X,S). If X is smaller than or equal to the top element in stack M, then we also perform Push(X,M). To implement Pop(E), we perform Pop(S). If X is equal to the top element in stack M, then we also Pop(M). FindMin(E) is performed by examining the top of M. All these operations are clearly O(1). A sketch of this scheme appears below.
(b) This result follows from a theorem in Chapter 7 that shows that sorting must take Ω(N log N) time. O(N) operations in the repertoire, including DeleteMin, would be sufficient to sort.
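A compact sketch of the two-stack scheme in 3.22(a), using array-based stacks; the names, int elements, and fixed capacity are our own simplifications.
_______________________________________________________________________________
#define CAPACITY 100

typedef struct
{
    int S[ CAPACITY ], M[ CAPACITY ];
    int STop, MTop;          /* Next free slots; both start at 0 */
} ExtendedStack;

void Push( int X, ExtendedStack *E )
{
    E->S[ E->STop++ ] = X;
    if( E->MTop == 0 || X <= E->M[ E->MTop - 1 ] )
        E->M[ E->MTop++ ] = X;       /* New minimum (ties kept so Pop stays safe) */
}

int Pop( ExtendedStack *E )
{
    int X = E->S[ --E->STop ];

    if( E->MTop > 0 && X == E->M[ E->MTop - 1 ] )
        E->MTop--;                   /* The minimum leaves with X */
    return X;
}

int FindMin( const ExtendedStack *E )
{
    return E->M[ E->MTop - 1 ];      /* O(1), as claimed */
}
_______________________________________________________________________________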

_______________________________________________________________________________
/* Assuming a header. */

Position Find( ElementType X, List L )
{
    Position PrevPos, XPos;

    PrevPos = FindPrevious( X, L );
    if( PrevPos->Next != NULL )  /* Found. */
    {
        XPos = PrevPos->Next;
        PrevPos->Next = XPos->Next;
        XPos->Next = L->Next;
        L->Next = XPos;
        return XPos;
    }
    else
        return NULL;
}
Fig. 3.7.
_______________________________________________________________________________

3.23

Three stacks can be implemented by having one grow from the bottom up, another from the top down, and a third somewhere in the middle growing in some (arbitrary) direction. If the third stack collides with either of the other two, it needs to be moved. A reasonable strategy is to move it so that its center (at the time of the move) is halfway between the tops of the other two stacks.

3.24

Stack space will not run out because only 49 calls will be stacked. However, the running time is exponential, as shown in Chapter 2, and thus the routine will not terminate in a reasonable amount of time.

3.25

The queue data structure consists of pointers Q->Front and Q->Rear, which point to the beginning and end of a linked list. The programming details are left as an exercise because it is a likely programming assignment.

3.26 (a) This is a straightforward modification of the queue routines. It is also a likely programming assignment, so we do not provide a solution.


Chapter 4: Trees

4.1

(a) A.
(b) G, H, I, L, M, and K.

4.2

For node B:
(a) A.
(b) D and E.
(c) C.
(d) 1.
(e) 3.

4.3

4.

4.4

There are N nodes. Each node has two pointers, so there are 2N pointers. Each node but the root has one incoming pointer from its parent, which accounts for N−1 pointers. The rest are NULL.

4.5

Proof is by induction. The theorem is trivially true for H = 0. Assume true for H = 1, 2, ..., k. A tree of height k+1 can have two subtrees of height at most k. These can have at most 2^{k+1} − 1 nodes each by the induction hypothesis. These 2^{k+2} − 2 nodes plus the root prove the theorem for height k+1 and hence for all heights.

4.6

This can be shown by induction. Alternatively, let N = the number of nodes, F = the number of full nodes, L = the number of leaves, and H = the number of half nodes (nodes with one child). Clearly, N = F + H + L. Further, 2F + H = N − 1 (see Exercise 4.4). Subtracting yields L − F = 1.

4.7

This can be shown by induction. In a tree with no nodes, the sum is zero, and in a one-node tree, the root is a leaf at depth zero, so the claim is true. Suppose the theorem is true for all trees with at most k nodes. Consider any tree with k+1 nodes. Such a tree consists of an i-node left subtree and a (k−i)-node right subtree. By the inductive hypothesis, the sum for the left subtree leaves is at most one with respect to the left tree root. Because all leaves are one deeper with respect to the original tree than with respect to the subtree, the sum is at most 1/2 with respect to the root. Similar logic implies that the sum for leaves in the right subtree is at most 1/2, proving the theorem. The equality is true if and only if there are no nodes with one child. If there is a node with one child, the equality cannot be true because adding the second child would increase the sum to higher than 1. If no nodes have one child, then we can find and remove two sibling leaves, creating a new tree. It is easy to see that this new tree has the same sum as the old. Applying this step repeatedly, we arrive at a single node, whose sum is 1. Thus the original tree had sum 1.

4.8

(a) - * * a b + c d e.
(b) ( ( a * b ) * ( c + d ) ) - e.
(c) a b * c d + * e -.

4.9

(Two binary search tree diagrams appear here in the original.)

4.11

This problem is not much different from the linked list cursor implementation. We maintain an array of records consisting of an element field, and two integers, left and right. The free list can be maintained by linking through the left field. It is easy to write the CursorNew and CursorDispose routines, and substitute them for malloc and free.

4.12

(a) Keep a bit array B. If i is in the tree, then B[i] is true; otherwise, it is false. Repeatedly generate random integers until an unused one is found. If there are N elements already in the tree, then M − N are not, and the probability of finding one of these is (M − N)/M. Thus the expected number of trials is M/(M − N) = α/(α − 1).
(b) To find an element that is in the tree, repeatedly generate random integers until an already-used integer is found. The probability of finding one is N/M, so the expected number of trials is M/N = α.
(c) The total cost for one insert and one delete is α/(α − 1) + α = 1 + α + 1/(α − 1). Setting α = 2 minimizes this cost.

4.15

(a) N(0) = 1, N(1) = 2, N(H) = N(H−1) + N(H−2) + 1.
(b) The heights are one less than the Fibonacci numbers.

4.16

(An AVL tree diagram appears here in the original.)

4.17

It is easy to verify by hand that the claim is true for 1 ≤ k ≤ 3. Suppose it is true for k = 1, 2, 3, ..., H. Then after the first 2^H − 1 insertions, 2^{H−1} is at the root, and the right subtree is a balanced tree containing 2^{H−1} + 1 through 2^H − 1. Each of the next 2^{H−1} insertions, namely, 2^H through 2^H + 2^{H−1} − 1, inserts a new maximum and gets placed in the right subtree, eventually forming a perfectly balanced right subtree of height H−1. This follows by the induction hypothesis because the right subtree may be viewed as being formed from the successive insertion of 2^{H−1} + 1 through 2^H + 2^{H−1} − 1. The next insertion forces an imbalance at the root, and thus a single rotation. It is easy to check that this brings 2^H to the root and creates a perfectly balanced left subtree of height H−1. The new key is attached to a perfectly balanced right subtree of height H−2 as the last node in the right path. Thus the right subtree is exactly as if the nodes 2^H + 1 through 2^H + 2^{H−1} were inserted in order. By the inductive hypothesis, the subsequent successive insertion of 2^H + 2^{H−1} + 1 through 2^{H+1} − 1 will create a perfectly balanced right subtree of height H−1. Thus after the last insertion, both the left and the right subtrees are perfectly balanced, and of the same height, so the entire tree of 2^{H+1} − 1 nodes is perfectly balanced (and has height H).

4.18 The two remaining functions are mirror images of the text procedures. Just switch Right and Left everywhere.


4.20

After applying the standard binary search tree deletion algorithm, nodes on the deletion path need to have their balance changed, and rotations may need to be performed. Unlike insertion, more than one node may need rotation.

4.21

(a) O(log log N).
(b) The minimum AVL tree of height 255 (a huge tree).

4.22
_______________________________________________________________________________
Position DoubleRotateWithLeft( Position K3 )
{
    Position K1, K2;

    K1 = K3->Left;
    K2 = K1->Right;

    K1->Right = K2->Left;
    K3->Left = K2->Right;
    K2->Left = K1;
    K2->Right = K3;

    K1->Height = Max( Height( K1->Left ), Height( K1->Right ) ) + 1;
    K3->Height = Max( Height( K3->Left ), Height( K3->Right ) ) + 1;
    K2->Height = Max( K1->Height, K3->Height ) + 1;

    return K2;    /* K2 is the new root of this subtree */
}
_______________________________________________________________________________


4.23

(Four splay tree diagrams appear here in the original, showing the tree after accessing 3, after accessing 9, after accessing 1, and after accessing 5.)

4.24

(A splay tree diagram appears here in the original.)

4.25

(a) 523776.
(b) 262166, 133114, 68216, 36836, 21181, 13873.
(c) After Find(9).

4.26 (a) An easy proof by induction.

4.28 (a-c) All these routines take linear time.
_______________________________________________________________________________
/* These functions use the type BinaryTree, which is the same */
/* as TreeNode *, in Fig 4.16. */

int CountNodes( BinaryTree T )
{
    if( T == NULL )
        return 0;
    return 1 + CountNodes( T->Left ) + CountNodes( T->Right );
}

int CountLeaves( BinaryTree T )
{
    if( T == NULL )
        return 0;
    else if( T->Left == NULL && T->Right == NULL )
        return 1;
    return CountLeaves( T->Left ) + CountLeaves( T->Right );
}

/* An alternative method is to use the results of Exercise 4.6. */

int CountFull( BinaryTree T )
{
    if( T == NULL )
        return 0;
    return ( T->Left != NULL && T->Right != NULL ) +
           CountFull( T->Left ) + CountFull( T->Right );
}
_______________________________________________________________________________

4.29 We assume the existence of a function RandInt(Lower,Upper), which generates a uniform random integer in the appropriate closed interval. MakeRandomTree returns NULL if N is not positive, or if N is so large that memory is exhausted.
_______________________________________________________________________________
SearchTree MakeRandomTree1( int Lower, int Upper )
{
    SearchTree T;
    int RandomValue;

    T = NULL;
    if( Lower <= Upper )
    {
        T = malloc( sizeof( struct TreeNode ) );
        if( T != NULL )
        {
            T->Element = RandomValue = RandInt( Lower, Upper );
            T->Left = MakeRandomTree1( Lower, RandomValue - 1 );
            T->Right = MakeRandomTree1( RandomValue + 1, Upper );
        }
        else
            FatalError( "Out of space!" );
    }
    return T;
}

SearchTree MakeRandomTree( int N )
{
    return MakeRandomTree1( 1, N );
}
_______________________________________________________________________________


4.30
_______________________________________________________________________________
/* LastNode is the address containing the last value that was assigned to a node. */

SearchTree GenTree( int Height, int *LastNode )
{
    SearchTree T;

    if( Height >= 0 )
    {
        T = malloc( sizeof( *T ) );   /* Error checks omitted; see Exercise 4.29. */
        T->Left = GenTree( Height - 1, LastNode );
        T->Element = ++*LastNode;
        T->Right = GenTree( Height - 2, LastNode );
        return T;
    }
    else
        return NULL;
}

SearchTree MinAvlTree( int H )
{
    int LastNodeAssigned = 0;

    return GenTree( H, &LastNodeAssigned );
}
_______________________________________________________________________________

4.31 There are two obvious ways of solving this problem. One way mimics Exercise 4.29 by replacing RandInt(Lower,Upper) with (Lower+Upper) / 2. This requires computing 2^{H+1} − 1, which is not that difficult. The other mimics the previous exercise by noting that the heights of the subtrees are both H−1. The solution follows:
_______________________________________________________________________________
/* LastNode is the address containing the last value that was assigned to a node. */

SearchTree GenTree( int Height, int *LastNode )
{
    SearchTree T = NULL;

    if( Height >= 0 )
    {
        T = malloc( sizeof( *T ) );   /* Error checks omitted; see Exercise 4.29. */
        T->Left = GenTree( Height - 1, LastNode );
        T->Element = ++*LastNode;
        T->Right = GenTree( Height - 1, LastNode );
    }
    return T;
}

SearchTree PerfectTree( int H )
{
    int LastNodeAssigned = 0;

    return GenTree( H, &LastNodeAssigned );
}
_______________________________________________________________________________

4.32 This is known as one-dimensional range searching. The time is O(K) to perform the inorder traversal, if a significant number of nodes are found, and also proportional to the depth of the tree, if we get to some leaves (for instance, if no nodes are found). Since the average depth is O(log N), this gives an O(K + log N) average bound.
_______________________________________________________________________________
void PrintRange( ElementType Lower, ElementType Upper, SearchTree T )
{
    if( T != NULL )
    {
        if( Lower <= T->Element )
            PrintRange( Lower, Upper, T->Left );
        if( Lower <= T->Element && T->Element <= Upper )
            PrintLine( T->Element );
        if( T->Element <= Upper )
            PrintRange( Lower, Upper, T->Right );
    }
}
_______________________________________________________________________________

4.33 This exercise and Exercise 4.34 are likely programming assignments, so we do not provide code here.

4.35 Put the root on an empty queue. Then repeatedly Dequeue a node and Enqueue its left and right children (if any) until the queue is empty. This is O(N) because each queue operation is constant time and there are N Enqueue and N Dequeue operations.
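A sketch of this level-order traversal (ours), assuming a TreeNode type with Left and Right pointers, a Visit routine, and a queue capacity large enough for the tree:
_______________________________________________________________________________
void LevelOrder( TreeNode *Root )
{
    TreeNode *Queue[ 1000 ];    /* Assumed large enough for the tree */
    int Front = 0, Rear = 0;

    if( Root == NULL )
        return;
    Queue[ Rear++ ] = Root;                     /* Enqueue the root */
    while( Front < Rear )
    {
        TreeNode *Current = Queue[ Front++ ];   /* Dequeue */

        Visit( Current );
        if( Current->Left != NULL )
            Queue[ Rear++ ] = Current->Left;
        if( Current->Right != NULL )
            Queue[ Rear++ ] = Current->Right;
    }
}
_______________________________________________________________________________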

4.36

(a) (A tree diagram appears here in the original: root 6:-, children 2:4 and 8:-, with leaves 0,1 / 2,3 / 4,5 / 6,7 / 8,9.)
(b) (A tree diagram appears here in the original: root 4:6, with leaves 1,2,3 / 4,5 / 6,7,8.)

4.39

(A tree diagram on nodes A through R appears here in the original.)

4.41

The function shown here is clearly a linear time routine because in the worst case it does a traversal on both T1 and T2.
_______________________________________________________________________________
int Similar( BinaryTree T1, BinaryTree T2 )
{
    if( T1 == NULL || T2 == NULL )
        return T1 == NULL && T2 == NULL;
    return Similar( T1->Left, T2->Left ) && Similar( T1->Right, T2->Right );
}
_______________________________________________________________________________

4.43

The easiest solution is to compute, in linear time, the inorder numbers of the nodes in both trees. If the inorder number of the root of T2 is x, then find x in T1 and rotate it to the root. Recursively apply this strategy to the left and right subtrees of T1 (by looking at the values in the root of T2's left and right subtrees). If d_N is the depth of x, then the running time satisfies T(N) = T(i) + T(N−i−1) + d_N, where i is the size of the left subtree. In the worst case, d_N is always O(N), and i is always 0, so the worst-case running time is quadratic. Under the plausible assumption that all values of i are equally likely, then even if d_N is always O(N), the average value of T(N) is O(N log N). This is a common recurrence that was already formulated in the chapter and is solved in Chapter 7. Under the more reasonable assumption that d_N is typically logarithmic, then the running time is O(N).

4.44 Add a field to each node indicating the size of the tree it roots. This allows computation of its inorder traversal number.

4.45 (a) You need an extra bit for each thread.
(c) You can do tree traversals somewhat easier and without recursion. The disadvantage is that it reeks of old-style hacking.


Chapter 5: Hashing

5.1

(a) On the assumption that we add collisions to the end of the list (which is the easier way if a hash table is being built by hand), the separate chaining hash table that results is shown here:

0:
1: 4371
2:
3: 1323 → 6173
4: 4344
5:
6:
7:
8:
9: 4199 → 9679 → 1989

(b)

0: 9679
1: 4371
2: 1989
3: 1323
4: 6173
5: 4344
6:
7:
8:
9: 4199

(c)

0: 9679
1: 4371
2:
3: 1323
4: 6173
5: 4344
6:
7:
8: 1989
9: 4199

(d) 1989 cannot be inserted into the table because hash2(1989) = 6, and the alternative locations 5, 1, 7, and 3 are already taken. The table at this point is as follows:

0:
1: 4371
2:
3: 1323
4: 6173
5: 9679
6:
7: 4344
8:
9: 4199

5.2

When rehashing, we choose a table size that is roughly twice as large and prime. In our case, the appropriate new table size is 19, with hash function h(x) = x mod 19.

(a) Scanning down the separate chaining hash table, the new locations are 4371 in list 1, 1323 in list 12, 6173 in list 17, 4344 in list 12, 4199 in list 0, 9679 in list 8, and 1989 in list 13. (b) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in bucket 12, 6173 in bucket 17, 4344 in bucket 14 because both 12 and 13 are already occupied, and 4199 in bucket 0.


(c) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in bucket 12, 6173 in bucket 17, 4344 in bucket 16 because both 12 and 13 are already occupied, and 4199 in bucket 0.
(d) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in bucket 12, 6173 in bucket 17, 4344 in bucket 15 because 12 is already occupied, and 4199 in bucket 0.

5.4

We must be careful not to rehash too often. Let p be the threshold (fraction of table size) at which we rehash to a smaller table. Then if the new table has size N, it contains 2pN elements. This table will require rehashing after either 2N − 2pN insertions or pN deletions. Balancing these costs suggests that a good choice is p = 2/3. For instance, suppose we have a table of size 300. If we rehash at 200 elements, then the new table size is N = 150, and we can do either 100 insertions or 100 deletions until a new rehash is required.

If we know that insertions are more frequent than deletions, then we might choose p to be somewhat larger. If p is too close to 1.0, however, then a sequence of a small number of deletions followed by insertions can cause frequent rehashing. In the worst case, if p = 1.0, then alternating deletions and insertions both require rehashing.

5.5

(a) Since each table slot is eventually probed, if the table is not empty, the collision can be resolved. (b) This seems to eliminate primary clustering but not secondary clustering because all elements that hash to some location will try the same collision resolution sequence. (c, d) The running time is probably similar to quadratic probing. The advantage here is that the insertion can’t fail unless the table is full. (e) A method of generating numbers that are not random (or even pseudorandom) is given in the references. An alternative is to use the method in Exercise 2.7.

5.6

Separate chaining hashing requires the use of pointers, which costs some memory, and the standard implementation calls memory allocation routines, which typically are expensive. Linear probing is easily implemented, but performance degrades severely as the load factor increases because of primary clustering. Quadratic probing is only slightly more difficult to implement and gives good performance in practice. An insertion can fail if the table is half empty, but this is not likely. Even if it were, such an insertion would be so expensive that it wouldn't matter and would almost certainly point up a weakness in the hash function. Double hashing eliminates primary and secondary clustering, but the computation of a second hash function can be costly. Gonnet and Baeza-Yates [8] compare several hashing strategies; their results suggest that quadratic probing is the fastest method.
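To illustrate why quadratic probing is only slightly harder to code than linear probing, here is a minimal insertion sketch (ours); the integer keys, the EMPTY sentinel, and the absence of rehashing are simplifying assumptions, and the table is assumed prime-sized and at most half full so the probe sequence succeeds.
_______________________________________________________________________________
#define EMPTY 0    /* Assumed sentinel for an unused cell */

void QuadraticInsert( int Key, int Table[ ], int TableSize )
{
    int i = 0;
    int Current = Key % TableSize;

    while( Table[ Current ] != EMPTY )
    {
        /* Incremental quadratic probing: offsets H+1, H+4, H+9, ...   */
        /* differ by 2i - 1, which stays below TableSize while i is    */
        /* at most TableSize/2, so one subtraction suffices to wrap.   */
        i++;
        Current += 2 * i - 1;
        if( Current >= TableSize )
            Current -= TableSize;
    }
    Table[ Current ] = Key;
}
_______________________________________________________________________________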

5.7

Sorting the MN records and eliminating duplicates would require O(MN log MN) time using a standard sorting algorithm. If terms are merged by using a hash function, then the merging time is constant per term for a total of O(MN). If the output polynomial is small and has only O(M + N) terms, then it is easy to sort it in O((M + N) log(M + N)) time, which is less than O(MN). Thus the total is O(MN). This bound is better because the model is less restrictive: Hashing is performing operations on the keys rather than just comparison between the keys. A similar bound can be obtained by using bucket sort instead of a standard sorting algorithm. Operations such as hashing are much more expensive than comparisons in practice, so this bound might not be an improvement. On the other hand, if the output polynomial is expected to have only O(M + N) terms, then using a hash table saves a huge amount of space, since under these conditions, the hash table needs only O(M + N) space.

Another method of implementing these operations is to use a search tree instead of a hash table; a balanced tree is required because elements are inserted in the tree with too much order. A splay tree might be particularly well suited for this type of a problem because it does well with sequential accesses. Comparing the different ways of solving the problem is a good programming assignment.

5.8

The table size would be roughly 60,000 entries. Each entry holds 8 bytes, for a total of 480,000 bytes.

5.9

(a) This statement is true.
(b) If a word hashes to a location with value 1, there is no guarantee that the word is in the dictionary. It is possible that it just hashes to the same value as some other word in the dictionary. In our case, the table is approximately 10% full (30,000 words in a table of 300,007), so there is a 10% chance that a word that is not in the dictionary happens to hash out to a location with value 1.
(c) 300,007 bits is 37,501 bytes on most machines.
(d) As discussed in part (b), the algorithm will fail to detect one in ten misspellings on average.
(e) A 20-page document would have about 60 misspellings. This algorithm would be expected to detect 54. A table three times as large would still fit in about 100K bytes and reduce the expected number of errors to two. This is good enough for many applications, especially since spelling detection is a very inexact science. Many misspelled words (especially short ones) are still words. For instance, typing "them" instead of "then" is a misspelling that won't be detected by any algorithm.

5.10

To each hash table slot, we can add an extra field that we'll call WhereOnStack, and we can keep an extra stack. When an insertion is first performed into a slot, we push the address (or number) of the slot onto the stack and set the WhereOnStack field to point to the top of the stack. When we access a hash table slot, we check that WhereOnStack points to a valid part of the stack and that the entry in the (middle of the) stack that is pointed to by the WhereOnStack field has that hash table slot as an address.
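A sketch of this technique (names are ours): a slot counts as initialized only if the stack and the slot point at each other, so uninitialized garbage in WhereOnStack cannot masquerade as valid data.
_______________________________________________________________________________
#define TABLESIZE 101

int WhereOnStack[ TABLESIZE ];   /* Possibly garbage until the first insert */
int Stack[ TABLESIZE ];          /* Stack of initialized slot numbers */
int Top = 0;                     /* Number of initialized slots */

int IsInitialized( int Slot )
{
    int W = WhereOnStack[ Slot ];

    return W >= 0 && W < Top && Stack[ W ] == Slot;
}

void MarkInitialized( int Slot )
{
    if( !IsInitialized( Slot ) )
    {
        Stack[ Top ] = Slot;
        WhereOnStack[ Slot ] = Top++;
    }
}
_______________________________________________________________________________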

5.14

(An extendible hashing diagram appears here in the original: a directory of size 8 with the following local depths and leaf contents.)

000, 001 → leaf (2): 00000010, 00001011, 00101011
010, 011 → leaf (2): 01010001, 01100001, 01101111, 01111111
100      → leaf (3): 10010110, 10011011, 10011110
101      → leaf (3): 10111101, 10111110
110, 111 → leaf (2): 11001111, 11011011, 11110000

Chapter 6: Priority Queues (Heaps)

6.1

Yes. When an element is inserted, we compare it to the current minimum and change the minimum if the new element is smaller. DeleteMin operations are expensive in this scheme.

6.2

(Two heap diagrams appear here in the original.)

6.3

The result of three DeleteMins, starting with both of the heaps in Exercise 6.2, is as follows:

(Two heap diagrams appear here in the original.)

6.4, 6.5

These are simple modifications to the code presented in the text and meant as programming exercises.

6.6

225. To see this, start with i = 1 and position at the root. Follow the path toward the last node, doubling i when taking a left child, and doubling i and adding one when taking a right child.
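The same bits can be read back to recover the path; a small sketch (ours), assuming positions fit in an unsigned int: the binary digits of the position below its leading 1 encode the left/right turns from the root.
_______________________________________________________________________________
#include <stdio.h>

void PrintPath( unsigned Pos )
{
    int Bit;

    /* Find the highest set bit; the bits below it encode the path */
    for( Bit = 31; Bit > 0 && !( Pos >> Bit ); Bit-- )
        ;
    while( --Bit >= 0 )
        printf( "%s\n", ( Pos >> Bit ) & 1 ? "right" : "left" );
}
_______________________________________________________________________________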


6.7

(a) We show that H(N), which is the sum of the heights of nodes in a complete binary tree of N nodes, is N − b(N), where b(N) is the number of ones in the binary representation of N. Observe that for N = 0 and N = 1, the claim is true. Assume that it is true for values of k up to and including N−1. Suppose the left and right subtrees have L and R nodes, respectively. Since the root has height ⌊log N⌋, we have

H(N) = ⌊log N⌋ + H(L) + H(R)
     = ⌊log N⌋ + L − b(L) + R − b(R)
     = N − 1 + (⌊log N⌋ − b(L) − b(R))

The second line follows from the inductive hypothesis, and the third follows because L + R = N − 1. Now the last node in the tree is in either the left subtree or the right subtree. If it is in the left subtree, then the right subtree is a perfect tree, and b(R) = ⌊log N⌋ − 1. Further, the binary representations of N and L are identical, with the exception that the leading 10 in N becomes 1 in L. (For instance, if N = 37 = 100101, L = 10101.) It is clear that the second digit of N must be zero if the last node is in the left subtree. Thus in this case, b(L) = b(N), and H(N) = N − b(N).

If the last node is in the right subtree, then b(L) = ⌊log N⌋. The binary representation of R is identical to N, except that the leading 1 is not present. (For instance, if N = 27 = 11011, R = 1011.) Thus b(R) = b(N) − 1, and again H(N) = N − b(N).

(b) Run a single-elimination tournament among eight elements. This requires seven comparisons and generates ordering information indicated by the binomial tree shown here.

(A binomial tree diagram on elements a through h appears here in the original.)

The eighth comparison is between b and c. If c is less than b, then b is made a child of c. Otherwise, both c and d are made children of b.

(c) A recursive strategy is used. Assume that N = 2^k. A binomial tree is built for the N elements as in part (b). The largest subtree of the root is then recursively converted into a binary heap of 2^{k−1} elements. The last element in the heap (which is the only one on an extra level) is then inserted into the binomial queue consisting of the remaining binomial trees, thus forming another binomial tree of 2^{k−1} elements. At that point, the root has a subtree that is a heap of 2^{k−1} − 1 elements and another subtree that is a binomial tree of 2^{k−1} elements. Recursively convert that subtree into a heap; now the whole structure is a binary heap. The running time for N = 2^k satisfies T(N) = 2T(N/2) + log N. The base case is T(8) = 8.


6.8

Let D_1, D_2, ..., D_k be random variables representing the depth of the smallest, second smallest, and kth smallest elements, respectively. We are interested in calculating E(D_k). In what follows, we assume that the heap size N is one less than a power of two (that is, the bottom level is completely filled) but sufficiently large so that terms bounded by O(1/N) are negligible. Without loss of generality, we may assume that the kth smallest element is in the left subheap of the root. Let p_{j,k} be the probability that this element is the jth smallest element in the subheap.

Lemma: For k > 1, E(D_k) = Σ_{j=1}^{k−1} p_{j,k} (E(D_j) + 1).

Proof: An element that is at depth d in the left subheap is at depth d + 1 in the entire heap. Since E(D_j + 1) = E(D_j) + 1, the theorem follows.

Since by assumption, the bottom level of the heap is full, each of the second, third, ..., (k−1)th smallest elements is in the left subheap with probability of 0.5. (Technically, the probability should be 1/2 − 1/(N−1) of being in the right subheap and 1/2 + 1/(N−1) of being in the left, since we have already placed the kth smallest in the right. Recall that we have assumed that terms of size O(1/N) can be ignored.) Thus

p_{j,k} = p_{k−j,k} = C(k−2, j−1) / 2^{k−2}

Theorem: E(D_k) ≤ log k.

Proof: The proof is by induction. The theorem clearly holds for k = 1 and k = 2. We then show that it holds for arbitrary k > 2 on the assumption that it holds for all smaller k. Now, by the inductive hypothesis, for any 1 ≤ j ≤ k−1,

E(D_j) + E(D_{k−j}) ≤ log j + log(k−j)

Since f(x) = log x is convex for x > 0,

log j + log(k−j) ≤ 2 log(k/2)

Thus E(D_j) + E(D_{k−j}) ≤ log(k/2) + log(k/2). Furthermore, since p_{j,k} = p_{k−j,k},

p_{j,k} E(D_j) + p_{k−j,k} E(D_{k−j}) ≤ p_{j,k} log(k/2) + p_{k−j,k} log(k/2)

From the lemma,

E(D_k) = Σ_{j=1}^{k−1} p_{j,k} (E(D_j) + 1) = 1 + Σ_{j=1}^{k−1} p_{j,k} E(D_j)

Thus

E(D_k) ≤ 1 + Σ_{j=1}^{k−1} p_{j,k} log(k/2) ≤ 1 + log(k/2) ≤ log k

completing the proof. It can also be shown that asymptotically, E(D_k) ≈ log(k−1) − 0.273548.

6.9

(a) Perform a preorder traversal of the heap.
(b) Works for leftist and skew heaps. The running time is O(Kd) for d-heaps.

6.11 Simulations show that the linear time algorithm is the faster, not only on worst-case inputs, but also on random data.

6.12 (a) If the heap is organized as a (min) heap, then starting at the hole at the root, find a path down to a leaf by taking the minimum child. This requires roughly log N comparisons. To find the correct place to move the hole, perform a binary search on the log N elements. This takes O(log log N) comparisons.
(b) Find a path of minimum children, stopping after log N − log log N levels. At this point, it is easy to determine if the hole should be placed above or below the stopping point. If it goes below, then continue finding the path, but perform the binary search on only the last log log N elements on the path, for a total of log N + log log log N comparisons. Otherwise, perform a binary search on the first log N − log log N elements. The binary search takes at most log log N comparisons, and the path finding took only log N − log log N, so the total in this case is log N. So the worst case is the first case.
(c) The bound can be improved to log N + log* N + O(1), where log* N is the inverse Ackerman function (see Chapter 8). This bound can be found in reference [16].

6.13 The parent is at position ⌊(i + d − 2)/d⌋. The children are in positions (i − 1)d + 2, ..., id + 1. (A small sketch of this index arithmetic follows 6.14.)

6.14 (a) O((M + dN) log_d N). (b) O((M + N) log N). (c) O(M + N²). (d) d = max(2, M/N). (See the related discussion at the end of Section 11.4.)
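A small sketch of the index arithmetic from 6.13 (our code, with the root stored at position 1); C's integer division performs the floor for these positive values:
_______________________________________________________________________________
int Parent( int i, int d )
{
    return ( i + d - 2 ) / d;          /* Floor division */
}

int FirstChild( int i, int d )
{
    return ( i - 1 ) * d + 2;          /* Children occupy (i-1)d+2 .. id+1 */
}
_______________________________________________________________________________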


6.16

(A heap diagram appears here in the original.)

6.17

(A heap diagram appears here in the original.)

6.18 This theorem is true, and the proof is very much along the same lines as Exercise 4.17.

6.19 If elements are inserted in decreasing order, a leftist heap consisting of a chain of left children is formed. This is the best because the right path length is minimized.

6.20 (a) If a DecreaseKey is performed on a node that is very deep (very left), the time to percolate up would be prohibitive. Thus the obvious solution doesn't work. However, we can still do the operation efficiently by a combination of Delete and Insert. To Delete an arbitrary node x in the heap, replace x by the Merge of its left and right subheaps. This might create an imbalance for nodes on the path from x's parent to the root that would need to be fixed by a child swap. However, it is easy to show that at most log N nodes can be affected, preserving the time bound. This is discussed in Chapter 11.

6.21 Lazy deletion in leftist heaps is discussed in the paper by Cheriton and Tarjan [9]. The general idea is that if the root is marked deleted, then a preorder traversal of the heap is formed, and the frontier of marked nodes is removed, leaving a collection of heaps. These can be merged two at a time by placing all the heaps on a queue, removing two, merging them, and placing the result at the end of the queue, terminating when only one heap remains.

6.22 (a) The standard way to do this is to divide the work into passes. A new pass begins when the first element reappears in a heap that is dequeued. The first pass takes roughly 2·1·(N/2) time units because there are N/2 merges of trees with one node each on the right path. The next pass takes 2·2·(N/4) time units because of the roughly N/4 merges of trees with no more than two nodes on the right path. The third pass takes 2·3·(N/8) time units, and so on. The sum converges to 4N.
(b) It generates heaps that are more leftist.

6.23

(A heap diagram appears here in the original.)

6.24

(A heap diagram appears here in the original.)

6.25 This claim is also true, and the proof is similar in spirit to Exercise 4.17 or 6.18. 6.26 Yes. All the single operation estimates in Exercise 6.22 become amortized instead of worst-case, but by the definition of amortized analysis, the sum of these estimates is a worst-case bound for the sequence. 6.27 Clearly the claim is true for kO = 1. Suppose it is true for all values iO = 1, 2, ..., kO. A BkO+1 tree is formed by attaching a BkO tree to the root of a BkO tree. Thus by induction, it contains a BO0 through BkO−1 tree, as well as the newly attached BkO tree, proving the claim.

6.28 The proof is by induction. Clearly the claim is true for k = 1. Assume it is true for all values i = 1, 2, ..., k. A B_{k+1} tree is formed by attaching a B_k tree to the original B_k tree. The original tree thus had C(k, d) nodes at depth d, where C(k, d) denotes the binomial coefficient. The attached tree had C(k, d−1) nodes at depth d−1, which are now at depth d. Adding these two terms and using the well-known formula C(k, d) + C(k, d−1) = C(k+1, d) establishes the theorem.

6.29 [The answer is a sequence of binomial queue diagrams, which are not reproduced here.]


6.30 This is established in Chapter 11.

6.31 The algorithm is to do nothing special; merely Insert them. This is proved in Chapter 11.

6.35 Don't keep the key values in the heap, but keep only the difference between the value of the key in a node and the value of the key in its parent.

6.36 O(N + k log N) is a better bound than O(N log k). The first bound is O(N) if k = O(N / log N). The second bound is more than this as soon as k grows faster than a constant. For values of k that are Ω(N / log N) but o(N), the first bound is better. When k = Θ(N), the bounds are identical.


Chapter 7: Sorting

7.1
Original:    3 1 4 1 5 9 2 6 5
after P=2:   1 3 4 1 5 9 2 6 5
after P=3:   1 3 4 1 5 9 2 6 5
after P=4:   1 1 3 4 5 9 2 6 5
after P=5:   1 1 3 4 5 9 2 6 5
after P=6:   1 1 3 4 5 9 2 6 5
after P=7:   1 1 2 3 4 5 9 6 5
after P=8:   1 1 2 3 4 5 6 9 5
after P=9:   1 1 2 3 4 5 5 6 9

7.2

O(N), because the while loop terminates immediately. Of course, accidentally changing the test to include equalities raises the running time to quadratic for this type of input.

7.3

The inversion that existed between A[i] and A[i + k] is removed. This shows at least one inversion is removed. For each of the k − 1 elements A[i + 1], A[i + 2], ..., A[i + k − 1], at most two inversions can be removed by the exchange. This gives a maximum of 2(k − 1) + 1 = 2k − 1.

7.4
Original:      9 8 7 6 5 4 3 2 1
after 7-sort:  2 1 7 6 5 4 3 9 8
after 3-sort:  2 1 4 3 5 7 6 9 8
after 1-sort:  1 2 3 4 5 6 7 8 9

7.5

(a) Θ(N²). The 2-sort removes at most only three inversions at a time; hence the algorithm is Ω(N²). The 2-sort is two insertion sorts of size N/2, so the cost of that pass is O(N²). The 1-sort is also O(N²), so the total is O(N²).

7.6

Part (a) is an extension of the theorem proved in the text. Part (b) is fairly complicated; see reference [11].

7.7

See reference [11].

7.8

Use the input specified in the hint. If the number of inversions is shown to be Ω(N²), then the bound follows, since no increments are removed until an h_{t/2} sort. If we consider the pattern formed by h_k through h_{2k−1}, where k = t/2 + 1, we find that it has length N = h_k(h_k + 1) − 1, and the number of inversions is roughly h_k⁴/24, which is Ω(N²).

7.9

(a) O(N log N). No exchanges, but each pass takes O(N).
(b) O(N log N). It is easy to show that after an h_k sort, no element is farther than h_k from its rightful position. Thus if the increments satisfy h_{k+1} ≤ c·h_k for a constant c, which implies O(log N) increments, then the bound is O(N log N).


7.10 (a) No, because it is still possible for consecutive increments to share a common factor. An example is the sequence 1, 3, 9, 21, 45, with h_{t+1} = 2h_t + 3.
(b) Yes, because consecutive increments are relatively prime. The running time becomes O(N^{3/2}).

7.11 The input is read in as
142, 543, 123, 65, 453, 879, 572, 434, 111, 242, 811, 102
The result of the heapify is
879, 811, 572, 434, 543, 123, 142, 65, 111, 242, 453, 102
879 is removed from the heap and placed at the end. We'll place it in italics to signal that it is not part of the heap. 102 is placed in the hole and bubbled down, obtaining
811, 543, 572, 434, 453, 123, 142, 65, 111, 242, 102, 879
Continuing the process, we obtain
572, 543, 142, 434, 453, 123, 102, 65, 111, 242, 811, 879
543, 453, 142, 434, 242, 123, 102, 65, 111, 572, 811, 879
453, 434, 142, 111, 242, 123, 102, 65, 543, 572, 811, 879
434, 242, 142, 111, 65, 123, 102, 453, 543, 572, 811, 879
242, 111, 142, 102, 65, 123, 434, 453, 543, 572, 811, 879
142, 111, 123, 102, 65, 242, 434, 453, 543, 572, 811, 879
123, 111, 65, 102, 142, 242, 434, 453, 543, 572, 811, 879
111, 102, 65, 123, 142, 242, 434, 453, 543, 572, 811, 879
102, 65, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879
65, 102, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879

7.12 Heapsort uses at least (roughly) N log N comparisons on any input, so there are no particularly good inputs. This bound is tight; see the paper by Schaffer and Sedgewick [16]. This result applies for almost all variations of heapsort, which have different rearrangement strategies. See Y. Ding and M. A. Weiss, "Best Case Lower Bounds for Heapsort," Computing 49 (1992).

7.13 First the sequence {3, 1, 4, 1} is sorted. To do this, the sequence {3, 1} is sorted. This involves sorting {3} and {1}, which are base cases, and merging the result to obtain {1, 3}. The sequence {4, 1} is likewise sorted into {1, 4}. Then these two sequences are merged to obtain {1, 1, 3, 4}. The second half is sorted similarly, eventually obtaining {2, 5, 6, 9}. The merged result is then easily computed as {1, 1, 2, 3, 4, 5, 6, 9}.

7.14 Mergesort can be implemented nonrecursively by first merging pairs of adjacent elements, then pairs of two elements, then pairs of four elements, and so on. This is implemented in Fig. 7.1.

7.15 The merging step always takes Θ(N) time, so the sorting process takes Θ(N log N) time on all inputs.

7.16 See reference [11] for the exact derivation of the worst case of mergesort.

7.17 The original input is
3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5
After sorting the first, middle, and last elements, we have
3, 1, 4, 1, 5, 5, 2, 6, 5, 3, 9
Thus the pivot is 5. Hiding it gives
3, 1, 4, 1, 5, 3, 2, 6, 5, 5, 9
The first swap is between two fives. The next swap has i and j crossing. Thus the pivot is


_______________________________________________________________________________

/* Nonrecursive mergesort; Merge and min are as in the text.          */
/* Note: the loop test must be < N (not < N - 1) so that a trailing   */
/* sublist of one element is still merged, and Part2End must be       */
/* capped at N - 1 (the last valid index), not N.                     */
void
Mergesort( ElementType A[ ], int N )
{
    ElementType *TmpArray;
    int SubListSize, Part1Start, Part2Start, Part2End;

    TmpArray = malloc( sizeof( ElementType ) * N );
    for( SubListSize = 1; SubListSize < N; SubListSize *= 2 )
    {
        Part1Start = 0;
        while( Part1Start + SubListSize < N )
        {
            Part2Start = Part1Start + SubListSize;
            Part2End = min( N - 1, Part2Start + SubListSize - 1 );
            Merge( A, TmpArray, Part1Start, Part2Start, Part2End );
            Part1Start = Part2End + 1;
        }
    }
    free( TmpArray );
}

Fig. 7.1.
_______________________________________________________________________________

swapped back with i:
3, 1, 4, 1, 5, 3, 2, 5, 5, 6, 9
We now recursively quicksort the first eight elements:
3, 1, 4, 1, 5, 3, 2, 5
Sorting the three appropriate elements gives
1, 1, 4, 3, 5, 3, 2, 5
Thus the pivot is 3, which gets hidden:
1, 1, 4, 2, 5, 3, 3, 5
The first swap is between 4 and 3:
1, 1, 3, 2, 5, 4, 3, 5
The next swap crosses pointers, so is undone; i points at 5, and so the pivot is swapped:
1, 1, 3, 2, 3, 4, 5, 5
A recursive call is now made to sort the first four elements. The pivot is 1, and the partition does not make any changes. The recursive calls are made, but the subfiles are below the cutoff, so nothing is done. Likewise, the last three elements constitute a base case, so nothing is done. We return to the original call, which now calls quicksort recursively on the right-hand side, but again, there are only three elements, so nothing is done. The result is
1, 1, 3, 2, 3, 4, 5, 5, 5, 6, 9
which is cleaned up by insertion sort.

7.18 (a) O(N log N), because the pivot will partition perfectly.
(b) Again, O(N log N), because the pivot will partition perfectly.
(c) O(N log N); the performance is slightly better than the analysis suggests because of the median-of-three partition and cutoff.


7.19 (a) If the first element is chosen as the pivot, the running time degenerates to quadratic in the first two cases. It is still O(N log N) for random input.
(b) The same results apply for this pivot choice.
(c) If a random element is chosen, then the running time is O(N log N) expected for all inputs, although there is an O(N²) worst case if very bad random numbers come up. There is, however, an essentially negligible chance of this occurring. Chapter 10 discusses the randomized philosophy.
(d) This is a dangerous road to go down; it depends on the distribution of the keys. For many distributions, such as uniform, the performance is O(N log N) on average. For a skewed distribution, such as with the input {1, 2, 4, 8, 16, 32, 64, ...}, the pivot will be consistently terrible, giving quadratic running time, independent of the ordering of the input.

7.20 (a) O(N log N), because the pivot will partition perfectly.
(b) Sentinels need to be used to guarantee that i and j don't run past the end. The running time will be Θ(N²) since, because i won't stop until it hits the sentinel, the partitioning step will put all but the pivot in S1.
(c) Again a sentinel needs to be used to stop j. This is also Θ(N²) because the partitioning is unbalanced.

7.21 Yes, but it doesn't reduce the average running time for random input. Using median-of-three partitioning reduces the average running time because it makes the partition more balanced on average.

7.22 The strategy used here is to force the worst possible pivot at each stage. This doesn't necessarily give the maximum amount of work (since there are few exchanges, just lots of comparisons), but it does give Ω(N²) comparisons. By working backward, we can arrive at the following permutation:
20, 3, 5, 7, 9, 11, 13, 15, 17, 19, 4, 10, 2, 12, 6, 14, 1, 16, 8, 18
A method to extend this to larger numbers when N is even is as follows: The first element is N, the middle is N − 1, and the last is N − 2. Odd numbers (except 1) are written in decreasing order starting to the left of center. Even numbers are written in decreasing order by starting at the rightmost spot, always skipping one available empty slot, and wrapping around when the center is reached. This method takes O(N log N) time to generate the permutation, but is suitable for a hand calculation. By inverting the actions of quicksort, it is possible to generate the permutation in linear time.

7.24 This recurrence results from the analysis of the quick selection algorithm. T(N) = O(N).

7.25 Insertion sort and mergesort are stable if coded correctly. Any of the sorts can be made stable by the addition of a second key, which indicates the original position.

7.26 (d) f(N) can be O(N / log N). Sort the f(N) elements using mergesort in O(f(N) log f(N)) time. This is O(N) if f(N) is chosen using the criterion given. Then merge this sorted list with the already sorted list of N numbers in O(N + f(N)) = O(N) time.

7.27 A decision tree would have N leaves, so ⌈log N⌉ comparisons are required.

7.28 log N! ≈ N log N − N log e.

7.29 (a) C(2N, N), the binomial coefficient.


(b) The information-theoretic lower bound is log C(2N, N). Applying Stirling's formula, we can estimate the bound as 2N − (1/2) log N. A better lower bound is known for this case: 2N − 1 comparisons are necessary. Merging two lists of different sizes M and N likewise requires at least log C(M + N, N) comparisons.

7.30 It takes O(1) to insert each element into a bucket, for a total of O(N). It takes O(1) to extract each element from a bucket, for O(M). We waste at most O(1) examining each empty bucket, for a total of O(M). Adding these estimates gives O(M + N). (A minimal sketch appears after 7.31.)

7.31 We add a dummy (N + 1)th element, which we'll call Maybe. Maybe satisfies false < Maybe.
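The counting in 7.30 corresponds to a routine like the following minimal sketch (ours, not the text's), assuming integer keys in the range 0..M−1:
_______________________________________________________________________________

/* Bucket sort for N integer keys in the range 0..M-1.              */
/* Insertions cost O(N), extractions O(N), and examining the        */
/* buckets (even the empty ones) costs O(M), for O(M + N) total.    */
#include <stdlib.h>

void
BucketSort( int A[ ], int N, int M )
{
    int i, j, k = 0;
    int *Count = calloc( M, sizeof( int ) );   /* M empty buckets */

    for( i = 0; i < N; i++ )
        Count[ A[ i ] ]++;
    for( j = 0; j < M; j++ )
        while( Count[ j ] > 0 )
        {
            A[ k++ ] = j;
            Count[ j ]--;
        }
    free( Count );
}
_______________________________________________________________________________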

7.33 (a) ⌈log 4!⌉ = 5.
(b) Compare and exchange (if necessary) a1 and a2 so that a1 ≥ a2, and repeat with a3 and a4. Compare and exchange a1 and a3. Compare and exchange a2 and a4. Finally, compare and exchange a2 and a3.

7.34 (a) ⌈log 5!⌉ = 7.
(b) Compare and exchange (if necessary) a1 and a2 so that a1 ≥ a2, and repeat with a3 and a4 so that a3 ≥ a4. Compare and exchange (if necessary) the two winners, a1 and a3. Assume without loss of generality that we now have a1 ≥ a3 ≥ a4, and a1 ≥ a2. (The other case is obviously identical.) Insert a5 by binary search in the appropriate place among a1, a3, a4. This can be done in two comparisons. Finally, insert a2 among a3, a4, a5. If it is the largest among those three, then it goes directly after a1, since it is already known to be smaller than a1. This takes two more comparisons by a binary search. The total is thus seven comparisons.

7.38 (a) For the given input, the pivot is 2. It is swapped with the last element. i will point at the second element, and j will be stopped at the first element. Since the pointers have crossed, the pivot is swapped with the element in position 2. The input is now 1, 2, 4, 5, 6, ..., N − 1, N, 3. The recursive call on the right subarray is thus on an increasing sequence of numbers, except for the last number, which is the smallest. This is exactly the same form as the original. Thus each recursive call will have only two fewer elements than the previous one. The running time will be quadratic.
(b) Although the first pivot generates equal partitions, both the left and right halves will have the same form as part (a). Thus the running time will be quadratic because after the first partition, the algorithm will grind slowly. This is one of the many interesting tidbits in reference [20].


7.39 We show that in a binary tree with L leaves, the average depth of a leaf is at least log L. We can prove this by induction. Clearly, the claim is true if L = 1. Suppose it is true for trees with up to L − 1 leaves. Consider a tree of L leaves with minimum average leaf depth. Clearly, the root of such a tree must have non-NULL left and right subtrees. Suppose that the left subtree has LL leaves, and the right subtree has LR leaves. By the inductive hypothesis, the total depth of the leaves (which is their average times their number) in the left subtree is LL(1 + log LL), and the total depth of the right subtree's leaves is LR(1 + log LR) (because the leaves in the subtrees are one deeper with respect to the root of the tree than with respect to the root of their subtree). Thus the total depth of all the leaves is L + LL log LL + LR log LR. Since f(x) = x log x is convex for x ≥ 1, we know that f(x) + f(y) ≥ 2f((x + y)/2). Thus, the total depth of all the leaves is at least L + 2(L/2) log(L/2) ≥ L + L(log L − 1) ≥ L log L. Thus the average leaf depth is at least log L.


Chapter 8: The Disjoint Set ADT

8.1 We assume that unions operate on the roots of the trees containing the arguments. Also, in case of ties, the second tree is made a child of the first. Arbitrary union and union by height give the same answer (shown as the first tree) for this problem. Union by size gives the second tree. [The two trees are not reproduced here.]


8.2 In both cases, have nodes 16 and 17 point directly to the root.

8.4 Claim: A tree of height H has at least 2^H nodes. The proof is by induction. A tree of height 0 clearly has at least 1 node, and a tree of height 1 clearly has at least 2. Let T be the tree of height H with fewest nodes. Thus at the time of T's last union, it must have been a tree of height H − 1, since otherwise T would have been smaller at that time than it is now and still would have been of height H, which is impossible by the assumption of T's minimality. Since T's height was updated, it must have been as a result of a union with another tree of height H − 1. By the induction hypothesis, we know that at the time of the union, T had at least 2^{H−1} nodes, as did the tree attached to it, for a total of 2^H nodes, proving the claim. Thus an N-node tree has depth at most ⌊log N⌋.

8.5

All answers are O(M) because in all cases α(M, N) = 1.

8.6

Assuming that the graph has only nine vertices, the union/find tree that is formed is shown here. The edge (4,6) does not result in a union because at the time it is examined, 4 and 6 are already in the same component. The connected components are {1,2,3,4,6} and


{5,7,8,9}. [The tree is not reproduced here.]


8.8 (a) When we perform a union, we push onto a stack the two roots and the old values of their parents. To implement a Deunion, we only have to pop the stack and restore the values. This strategy works fine in the absence of path compression.
(b) If path compression is implemented, the strategy described in part (a) does not work because path compression moves elements out of subtrees. For instance, the sequence Union(1,2), Union(3,4), Union(1,3), Find(4), Deunion(1,3) will leave 4 in set 1 if path compression is implemented.

8.9

We assume that the tree is implemented with pointers instead of a simple array. Thus Find will return a pointer instead of an actual set name. We will keep an array to map set numbers to their tree nodes. Union and Find are implemented in the standard manner. To perform Remove(X), first perform a Find(X) with path compression. Then mark the node containing X as vacant. Create a new one-node tree with X and have it pointed to by the appropriate array entry. The time to perform a Remove is the same as the time to perform a Find, except that there potentially could be a large number of vacant nodes. To take care of this, after N Removes are performed, perform a Find on every node, with path compression. If a Find(X) returns a vacant root, then place X in the root node, and make the old node containing X vacant. The results of Exercise 8.11 guarantee that this will take linear time, which can be charged to the N Removes. At this point, all vacant nodes (indeed all nonroot nodes) are children of a root, and vacant nodes can be disposed of (if an array of pointers to them has been kept). This also guarantees that there are never more than 2N nodes in the forest and preserves the M α(M, N) asymptotic time bound.

8.11 Suppose there are u Unions and f Finds. Each union costs constant time, for a total of u. A Find costs one unit per vertex visited. We charge, as in the text, under the following slightly modified rules:
(A) the vertex is a root or child of the root
(B) otherwise
Essentially, all vertices are in one rank group. During any Find, there can be at most two rule (A) charges, for a total of 2f. Each vertex can be charged at most once under rule (B) because after path compression it will be a child of the root. The number of vertices that are not roots or children of roots is clearly bounded by u, independent of the unioning strategy, because each Union changes exactly one vertex from root to nonroot status, and this bounds the number of type (B) nodes. Thus the total rule (B) charges are at most u. Adding all charges gives a bound of 2f + 2u, which is linear in the number of operations.

8.13 For each vertex v, let the pseudorank R_v be defined as ⌊log S_v⌋, where S_v is the number of descendants (including itself) of v in the final tree, after all Unions are performed, ignoring


path compression. Although the pseudorank is not maintained by the algorithm, it is not hard to show that the pseudorank satisfies the same properties as the ranks do in union-by-rank. Clearly, a vertex with pseudorank R_v has at least 2^{R_v} descendants (by its definition), and the number of vertices of pseudorank R is at most N/2^R. The union-by-size rule ensures that the parent of a node has twice as many descendants as the node, so the pseudoranks monotonically increase on the path toward the root if there is no path compression. The argument in Lemma 8.3 tells us that path compression does not destroy this property. If we partition the vertices by pseudoranks and assign the charges in the same manner as in the text proof for union-by-rank, the same steps follow, and the identical bound is obtained.

8.14 This is most conveniently implemented without recursion and is faster because, even if full path compression is implemented nonrecursively, it requires two passes up the tree; this requires only one. We leave the full coding to the reader, since comparing the various Union and Find strategies is a reasonable programming project; a minimal sketch of the Find appears below. The worst-case running time remains the same because the properties of the ranks are unchanged. Instead of charging one unit to each vertex on the path to the root, we can charge two units to alternating vertices (namely, the vertices whose parents are altered by path halving). These vertices get parents of higher rank, as before, and the same kind of analysis bounds the total charges.
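Here is a minimal sketch of a Find with path halving (ours, not the text's), using the array-of-parents representation from the text, in which positive entries are parent indices and roots hold nonpositive values:
_______________________________________________________________________________

/* Find with path halving: every other vertex on the path is made to  */
/* point to its grandparent, so only one pass up the tree is needed.  */
/* S[ X ] > 0 is X's parent; roots hold nonpositive values.           */
int
Find( int X, int S[ ] )
{
    while( S[ X ] > 0 )
    {
        if( S[ S[ X ] ] > 0 )        /* X's parent is not a root */
            S[ X ] = S[ S[ X ] ];    /* point X to its grandparent */
        X = S[ X ];
    }
    return X;
}
_______________________________________________________________________________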


Chapter 9: Graph Algorithms

9.1

The following ordering is arrived at by using a queue and assumes that vertices appear on an adjacency list alphabetically. The topological order that results is then s, G, D, H, A, B, E, I, F, C, t

9.2

Assuming the same adjacency list, the topological order produced when a stack is used is
s, G, H, D, A, E, I, F, B, C, t
Because a topological sort that uses a queue processes vertices in the same manner as a breadth-first search, it tends to produce a more natural ordering.

9.4

The idea is the same as in Exercise 5.10.

9.5

(a) (Unweighted paths) A->B, A->C, A->B->G, A->B->E, A->C->D, A->B->E->F.
(b) (Weighted paths) A->C, A->B, A->B->G, A->B->G->E, A->B->G->E->F, A->B->G->E->D.

9.6

We'll assume that Dijkstra's algorithm is implemented with a priority queue of vertices that uses the DecreaseKey operation. Dijkstra's algorithm uses |E| DecreaseKey operations, which cost O(log_d |V|) each, and |V| DeleteMin operations, which cost O(d log_d |V|) each. The running time is thus O(|E| log_d |V| + |V| d log_d |V|). The cost of the DecreaseKey operations balances the Insert operations when d = |E|/|V|. For a sparse graph, this might give a value of d that is less than 2; we can't allow this, so d is chosen to be max(2, ⌈|E|/|V|⌉). This gives a running time of O(|E| log_{2+⌈|E|/|V|⌉} |V|), which is a slight theoretical improvement. Moret and Shapiro report (indirectly) that d-heaps do not improve the running time in practice.

9.7

(a) The following graph is an example: vertices A, B, and C, with edges A→C of cost 2, A→B of cost 3, and B→C of cost −2. Dijkstra's algorithm gives a path from A to C of cost 2, when the path from A to B to C has cost 1.
(b) We define a pass of the algorithm as follows: Pass 0 consists of marking the start vertex as known and placing its adjacent vertices on the queue. For j > 0, pass j consists of marking as known all vertices on the queue at the end of pass j − 1. Each pass requires linear time, since during a pass, a vertex is placed on the queue at most once. It is easy to show by induction that if there is a shortest path from s to v containing k edges, then d_v will equal the length of this path by the beginning of pass k. Thus there are at most |V| passes,


giving an O(|E| |V|) bound.

9.8

See the comments for Exercise 9.19.

9.10 (a) Use an array Count such that for any vertex u, Count[u] is the number of distinct shortest paths from s to u known so far. When a vertex v is marked as known, its adjacency list is traversed. Let w be a vertex on the adjacency list. If d_v + c_{v,w} = d_w, then increment Count[w] by Count[v], because all shortest paths from s to v with last edge (v,w) give a shortest path to w. If d_v + c_{v,w} < d_w, then p_w and d_w get updated. All previously known shortest paths to w are now invalid, but all shortest paths to v now lead to shortest paths for w, so set Count[w] to equal Count[v]. (Note: zero-cost edges mess up this algorithm.) A sketch of this relaxation step appears below.
(b) Use an array NumEdges such that for any vertex u, NumEdges[u] is the shortest number of edges on a path of distance d_u from s to u known so far. Thus NumEdges is used as a tiebreaker when selecting the vertex to mark. As before, v is the vertex marked known, and w is adjacent to v. If d_v + c_{v,w} = d_w, then change p_w to v and NumEdges[w] to NumEdges[v] + 1 if NumEdges[v] + 1 < NumEdges[w]. If d_v + c_{v,w} < d_w, then update p_w and d_w, and set NumEdges[w] to NumEdges[v] + 1.
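In the pseudo-C style of this manual, the relaxation step of part (a) looks roughly as follows; Dist, Cost, Path, and Count are illustrative names, not the text's:
_______________________________________________________________________________

/* Relaxation of the edges out of V, just after V is declared known. */
for( each W adjacent to V )
{
    if( Dist[ V ] + Cost[ V ][ W ] < Dist[ W ] )
    {                                    /* strictly shorter path:    */
        Dist[ W ] = Dist[ V ] + Cost[ V ][ W ];
        Path[ W ] = V;                   /* old paths are invalidated */
        Count[ W ] = Count[ V ];
    }
    else if( Dist[ V ] + Cost[ V ][ W ] == Dist[ W ] )
        Count[ W ] += Count[ V ];        /* equally short: add paths  */
}
_______________________________________________________________________________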

9.11 (This solution is not unique.) First send four units of flow along the path s, G, H, I, t. Next, send three units of flow along s, D, E, F, t. Now two units of flow are sent along the path s, G, D, A, B, C, t. One unit of flow is then sent along s, D, A, E, C, t. Finally, one unit of flow can go along the path s, A, E, C, t. The resulting residual graph has no path from s to t, so the algorithm terminates. [The intermediate residual graphs and the final flow graph, which carries 11 units, are not reproduced here.] This flow is not unique. For instance, two units of the flow that goes from G to D to A to E could go by way of G to H to E.


9.12 Let T be the tree with root r, and children r1, r2, ..., rk, which are the roots of T1, T2, ..., Tk, which have maximum incoming flow of c1, c2, ..., ck, respectively. By the problem statement, we may take the maximum incoming flow of r to be infinity. The recursive function FindMaxFlow( T, IncomingCap ) finds the value of the maximum flow in T (finding the actual flow is a matter of bookkeeping); the flow is guaranteed not to exceed IncomingCap. If T is a leaf, then FindMaxFlow returns IncomingCap, since we have assumed a sink of infinite capacity. Otherwise, a standard postorder traversal can be used to compute the maximum flow in linear time.
_______________________________________________________________________________

FlowType
FindMaxFlow( Tree T, FlowType IncomingCap )
{
    FlowType ChildFlow, TotalFlow;

    if( IsLeaf( T ) )
        return IncomingCap;
    else
    {
        TotalFlow = 0;
        for( each subtree Ti of T )    /* ci is Ti's incoming capacity */
        {
            ChildFlow = FindMaxFlow( Ti, min( IncomingCap, ci ) );
            TotalFlow += ChildFlow;
            IncomingCap -= ChildFlow;
        }
        return TotalFlow;
    }
}
_______________________________________________________________________________

9.13 (a) Assume that the graph is connected and undirected. If it is not connected, then apply the algorithm to each connected component. Initially, mark all vertices as unknown. Pick any vertex v, color it red, and perform a depth-first search. When a node is first encountered, color it blue if the DFS has just come from a red node, and red otherwise. If at any point the depth-first search encounters an edge between two identically colored vertices, then the graph is not bipartite; otherwise, it is. A breadth-first search (that is, using a queue) also works; a sketch appears after part (d). This problem, which is essentially two-coloring a graph, is clearly solvable in linear time. This contrasts with three-coloring, which is NP-complete.
(b) Construct an undirected graph with a vertex for each instructor, a vertex for each course, and an edge (v,w) if instructor v is qualified to teach course w. Such a graph is bipartite; a matching of M edges means that M courses can be covered simultaneously.
(c) Give each edge in the bipartite graph a weight of 1, and direct the edge from the instructor to the course. Add a vertex s with edges of weight 1 from s to all instructor vertices. Add a vertex t with edges of weight 1 from all course vertices to t. The maximum flow is equal to the maximum matching.


(d) The running time is O(|E| |V|^{1/2}) because this is the special case of the network flow problem mentioned in the text. All edges have unit cost, and every vertex (except s and t) has either an indegree or outdegree of 1.
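A minimal sketch of the breadth-first two-coloring from part (a), in pseudo-C; the queue operations are in the style of Chapter 3, and the Color array and its values are illustrative:
_______________________________________________________________________________

/* Two-color the component containing Start; returns 1 if it is     */
/* bipartite, 0 otherwise. Color[] starts out all Unknown.           */
int
TwoColor( Vertex Start )
{
    Vertex V, W;
    Queue Q = CreateQueue( NumVertex );

    Color[ Start ] = Red;
    Enqueue( Start, Q );
    while( !IsEmpty( Q ) )
    {
        V = FrontAndDequeue( Q );
        for( each W adjacent to V )
            if( Color[ W ] == Unknown )
            {
                Color[ W ] = Color[ V ] == Red ? Blue : Red;
                Enqueue( W, Q );
            }
            else if( Color[ W ] == Color[ V ] )
                return 0;     /* edge joins two like-colored vertices */
    }
    return 1;
}
_______________________________________________________________________________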

9.14 This is a slight modification of Dijkstra's algorithm. Let f_i be the best flow from s to i at any point in the algorithm. Initially, f_i = 0 for all vertices, except s: f_s = ∞. At each stage, we select v such that f_v is maximum among all unknown vertices. Then for each w adjacent to v, the cost of the flow to w using v as an intermediate is min(f_v, c_{v,w}). If this value is higher than the current value of f_w, then f_w and p_w are updated.

9.15 One possible minimum spanning tree is shown in the original figure; this solution is not unique. [The tree is not reproduced here.]


9.16 Both work correctly. The proof makes no use of the fact that an edge must be nonnegative.

9.17 The proof of this fact can be found in any good graph theory book. A more general theorem follows:
Theorem: Let G = (V, E) be an undirected, unweighted graph, and let A be the adjacency matrix for G (which contains either 1s or 0s). Let D be the matrix such that D[v][v] is equal to the degree of v; all nondiagonal entries are 0. Then the number of spanning trees of G is equal to any cofactor of D − A (Kirchhoff's matrix-tree theorem).

9.19 The obvious solution using elementary methods is to bucket sort the edge weights in linear time. Then the running time of Kruskal's algorithm is dominated by the Union/Find operations and is O(|E| α(|E|, |V|)). The van Emde Boas priority queues (see Chapter 6 references) give an immediate O(|E| log log |V|) running time for Dijkstra's algorithm, but this isn't even as good as a Fibonacci heap implementation. More sophisticated priority queue methods that combine these ideas have been proposed, including M. L. Fredman and D. E. Willard, "Trans-dichotomous Algorithms for Minimum Spanning Trees and Shortest Paths," Proceedings of the Thirty-first Annual IEEE Symposium on the Foundations of Computer Science (1990), 719-725. The paper presents a linear-time minimum spanning tree algorithm and an O(|E| + |V| log |V| / log log |V|) implementation of Dijkstra's algorithm if the edge costs are suitably small.

9.20 Since the minimum spanning tree algorithm works for negative edge costs, an obvious solution is to replace all the edge costs by their negatives and use the minimum spanning tree algorithm. Alternatively, change the logic so that < is replaced by >, Min by Max, and vice versa.

9.21 We start the depth-first search at A and visit adjacent vertices alphabetically. The articulation points are C, E, and F. C is an articulation point because Low[B] ≥ Num[C]; E is an


articulation point because Low[H] ≥ Num[E]; and F is an articulation point because Low[G] ≥ Num[F]. The depth-first spanning tree is shown in Fig. 9.1. [Fig. 9.1 labels each vertex with its Num/Low values: A 1/1, C 2/1, B 3/2, E 4/2, F 5/2, G 6/6, H 7/4, J 8/4, K 9/4, I 10/4, D 11/1; the drawing itself is not reproduced here.]

9.22 The only difficult part is showing that if some nonroot vertex a is an articulation point, then there is no back edge between any proper descendant of a and a proper ancestor of a in the depth-first spanning tree. We prove this by contradiction. Let u and v be two vertices such that every path from u to v goes through a. At least one of u and v is a proper descendant of a, since otherwise there is a path from u to v that avoids a. Assume without loss of generality that u is a proper descendant of a. Let c be the child of a that contains u as a descendant. If there is no back edge between a descendant of c and a proper ancestor of a, then the theorem is true immediately, so suppose for the sake of contradiction that there is a back edge (s, t). Then either v is a proper descendant of a or it isn't. In the second case, by taking a path from u to s to t to v, we can avoid a, which is a contradiction. In the first case, clearly v cannot be a descendant of c, so let c' be the child of a that contains v as a descendant. By a similar argument as before, the only possibility is that there is a back edge (s', t') between a descendant of c' and a proper ancestor of a. Then there is a path from u to s to t to t' to s' to v; this path avoids a, which is also a contradiction.


9.23 (a) Do a depth-first search and count the number of back edges.
(b) This is the feedback edge set problem. See reference [1] or [20].

9.24 Let (v,w) be a cross edge. Since at the time w is examined it is already marked, and w is not a descendant of v (else it would be a forward edge), processing for w is already complete when processing for v commences. Thus under the convention that trees (and subtrees) are placed left to right, the cross edge goes from right to left.

9.25 Suppose the vertices are numbered in preorder and postorder. If (v,w) is a tree edge, then v must have a smaller preorder number than w. It is easy to see that the converse is true. If (v,w) is a cross edge, then v must have both a larger preorder and postorder number than w. The converse is shown as follows: because v has a larger preorder number, w cannot be a descendant of v; because it has a larger postorder number, v cannot be a descendant of w; thus they must be in different trees. Otherwise, if v has a larger preorder number but a smaller postorder number, then (v,w) is a back edge. To test whether (v,w) is a back edge, keep a stack of vertices that are active in the depth-first search (that is, a stack of vertices on the path from the current root). By keeping a bit array indicating presence on the stack, we can easily decide if (v,w) is a back edge or a forward edge.

9.26 The first depth-first spanning tree is shown in the original figure [a tree on the vertices A through G; not reproduced here].


G_r, with the order in which to perform the second depth-first search, is shown next [the numbering from the first search is A,7; B,6; G,5; C,4; E,3; D,2; F,1; the drawing is not reproduced here]. The strongly connected components are {F} and all the other vertices.



9.28 This is the algorithm mentioned in the references.

9.29 As an edge (v,w) is implicitly processed, it is placed on a stack. If v is determined to be an articulation point because Low[w] ≥ Num[v], then the stack is popped until edge (v,w) is removed: the set of popped edges is a biconnected component. An edge (v,w) is not placed on the stack if the edge (w,v) was already processed as a back edge.

9.30 Let (u,v) be an edge of the breadth-first spanning tree. u and v are connected, thus they must be in the same tree. Let the root of the tree be r; if the shortest path from r to u is d_u, then u is at level d_u; likewise, v is at level d_v. If (u,v) were a back edge, then d_u > d_v, and v is visited before u. But if there were an edge between u and v, and v is visited first, then there would be a tree edge (v,u), and not a back edge (u,v). Likewise, if (u,v) were a forward edge, then there would be some w, distinct from u and v, on the tree path from u to v, so that d_v ≥ d_u + 2; this contradicts the fact that the edge (u,v) guarantees d_v ≤ d_u + 1. Thus only tree edges and cross edges are possible.

9.31 Perform a depth-first search. The return from each recursive call implies the edge traversal in the opposite direction. The time is clearly linear.

9.33 If there is an Euler circuit, then it consists of entering and exiting nodes; the number of entrances clearly must equal the number of exits. If the graph is not strongly connected, there cannot be a cycle connecting all the vertices. To prove the converse, an algorithm similar in spirit to the undirected version can be used.

9.34 Neither of the proposed algorithms works. For example, a depth-first search of the biconnected graph in the original figure that follows A, B, C, D is forced back to A, where it is stranded. [The figure is not reproduced here.]


9.35 These are classic graph theory results. Consult any graph theory text for a solution to this exercise.

9.36 All the algorithms work without modification for multigraphs.


9.37 Obviously, G must be connected. If each edge of G can be converted to a directed edge so as to produce a strongly connected graph G', then G is convertible. If the removal of a single edge disconnects G, then G is not convertible, since this would also disconnect G'. This is easy to test by checking to see if there are any single-edge biconnected components. Otherwise, perform a depth-first search on G and direct each tree edge away from the root and each back edge toward the root. The resulting graph is strongly connected because, for any vertex v, we can get to a higher level than v by taking some (possibly 0) tree edges and a back edge. We can apply this until we eventually get to the root, and then follow tree edges down to any other vertex.

9.38 (b) Define a graph where each stick is represented by a vertex. If stick S_i is above S_j and thus must be removed first, then place an edge from S_i to S_j. A legal pick-up ordering is given by a topological sort; if the graph has a cycle, then the sticks cannot be picked up.

9.39 Given an instance of clique, form the graph G' that is the complement graph of G: (v,w) is an edge in G' if and only if it is not an edge in G. Then G' has a vertex cover of at most |V| − K if G has a clique of size at least K. (The vertices that form the vertex cover are exactly those not in the clique.) The details of the proof are left to the reader.

9.40 A proof can be found in Garey and Johnson [20].

9.41 Clearly, the baseball card collector problem (BCCP) is in NP, because it is easy to check if K packets contain all the cards. To show it is NP-complete, we reduce vertex cover to it. Let G = (V, E) and K be an instance of vertex cover. For each vertex v, place all edges adjacent to v in packet P_v. The K packets will contain all edges (baseball cards) iff G can be covered by K vertices.


Chapter 10: Algorithm Design Techniques

10.1

First, we show that if N evenly divides P, then each of j_{(i−1)P+1} through j_{iP} must be placed as the ith job on some processor. Suppose otherwise. Then in the supposed optimal ordering, we must be able to find some jobs j_x and j_y such that j_x is the tth job on some processor and j_y is the (t+1)th job on some processor, but t_x > t_y. Let j_z be the job immediately following j_x. If we swap j_y and j_z, it is easy to check that the mean processing time is unchanged and thus still optimal. But now j_y follows j_x, which is impossible because we know that the jobs on any processor must be in sorted order, from the results of the one-processor case.
Let j_{e1}, j_{e2}, ..., j_{eM} be the extra jobs if N does not evenly divide P. It is easy to see that the processing time for these jobs depends only on how quickly they can be scheduled, and that they must be the last scheduled job on some processor. It is easy to see that the first M processors must have jobs j_{(i−1)P+1} through j_{iP+M}; we leave the details to the reader.

10.3

[The answer is a Huffman coding tree, which is not reproduced here.]

10.4

One method is to generate code that can be evaluated by a stack machine. The two operations are Push (the one-node tree corresponding to) a symbol onto a stack, and Combine, which pops two trees off the stack, merges them, and pushes the result back on. For the example in the text, the stack instructions are Push(s), Push(nl), Combine, Push(t), Combine, Push(a), Combine, Push(e), Combine, Push(i), Push(sp), Combine, Combine. By encoding a Combine with a 0 and a Push with a 1 followed by the symbol, the total extra space is 2N − 1 bits if all the symbols are of equal length. Generating the stack machine code can be done with a simple recursive procedure and is left to the reader.

10.6

Maintain two queues, Q1 and Q2. Q1 will store single-node trees in sorted order, and Q2 will store multinode trees in sorted order. Place the initial single-node trees on Q1, enqueueing the smallest-weight tree first. Initially, Q2 is empty. Examine the first two entries of each of Q1 and Q2, and dequeue the two smallest. (This requires an easily implemented extension to the queue ADT.) Merge the trees and place the result at the end of Q2. Continue this step until Q1 is empty and only one tree is left in Q2.


10.9

To implement first fit, we keep track of bins b_i that have more room than any of the lower-numbered bins. A theoretically easy way to do this is to maintain a splay tree ordered by empty space. To insert w, we find the smallest of these bins that has at least w empty space; after w is added to the bin, if the resulting amount of empty space is less than that of the inorder predecessor in the tree, the entry can be removed; otherwise, a DecreaseKey is performed.
To implement best fit, we need to keep track of the amount of empty space in each bin. As before, a splay tree can keep track of this. To insert an item of size w, perform an insert of w. If there is a bin that can fit the item exactly, the insert will detect it and splay it to the root; the item can be added and the root deleted. Otherwise, the insert has placed w at the root (which eventually needs to be removed). We find the minimum element M in the right subtree, which brings M to the right subtree's root, attach the left subtree to M, and delete w. We then perform an easily implemented DecreaseKey on M to reflect the fact that the bin is less empty.

10.10

Next fit: 12 bins: (.42, .25, .27), (.07, .72), (.86, .09), (.44, .50), (.68), (.73), (.31), (.78, .17), (.79), (.37), (.73, .23), (.30).
First fit: 10 bins: (.42, .25, .27), (.07, .72, .09), (.86), (.44, .50), (.68, .31), (.73, .17), (.78), (.79), (.37, .23, .30), (.73).
Best fit: 10 bins: (.42, .25, .27), (.07, .72, .09), (.86), (.44, .50), (.68, .31), (.73, .23), (.78, .17), (.79), (.37, .30), (.73).
First fit decreasing: 10 bins: (.86, .09), (.79, .17), (.78, .07), (.73, .27), (.73, .25), (.72, .23), (.68, .31), (.50, .44), (.42, .37), (.30).
Best fit decreasing: 10 bins: (.86, .09), (.79, .17), (.78), (.73, .27), (.73, .25), (.72, .23), (.68, .31), (.50, .44), (.42, .37, .07), (.30).
Note that use of 10 bins is optimal.

10.12

We prove the second case, leaving the first and third (which give the same results as Theorem 10.6) to the reader. Observe that

    log^p N = log^p (b^m) = m^p log^p b

Working this through, Equation (10.9) becomes

    T(N) = T(b^m) = a^m Σ_{i=0}^{m} (b^k / a)^i i^p log^p b

If a = b^k, then

    T(N) = a^m log^p b Σ_{i=0}^{m} i^p = O(a^m m^{p+1} log^p b)

Since m = log N / log b, a^m = N^k, and b is a constant, we obtain T(N) = O(N^k log^{p+1} N).

10.13

The easiest way to prove this is by an induction argument.


10.14

Divide the unit square into N − 1 square grids, each with side 1/√(N−1). Since there are N points, some grid must contain two points. Thus the shortest distance is conservatively given by at most √(2/(N−1)).

10.15

The results of the previous exercise imply that the width of the strip is O(1/√N). Because the strip of width O(1/√N) covers only O(1/√N) of the area of the square, we expect a similar fraction of the points to fall in the strip. Thus only O(N/√N) = O(√N) points are expected in the strip.

10.17

The recurrence works out to
T(N) = T(2N/3) + T(N/3) + O(N)
This is not linear, because the sum of the fractions is not less than one. The running time is O(N log N).

10.18

The recurrence for median-of-median-of-seven partitioning is
T(N) = T(5N/7) + T(N/7) + O(N)
If all we are concerned about is linearity, then median-of-median-of-seven can be used.

10.20

When computing the median-of-median-of-five, 30% of the elements are known to be smaller than the pivot, and 30% are known to be larger. Thus these elements do not need to be involved in the partitioning phase. (Extra work would need to be done to implement this, but since the whole algorithm isn’t practical anyway, we can ignore any extra work that doesn’t involve element comparisons.) The original paper [9] describes the exact constants in the worst-case bound, with and without this extra effort.

10.21

We derive the values of s and δ, following the style in the original paper [17]. Let R_{t,X} be the rank of element t in some sample X. If a sample S' of elements is chosen randomly from S, with |S'| = s and |S| = N, then we have already seen that

    E(R_{t,S}) = (N+1)/(s+1) · R_{t,S'}

where E means expected value. For instance, if t is the third largest in a sample of 5 elements, then in a group of 19 elements it is expected to be the tenth largest. We can also calculate the variance:

    V(R_{t,S}) = sqrt( R_{t,S'} (s − R_{t,S'} + 1)(N+1)(N−s) / ((s+1)²(s+2)) ) = O(N/√s)

We choose v1 and v2 so that

    E(R_{v1,S}) + 2d·V(R_{v1,S}) ≈ k ≈ E(R_{v2,S}) − 2d·V(R_{v2,S})

where d indicates how many variances we allow. (The larger d is, the less likely it is that the element we are looking for fails to lie between v1 and v2.) The probability that k is not between v1 and v2 is

    ∫_d^∞ erf(x) dx = O(e^{−d²}/d)

If d = log^{1/2} N, then this probability is o(1/N), specifically O(1/(N log N)). This means that the expected extra work in that case is O(log^{−1} N), because O(N) work is performed with very small probability.
These mean and variance equations imply

    R_{v1,S'} ≥ k(s+1)/(N+1) − d√s    and    R_{v2,S'} ≤ k(s+1)/(N+1) + d√s

This gives Equation (A):

    δ = d√s = √s log^{1/2} N    (A)

If we first pivot around v2, the cost is N comparisons. If we now partition the elements of S that are less than v2 around v1, the cost is R_{v2,S}, which has expected value k + δ(N+1)/(s+1). Thus the total cost of partitioning is N + k + δ(N+1)/(s+1). The cost of the selections to find v1 and v2 in the sample S' is O(s). Thus the total expected number of comparisons is

    N + k + O(s) + O(Nδ/s)

The low-order term is minimized when

    s = Nδ/s    (B)

Combining Equations (A) and (B), we see that

    s² = Nδ = √s N log^{1/2} N    (C)
    s^{3/2} = N log^{1/2} N    (D)
    s = N^{2/3} log^{1/3} N    (E)
    δ = N^{1/3} log^{2/3} N    (F)

10.22

First, we calculate 12*43. In this case, XL = 1, XR = 2, YL = 4, YR = 3, D1 = −1, D2 = −1, XLYL = 4, XRYR = 6, D1D2 = 1, D3 = 11, and the result is 516.
Next, we calculate 34*21. In this case, XL = 3, XR = 4, YL = 2, YR = 1, D1 = −1, D2 = −1, XLYL = 6, XRYR = 4, D1D2 = 1, D3 = 11, and the result is 714.
Third, we calculate 22*22. Here, XL = 2, XR = 2, YL = 2, YR = 2, D1 = 0, D2 = 0, XLYL = 4, XRYR = 4, D1D2 = 0, D3 = 8, and the result is 484.
Finally, we calculate 1234*4321. XL = 12, XR = 34, YL = 43, YR = 21, D1 = −22, D2 = −2. By the previous calculations, XLYL = 516, XRYR = 714, and D1D2 = 484. Thus D3 = 1714, and the result is 714 + 171400 + 5160000 = 5332114.

10.23 The multiplication evaluates to (ac − bd) + (bc + ad)i. Compute ac, bd, and (a − b)(d − c); then bc + ad = (a − b)(d − c) + ac + bd.

10.24

The algebra is easy to verify. The problem with this method is that if X and Y are positive N-bit numbers, their sum might be an (N+1)-bit number. This causes complications.

10.26

Matrix multiplication is not commutative, so the algorithm couldn’t be used recursively on matrices if commutativity was used.



10.27

If the algorithm doesn't use commutativity (which turns out to be true), then a divide and conquer algorithm gives a running time of O(N^{log_{70} 143640}) = O(N^{2.795}).

10.28

1150 scalar multiplications are used if the order of evaluation is ((A1 A2)(((A3 A4) A5) A6)).

10.29

(a) Let the chain be a 1x1 matrix, a 1xA matrix, and an AxB matrix. Multiplying the first two matrices first makes the cost of the chain A + AB. The alternative method gives a cost of AB + B, so if A > B, then the algorithm fails. Thus, a counterexample is multiplying a 1x1 matrix by a 1x3 matrix by a 3x2 matrix.
(b, c) A counterexample is multiplying a 1x1 matrix by a 1x2 matrix by a 2x3 matrix.

10.31

The optimal binary search tree is the same one that would be obtained by a greedy strategy: "I" is at the root and has children "and" and "it"; "a" and "or" are leaves; the total cost is 2.14.

10.33

This theorem is from F. Yao’s paper, reference [58].

10.34 A recursive procedure is clearly called for; if there is an intermediate vertex, StopOver, on the path from s to t, then we want to print out the path from s to StopOver and then from StopOver to t. We don't want to print out StopOver twice, however, so the procedure does not print out the first or last vertex on the path and reserves that for the driver.
_______________________________________________________________________________

/* Print the path between S and T, except do not print */
/* the first or last vertex. Print a trailing " to " only. */
void
PrintPath1( TwoDArray Path, int S, int T )
{
    int StopOver = Path[ S ][ T ];

    if( S != T && StopOver != 0 )
    {
        PrintPath1( Path, S, StopOver );
        printf( "%d to ", StopOver );
        PrintPath1( Path, StopOver, T );
    }
}

/* Assume the existence of a Path of length at least 1 */
void
PrintPath( TwoDArray Path, int S, int T )
{
    printf( "%d to ", S );
    PrintPath1( Path, S, T );
    printf( "%d", T );
    NewLine( );
}
_______________________________________________________________________________


10.35

Many random number generators are poor. The default UNIX random number generator rand uses a modulus of the form 2^b, as does the VAX/VMS random number generator. UNIX does, however, provide better random number generators in the form of random. The Turbo random number generators are likewise deficient. The paper by Park and Miller [44] discusses the random number generators on many machines.

10.38

If the modulus is a power of two, then the least significant bit of the "random" number oscillates. Thus Flip will always return heads and tails alternately, and the level chosen for a skip list insertion will always be one or two. Consequently, the performance of the skip list will be Θ(N) per operation.

10.39

(a) 2^5 ≡ 32 (mod 341) and 2^10 ≡ 1 (mod 341). Since 32² ≡ 1 (mod 341), and 32 is neither 1 nor 340, this proves that 341 is not prime. We can also immediately conclude that 2^340 ≡ 1 (mod 341) by raising the last equation to the 34th power. The exponentiation would continue as follows: 2^20 ≡ 1, 2^21 ≡ 2, 2^42 ≡ 4, 2^84 ≡ 16, 2^85 ≡ 32, 2^170 ≡ 1, and 2^340 ≡ 1 (all mod 341).
(b) If A = 2, then although 2^560 ≡ 1 (mod 561), 2^280 ≡ 1 (mod 561) proves that 561 is not prime (the algorithm computes 2^140 ≡ 67, and squaring 67 yields 1, a nontrivial square root). If A = 3, then 3^560 ≡ 375 (mod 561), which proves that 561 is not prime. A = 4 obviously doesn't fool the algorithm, since 4^140 ≡ 1 (mod 561). A = 5 fools the algorithm: 5^1 ≡ 5, 5^2 ≡ 25, 5^4 ≡ 64, 5^8 ≡ 169, 5^16 ≡ 511, 5^17 ≡ 311, 5^34 ≡ 229, 5^35 ≡ 23, 5^70 ≡ 529, 5^140 ≡ 463, 5^280 ≡ 67, and 5^560 ≡ 1 (all mod 561).
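These chains come from exponentiation by recursive squaring. The following sketch (ours; the text's actual routine differs in details) reproduces them and flags the nontrivial-square-root-of-1 test used above to declare a number composite; it assumes N is small enough that (N−1)² fits in a long:
_______________________________________________________________________________

/* Compute A^P mod N by recursive squaring. If some intermediate     */
/* value X satisfies X*X = 1 (mod N) with X != 1 and X != N-1, then   */
/* N cannot be prime, and *WitnessFound is set.                       */
long
PowMod( long A, long P, long N, int *WitnessFound )
{
    long X, Y;

    if( P == 0 )
        return 1;
    X = PowMod( A, P / 2, N, WitnessFound );
    Y = X * X % N;
    if( Y == 1 && X != 1 && X != N - 1 )
        *WitnessFound = 1;    /* nontrivial square root of 1 */
    if( P % 2 != 0 )
        Y = A * Y % N;
    return Y;
}
_______________________________________________________________________________

For example, PowMod( 2, 340, 341, &W ) returns 1, but W is set when 2^5 ≡ 32 is squared to give 2^10 ≡ 1, exposing 341 as composite.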

10.41

The two point sets are {0, 4, 6, 9, 16, 17} and {0, 5, 7, 13, 16, 17}.

10.42

To find all point sets, we backtrack even if Found == true, and print out information when line 2 is executed. In the case where there are some duplicate distances, it is possible that several symmetries will be printed.

10.43 [The answer consists of two game trees with alternating Max and Min levels and the computed value at each node; the figures are not reproduced here.]

10.44 [The answer is a game tree annotated with the computed values; the figure is not reproduced here.]
Actually, it implements both; Alpha is lowered for subsequent recursive calls on line 9 when Value is changed at line 12. Beta is used to terminate the while loop when a line of play leads to a position so good that it is obvious the human player (who has already seen a more favorable line for himself or herself before making this call) won't play into it. To implement the complementary routine, switch the roles of the human and computer. Lines that change are 3 and 4 (obvious changes), 5 (replace Alpha with Beta), 6 (replace *Value with Alpha), 8 (obvious), 9 (replace Human ... *Value, Beta with Comp ... Alpha, *Value), and 11 (obvious).


10.46 We place circles in order. Suppose we are trying to place circle j, of radius r_j. If some circle i of radius r_i is centered at x_i, then j is tangent to i if it is placed at x_i + 2√(r_i r_j). To see this, notice that the line connecting the centers has length r_i + r_j, and the difference in y-coordinates of the centers is |r_j − r_i|. The difference in x-coordinates follows from the Pythagorean theorem.
To place circle j, we compute where it would be placed if it were tangent to each of the first j − 1 circles, selecting the maximum value. If this value is less than r_j, then we place circle j at x_j = r_j. The running time is O(N²).

10.47 Construct a minimum spanning tree T of G, pick any vertex in the graph, and then find a path in T that goes through every edge exactly once in each direction. (This is done by a depth-first search; see Exercise 9.31.) This path has twice the cost of the minimum spanning tree, but it is not a simple cycle. Make it a simple cycle, without increasing the total cost, by bypassing a vertex when it is seen a second time (except that if the start vertex is seen, close the cycle) and going to the next unseen vertex on the path, possibly bypassing several vertices in the process. The cost of this direct route cannot be larger than the original because of the triangle inequality.
If there were a tour of cost K, then by removing one edge on the tour, we would have a minimum spanning tree of cost less than K (assuming that edge weights are positive). Thus the minimum spanning tree is a lower bound on the optimal traveling salesman tour. This implies that the algorithm is within a factor of 2 of optimal.

10.48

If there are two players, then the problem is easy, so assume k > 1. If the players are numbered 1 through N, then divide them into two groups: 1 through N/2 and N/2 + 1 through N. On the ith day, for 1 ≤ i ≤ N/2, player p in the second group plays player ((p + i) mod (N/2)) + 1 in the first group. Thus after N/2 days, everyone in group 1 has played everyone in group 2. In the last N/2 − 1 days, recursively conduct round-robin tournaments for the two groups of players.

10.49

Divide the players into two groups of size ⌈N/2⌉ and ⌊N/2⌋, respectively, and recursively arrange the players in each group in any order. Then merge the two lists (declare that p_x > p_y if x has defeated y, and p_y > p_x if y has defeated x; exactly one is possible) in linear time, as is done in mergesort.

10.50, 10.51 Divide and conquer algorithms (among others) can be used for both problems, but neither is trivial to implement. See the computational geometry references for more information.


10.52

(a) Use dynamic programming. Let S_k = the best setting of words w_k, w_{k+1}, ..., w_N; U_k = the ugliness of this setting; and l_k = (a pointer to) the word that starts the second line in this setting. To compute S_{k−1}, try putting w_{k−1}, w_k, ..., w_M all on the first line, for each M ≥ k − 1 with Σ_{i=k−1}^{M} w_i < L. Compute the ugliness of each of these possibilities by, for each M, computing the ugliness of setting the first line and adding U_{M+1}. Let M' be the value of M that yields the minimum ugliness. Then U_{k−1} = this value, and l_{k−1} = M' + 1. Compute values of U and l starting with the last word and working back to the first. The minimum ugliness of the paragraph is U_1; the actual setting can be found by starting at l_1 and following the pointers in l, since this yields the first word on each line.
(b) The running time is quadratic in the case where the number of words that can fit on a line is consistently Θ(N). The space is linear, to keep the arrays U and l. If the line length is restricted to some constant, then the running time is linear, because only O(1) words can go on a line.
(c) Put as many words on a line as will fit. This clearly minimizes the number of lines, and hence the ugliness, as can be shown by a simple calculation.

10.53

An obvious O(N²) solution is to construct a graph with vertices 1, 2, ..., N and place an edge (v,w) in G iff v < w and a_v < a_w. This graph must be acyclic, so its longest path can be found in time linear in the number of edges; the whole computation thus takes O(N²) time.
For a faster algorithm, let BEST(k) be the increasing subsequence of exactly k elements that has the minimum last element, let LAST(k) be that last element, and let t be the length of the maximum increasing subsequence. We show how to update BEST(k) as we scan the input array. It is easy to show that if i < j, then LAST(i) < LAST(j), so when a new element a is scanned, a binary search finds the largest k with LAST(k) < a, and BEST(k+1) is updated (extending t if necessary); this gives an O(N log N) algorithm.

10.54

Let LCS(A, M, B, N) be the longest common subsequence of A_1, A_2, ..., A_M and B_1, B_2, ..., B_N. If either M or N is zero, then the longest common subsequence is the empty string. If A_M = B_N, then LCS(A, M, B, N) = LCS(A, M−1, B, N−1) followed by A_M. Otherwise, LCS(A, M, B, N) is the longer of LCS(A, M, B, N−1) and LCS(A, M−1, B, N). This yields a standard dynamic programming solution.
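A sketch of the standard table computation follows; it returns only the length (the subsequence itself is recovered by walking the table backward). MaxM and MaxN are assumed compile-time bounds, and the strings are treated as 1-based.

#define MaxM 100
#define MaxN 100

int LcsLength( const char A[ ], int M, const char B[ ], int N )
{
    static int C[ MaxM + 1 ][ MaxN + 1 ];   /* C[i][j] = length of LCS(A,i,B,j) */
    int i, j;

    for( i = 0; i <= M; i++ )
        C[ i ][ 0 ] = 0;                    /* Empty B: empty LCS */
    for( j = 0; j <= N; j++ )
        C[ 0 ][ j ] = 0;                    /* Empty A: empty LCS */
    for( i = 1; i <= M; i++ )
        for( j = 1; j <= N; j++ )
            if( A[ i ] == B[ j ] )
                C[ i ][ j ] = C[ i - 1 ][ j - 1 ] + 1;
            else if( C[ i - 1 ][ j ] > C[ i ][ j - 1 ] )
                C[ i ][ j ] = C[ i - 1 ][ j ];
            else
                C[ i ][ j ] = C[ i ][ j - 1 ];
    return C[ M ][ N ];
}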

10.56

(a) A dynamic programming solution resolves part (a). Let FITS(i, s) be 1 if a subset of the first i items sums to exactly s; FITS(i, 0) is always 1. Then FITS(x, t) is 1 if either FITS(x − 1, t − a_x) or FITS(x − 1, t) is 1, and 0 otherwise.
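A sketch of the table computation, with assumed bounds MaxN and MaxK and a 1-based item array:

#define MaxN 100
#define MaxK 10000

/* Returns 1 iff some subset of a[1..N] sums to exactly K. */
int Fits( const int a[ ], int N, int K )
{
    static char F[ MaxN + 1 ][ MaxK + 1 ];
    int i, s;

    for( i = 0; i <= N; i++ )
        F[ i ][ 0 ] = 1;                    /* The empty subset sums to 0 */
    for( s = 1; s <= K; s++ )
        F[ 0 ][ s ] = 0;
    for( i = 1; i <= N; i++ )
        for( s = 1; s <= K; s++ )           /* Either use item i, or don't */
            F[ i ][ s ] = F[ i - 1 ][ s ] ||
                          ( s >= a[ i ] && F[ i - 1 ][ s - a[ i ] ] );
    return F[ N ][ K ];
}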

(b) This doesn’t show that P = NP because the size of the problem is a function of N and log K. Only log K bits are needed to represent K; thus an O(NK) solution is exponential in the input size.


10.57

(a) Let the minimum number of coins required to give x cents in change be COIN(x); COIN(0) = 0. Then COIN(x) is one more than the minimum value of COIN(x − c_i) over all coin values c_i ≤ x, giving a dynamic programming solution.

(b) Let WAYS(x, i) be the number of ways to make x cents in change without using the first i coin types. If there are N types of coins, then WAYS(x, N) = 0 if x ≠ 0, and WAYS(0, i) = 1. Then WAYS(x, i − 1) is equal to the sum of WAYS(x − p·c_i, i), for integer values of p no larger than x/c_i (but including 0).
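A sketch of part (a), assuming the coin values include a 1-cent coin (so every amount is reachable and the minimum below is always defined); all names are illustrative.

#include <limits.h>

/* Coins[ x ] becomes the minimum number of coins making x cents, */
/* given coin values c[ 0..NumCoins-1 ]. */
void MakeChange( const int c[ ], int NumCoins, int MaxCents, int Coins[ ] )
{
    int x, i;

    Coins[ 0 ] = 0;                      /* COIN(0) = 0 */
    for( x = 1; x <= MaxCents; x++ )
    {
        Coins[ x ] = INT_MAX;
        for( i = 0; i < NumCoins; i++ )  /* 1 + min over i of COIN(x - c[i]) */
            if( c[ i ] <= x && Coins[ x - c[ i ] ] + 1 < Coins[ x ] )
                Coins[ x ] = Coins[ x - c[ i ] ] + 1;
    }
}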

10.58

(a) Place eight queens randomly on the board, making sure that no two are on the same row or column. This is done by generating a random permutation of 1..8. There are only 8! = 40,320 such permutations, and 92 of these give a solution.
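A sketch of the random placement, using the shuffle of Exercise 2.7 and the RandInt routine assumed in Exercise 4.29; Perm[ i ] gives the row of the queen in column i, so no two queens share a row or column.

int RandInt( int Lower, int Upper );    /* Assumed uniform random integer */

void RandomQueens( int Perm[ ] )
{
    int i, j, Tmp;

    for( i = 0; i < 8; i++ )
        Perm[ i ] = i + 1;
    for( i = 7; i > 0; i-- )            /* Shuffle into a random permutation */
    {
        j = RandInt( 0, i );            /* Uniform on 0..i */
        Tmp = Perm[ i ]; Perm[ i ] = Perm[ j ]; Perm[ j ] = Tmp;
    }
}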

10.59

(a) Since the knight leaves every square once, it makes B^2 moves. If the squares are alternately colored black and white like a checkerboard, then a knight always moves to a different-colored square. If B is odd, then so is B^2, which means that at the end of the tour the knight will be on a different-colored square than at the start of the tour. Thus the knight cannot be at the original square.

10.60

(a) If the graph has a cycle, then the recursion does not always make progress toward a base case, and thus an infinite loop will result. (b) If the graph is acyclic, the recursive call makes progress, and the algorithm terminates. This could be proved formally by induction. (c) This algorithm is exponential.


Chapter 11: Amortized Analysis

11.1

When the number of trees after the insertions is more than the number before.

11.2

Although each insertion takes roughly log N, and each DeleteMin takes 2 log N actual time, our accounting system is charging these particular operations as 2 for the insertion and 3 log N − 2 for the DeleteMin. The total time is still the same; this is an accounting gimmick. If the number of insertions and DeleteMins are roughly equivalent, then it really is just a gimmick and not very meaningful; the bound has more significance if, for instance, there are N insertions and O(N / log N) DeleteMins (in which case, the total time is linear).

11.3

Insert the sequence N, N + 1, N − 1, N + 2, N − 2, N + 3, ..., 1, 2N into an initially empty skew heap. The right path has N nodes, so any operation could take Ω(N) time.

11.5

We implement DecreaseKey(X, H) as follows: if lowering the value of X creates a heap-order violation, then cut X from its parent, which creates a new skew heap H1 with the new value of X as a root, and also makes the old skew heap H smaller. This operation might also increase the potential of H, but only by at most log N. We now merge H and H1. The total amortized time of the Merge is O(log N), so the total time of the DecreaseKey operation is O(log N).

11.8

For the zig-zig case, the actual cost is 2, and the potential change is R_f(X) + R_f(P) + R_f(G) − R_i(X) − R_i(P) − R_i(G). This gives an amortized time bound of

    AT_zig-zig = 2 + R_f(X) + R_f(P) + R_f(G) − R_i(X) − R_i(P) − R_i(G).

Since R_f(X) = R_i(G), this reduces to

    AT_zig-zig = 2 + R_f(P) + R_f(G) − R_i(X) − R_i(P).

Also, R_f(X) ≥ R_f(P) and R_i(X) ≤ R_i(P), so

    AT_zig-zig ≤ 2 + R_f(X) + R_f(G) − 2R_i(X).

Since S_i(X) + S_f(G) ≤ S_f(X), it follows that R_i(X) + R_f(G) ≤ 2R_f(X) − 2. Thus

    AT_zig-zig ≤ 3R_f(X) − 3R_i(X).

11.9

(a) Choose W(i) = 1/N for each item. Then for any access of node X, R_f(X) = 0 and R_i(X) ≥ −log N, so the amortized access time for each item is at most 3 log N + 1, and the net potential drop over the sequence is at most N log N, giving a bound of O(M log N + M + N log N), as claimed.

(b) Assign a weight of q_i/M to item i. Then R_f(X) = 0 and R_i(X) ≥ log(q_i/M), so the amortized cost of accessing item i is at most 3 log(M/q_i) + 1, and the theorem follows immediately.

11.10

(a) To merge two splay trees T1 and T2, we access each node in the smaller tree and insert it into the larger tree. Each time a node is accessed, it joins a tree that is at least twice as large; thus a node can be inserted log N times. This tells us that in any sequence of N − 1 merges, there are at most N log N inserts, giving a time bound of O(N log^2 N). This presumes that we keep track of the tree sizes. Philosophically, this is ugly since it defeats the purpose of self-adjustment.

(b) Port and Moffet [6] suggest the following algorithm: If T2 is the smaller tree, insert its root into T1. Then recursively merge the left subtrees of T1 and T2, and recursively merge their right subtrees. This algorithm is not analyzed; a variant in which the median of T2 is splayed to the root first is, with a claim of O(N log N) for the sequence of merges.

11.11

The potential function is c times the number of insertions since the last rehashing step, where c is a constant. For an insertion that doesn’t require rehashing, the actual time is 1, and the potential increases by c, for a cost of 1 + c.

If an insertion causes a table to be rehashed from size S to 2S, then the actual cost is 1 + dS, where dS represents the cost of initializing the new table and copying the old table back. A table that is rehashed when it reaches size S was last rehashed at size S/2, so S/2 insertions had taken place prior to the rehash, and the initial potential was cS/2. The new potential is 0, so the potential change is −cS/2, giving an amortized bound of (d − c/2)S + 1. We choose c = 2d, and obtain an O(1) amortized bound in both cases.
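A sketch of the insertion step being charged, under assumed names: a NumElements field, and a Rehash in the style of the text that returns the new, twice-as-large table. This is an illustration of the accounting, not the text's code.

HashTable InsertAndMaybeExpand( ElementType Key, HashTable H )
{
    Insert( Key, H );                         /* Actual cost: O(1) */
    if( ++H->NumElements == H->TableSize )    /* Table has filled up */
        H = Rehash( H );                      /* Actual cost: O(TableSize), paid
                                                 for by the potential c times the
                                                 insertions since the last rehash */
    return H;
}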

11.12

We show that the amortized number of node splits is 1 per insertion. The potential function is the number of three-child nodes in T. If the actual number of node splits for an insertion is s, then the change in the potential function is at most 1 − s, because each split converts a three-child node into two two-child nodes, but the parent of the last node split gains a third child (unless it is the root). Thus an insertion costs 1 node split, amortized. An N-node tree has N units of potential that might be converted to actual time, so the total cost is O(M + N). (If we start from an initially empty tree, then the bound is O(M).)

11.13

(a) This problem is similar to Exercise 3.22. The first four operations are easy to implement by placing two stacks, SL and SR, next to each other (with bottoms touching). We can implement the fifth operation by using two more stacks, ML and MR (which hold minimums).

As long as neither SL nor SR is empty, the operations can be implemented as follows:

Push(X,D): push X onto SL; if X is smaller than or equal to the top of ML, push X onto ML as well.

Inject(X,D): same operation as Push, except use SR and MR.

Pop(D): pop SL; if the popped item is equal to the top of ML, then pop ML as well.

Eject(D): same operation as Pop, except use SR and MR.

FindMin(D): return the minimum of the tops of ML and MR.

These operations don’t work if either SL or SR is empty. If a Pop or Eject is attempted on an empty stack, then we clear ML and MR. We then redistribute the elements so that half are in SL and the rest in SR, and adjust ML and MR to reflect what the state would be. We can then perform the Pop or Eject in the normal fashion. Fig. 11.1 shows a transformation: with the deque holding 3, 1, 4, 6, 5, 9, 2, 6 entirely in SL (ML holds 1, 2, 6), the reorganization leaves 3, 1, 4, 6 in SL (ML holds 1, 4, 6) and 5, 9, 2, 6 in SR (MR holds 5, 2).

Define the potential function to be the absolute value of the number of elements in SL minus the number of elements in SR. Any operation that doesn’t empty SL or SR can increase the potential by only 1; since the actual time for these operations is constant, so is the amortized time. To complete the proof, we show that the cost of a reorganization is O(1) amortized time. Without loss of generality, if SR is empty, then the actual cost of the reorganization is |SL| units. The potential before the reorganization is |SL|; afterward, it is at most 1. Thus the potential change is 1 − |SL|, and the amortized bound follows.
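A sketch of two of the operations, assuming the deque D bundles the four stacks and the usual stack routines from Chapter 3 (Push, Top, IsEmpty); all names here are illustrative.

struct DequeRecord
{
    Stack SL, SR;                    /* The elements of the deque */
    Stack ML, MR;                    /* Running minima for each side */
};
typedef struct DequeRecord *Deque;

void DequePush( ElementType X, Deque D )
{
    Push( X, D->SL );
    if( IsEmpty( D->ML ) || X <= Top( D->ML ) )
        Push( X, D->ML );            /* X is the new left-side minimum */
}

ElementType DequeFindMin( Deque D )
{
    ElementType L, R;

    if( IsEmpty( D->ML ) )           /* All elements are on the right */
        return Top( D->MR );
    if( IsEmpty( D->MR ) )           /* All elements are on the left */
        return Top( D->ML );
    L = Top( D->ML );
    R = Top( D->MR );
    return L < R ? L : R;
}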


Chapter 12: Advanced Data Structures and Implementation

12.3

Incorporate an additional field for each node that indicates the size of its subtree. These fields are easy to update during a splay. This is difficult to do in a skip list.
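A sketch of the augmentation (the field and helper names are illustrative): after a rotation, FixSize is called on each node whose children changed, lower node first.

struct SplayNode
{
    ElementType Element;
    struct SplayNode *Left, *Right;
    int Size;                            /* Number of nodes in this subtree */
};

static int NodeSize( struct SplayNode *T )
{
    return T != NULL ? T->Size : 0;
}

static void FixSize( struct SplayNode *T )   /* Call after each rotation on the */
{                                            /* nodes whose children changed */
    T->Size = NodeSize( T->Left ) + NodeSize( T->Right ) + 1;
}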

12.6

If there are B black nodes on every path from the root to a leaf, it is easy to show by induction that there are at least 2^B leaves. Consequently, the number of black nodes on a path is at most log N. Since there can’t be two consecutive red nodes, the height is bounded by 2 log N.

12.7

Color nonroot nodes red if their height is even and their parent’s height is odd, and black otherwise. Not all red-black trees are AVL trees (since the deepest red-black tree is deeper than the deepest AVL tree).

12.19

See H. N. Gabow, J. L. Bentley, and R. E. Tarjan, "Scaling and Related Techniques for Computational Geometry," Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing (1984), 135-143, or C. Levcopoulos and O. Petersson, "Heapsort Adapted for Presorted Files," Journal of Algorithms 14 (1993), 395-413.

12.29

Pointers are unnecessary; we can store everything in an array. This is discussed in reference [12]. The bounds become O(k log N) for insertion, O(k^2 log N) for deletion of a minimum, and O(k^2 N) for creation (an improvement over the bound in [12]).

12.35

Consider the pairing heap with 1 as the root and children 2, 3, ..., N. A DeleteMin removes 1, and the resulting pairing heap is 2 as the root with children 3, 4, ..., N; the cost of this operation is N units. A subsequent DeleteMin sequence of 2, 3, 4, ... will take total time Ω(N^2).



1.4

The general way to do this is to write a procedure with heading void ProcessFile( const char *FileName ); which opens FileName,O does whatever processing is needed, and then closes it. If a line of the form #include SomeFile is detected, then the call ProcessFile( SomeFile ); is made recursively. Self-referential includes can be detected by keeping a list of files for which a call to ProcessFileO has not yet terminated, and checking this list before making a new call to ProcessFile.O

1.5

(a) The proof is by induction. The theorem is clearly true for 0 < XO ≤ 1, since it is true for XO = 1, and for XO < 1, log XO is negative. It is also easy to see that the theorem holds for 1 < XO ≤ 2, since it is true for XO = 2, and for XO < 2, log XO is at most 1. Suppose the theorem is true for pO < XO ≤ 2pO (where pO is a positive integer), and consider any 2pO < YO ≤ 4pO (pO ≥ 1). Then log YO = 1 + log (YO / 2) < 1 + YO / 2 < YO / 2 + YO / 2 ≤ YO, where the first inequality follows by the inductive hypothesis. (b) Let 2XO = AO. Then AOBO = (2XO)BO = 2XBO. Thus log AOBO = XBO. Since XO = log AO, the theorem is proved.

1.6

(a) The sum is 4/3 and follows directly from the formula. 1 2 3 2 3 (b) SO = __ + ___ + ___ + . . . . 4SO = 1+ __ + ___ + . . . . Subtracting the first equation from 2 3 4 4 4 4 42 2 1 ___ the second gives 3SO = 1 + __ + 2 + . . . . By part (a), 3SO = 4/ 3 so SO = 4/ 9. 4 4 4 9 4 ___ 9 16 1 ___ ___ __ + 2 + ___ + . . . . Subtracting the first equa(c) SO = __ + 2 + 3 + . . . . 4SO = 1 + 4 4 4 4 43 4 5 7 3 ___ ___ tion from the second gives 3SO = 1+ __ + 2 + 3 + . . . . Rewriting, we get 4 4 4 ∞ i 1 ___ + ∞ ___ 3SO = 2 Σ iO Σ iO . Thus 3SO = 2(4/ 9) + 4/ 3 = 20/ 9. Thus SO = 20/ 27. iO=0 4 iO=0 4 ∞ iONO (d) Let SNO = Σ ___ . Follow the same method as in parts (a) - (c) to obtain a formula for SNO iO iO=0 4 in terms of SNO−1, SNO−2, ..., SO0 and solve the recurrence. Solving the recurrence is very difficult.

-1-

_______________________________________________________________________________ double RoundUp( double N, int DecPlaces ) { int i; double AmountToAdd = 0.5; for( i = 0; i < DecPlaces; i++ ) AmountToAdd /= 10; return N + AmountToAdd; } void PrintFractionPart( double FractionPart, int DecPlaces ) { int i, Adigit; for( i = 0; i < DecPlaces; i++ ) { FractionPart *= 10; ADigit = IntPart( FractionPart ); PrintDigit( Adigit ); FractionPart = DecPart( FractionPart ); } } void PrintReal( double N, int DecPlaces ) { int IntegerPart; double FractionPart; if( N < 0 ) { putchar(’-’); N = -N; } N = RoundUp( N, DecPlaces ); IntegerPart = IntPart( N ); FractionPart = DecPart( N ); PrintOut( IntegerPart ); /* Using routine in text */ if( DecPlaces > 0 ) putchar(’.’); PrintFractionPart( FractionPart, DecPlaces ); } Fig. 1.1. _______________________________________________________________________________ N 1 1 1 ∼ __ __ − OINO/ 2 − 1OK __ = ∼ ln NO − ln NO/ 2 ∼∼ ln 2. Σ Σ Σ i i i iO=OINO/ 2OK iO=1 iO=1 N

1.7

-2-

1.8

24 = 16 ≡ 1 (modO 5). (24)25 ≡ 125 (modO 5). Thus 2100 ≡ 1 (modO 5).

1.9

(a) Proof is by induction. The statement is clearly true for NO = 1 and NO = 2. Assume true for NO = 1, 2, ..., kO. Then

kO+1

Σ FiO =

iO=1

k

Σ FiO+FkO+1.

By the induction hypothesis, the value of the

iO=1

sum on the right is FkO+2 − 2 + FkO+1 = FkO+3 − 2, where the latter equality follows from the definition of the Fibonacci numbers. This proves the claim for NO = kO + 1, and hence for all NO. (b) As in the text, the proof is by induction. Observe that φ + 1 = φ2. This implies that φ−1 + φ−2 = 1. For NO = 1 and NO = 2, the statement is true. Assume the claim is true for NO = 1, 2, ..., kO. FkO+1 = FkO + FkO−1 by the definition and we can use the inductive hypothesis on the right-hand side, obtaining F < φkO + φkO−1 kO+1

< φ−1φkO+1 + φ−2φkO+1 FkO+1 < (φ−1 + φ−2)φkO+1 < φkO+1 and proving the theorem. (c) See any of the advanced math references at the end of the chapter. The derivation involves the use of generating functions. N

1.10 (a)

N

N

(2iO−1) = 2 Σ iO − Σ 1 = NO(NO+1) − NO = NO2. Σ iO=1 iO=1 iO=1

(b) The easiest way to prove this is by induction. The case NO = 1 is trivial. Otherwise, NO+1

N

iO3 Σ iO3 = (NO+1)3 + iΣ iO=1 O=1 NO2(NO+1)2 = (NO+1)3 + _________ 4 H ___ J NO2 + (NO+1)OA = (NO+1)2OA I 4 K H ___________ NO2 + 4NO + 4 J OA = (NO+1)2OA 4 I K (NO+1)2(NO+2)2 _____________ 22 2 H ___________ (NO+1)(NO+2) J = OA OA 2 I K 2 HNO+1 J = OA Σ iOA I iO=1 K =

-3-

Chapter 2: Algorithm Analysis 2.1

2/NO, 37, √MM NOO, NO, NOlog log NO, NOlog NO, NOlog (NO2), NOlog2NO, NO1.5, NO2, NO2log NO, NO3, 2NO/ 2, NO 2 . NOlog NO and NOlog (NO2) grow at the same rate.

2.2

(a) True. (b) False. A counterexample is TO1(NO) = 2NO, TO2(NO) = NO, and PfOO(NO) = NO.

(c) False. A counterexample is TO1(NO) = NO2, TO2(NO) = NO, and PfOO(NO) = NO2. (d) False. The same counterexample as in part (c) applies.

2.3

We claim that NOlog NO is the slower growing function. To see this, suppose otherwise. √MMMMM Then, NOε/ log NOO would grow slower than log NO. Taking logs of both sides, we find that, MMMMM under this assumption, ε/ √Mlog NOOlog NO grows slower than log log NO. But the first expresMMMMMM MMOO grows slower than sion simplifies to ε √log NOO. If LO = log NO, then we are claiming that ε √L log LO, or equivalently, that ε2LO grows slower than log2 LO. But we know that log2 LO = ο (LO), so the original assumption is false, proving the claim.

2.4

Clearly, logkO1NO = ο(logkO2NO) if kO1 < kO2, so we need to worry only about positive integers. The claim is clearly true for kO = 0 and kO = 1. Suppose it is true for kO < iO. Then, by L’Hospital’s rule, logiON logiO−1N lim ______ = lim i _______ N NO→∞ N NO→∞ The second limit is zero by the inductive hypothesis, proving the claim.

2.5

Let PfOO(NO) = 1 when NO is even, and NO when NO is odd. Likewise, let gO(NO) = 1 when NO is odd, and NO when NO is even. Then the ratio PfOO(NO) / gO(NO) oscillates between 0 and ∞.

2.6

For all these programs, the following analysis will agree with a simulation: (I) The running time is OO(NO). (II) The running time is OO(NO2). (III) The running time is OO(NO3). (IV) The running time is OO(NO2). (V) PjO can be as large as iO2, which could be as large as NO2. kO can be as large as PjO, which is NO2. The running time is thus proportional to NO.NO2.NO2, which is OO(NO5). (VI) The ifO statement is executed at most NO3 times, by previous arguments, but it is true only OO(NO2) times (because it is true exactly iO times for each iO). Thus the innermost loop is only executed OO(NO2) times. Each time through, it takes OO(PjO2) = OO(NO2) time, for a total of OO(NO4). This is an example where multiplying loop sizes can occasionally give an overestimate.

2.7

(a) It should be clear that all algorithms generate only legal permutations. The first two algorithms have tests to guarantee no duplicates; the third algorithm works by shuffling an array that initially has no duplicates, so none can occur. It is also clear that the first two algorithms are completely random, and that each permutation is equally likely. The third algorithm, due to R. Floyd, is not as obvious; the correctness can be proved by induction.

-4-

See J. Bentley, "Programming Pearls," Communications of the ACM 30 (1987), 754-757. Note that if the second line of algorithm 3 is replaced with the statement Swap( A[i], A[ RandInt( 0, N-1 ) ] ); then not all permutations are equally likely. To see this, notice that for NO = 3, there are 27 equally likely ways of performing the three swaps, depending on the three random integers. Since there are only 6 permutations, and 6 does not evenly divide 27, each permutation cannot possibly be equally represented. (b) For the first algorithm, the time to decide if a random number to be placed in AO[iO] has not been used earlier is OO(iO). The expected number of random numbers that need to be tried is NO/ (NO − iO). This is obtained as follows: iO of the NO numbers would be duplicates. Thus the probability of success is (NO − iO) / NO. Thus the expected number of independent trials is NO/ (NO − iO). The time bound is thus NO−1 NO2 N 1 Ni 1 ____ __ = OO(NO2 ____ < NO2NO−1 ____ O2 < < N log NO) Σ Σ Σ Σ N O −i N O −i N O −i iO=0 iO=0 iO=0 PjO=1 Pj

NO−1

The second algorithm saves a factor of iO for each random number, and thus reduces the time bound to OO(NOlog NO) on average. The third algorithm is clearly linear. (c, d) The running times should agree with the preceding analysis if the machine has enough memory. If not, the third algorithm will not seem linear because of a drastic increase for large NO. (e) The worst-case running time of algorithms I and II cannot be bounded because there is always a finite probability that the program will not terminate by some given time TO. The algorithm does, however, terminate with probability 1. The worst-case running time of the third algorithm is linear - its running time does not depend on the sequence of random numbers. 2.8

Algorithm 1 would take about 5 days for NO = 10,000, 14.2 years for NO = 100,000 and 140 centuries for NO = 1,000,000. Algorithm 2 would take about 3 hours for NO = 100,000 and about 2 weeks for NO = 1,000,000. Algorithm 3 would use 11⁄2 minutes for NO = 1,000,000. These calculations assume a machine with enough memory to hold the array. Algorithm 4 solves a problem of size 1,000,000 in 3 seconds.

2.9

(a) OO(NO2). (b) OO(NOlog NO).

2.10 (c) The algorithm is linear. 2.11 Use a variation of binary search to get an OO(log NO) solution (assuming the array is preread). NOO. 2.13 (a) Test to see if NO is an odd number (or 2) and is not divisible by 3, 5, 7, ..., √MM MMOO), assuming that all divisions count for one unit of time. (b) OO( √N

(c) BO = OO(log NO). (d) OO(2BO/ 2). (e) If a 20-bit number can be tested in time TO, then a 40-bit number would require about TO2 time. (f) BO is the better measure because it more accurately represents the sizeO of the input.

-5-

2.14 The running time is proportional to NO times the sum of the reciprocals of the primes less than NO. This is OO(NOlog log NO). See Knuth, Volume 2, page 394. 2.15 Compute XO2, XO4, XO8, XO10, XO20, XO40, XO60, and XO62. 2.16 Maintain an OIarray PowersOfXO that can be filled in a for loop. The array will contain XO, XO2, log NOK XO4, up to XO2 . The binary representation of NO (which can be obtained by testing even or odd and then dividing by 2, until all bits are examined) can be used to multiply the appropriate entries of the array. 2.17 For NO = 0 or NO = 1, the number of multiplies is zero. If bO(NO) is the number of ones in the binary representation of NO, then if NO > 1, the number of multiplies used is

OIlog NOK + bO(NO) − 1 2.18 (a) AO. (b) BO. (c) The information given is not sufficient to determine an answer. We have only worstcase bounds. (d) Yes. 2.19 (a) Recursion is unnecessary if there are two or fewer elements. (b) One way to do this is to note that if the first NO−1 elements have a majority, then the last element cannot change this. Otherwise, the last element could be a majority. Thus if NO is odd, ignore the last element. Run the algorithm as before. If no majority element emerges, then return the NOthO element as a candidate. (c) The running time is OO(NO), and satisfies TO(NO) = TO(NO/ 2) + OO(NO). (d) One copy of the original needs to be saved. After this, the BO array, and indeed the recursion can be avoided by placing each BiO in the AO array. The difference is that the original recursive strategy implies that OO(log NO) arrays are used; this guarantees only two copies. 2.20 Otherwise, we could perform operations in parallel by cleverly encoding several integers into one. For instance, if A = 001, B = 101, C = 111, D = 100, we could add A and B at the same time as C and D by adding 00A00C + 00B00D. We could extend this to add NO pairs of numbers at once in unit cost. 2.22 No. If LowO = 1, HighO = 2, then MidO = 1, and the recursive call does not make progress. 2.24 No. As in Exercise 2.22, no progress is made.

-6-

Chapter 3: Lists, Stacks, and Queues 3.2

The comments for Exercise 3.4 regarding the amount of abstractness used apply here. The running time of the procedure in Fig. 3.1 is O (L + P ). _______________________________________________________________________________ O

O

O

void PrintLots( List L, List P ) { int Counter; Position Lpos, Ppos; Lpos = First( L ); Ppos = First( P ); Counter = 1; while( Lpos != NULL && Ppos != NULL ) { if( Ppos->Element == Counter++ ) { printf( "%? ", Lpos->Element ); Ppos = Next( Ppos, P ); } Lpos = Next( Lpos, L ); } } Fig. 3.1. _______________________________________________________________________________ 3.3

(a) For singly linked lists, the code is shown in Fig. 3.2.

-7-

_______________________________________________________________________________ /* BeforeP is the cell before the two adjacent cells that are to be swapped. */ /* Error checks are omitted for clarity. */ void SwapWithNext( Position BeforeP, List L ) { Position P, AfterP; P = BeforeP->Next; AfterP = P->Next; /* Both P and AfterP assumed not NULL. */ P->Next = AfterP->Next; BeforeP->Next = AfterP; AfterP->Next = P; } Fig. 3.2. _______________________________________________________________________________ (b) For doubly linked lists, the code is shown in Fig. 3.3. _______________________________________________________________________________ /* P and AfterP are cells to be switched. Error checks as before. */ void SwapWithNext( Position P, List L ) { Position BeforeP, AfterP; BeforeP = P->Prev; AfterP = P->Next; P->Next = AfterP->Next; BeforeP->Next = AfterP; AfterP->Next = P; P->Next->Prev = P; P->Prev = AfterP; AfterP->Prev = BeforeP; } Fig. 3.3. _______________________________________________________________________________ 3.4

Intersect is shown on page 9. O

-8-

_______________________________________________________________________________ /* This code can be made more abstract by using operations such as /* Retrieve and IsPastEnd to replace L1Pos->Element and L1Pos != NULL. /* We have avoided this because these operations were not rigorously defined.

*/ */ */

List Intersect( List L1, List L2 ) { List Result; Position L1Pos, L2Pos, ResultPos; L1Pos = First( L1 ); L2Pos = First( L2 ); Result = MakeEmpty( NULL ); ResultPos = First( Result ); while( L1Pos != NULL && L2Pos != NULL ) { if( L1Pos->Element < L2Pos->Element ) L1Pos = Next( L1Pos, L1 ); else if( L1Pos->Element > L2Pos->Element ) L2Pos = Next( L2Pos, L2 ); else { Insert( L1Pos->Element, Result, ResultPos ); L1 = Next( L1Pos, L1 ); L2 = Next( L2Pos, L2 ); ResultPos = Next( ResultPos, Result ); } } return Result; } _______________________________________________________________________________ 3.5

Fig. 3.4 contains the code for Union.

3.7

(a) One algorithm is to keep the result in a sorted (by exponent) linked list. Each of the MN multiplies requires a search of the linked list for duplicates. Since the size of the linked list is O (MN ), the total running time is O (M 2N 2).

O

O

O

O

O

O

O

(b) The bound can be improved by multiplying one term by the entire other polynomial, and then using the equivalent of the procedure in Exercise 3.2 to insert the entire sequence. Then each sequence takes O (MN ), but there are only M of them, giving a time bound of O (M 2N ). O

O

O

O

O

O

(c) An O (MN log MN ) solution is possible by computing all MN pairs and then sorting by exponent using any algorithm in Chapter 7. It is then easy to merge duplicates afterward. O

O

O

O

(d) The choice of algorithm depends on the relative values of M and N . If they are close, then the solution in part (c) is better. If one polynomial is very small, then the solution in part (b) is better. O

-9-

O

_______________________________________________________________________________ List Union( List L1, List L2 ) { List Result; ElementType InsertElement; Position L1Pos, L2Pos, ResultPos; L1Pos = First( L1 ); L2Pos = First( L2 ); Result = MakeEmpty( NULL ); ResultPos = First( Result ); while ( L1Pos != NULL && L2Pos != NULL ) { if( L1Pos->Element < L2Pos->Element ) { InsertElement = L1Pos->Element; L1Pos = Next( L1Pos, L1 ); } else if( L1Pos->Element > L2Pos->Element ) { InsertElement = L2Pos->Element; L2Pos = Next( L2Pos, L2 ); } else { InsertElement = L1Pos->Element; L1Pos = Next( L1Pos, L1 ); L2Pos = Next( L2Pos, L2 ); } Insert( InsertElement, Result, ResultPos ); ResultPos = Next( ResultPos, Result ); } /* Flush out remaining list */ while( L1Pos != NULL ) { Insert( L1Pos->Element, Result, ResultPos ); L1Pos = Next( L1Pos, L1 ); ResultPos = Next( ResultPos, Result ); } while( L2Pos != NULL ) { Insert( L2Pos->Element, Result, ResultPos ); L2Pos = Next( L2Pos, L2 ); ResultPos = Next( ResultPos, Result ); } return Result; } Fig. 3.4. _______________________________________________________________________________ 3.8

One can use the Pow function in Chapter 2, adapted for polynomial multiplication. If P is small, a standard method that uses O (P ) multiplies instead of O (log P ) might be better because the multiplies would involve a large number with a small number, which is good for the multiplication routine in part (b). O

O

O

O

O

O

3.10 This is a standard programming project. The algorithm can be sped up by setting M' = M mod N , so that the hot potato never goes around the circle more than once, and O

O

O

O

-10-

then if M' > N / 2, passing the potato appropriately in the alternative direction. This requires a doubly linked list. The worst-case running time is clearly O (N min (M , N )), although when these heuristics are used, and M and N are comparable, the algorithm might be significantly faster. If M = 1, the algorithm is clearly linear. The VAX/VMS C compiler’s memory management routines do poorly with the particular pattern of free s in this case, causing O (N log N ) behavior. O

O

O

O

O

O

O

O

O

O

P

O

O

O

O

3.12 Reversal of a singly linked list can be done nonrecursively by using a stack, but this requires O (N ) extra space. The solution in Fig. 3.5 is similar to strategies employed in garbage collection algorithms. At the top of the while loop, the list from the start to PreviousPos is already reversed, whereas the rest of the list, from CurrentPos to the end, is normal. This algorithm uses only constant extra space. _______________________________________________________________________________ O

O

O

O

O

/* Assuming no header and L is not empty. */ List ReverseList( List L ) { Position CurrentPos, NextPos, PreviousPos; PreviousPos = NULL; CurrentPos = L; NextPos = L->Next; while( NextPos != NULL ) { CurrentPos->Next = PreviousPos; PreviousPos = CurrentPos; CurrentPos = NextPos; NextPos = NextPos->Next; } CurrentPos->Next = PreviousPos; return CurrentPos; } Fig. 3.5. _______________________________________________________________________________ 3.15 (a) The code is shown in Fig. 3.6. (b) See Fig. 3.7. (c) This follows from well-known statistical theorems. See Sleator and Tarjan’s paper in the Chapter 11 references. 3.16 (c) Delete takes O (N ) and is in two nested for loops each of size N , giving an obvious O (N 3) bound. A better bound of O (N 2) is obtained by noting that only N elements can be deleted from a list of size N , hence O (N 2) is spent performing deletes. The remainder of the routine is O (N 2), so the bound follows. O

O

O

O

O

O

O

O

O

O

O

O

O

O

(d) O (N 2). O

O

-11-

_______________________________________________________________________________ /* Array implementation, starting at slot 1 */ Position Find( ElementType X, List L ) { int i, Where; Where = 0; for( i = 1; i < L.SizeOfList; i++ ) if( X == L[i].Element ) { Where = i; break; } if( Where ) /* Move to front. */ { for( i = Where; i > 1; i-- ) L[i].Element = L[i-1].Element; L[1].Element = X; return 1; } else return 0; /* Not found. */ } Fig. 3.6. _______________________________________________________________________________ (e) Sort the list, and make a scan to remove duplicates (which must now be adjacent). 3.17 (a) The advantages are that it is simpler to code, and there is a possible savings if deleted keys are subsequently reinserted (in the same place). The disadvantage is that it uses more space, because each cell needs an extra bit (which is typically a byte), and unused cells are not freed. 3.21 Two stacks can be implemented in an array by having one grow from the low end of the array up, and the other from the high end down. 3.22 (a) Let E be our extended stack. We will implement E with two stacks. One stack, which we’ll call S , is used to keep track of the Push and Pop operations, and the other, M , keeps track of the minimum. To implement Push(X,E), we perform Push(X,S). If X is smaller than or equal to the top element in stack M , then we also perform Push(X,M). To implement Pop(E), we perform Pop(S). If X is equal to the top element in stack M , then we also Pop(M). FindMin(E) is performed by examining the top of M . All these operations are clearly O (1). O

O

O

O

O

O

O

O

O

O

O

O

(b) This result follows from a theorem in Chapter 7 that shows that sorting must take Ω(N log N ) time. O (N ) operations in the repertoire, including DeleteMin , would be sufficient to sort. O

O

O

O

O

-12-

_______________________________________________________________________________ /* Assuming a header. */ Position Find( ElementType X, List L ) { Position PrevPos, XPos; PrevPos = FindPrevious( X, L ); if( PrevPos->Next != NULL ) /* Found. */ { XPos = PrevPos ->Next; PrevPos->Next = XPos->Next; XPos->Next = L->Next; L->Next = XPos; return XPos; } else return NULL; } Fig. 3.7. _______________________________________________________________________________ 3.23 Three stacks can be implemented by having one grow from the bottom up, another from the top down, and a third somewhere in the middle growing in some (arbitrary) direction. If the third stack collides with either of the other two, it needs to be moved. A reasonable strategy is to move it so that its center (at the time of the move) is halfway between the tops of the other two stacks. 3.24 Stack space will not run out because only 49 calls will be stacked. However, the running time is exponential, as shown in Chapter 2, and thus the routine will not terminate in a reasonable amount of time. 3.25 The queue data structure consists of pointers Q->Front and Q->Rear, which point to the beginning and end of a linked list. The programming details are left as an exercise because it is a likely programming assignment. O

O

3.26 (a) This is a straightforward modification of the queue routines. It is also a likely programming assignment, so we do not provide a solution.

-13-

Chapter 4: Trees 4.1

(a) A . O

(b) G , H , I , L , M , and K . O

4.2

O

O

O

O

O

For node B : O

(a) A . O

(b) D and E . O

O

(c) C . O

(d) 1. (e) 3. 4.3

4.

4.4

There are N nodes. Each node has two pointers, so there are 2N pointers. Each node but the root has one incoming pointer from its parent, which accounts for N −1 pointers. The rest are NULL. O

O

O

O

4.5

Proof is by induction. The theorem is trivially true for H = 0. Assume true for H = 1, 2, ..., k . A tree of height k +1 can have two subtrees of height at most k . These can have at most 2k +1−1 nodes each by the induction hypothesis. These 2k +2−2 nodes plus the root prove the theorem for height k +1 and hence for all heights. O

O

O

O

O

O

O

O

4.6

This can be shown by induction. Alternatively, let N = number of nodes, F = number of full nodes, L = number of leaves, and H = number of half nodes (nodes with one child). Clearly, N = F + H + L . Further, 2F + H = N − 1 (see Exercise 4.4). Subtracting yields L − F = 1. O

O

O

O

4.7

O

O

O

O

O

O

O

This can be shown by induction. In a tree with no nodes, the sum is zero, and in a one-node tree, the root is a leaf at depth zero, so the claim is true. Suppose the theorem is true for all trees with at most k nodes. Consider any tree with k +1 nodes. Such a tree consists of an i node left subtree and a k − i node right subtree. By the inductive hypothesis, the sum for the left subtree leaves is at most one with respect to the left tree root. Because all leaves are one deeper with respect to the original tree than with respect to the subtree, the sum is at most 1⁄2 with respect to the root. Similar logic implies that the sum for leaves in the right subtree is at most 1⁄2, proving the theorem. The equality is true if and only if there are no nodes with one child. If there is a node with one child, the equality cannot be true because adding the second child would increase the sum to higher than 1. If no nodes have one child, then we can find and remove two sibling leaves, creating a new tree. It is easy to see that this new tree has the same sum as the old. Applying this step repeatedly, we arrive at a single node, whose sum is 1. Thus the original tree had sum 1. O

O

O

4.8

O

O

O

(a) - * * a b + c d e. (b) ( ( a * b ) * ( c + d ) ) - e. (c) a b * c d + * e -.

-14-

O

4.9

3

4

1

4

1

2

6

6

2

5

5

9

9

7

7 4.11 This problem is not much different from the linked list cursor implementation. We maintain an array of records consisting of an element field, and two integers, left and right. The free list can be maintained by linking through the left field. It is easy to write the CursorNew and CursorDispose routines, and substitute them for malloc and free. O

O

4.12 (a) Keep a bit array B . If i is in the tree, then B [i ] is true; otherwise, it is false. Repeatedly generate random integers until an unused one is found. If there are N elements already in the tree, then M − N are not, and the probability of finding one of these is (M − N ) / M . Thus the expected number of trials is M / (M −N ) = α / (α − 1). O

O

O

O

O

O

O

O

O

O

O

O

O

(b) To find an element that is in the tree, repeatedly generate random integers until an already-used integer is found. The probability of finding one is N / M , so the expected number of trials is M / N = α. O

O

O

O

(c) The total cost for one insert and one delete is α / (α − 1) + α = 1 + α + 1 / (α − 1). Setting α = 2 minimizes this cost. 4.15 (a) N (0) = 1, N (1) = 2, N (H ) = N (H −1) + N (H −2) + 1. O

O

O

O

O

O

O

O

(b) The heights are one less than the Fibonacci numbers. 4.16

4 2

6 3

1

5

9 7

4.17 It is easy to verify by hand that the claim is true for 1 ≤ k ≤ 3. Suppose it is true for k = 1, 2, 3, ... H . Then after the first 2H − 1 insertions, 2H −1 is at the root, and the right subtree is a balanced tree containing 2H −1 + 1 through 2H − 1. Each of the next 2H −1 insertions, namely, 2H through 2H + 2H −1 − 1, insert a new maximum and get placed in the right O

O

O

O

O

O

O

O

O

O

-15-

O

subtree, eventually forming a perfectly balanced right subtree of height H −1. This follows by the induction hypothesis because the right subtree may be viewed as being formed from the successive insertion of 2H −1 + 1 through 2H + 2H −1 − 1. The next insertion forces an imbalance at the root, and thus a single rotation. It is easy to check that this brings 2H to the root and creates a perfectly balanced left subtree of height H −1. The new key is attached to a perfectly balanced right subtree of height H −2 as the last node in the right path. Thus the right subtree is exactly as if the nodes 2H + 1 through 2H + 2H −1 were inserted in order. By the inductive hypothesis, the subsequent successive insertion of 2H + 2H −1 + 1 through 2H +1 − 1 will create a perfectly balanced right subtree of height H −1. Thus after the last insertion, both the left and the right subtrees are perfectly balanced, and of the same height, so the entire tree of 2H +1 − 1 nodes is perfectly balanced (and has height H ). O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

4.18 The two remaining functions are mirror images of the text procedures. Just switch Right and Left everywhere.

O

O

4.20 After applying the standard binary search tree deletion algorithm, nodes on the deletion path need to have their balance changed, and rotations may need to be performed. Unlike insertion, more than one node may need rotation. 4.21 (a) O (log log N ). O

O

(b) The minimum AVL tree of height 255 (a huge tree). 4.22 _______________________________________________________________________________ Position DoubleRotateWithLeft( Position K3 ) { Position K1, K2; K1 = K3->Left; K2 = K1->Right; K1->Right = K2->Left; K3->Left = K2->Right; K2->Left = K1; K2->Right = K3; K1->Height = Max( Height(K1->Left), Height(K1->Right) ) + 1; K3->Height = Max( Height(K3->Left), Height(K3->Right) ) + 1; K2->Height = Max( K1->Height, K3->Height ) + 1; return K3; } _______________________________________________________________________________

-16-

4.23 After accessing 3,

3 2

10 4

1

11 6

12

5

8 7

13 9

After accessing 9,

9 3 2

10 4

11

1

8 6 5

-17-

12 13

7

After accessing 1,

1 9 2

10 3

11 4

12 8

13

6 5

7

After accessing 5,

5 1

9 2

6

10

4

8

3

7

11 12 13

-18-

4.24

5 9

1 2

8 4

7

3

10 11 12 13

4.25 (a) 523776. (b) 262166, 133114, 68216, 36836, 21181, 13873. (c) After Find (9). O

4.26 (a) An easy proof by induction. 4.28 (a-c) All these routines take linear time. _______________________________________________________________________________ /* These functions use the type BinaryTree, which is the same */ /* as TreeNode *, in Fig 4.16. */ int CountNodes( BinaryTree T ) { if( T == NULL ) return 0; return 1 + CountNodes(T->Left) + CountNodes(T->Right); } int CountLeaves( BinaryTree T ) { if( T == NULL ) return 0; else if( T->Left == NULL && T->Right == NULL ) return 1; return CountLeaves(T->Left) + CountLeaves(T->Right); } _______________________________________________________________________________

-19-

_______________________________________________________________________________ /* An alternative method is to use the results of Exercise 4.6. */ int CountFull( BinaryTree T ) { if( T == NULL ) return 0; return ( T->Left != NULL && T->Right != NULL ) + CountFull(T->Left) + CountFull(T->Right); } _______________________________________________________________________________ 4.29 We assume the existence of a function RandInt(Lower,Upper), which generates a uniform random integer in the appropriate closed interval. MakeRandomTree returns NULL if N is not positive, or if N is so large that memory is exhausted. _______________________________________________________________________________ O

O

O

O

SearchTree MakeRandomTree1( int Lower, int Upper ) { SearchTree T; int RandomValue; T = NULL; if( Lower <= Upper ) { T = malloc( sizeof( struct TreeNode ) ); if( T != NULL ) { T->Element = RandomValue = RandInt( Lower, Upper ); T->Left = MakeRandomTree1( Lower, RandomValue - 1 ); T->Right = MakeRandomTree1( RandomValue + 1, Upper ); } else FatalError( "Out of space!" ); } return T; } SearchTree MakeRandomTree( int N ) { return MakeRandomTree1( 1, N ); } _______________________________________________________________________________

-20-

4.30 _______________________________________________________________________________ /* LastNode is the address containing last value that was assigned to a node */ SearchTree GenTree( int Height, int *LastNode ) { SearchTree T; if( Height >= 0 ) { T = malloc( sizeof( *T ) ); /* Error checks omitted; see Exercise 4.29. */ T->Left = GenTree( Height - 1, LastNode ); T->Element = ++*LastNode; T->Right = GenTree( Height - 2, LastNode ); return T; } else return NULL; } SearchTree MinAvlTree( int H ) { int LastNodeAssigned = 0; return GenTree( H, &LastNodeAssigned ); } _______________________________________________________________________________ 4.31 There are two obvious ways of solving this problem. One way mimics Exercise 4.29 by replacing RandInt(Lower,Upper) with (Lower+Upper) / 2. This requires computing 2H +1−1, which is not that difficult. The other mimics the previous exercise by noting that the heights of the subtrees are both H −1. The solution follows: O

O

-21-

_______________________________________________________________________________ /* LastNode is the address containing last value that was assigned to a node. */ SearchTree GenTree( int Height, int *LastNode ) { SearchTree T = NULL; if( Height >= 0 ) { T = malloc( sizeof( *T ) ); /* Error checks omitted; see Exercise 4.29. */ T->Left = GenTree( Height - 1, LastNode ); T->Element = ++*LastNode; T->Right = GenTree( Height - 1, LastNode ); } return T; } SearchTree PerfectTree( int H ) { int LastNodeAssigned = 0; return GenTree( H, &LastNodeAssigned ); } _______________________________________________________________________________ 4.32 This is known as one-dimensional range searching. The time is O (K ) to perform the inorder traversal, if a significant number of nodes are found, and also proportional to the depth of the tree, if we get to some leaves (for instance, if no nodes are found). Since the average depth is O (log N ), this gives an O (K + log N ) average bound. _______________________________________________________________________________ O

O

O

O

O

O

O

void PrintRange( ElementType Lower, ElementType Upper, SearchTree T ) { if( T != NULL ) { if( Lower <= T->Element ) PrintRange( Lower, Upper, T->Left ); if( Lower <= T->Element && T->Element <= Upper ) PrintLine( T->Element ); if( T->Element <= Upper ) PrintRange( Lower, Upper, T->Right ); } } _______________________________________________________________________________

-22-

4.33 This exercise and Exercise 4.34 are likely programming assignments, so we do not provide code here. 4.35 Put the root on an empty queue. Then repeatedly Dequeue a node and Enqueue its left and right children (if any) until the queue is empty. This is O (N ) because each queue operation is constant time and there are N Enqueue and N Dequeue operations. O

O

O

O

O

O

O

O

4.36 (a)

6:-

2:4

0,1

2, 3

8:-

4, 5

6, 7

8, 9

(b)

4:6

1,2,3

4, 5

-23-

6,7,8

4.39

A B

C

D H

E I

G

F

J

K

N

L

M

O

P Q

R

4.41 The function shown here is clearly a linear time routine because in the worst case it does a traversal on both T 1 and T 2. _______________________________________________________________________________ O

O

int Similar( BinaryTree T1, BinaryTree T2 ) { if( T1 == NULL || T2 == NULL ) return T1 == NULL && T2 == NULL; return Similar( T1->Left, T2->Left ) && Similar( T1->Right, T2->Right ); } _______________________________________________________________________________ 4.43 The easiest solution is to compute, in linear time, the inorder numbers of the nodes in both trees. If the inorder number of the root of T2 is x , then find x in T1 and rotate it to the root. Recursively apply this strategy to the left and right subtrees of T1 (by looking at the values in the root of T2’s left and right subtrees). If dN is the depth of x , then the running time satisfies T (N ) = T (i ) + T (N −i −1) + dN , where i is the size of the left subtree. In the worst case, dN is always O (N ), and i is always 0, so the worst-case running time is quadratic. Under the plausible assumption that all values of i are equally likely, then even if dN is always O (N ), the average value of T (N ) is O (N log N ). This is a common recurrence that was already formulated in the chapter and is solved in Chapter 7. Under the more reasonable assumption that dN is typically logarithmic, then the running time is O (N ). O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

O

4.44 Add a field to each node indicating the size of the tree it roots. This allows computation of its inorder traversal number. 4.45 (a) You need an extra bit for each thread. (c) You can do tree traversals somewhat easier and without recursion. The disadvantage is that it reeks of old-style hacking.

-24-

Chapter 5: Hashing 5.1

(a) On the assumption that we add collisions to the end of the list (which is the easier way if a hash table is being built by hand), the separate chaining hash table that results is shown here. 0 1

4371

2 3

1323

4

4344

6173

5 6 7 8 4199

9

(b) 0

9679

1

4371

2

1989

3

1323

4

6173

5

4344

6 7 8 9

4199

-25-

9679

1989

(c) 0

9679

1

4371

2 3

1323

4

6173

5

4344

6 7 8

1989

9

4199

(d) 1989 cannot be inserted into the table because hash 2(1989) = 6, and the alternative locations 5, 1, 7, and 3 are already taken. The table at this point is as follows: O

0 1

4371

2 3

1323

4

6173

5

9679

6 7

4344

8 9

5.2

4199

When rehashing, we choose a table size that is roughly twice as large and prime. In our case, the appropriate new table size is 19, with hash function h (x ) = x (mod 19). O

O

O

O

(a) Scanning down the separate chaining hash table, the new locations are 4371 in list 1, 1323 in list 12, 6173 in list 17, 4344 in list 12, 4199 in list 0, 9679 in list 8, and 1989 in list 13. (b) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in bucket 12, 6173 in bucket 17, 4344 in bucket 14 because both 12 and 13 are already occupied, and 4199 in bucket 0.

-26-

(c) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in bucket 12, 6173 in bucket 17, 4344 in bucket 16 because both 12 and 13 are already occupied, and 4199 in bucket 0. (d) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in bucket 12, 6173 in bucket 17, 4344 in bucket 15 because 12 is already occupied, and 4199 in bucket 0. 5.4

We must be careful not to rehash too often. Let p be the threshold (fraction of table size) at which we rehash to a smaller table. Then if the new table has size N , it contains 2pN elements. This table will require rehashing after either 2N − 2pN insertions or pN deletions. Balancing these costs suggests that a good choice is p = 2/ 3. For instance, suppose we have a table of size 300. If we rehash at 200 elements, then the new table size is N = 150, and we can do either 100 insertions or 100 deletions until a new rehash is required. O

O

O

O

O

O

O

O

If we know that insertions are more frequent than deletions, then we might choose p to be somewhat larger. If p is too close to 1.0, however, then a sequence of a small number of deletions followed by insertions can cause frequent rehashing. In the worst case, if p = 1.0, then alternating deletions and insertions both require rehashing. O

O

O

5.5

(a) Since each table slot is eventually probed, if the table is not empty, the collision can be resolved. (b) This seems to eliminate primary clustering but not secondary clustering because all elements that hash to some location will try the same collision resolution sequence. (c, d) The running time is probably similar to quadratic probing. The advantage here is that the insertion can’t fail unless the table is full. (e) A method of generating numbers that are not random (or even pseudorandom) is given in the references. An alternative is to use the method in Exercise 2.7.

5.6

Separate chaining hashing requires the use of pointers, which costs some memory, and the standard method of implementing calls on memory allocation routines, which typically are expensive. Linear probing is easily implemented, but performance degrades severely as the load factor increases because of primary clustering. Quadratic probing is only slightly more difficult to implement and gives good performance in practice. An insertion can fail if the table is half empty, but this is not likely. Even if it were, such an insertion would be so expensive that it wouldn’t matter and would almost certainly point up a weakness in the hash function. Double hashing eliminates primary and secondary clustering, but the computation of a second hash function can be costly. Gonnet and Baeza-Yates [8] compare several hashing strategies; their results suggest that quadratic probing is the fastest method.

5.7

Sorting the MN records and eliminating duplicates would require O (MN log MN ) time using a standard sorting algorithm. If terms are merged by using a hash function, then the merging time is constant per term for a total of O (MN ). If the output polynomial is small and has only O (M + N ) terms, then it is easy to sort it in O ((M + N )log (M + N )) time, which is less than O (MN ). Thus the total is O (MN ). This bound is better because the model is less restrictive: Hashing is performing operations on the keys rather than just comparison between the keys. A similar bound can be obtained by using bucket sort instead of a standard sorting algorithm. Operations such as hashing are much more expensive than comparisons in practice, so this bound might not be an improvement. On the other hand, if the output polynomial is expected to have only O (M + N ) terms, then using a hash table saves a huge amount of space, since under these conditions, the hash table needs only O

O

O

O

O

O

O

O

O

O

-27-

O

O

O

O

O

O

O

O

O

O

O

O (M + N ) space. O

O

O

Another method of implementing these operations is to use a search tree instead of a hash table; a balanced tree is required because elements are inserted in the tree with too much order. A splay tree might be particularly well suited for this type of a problem because it does well with sequential accesses. Comparing the different ways of solving the problem is a good programming assignment. 5.8

The table size would be roughly 60,000 entries. Each entry holds 8 bytes, for a total of 480,000 bytes.

5.9

(a) This statement is true. (b) If a word hashes to a location with value 1, there is no guarantee that the word is in the dictionary. It is possible that it just hashes to the same value as some other word in the dictionary. In our case, the table is approximately 10% full (30,000 words in a table of 300,007), so there is a 10% chance that a word that is not in the dictionary happens to hash out to a location with value 1. (c) 300,007 bits is 37,501 bytes on most machines. (d) As discussed in part (b), the algorithm will fail to detect one in ten misspellings on average. (e) A 20-page document would have about 60 misspellings. This algorithm would be expected to detect 54. A table three times as large would still fit in about 100K bytes and reduce the expected number of errors to two. This is good enough for many applications, especially since spelling detection is a very inexact science. Many misspelled words (especially short ones) are still words. For instance, typing them instead of then is a misspelling that won’t be detected by any algorithm. O

O

5.10 To each hash table slot, we can add an extra field that we'll call WhereOnStack, and we can keep an extra stack. When an insertion is first performed into a slot, we push the address (or number) of the slot onto the stack and set the WhereOnStack field to point to the top of the stack. When we access a hash table slot, we check that WhereOnStack points to a valid part of the stack and that the entry in the (middle of the) stack pointed to by the WhereOnStack field has that hash table slot as an address.
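A C sketch of this constant-time-initialization trick (the names and int-valued slots are ours); the point is that it is safe even if the arrays initially hold garbage, so no O(TableSize) clearing pass is needed.
______________________________________________________________________
#define TABLE_SIZE 101            /* an assumed size */

typedef struct
{
    int Value;
    int WhereOnStack;             /* index into Stack, if this slot is live */
} Slot;

static Slot Table[ TABLE_SIZE ];
static int  Stack[ TABLE_SIZE ];
static int  Top = 0;              /* number of slots ever written */

/* Has slot i ever been written? */
int IsInitialized( int i )
{
    int w = Table[ i ].WhereOnStack;
    return w >= 0 && w < Top && Stack[ w ] == i;
}

void Write( int i, int v )
{
    if( !IsInitialized( i ) )
    {
        Table[ i ].WhereOnStack = Top;   /* cross-link slot and stack */
        Stack[ Top++ ] = i;
    }
    Table[ i ].Value = v;
}
______________________________________________________________________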

5.14

The resulting extendible hash table has a directory of depth 3, with entries 000 through 111 pointing to five leaves; the keys' leading bits determine the leaf, and local depths are shown in parentheses:

000, 001 -> (2): 00000010, 00001011, 00101011
010, 011 -> (2): 01010001, 01100001, 01101111, 01111111
100      -> (3): 10010110, 10011011, 10011110
101      -> (3): 10111101, 10111110
110, 111 -> (2): 11001111, 11011011, 11110000

Chapter 6: Priority Queues (Heaps)

6.1

Yes. When an element is inserted, we compare it to the current minimum and change the minimum if the new element is smaller. DeleteMin operations are expensive in this scheme.

6.2

[Figure: the two binary heaps that answer Exercise 6.2; the tree diagrams did not survive extraction.]

6.3

The result of three DeleteMins, starting with both of the heaps in Exercise 6.2, is as follows:

[Figure: the two heaps after three DeleteMins; the tree diagrams did not survive extraction.]

6.4, 6.5

These are simple modifications to the code presented in the text and meant as programming exercises.

6.6

225. To see this, start with i = 1 at the root. Follow the path toward the last node, doubling i when taking a left child, and doubling i and adding one when taking a right child.
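In code, the bits of the node number after the leading 1 spell out this root-to-node path, 0 for left and 1 for right; a small C illustration (ours, not from the text):
______________________________________________________________________
#include <stdio.h>

/* Print the root-to-node path for node number n of a binary heap. */
void PrintPathTo( unsigned n )
{
    unsigned mask = 1;

    while( mask <= n )          /* find the bit above the leading 1 */
        mask <<= 1;
    mask >>= 2;                 /* skip the leading 1 itself */
    for( ; mask > 0; mask >>= 1 )
        printf( ( n & mask ) ? "right " : "left " );
    printf( "\n" );             /* n = 225 prints the 7-step path */
}
______________________________________________________________________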


6.7

(a) We show that H(N), which is the sum of the heights of the nodes in a complete binary tree of N nodes, is N - b(N), where b(N) is the number of ones in the binary representation of N. Observe that for N = 0 and N = 1, the claim is true. Assume that it is true for values of k up to and including N-1. Suppose the left and right subtrees have L and R nodes, respectively. Since the root has height ⌊log N⌋, we have

H(N) = ⌊log N⌋ + H(L) + H(R)
     = ⌊log N⌋ + L - b(L) + R - b(R)
     = N - 1 + (⌊log N⌋ - b(L) - b(R))

The second line follows from the inductive hypothesis, and the third follows because L + R = N - 1. Now the last node in the tree is in either the left subtree or the right subtree. If it is in the left subtree, then the right subtree is a perfect tree, and b(R) = ⌊log N⌋ - 1. Further, the binary representations of N and L are identical, with the exception that the leading 10 in N becomes 1 in L. (For instance, if N = 37 = 100101, L = 10101.) It is clear that the second digit of N must be zero if the last node is in the left subtree. Thus in this case, b(L) = b(N), and

H(N) = N - b(N)

If the last node is in the right subtree, then b(L) = ⌊log N⌋. The binary representation of R is identical to that of N, except that the leading 1 is not present. (For instance, if N = 27 = 11011, R = 1011.) Thus b(R) = b(N) - 1, and again

H(N) = N - b(N)

(b) Run a single-elimination tournament among eight elements. This requires seven comparisons and generates ordering information indicated by the binomial tree shown here.

[Figure: the binomial tree of ordering information, rooted at a, on elements a through h; the diagram did not survive extraction.]

The eighth comparison is between b and c. If c is less than b, then b is made a child of c. Otherwise, both c and d are made children of b.

(c) A recursive strategy is used. Assume that N = 2^k. A binomial tree is built for the N elements as in part (b). The largest subtree of the root is then recursively converted into a binary heap of 2^(k-1) elements. The last element in the heap (which is the only one on an extra level) is then inserted into the binomial queue consisting of the remaining binomial trees, thus forming another binomial tree of 2^(k-1) elements. At that point, the root has a subtree that is a heap of 2^(k-1) - 1 elements and another subtree that is a binomial tree of 2^(k-1) elements. Recursively convert that subtree into a heap; now the whole structure is a binary heap. The running time for N = 2^k satisfies T(N) = 2T(N/2) + log N. The base case is T(8) = 8.


6.8

Let D_1, D_2, ..., D_k be random variables representing the depths of the smallest, second smallest, ..., and kth smallest elements, respectively. We are interested in calculating E(D_k). In what follows, we assume that the heap size N is one less than a power of two (that is, the bottom level is completely filled) but sufficiently large so that terms bounded by O(1/N) are negligible. Without loss of generality, we may assume that the kth smallest element is in the left subheap of the root. Let p_{j,k} be the probability that this element is the jth smallest element in the subheap.

Lemma: For k > 1, E(D_k) = Σ_{j=1}^{k-1} p_{j,k} (E(D_j) + 1).

Proof: An element that is at depth d in the left subheap is at depth d + 1 in the entire heap. Since E(D_j + 1) = E(D_j) + 1, the lemma follows.

Since by assumption the bottom level of the heap is full, each of the second, third, ..., (k-1)th smallest elements is in the left subheap with probability 0.5. (Technically, the probability should be 1/2 - 1/(N-1) of being in the right subheap and 1/2 + 1/(N-1) of being in the left, since we have already placed the kth smallest in the right. Recall that we have assumed that terms of size O(1/N) can be ignored.) Thus

p_{j,k} = p_{k-j,k} = C(k-2, j-1) / 2^(k-2)

Theorem: E(D_k) ≤ log k.

Proof: The proof is by induction. The theorem clearly holds for k = 1 and k = 2. We then show that it holds for arbitrary k > 2 on the assumption that it holds for all smaller k. Now, by the inductive hypothesis, for any 1 ≤ j ≤ k-1,

E(D_j) + E(D_{k-j}) ≤ log j + log (k-j)

Since f(x) = log x is concave for x > 0,

log j + log (k-j) ≤ 2 log (k/2)

Thus

E(D_j) + E(D_{k-j}) ≤ log (k/2) + log (k/2)

Furthermore, since p_{j,k} = p_{k-j,k},

p_{j,k} E(D_j) + p_{k-j,k} E(D_{k-j}) ≤ p_{j,k} log (k/2) + p_{k-j,k} log (k/2)

From the lemma,

E(D_k) = Σ_{j=1}^{k-1} p_{j,k} (E(D_j) + 1) = 1 + Σ_{j=1}^{k-1} p_{j,k} E(D_j)

Thus

E(D_k) ≤ 1 + Σ_{j=1}^{k-1} p_{j,k} log (k/2) ≤ 1 + log (k/2) ≤ log k

completing the proof. It can also be shown that asymptotically, E(D_k) ≈ log (k-1) - 0.273548.

6.9

(a) Perform a preorder traversal of the heap.
(b) This works for leftist and skew heaps. The running time is O(Kd) for d-heaps.

6.11 Simulations show that the linear-time algorithm is the faster, not only on worst-case inputs, but also on random data.

6.12 (a) If the heap is organized as a (min) heap, then starting at the hole at the root, find a path down to a leaf by taking the minimum child. This requires roughly log N comparisons. To find the correct place to move the hole, perform a binary search on the log N elements. This takes O(log log N) comparisons.
(b) Find a path of minimum children, stopping after log N - log log N levels. At this point, it is easy to determine whether the hole should be placed above or below the stopping point. If it goes below, then continue finding the path, but perform the binary search on only the last log log N elements on the path, for a total of log N + log log log N comparisons. Otherwise, perform a binary search on the first log N - log log N elements. The binary search takes at most log log N comparisons, and the path finding took only log N - log log N, so the total in this case is log N. So the worst case is the first case.
(c) The bound can be improved to log N + log* N + O(1), where log* N is the iterated logarithm (see Chapter 8). This bound can be found in reference [16].

6.13 The parent is at position ⌊(i + d - 2)/d⌋. The children are in positions (i - 1)d + 2, ..., id + 1.

6.14 (a) O((M + dN) log_d N). (b) O((M + N) log N). (c) O(M + N^2). (d) d = max(2, M/N). (See the related discussion at the end of Section 11.4.)
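The formulas in 6.13 reduce to the familiar i/2, 2i, and 2i + 1 when d = 2; a tiny C illustration (the function names are ours):
______________________________________________________________________
/* Index arithmetic for a d-heap stored in a 1-based array. */
int Parent( int i, int d )
{
    return ( i + d - 2 ) / d;      /* integer division takes the floor */
}

int FirstChild( int i, int d )
{
    return ( i - 1 ) * d + 2;
}

int LastChild( int i, int d )
{
    return i * d + 1;
}
______________________________________________________________________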


6.16

[Figure: the resulting leftist heap; the tree diagram did not survive extraction.]

6.17

[Figure: the leftist heap that results from the insertions; the tree diagram did not survive extraction.]

6.18 This theorem is true, and the proof is very much along the same lines as that of Exercise 4.17.

6.19 If elements are inserted in decreasing order, a leftist heap consisting of a chain of left children is formed. This is the best because the right path length is minimized.

6.20 (a) If a DecreaseKey is performed on a node that is very deep (very left), the time to percolate up would be prohibitive. Thus the obvious solution doesn't work. However, we can still do the operation efficiently by a combination of Delete and Insert. To Delete an arbitrary node x in the heap, replace x by the Merge of its left and right subheaps. This might create an imbalance for nodes on the path from x's parent to the root that would need to be fixed by a child swap. However, it is easy to show that at most log N nodes can be affected, preserving the time bound. This is discussed in Chapter 11.

6.21 Lazy deletion in leftist heaps is discussed in the paper by Cheriton and Tarjan [9]. The general idea is that if the root is marked deleted, then a preorder traversal of the heap is formed, and the frontier of marked nodes is removed, leaving a collection of heaps. These can be merged two at a time by placing all the heaps on a queue, removing two, merging them, and placing the result at the end of the queue, terminating when only one heap remains.

6.22 (a) The standard way to do this is to divide the work into passes. A new pass begins when the first element reappears in a heap that is dequeued. The first pass takes roughly 2*1*(N/2) time units because there are N/2 merges of trees with one node each on the right path. The next pass takes 2*2*(N/4) time units because of the roughly N/4 merges of trees with no more than two nodes on the right path. The third pass takes 2*3*(N/8) time units, and so on. The sum converges to 4N.
(b) It generates heaps that are more leftist.

6.23

[Figure: the resulting heap; the tree diagram did not survive extraction.]

6.24

[Figure: the heap that results from the insertions; the tree diagram did not survive extraction.]

6.25 This claim is also true, and the proof is similar in spirit to that of Exercise 4.17 or 6.18.

6.26 Yes. All the single-operation estimates in Exercise 6.22 become amortized instead of worst-case, but by the definition of amortized analysis, the sum of these estimates is a worst-case bound for the sequence.

6.27 Clearly the claim is true for k = 1. Suppose it is true for all values i = 1, 2, ..., k. A B_{k+1} tree is formed by attaching a B_k tree to the root of a B_k tree. Thus by induction, it contains a B_0 through B_{k-1} tree, as well as the newly attached B_k tree, proving the claim.

6.28 The proof is by induction. Clearly the claim is true for k = 1. Assume it is true for all values i = 1, 2, ..., k. A B_{k+1} tree is formed by attaching a B_k tree to the original B_k tree. The original tree thus had C(k, d) nodes at depth d. The attached tree had C(k, d-1) nodes at depth d-1, which are now at depth d. Adding these two terms and using a well-known formula establishes the theorem.

6.29

[Figure: the resulting binomial queue; the diagram did not survive extraction.]

6.30 This is established in Chapter 11.

6.31 The algorithm is to do nothing special; merely Insert them. This is proved in Chapter 11.

6.35 Don't keep the key values in the heap; keep only the difference between the value of the key in a node and the value of the key in its parent.

6.36 O(N + k log N) is a better bound than O(N log k). The first bound is O(N) if k = O(N / log N). The second bound is more than this as soon as k grows faster than a constant. For the other values Ω(N / log N) ≤ k ≤ o(N), the first bound is better. When k = Θ(N), the bounds are identical.


Chapter 7: Sorting

7.1

Original:    3 1 4 1 5 9 2 6 5
after P = 2: 1 3 4 1 5 9 2 6 5
after P = 3: 1 3 4 1 5 9 2 6 5
after P = 4: 1 1 3 4 5 9 2 6 5
after P = 5: 1 1 3 4 5 9 2 6 5
after P = 6: 1 1 3 4 5 9 2 6 5
after P = 7: 1 1 2 3 4 5 9 6 5
after P = 8: 1 1 2 3 4 5 6 9 5
after P = 9: 1 1 2 3 4 5 5 6 9

7.2

O(N) because the while loop terminates immediately. Of course, accidentally changing the test to include equalities raises the running time to quadratic for this type of input.
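For reference, the insertion sort being traced in 7.1 and discussed in 7.2; this is a standard rendering of the textbook's routine:
______________________________________________________________________
void InsertionSort( ElementType A[ ], int N )
{
    int j, P;
    ElementType Tmp;

    for( P = 1; P < N; P++ )
    {
        Tmp = A[ P ];
        /* Changing > to >= here makes the inner loop run to */
        /* completion on equal keys, the quadratic case of 7.2. */
        for( j = P; j > 0 && A[ j - 1 ] > Tmp; j-- )
            A[ j ] = A[ j - 1 ];
        A[ j ] = Tmp;
    }
}
______________________________________________________________________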

7.3

The inversion that existed between A[i] and A[i + k] is removed. This shows that at least one inversion is removed. For each of the k - 1 elements A[i + 1], A[i + 2], ..., A[i + k - 1], at most two inversions can be removed by the exchange. This gives a maximum of 2(k - 1) + 1 = 2k - 1.

7.4

Original:     9 8 7 6 5 4 3 2 1
after 7-sort: 2 1 7 6 5 4 3 9 8
after 3-sort: 2 1 4 3 5 7 6 9 8
after 1-sort: 1 2 3 4 5 6 7 8 9

7.5

(a) Θ(N^2). The 2-sort removes at most only three inversions at a time; hence the algorithm is Ω(N^2). The 2-sort is two insertion sorts of size N/2, so the cost of that pass is O(N^2). The 1-sort is also O(N^2), so the total is O(N^2).

7.6

Part (a) is an extension of the theorem proved in the text. Part (b) is fairly complicated; see reference [11].

7.7

See reference [11].

7.8

Use the input specified in the hint. If the number of inversions is shown to be Ω(N^2), then the bound follows, since no increments are removed until an h_{t/2} sort. If we consider the pattern formed by h_k through h_{2k-1}, where k = t/2 + 1, we find that it has length N = h_k(h_k + 1) - 1, and the number of inversions is roughly h_k^4 / 24, which is Ω(N^2).

7.9

(a) O(N log N). No exchanges, but each pass takes O(N).
(b) O(N log N). It is easy to show that after an h_k sort, no element is farther than h_k from its rightful position. Thus if the increments satisfy h_{k+1} ≤ c h_k for a constant c, which implies O(log N) increments, then the bound is O(N log N).


7.10 (a) No, because it is still possible for consecutive increments to share a common factor. An example is the sequence 1, 3, 9, 21, 45, with h_{t+1} = 2h_t + 3.
(b) Yes, because consecutive increments are relatively prime. The running time becomes O(N^{3/2}).

7.11 The input is read in as
142, 543, 123, 65, 453, 879, 572, 434, 111, 242, 811, 102
The result of the heapify is
879, 811, 572, 434, 543, 123, 142, 65, 111, 242, 453, 102
879 is removed from the heap and placed at the end. We'll place it in italics to signal that it is not part of the heap. 102 is placed in the hole and bubbled down, obtaining
811, 543, 572, 434, 453, 123, 142, 65, 111, 242, 102, 879
Continuing the process, we obtain
572, 543, 142, 434, 453, 123, 102, 65, 111, 242, 811, 879
543, 453, 142, 434, 242, 123, 102, 65, 111, 572, 811, 879
453, 434, 142, 111, 242, 123, 102, 65, 543, 572, 811, 879
434, 242, 142, 111, 65, 123, 102, 453, 543, 572, 811, 879
242, 111, 142, 102, 65, 123, 434, 453, 543, 572, 811, 879
142, 111, 123, 102, 65, 242, 434, 453, 543, 572, 811, 879
123, 111, 65, 102, 142, 242, 434, 453, 543, 572, 811, 879
111, 102, 65, 123, 142, 242, 434, 453, 543, 572, 811, 879
102, 65, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879
65, 102, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879

7.12 Heapsort uses at least (roughly) N log N comparisons on any input, so there are no particularly good inputs. This bound is tight; see the paper by Schaffer and Sedgewick [16]. This result applies to almost all variations of heapsort, which have different rearrangement strategies. See Y. Ding and M. A. Weiss, "Best Case Lower Bounds for Heapsort," Computing 49 (1992).

7.13 First the sequence {3, 1, 4, 1} is sorted. To do this, the sequence {3, 1} is sorted. This involves sorting {3} and {1}, which are base cases, and merging the results to obtain {1, 3}. The sequence {4, 1} is likewise sorted into {1, 4}. Then these two sequences are merged to obtain {1, 1, 3, 4}. The second half is sorted similarly, eventually obtaining {2, 5, 6, 9}. The merged result is then easily computed as {1, 1, 2, 3, 4, 5, 6, 9}.

7.14 Mergesort can be implemented nonrecursively by first merging pairs of adjacent elements, then pairs of two elements, then pairs of four elements, and so on. This is implemented in Fig. 7.1.

7.15 The merging step always takes Θ(N) time, so the sorting process takes Θ(N log N) time on all inputs.

7.16 See reference [11] for the exact derivation of the worst case of mergesort.

7.17 The original input is
3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5
After sorting the first, middle, and last elements, we have
3, 1, 4, 1, 5, 5, 2, 6, 5, 3, 9
Thus the pivot is 5. Hiding it gives
3, 1, 4, 1, 5, 3, 2, 6, 5, 5, 9
The first swap is between two fives. The next swap has i and j crossing. Thus the pivot is


_______________________________________________________________________________
/* Nonrecursive mergesort; Merge and min are the routines from the text. */
void Mergesort( ElementType A[ ], int N )
{
    ElementType *TmpArray;
    int SubListSize, Part1Start, Part2Start, Part2End;

    TmpArray = malloc( sizeof( ElementType ) * N );
    for( SubListSize = 1; SubListSize < N; SubListSize *= 2 )
    {
        Part1Start = 0;
        while( Part1Start + SubListSize < N )   /* a second run exists */
        {
            Part2Start = Part1Start + SubListSize;
            Part2End = min( N - 1, Part2Start + SubListSize - 1 );
            Merge( A, TmpArray, Part1Start, Part2Start, Part2End );
            Part1Start = Part2End + 1;
        }
    }
    free( TmpArray );
}

Fig. 7.1.
_______________________________________________________________________________

swapped back with i:
3, 1, 4, 1, 5, 3, 2, 5, 5, 6, 9
We now recursively quicksort the first eight elements:
3, 1, 4, 1, 5, 3, 2, 5
Sorting the three appropriate elements gives
1, 1, 4, 3, 5, 3, 2, 5
Thus the pivot is 3, which gets hidden:
1, 1, 4, 2, 5, 3, 3, 5
The first swap is between 4 and 3:
1, 1, 3, 2, 5, 4, 3, 5
The next swap crosses pointers, so it is undone; i points at 5, and so the pivot is swapped:
1, 1, 3, 2, 3, 4, 5, 5
A recursive call is now made to sort the first four elements. The pivot is 1, and the partition does not make any changes. The recursive calls are made, but the subfiles are below the cutoff, so nothing is done. Likewise, the last three elements constitute a base case, so nothing is done. We return to the original call, which now calls quicksort recursively on the right-hand side, but again, there are only three elements, so nothing is done. The result is
1, 1, 3, 2, 3, 4, 5, 5, 5, 6, 9
which is cleaned up by insertion sort.

7.18 (a) O(N log N) because the pivot will partition perfectly.
(b) Again, O(N log N) because the pivot will partition perfectly.
(c) O(N log N); the performance is slightly better than the analysis suggests because of the median-of-three partition and cutoff.


7.19 (a) If the first element is chosen as the pivot, the running time degenerates to quadratic in the first two cases. It is still O(N log N) for random input.
(b) The same results apply for this pivot choice.
(c) If a random element is chosen, then the running time is O(N log N) expected for all inputs, although there is an O(N^2) worst case if very bad random numbers come up. There is, however, an essentially negligible chance of this occurring. Chapter 10 discusses the randomized philosophy.
(d) This is a dangerous road to go down; it depends on the distribution of the keys. For many distributions, such as uniform, the performance is O(N log N) on average. For a skewed distribution, such as with the input {1, 2, 4, 8, 16, 32, 64, ...}, the pivot will be consistently terrible, giving quadratic running time, independent of the ordering of the input.

7.20 (a) O(N log N) because the pivot will partition perfectly.
(b) Sentinels need to be used to guarantee that i and j don't run past the end. The running time will be Θ(N^2) since, because i won't stop until it hits the sentinel, the partitioning step will put all but the pivot in S_1.
(c) Again a sentinel needs to be used to stop j. This is also Θ(N^2) because the partitioning is unbalanced.

7.21 Yes, but it doesn't reduce the average running time for random input. Using median-of-three partitioning reduces the average running time because it makes the partition more balanced on average.

7.22 The strategy used here is to force the worst possible pivot at each stage. This doesn't necessarily give the maximum amount of work (since there are few exchanges, just lots of comparisons), but it does give Ω(N^2) comparisons. By working backward, we can arrive at the following permutation:
20, 3, 5, 7, 9, 11, 13, 15, 17, 19, 4, 10, 2, 12, 6, 14, 1, 16, 8, 18
A method to extend this to larger numbers when N is even is as follows: The first element is N, the middle is N - 1, and the last is N - 2. Odd numbers (except 1) are written in decreasing order starting to the left of center. Even numbers are written in decreasing order by starting at the rightmost spot, always skipping one available empty slot, and wrapping around when the center is reached. This method takes O(N log N) time to generate the permutation, but it is suitable for a hand calculation. By inverting the actions of quicksort, it is possible to generate the permutation in linear time.

7.24 This recurrence results from the analysis of the quick selection algorithm. T(N) = O(N).

7.25 Insertion sort and mergesort are stable if coded correctly. Any of the sorts can be made stable by the addition of a second key, which indicates the original position.

7.26 (d) f(N) can be O(N / log N). Sort the f(N) elements using mergesort in O(f(N) log f(N)) time. This is O(N) if f(N) is chosen using the criterion given. Then merge this sorted list with the already sorted list of N numbers in O(N + f(N)) = O(N) time.

7.27 A decision tree would have N leaves, so ⌈log N⌉ comparisons are required.

7.28 log N! ≈ N log N - N log e.

7.29 (a) C(2N, N).


(b) The information-theoretic lower bound is log C(2N, N). Applying Stirling's formula, we can estimate the bound as 2N - (1/2) log N. A better lower bound is known for this case: 2N - 1 comparisons are necessary. Merging two lists of different sizes M and N likewise requires at least log C(M + N, N) comparisons.

7.30 It takes O(1) to insert each element into a bucket, for a total of O(N). It takes O(1) to extract each element from a bucket, for O(M). We waste at most O(1) examining each empty bucket, for a total of O(M). Adding these estimates gives O(M + N).

7.31 We add a dummy (N+1)th element, which we'll call Maybe. Maybe satisfies False < Maybe < True.

7.33 (a) ⌈log 4!⌉ = 5.
(b) Compare and exchange (if necessary) a1 and a2 so that a1 ≥ a2, and repeat with a3 and a4. Compare and exchange a1 and a3. Compare and exchange a2 and a4. Finally, compare and exchange a2 and a3.

7.34 (a) ⌈log 5!⌉ = 7.
(b) Compare and exchange (if necessary) a1 and a2 so that a1 ≥ a2, and repeat with a3 and a4 so that a3 ≥ a4. Compare and exchange (if necessary) the two winners, a1 and a3. Assume without loss of generality that we now have a1 ≥ a3 ≥ a4, and a1 ≥ a2. (The other case is obviously identical.) Insert a5 by binary search in the appropriate place among a1, a3, a4. This can be done in two comparisons. Finally, insert a2 among a3, a4, a5. If it is the largest among those three, then it goes directly after a1 since it is already known to be smaller than a1. This takes two more comparisons by a binary search. The total is thus seven comparisons.

7.38 (a) For the given input, the pivot is 2. It is swapped with the last element. i will point at the second element, and j will be stopped at the first element. Since the pointers have crossed, the pivot is swapped with the element in position 2. The input is now 1, 2, 4, 5, 6, ..., N - 1, N, 3. The recursive call on the right subarray is thus on an increasing sequence of numbers, except for the last number, which is the smallest. This is exactly the same form as the original. Thus each recursive call will have only two fewer elements than the previous one. The running time will be quadratic.
(b) Although the first pivot generates equal partitions, both the left and right halves will have the same form as part (a). Thus the running time will be quadratic because after the first partition, the algorithm will grind slowly. This is one of the many interesting tidbits in reference [20].


7.39 We show that in a binary tree with L leaves, the average depth of a leaf is at least log L. We can prove this by induction. Clearly, the claim is true if L = 1. Suppose it is true for trees with up to L - 1 leaves. Consider a tree of L leaves with minimum average leaf depth. Clearly, the root of such a tree must have non-NULL left and right subtrees. Suppose that the left subtree has L_L leaves, and the right subtree has L_R leaves. By the inductive hypothesis, the total depth of the leaves (which is their average times their number) in the left subtree is L_L(1 + log L_L), and the total depth of the right subtree's leaves is L_R(1 + log L_R) (because the leaves in the subtrees are one deeper with respect to the root of the tree than with respect to the roots of their subtrees). Thus the total depth of all the leaves is L + L_L log L_L + L_R log L_R. Since f(x) = x log x is convex for x ≥ 1, we know that f(x) + f(y) ≥ 2 f((x+y)/2). Thus, the total depth of all the leaves is at least L + 2(L/2) log (L/2) ≥ L + L(log L - 1) ≥ L log L. Thus the average leaf depth is at least log L.


Chapter 8: The Disjoint Set ADT

8.1

We assume that unions are performed on the roots of the trees containing the arguments. Also, in case of ties, the second tree is made a child of the first. Arbitrary union and union by height give the same answer (shown as the first tree) for this problem. Union by size gives the second tree.

[Figure: the two resulting union/find trees; the diagrams did not survive extraction.]

8.2

In both cases, have nodes 16 and 17 point directly to the root.

8.4

Claim: A tree of height H has at least 2^H nodes. The proof is by induction. A tree of height 0 clearly has at least 1 node, and a tree of height 1 clearly has at least 2. Let T be the tree of height H with fewest nodes. Thus at the time of T's last union, it must have been a tree of height H-1, since otherwise T would have been smaller at that time than it is now and still would have been of height H, which is impossible by the assumption of T's minimality. Since T's height was updated, it must have been as a result of a union with another tree of height H-1. By the induction hypothesis, we know that at the time of the union, T had at least 2^(H-1) nodes, as did the tree attached to it, for a total of 2^H nodes, proving the claim. Thus an N-node tree has depth at most ⌊log N⌋.

8.5

All answers are O(M) because in all cases α(M, N) = 1.

8.6

Assuming that the graph has only nine vertices, the union/find tree that is formed is shown here. The edge (4,6) does not result in a union because at the time it is examined, 4 and 6 are already in the same component. The connected components are {1,2,3,4,6} and {5,7,8,9}.

[Figure: the resulting union/find trees for the two components; the diagrams did not survive extraction.]

8.8

(a) When we perform a union, we push onto a stack the two roots and the old values of their parents. To implement a Deunion, we only have to pop the stack and restore the values. This strategy works fine in the absence of path compression.
(b) If path compression is implemented, the strategy described in part (a) does not work because path compression moves elements out of subtrees. For instance, the sequence Union(1,2), Union(3,4), Union(1,3), Find(4), Deunion(1,3) will leave 4 in set 1 if path compression is implemented.

8.9

We assume that the tree is implemented with pointers instead of a simple array. Thus Find will return a pointer instead of an actual set name. We will keep an array to map set numbers to their tree nodes. Union and Find are implemented in the standard manner. To perform Remove(X), first perform a Find(X) with path compression. Then mark the node containing X as vacant. Create a new one-node tree with X and have it pointed to by the appropriate array entry. The time to perform a Remove is the same as the time to perform a Find, except that there potentially could be a large number of vacant nodes. To take care of this, after N Removes are performed, perform a Find on every node, with path compression. If a Find(X) returns a vacant root, then place X in the root node, and make the old node containing X vacant. The results of Exercise 8.11 guarantee that this will take linear time, which can be charged to the N Removes. At this point, all vacant nodes (indeed all nonroot nodes) are children of a root, and vacant nodes can be disposed of (if an array of pointers to them has been kept). This also guarantees that there are never more than 2N nodes in the forest and preserves the M α(M, N) asymptotic time bound.

8.11 Suppose there are u Unions and f Finds. Each union costs constant time, for a total of u. A Find costs one unit per vertex visited. We charge, as in the text, under the following slightly modified rules:
(A) the vertex is a root or child of the root
(B) otherwise
Essentially, all vertices are in one rank group. During any Find, there can be at most two rule (A) charges, for a total of 2f. Each vertex can be charged at most once under rule (B) because after path compression it will be a child of the root. The number of vertices that are not roots or children of roots is clearly bounded by u, independent of the unioning strategy, because each Union changes exactly one vertex from root to nonroot status, and this bounds the number of type (B) nodes. Thus the total rule (B) charges are at most u. Adding all charges gives a bound of 2f + 2u, which is linear in the number of operations.

8.13 For each vertex v, let the pseudorank R_v be defined as ⌊log S_v⌋, where S_v is the number of descendants (including itself) of v in the final tree, after all Unions are performed, ignoring path compression. Although the pseudorank is not maintained by the algorithm, it is not hard to show that the pseudorank satisfies the same properties as the ranks do in union-by-rank. Clearly, a vertex with pseudorank R_v has at least 2^{R_v} descendants (by its definition), and the number of vertices of pseudorank R is at most N/2^R. The union-by-size rule ensures that the parent of a node has twice as many descendants as the node, so the pseudoranks monotonically increase on the path toward the root if there is no path compression. The argument in Lemma 8.3 tells us that path compression does not destroy this property. If we partition the vertices by pseudoranks and assign the charges in the same manner as in the text proof for union-by-rank, the same steps follow, and the identical bound is obtained.

8.14 This is most conveniently implemented without recursion and is faster because, even if full path compression is implemented nonrecursively, it requires two passes up the tree; path halving requires only one. We leave the coding to the reader since comparing the various Union and Find strategies is a reasonable programming project. The worst-case running time remains the same because the properties of the ranks are unchanged. Instead of charging one unit to each vertex on the path to the root, we can charge two units to alternating vertices (namely, the vertices whose parents are altered by path halving). These vertices get parents of higher rank, as before, and the same kind of analysis bounds the total charges.
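A C sketch of the one-pass path-halving Find (ours; the nonpositive-parent convention for roots follows the array representation used in the text):
______________________________________________________________________
/* S[ i ] is i's parent; a nonpositive value marks a root. */
int Find( int X, int S[ ] )
{
    while( S[ X ] > 0 )
    {
        if( S[ S[ X ] ] > 0 )
            S[ X ] = S[ S[ X ] ];   /* make X point to its grandparent */
        X = S[ X ];                 /* ascend one (now halved) step */
    }
    return X;
}
______________________________________________________________________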


Chapter 9: Graph Algorithms

9.1

The following ordering is arrived at by using a queue and assumes that vertices appear on an adjacency list alphabetically. The topological order that results is then s, G, D, H, A, B, E, I, F, C, t

9.2

Assuming the same adjacency list, the topological order produced when a stack is used is
s, G, H, D, A, E, I, F, B, C, t
Because a topological sort processes vertices in the same manner as a breadth-first search, it tends to produce a more natural ordering.

9.4

The idea is the same as in Exercise 5.10.

9.5

(a) (Unweighted paths) A->B, A->C, A->B->G, A->B->E, A->C->D, A->B->E->F.
(b) (Weighted paths) A->C, A->B, A->B->G, A->B->G->E, A->B->G->E->F, A->B->G->E->D.

9.6

We'll assume that Dijkstra's algorithm is implemented with a priority queue of vertices that uses the DecreaseKey operation. Dijkstra's algorithm uses |E| DecreaseKey operations, which cost O(log_d |V|) each, and |V| DeleteMin operations, which cost O(d log_d |V|) each. The running time is thus O(|E| log_d |V| + |V| d log_d |V|). The cost of the DecreaseKey operations balances the Insert operations when d = |E|/|V|. For a sparse graph, this might give a value of d that is less than 2; we can't allow this, so d is chosen to be max(2, ⌈|E|/|V|⌉). This gives a running time of O(|E| log_{2+|E|/|V|} |V|), which is a slight theoretical improvement. Moret and Shapiro report (indirectly) that d-heaps do not improve the running time in practice.

9.7

(a) The graph shown here is an example. Dijkstra's algorithm gives a path from A to C of cost 2, when the path from A to B to C has cost 1.

[Figure: a three-vertex graph with edges A->C of cost 2, A->B of cost 3, and B->C of cost -2.]

(b) We define a pass of the algorithm as follows: Pass 0 consists of marking the start vertex as known and placing its adjacent vertices on the queue. For j > 0, pass j consists of marking as known all vertices on the queue at the end of pass j - 1. Each pass requires linear time, since during a pass, a vertex is placed on the queue at most once. It is easy to show by induction that if there is a shortest path from s to v containing k edges, then d_v will equal the length of this path by the beginning of pass k. Thus there are at most |V| passes, giving an O(|E| |V|) bound.

9.8

See the comments for Exercise 9.19.

9.10 (a) Use an array Count such that for any vertex u, Count[u] is the number of distinct paths from s to u known so far. When a vertex v is marked as known, its adjacency list is traversed. Let w be a vertex on the adjacency list. If d_v + c_{v,w} = d_w, then increment Count[w] by Count[v], because all shortest paths from s to v with last edge (v,w) give a shortest path to w. If d_v + c_{v,w} < d_w, then p_w and d_w get updated. All previously known shortest paths to w are now invalid, but all shortest paths to v now lead to shortest paths for w, so set Count[w] to equal Count[v]. Note: zero-cost edges mess up this algorithm.
(b) Use an array NumEdges such that for any vertex u, NumEdges[u] is the shortest number of edges on a path of distance d_u from s to u known so far. Thus NumEdges is used as a tiebreaker when selecting the vertex to mark. As before, v is the vertex marked known, and w is adjacent to v. If d_v + c_{v,w} = d_w, then change p_w to v and NumEdges[w] to NumEdges[v] + 1 if NumEdges[v] + 1 < NumEdges[w]. If d_v + c_{v,w} < d_w, then update p_w and d_w, and set NumEdges[w] to NumEdges[v] + 1.
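The change in part (a) is confined to the relaxation step. A pseudo-C sketch in the style of the text's Dijkstra code (the table T, the Count array, and the cost Cvw are our notation; the surrounding Dijkstra loop is assumed):
______________________________________________________________________
/* Inside Dijkstra's algorithm, after V is declared known. */
/* Count[ Start ] = 1; all other counts start at 0. */
for( each W adjacent to V )
    if( !T[ W ].Known )
    {
        if( T[ V ].Dist + Cvw < T[ W ].Dist )
        {
            T[ W ].Dist = T[ V ].Dist + Cvw;  /* strictly shorter path */
            T[ W ].Path = V;
            Count[ W ] = Count[ V ];          /* old paths to W invalid */
        }
        else if( T[ V ].Dist + Cvw == T[ W ].Dist )
            Count[ W ] += Count[ V ];         /* equally short: add V's paths */
    }
______________________________________________________________________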

9.11 (This solution is not unique.) First, send four units of flow along the path s, G, H, I, t. This gives the following residual graph:

[Figure: the residual graph after this step; the diagram did not survive extraction.]

Next, send three units of flow along s, D, E, F, t. The residual graph that results is as follows:

[Figure: the residual graph after this step; the diagram did not survive extraction.]

Now two units of flow are sent along the path s, G, D, A, B, C, t, yielding the following residual graph:


[Figure: the residual graph after this step; the diagram did not survive extraction.]

One unit of flow is then sent along s, D, A, E, C, t:

[Figure: the residual graph after this step; the diagram did not survive extraction.]

Finally, one unit of flow can go along the path s, A, E, C, t:

[Figure: the residual graph after this step; the diagram did not survive extraction.]

The preceding residual graph has no path from s to t. Thus the algorithm terminates. The final flow graph, which carries 11 units, is as follows:

[Figure: the final flow graph; the diagram did not survive extraction.]

This flow is not unique. For instance, two units of the flow that goes from G to D to A to E could go by G to H to E.


9.12 Let T be the tree with root r and children r_1, r_2, ..., r_k, which are the roots of T_1, T_2, ..., T_k, which have maximum incoming flow of c_1, c_2, ..., c_k, respectively. By the problem statement, we may take the maximum incoming flow of r to be infinity. The recursive function FindMaxFlow( T, IncomingCap ) finds the value of the maximum flow in T (finding the actual flow is a matter of bookkeeping); the flow is guaranteed not to exceed IncomingCap. If T is a leaf, then FindMaxFlow returns IncomingCap since we have assumed a sink of infinite capacity. Otherwise, a standard postorder traversal can be used to compute the maximum flow in linear time.
_______________________________________________________________________________
FlowType FindMaxFlow( Tree T, FlowType IncomingCap )
{
    FlowType ChildFlow, TotalFlow;

    if( IsLeaf( T ) )
        return IncomingCap;
    else
    {
        TotalFlow = 0;
        for( each subtree Ti of T )
        {
            ChildFlow = FindMaxFlow( Ti, min( IncomingCap, ci ) );
            TotalFlow += ChildFlow;
            IncomingCap -= ChildFlow;
        }
        return TotalFlow;
    }
}
_______________________________________________________________________________

9.13 (a) Assume that the graph is connected and undirected. If it is not connected, then apply the algorithm to the connected components. Initially, mark all vertices as unknown. Pick any vertex v, color it red, and perform a depth-first search. When a node is first encountered, color it blue if the DFS has just come from a red node, and red otherwise. If at any point the depth-first search encounters an edge between two identical colors, then the graph is not bipartite; otherwise, it is. A breadth-first search (that is, using a queue) also works. This problem, which is essentially two-coloring a graph, is clearly solvable in linear time. This contrasts with three-coloring, which is NP-complete.
(b) Construct an undirected graph with a vertex for each instructor, a vertex for each course, and an edge (v,w) if instructor v is qualified to teach course w. Such a graph is bipartite; a matching of M edges means that M courses can be covered simultaneously.
(c) Give each edge in the bipartite graph a weight of 1, and direct the edge from the instructor to the course. Add a vertex s with edges of weight 1 from s to all instructor vertices. Add a vertex t with edges of weight 1 from all course vertices to t. The maximum flow is equal to the maximum matching.


(d) The running time is O(|E| |V|^{1/2}) because this is the special case of the network flow problem mentioned in the text. All edges have unit cost, and every vertex (except s and t) has either an indegree or outdegree of 1.

9.14 This is a slight modification of Dijkstra's algorithm. Let f_i be the best flow from s to i at any point in the algorithm. Initially, f_i = 0 for all vertices, except s: f_s = ∞. At each stage, we select v such that f_v is maximum among all unknown vertices. Then for each w adjacent to v, the cost of the flow to w using v as an intermediate is min(f_v, c_{v,w}). If this value is higher than the current value of f_w, then f_w and p_w are updated.

9.15 One possible minimum spanning tree is shown here. This solution is not unique.

[Figure: one possible minimum spanning tree; the diagram did not survive extraction.]

9.16 Both work correctly. The proof makes no use of the fact that an edge must be nonnegative.

9.17 The proof of this fact can be found in any good graph theory book. A more general theorem follows:
Theorem: Let G = (V, E) be an undirected, unweighted graph, and let A be the adjacency matrix for G (which contains either 1s or 0s). Let D be the matrix such that D[v][v] is equal to the degree of v; all nondiagonal entries are 0. Then the number of spanning trees of G is equal to any cofactor of D - A.

9.19 The obvious solution using elementary methods is to bucket sort the edge weights in linear time. Then the running time of Kruskal's algorithm is dominated by the Union/Find operations and is O(|E| α(|E|, |V|)). The van Emde Boas priority queues (see Chapter 6 references) give an immediate O(|E| log log |V|) running time for Dijkstra's algorithm, but this isn't even as good as a Fibonacci heap implementation. More sophisticated priority queue methods that combine these ideas have been proposed, including M. L. Fredman and D. E. Willard, "Trans-dichotomous Algorithms for Minimum Spanning Trees and Shortest Paths," Proceedings of the Thirty-first Annual IEEE Symposium on the Foundations of Computer Science (1990), 719-725. The paper presents a linear-time minimum spanning tree algorithm and an O(|E| + |V| log |V| / log log |V|) implementation of Dijkstra's algorithm if the edge costs are suitably small.

9.20 Since the minimum spanning tree algorithm works for negative edge costs, an obvious solution is to replace all the edge costs by their negatives and use the minimum spanning tree algorithm. Alternatively, change the logic so that < is replaced by >, Min by Max, and vice versa.

9.21 We start the depth-first search at A and visit adjacent vertices alphabetically. The articulation points are C, E, and F. C is an articulation point because Low[B] ≥ Num[C]; E is an articulation point because Low[H] ≥ Num[E]; and F is an articulation point because Low[G] ≥ Num[F]. The depth-first spanning tree is shown in Fig. 9.1.

[Fig. 9.1: the depth-first spanning tree, with Num/Low values at each vertex; the diagram did not survive extraction.]

9.22 The only difficult part is showing that if some nonroot vertex a is an articulation point, then there is no back edge between any proper descendant of a and a proper ancestor of a in the depth-first spanning tree. We prove this by contradiction. Let u and v be two vertices such that every path from u to v goes through a. At least one of u and v is a proper descendant of a, since otherwise there is a path from u to v that avoids a. Assume without loss of generality that u is a proper descendant of a. Let c be the child of a that contains u as a descendant. If there is no back edge between a descendant of c and a proper ancestor of a, then the theorem is true immediately, so suppose for the sake of contradiction that there is a back edge (s, t). Then either v is a proper descendant of a or it isn't. In the second case, by taking a path from u to s to t to v, we can avoid a, which is a contradiction. In the first case, clearly v cannot be a descendant of c, so let c' be the child of a that contains v as a descendant. By a similar argument as before, the only possibility is that there is a back edge (s', t') between a descendant of c' and a proper ancestor of a. Then there is a path from u to s to t to t' to s' to v; this path avoids a, which is also a contradiction.


9.23 (a) Do a depth-first search and count the number of back edges.
(b) This is the feedback edge set problem. See reference [1] or [20].

9.24 Let (v,w) be a cross edge. Since at the time w is examined it is already marked, and w is not a descendant of v (else it would be a forward edge), processing for w is already complete when processing for v commences. Thus under the convention that trees (and subtrees) are placed left to right, the cross edge goes from right to left.

9.25 Suppose the vertices are numbered in preorder and postorder. If (v,w) is a tree edge, then v must have a smaller preorder number than w. It is easy to see that the converse is true. If (v,w) is a cross edge, then v must have both a larger preorder and postorder number than w. The converse is shown as follows: because v has a larger preorder number, w cannot be a descendant of v; because it has a larger postorder number, v cannot be a descendant of w; thus they must be in different trees. Otherwise, v has a larger preorder number but is not a cross edge. To test if (v,w) is a back edge, keep a stack of vertices that are active in the depth-first search call (that is, a stack of vertices on the path from the current root). By keeping a bit array indicating presence on the stack, we can easily decide if (v,w) is a back edge or a forward edge.

9.26 The first depth-first spanning tree is

[Figure: the first depth-first spanning tree on vertices A through G; the diagram did not survive extraction.]

G_r, with the order in which to perform the second depth-first search, is shown next. The strongly connected components are {F} and all other vertices.


[Figure: G_r, with second-search numbering B,6; A,7; G,5; C,4; D,2; E,3; F,1; the diagram did not survive extraction.]

9.28 This is the algorithm mentioned in the references.

9.29 As an edge (v,w) is implicitly processed, it is placed on a stack. If v is determined to be an articulation point because Low[w] ≥ Num[v], then the stack is popped until edge (v,w) is removed: the set of popped edges is a biconnected component. An edge (v,w) is not placed on the stack if the edge (w,v) was already processed as a back edge.

9.30 Let (u,v) be an edge of the breadth-first spanning tree. u and v are connected; thus they must be in the same tree. Let the root of the tree be r; if the shortest path from r to u is d_u, then u is at level d_u; likewise, v is at level d_v. If (u,v) were a back edge, then d_u > d_v, and v is visited before u. But if there were an edge between u and v, and v is visited first, then there would be a tree edge (v,u), and not a back edge (u,v). Likewise, if (u,v) were a forward edge, then there would be some w, distinct from u and v, on the path from u to v; this contradicts the fact that d_v = d_w + 1. Thus only tree edges and cross edges are possible.

9.31 Perform a depth-first search. The return from each recursive call implies the edge traversal in the opposite direction. The time is clearly linear.

9.33 If there is an Euler circuit, then it consists of entering and exiting nodes; the number of entrances clearly must equal the number of exits. If the graph is not strongly connected, there cannot be a cycle connecting all the vertices. To prove the converse, an algorithm similar in spirit to the undirected version can be used.

9.34 Neither of the proposed algorithms works. For example, as shown, a depth-first search of a biconnected graph that follows A, B, C, D is forced back to A, where it is stranded.

[Figure: a six-vertex biconnected graph on vertices A through F illustrating the failure; the diagram did not survive extraction.]

9.35 These are classic graph theory results. Consult any graph theory book for a solution to this exercise.

9.36 All the algorithms work without modification for multigraphs.


9.37 Obviously, G must be connected. If each edge of G can be converted to a directed edge and produce a strongly connected graph G', then G is convertible. Then, if the removal of a single edge disconnects G, G is not convertible, since this would also disconnect G'. This is easy to test by checking to see if there are any single-edge biconnected components. Otherwise, perform a depth-first search on G and direct each tree edge away from the root and each back edge toward the root. The resulting graph is strongly connected because, for any vertex v, we can get to a higher level than v by taking some (possibly 0) tree edges and a back edge. We can apply this until we eventually get to the root, and then follow tree edges down to any other vertex.

9.38 (b) Define a graph where each stick is represented by a vertex. If stick S_i is above S_j and thus must be removed first, then place an edge from S_i to S_j. A legal pick-up ordering is given by a topological sort; if the graph has a cycle, then the sticks cannot be picked up.

9.39 Given an instance of clique, form the graph G' that is the complement graph of G: (v,w) is an edge in G' if and only if it is not an edge in G. Then G' has a vertex cover of at most |V| - K if G has a clique of size at least K. (The vertices that form the vertex cover are exactly those not in the clique.) The details of the proof are left to the reader.

9.40 A proof can be found in Garey and Johnson [20].

9.41 Clearly, the baseball card collector problem (BCCP) is in NP, because it is easy to check if K packets contain all the cards. To show it is NP-complete, we reduce vertex cover to it. Let G = (V, E) and K be an instance of vertex cover. For each vertex v, place all edges adjacent to v in packet P_v. The K packets will contain all edges (baseball cards) iff G can be covered by K vertices.


Chapter 10: Algorithm Design Techniques

10.1

First, we show that if P evenly divides N, then each of j_{(i-1)P+1} through j_{iP} must be placed as the ith job on some processor. Suppose otherwise. Then in the supposed optimal ordering, we must be able to find some jobs j_x and j_y such that j_x is the tth job on some processor and j_y is the (t+1)th job on some processor, but t_x > t_y. Let j_z be the job immediately following j_x. If we swap j_y and j_z, it is easy to check that the mean processing time is unchanged and thus still optimal. But now j_y follows j_x, which is impossible because we know that the jobs on any processor must be in sorted order, from the results of the one-processor case.

Let j_{e1}, j_{e2}, ..., j_{eM} be the extra jobs if P does not evenly divide N. It is easy to see that the processing time for these jobs depends only on how quickly they can be scheduled, and that they must be the last scheduled jobs on some processors. It is easy to see that the first M processors must have jobs j_{(i-1)P+1} through j_{iP+M}; we leave the details to the reader.

10.3

[Figure: the optimal Huffman coding tree for the digits, colon, comma, space, and newline; the diagram did not survive extraction.]

10.4

One method is to generate code that can be evaluated by a stack machine. The two operations are Push (the one-node tree corresponding to) a symbol onto a stack and Combine, which pops two trees off the stack, merges them, and pushes the result back on. For the example in the text, the stack instructions are Push(s), Push(nl), Combine, Push(t), Combine, Push(a), Combine, Push(e), Combine, Push(i), Push(sp), Combine, Combine. By encoding a Combine with a 0 and a Push with a 1 followed by the symbol, the total extra space is 2N - 1 bits if all the symbols are of equal length. Generating the stack machine code can be done with a simple recursive procedure and is left to the reader.

10.6

Maintain two queues, Q1 and Q2. Q1 will store single-node trees in sorted order, and Q2 will store multinode trees in sorted order. Place the initial single-node trees on Q1, enqueueing the smallest weight tree first. Initially, Q2 is empty. Examine the first two entries of each of Q1 and Q2, and dequeue the two smallest. (This requires an easily implemented extension to the ADT.) Merge the trees and place the result at the end of Q2. Continue this step until Q1 is empty and only one tree is left in Q2.
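This runs in linear time once the initial weights are sorted, because both queue fronts always hold the current minima. A minimal C sketch (the array-backed queues, fixed capacity, and cost-only return value are our own simplifications):
______________________________________________________________________
#define MAXSYM 100

typedef struct { long Weight; } Tree;  /* stands in for a real tree node */

static Tree Q1[ MAXSYM ], Q2[ 2 * MAXSYM ];
static int  Head1, Tail1, Head2, Tail2;

/* Dequeue the smaller front element of the two queues. */
static Tree DequeueMin( void )
{
    if( Head2 == Tail2 ||
        ( Head1 != Tail1 && Q1[ Head1 ].Weight <= Q2[ Head2 ].Weight ) )
        return Q1[ Head1++ ];
    return Q2[ Head2++ ];
}

/* Weights must be given in increasing order. */
long Huffman( const long Weights[ ], int N )
{
    int i;
    long Total = 0;

    for( i = 0; i < N; i++ )
        Q1[ Tail1++ ].Weight = Weights[ i ];

    while( ( Tail1 - Head1 ) + ( Tail2 - Head2 ) > 1 )
    {
        Tree A = DequeueMin( ), B = DequeueMin( );
        Q2[ Tail2 ].Weight = A.Weight + B.Weight;  /* the merge step */
        Total += Q2[ Tail2++ ].Weight;
        /* A full implementation would link A and B as children here. */
    }
    return Total;   /* total cost of the optimal code */
}
______________________________________________________________________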


10.9

To implement first fit, we keep track of bins b_i that have more room than any of the lower-numbered bins. A theoretically easy way to do this is to maintain a splay tree ordered by empty space. To insert w, we find the smallest of these bins that has at least w empty space; after w is added to the bin, if the resulting amount of empty space is less than that of the inorder predecessor in the tree, the entry can be removed; otherwise, a DecreaseKey is performed.

To implement best fit, we need to keep track of the amount of empty space in each bin. As before, a splay tree can keep track of this. To insert an item of size w, perform an insert of w. If there is a bin that can fit the item exactly, the insert will detect it and splay it to the root; the item can be added and the root deleted. Otherwise, the insert has placed w at the root (which eventually needs to be removed). We find the minimum element M in the right subtree, which brings M to the right subtree's root, attach the left subtree to M, and delete w. We then perform an easily implemented DecreaseKey on M to reflect the fact that the bin is less empty.

10.10

Next fit: 12 bins (.42, .25, .27), (.07, .72), (.86, .09), (.44, .50), (.68), (.73), (.31), (.78, .17), (.79), (.37), (.73, .23), (.30).
First fit: 10 bins (.42, .25, .27), (.07, .72, .09), (.86), (.44, .50), (.68, .31), (.73, .17), (.78), (.79), (.37, .23, .30), (.73).
Best fit: 10 bins (.42, .25, .27), (.07, .72, .09), (.86), (.44, .50), (.68, .31), (.73, .23), (.78, .17), (.79), (.37, .30), (.73).
First fit decreasing: 10 bins (.86, .09), (.79, .17), (.78, .07), (.73, .27), (.73, .25), (.72, .23), (.68, .31), (.50, .44), (.42, .37), (.30).
Best fit decreasing: 10 bins (.86, .09), (.79, .17), (.78), (.73, .27), (.73, .25), (.72, .23), (.68, .31), (.50, .44), (.42, .37, .07), (.30).
Note that the use of 10 bins is optimal.

10.12

We prove the second case, leaving the first and third (which give the same results as Theorem 10.6) to the reader. Observe that

log^p N = log^p (b^m) = m^p log^p b

Working this through, Equation (10.9) becomes

T(N) = T(b^m) = a^m Σ_{i=0}^{m} (b^k / a)^i i^p log^p b

If a = b^k, then

T(N) = a^m log^p b Σ_{i=0}^{m} i^p = O(a^m m^{p+1} log^p b)

Since m = log N / log b and a^m = N^k, and b is a constant, we obtain T(N) = O(N^k log^{p+1} N).

10.13

The easiest way to prove this is by an induction argument.


10.14

Divide the unit square into N-1 square grids, each with side 1/√(N-1). Since there are N points, some grid must contain two points. Thus the shortest distance is conservatively given by at most √(2/(N-1)).

10.15

The results of the previous exercise imply that the width of the strip is O(1/√N). Because the strip covers only O(1/√N) of the area of the square, we expect a similar fraction of the points to fall in it. Thus only O(N/√N) = O(√N) points are expected in the strip.

10.17

The recurrence works out to
T(N) = T(2N/3) + T(N/3) + O(N)
This is not linear, because the sum of the fractions is not less than one. The running time is O(N log N).

10.18

The recurrence for median-of-median-of-seven partitioning is
T(N) = T(5N/7) + T(N/7) + O(N)
If all we are concerned about is linearity, then median-of-median-of-seven can be used.

10.20

When computing the median-of-median-of-five, 30% of the elements are known to be smaller than the pivot, and 30% are known to be larger. Thus these elements do not need to be involved in the partitioning phase. (Extra work would need to be done to implement this, but since the whole algorithm isn’t practical anyway, we can ignore any extra work that doesn’t involve element comparisons.) The original paper [9] describes the exact constants in the worst-case bound, with and without this extra effort.

10.21

We derive the values of s and δ, following the style in the original paper [17]. Let R_{t,X} be the rank of element t in some sample X. If a sample S' of elements is chosen randomly from S, and |S'| = s, |S| = N, then we've already seen that

E(R_{t,S}) = (N+1) R_{t,S'} / (s+1)

where E means expected value. For instance, if t is the third largest in a sample of 5 elements, then in a group of 19 elements it is expected to be the tenth largest. We can also calculate the variance:

V(R_{t,S}) = sqrt( R_{t,S'}(s - R_{t,S'} + 1)(N+1)(N-s) / ((s+1)^2 (s+2)) ) = O(N/√s)

We choose v_1 and v_2 so that

E(R_{v1,S}) + 2d V(R_{v1,S}) ≈ k ≈ E(R_{v2,S}) - 2d V(R_{v2,S})

where d indicates how many variances we allow. (The larger d is, the less likely it is that the element we are looking for will not be in S'.) The probability that k is not between v_1 and v_2 is

∫_d^∞ erf(x) dx = O(e^{-d^2} / d)

If d = log^{1/2} N, then this probability is o(1/N), specifically O(1/(N log N)). This means that the expected work in this case is O(log^{-1} N) because O(N) work is performed with very small probability.


These mean and variance equations imply

R_{v1,S'} ≥ k(s+1)/(N+1) - d√s   and   R_{v2,S'} ≤ k(s+1)/(N+1) + d√s

This gives Equation (A):

δ = d√s = √s log^{1/2} N    (A)

If we first pivot around v_2, the cost is N comparisons. If we now partition the elements in S that are less than v_2 around v_1, the cost is R_{v2,S}, which has expected value k + δ(N+1)/(s+1). Thus the total cost of partitioning is N + k + δ(N+1)/(s+1). The cost of the selections to find v_1 and v_2 in the sample S' is O(s). Thus the total expected number of comparisons is

N + k + O(s) + O(Nδ/s)

The low-order term is minimized when

s = Nδ/s    (B)

Combining Equations (A) and (B), we see that

s^2 = Nδ = √s N log^{1/2} N    (C)
s^{3/2} = N log^{1/2} N    (D)
s = N^{2/3} log^{1/3} N    (E)
δ = N^{1/3} log^{2/3} N    (F)

10.22

First, we calculate 12*43. In this case, XL = 1, XR = 2, YL = 4, YR = 3, D1 = −1, D2 = −1, XL·YL = 4, XR·YR = 6, D1·D2 = 1, D3 = 11, and the result is 516. Next, we calculate 34*21. In this case, XL = 3, XR = 4, YL = 2, YR = 1, D1 = −1, D2 = −1, XL·YL = 6, XR·YR = 4, D1·D2 = 1, D3 = 11, and the result is 714.

Third, we calculate 22*22. Here, XL = 2, XR = 2, YL = 2, YR = 2, D1 = 0, D2 = 0, XL·YL = 4, XR·YR = 4, D1·D2 = 0, D3 = 8, and the result is 484. Finally, we calculate 1234*4321. XL = 12, XR = 34, YL = 43, YR = 21, D1 = −22, D2 = −22. By the previous calculations, XL·YL = 516, XR·YR = 714, and D1·D2 = 484. Thus D3 = 1714, and the result is 714 + 171400 + 5160000 = 5332114.
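In the pseudo-C style of this manual, here is a sketch of the divide and conquer multiplication these calculations trace; the function name and the use of long arithmetic are ours. The identity used is D3 = D1·D2 + XL·YL + XR·YR = XL·YR + XR·YL. (In C, (X/Power)*Power + X%Power == X holds even for negative X, so the recursive calls on D1 and D2 are safe.)

long
Multiply( long X, long Y, int Digits )
{
    long Power, XL, XR, YL, YR, D1, D2, D3, XLYL, XRYR;
    int  i;

    if( Digits == 1 )                    /* One-digit numbers: multiply directly */
        return X * Y;

    Power = 1;                           /* Power = 10^(Digits/2) */
    for( i = 0; i < Digits / 2; i++ )
        Power *= 10;

    XL = X / Power; XR = X % Power;      /* Split X and Y into halves */
    YL = Y / Power; YR = Y % Power;
    D1 = XL - XR;
    D2 = YR - YL;

    XLYL = Multiply( XL, YL, Digits / 2 );
    XRYR = Multiply( XR, YR, Digits / 2 );
    D3 = Multiply( D1, D2, Digits / 2 ) + XLYL + XRYR;   /* = XL*YR + XR*YL */

    return XLYL * Power * Power + D3 * Power + XRYR;
}

For example, Multiply( 1234, 4321, 4 ) performs exactly the calculations above and returns 5332114.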

10.23

The multiplication evaluates to (ac − bd) + (bc + ad)i. Compute ac, bd, and (a − b)(d − c) + ac + bd; this third quantity equals bc + ad, so three multiplications suffice.

10.24

The algebra is easy to verify. The problem with this method is that if X and Y are positive N-bit numbers, their sum might be an (N+1)-bit number. This causes complications.

10.26

Matrix multiplication is not commutative, so the algorithm could not be used recursively on matrices if commutativity were used.


10.27

If the algorithm doesn’t use commutativity (which turns out to be true), then a divide and conquer algorithm gives a running time of O(N^(log_70 143640)) = O(N^2.795).

10.28

1150 scalar multiplications are used if the order of evaluation is ( ( A1 A2 ) ( ( ( A3 A4 ) A5 ) A6 ) ).

10.29

(a) Let the chain be a 1x1 matrix, a 1xA matrix, and an AxB matrix. Multiplying the first two matrices first makes the cost of the chain A + AB. The alternative order gives a cost of AB + B, so if A > B, then the greedy algorithm fails. Thus, a counterexample is multiplying a 1x1 matrix by a 1x3 matrix by a 3x2 matrix. (b, c) A counterexample is multiplying a 1x1 matrix by a 1x2 matrix by a 2x3 matrix.

10.31

The optimal binary search tree is the same one that would be obtained by a greedy strategy: I is at the root and has children and and it; a and or are leaves; the total cost is 2.14.

10.33

This theorem is from F. Yao’s paper, reference [58].

10.34

A recursive procedure is clearly called for: if there is an intermediate vertex StopOver on the path from S to T, then we want to print the path from S to StopOver and then the path from StopOver to T. We don't want to print StopOver twice, however, so the procedure prints neither the first nor the last vertex on the path and leaves those to the driver.

/* Print the path between S and T, except do not print */
/* the first or last vertex.  Print a trailing " to " only. */

void
PrintPath1( TwoDArray Path, int S, int T )
{
    int StopOver = Path[ S ][ T ];

    if( S != T && StopOver != 0 )
    {
        PrintPath1( Path, S, StopOver );
        printf( "%d to ", StopOver );
        PrintPath1( Path, StopOver, T );
    }
}

/* Assume the existence of a Path of length at least 1 */

void
PrintPath( TwoDArray Path, int S, int T )
{
    printf( "%d to ", S );
    PrintPath1( Path, S, T );
    printf( "%d", T );
    NewLine( );
}


10.35

Many random number generators are poor. The default UNIX random number generator rand uses a modulus of the form 2^b, as does the VAX/VMS random number generator. UNIX does, however, provide a better generator in the form of random. The Turbo random number generators are likewise deficient. The paper by Park and Miller [44] discusses the random number generators on many machines.

10.38

If the modulus is a power of two, then the least significant bit of the "random" number oscillates. Thus Flip will always return heads and tails alternately, and the level chosen for a skip list insertion will always be one or two. Consequently, the performance of the skip list will be Θ(N) per operation.

10.39

(a) 2^5 ≡ 32 mod 341 and 2^10 ≡ 1 mod 341. Since 32² ≡ 1 mod 341 but 32 ≢ ±1 mod 341, this proves that 341 is not prime. We can also immediately conclude that 2^340 ≡ 1 mod 341 by raising the last equation to the 34th power. The exponentiation would continue as follows: 2^20 ≡ 1, 2^21 ≡ 2, 2^42 ≡ 4, 2^84 ≡ 16, 2^85 ≡ 32, 2^170 ≡ 1, and 2^340 ≡ 1 (all mod 341).

(b) If A = 2, then although 2^560 ≡ 1 mod 561, 2^280 ≡ 1 mod 561 proves that 561 is not prime. If A = 3, then 3^560 ≡ 375 mod 561, which proves that 561 is not prime. A = 4 obviously doesn’t fool the algorithm, since 4^140 ≡ 1 mod 561. A = 5 fools the algorithm: 5^1 ≡ 5, 5^2 ≡ 25, 5^4 ≡ 64, 5^8 ≡ 169, 5^16 ≡ 511, 5^17 ≡ 311, 5^34 ≡ 229, 5^35 ≡ 23, 5^70 ≡ 529, 5^140 ≡ 463, 5^280 ≡ 67, and 5^560 ≡ 1 (all mod 561).
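For reference, here is a pseudo-C sketch (ours, not the book's code) of the repeated-squaring exponentiation that generates the chains above; the simple Fermat test computes PowMod(A, N−1, N) and checks whether the result is 1. Overflow is ignored, so N must be small enough that N·N fits in a long.

long
PowMod( long A, long P, long N )
{
    long X;

    if( P == 0 )
        return 1;
    X = PowMod( A, P / 2, N );
    X = X * X % N;             /* Square: even exponents appear here */
    if( P % 2 == 1 )
        X = X * A % N;         /* Odd exponent: one extra multiply by A */
    return X;
}

For example, PowMod( 2, 340, 341 ) returns 1, and its recursion computes the exponent chain 1, 2, 4, 5, 10, 20, 21, 42, 84, 85, 170, 340, which includes the values listed in part (a).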

10.41

The two point sets are {0, 4, 6, 9, 16, 17} and {0, 5, 7, 13, 16, 17}.

10.42

To find all point sets, we backtrack even if Found == true, and print out information when line 2 is executed. In the case where there are some duplicate distances, it is possible that several symmetries will be printed.

10.43

[Figure: game tree with alternating Max and Min levels showing the computed minimax values.]

10.44

[Figure: game tree with alternating Max and Min levels showing the values computed with alpha-beta pruning.]

Actually, it implements both: Alpha is lowered for subsequent recursive calls on line 9 when Value is changed at line 12. Beta is used to terminate the while loop when a line of play leads to a position so good that it is obvious the human player (who has already seen a more favorable line for himself or herself before making this call) won't play into it. To implement the complementary routine, switch the roles of the human and computer. Lines that change are 3 and 4 (obvious changes), 5 (replace Alpha with Beta), 6 (replace *Value


10.46

We place circles in order. Suppose we are trying to place circle j, of radius r_j. If some circle i of radius r_i is centered at x_i, then circle j is tangent to circle i if it is placed at x_i + 2√(r_i r_j). To see this, notice that the line connecting the centers has length r_i + r_j, and the difference in y-coordinates of the centers is |r_j − r_i|; the difference in x-coordinates then follows from the Pythagorean theorem.

To place circle j, we compute where it would be placed if it were tangent to each of the first j−1 circles, and select the maximum such value. If this value is less than r_j, then we place circle j at x_j = r_j instead, so that it is tangent to the y-axis. The running time is O(N²).

10.47

Construct a minimum spanning tree T of G, pick any vertex in the graph, and then find a path in T that goes through every edge exactly once in each direction. (This is done by a depth-first search; see Exercise 9.31.) This path has twice the cost of the minimum spanning tree, but it is not a simple cycle. Make it a simple cycle, without increasing the total cost, by bypassing a vertex when it is seen a second time (except that if the start vertex is seen, the cycle is closed) and going to the next unseen vertex on the path, possibly bypassing several vertices in the process. The cost of this direct route cannot be larger than that of the original because of the triangle inequality.

If there were a tour of cost K, then removing one edge of the tour would leave a spanning tree of cost less than K (assuming that edge weights are positive); since the minimum spanning tree costs no more than any spanning tree, its cost is a lower bound on the cost of the optimal traveling salesman tour. This implies that the algorithm is within a factor of 2 of optimal.

10.48

If there are only two players, then the problem is easy, so assume k > 1 (where N = 2^k is the number of players). Number the players 1 through N, and divide them into two groups: 1 through N/2, and N/2 + 1 through N. On the i-th day, for 1 ≤ i ≤ N/2, player p in the second group plays player ((p + i) mod (N/2)) + 1 in the first group. Thus after N/2 days, everyone in group 1 has played everyone in group 2. In the last N/2 − 1 days, recursively conduct round-robin tournaments for the two groups of players.

10.49

Divide the players into two groups of size ⌈N/2⌉ and ⌊N/2⌋, respectively, and recursively arrange the players in each group. Then merge the two lists (declare that p_x > p_y if x has defeated y, and p_y > p_x if y has defeated x; exactly one is possible) in linear time, as is done in mergesort.
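A pseudo-C sketch of the merge step; the routine Beats( x, y ), which reports whether player x defeated player y, and the array names are ours:

void
MergeRanking( const int A[ ], int Na, const int B[ ], int Nb, int C[ ] )
{
    int i = 0, j = 0, k = 0;

    while( i < Na && j < Nb )               /* Usual mergesort merge, but */
        if( Beats( A[ i ], B[ j ] ) )       /* the comparison is a match */
            C[ k++ ] = A[ i++ ];
        else
            C[ k++ ] = B[ j++ ];
    while( i < Na )                         /* Copy any leftovers */
        C[ k++ ] = A[ i++ ];
    while( j < Nb )
        C[ k++ ] = B[ j++ ];
}

Each output element defeats the next one: consecutive elements from the same list do so by induction, and when the merge switches lists it is precisely because the comparison just performed says so.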

10.50, 10.51

Divide and conquer algorithms (among others) can be used for both problems, but neither is trivial to implement. See the computational geometry references for more information.


10.52

(a) Use dynamic programming. Let S_k be the best setting of words w_k, w_{k+1}, ..., w_N; let U_k be the ugliness of this setting; and let l_k be (a pointer to) the word that starts the second line in this setting. To compute S_{k−1}, try putting w_{k−1}, w_k, ..., w_M all on the first line, for each M ≥ k−1 such that w_{k−1} + w_k + ... + w_M < L, where L is the line length. Compute the ugliness of each of these possibilities by, for each M, computing the ugliness of setting the first line and adding U_{M+1}. Let M' be the value of M that yields the minimum ugliness. Then U_{k−1} equals this minimum, and l_{k−1} = M' + 1. Compute the values of U and l starting with the last word and working back to the first. The minimum ugliness of the paragraph is U_1; the actual setting can be found by starting at l_1 and following the pointers in l, since this yields the first word on each line. (A sketch of this computation appears after part (c).)

(b) The running time is quadratic in the case where the number of words that can fit on a line is consistently Θ(N). The space is linear, to keep the arrays U and l. If the line length is restricted to some constant, then the running time is linear, because only O(1) words can go on a line.

(c) Put as many words on each line as will fit. This clearly minimizes the number of lines, and hence the ugliness, as can be shown by a simple calculation.

10.53

An obvious O(N²) solution is to construct a graph with vertices 1, 2, ..., N and place an edge (v,w) in G iff v < w and a_v < a_w. This graph must be acyclic, so its longest path can be found in time linear in the number of edges; the whole computation thus takes O(N²) time. Let BEST(k) be the increasing subsequence of exactly k elements that has the minimum last element, and let t be the length of the maximum increasing subsequence. We show how to update BEST(k) as we scan the input array. Let LAST(k) be the last element in BEST(k). It is easy to show that if i

10.54

Let LCS(A, M, B, N) be the longest common subsequence of A_1, A_2, ..., A_M and B_1, B_2, ..., B_N. If either M or N is zero, then the longest common subsequence is the empty string. If A_M = B_N, then LCS(A, M, B, N) = LCS(A, M−1, B, N−1) followed by A_M. Otherwise, LCS(A, M, B, N) is the longer of LCS(A, M, B, N−1) and LCS(A, M−1, B, N). This yields a standard dynamic programming solution.
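As a concrete illustration, here is a pseudo-C sketch of the table-filling form of this recurrence; it computes only the length of the longest common subsequence, and the array names and the bound MaxN are ours:

int
LCSLength( const char A[ ], int M, const char B[ ], int N,
           int Len[ ][ MaxN + 1 ] )
{
    int i, j;

    for( i = 0; i <= M; i++ )        /* An empty string gives an empty LCS */
        Len[ i ][ 0 ] = 0;
    for( j = 0; j <= N; j++ )
        Len[ 0 ][ j ] = 0;

    for( i = 1; i <= M; i++ )
        for( j = 1; j <= N; j++ )
            if( A[ i ] == B[ j ] )   /* Matching symbols extend the LCS */
                Len[ i ][ j ] = Len[ i - 1 ][ j - 1 ] + 1;
            else                     /* Otherwise take the better subproblem */
                Len[ i ][ j ] = Len[ i - 1 ][ j ] > Len[ i ][ j - 1 ] ?
                                Len[ i - 1 ][ j ] : Len[ i ][ j - 1 ];

    return Len[ M ][ N ];
}

Recovering the subsequence itself is the usual walk backward through the table.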

10.56

(a) A dynamic programming solution resolves part (a). Let FITS(i, s) be 1 if some subset of the first i items sums to exactly s; FITS(i, 0) is always 1. Then FITS(x, t) is 1 if either FITS(x − 1, t − a_x) or FITS(x − 1, t) is 1, and 0 otherwise.

(b) This doesn't show that P = NP, because the size of the problem is a function of N and log K. Only log K bits are needed to represent K; thus an O(NK) solution is exponential in the input size.
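A pseudo-C sketch of the table from part (a); the array names and the bound MaxSum are ours:

void
ComputeFits( const int a[ ], int N, int K, int Fits[ ][ MaxSum + 1 ] )
{
    int i, s;

    for( i = 0; i <= N; i++ )
        Fits[ i ][ 0 ] = 1;           /* The empty subset sums to zero */
    for( s = 1; s <= K; s++ )
        Fits[ 0 ][ s ] = 0;           /* No items give a positive sum */

    for( i = 1; i <= N; i++ )
        for( s = 1; s <= K; s++ )     /* Skip item i, or use it if it fits */
            Fits[ i ][ s ] = Fits[ i - 1 ][ s ] ||
                             ( s >= a[ i ] && Fits[ i - 1 ][ s - a[ i ] ] );
}

Filling the table takes O(NK) time, which is exactly the pseudo-polynomial bound discussed in part (b).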


10.57

(a) Let COIN(x) be the minimum number of coins required to give x cents in change, with COIN(0) = 0. Then COIN(x) is one more than the minimum value of COIN(x − c_i) over all coin values c_i, giving a dynamic programming solution.

(b) Let WAYS(x, i) be the number of ways to make x cents in change without using the first i coin types. If there are N types of coins, then WAYS(x, N) = 0 if x ≠ 0, and WAYS(0, i) = 1. Then WAYS(x, i − 1) is equal to the sum of WAYS(x − pc_i, i), for integer values of p no larger than x/c_i (but including 0).
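A pseudo-C sketch of part (a); the constant Infinity and the assumption that some coin has value 1 (so that every amount is attainable) are ours:

void
ComputeCoin( const int c[ ], int NumCoins, int MaxChange, int Coin[ ] )
{
    int x, i;

    Coin[ 0 ] = 0;
    for( x = 1; x <= MaxChange; x++ )
    {
        Coin[ x ] = Infinity;
        for( i = 1; i <= NumCoins; i++ )   /* Try one coin of each value */
            if( c[ i ] <= x && Coin[ x - c[ i ] ] + 1 < Coin[ x ] )
                Coin[ x ] = Coin[ x - c[ i ] ] + 1;
    }
}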

10.58

(a) Place eight queens randomly on the board, making sure that no two are on the same row or column. This is done by generating a random permutation of 1..8. There are only 8! = 40320 such permutations, and 92 of them give a solution.

10.59

(a) Since the knight leaves every square once, it makes B² moves. If the squares are alternately colored black and white like a checkerboard, then a knight always moves to a differently colored square. If B is odd, then so is B², which means that at the end of the tour the knight will be on a differently colored square than the one it started on. Thus the knight cannot be back at the original square.

10.60

(a) If the graph has a cycle, then the recursion does not always make progress toward a base case, and thus an infinite loop will result. (b) If the graph is acyclic, the recursive call makes progress, and the algorithm terminates. This could be proved formally by induction. (c) This algorithm is exponential.


Chapter 11: Amortized Analysis

11.1

When the number of trees after the insertions is more than the number before.

11.2

Although each insertion takes roughly log N actual time and each DeleteMin takes roughly 2 log N actual time, our accounting system charges these particular operations as 2 for the insertion and 3 log N − 2 for the DeleteMin. The total time is still the same; this is an accounting gimmick. If the numbers of insertions and DeleteMins are roughly equivalent, then it really is just a gimmick and not very meaningful; the bound has more significance if, for instance, there are N insertions and O(N / log N) DeleteMins (in which case the total time is linear).

11.3

Insert the sequence N, N+1, N−1, N+2, N−2, N+3, ..., 1, 2N into an initially empty skew heap. The right path then has N nodes, so any operation could take Ω(N) time.

11.5

We implement DecreaseKey(X, H) as follows: if lowering the value of X creates a heap order violation, then cut X from its parent. This creates a new skew heap H1 with the new value of X as its root, and also makes the old skew heap H smaller. The cut might also increase the potential of H, but only by at most log N. We now merge H and H1. The total amortized time of the Merge is O(log N), so the total amortized time of the DecreaseKey operation is O(log N).

11.8

For the zig-zig case, the actual cost is 2, and the potential change is Rf(X) + Rf(P) + Rf(G) − Ri(X) − Ri(P) − Ri(G). This gives an amortized time bound of

ATzig-zig = 2 + Rf(X) + Rf(P) + Rf(G) − Ri(X) − Ri(P) − Ri(G)

Since Rf(X) = Ri(G), this reduces to

ATzig-zig = 2 + Rf(P) + Rf(G) − Ri(X) − Ri(P)

Also, Rf(X) > Rf(P) and Ri(X) < Ri(P), so

ATzig-zig < 2 + Rf(X) + Rf(G) − 2Ri(X)

Since Si(X) + Sf(G) < Sf(X), it follows that Ri(X) + Rf(G) < 2Rf(X) − 2. Thus

ATzig-zig < 3Rf(X) − 3Ri(X)

11.9

(a) Choose W(i) = 1/N for each item. Then for any access of node X, Rf(X) = 0 and Ri(X) ≥ −log N, so the amortized access time for each item is at most 3 log N + 1, and the net potential drop over the sequence is at most N log N, giving a bound of O(M log N + M + N log N), as claimed.

(b) Assign a weight of q_i/M to item i. Then Rf(X) = 0 and Ri(X) ≥ log(q_i/M), so the amortized cost of accessing item i is at most 3 log(M/q_i) + 1, and the theorem follows immediately.

11.10

(a) To merge two splay trees T1 and T2, we access each node in the smaller tree and insert it into the larger tree. Each time a node is accessed, it joins a tree that is at least twice as large; thus any one node can be inserted at most log N times. This tells us that in any sequence of N−1 merges, there are at most N log N insertions, giving a time bound of O(N log² N). This presumes that we keep track of the tree sizes; philosophically, this is ugly, since it defeats the purpose of self-adjustment.

(b) Port and Moffet [6] suggest the following algorithm: if T2 is the smaller tree, insert its root into T1, and then recursively merge the left subtrees of T1 and T2 and the right subtrees of T1 and T2. This algorithm is not analyzed; a variant in which the median of T2 is splayed to the root first is analyzed, with a claim of O(N log N) for the sequence of merges.

11.11

The potential function is c times the number of insertions since the last rehashing step, where c is a constant. For an insertion that doesn't require rehashing, the actual time is 1, and the potential increases by c, for an amortized cost of 1 + c.

If an insertion causes the table to be rehashed from size S to 2S, then the actual cost is 1 + dS, where dS represents the cost of initializing the new table and copying the old table back. A table that is rehashed when it reaches size S was last rehashed at size S/2, so S/2 insertions have taken place since then, and the initial potential was cS/2. The new potential is 0, so the potential change is −cS/2, giving an amortized bound of (d − c/2)S + 1. We choose c = 2d and obtain an O(1) amortized bound in both cases.
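A sketch of the insertion this analysis describes; the structure and names are ours, and the actual hashing is elided (we simply append), since only the resizing cost matters for the potential argument. The copy loop is the 1 + dS step, which the analysis charges against the S/2 cheap insertions since the last rehash:

#include <stdlib.h>

struct Table
{
    int *Cells;          /* The table itself */
    int  Size;           /* Current capacity S */
    int  NumEntries;     /* Number of items stored */
};

void
Insert( int X, struct Table *T )
{
    if( T->NumEntries == T->Size )          /* Full: rehash from S to 2S */
    {
        int *OldCells = T->Cells;
        int  i;

        T->Cells = malloc( 2 * T->Size * sizeof( int ) );
        for( i = 0; i < T->Size; i++ )      /* Copy the old table back */
            T->Cells[ i ] = OldCells[ i ];
        T->Size *= 2;
        free( OldCells );
    }
    T->Cells[ T->NumEntries++ ] = X;        /* The cheap O(1) insertion */
}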

11.12

We show that the amortized number of node splits is 1 per insertion. The potential function is the number of three-child nodes in T. If the actual number of node splits for an insertion is s, then the change in the potential function is at most 1 − s, because each split converts a three-child node into two two-child nodes, while the parent of the last node split gains a third child (unless it is the root). Thus an insertion costs 1 node split, amortized. An N-node tree has N units of potential that might be converted to actual time, so the total cost is O(M + N). (If we start from an initially empty tree, then the bound is O(M).)

11.13

(a) This problem is similar to Exercise 3.22. The first four operations are easy to implement by placing two stacks, SL and SR, next to each other (with their bottoms touching). We can implement the fifth operation by using two more stacks, ML and MR, which hold minimums.

If neither SL nor SR ever empties, then the operations can be implemented as follows:

Push(X,D): push X onto SL; if X is smaller than or equal to the top of ML, push X onto ML as well.

Inject(X,D): same operation as Push, except use SR and MR.

Pop(D): pop SL; if the popped item is equal to the top of ML, then pop ML as well.

Eject(D): same operation as Pop, except use SR and MR.

FindMin(D): return the minimum of the tops of ML and MR.

These operations don't work if either SL or SR is empty. If a Pop or Eject is attempted on an empty stack, then we clear ML and MR, redistribute the elements so that half are in SL and the rest in SR, and rebuild ML and MR to reflect what their state would be. We can then perform the Pop or Eject in the normal fashion. Fig. 11.1 shows a transformation.

[Fig. 11.1: an example of the reorganization; the eight elements 3, 1, 4, 6, 5, 9, 2, 6 are redistributed between SL and SR, and ML and MR are rebuilt accordingly.]

Define the potential function to be the absolute value of the number of elements in SL minus the number of elements in SR. Any operation that doesn't empty SL or SR can increase the potential by only 1; since the actual time for these operations is constant, so is the amortized time. To complete the proof, we show that the cost of a reorganization is O(1) amortized time. Without loss of generality, if SR is empty, then the actual cost of the reorganization is |SL| units. The potential before the reorganization is |SL|; afterward, it is at most 1. Thus the potential change is 1 − |SL|, and the amortized bound follows.
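A pseudo-C sketch of part (a) in the common case, where neither stack empties; the array-based representation and the names are ours, and the redistribution step is omitted. Inject and Eject are the mirror images of Push and Pop, using SR and MR:

#define MaxSize 1000

struct Deque
{
    int SL[ MaxSize ], ML[ MaxSize ];   /* Left stack and its minimums */
    int SR[ MaxSize ], MR[ MaxSize ];   /* Right stack and its minimums */
    int TopSL, TopML, TopSR, TopMR;     /* Stack tops; -1 means empty */
};

void
Push( int X, struct Deque *D )
{
    D->SL[ ++D->TopSL ] = X;
    if( D->TopML < 0 || X <= D->ML[ D->TopML ] )
        D->ML[ ++D->TopML ] = X;        /* X is a new left-side minimum */
}

int
Pop( struct Deque *D )                  /* Assumes SL is not empty */
{
    int X = D->SL[ D->TopSL-- ];

    if( X == D->ML[ D->TopML ] )        /* Leaving this minimum's scope */
        D->TopML--;
    return X;
}

int
FindMin( struct Deque *D )              /* Assumes the deque is not empty */
{
    if( D->TopML < 0 )
        return D->MR[ D->TopMR ];
    if( D->TopMR < 0 )
        return D->ML[ D->TopML ];
    return D->ML[ D->TopML ] < D->MR[ D->TopMR ] ?
           D->ML[ D->TopML ] : D->MR[ D->TopMR ];
}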


Chapter 12: Advanced Data Structures and Implementation

12.3

Incorporate an additional field for each node that indicates the size of its subtree. These fields are easy to update during a splay. This is difficult to do in a skip list.
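For instance, here is how such a size field can be maintained through a single rotation; the node declaration and routine are our sketch, not code from the text:

struct SplayNode
{
    struct SplayNode *Left, *Right;
    int Element;
    int Size;                        /* Number of nodes in this subtree */
};

static int
Size( struct SplayNode *P )          /* NULL-safe subtree size */
{
    return P != NULL ? P->Size : 0;
}

struct SplayNode *
RotateWithLeftChild( struct SplayNode *K2 )
{
    struct SplayNode *K1 = K2->Left;

    K2->Left = K1->Right;
    K1->Right = K2;

    K2->Size = Size( K2->Left ) + Size( K2->Right ) + 1;  /* Child first */
    K1->Size = Size( K1->Left ) + Size( K1->Right ) + 1;

    return K1;
}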

12.6

If there are B black nodes on the path from the root to each leaf, it is easy to show by induction that the tree has at least 2^B leaves. Consequently, the number of black nodes on a path is at most log N. Since there can't be two consecutive red nodes, the height is bounded by 2 log N.

12.7

Color nonroot nodes red if their height is even and their parent's height is odd, and black otherwise. Not all red-black trees are AVL trees (since the deepest red-black tree is deeper than the deepest AVL tree).

12.19

See H. N. Gabow, J. L. Bentley, and R. E. Tarjan, "Scaling and Related Techniques for Computational Geometry," Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing (1984), 135-143, or C. Levcopoulos and O. Petersson, "Heapsort Adapted for Presorted Files," Journal of Algorithms 14 (1993), 395-413.

12.29

Pointers are unnecessary; we can store everything in an array. This is discussed in reference [12]. The bounds become O(k log N) for insertion, O(k² log N) for deletion of a minimum, and O(k²N) for creation (an improvement over the bound in [12]).

12.35


Consider the pairing heap with 1 as the root and children 2, 3, ..., N. A DeleteMin removes 1, and the resulting pairing heap has 2 as the root with children 3, 4, ..., N; the cost of this operation is N units. A subsequent DeleteMin sequence on 2, 3, 4, ... will take total time Ω(N²).
