The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).


From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming



From the lesson

Week 3

Huffman codes; introduction to dynamic programming.

- Tim Roughgarden, Professor, Computer Science

So I know you're probably catching your breath from that last computation. So let's zoom out; let's make sure we don't lose the forest for the trees, and see that we're actually mostly there. We're just one lemma away from finishing the proof of the theorem, the proof of correctness of Huffman's algorithm.

So what do we have going for us? Well, first of all, we've got the inductive hypothesis. Remember, in any proof by induction you'd better rely very heavily on the inductive hypothesis. What does it say? It says that when Huffman's algorithm recurses on the smaller subproblem with the smaller alphabet sigma prime, it solves it optimally. It returns a tree which minimizes the average encoding length with respect to the alphabet sigma prime and the corresponding frequencies. So we're going to call that tree, the output of the recursive call, T-hat prime. So let me amplify the power of this inductive hypothesis by combining it with the computation we did on the last slide.

So what do we know so far? We know that for this smaller subproblem, which frankly we don't care about in its own right, but nevertheless, for this smaller subproblem with alphabet sigma prime, the recursive call solves it optimally. It returns to us this tree T-hat prime, and among all trees with leaves labeled according to sigma prime, this tree is as good as it gets; it minimizes the average encoding length.
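[For reference, the objective being minimized can be written symbolically, using $p_i$ for the frequency of symbol $i$ and $d_T(i)$ for the depth of the leaf labeled $i$ in tree $T$ (this notation is mine; the slides may use different symbols):

$$L(T) = \sum_{i \in \Sigma} p_i \cdot d_T(i).$$

Depth equals encoding length in a prefix-free code, so this is the average number of bits per symbol.]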

But what did we learn on the last slide? We learned that there's a one-to-one correspondence between feasible solutions: between trees for the smaller subproblem with the alphabet sigma prime, and feasible solutions to the original problem, the one we care about, that have a certain form, in which it just so happens that A and B are siblings, that they share a common parent. Moreover, this correspondence preserves the objective function value, the average encoding length, or at least it preserves it up to a constant, which is good enough for our purposes.

So the upshot is: in minimizing average encoding length over all feasible solutions for the smaller subproblem, the recursive call is actually doing more for us. It's actually minimizing the average encoding length for the original problem, with the original alphabet sigma, over a subset of the feasible solutions: the feasible solutions in which A and B are siblings. Now, does this do us any good?

Well, it depends; what was our original goal? Our goal was to get the smallest average encoding length possible over all feasible solutions in the world. And so what have we just figured out? We figured out, well, we're getting the best possible scenario amongst some solutions: those in which A and B happen to be siblings. So we're in trouble if there is no optimal solution in which A and B are siblings; then it doesn't do us any good to optimize only over these crappy solutions. On the other hand, if there is an optimal solution lying in this set X_AB, in which A and B are siblings, then we're golden. Then there's an optimal solution in this set, while the recursive call is finding the best solution in the set, so we're going to find an optimal solution. So that's really the big question.

Can we guarantee that it was safe to merge A and B in the very first iteration? Can we guarantee there is an optimal solution, a best possible tree, that also has that property, that also has A and B as siblings? So the key lemma, which I'll prove on the next slide, and with which we'll complete the proof of the correctness of Huffman's algorithm, is that indeed there is always an optimal solution in the set X_AB: an optimal solution in which A and B, the two lowest-frequency symbols, are indeed siblings.

So I'll give you a full-blown proof of this key lemma on the next slide. But let me first just orient you and, you know, develop your intuition about why you'd hope this key lemma would be true. The intuition is really the same as when we developed the greedy algorithm in the first place. Namely, which symbols do you want to incur the longest encoding lengths? Well, you want them to be the lowest-frequency symbols, if you want to minimize the average.

So if you look at any old tree, there's going to be, you know, some bottommost layer; there are going to be some deepest possible leaves. And you really hope that A and B, the lowest-frequency leaves, are going to be in that lowest level. Now, they might both be in the lowest level and yet not be siblings. But if you think about it, amongst symbols at the same level, they're all suffering exactly the same encoding length anyways.

So you can permute them in any way you want, and it's not going to affect your average encoding length. So once you put A and B at the bottommost level, you may as well put them as siblings, and it's not going to make any difference. So, the final punchline being: you can take any tree, say an optimal tree, and by swapping, moving the lowest-frequency elements A and B to the bottom to be common siblings, you can't make the tree worse; you can only make it better. And that's going to show that there is an optimal tree with the desired property, where A and B are in fact siblings. All right, so that's a reasonably accurate intuition, but it's not a proof. Here is the proof. So we're going to use an exchange argument. We're going to take an arbitrary optimal tree, a tree minimizing the average encoding length; let's call it T star. There may be many such trees; just pick any one, I don't care which. And the plan is to exhibit an even better tree, or at least an equally good tree, which satisfies the property that we want, where A and B show up as siblings.

So let's figure out where we're going to swap the symbols A and B to. So we've got our tree T star, and let's look at the deepest level of T star. And let's find, you know, any pair of siblings at this deepest level; call them X and Y. So in this green tree on the right, we have two choices of pairs of siblings at the lowest level. It doesn't matter which one we pick, so let's just say I pick the leftmost pair; call them X and Y. So A and B have to be somewhere in this tree. Let me just pick a couple of the leaves to be examples for where A and B might actually be in T star.

And so now we're going to execute the exchange. We're going to obtain a new tree T hat from T star merely by swapping the labels of the leaves X and A, and similarly swapping the labels of the leaves Y and B. So after this exchange we get a new valid tree; once again, you know, the leaves are labeled with the various symbols of sigma. And moreover, by our choice of X and Y, we've now, via this exchange, forced the tree to conform to the property: we've forced A and B to be siblings. They take the original positions of X and Y, which were chosen to be siblings.

So to finish the proof, we're going to show that the average encoding length of T hat can only be less than that of T star. Since T star was optimal, that means that T hat has to be optimal as well. Since it also satisfies the desired property that A and B are siblings, that's going to complete the proof of the key lemma, and therefore the proof of the correctness of Huffman's algorithm.

The reason, intuitively, is that when we do the swap, A and B inherit the previous depths of X and Y, and conversely, X and Y inherit the old depths of A and B. So A and B can only get worse, and X and Y can only get better, and X and Y are the ones with the higher frequencies. So the overall cost-benefit analysis is a net win, and the tree only improves. But to make that precise, let me just show you the exact computation. So let's look at the difference between the average encoding length given by the tree T star and that after the swap, given by the tree T hat. So, analogously to our computation on the previous slide, most of the stuff cancels, because the trees T star and T hat frankly just aren't all that different.

The only things that are different are the positions of the symbols A, B, X, and Y. So we're going to have four positive terms contributed by T star for these four symbols, and we're going to have four negative terms contributed by the tree T hat, again for these exact same four symbols. So let me just show you what happens after the dust settles, after you take away all the cancellation. You can write the eight terms that remain in a slightly sneaky but, you know, clarifying way, as follows. So we're going to have one product representing the four terms involving the symbols A and X, and then an analogous term for the four terms involving B and Y.

So in the first term, we look at the product of two differences. On the one hand, we look at the difference between the frequencies of X and A; remember, those are the two things that got swapped with each other. And then we also look at the difference between their depths in the original tree T star. And then, because B and Y got swapped, we have an entirely analogous term involving them.

The reason I rewrote this difference in a slightly sneaky way is that it now becomes completely obvious what role the various hypotheses play. Specifically, why is it important that A and B have the lowest possible frequencies? We actually haven't used that yet in our proof, but it's fundamental to our greedy algorithm. Secondly, why did we choose X and Y to be in the deepest level of the original tree T star? Well.

Let's start with the first pair of products, the differences between frequencies: the frequency of X minus the frequency of A, and similarly between Y and B. Well, because A and B have the smallest frequencies of them all, these differences have to be non-negative; the frequencies of X and Y can only be larger. The second pair of differences also must be non-negative, and this is simply because we chose X and Y to be at the deepest level. Their depths have to be at least as large as those of A and B in the original tree T star.

Since all four differences are non-negative, that means the sum of their products is also non-negative. That means the average encoding length of T star is at least as big as that of T hat. So that's why T hat has to also be optimal; it in addition satisfies the desired property that A and B are siblings. And so that means our recursive call, even though we've committed to merging A and B, we've committed to returning a tree in which they're siblings, that was without loss; that was a safe merge. That's why Huffman's algorithm, at the end of the day, can return an optimal tree, one that minimizes the average encoding length over all possible binary prefix-free codes.

So let me just wrap up with a few notes about how to implement Huffman's algorithm and what kind of running time you should be able to achieve. So if you just naively implement the pseudocode I showed you, you're going to get a not particularly great quadratic-time algorithm that, moreover, uses a lot of recursion. So why is it going to be quadratic time?

Well, you have one recursive call each time, and you always reduce the number of symbols by one. So the total number of recursive calls is going to be linear in the number of symbols in your alphabet, n. And how much work do you do in each recursive call, not counting the work done by later calls? Well, you have to search to figure out the lowest-frequency symbols A and B. So that's basically a couple of minimum computations. So that's going to take linear time for each of the linearly many recursive calls.

Now, the fact that we keep doing these repeated minimum computations in each recursive call, I hope, triggers a light bulb: maybe a data structure would be helpful. Which data structure's raison d'être is to speed up repeated minimum computations? Well, as we've seen, for example, in Dijkstra's and Prim's algorithms, that would be the heap. So of course, to use a heap, you have to say what the keys are. But since we always want to know what the symbols with the smallest frequencies are, the obvious thing to do is to use frequencies as keys. So the only question then is what happens when you do a merge. Well, the obvious thing is to just add the frequencies of those two symbols and reinsert into the heap the new meta-symbol, with its key equal to what? The sum of the keys of the two elements that you just extracted from the heap. So not only is that going to work great, it's going to be n log n time, and it's very easy to implement in an iterative manner.
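Here's one way the heap-based implementation described above might look in Python (a sketch, not the lecture's pseudocode; the function and variable names are my own). It returns each symbol's depth in the final tree, which equals its encoding length:

```python
import heapq

def huffman_code_lengths(freqs):
    """Depth of each symbol's leaf in an optimal prefix-free code,
    built with a frequency-keyed heap."""
    # Heap entries are (frequency, tie-breaker, node): a leaf node is just
    # the symbol; an internal node is a (left, right) pair of child nodes.
    heap = [(f, i, s) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)  # lowest-frequency (meta-)symbol
        f2, _, b = heapq.heappop(heap)  # second-lowest
        # Merge: the new meta-symbol's key is the sum of the two old keys.
        heapq.heappush(heap, (f1 + f2, next_id, (a, b)))
        next_id += 1
    # Walk the finished tree once to read off each leaf's depth.
    depths, stack = {}, [(heap[0][2], 0)]
    while stack:
        node, d = stack.pop()
        if isinstance(node, tuple):
            stack.extend((child, d + 1) for child in node)
        else:
            depths[node] = d
    return depths
```

For example, with frequencies `{'a': 60, 'b': 25, 'c': 10, 'd': 5}`, the symbol `a` ends up at depth 1 and `c` and `d` at depth 3. The integer tie-breaker keeps the heap from ever comparing two tree nodes when frequencies are equal.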

So that's starting to be a really blazing-fast implementation of Huffman's algorithm. In fact, you can do even better. You can boil down an implementation of Huffman's algorithm to a single invocation of a sorting subroutine, followed by a merely linear amount of work. Now, in the worst case, you're still going to be stuck asymptotically with an n log n implementation, because you can't sort better than n log n if you know nothing about what you're sorting. But in this implementation the constants are going to be even better, and as we discussed in Part 1, there are sometimes special cases where you can beat n log n for sorting. So, for example, if all of your data is representable with a small number of bits, then you can do better than n log n. So I'm going to leave this even faster implementation as a somewhat nontrivial exercise.

I'll give you a hint, though, which is: if you do sorting as a preprocessing step, you actually don't even need to use a heap data structure for this implementation. You can get away with an even more primitive data structure, a queue. You might, however, want to use two queues. So, next time you find yourself in line for coffee, that's a nice thing to think about: how do you implement Huffman's algorithm merely with one invocation of sorting, so that the symbols are sorted by frequency, followed by linear work using two queues?

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.