The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).


From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming


Course 3 of 4 in the Specialization Algorithms


From the lesson

Week 2

Kruskal's MST algorithm and applications to clustering; advanced union-find (optional).

- Tim Roughgarden, Professor, Computer Science

This video will prove the correctness of our greedy algorithm for clustering. We'll show that it maximizes the spacing over all possible k-clusterings. You might have hoped that we could deduce the correctness of this greedy algorithm for clustering immediately from our correctness proofs for the various greedy minimum spanning tree algorithms. Unfortunately, that doesn't seem to be the case. In the minimum cost spanning tree problem, we're focused on minimizing the sum of the edge costs; here we're looking at a different objective, maximizing the spacing. So we do need to do a proof from scratch. That said, the arguments we'll use should look familiar, not just from the exchange-type arguments we used to prove the cut property, but also, going back even further, from our greedy algorithms for scheduling.

So let's now set up the notation for the proof. As usual, we're going to look at the output of our algorithm, which achieves some objective function value, some spacing. We're going to look at an arbitrary competitor, some other proposed clustering, and show that we're at least as good: our spacing is at least as large. Specifically, we'll denote the clusters in the output of our algorithm by C1 up to Ck. Our clustering has some spacing, the distance between the closest pair of separated points; call it capital S. We'll denote our competitor, some alternative k-clustering, by C-hat 1 up to C-hat k. What is it that we're trying to show? We want to show that this arbitrary other clustering has spacing no larger than ours. If we can show that, then because this clustering was arbitrary, it means the greedy clustering has spacing as large as any other, so it maximizes the spacing; that's what we want to prove. Put differently, we want to exhibit a pair of points separated by this clustering C-hat 1 up to C-hat k such that the distance between those separated points is S or smaller.
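In code terms, the spacing objective could be computed like this. The helper name and the representation (a list of clusters, each a list of point indices) are my own choices for illustration, not notation from the course:

```python
from itertools import combinations

def spacing(clusters, dist):
    """Spacing of a clustering: the distance between the closest pair of
    separated points, i.e. two points lying in different clusters."""
    return min(dist(p, q)
               for a, b in combinations(clusters, 2)   # each pair of clusters
               for p in a for q in b)                  # each separated point pair

# Example: four points on a line, clustered as {0, 1} and {2, 3}.
points = [0.0, 1.0, 5.0, 6.0]
dist = lambda p, q: abs(points[p] - points[q])
print(spacing([[0, 1], [2, 3]], dist))   # closest separated pair is points 1 and 2 -> 4.0
```

Maximizing this quantity over all k-clusterings is exactly the objective the proof is about.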

So let me quickly dispose of a trivial case. If the C-hats are the same as the C's, possibly up to a renaming, then exactly the same pairs of points are separated in each of the clusterings, so the spacing is exactly the same. That's not a case we have to worry about. The interesting case, then, is when the C-hats differ fundamentally from the C's, when they're not merely a permutation of the clusters in the greedy clustering. And the maneuver we're going to do here is similar in spirit to what we did in our scheduling correctness proof. Way back in that proof, we argued that any schedule that differs from the greedy one suffers, in some sense, from a local flaw: we identified an adjacent pair of jobs that was out of order with respect to the greedy ordering. The analog here is that, for any clustering which is not merely a permutation of the greedy clustering, there has to be a pair of points which is classified differently in the C-hats relative to the C's. By differently, I mean they're clustered together in the greedy clustering: these points, p and q, belong to the same cluster Ci. Yet in this alternative clustering, which is not just a permutation of the greedy clustering, they're placed in different clusters: p in some C-hat i, and q in some other C-hat j.

So I want to split the proof into an easy case and a tricky case. To explain why the easy case is easy, let's observe a property that this greedy clustering algorithm has. The algorithm's philosophy is that the squeaky wheel should get the grease: the separated pair of points that are closest to each other are the ones that should get merged. For this reason, because it's always the closest separated pair that gets merged, if you look at the sequence of point pairs that get merged together, the ones that determine the spacing in each subsequent iteration, the distances between these worst separated points only go up over time. At the beginning of the algorithm, the closest pair of points in the entire point set are the ones that get directly merged. Then those are out of the picture, some further-away pair of separated points determines the spacing, and they get merged. Once they've been coalesced, there is still some further-away pair of points which is now the closest separated pair; they get merged, and so on. So if you look at the sequence of distances between the pairs of points directly merged by the greedy algorithm, it only goes up over time, and it culminates with the final spacing S of the greedy algorithm. In some sense, the spacing of the output of the greedy algorithm is the distance between the point pair that would get merged if we ran the greedy algorithm for one more iteration, but unfortunately we're not allowed to do that. Okay? So the point is, every pair of points directly merged by the greedy algorithm is at distance at most S from each other.
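The merge process just described (always fuse the closest separated pair; the spacing is the distance of the one merge you decline to do) can be sketched in Python. This is my own illustration of the algorithm's structure, not code from the course:

```python
from itertools import combinations

def greedy_k_clustering(dist, n, k):
    """Merge the closest separated pair of points until k clusters remain.

    dist maps frozenset({p, q}) to the distance between points p and q,
    for points numbered 0..n-1. Returns a leader array (follow it to a
    fixed point to name a point's cluster) and the spacing: the distance
    of the pair the algorithm would have merged next.
    """
    leader = list(range(n))              # toy union-find, no path compression

    def find(x):
        while leader[x] != x:
            x = leader[x]
        return x

    # Kruskal-style: scan point pairs in increasing order of distance.
    pairs = sorted(combinations(range(n), 2),
                   key=lambda pq: dist[frozenset(pq)])
    clusters = n
    for p, q in pairs:
        rp, rq = find(p), find(q)
        if rp != rq:                     # p and q are currently separated
            if clusters == k:
                # This pair would be merged next; its distance is the spacing.
                return leader, dist[frozenset((p, q))]
            leader[rq] = rp              # direct merger of the two clusters
            clusters -= 1
    return leader, float('inf')          # k == 1: no pair is separated
```

Because pairs are scanned in increasing order of distance, the distances of successive direct mergers are nondecreasing, which is exactly the property the proof relies on.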

So the easy case is when this pair of points p and q, which on the one hand lie in a common greedy cluster, but on the other hand lie in different clusters of the C-hats, was at some point not merely in the same cluster but actually directly merged by the greedy algorithm: at some iteration, they determined the spacing and were picked by the greedy algorithm to have their clusters merged. Then we just argued that the distance between p and q is no more than the spacing capital S of the greedy clustering. And since p and q lie in different clusters of the C-hats, they are separated by the C-hats, and therefore their distance upper-bounds the spacing of the C-hats. Maybe there's some even closer pair separated by the C-hats, but at the very least p and q are separated, so their distance upper-bounds the spacing of the C-hat clustering. That's what we wanted to prove: we wanted to show that this alternative clustering doesn't have better spacing than our greedy clustering; its spacing has to be at most capital S. So in this easy case, when p and q are directly merged by the greedy algorithm, we're done. The tricky case is when p and q are only indirectly merged, and you may be wondering at the moment what that means: how did two points wind up in the same cluster if they weren't, at some point, directly merged? So let's draw a picture and see how that can happen.

The issue is that two points p and q might wind up in a common greedy cluster not because the greedy algorithm ever explicitly considered that point pair, but rather because of a path, or cascade, of direct mergers of other point pairs. Imagine, for example, that at some iteration of the greedy algorithm the point p was considered explicitly along with a point a1, where a1 is different from q. That's a direct merger, and p and a1 wind up in the same cluster; their clusters are merged. Maybe the same thing happened to the point q with some point a-sub-l which is different from p. Sooner or later, at some other time, some totally unrelated pair of points a2 and a3 are directly merged, and then at some point a1 and a2 are considered by the greedy algorithm, because they are the closest pair of separated points, and they get merged. And so on. So the edges in this picture are meant to indicate direct mergers, pairs of points that are explicitly fused because they determine the spacing at some iteration of the greedy algorithm. But at the end of the day, the greedy clustering contains the results of all of these mergers. In case you're feeling confused, let me point out that we saw exactly the same thing going on when we were talking about minimum spanning trees and Kruskal's algorithm. At an intermediate point in Kruskal's algorithm, after it's added some edges but before it's constructed a spanning tree, the intermediate state, as we discussed, is a bunch of different connected components. Vertices that have a chosen edge between them are, of course, in the same connected component. But then again, a connected component can contain long paths, so you can have vertices that are in the same connected component in an intermediate state of Kruskal's algorithm despite the fact that we haven't chosen an edge directly between them; there is, rather, a path of chosen edges between them. It's exactly the same thing going on here.
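This connected-components bookkeeping in Kruskal's algorithm is usually tracked with a union-find (disjoint-set) data structure. Here is a generic sketch of that standard structure, not tied to the lecture's notation:

```python
class UnionFind:
    """Disjoint-set structure: find(x) names x's component; union merges two."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point nodes on the walk closer to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False              # already in the same component
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx          # union by rank keeps trees shallow
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

After union(0, 1) and union(1, 2), the vertices 0 and 2 report the same root even though no union ever touched that pair directly, which is precisely the indirect-merger situation in the proof.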

Now, what we have going for us is that if a pair of points was, as discussed, directly merged, we know they're close: the distance between them is at most the spacing, capital S. We really don't know anything, frankly, about the distance between pairs of points that were not directly merged; they just, in a sense, accidentally wound up in a common cluster. But this turns out to be good enough. It's actually sufficient to argue that this competitor clustering, the C-hats, has spacing no more than S, no better than ours. Let's see why.

So given that p and q are in a common greedy cluster, there must have been a path of direct mergers that forced them into the same cluster. Let the intermediate points involved in that path be denoted a1 up to al. Here's the part of the proof where we basically reduce the tricky case to the easy case. We've got this pair of points p and q. Remember, not only are they in a common greedy cluster, but they're in different clusters in our competitor, the C-hats. The point p is in some cluster, call it C-hat i, and q is in something else; in particular, q is not in C-hat i. Now, imagine you go on a hike. You start at the point p and you hike along this path, traversing these direct mergers toward q. You're starting inside C-hat i and you end up outside, so at some point on your hike you will cross the boundary: you will, for the first time, escape from C-hat i and wind up in some other cluster. That has to happen. Let's call aj and aj+1 the consecutive pair of points at which you go from inside this cluster to outside it. And now we're back in the easy case: we're dealing with a pair that was directly merged by the greedy algorithm. Remember that we set up this path to be a path of direct mergers, so in particular aj and aj+1 were directly merged, and therefore their distance is at most S, the spacing of the greedy clustering. And yet, as a pair separated by the C-hats, their distance is also an upper bound on the spacing of the C-hats. This means the spacing S of our greedy clustering is at least as good as the competitor's. Since the competitor was arbitrary, the greedy clustering is optimal. That completes the proof.
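To make the theorem concrete, here is a small sanity check of my own, not part of the lecture: on a toy one-dimensional point set, brute-force every 2-clustering and confirm that none beats the spacing of the greedy single-link clustering.

```python
from itertools import combinations, product

points = [0.0, 1.0, 2.0, 10.0, 11.0]   # toy 1-D point set (my own example)
n, k = len(points), 2

def d(p, q):
    return abs(points[p] - points[q])

def spacing(labels):
    """Smallest distance between two points placed in different clusters."""
    return min(d(p, q) for p, q in combinations(range(n), 2)
               if labels[p] != labels[q])

# Greedy single-link clustering: merge the closest separated pair until k remain.
labels = list(range(n))
while len(set(labels)) > k:
    p, q = min((pq for pq in combinations(range(n), 2)
                if labels[pq[0]] != labels[pq[1]]),
               key=lambda pq: d(*pq))
    labels = [labels[p] if c == labels[q] else c for c in labels]
greedy_spacing = spacing(labels)

# Brute force: try every way of assigning the n points to k cluster names.
best = max(spacing(list(a)) for a in product(range(k), repeat=n)
           if len(set(a)) == k)

assert greedy_spacing == best   # no 2-clustering has larger spacing
```

On this instance the greedy clustering is {0, 1, 2} and {10, 11}, with spacing 8, and the exhaustive search agrees that 8 is the best achievable.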

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.