The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).

From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming

From the lesson

Week 2

Kruskal's MST algorithm and applications to clustering; advanced union-find (optional).

- Tim Roughgarden, Professor, Computer Science

So, in these next few videos, we're going to continue our discussion of the minimum cost spanning tree problem. And I'm going to focus on a second good greedy algorithm for the problem, namely Kruskal's algorithm. Now, we already have an excellent algorithm for computing the minimum cost spanning tree in Prim's algorithm, so you might wonder why bother spending the time learning a second one? Well, let me give you three reasons.

The first reason is that it's just a cool algorithm. It's definitely a candidate for the greatest hits; it's something I think you should know. It's competitive with Prim's algorithm both in theory and in practice, so it's another great greedy solution for the minimum cost spanning tree problem.

The second reason is it'll give us an opportunity to learn a new data structure, one we haven't discussed yet in this course. It's called the union-find data structure. In a similar way to how we used heaps to get a super fast implementation of Prim's algorithm, we'll use a union-find data structure to get a super fast implementation of Kruskal's algorithm. So that'll be a fun topic just in its own right.

The third reason is there are some very cool connections between Kruskal's algorithm and certain types of clustering algorithms. So that's a nice application that we'll spend some time talking about. I'll discuss how natural greedy algorithms in a clustering context are best understood as variants of Kruskal's minimum spanning tree algorithm.
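As a preview of the data structure mentioned above, here is a minimal union-find sketch. This is not the lecture's code; the class and method names are illustrative, and it includes the standard path-compression and union-by-rank optimizations, which the lecture treats later as optional advanced material.

```python
class UnionFind:
    """Minimal union-find (disjoint-set) sketch, with path compression
    and union by rank, as used to speed up Kruskal's algorithm."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own set
        self.rank = [0] * n

    def find(self, x):
        # Follow parent pointers to the root, halving the path as we go.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        # Merge the sets containing x and y; return False if already merged.
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

Kruskal's algorithm can then test whether an edge would create a cycle in constant amortized time, by asking whether its two endpoints are already in the same set.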

So let me just briefly review some of the things I expect you to remember about the minimum cost spanning tree problem. The input, of course, is an undirected graph G, and each edge has a cost. What the algorithm is responsible for outputting is a spanning tree: that is, a subgraph which has no cycles and is connected, so there's a path between each pair of vertices. And amongst the potentially exponentially many spanning trees, the algorithm is supposed to output the one with the smallest cost, that is, the smallest sum of edge costs.

Let me reiterate the standing assumptions that we've had throughout the minimum spanning tree lectures. First of all, we're assuming the graph is connected; that's obviously necessary for it to have any spanning trees. That said, Kruskal's algorithm actually extends in a really easy, elegant way to the case where G is disconnected, but I'm not going to talk about that here. Secondly, remember, we're going to assume that all of the edge costs are distinct, that is, there are no ties. Now don't worry, Kruskal's algorithm is just as correct if there are ties amongst the edge costs; I'm just not going to give you a

proof that covers that case, but don't worry, a proof does indeed exist. Finally, of the various machinery that we developed to prove the correctness of Prim's algorithm, perhaps the most important and most subtle point was what's called the cut property. This is a condition which guarantees you're not making a mistake in a greedy algorithm; it guarantees that a given edge is indeed in the minimum spanning tree. And remember, the cut property says: if you have an edge of a graph, and you can find just a single cut for which this edge is the cheapest one crossing it, okay, so the edge e crosses this cut and every other edge that crosses it is more expensive, then that certifies the presence of this edge in the minimum spanning tree; it guarantees that it's a safe edge to include. So we'll definitely be using that again in Kruskal's algorithm when we prove it's correct.

As with Prim's algorithm, before I hit you with the pseudocode, let me just show you how the algorithm works in an example. I think you'll find it very natural.
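For reference, the cut property just reviewed can be stated compactly. The symbol T* for the minimum spanning tree is notation introduced here, not the lecture's:

```latex
% Cut property: for an undirected connected graph G = (V, E)
% with distinct edge costs c_e, and T* the minimum spanning tree of G:
\text{if there is a cut } (A, B) \text{ of } V \text{ such that } e
\text{ crosses } (A, B) \text{ and } c_e < c_f \text{ for every other edge }
f \text{ crossing } (A, B), \text{ then } e \in T^{*}.
```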

So let's look at the following graph with five vertices. The graph has seven edges, and I've annotated them with their edge costs in blue. Here's the big difference in philosophy between Prim's algorithm and Kruskal's algorithm. In Prim's algorithm, you insisted on growing like a mold from a starting point, always maintaining connectivity and spanning one new vertex in each iteration. Kruskal's is going to just throw out the desire to have a connected subgraph at each step of the iteration. Kruskal's algorithm will be totally content to grow a tree in parallel, with lots of simultaneous little pieces, only having them coalesce at the very end of the algorithm. So while in Prim's algorithm we were only allowed to pick the cheapest edge subject to the constraint of spanning some new vertex, in Kruskal's algorithm we're just going to pick the cheapest edge that we haven't looked at yet. Now, there is an issue, of course: we want to construct a spanning tree at the end, so we certainly don't want to create any cycles, and we'll skip over edges that would create cycles. But other than that constraint, we'll just look at the cheapest edge next in Kruskal's algorithm, and pick it if it creates no cycle.

So let's look at this five-vertex example. Again, there is no starting point; we're just going to look at the cheapest edge overall. That's obviously this unit-cost edge, and we're going to include that in our tree. Right? Why not pick the cheapest edge? It's a greedy algorithm. So what do we do next? Well, now we have this edge of cost 2. That looks good, so let's go ahead and pick that one. Cool. Notice these two edges are totally disjoint, okay? So we are not maintaining connectivity of our subgraph at each iteration of Kruskal's algorithm. Now, it just so happens that when we look at the next edge, the edge of cost 3, we will fuse together the two disjoint pieces that we had previously, and now we happen to have one connected piece.

Now, here's where it gets interesting. When we look at the next edge, the edge of cost 4, we notice that we're not allowed to pick it. Why? Well, that would create a triangle with the edges of costs 2 and 3, and that of course is a no-no. We want a spanning tree at the end of the day, so we can't have any cycles. So we skip over the 4 because we have no choice, we can't pick it, and we move on to the 5, and the 5 is fine. When we pick the edge of cost 5, there's no cycle, so we go ahead and include it. And now we have a spanning tree and we stop. Or, if you prefer, you could think of it that we do consider the edge of cost 6; that would create a triangle with the edges of costs 3 and 5, so we skip the 6. And then, for completeness, we think about considering the 7, but that would form a triangle with the edges of costs 1 and 5, so we skip that. So after this single scan through the edges in sorted order, we find ourselves with these four pink edges. In this case, it's a spanning tree, and as we'll see, not just in this graph but in every graph, it's actually the minimum cost spanning tree.

So, with the intuition hopefully solidly in place, I don't think the following pseudocode will surprise you. We want to get away with a single scan through the edges in sorted order. So, obviously, in the preprocessing step, we want to take the unsorted array of edges and sort them by edge cost. To keep the notation in the pseudocode simple, let me just, for the purposes of the algorithm description only, rename the edges 1, 2, 3, 4, all the way up to m, conforming to this sorted order. So the algorithm's just going to scan through the edges in this newfound sorted order. We're going to call the tree-in-progress capital T, like we did in Prim's algorithm. And now we just zip through the edges once in sorted order, and we take an edge unless it's obviously a bad idea. Here, a bad idea means it creates a cycle; that's a no-no. But as long as there's no cycle, we go ahead and include the edge. And that's it: after you finish the for loop, you just return the tree capital T.
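The scan just described can be sketched in Python. This is a sketch, not the lecture's code: the cycle check here is the naive one (a graph search in the tree-so-far per edge), and the five-vertex example graph below is one reconstruction consistent with the narration above (the vertex names A through E and the exact edge layout are assumptions, though the triangles match the ones described).

```python
from collections import defaultdict

def kruskal(edges):
    """Kruskal's MST algorithm with the naive cycle check.

    edges: list of (cost, u, v) tuples with distinct costs.
    Returns the edges of the minimum spanning tree T, in the order chosen.
    """
    T = []                    # the tree-in-progress, "capital T"
    adj = defaultdict(set)    # adjacency structure of T

    def creates_cycle(u, v):
        # Naive check: depth-first search in T to see if u already reaches v.
        stack, seen = [u], {u}
        while stack:
            x = stack.pop()
            if x == v:
                return True
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    for cost, u, v in sorted(edges):   # single scan in sorted order
        if not creates_cycle(u, v):    # skip edges that would create a cycle
            T.append((cost, u, v))
            adj[u].add(v)
            adj[v].add(u)
    return T

# One reconstruction of the lecture's five-vertex, seven-edge example.
example = [(1, 'A', 'B'), (2, 'C', 'D'), (3, 'B', 'C'), (4, 'B', 'D'),
           (5, 'B', 'E'), (6, 'C', 'E'), (7, 'A', 'E')]
```

On this graph the scan picks the edges of costs 1, 2, and 3, skips the 4 (triangle with 2 and 3), picks the 5, and skips the 6 and 7, just as in the walkthrough.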

It's easy to imagine various optimizations that you could do. For example, once you've added enough edges to have a spanning tree, that is, n - 1 edges, where n is the number of vertices, you can go ahead and abort this for loop and terminate the algorithm. But let's just keep things simple and analyze this three-line version of Kruskal's algorithm in this lecture.

Just like in our discussion of Prim's algorithm, what I want to do first is focus on why Kruskal's algorithm is correct. Why does it output a spanning tree at all? And the spanning tree that it outputs, why on earth should it have the minimum cost amongst all spanning trees? Once we're convinced of the correctness, we'll move on to a naive implementation and its running time, and then finally a fast implementation using suitable data structures.
