The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).

From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming

From the lesson

Week 2

Kruskal's MST algorithm and applications to clustering; advanced union-find (optional).

- Tim Roughgarden, Professor, Computer Science

So now we understand why Kruskal's algorithm is correct, why it always computes a minimum-cost spanning tree. In this video, we'll turn our attention to implementation issues. We'll begin with a straightforward implementation of Kruskal's algorithm. That will give us a polynomial running-time bound, which is good, but we'd like to do better. So then we'll show how deploying a suitable data structure, something you haven't seen before, the Union-Find data structure, allows us to speed up Kruskal's algorithm to be competitive with Prim's algorithm. That is, we'll get a near-linear running-time bound of O(M log N).

So let's just briefly review the very elegant pseudocode for Kruskal's algorithm. It's a greedy algorithm: it considers the cheapest edges first, all the way up to the most expensive. So we begin with a sorting pre-processing step to put the edges in sorted order. For notational convenience, let's just rename the edges, so that edge 1 is the cheapest edge, and so on, all the way up to edge M being the most expensive.

We then have our single linear scan in this for loop, and we just grab edges whenever we can. So we maintain this evolving set capital T, which at the end of the algorithm will be our spanning tree. Now, what forces us to exclude an edge from this set capital T? Well, if it creates a cycle with the edges already chosen, obviously that's a no-go; we can't have cycles in our final output. But as long as including an edge doesn't create a cycle, we go ahead and optimistically include it.
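The algorithm just reviewed is, more or less, the following (rendered in pseudocode from the description above, not copied from the slide):

```
sort the edges by cost, and rename them so that
    cost(e1) <= cost(e2) <= ... <= cost(eM)
T = empty set
for i = 1 to M:
    if T + {ei} creates no cycle:
        add ei to T
return T
```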

And as we've seen, this is a correct algorithm: it always outputs the minimum-cost spanning tree.

So, what would be the running time of this algorithm, if we just straightforwardly implement the pseudocode on this slide? Well, let's just consider the algorithm step by step. In the first step, we sort the edges, and that's going to take O(M log N) time. Now don't forget, whenever we're speaking about graphs, we have the convention that M denotes the number of edges and N denotes the number of vertices. So you might justifiably wonder why I wrote M log N for the running time of the sorting step instead of M log M, since, after all, what we're sorting are the edges, and there are M of them. Well, what I'm using here is that, in this context, we can use log N and log M interchangeably inside big-O notation. Why is that true? Well, recall that when we first discussed graphs in Part One, we noticed that there can't be too many edges: the number of edges M is at most quadratic in the number of vertices, that is, at most big-O of N squared. So if M is at most N squared, then log M is at most 2 log N, and the 2 is suppressed in the big-O notation. So log M and log N are interchangeable in this context.

Notice that for the minimum-cost spanning tree problem, you may as well assume that there are no parallel edges, that the graph is simple. If you have a bunch of parallel edges between a given pair of vertices, you can just throw out all but the cheapest one; that's the only one you'll ever need.
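That clean-up step is easy to do in one pass; here is a minimal Python sketch (the `(cost, u, v)` edge representation is my own assumption, not from the lecture):

```python
def drop_parallel_edges(edges):
    """Keep only the cheapest edge between each pair of vertices.

    edges: iterable of (cost, u, v) tuples.
    """
    cheapest = {}
    for cost, u, v in edges:
        key = (min(u, v), max(u, v))   # same key for (u, v) and (v, u)
        if key not in cheapest or cost < cheapest[key]:
            cheapest[key] = cost
    return [(c, u, v) for (u, v), c in cheapest.items()]
```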

So, moving on to the main loop, it's pretty obvious how many iterations we have there: we have M iterations. So all we need to figure out is how much work we have to do in each iteration. What does each iteration need to accomplish? It needs to check whether or not adding the current edge to the edges we've already chosen creates a cycle. I claim that this can be done in time linear in the number of vertices, that is, in big-O of N time. So how do we accomplish this?

Well, we need two quick observations. First of all, and this is something we've seen in arguments in the previous videos, checking whether or not a new edge, say with endpoints U and V, is going to create a cycle boils down to checking whether or not there's already a path between the endpoints U and V in the edges of capital T chosen so far. If there is already a U-V path, adding this edge will close the loop and create a cycle. If there is currently no U-V path, then adding this edge will not create a cycle.

The second observation is: well, how do we check if there's a path from U to V in the edges we've already chosen? We already know how to do that, just using graph search. So you can use breadth-first search, you can use depth-first search; it doesn't matter. You just start at the vertex U and you see whether you reach V or not. If you reach it, there's a path; if you don't reach it, there's no path. Breadth-first search, depth-first search, whatever: it takes time linear in the size of the graph that you're searching. And since we only need to search through the edges that are in capital T, and there are going to be at most N minus 1 of them, linear time in this context means O(N), O of the number of vertices, because that also bounds the number of edges in capital T that we're searching through.
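Putting these observations together, the straightforward implementation described so far might be sketched like this in Python; the `(cost, u, v)` edge representation and function names are my own, not from the lecture:

```python
from collections import defaultdict, deque

def creates_cycle(T, u, v):
    """Check via breadth-first search whether T already contains
    a u-v path (in which case adding edge (u, v) creates a cycle)."""
    adj = defaultdict(list)
    for _, a, b in T:
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True        # u-v path found: edge would close a cycle
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False

def kruskal(n, edges):
    """Straightforward Kruskal: O(M log N) sort + M cycle checks of O(N) each.

    edges: list of (cost, u, v) tuples; vertices labeled 0..n-1.
    """
    T = []
    for cost, u, v in sorted(edges):   # cheapest edge first
        if not creates_cycle(T, u, v):
            T.append((cost, u, v))
    return T
```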

So, adding up all of this work, what do we have? We have the sorting pre-processing step, which takes big-O of M log N time, and then we have these M iterations of the for loop, each contributing an O(N) factor. The latter term dominates, so the overall running time is big-O of M times N. This, coincidentally, is the same running time we got from the straightforward implementation of Prim's algorithm, and I'll make the same comments here. This is a reasonable running time; it's polynomial in the input size. It's way better than checking all of the exponentially many spanning trees that the graph might have. But we certainly would like to do better. We'd certainly love to have an implementation of Kruskal's algorithm that gets us to a near-linear running-time bound, and that's the plan.

How are we going to do it? Well, really, the work that we're doing here over and over again, which is kind of a bummer, is these cycle checks. In every single iteration, we're spending time linear in the number of vertices to check for a cycle. And the question is: can we speed that up? The Union-Find data structure will actually, believe it or not, allow us to check for a cycle in constant time. So if we had a data structure that could implement constant-time cycle checks, then we'd have to spend only constant time in each iteration of this loop, so the loop overall would take only time linear in the number of edges, O(M). If we got that, then, believe it or not, the sorting pre-processing step would become the bottleneck in the running time of Kruskal's algorithm. Our running time would drop from M times N down to near linear, down to O(M log N).

So let me now tell you a little bit about this magical data structure that's going to give us constant-time cycle checks. I'm just going to give you the high-level picture, and how it connects to Kruskal's algorithm, on this slide; we'll look at the details of the data structure in the next video. I also want to warn you that I'm not going to discuss, in this pair of videos, the state of the art in Union-Find data structures. I'm going to give you a fairly primitive version, but one that is nevertheless sufficient to give us our desired O(M log N) running time for Kruskal's algorithm. If you're interested, there is some optional material about different implementations of Union-Find that use some super cool ideas, like union by rank and path compression, and that give you different, and in some senses better, operation times. But the quick-and-dirty version of Union-Find that I'm going to discuss here is sufficient for our present needs.

And so the raison d'être of a Union-Find data structure is to maintain a partition of a set of objects. So in this picture, the overall rectangle is meant to denote a set of objects, and C1, C2, C3, and C4 are disjoint subsets whose union is the entire set. That's what I mean by a partition of a group of objects. We're not going to ask too much of this data structure; we're only going to ask it to support two operations, and no prizes for guessing what those two operations are called. The Find operation is given an object from this universe, and we ask the data structure to return to us the name of the group to which that object belongs. So, for example, if we handed it an object in the middle of this rectangle, we'd expect it to return to us the name C3. The Union operation, by contrast, takes as input the names of two groups, and what we want the data structure to do is to fuse those two groups together. That is, the objects in the first group and the objects in the second group should all coalesce and now be in one single group. So why might such a data structure be useful for speeding up Kruskal's algorithm?
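Before making that connection, here is a quick-and-dirty Python sketch of the two operations just described; the class name and the naive relabeling scheme are my own, and the actual implementation details come in the next video:

```python
class UnionFind:
    """Naive Union-Find: each object stores the name (leader) of its
    group; Union relabels every member of one group."""

    def __init__(self, n):
        self.leader = list(range(n))   # object i starts in its own group

    def find(self, x):
        """Return the name of the group containing object x."""
        return self.leader[x]

    def union(self, a, b):
        """Fuse group named b into group named a."""
        for x in range(len(self.leader)):
            if self.leader[x] == b:
                self.leader[x] = a
```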

To see the connection, think of Kruskal's algorithm as working conceptually in the following way. Initially, when the algorithm starts and the set capital T is empty, each of the vertices is on its own, in its own isolated connected component. Then, each time Kruskal's algorithm adds a new edge to the set capital T, what that does is take two current connected components and fuse them into a single connected component. So, for example, toward the end of Kruskal's algorithm, maybe it's included enough edges that the tree capital T constructed so far has only four different connected components, and maybe it's about to add a new edge (U, V), where of course U and V must be in different connected components with respect to the edges chosen so far. This new edge addition, at this iteration of Kruskal's algorithm, is going to fuse the connected components of U and V into a single one. That corresponds to taking the union of the groups to which U and V respectively belong.

So, to be a little more precise about it: what are the objects maintained by the Union-Find data structure going to be in Kruskal's algorithm? They're going to correspond to the vertices; it's the vertices coalescing that we want to keep track of. And what are the groups in the partition that we maintain going to be? They're just going to correspond to the connected components with respect to the edges that Kruskal's algorithm has already committed to. With these semantics, it's clear that every time Kruskal's algorithm adds a new edge to its set capital T, we have to invoke the Union operation to fuse two connected components into one.
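With those semantics, the sped-up algorithm can be sketched in a self-contained way as follows; the `leader` array stands in for the Union-Find structure, the relabeling in the union step is deliberately naive (the next video's refinements keep its total cost low enough to preserve the O(M log N) bound), and the `(cost, u, v)` edge representation is my own assumption:

```python
def kruskal_union_find(n, edges):
    """Kruskal's MST with Union-Find-style constant-time cycle checks.

    edges: list of (cost, u, v) tuples; vertices labeled 0..n-1.
    """
    leader = list(range(n))       # each vertex starts as its own component

    T = []
    for cost, u, v in sorted(edges):   # O(M log N) sorting step
        # Constant-time cycle check: u and v lie in the same connected
        # component exactly when they have the same leader.
        if leader[u] != leader[v]:
            T.append((cost, u, v))
            # Union step: fuse v's component into u's by relabeling.
            old, new = leader[v], leader[u]
            for x in range(n):
                if leader[x] == old:
                    leader[x] = new
    return T
```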
