0:34

In contrast to the single-source shortest path problem, there is no distinguished source vertex. The goal of the problem is to compute, for every pair of vertices u and v, the length of a shortest path starting at u and ending at v. Just as with the single-source version of the problem, this isn't quite the full story.

If the input graph G has a negative-cost cycle, then, depending on how you define shortest paths, the problem either doesn't make sense or is computationally intractable. So if there is a negative-cost cycle, we're off the hook from having to compute shortest path distances, but we do need to correctly report, in that case, that the graph contains a negative-cost cycle. That's our excuse for not computing the correct shortest path lengths.

So, you know what would make me really happy? If, when you saw this problem, you thought to yourself: eh, don't we pretty much already have a rich enough toolbox to solve the all-pairs shortest path problem? If that's what you thought, that's a great thought, and the answer, in many senses, is yes. So let's explore that idea and make it precise. In the following quiz, I'm going to ask you: suppose I gave you, as a black box, a subroutine that solves the single-source shortest path problem correctly and quickly. How many times would you need to invoke that black box, that subroutine, to correctly solve the all-pairs shortest path problem?

2:01

So, the correct answer is C. You need n invocations of the single-source shortest path subroutine, where n is the number of vertices in the input graph. Why? Well, if you designate an arbitrary vertex s as the source and run the provided subroutine, it will compute for you shortest path distances from that choice of s to all the destinations. So that computes for you n of the shortest path distances out of the n squared that you're responsible for: all the shortest path distances that have this particular vertex s as the origin. There are n different choices of the possible origin, so you just iterate over all of those choices, invoking the subroutine once for each, and boom, you've got the n squared shortest path distances that you're responsible for.
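The reduction above can be sketched in a few lines of code. This is a minimal sketch, not the lecture's own code: it uses a heap-based Dijkstra (so it assumes nonnegative edge costs) as a stand-in for the black-box single-source subroutine, and then simply invokes it once per choice of source.

```python
import heapq

def sssp_dijkstra(n, adj, s):
    """Single-source shortest paths from s (assumes nonnegative edge costs).
    adj[u] is a list of (v, cost) pairs; returns a list of n distances."""
    dist = [float("inf")] * n
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry; a shorter path to u was already found
        for v, c in adj[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(heap, (dist[v], v))
    return dist

def all_pairs(n, adj):
    """Answer C in action: one SSSP invocation per choice of source vertex."""
    return [sssp_dijkstra(n, adj, s) for s in range(n)]
```

The returned matrix has n rows, one per source; entry [u][v] is the shortest-path distance from u to v.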

3:13

The reason it matters whether or not the edge costs are all nonnegative is that it governs which single-source shortest path algorithm we get to use. So in the happy case, where all edge costs are nonnegative, we get to use Dijkstra's algorithm as our workhorse. And remember, Dijkstra's algorithm is blazingly fast: our heap-based implementation of it ran in time O(m log n). So if you run it n times, your running time is naturally n times m times log n. In the sparse case this is going to be n squared log n; in the dense case this is going to be n cubed log n. So for sparse graphs, this frankly is pretty awesome. You're not going to do much better than running Dijkstra n times, once for each choice of the source vertex. The reason is, we're responsible for outputting n squared values, the shortest path distance for each pair u, v of vertices. And so here, the running time is just that n squared times an extra log factor.

The situation for dense graphs, however, is much murkier. And it is an open question to this day whether there are algorithms fundamentally faster than cubic time for the all-pairs shortest paths problem in dense graphs. If you wanted to try to convince somebody that maybe you couldn't do better than cubic time, you might argue as follows. There's a quadratic number of shortest path distances that have to be computed. And for a given pair u, v, the shortest path might well have a linear number of edges in it. So, surely, you can't compute the shortest path between one pair better than linear time, and then computing a quadratic number of them is going to take cubic time. However, I want to be clear: this is not a proof. This is just a wishy-washy argument. Why is it not a proof? Well, for all we know, we can do some amount of work which is relevant simultaneously for lots of these shortest path problems, so that you don't actually have to spend linear time, on average, for each.

As a tale of inspiration, let me remind you about matrix multiplication. If you write down the definition of what it means to multiply two matrices, it just seems like an obviously cubic problem. It just seems like, by definition, you have to do a cubic amount of work. Yet that intuition is totally wrong. Beginning with Strassen, and then many subsequent algorithms, we now know that there are algorithms for matrix multiplication fundamentally better than the naive cubic-time algorithm. If you have a nontrivial approach to decomposing a problem, you can eliminate some of the redundant work and do better than the straightforward solution.
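To make the "obviously cubic" intuition concrete, here is the textbook definition written out directly as code (a minimal sketch, not from the lecture): for n-by-n inputs, three nested loops do Theta(n cubed) work, which is exactly the barrier Strassen's algorithm breaks.

```python
def matmul_naive(A, B):
    """Matrix multiplication straight from the definition:
    C[i][j] = sum over k of A[i][k] * B[k][j].
    Three nested loops => cubic work for square matrices."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C
```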

Is a Strassen-like improvement possible for the all-pairs shortest paths problem? Nobody knows.

Now let's discuss the general case, in which graphs are allowed to have negative edge lengths.

So in this case we cannot use Dijkstra's algorithm as our workhorse, as our single-source shortest path subroutine. We have to resort to the Bellman-Ford algorithm instead, because that's the only one of the two that accommodates negative-cost edges. Remember, the Bellman-Ford algorithm is slower than Dijkstra's algorithm; the running time bound that we proved was O(m times n). If we run that n times, we get a running time of n squared times m. How good is the running time of n squared m? Well, if the graph is sparse, if m is theta of n, then this is cubic in n. And if the graph is dense, if m is theta of n squared, we're now seeing, for the first time in this course, a quartic running time. Now, I hope you weren't especially happy with the cubic running time bound for the sparse graph case, but when we're talking about quartic running time, that just really seems exorbitant. And so hopefully you're thinking there's gotta be a better approach to this problem than just running Bellman-Ford n times in the dense graph case. Indeed, there is: the Floyd-Warshall algorithm, which we'll start talking about in the next video.
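As a concrete recap of the n-fold Bellman-Ford approach discussed above, here is a minimal sketch (my own code, not the lecture's). Each single-source call does n - 1 rounds of edge relaxations, O(m n) work; running it from every source gives the O(n squared m) bound. The negative-cycle check is omitted for brevity, although a real implementation must report one if present.

```python
def bellman_ford(n, edges, s):
    """O(m*n) single-source shortest paths; handles negative-cost edges.
    edges is a list of (u, v, cost) triples; returns a list of n distances.
    (Negative-cycle detection omitted here for brevity.)"""
    dist = [float("inf")] * n
    dist[s] = 0
    for _ in range(n - 1):           # n - 1 relaxation rounds
        for u, v, c in edges:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
    return dist

def apsp_bellman_ford(n, edges):
    """n invocations of Bellman-Ford => O(n^2 * m) overall,
    which is quartic when m = Theta(n^2)."""
    return [bellman_ford(n, edges, s) for s in range(n)]
```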
