1:02

Now, node v is certainly hoping not to have to inherit its solution of plus infinity from the previous round, when i equals 0. And indeed, when i equals 1, the subproblem solution for the vertex v is going to be 2. That's because we can choose the last hop to be the edge (s, v), which has length 2, and s's subproblem value from the last iteration, when i equals 0, is 0.

By the same reasoning, x's new subproblem value is going to be 4, because we can choose the last hop to be (s, x) and add the cost of that edge, 4, to s's subproblem value from the last iteration, when i equals 0.

Now, the nodes w and t would love to throw out their plus-infinity solutions and get something finite. And you might be thinking that because v and x now have finite distances, those would propagate to the nodes w and t. That will indeed happen, but we're going to have to wait until the next iteration, when i equals 2.

The reason is that if you look at the code, if you look at the recurrence, when we compute the subproblem value at a given iteration i, we only make use of the subproblem solutions from the previous iteration, i minus 1. We do not use any of the updates that have already happened in the current iteration i. So because, when i equals 0, A[0, v] and A[0, x] were both plus infinity, we're going to be stuck with A[1, w] and A[1, t] both being plus infinity as well.

So now let's move on to the next iteration of the outer for loop, when i equals 2. The subproblem solution at s is not going to change: you're not going to do better than 0, so it's going to stay that way. Similarly, at the vertex v, you're not going to do better than 2, so that's going to stay the same at this iteration.

Something interesting happens at the vertex x, however. Now, in the recurrence you certainly have the option of inheriting the previous solution. So one option would be to set A[2, x] equal to 4, but there's actually a better choice. The recurrence is going to come out even smaller, specifically if we choose the last hop to be the unit-cost arc from v to x. We add that unit cost to v's subproblem value from the last iteration, when i equals 1, which was 2; 2 plus 1 equals 3. So that's going to be the new subproblem value at x in this iteration, where i equals 2.

So as advertised, the updates to the vertices v and x in the iteration where i equals 1, now that i equals 2, get propagated on to the vertices w and t. So w and t shed their plus-infinity values, and instead they pick up the values 4 and 8, respectively.
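The table-filling process described above can be sketched in code. This is a minimal reconstruction, not the lecture's own code: the edge list below is an assumption pieced together from the numbers in the example, and in particular the weight 2 on the hypothetical edge (v, w) is inferred (it is whatever value makes A[2, w] come out to 4).

```python
from math import inf

# Assumed edge list reconstructed from the lecture's example numbers.
# The v->w weight of 2 is an inference, not stated in the lecture.
edges = [('s', 'v', 2), ('s', 'x', 4), ('v', 'x', 1), ('v', 'w', 2), ('x', 't', 4)]
vertices = ['s', 'v', 'x', 'w', 't']

def bellman_ford_table(vertices, edges, source, rounds):
    # A[i][v] = length of a shortest source->v path using at most i edges.
    A = [{v: (0 if v == source else inf) for v in vertices}]
    for i in range(1, rounds + 1):
        prev = A[-1]              # only round i-1 values are consulted
        cur = {}
        for v in vertices:
            best = prev[v]        # option 1: inherit last round's value
            for (u, w, c) in edges:
                if w == v:        # option 2: last hop is the edge (u, v)
                    best = min(best, prev[u] + c)
            cur[v] = best
        A.append(cur)
    return A

A = bellman_ford_table(vertices, edges, 's', 2)
print(A[1])  # {'s': 0, 'v': 2, 'x': 4, 'w': inf, 't': inf}
print(A[2])  # {'s': 0, 'v': 2, 'x': 3, 'w': 4, 't': 8}
```

Note how round 2 reproduces the values from the lecture: x drops from 4 to 3, while w and t pick up 4 and 8.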

Â 3:44

Notice that I've labeled the vertex t with an 8, not with a 7. I've computed A[2, t] to be 8. The reason, again, is that updates from this iteration, in particular the fact that x has dropped from 4 to 3, are not visible to other nodes in this same iteration. We have to wait until the next iteration of the outer for loop before they propagate. So we're using the stale information at x, namely that when i equals 1 its solution value is 4. That's the information we use to update t's solution value, so it's 4 plus 4, or 8.
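By way of contrast, a common practical variant relaxes edges in place in a single distance array, so later relaxations in the same pass can see values already updated in that pass. This is a sketch under the same assumed edge list as the lecture's example; with this particular edge ordering, one in-place pass already lets t see x's fresh value 3 and pick up 7 instead of 8. Both variants converge to the same shortest-path distances, but the intermediate tables differ.

```python
from math import inf

# Same assumed edge list reconstructed from the lecture's example numbers.
edges = [('s', 'v', 2), ('s', 'x', 4), ('v', 'x', 1), ('v', 'w', 2), ('x', 't', 4)]
vertices = ['s', 'v', 'x', 'w', 't']

def relax_in_place(dist, edges):
    # One pass that overwrites dist as it goes: relaxations later in the pass
    # can observe values updated earlier in the same pass, unlike the
    # two-array dynamic program in the lecture.
    for (u, v, c) in edges:
        if dist[u] + c < dist[v]:
            dist[v] = dist[u] + c

dist = {v: (0 if v == 's' else inf) for v in vertices}
relax_in_place(dist, edges)
print(dist['t'])  # 7: t already sees x's in-pass value 3, so 3 + 4 = 7
```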

Â 5:43

So answer A, quadratic in n, n squared: what this counts is the number of subproblems, right? Subproblems are indexed by a choice of i, which is somewhere between 0 and n minus 1, and a choice of a destination v, of which there are n. That's exactly n squared subproblems.

If we only ever spent constant time evaluating a subproblem, then indeed the running time of Bellman-Ford would be big O of n squared. And in most dynamic programming algorithms we've discussed in this class, you do only spend constant time solving each subproblem, the one exception being the optimal binary search trees problem, where in general we spent linear time. Here again, like optimal binary search trees, we might spend more than constant time solving a subproblem. The reason is that we have to do brute-force search through a list of candidates that might be super-constant: each arc that comes in to the destination v provides one candidate for what the correct solution to this subproblem might be. So remember, the number of candidates, we had a quiz on this, is proportional to the in-degree of the vertex v. And that can be as large as n minus 1, linear in the number of vertices.

Â 6:58

Another interesting answer is C, that it's cubic in n. Indeed, big O of n cubed is a valid upper bound on the running time of the Bellman-Ford algorithm, but it's not the sharpest possible upper bound. So why is it a valid upper bound? As discussed just now, there's a quadratic, n squared, number of subproblems. How much work do you do per subproblem? Well, it's proportional to the in-degree of a vertex, and the biggest possible in-degree is n minus 1, big O of n. So, linear work for each of the quadratic number of subproblems, resulting in cubic running time.

Â 7:30

There is, however, a tighter, a better analysis of the Bellman-Ford algorithm. Now, why is O of mn better than n cubed? Well, what is m? In a sparse graph, m is going to be theta of n, and in a dense graph, m is going to be theta of n squared. So if it's a dense graph, it's true, big O of mn is no smaller than n cubed; but if it's not a dense graph, then this really is an improved upper bound.
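To make the gap concrete, here is a back-of-the-envelope comparison; the values n = 1000 and m = 3n for the sparse case are illustrative assumptions, not from the lecture.

```python
# Assumed illustrative sizes: n vertices, sparse graph with m ~ 3n edges.
n = 1000
m_sparse = 3 * n          # sparse: m = Theta(n)
m_dense = n * (n - 1)     # dense directed graph: m can be ~ n^2

print(m_sparse * n)       # 3000000: the O(mn) bound
print(n ** 3)             # 1000000000: the O(n^3) bound, ~333x larger here
```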

Â 8:31

But we know a much simpler expression for the sum of the in-degrees. This sum is simply equal to m, the number of edges. In any directed graph, the number of edges is exactly equal to the sum of the in-degrees. One easy way to see that is to take your favorite directed graph and imagine you plug the edges into the graph one at a time, starting from the empty set of edges. Each time you plug in a new edge, obviously the number of edges in the graph goes up by one. And also, the in-degree of exactly one vertex goes up by one, namely whichever vertex happens to be the head of the edge that you just stuck in. So therefore, the sum of the in-degrees and the number of edges are always the same, no matter what the directed graph is. And that's why the total work is better than n cubed: it's actually big O of m times n.
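The edge-at-a-time argument above is easy to sanity-check in code. This sketch builds a randomly generated directed graph (an assumed example, with an arbitrary seed) and verifies that the in-degrees sum to m.

```python
import random

# Randomly generated directed graph (assumed example; seed is arbitrary).
random.seed(0)
n = 8
edges = [(u, v) for u in range(n) for v in range(n)
         if u != v and random.random() < 0.3]

in_degree = [0] * n
for (u, v) in edges:
    in_degree[v] += 1   # each edge bumps the in-degree of exactly one head

# Sum of in-degrees equals the number of edges m, as argued above.
assert sum(in_degree) == len(edges)
```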

Â 9:52

So in general, suppose that in some iteration earlier than the last one, say with a current index value of j, it just so happens that nothing changes. That is, at every destination v, you just reuse the optimal solution that you computed in the previous iteration of the outer for loop. Well, if you think about it, what's going to happen in the next iteration? You're going to do exactly the same set of computations, with exactly the same set of inputs, so you're going to get exactly the same set of results. That is, it will be true again in the next iteration that you just inherit the optimal solutions from the previous one. And that will keep happening over and over again, for the rest of time.
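This observation suggests an early-stopping optimization: track whether any subproblem value changed during a pass, and stop as soon as a pass changes nothing. The sketch below is one way to implement it, using the same assumed example edge list as earlier; it is an illustration, not the lecture's own code.

```python
from math import inf

def bellman_ford_early_stop(vertices, edges, source):
    # Rolling table plus a change flag: if a whole pass changes nothing,
    # every later pass would repeat the same computation, so stop early.
    prev = {v: (0 if v == source else inf) for v in vertices}
    for _ in range(len(vertices) - 1):
        cur = {}
        changed = False
        for v in vertices:
            best = prev[v]
            for (u, w, c) in edges:
                if w == v and prev[u] + c < best:
                    best = prev[u] + c
            cur[v] = best
            changed = changed or best != prev[v]
        prev = cur
        if not changed:
            break            # stable: all later iterations would be identical
    return prev

# Assumed example edge list reconstructed from the lecture's numbers.
edges = [('s', 'v', 2), ('s', 'x', 4), ('v', 'x', 1), ('v', 'w', 2), ('x', 't', 4)]
dist = bellman_ford_early_stop(['s', 'v', 'x', 'w', 't'], edges, 's')
print(dist)  # {'s': 0, 'v': 2, 'x': 3, 'w': 4, 't': 7}
```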
