0:13

First, let's make an observation: if we have some optimal path, some fastest route or some shortest route, then any part of it between one node in the middle and some other node in the middle is also optimal in the same sense. Let's prove it.

0:32

Consider an optimal path from some origin S to some node T, and consider some two vertices u and v on this path. They can coincide with S or T, or they can be somewhere in the middle. Suppose there was some shorter path from u to v, which is marked with green. Then we would be able to go from S to u as in the old path, take the shorter path from u to v as a shortcut, and then go from v to T as in the old path. And in total, this new path would be shorter than the initial path, which was optimal. So this cannot happen. And so we know that actually any part of an optimal path is also optimal itself.

1:20

A corollary from that is that if there is some shortest path from S to node t, and u is the previous node on that path, then the distance from the origin to the final destination t is equal to the distance from the origin to node u, the previous node, plus the weight of the edge between u and t. And we will use this property to improve our estimates of distances from the origin to the nodes gradually.
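In symbols, with d for true shortest-path distances and w for edge weights, this corollary says:

```latex
% u is the node immediately preceding t on some shortest path from S to t
d(S, t) = d(S, u) + w(u, t)
```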

2:09

In breadth-first search, the dist values started from infinity, and as soon as they got updated they became the correct distances from the origin to the corresponding node. This will not be the case in this algorithm: these distances may change several times before they become correct. But in the end they will become correct. And during the process, these distances will be upper bounds on the actual distances. So they will be greater than or equal to the real distance from the origin to the corresponding node.

There is a procedure called edge relaxation, and it means the following. We take the edge (u, v). We check: is it better to go to v through the currently known optimal path from S to u, and then follow the edge from u to v? Does it improve our current upper bound on the distance to v, or not? So we have some upper bound dist[v], and we achieved it, maybe using u on the way or maybe not. We might have come to v in a different way, and we might have come using the same distance as if we go from S to u and then follow the edge (u, v). Or we could have used a longer path. So we just check whether it is possible to improve our current estimate dist[v] using dist[u] plus the weight of the edge from u to v.

So here is the code of the relaxation procedure.

3:48

It takes one edge from u to v as input, and it checks if the current distance estimate of v is bigger than the current distance estimate of u plus the weight of the edge.

That means that we can definitely come to node u from S by a path of length at most dist[u]. And if we then follow the edge (u, v) with weight w(u, v), we will improve the distance estimate of v. So in this case we update the distance estimate (we always decrease it, as you see), and we also remember that the node from which we came into v is now u. Remember that in the breadth-first search algorithm, we stored in prev the node from which each node was discovered. In this case, we use the same data structure to store the node from which we updated the distance to our node last time.
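As an illustration, here is a minimal Python sketch of this relaxation procedure. Representing dist, prev, and the edge weights as dictionaries is my own assumption, not the lecture's exact pseudocode:

```python
import math

def relax(u, v, weight, dist, prev):
    """Relax the edge (u, v).

    dist[x] is the current upper bound on the distance from the origin to x;
    prev[x] is the node from which dist[x] was last improved (None if never).
    If going through u gives a shorter path to v, update dist[v] and prev[v].
    """
    if dist[v] > dist[u] + weight[(u, v)]:
        dist[v] = dist[u] + weight[(u, v)]
        prev[v] = u

# Tiny example: a single edge from node 0 (the origin) to node 1 with weight 5.
dist = {0: 0, 1: math.inf}
prev = {0: None, 1: None}
relax(0, 1, {(0, 1): 5}, dist, prev)
print(dist[1], prev[1])  # 5 0
```

Note that relaxation never increases an estimate: if the new candidate path is not shorter, dist[v] and prev[v] are left untouched.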

Now we're ready to suggest a naive algorithm to solve our fastest route problem. The procedure Naive takes a graph G and an origin node S as inputs.

4:59

It uses dist values and prev values, the same as in breadth-first search: we initialize each dist[u] with infinity and each prev[u] with a pointer to nowhere, and we also initialize the dist of the origin node with zero. And then the only thing we do is relax all the edges. More specifically, on each iteration of the do-while loop, we try to relax each of the edges in the graph. And if at least one relaxation is effective, that is, some dist value changes, then we continue to the next iteration. And we only stop when a whole iteration, after going through all the edges in the graph, couldn't relax anything.
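Putting the initialization and the relaxation loop together, a minimal Python sketch of this naive procedure might look as follows. Representing the graph as a list of (u, v, w) edge triples is my own assumption:

```python
import math

def naive(n, edges, s):
    """Relax every edge, over and over, until a full pass changes nothing.

    n: number of nodes, labeled 0 .. n-1
    edges: list of directed edges as (u, v, w) triples
    s: the origin node
    Returns (dist, prev), as in the lecture.
    """
    dist = [math.inf] * n   # upper bounds on distances from s
    prev = [None] * n       # node from which each dist was last improved
    dist[s] = 0
    changed = True
    while changed:          # the do-while loop from the lecture
        changed = False
        for u, v, w in edges:
            if dist[v] > dist[u] + w:   # can the edge (u, v) be relaxed?
                dist[v] = dist[u] + w
                prev[v] = u
                changed = True
    return dist, prev

# Example: edges 0->1 (weight 4), 0->2 (weight 1), 2->1 (weight 2).
dist, prev = naive(3, [(0, 1, 4), (0, 2, 1), (2, 1, 2)], 0)
print(dist)  # [0, 3, 1]
```

In the example, the direct edge 0->1 of weight 4 is relaxed first, but the detour through node 2 later improves dist[1] to 3, illustrating why estimates may change several times before settling.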

5:57

To prove that, assume for the sake of contradiction that at some point no edge can be relaxed, and yet there is a vertex v such that the distance estimate of this vertex is bigger than the actual distance from the origin to the node. We know that this distance estimate cannot become less than the actual distance, because it is always an upper bound. It starts from infinity, and it is only decreased when we find a better path from the origin to this node. So there is some path from the origin to this node of length exactly dist[v], and so dist[v] cannot be less than the actual distance from the origin to this node.

And this also means that there can be no situation where we keep doing relaxations, iteration after iteration, and never stop. Because after any successful relaxation, some distance estimate is decreased by at least one (assuming integer edge weights). If we started with infinity, the value just becomes finite, and that can happen at most as many times as there are nodes. And if the value was already finite, it just decreases by at least one. So if we start with some number of finite values which are bigger than the actual distances, and on each iteration we decrease at least one of them by at least one, this process cannot be infinite. At some point it comes to the stage where the actual distance and the distance estimate are the same, and so the edge cannot be relaxed anymore. And if that happens for all the edges, our algorithm will stop. So our algorithm definitely stops.

The question is whether it arrives at exactly the same dist values as the distances from the origin to the corresponding nodes. For contradiction, we assume that it does not, and that for at least some node v the dist value is bigger than the actual distance from our origin. And then we consider some shortest path from S to this node v.

8:13

v is definitely a broken node, in the sense that its dist value is bigger than the correct distance, but there can be some other nodes on this path from S to v which have the same property, which are broken. Let u be the first node, counting from S, on this path which is broken in this sense. u is definitely not the same as S, because for S we know that the dist value is zero and the correct distance is zero. So u is at least the second node on the path, or maybe much later. There is a previous node on the path before u, and it is denoted by p. Let's look at S, p, and u. What we know is that p is not a broken node, and so its dist value is the same as the distance from the origin to this node p. And so we know that the distance from S to u is actually equal to the distance from S to p plus the weight of the edge from p to u.

Why is that? Because the part of the path from S to u is optimal, as it is a part of an optimal path from S to v. And the part of the path from S to p is also optimal, so the first equality is true: the distance from S to u is equal to the distance from S to p plus the weight of the edge from p to u. But we also know that the distance from S to p is equal to the dist value of p. And so the second equality is true: the distance from S to u is equal to the dist value of p plus the weight of the edge from p to u.
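These two equalities, together with the assumption that u is broken, can be summarized in one chain (dist denotes the algorithm's estimates, d the true distances):

```latex
\mathrm{dist}[u] > d(S, u) = d(S, p) + w(p, u) = \mathrm{dist}[p] + w(p, u)
```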

10:02

So the dist value of u, which is bigger than the actual distance from S to u, is bigger than the dist value of p plus the weight of the edge from p to u. But this inequality is exactly the property we check to determine whether the edge from p to u can be relaxed or not. So on the one hand we know that this edge can be relaxed, and on the other hand we know that we cannot relax any more edges in our graph, because our naive algorithm stopped, and this is a contradiction. So now we have proved by contradiction that our naive algorithm returns correct distances from the origin to all the nodes in the graph.

We won't analyze the running time of this algorithm, because in the next video we'll improve this algorithm and then we'll estimate its running time.
