It is common to represent the free C-space as a graph, where the nodes represent configurations

and the edges represent free paths between the configurations.

Once we have a graph, we need to search it to find a path between the start configuration

and a goal configuration.

The graph shown here has six nodes.

Let's imagine that the edges connecting them are roads, and the roads may be straight or

they may be winding.

We'd like to find a path connecting the start and the goal that minimizes the cost, which

is the total path length in this case.

We draw the roads as straight edges, but we assign each edge a weight indicating the length,

or cost, of the road.

For example, going from node 1 to node 3 has a cost of 18.

We now have a weighted undirected graph.

In this video I'll focus on an undirected graph, but it's easy to generalize graph search

to directed graphs.

For this graph, the shortest path has a cost of 30 and goes from node 1 to node 4 to node

5 to node 6.

To find the lowest-cost path, we will use A-star graph search, one of the most useful

graph-search algorithms.

A-star is used to find lowest-cost paths on a graph where the path cost is the sum of

the positive costs of the individual edges.

In addition to the graph, A-star search requires a function that calculates an optimistic cost-to-go

from any node to a goal node.

An estimate of the cost-to-go is optimistic if the actual cost-to-go can never be less

than the optimistic estimate.

The function that calculates the optimistic cost-to-go should satisfy two criteria: first,

it should be fast to evaluate, and second, the estimated cost-to-go should be reasonably

close to the actual cost-to-go.

A function that always estimates zero cost-to-go satisfies the fast-to-evaluate criterion but

fails the criterion of providing a good estimate.

A function that exactly calculates the cost-to-go by searching the graph fails the fast-to-evaluate

criterion.

For our example, a good choice for the optimistic cost-to-go function is one that calculates

the straight-line distance to the goal, as the crow flies.

This is fast to evaluate and guaranteed to be optimistic.

For node 1 of our example, the optimistic cost-to-go is 20, and for nodes 2 through

5 the optimistic cost-to-go is 10.
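As a concrete sketch, a straight-line cost-to-go can be computed from node coordinates. The coordinates below are hypothetical (the video doesn't give any); they are chosen only so that the resulting values match the ones just stated.

```python
import math

def straight_line_heuristic(coords, goal):
    """Optimistic cost-to-go: Euclidean distance from each node to the goal."""
    gx, gy = coords[goal]
    return {n: math.hypot(x - gx, y - gy) for n, (x, y) in coords.items()}

# Hypothetical coordinates (not from the video), chosen so that node 1 gets
# a cost-to-go of 20, nodes 2 through 5 get 10, and the goal node 6 gets 0.
coords = {1: (-20.0, 0.0), 2: (0.0, 10.0), 3: (-10.0, 0.0),
          4: (-6.0, 8.0), 5: (6.0, 8.0), 6: (0.0, 0.0)}

h = straight_line_heuristic(coords, goal=6)
print(h[1], h[2], h[6])   # 20.0 10.0 0.0
```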

With this as background, we can begin the search process.

Let's create a table to keep track of our progress.

The columns of the table correspond to the nodes 1 through 6.

First, the "past cost" refers to the cost of the best known path to this node.

Since node 1 is the start node, the past cost is zero; it doesn't cost us anything to get

to node 1.

The past cost for all the other nodes is infinity, since we don't know yet if there is a path

to any of these nodes.

Next, the optimistic cost-to-go is 20 for node 1, 10 for nodes 2 through 5, and zero

for node 6, since it is the goal configuration.

Next, we define the "estimated total cost" to be the estimated cost of the best solution

path that passes through that node.

The estimated total cost is the sum of the past cost plus the optimistic cost-to-go.

The estimated total cost is 20 for node 1 and infinity for all the other nodes.

Finally, the "parent node" of node i is the previous node on the best known path to node

i. Node 1 does not have a parent node, and we don't know of any paths to any of the other

nodes yet.

Now we define two lists: OPEN, which is a list of nodes to explore from, and CLOSED,

a list of nodes we have already explored from.

We initialize the list OPEN with node 1, the start node.

We also indicate its estimated total cost, which is 20.
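In code, the table and the two lists can be initialized like this (a minimal sketch, representing OPEN as a plain sorted list of pairs):

```python
import math

nodes = [1, 2, 3, 4, 5, 6]
heuristic = {1: 20, 2: 10, 3: 10, 4: 10, 5: 10, 6: 0}  # optimistic cost-to-go
start = 1

past_cost = {n: math.inf for n in nodes}   # best known cost to reach each node
past_cost[start] = 0
parent = {n: None for n in nodes}          # previous node on the best known path
est_total = {n: past_cost[n] + heuristic[n] for n in nodes}  # inf stays inf

# OPEN holds (estimated total cost, node) pairs in sorted order;
# CLOSED holds nodes we have already explored from.
OPEN = [(est_total[start], start)]
CLOSED = []

print(OPEN)   # [(20, 1)]
```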

Now we begin the search by exploring from the first node in OPEN.

We will explore all edges leading away from node 1.

The first edge goes to node 3.

Currently node 3 has no parent node, so we update that to indicate that node 3 can be

reached from node 1.

We also see that the cost to reach node 3 is 18, so we update the past cost to 18.

This means that the new estimated total cost is 28.

Now we add node 3 to the list OPEN, and we insert it in the list in sorted order according

to its estimated total cost, 28.

Next we take the edge from node 1 to node 4, and we update node 4 to indicate its parent

is node 1, its past cost is 12, and its estimated total cost is 22.

We now insert node 4 into OPEN, and it goes before node 3 because its estimated total

cost is lower.

Finally, we take the edge from node 1 to node 5, and we update node 5 to make its parent

node 1, its past cost 30, and its estimated total cost 40.

Then we insert node 5 into the sorted list OPEN.

We are done exploring from node 1, so we move node 1 to the list CLOSED.

This means we never have to revisit node 1.

Now the first node on the OPEN list, the node with the lowest estimated total cost, is node

4, so we mark it for exploration.

Node 4 connects to nodes 1, 5, and 6, but we don't have to consider node 1, since it's

CLOSED.

So let's take the edge to node 5.

Currently, node 5 indicates that its parent is node 1 and its past cost is 30.

But the cost of the path from node 4 is only 20, the sum of the past cost to node 4 plus

the cost of the edge from node 4 to 5.

Therefore, the new best path to node 5 goes through node 4 and has a past cost of 20.

We update node 5's information so the past cost is 20, the estimated total cost is 30,

and the parent node is 4.

We also reflect that change in node 5's information in the OPEN list.
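This update is the "relaxation" step of the search. A sketch with the numbers from this example; the cost of the edge from node 4 to node 5, which is 8, is inferred here from 20 minus 12:

```python
past_cost = {4: 12, 5: 30}   # values just before the update
parent = {5: 1}
heuristic = {5: 10}
edge_cost = 8                # cost of the edge from node 4 to node 5 (inferred)

tentative = past_cost[4] + edge_cost   # 12 + 8 = 20
if tentative < past_cost[5]:           # 20 < 30, so the path through 4 is better
    past_cost[5] = tentative
    parent[5] = 4
est_total_5 = past_cost[5] + heuristic[5]

print(past_cost[5], est_total_5, parent[5])   # 20 30 4
```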

Next we take the edge from node 4 to node 6.

We update node 6's information to reflect its past cost and estimated total cost of 32 and

its parent node 4, and we add node 6 to OPEN.

Even though node 6 is the goal configuration, our search is not done; we might still find

a lower-cost path to node 6 in the future.

We are now done exploring from node 4, so we move it to CLOSED.

Next we explore from node 3.

First we take the edge to node 2, we update node 2's information, and we insert node 2

in the sorted list OPEN.

Now we take the edge to node 6, and we see that the cost of the path through node 3 is

18 plus 15 or 33, higher than the past cost of an already known path to node 6.

Therefore we can ignore this edge.

Now we are done with node 3, so we move it to CLOSED, and we mark node 5 for exploration.

The only node it is connected to that is not in CLOSED is node 6, so let's explore that

edge.

The past cost to node 6 by a path passing through node 5 is the sum of node 5's past

cost, which is 20, plus the cost of the edge from node 5 to node 6, which is 10.

The new past cost is 30, which is less than 32, so we update node 6's information.

Its past cost and estimated total cost are now 30, and its parent node is 5.

We also update node 6's information in OPEN.

We are done exploring from node 5, so we move it to CLOSED.

Now the first node in OPEN is node 6, which is the goal configuration.

Because edge costs are positive and the cost-to-go estimates are optimistic, no path we

find in the future can reach this goal configuration at lower cost.

Therefore the search is done.

We can reconstruct the optimal path by going back to node 6's parent, node 5; then to node

5's parent, node 4; then to node 4's parent, node 1.

The path shown in green is the optimal path through the graph.
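The backtracking step can be sketched as follows, using the final parent table from this example:

```python
def reconstruct_path(parent, start, goal):
    """Follow parent pointers from the goal back to the start, then reverse."""
    path = [goal]
    while path[-1] != start:
        path.append(parent[path[-1]])
    return list(reversed(path))

# Final parent table from the search: node 6's parent is 5, node 5's is 4, etc.
parent = {2: 3, 3: 1, 4: 1, 5: 4, 6: 5}
print(reconstruct_path(parent, start=1, goal=6))   # [1, 4, 5, 6]
```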

To fully understand this algorithm, you may need to spend some time studying it in the

book, or better yet, you should implement it.

The algorithm can be summarized in this pseudocode.

First, we initialize the algorithm, then enter a while loop, where we mark the first node

in the OPEN list for exploration.

If this node is in the goal region, then the algorithm has succeeded in finding an optimal

solution, and the algorithm exits.

If not, then the algorithm explores each neighbor in the graph that is not already CLOSED.

For each neighbor node, the algorithm checks to see if it has found a new best path to

that node, and if so, it updates the node's past cost, estimated total cost, parent, and

position in the OPEN list.

If the OPEN list ever becomes empty, then there is no solution to the motion planning

problem.
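The pseudocode can be sketched in Python as follows. A binary heap stands in for the sorted OPEN list, with stale entries skipped instead of updated in place. The edge weights are the ones given in the example, except the weight of the edge between nodes 2 and 3, which the example never states; the value 12 below is a placeholder.

```python
import heapq
import math

def a_star(graph, heuristic, start, goal):
    """A-star search. graph[u] maps each neighbor of u to a positive edge cost."""
    past_cost = {n: math.inf for n in graph}
    past_cost[start] = 0
    parent = {}
    OPEN = [(heuristic[start], start)]   # (estimated total cost, node)
    CLOSED = set()
    while OPEN:
        _, current = heapq.heappop(OPEN)
        if current in CLOSED:
            continue                     # stale entry left by an earlier update
        if current == goal:              # goal popped first: optimal path found
            path = [current]
            while path[-1] != start:
                path.append(parent[path[-1]])
            return list(reversed(path)), past_cost[goal]
        CLOSED.add(current)
        for nbr, w in graph[current].items():
            if nbr in CLOSED:
                continue
            tentative = past_cost[current] + w
            if tentative < past_cost[nbr]:        # new best path to nbr
                past_cost[nbr] = tentative
                parent[nbr] = current
                heapq.heappush(OPEN, (tentative + heuristic[nbr], nbr))
    return None, math.inf                # OPEN is empty: no solution exists

# Edge weights from the example; the 2-3 weight is never stated in the video,
# so 12 is a placeholder (it does not affect the optimal path).
edges = {(1, 3): 18, (1, 4): 12, (1, 5): 30, (2, 3): 12,
         (3, 6): 15, (4, 5): 8, (4, 6): 20, (5, 6): 10}
graph = {n: {} for n in range(1, 7)}
for (u, v), w in edges.items():
    graph[u][v] = graph[v][u] = w        # undirected graph

heuristic = {1: 20, 2: 10, 3: 10, 4: 10, 5: 10, 6: 0}
print(a_star(graph, heuristic, 1, 6))    # ([1, 4, 5, 6], 30)
```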

A variation on this algorithm is to always choose the optimistic cost-to-go to be zero.

Then the past cost and the estimated total cost are equal.

This algorithm is called Dijkstra's algorithm, which preceded A-star historically.

Dijkstra's algorithm also finds optimal paths, but it's typically much slower than A-star,

because it doesn't use a heuristic cost-to-go to help guide the search.

So you should use a reasonable optimistic estimate if you can.
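Equivalently, Dijkstra's algorithm is A-star with the cost-to-go estimate fixed at zero, so the node with the lowest past cost is always explored next. A self-contained sketch on the same example graph, again with a placeholder weight of 12 for the unstated edge between nodes 2 and 3:

```python
import heapq
import math

def dijkstra(graph, start, goal):
    """Dijkstra's algorithm: A-star search with a zero cost-to-go estimate."""
    past_cost = {n: math.inf for n in graph}
    past_cost[start] = 0
    parent = {}
    OPEN = [(0, start)]                  # (past cost, node): no heuristic term
    CLOSED = set()
    while OPEN:
        cost, current = heapq.heappop(OPEN)
        if current in CLOSED:
            continue                     # stale entry from an earlier update
        if current == goal:
            path = [current]
            while path[-1] != start:
                path.append(parent[path[-1]])
            return list(reversed(path)), cost
        CLOSED.add(current)
        for nbr, w in graph[current].items():
            if cost + w < past_cost[nbr]:
                past_cost[nbr] = cost + w
                parent[nbr] = current
                heapq.heappush(OPEN, (past_cost[nbr], nbr))
    return None, math.inf

edges = {(1, 3): 18, (1, 4): 12, (1, 5): 30, (2, 3): 12,  # 12 is a placeholder
         (3, 6): 15, (4, 5): 8, (4, 6): 20, (5, 6): 10}
graph = {n: {} for n in range(1, 7)}
for (u, v), w in edges.items():
    graph[u][v] = graph[v][u] = w        # undirected graph

print(dijkstra(graph, 1, 6))   # ([1, 4, 5, 6], 30)
```

It finds the same optimal path, but notice that it must pop every node whose past cost is below 30 before it can stop, which is what makes it slower than A-star on large graphs.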

If you make a mistake and your optimistic cost-to-go function actually returns an estimate

greater than the actual cost-to-go, A-star may terminate with a solution that is not

optimal.