First, many shortest paths involve important nodes.

If you look at this map of the US, you will see that there are many,

many big routes coming out from big cities.

And that's no coincidence.

And since most long-distance shortest paths

go through these big roads, they also go through these big cities.

So big cities are important nodes.

And many shortest paths go through those important nodes.

Another idea is that the important nodes are somewhat spread around.

Because in each region of the map, we have some big cities,

the capital of the state or some other major city in the state,

which is an important node, such that many shortest paths again go through it.

You can't just have one single important node while everything else is unimportant.

You always have them spread out across the whole map.

And another thing is that important nodes are sometimes completely unavoidable.

For example, for many, many different source points in San Francisco

on the left shore and for many target points in Oakland on the right shore,

there is no shortest path from the source to the target

that bypasses Treasure Island in between.

And so this Treasure Island node is very important

because you cannot just avoid it with the shortest path.
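The "unavoidable bridge" idea can be demonstrated on a toy graph (the node names and structure here are hypothetical, just mimicking two city clusters joined by one island node): if we count how many pairwise shortest paths each node lies on, the bridge node comes out on top.

```python
from itertools import combinations
from collections import deque

# Toy unweighted graph: two triangles (a "San Francisco" cluster a,b,c and
# an "Oakland" cluster d,e,f) connected only through a bridge node,
# "island", playing the role of Treasure Island.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"),
         ("c", "island"), ("island", "d")]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def shortest_path(s, t):
    """BFS shortest path in an unweighted graph, recovered via parents."""
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    path = [t]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

# Count how many source-target shortest paths pass through each node.
hits = {v: 0 for v in adj}
for s, t in combinations(adj, 2):
    for v in shortest_path(s, t):
        hits[v] += 1

print(max(hits, key=hits.get))  # → island
```

Every cluster-to-cluster path is forced through "island", so it lies on more shortest paths than any ordinary node: that is exactly what makes it an important node.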

And the contraction hierarchies algorithm uses this scheme of shortest paths with

preprocessing, where you don't just get a graph and

then instantly start to answer queries like "how long does it take to get from A to B?"

Now instead you first get the graph and you get some time to preprocess it.

It can be a long process.

It can take a few hours or even days.

But then when you're ready, and you've saved the results of your preprocessing,

you can answer the queries for distance and shortest paths much faster.

And you work with this preprocessed graph to do that.
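The two-phase scheme can be sketched as follows. This is a naive illustration only, not contraction hierarchies itself: here the "preprocessing" is simply running Dijkstra from every node once (expensive, done offline), after which every query is a table lookup (the toy graph is hypothetical).

```python
import heapq

# Hypothetical toy road network: node -> {neighbor: edge length}.
graph = {
    "A": {"B": 5, "C": 2},
    "B": {"A": 5, "D": 1},
    "C": {"A": 2, "D": 7},
    "D": {"B": 1, "C": 7},
}

def dijkstra(source):
    """Standard Dijkstra: distances from source to all reachable nodes."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Preprocessing phase: may take hours or days on a real road network,
# but it runs once, behind the scenes.
table = {u: dijkstra(u) for u in graph}

# Query phase: answered instantly from the precomputed results.
def query(s, t):
    return table[s][t]

print(query("A", "D"))  # → 6  (A -> B -> D: 5 + 1)
```

Real systems cannot afford to store all-pairs distances for a continental road network; contraction hierarchies get the same fast-query effect with far less preprocessing output, but the split into an offline phase and an online phase is the same.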

And this is a very practical case.

For example, in Google Maps or in Yandex Maps, that's what people do.

They first preprocess the graph.

And then they push the preprocessed graph and the algorithm into production.

And then they can answer your queries millions of times faster

without you noticing it.

And it doesn't matter that we have to spend a few hours or a few days before

pushing the graph into production because it happens behind the scenes.

And so the general scheme is that you first preprocess the graph, then run

a special algorithm to find the distance and shortest path in this preprocessed graph.

And then you reconstruct the shortest path in the initial graph and

show it to the user somehow.
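The last step, recovering the actual shortest path in the initial graph rather than just the distance, is usually done with predecessor pointers: store where each node was reached from during the search, then walk back from the target. A minimal sketch, assuming a plain Dijkstra search on a hypothetical toy graph:

```python
import heapq

# Hypothetical toy graph: node -> {neighbor: edge length}.
graph = {
    "A": {"B": 5, "C": 2},
    "B": {"A": 5, "D": 1},
    "C": {"A": 2, "B": 1, "D": 7},
    "D": {"B": 1, "C": 7},
}

def shortest_path(s, t):
    """Dijkstra with predecessor pointers; returns the node sequence s..t."""
    dist = {s: 0}
    parent = {s: None}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                parent[v] = u  # remember how we reached v
                heapq.heappush(pq, (d + w, v))
    # Walk the predecessor pointers back from t to reconstruct the path.
    path = []
    node = t
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

print(shortest_path("A", "D"))  # → ['A', 'C', 'B', 'D']  (length 2 + 1 + 1 = 4)
```

In a preprocessing-based system like contraction hierarchies, the search runs on the preprocessed graph, so this unpacking step also has to expand any shortcut edges back into original edges before the route is shown to the user.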