
If you're taking a local search approach to a constraint satisfaction problem, something like 2-SAT or 3-SAT, a common approach is to define two variable assignments to be neighbors if and only if they differ in the value of a single variable. So, for the Boolean case, where variables are either true or false, you define two assignments to be neighbors if you can get from one to the other by flipping the value of a single variable.
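As an illustrative sketch (my own code, not from the lecture), here's what this single-flip neighborhood looks like for Boolean assignments; the `neighbors` helper is a hypothetical name:

```python
def neighbors(assignment):
    """Yield every assignment differing from `assignment` in exactly
    one variable (the single-flip neighborhood for 2-SAT / 3-SAT)."""
    for i in range(len(assignment)):
        flipped = list(assignment)
        flipped[i] = not flipped[i]   # flip the i-th variable
        yield tuple(flipped)

# An assignment over 3 Boolean variables has exactly 3 neighbors.
three_vars = (True, False, True)
flip_neighbors = list(neighbors(three_vars))
```

Note that an assignment over n variables has exactly n neighbors under this definition.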

For the traveling salesman problem, there are a lot of sensible ways to define neighborhoods. One simple and popular approach is to define two traveling salesman tours to be neighbors if and only if they differ in the minimum possible number of edges. If you think about it for a second, you'll realize that the minimum possible number of edges in which two TSP tours can differ is two. So, for example, in pink I've shown you two neighboring TSP tours: the first tour has edges from u to v and from w to x, but by contrast the second tour has the crisscrossing edges from u to x and from v to w. All of the other edges in the two tours are the same.
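To make the two-edge difference concrete, here's a sketch (my own illustration) of the move that generates these neighbors: on a tour stored as a vertex sequence, swapping one pair of edges for a crisscrossing pair amounts to reversing the segment between them.

```python
def two_change(tour, i, j):
    """Return the neighboring tour obtained by reversing tour[i..j].
    This removes the two edges entering and leaving the reversed
    segment and replaces them with a crisscrossing pair; every other
    edge of the tour is left unchanged."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

# Reversing the segment v..w of the tour u-v-a-b-w-x removes the edges
# {u,v} and {w,x} and adds {u,w} and {v,x}; the rest stay the same.
tour = ['u', 'v', 'a', 'b', 'w', 'x']
neighbor = two_change(tour, 1, 4)
```

The exact labels of the two new edges depend on the tour's orientation, but in every case exactly two edges are swapped out.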

Once you've identified the space of candidate solutions to your computational problem, and once you've settled on a neighborhood for every candidate solution, you're in a position to run local search. Local search just iteratively improves the current solution, always moving to a neighboring solution, until you can't improve things any further.

I'm going to specify here a highly underdetermined version of local search, to highlight the number of design decisions that you've got to make if you really want to apply local search to a problem in your own projects. We'll talk about possible instantiations of the different design decisions in the next couple of slides.

So, for starters, you have to initialize the search with one of the candidate solutions, some little x. How do you pick it? Well, there are a number of approaches; we'll discuss that in more detail in a sec. Now, just like in our maximum cut local search algorithm, we take the current solution, whatever it is, x, and we look for a neighboring one, y, which is superior. If we find such a y, then we move to that superior solution. If there is no superior y, we're locally optimal, and we just return the current solution.
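The generic procedure can be sketched as follows (my own rendering; the parameters `initial`, `neighbors`, and `objective` stand in for the design decisions the lecture leaves open):

```python
def local_search(initial, neighbors, objective):
    """Generic local search (maximization): repeatedly move to a
    superior neighboring solution; halt once none exists."""
    current = initial
    while True:
        # Look for a neighboring solution y that is strictly better.
        superior = next((y for y in neighbors(current)
                         if objective(y) > objective(current)), None)
        if superior is None:
            return current        # locally optimal: return current solution
        current = superior        # move to the superior neighbor

# Toy usage: maximize -(x-3)^2 over the integers; neighbors are x-1, x+1.
peak = local_search(0, lambda x: (x - 1, x + 1), lambda x: -(x - 3) ** 2)
```

Everything interesting hides in the three callbacks: how you pick the starting point, which neighbor you move to, and what "superior" means.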

Our local search algorithm for the maximum cut problem, which we discussed last video, is in fact an instantiation of this generic procedure, where the set X of all solutions is just the cuts of the given graph, and two cuts are defined to be neighbors if and only if you can get from one to the other by moving a single vertex from one group to the other. In that algorithm, just as here, we started from some arbitrary solution, some arbitrary cut. We repeatedly searched for a superior neighboring solution; that is, we tried to see if there was a way to move a vertex from one group to the other to get a better cut. If we found such a superior neighboring solution, we iterated from that better solution, and when we couldn't iterate any more, when there were no better neighboring cuts, we just halted and returned the final result.
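Here's a sketch of that instantiation in code (my own illustration of the procedure from last video): a cut assigns each vertex to group 0 or 1, and a single move flips one vertex's group.

```python
def local_search_max_cut(vertices, edges, side):
    """Move single vertices across the cut until no move helps."""
    def cut_value(s):
        # Number of edges crossing the cut under assignment s.
        return sum(1 for u, v in edges if s[u] != s[v])
    improved = True
    while improved:
        improved = False
        for v in vertices:
            flipped = dict(side)
            flipped[v] = 1 - flipped[v]          # move v to the other group
            if cut_value(flipped) > cut_value(side):
                side, improved = flipped, True   # superior neighbor: move
    return side

# 4-cycle, starting from the worst cut (everything on one side).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
final = local_search_max_cut(range(4), edges, {0: 0, 1: 0, 2: 0, 3: 0})
```

On this instance the local optimum happens to be globally optimal: all four edges end up crossing the cut.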

You can apply this generic local search procedure to other problems in exactly the same way. As an example, think about the traveling salesman problem. Let's say we define neighborhoods like we did on the last slide, with two tours being neighbors if and only if they differ in exactly two edges (equivalently, they have n - 2 edges in common). What do you do? You start from some arbitrary traveling salesman tour, and now you iterate. You keep looking for a neighboring superior solution; that is, from the current tour, you consider all tours you can reach by changing exactly two edges of the current tour, and you check whether any of those n choose 2 solutions has strictly smaller cost. If any of them does, you iterate from that new superior solution. When you can't iterate anymore, when all of the neighboring solutions are at least as costly as the one you're working with, that's a locally optimal tour, and you return it as your final output.
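As a sketch (my own code), here's the whole TSP loop on a symmetric distance matrix, with each neighboring tour generated by reversing a segment, i.e. changing exactly two edges:

```python
import math
from itertools import combinations

def tour_cost(tour, dist):
    """Total length of the closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_change_local_search(tour, dist):
    """Examine all n-choose-2 segment reversals; move to any strictly
    cheaper neighboring tour; halt at a locally optimal tour."""
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(1, len(tour)), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_cost(candidate, dist) < tour_cost(tour, dist):
                tour, improved = candidate, True
    return tour

# Unit square: the starting tour crosses itself; the local optimum doesn't.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best_tour = two_change_local_search([0, 2, 1, 3], dist)
```

On this tiny instance the crossing tour 0-2-1-3 has cost 2 + 2√2, and a single move uncrosses it into the perimeter tour of cost 4.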

In the rest of the video, I want to continue discussing local search at a high level, talking about some of the design decisions that you've got to make, as well as some of the performance characteristics you can expect. After we finish this high-level discussion, we'll move on to some videos on another concrete example of local search: specifically, 2-SAT, an example of a constraint satisfaction problem.

Let's begin with three frequently asked questions about underdetermined characteristics of the generic local search procedure I just showed you.

Question number 1 concerns step number 1 of the algorithm. You need to somehow come up with this arbitrary starting point, x. How do you do it? To answer this question, let me crudely classify the situations in which you might be using local search into two categories. In the first category, you're really depending on local search to come up with a good approximate solution to an optimization problem; you really have no idea how to get close to an optimal solution other than through some local search approach. The second category of scenarios is where you have some pretty good heuristics that seem to be delivering pretty good solutions to your optimization problem, and you just want to use local search as a post-processing step to do even better.

So let's start with the first category, where all of your eggs are in one basket and you need local search to deliver a pretty good solution to an optimization problem. This is a tricky spot to find yourself in. Local search is guaranteed to deliver a locally optimal solution, but in many problems locally optimal solutions can be much, much worse than what you're looking for, a globally optimal solution. In the special case of the maximum cut problem, we had a performance guarantee of 50%, which is already only a so-so guarantee, but for most optimization problems even that kind of performance guarantee is out of the question: there are locally optimal solutions far worse than the globally optimal ones.

On the other hand, you know there are some good locally optimal solutions out there; in particular, the globally optimal solution is also a locally optimal one. Now, if you just run local search once, it's not clear how to evaluate the quality of the solution that's returned to you. You run the algorithm, it hands you a solution, it has cost 79,217. Is that good or bad? Who knows? An obvious approach to doing better is to run the local search many times, say thousands or even millions of times, and then, amongst all of the local optima that the different runs of your local search algorithm return, you pick the best one and make that your final solution.
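As a toy sketch (my own example, not from the lecture), here's the run-it-many-times idea on a tiny one-dimensional landscape with several local maxima: each trial hill-climbs from an independent random start, and we keep the best local optimum found.

```python
import random

def hill_climb(f, x, n):
    """Move to a strictly better adjacent point in {0,...,n-1}
    until none exists; return the resulting local maximum."""
    while True:
        better = [y for y in (x - 1, x + 1) if 0 <= y < n and f(y) > f(x)]
        if not better:
            return x              # x is a local maximum
        x = better[0]

# Landscape with local maxima of height 3 and 7 and a global maximum of 9.
landscape = [1, 3, 2, 1, 4, 7, 4, 2, 9, 5]
rng = random.Random(0)            # fixed seed, purely for reproducibility
trials = [hill_climb(landscape.__getitem__, rng.randrange(10), 10)
          for _ in range(30)]
best = max(trials, key=landscape.__getitem__)
```

Any single run can get stuck at height 3 or 7, but with many independent random starts, at least one trial is very likely to land in the basin of the global maximum.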

To encourage your local search algorithm to hand back different local optima when you run it over and over again, you want to be making some random decisions. An extremely common point to make a random decision is in step 1 of local search, in choosing your initial starting point x. So, for example, in the maximum cut problem, you'd want to start with a random cut, with each vertex equally likely to be in A or B. With the traveling salesman problem, you might want to start from a random tour, that is, a random permutation of the n vertices. And if it's a constraint satisfaction problem, you could begin by giving each variable, independently, a random value.

If this seems kind of like throwing darts at a dartboard, that's exactly what it is. It turns out there are a lot of stubbornly difficult problems out there for which the state of the art is not really substantially better than running independent trials of local search with different random initial positions and returning the best of the local optima that you find.

Now, let's suppose you're in this second,

happier category of scenarios, where you've got a better handle on your optimization problem. Maybe you've figured out how to use greedy algorithms, or maybe mathematical programming. In any case, you have techniques that generate close-to-optimal solutions to some optimization problem. But why not make these nearly optimal solutions even better? How do you do that? Well, feed the output of your current suite of heuristics into a local search post-processing step. After all, local search only moves to better solutions; it can only make your pretty good solution even better.
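In code, this post-processing idea is just the generic loop again (a sketch with hypothetical names): hand the heuristic's solution to local search as the starting point, and the result can only be at least as good.

```python
def polish(solution, neighbors, cost):
    """Local search as post-processing (minimization): since we only
    ever move to strictly cheaper neighbors, the output never costs
    more than the heuristic solution we started from."""
    while True:
        cheaper = [y for y in neighbors(solution)
                   if cost(y) < cost(solution)]
        if not cheaper:
            return solution       # locally optimal, and no worse than input
        solution = cheaper[0]

# Toy heuristic output x = 7 under cost (x - 3)^2; polishing reaches x = 3.
polished = polish(7, lambda x: (x - 1, x + 1), lambda x: (x - 3) ** 2)
```

The guarantee is one-sided: polishing never hurts, but how much it helps depends entirely on how good the heuristic's starting point already was.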
