0:00

Let's see how this substructure lemma naturally leads to a recursive search algorithm. The algorithm is recursive. I'm not going to bother to specify the base cases, which is when you're given a graph with at most 1 vertex, or when you're given a value of k equal to 0 or 1. So let's assume that the graph has at least 2 vertices and that k is at least 2.

I wouldn't be surprised if this algorithm reminds you of some of the recursive implementations we discussed for various dynamic programming algorithms. And in the same way those algorithms exhibited exponential running time without memoization, we're going to see an exponential running time here. For all of our dynamic programming algorithms, we were able to cleverly organize subproblems from smallest to largest, thereby eliminating redundantly solving the same subproblems over and over again, yielding polynomial running time bounds. Here, we're going to be stuck with the exponential running time bound. That's obviously not surprising, given that it's an NP-complete problem, but still, if you really want to understand this deeply, take this recursive search algorithm and try to come up with a polynomial-time version: try to come up with a small set of subproblems, ordered from smallest to largest, that you can solve systematically. You will inevitably be stymied in those attempts, but you'll better appreciate what's non-trivial about the recursive solution here.

In step 1 of the algorithm, we pick some arbitrary edge (u, v). Why is it useful to focus on an edge? Well, by the definition of a vertex cover, it has to include either u or v, so that's our initial clue. We're going to proceed optimistically. We're looking for a vertex cover of size k, so let's assume one exists. What does the substructure lemma tell us? It says that if such a solution exists, then necessarily either G_u or G_v, or both, themselves have small vertex covers, of size only k - 1. Moreover, it's a simple matter to extend such a vertex cover to a small one for G: you just augment it by the missing vertex. So let's first adopt as a working hypothesis that G_u has a small vertex cover of size only k - 1.

Let's recursively try to find it. If our recursive search comes back with a vertex cover of size k - 1 for G_u, we're good to go. We just augment it with the vertex u, and that's our vertex cover of size k for the original graph G. If that recursive call fails to find a small vertex cover, we say, oh well, then it must be that v is the key to getting a small vertex cover. So we do exactly the same thing: we recursively search G_v for a vertex cover of size k - 1. If the second recursive call also fails to find a small vertex cover, one of size only k - 1, then by the substructure claim, in the other direction, we know that the original graph G cannot have a vertex cover of size k.
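The recursive search just described can be sketched in a few lines of Python. This is an illustrative sketch, not the lecture's pseudocode: it assumes the graph is given simply as a set of edges, each a 2-tuple of vertices, so G_u and G_v are formed by filtering out incident edges.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size at most k, or None if none exists.

    edges: a set of 2-tuples (u, v). Deleting a vertex w together with
    every edge incident to it yields the subgraph G_w.
    """
    if not edges:
        return set()       # base case: nothing left to cover
    if k == 0:
        return None        # edges remain but the budget is exhausted
    # Step 1: pick an arbitrary edge (u, v); any cover contains u or v.
    u, v = next(iter(edges))
    for w in (u, v):
        # Optimistically hypothesize that G_w has a cover of size k - 1.
        remaining = {e for e in edges if w not in e}
        sub_cover = vertex_cover(remaining, k - 1)
        if sub_cover is not None:
            return sub_cover | {w}   # augment by the missing vertex
    # Both recursive calls failed: by the substructure lemma,
    # G has no vertex cover of size k.
    return None
```

The two recursive calls on `remaining` are the searches of G_u and G_v; returning `None` from both is the "report fail" branch discussed next.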

If it did, the substructure lemma tells us one of the 2 recursive calls would have succeeded. Since they both failed, we conclude correctly that the original graph did not have a small vertex cover. The correctness analysis is straightforward; formally, you would proceed by induction. The induction hypothesis would then guarantee the correctness of the 2 recursive calls on the smaller subproblems G_u and G_v: the recursive calls correctly detect whether or not there are vertex covers of size k - 1. The substructure lemma guarantees that we correctly compile the results of the two recursive calls into the output of our algorithm.

So if one of the two recursive calls succeeds, we're good to go: we just output the vertex cover of size k, as desired. And, this is the crucial part, if both of the recursive calls fail, then by the substructure lemma we know we correctly report "fail"; we know there is not a vertex cover of size k or less in the original graph G. So let's analyze the running time using an ad hoc analysis. Let's think about the number of recursive calls that there can be over the entire trajectory of the algorithm, or equivalently, the number of nodes in the recursion tree generated by the algorithm, and then we'll come up with a bound on how much work is done in a given recursive call, not counting the work done by its recursive subcalls.

And now the cool part, and this is really where the role of a small value of k comes into play: each time we recurse, we subtract 1 from the target size of the vertex cover that we're looking for. Remember, we are given, as input, the parameter k, and each of our recursive calls involves the parameter value k - 1. So what this does is limit the recursion depth, the depth of the tree generated by this algorithm, to be at most k. Initially you start with k, each time you recurse you decrement it by 1, and you're going to hit base cases by the time k is around 1 or 0. Putting those two observations together, branching factor bounded by 2 and recursion depth bounded by k, the number of recursive calls in total, over the entire lifetime of this algorithm, is going to be bounded by 2^k.
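As a quick sanity check on that counting argument: a recursion tree with branching factor at most 2 and depth at most k has at most 1 + 2 + 4 + ... + 2^k = 2^(k+1) - 1 nodes, which is O(2^k). A two-line check (the depth-20 figure is just an illustrative choice):

```python
def recursion_tree_nodes(depth, branching=2):
    # Level i of the tree holds at most branching**i nodes, for i = 0 .. depth.
    return sum(branching ** i for i in range(depth + 1))

# Depth 20: 2**21 - 1 = 2_097_151 nodes, within a factor of 2 of 2**20.
print(recursion_tree_nodes(20))
```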

Moreover, if you inspect the pseudocode, you see that not a whole lot of action is going on other than the recursive subcalls. So even with a very sloppy implementation, where you construct a G_u or a G_v from the input graph G, even if you do that crudely, you're going to get a linear time bound, O(m) work per recursive call, not counting the work that gets done later by the recursive subcalls. So that means an upper bound on the total amount of work done by this algorithm is the number of recursive calls, 2^k, times the work per call, m: O(2^k · m).

Now, this running time, you'll of course notice, is still exponential in k, but of course it has to be exponential, unless we intend on proving P = NP. Don't forget, the vertex cover problem is NP-complete. So what have we accomplished? We had a trivial exponential-time algorithm by brute-force search earlier. Why did we go through this rigmarole to come up with a second algorithm with this exponential running time bound? Well, this is a qualitatively better running time bound than we had before with naive brute-force search. Here are 2 ways to see why this running time bound is qualitatively superior. First, let's just inspect this running time from a mathematical perspective and ask how large we can take k while still having a polynomial-time algorithm. Recall that our naive brute-force search algorithm, where you just try each subset of k vertices, has running time Θ(n^k). So that's polynomial time if k is constant, and it's super-polynomial time if k is bigger than a constant. For this running time bound, 2^k · m, we can let k grow as large as the logarithm of the graph size and still have a polynomial-time algorithm. So now let's segue from these mathematical bounds to thinking through actually implementing these algorithms and running them on actual data.

So for naive brute-force search, you've got this running time of Θ(n^k), so even for a relatively small graph, let's say n is maybe 50 or 100, you're not going to be able to run this naive brute-force search algorithm except for extremely small values of k: 3, 4, maybe 5 if you're lucky. So naive brute-force search has a very limited range of applicability. By contrast, our smarter recursive search algorithm can accommodate a noticeably larger range of k's. Remember, the running time here is 2^k times the graph size, and so even for values of k up to 20, for many networks of interest you're going to be able to implement and run this algorithm in a reasonable amount of time.
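To put numbers on this comparison, here is some back-of-the-envelope arithmetic; the graph size n = 100, m = 1000 is a hypothetical example, not one from the lecture.

```python
n, m = 100, 1000              # hypothetical graph: 100 vertices, 1000 edges
for k in (5, 10, 20):
    brute_force = n ** k      # Theta(n^k): subsets tried by brute force
    smarter = 2 ** k * m      # O(2^k * m): the recursive search bound
    print(f"k = {k:2d}:  n^k = {brute_force:.1e},  2^k * m = {smarter:.1e}")
```

At k = 20, the brute-force count is about 10^40 operations, utterly hopeless, while 2^k · m is about 10^9, within reach of an ordinary machine.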
