The primary topics in this part of the specialization are: greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, optimal search trees).

From the course by Stanford University

Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming

From the lesson

Week 1

Two motivating applications; selected review; introduction to greedy algorithms; a scheduling application; Prim's MST algorithm.

- Tim Roughgarden, Professor, Computer Science

Okay, let's continue the proof of correctness of our greedy algorithm for minimizing the sum of the weighted completion times, and let's move on to understanding the ramifications of the exchange of jobs suggested at the conclusion of the previous video. So recall the basic idea is to use the observation that the optimal schedule sigma star, by virtue of our assumption that it differs from the greedy one, has to have a pair of consecutive jobs where the earlier one has the higher index.

Â So my question for you involves thinking about how the completion times of all of

Â the jobs change after we make this exchange of the two jobs i and j.

Â Which ones go up? Which ones go down?

Â Which ones are unaffected? Which ones can we not actually predict

Â whether they go up or down? All right, so the answer to this, quite

Â key question is the third answer. Jobs other than I and J are unaffected, the

Â completion time of job I goes up, and the completion time of job J goes down.

Â So let's review why. Consider a job K, other than I or J, it's probably easiest

Â to see if in sigma-star, K completes before I and J, scheduled earlier than I

Â and J. Don't, remember what the completion time

Â of a job is, it's just the time that needs to elapse before it gets done, so

Â it's the sum of the job lengths up to, and including that job itself.

Â So if K was scheduled before I and J before, it's exactly the same after I and

Â J are swapped. You don't know the difference.

Â Exactly the same set of jobs precedes Job K is the force whose completion time is

Â the same. But if you think about it, this exact

Â same argument is true for jobs K that succeed INJ.

Â So before we do the swap, it has to wait for a bunches ops to complete, including

Â INJ and after we swap INJ, it still has to wait for INJ to complete.

Â Yeah, they complete in opposite order but who cares?

Â The amount of time of the lapse is exactly the same.

Â So importantly, jobs other than INJ are completely agnostic to the swap.

Â Their completion time is exactly the same as before.

Â So that's the first. Part.

Â so job I, it's completion time goes up. It's easier to see why it used to be

Â before J, now it has to wait for J. In fact we can say exactly how much

Â completion time goes up by. It goes up by exactly the length of J.

Â That is the extra time that now needs to elapse before I gets completed.

Â By exactly the same reasoning the completion time of job j actually drops

Â it has to wait for the same jobs to complete before it being accepted no

Â longer has to wait for job I. So, not only can we say its completion

Â time goes down we can say that it goes down so by precisely the length of job I.

Â So now we are in a great position to finish off this proof.

Â Let's summarize what we got so far. So, for a cost benefit analysis of

Â exchanging I and J, we discovered the cost is limited to an increase in the

Â completion time of job I and the benefit is limited to the decrease in the

Â completion time of job J. Specifically the cost, the new cost

Â incurred by the exchange is the weight of job I times the amount by which it's

Â completion time went up, namely the length of job J.

Â Similarly, the benefit of the exchange is the drop of LI and J's completion time

Â and that gets multiplied by it's weight's of WJ.

Â So now finally we are at the point where we can use the fact that this purportedly

Â optimal schedule sigma star schedules I and J in some sense incorrectly,

Â with a higher index job first despite it having a lower ratio in contrast to the

Â principles followed by our greedy algorithm.

Â So why is it relevant that the earlier job I has a higher index?

Â Well, a higher index corresponds to a lower ratio.

Â Remember we did this notation switch so that the first job has the highest ratio,

Â the second job has the second highest ratio and so on.

Â So the bigger your index, the further you are down in the ordering, the lower your

Â ratio. So because I is bugger than J, that means

Â I's ratio is worse than J. The usefulness of this becomes even more

Â apparent when we clear denominators. We multiply both sides by the, by LI

Â times LJ. And then we find that WI times LJ is

Â strictly less than WJ times LI. But what are these, these are just the

Â cost and benefit terms respectively of our thought experiment exchange,

Â exchanging I and j. So what does it mean that the benefit of

Â the swap outweighs the cost? It mean if we take sigma star, this

Â reportedly optimal solution, and we invert the jobs I and j, we get a

Â schedule with an even better weighted sum of completion times, but that is nuts.

Â That's absurd, sigma star was supposed to be optimal.

Â So here's our contradiction. That completes the proof.

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.