
Discrete Optimization, Constraint-based Scheduling.

So what I'm going to do today is talk about one of the most fascinating areas of Discrete Optimization, which is scheduling.

There are entire books on this, there are journals just on this topic, but what I'm going to try to do is to make you curious, so that you can learn more about these things.

There are beautiful algorithms, beautiful search techniques, and you'll see some of them. So I hope I'll get you excited about this area, because it's at the same time beautiful scientifically, and there are a lot of applications in practice.

So what I'm going to do is talk only about scheduling using constraint programming. There is also an entire branch of scheduling using mixed integer programming, local search, and so on. But this is one of the areas where constraint programming has been really, really successful.

I'm going to talk about modeling a little bit, and about global constraints, and you'll see some very interesting issues arising in this context. And you'll see a lot of very nice techniques behind the scenes. Okay?

So as I said, it's a very successful application area for constraint programming. And one of the reasons is that you can use all the flexibility of constraint programming. These problems are typically very complex, and you have a lot of different kinds of constraints, and you can put them together using constraint programming, okay?

So this is a typical example of what you can have in this area. You are minimizing the duration of a big project, and that project has a lot of things to schedule, okay? And there are different precedence constraints between them, so some tasks have to come before others, and so on. And also, some of these tasks are going to use the same resources, so sometimes they can't overlap in time, or only a limited number of them can overlap in time, and things like this, okay? So this is the kind of problem that we are going to look at, okay? So this is an example, okay?

So we want to build this bridge. [SOUND] Okay? So typically we can't do it in one step, okay? So there are a variety of tasks, oops. No, I want to do this again, because you missed the best part here. Okay, so we don't want to build that bridge like this. You see that little truck? Okay, anyway.

So you have various tasks that you see here, okay? Obviously you have to build the bottom; you have to do the excavation before you can actually do the pillars, and so on. So you have precedence constraints between some of these tasks: you can't build the top of the bridge before the bottom. And then obviously some of these tasks are going to use exactly the same resources, and they have to share them. And that's going to put some constraints on how you can actually schedule all these different activities for building this bridge, okay? So that's the kind of problems that we are looking at, okay?

Of course, you have other problems in the steel industry, the paper industry, all these things; typically these are scheduling problems like that.

So typically, when you model these problems these days, what constraint programming languages, or local search algorithms, are doing is basically building abstractions just for that particular area. The area is so big that you want a dedicated language that talks about activities, and resources, and things like that. We sometimes call this model-based computing, okay? It's going to make it easier to actually state the problem, okay?

So you'll see concepts of activities, resources, precedence constraints, and so on, and I'm going to give you a sense of what this looks like, although I won't go into the details, okay?

Now, behind the scenes, these are going to be compiled into decision variables and global constraints, and things like this, okay? So essentially it's a high-level language which is compiled into the typical language of constraint programming behind the scenes.

And then these concepts are also going to support dedicated search algorithms, because we can exploit some of the structure that you have in these applications. And I will talk a little bit about that. But there is much more behind the scenes, and I won't be able to talk about all the techniques that we can use here, okay?

So one of the things that is the TSP of scheduling is called the jobshop scheduling problem, okay? It's probably the simplest scheduling problem, but it's also very hard, okay? It's very simple to state, and people have been competing on this forever, okay? There are a lot of standard benchmarks on this, and there are also a lot of open problems. It's not very difficult to actually generate instances that nobody can solve optimally, okay?

So what I'm going to give you now, in two slides, is the specification of the problem. I'm abstracting some of the information on this slide, but essentially what we have is a set of tasks that we have to schedule, okay? And every one of them has a duration, okay? We're going to assume that we know the duration; it's fixed in this particular case.

Now, we also know that every task is actually using one resource. This is a simplification, right? In real life they typically use different resources, but here they only use one resource, one machine, okay? And we specify which machine every task is using.

And then we have a set of precedence constraints, okay, of the type (b,a), which basically means that a can only start after b has completed execution. You have to finish b before you can start a, okay? Think about it like excavating before you start building the bridge, right?

And so these are essentially the data and the various constraints. And what you want to do is schedule all these tasks so that you minimize the project completion time: you want this project to finish as early as possible.

Now, in the jargon of scheduling, this is called the makespan, okay? This is one of the most famous objectives; there are many others, okay? It is one of the most studied objective functions for scheduling, okay?

So let me give you a picture so you can really see what the jobshop is, okay? The precedence constraints are coming from the jobs. Every one of the lines you see on this slide is a job, okay? So this is, let's say, the start of the project, and then you see the sequence of tasks inside a job. And the semantics here is that the first task has to be completed before the second one starts, and then the second one has to be completed before the third starts, and so on and so forth. So this is where the precedence constraints are coming from, okay? They are coming from these jobs, which are basically sequences of activities.

And then the colors that you see in this picture are basically the machines that every one of these activities, every task, has to use, okay? So basically, all the tasks which are colored in yellow have to use the same machine, okay? I am also showing you here all the tasks of a particular machine, which basically form a machine clique, okay? All these tasks are using the same machine, and therefore they cannot overlap in time, okay? For every two of them, we have to satisfy the fact that one has to be before or after the other, okay? So when you think about it, for solving these problems, one of the things that you have to find is an order for every one of these machines.

And what I have shown you on this slide is basically all the possible links between every two tasks using that machine, okay? And what we have to do is to choose a subset of them which is basically making a sequence, okay? So in a sense, the way to think about this is that a machine has to handle all these tasks sequentially. They can be ordered in an arbitrary fashion; of course, you have to satisfy the precedence constraints on the jobs. But mostly, what you're doing is trying to find, for every one of these machines, an ordering of all these tasks, okay?

So this is one of them, for instance, okay? You start over there, and then you go over, ooh, where do we go? We go over, let me see. This is really difficult to see. We go here, and then [LAUGH] we go down, and then we go up, and so on and so forth, okay? So you basically have one ordering for that particular machine. All the tasks here are basically ordered, and the machine has a well-defined ordering for every one of the tasks, okay?

At this point, if you do that for all the machines, if you order all the machines, the only thing that you are left with is what? You are only left with precedence constraints, okay? You have this big graph with all the original precedence constraints and all the ones which come from the ordering of the machines, okay? And so at this point I have a problem where the only thing I have to do is minimize the project duration subject to precedence constraints. And the beautiful thing is that this is a polynomial-time problem, okay? We can solve this problem in polynomial time, okay? We can use topological sorting, which in the popular literature is called the PERT algorithm. Or, if the precedence constraints are more sophisticated and also have distances, you would use a transitive closure; but essentially this problem can be solved fast, and obviously in polynomial time, okay? So in a sense, the only thing we have to

do in this particular problem is order the machines. You don't have to choose the starting dates, okay? This step here at the end is going to do that automatically, okay?
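To make that last step concrete, here is a minimal sketch of it: once every machine is ordered, computing the earliest start of every task is a longest-path pass over the precedence graph in topological order, which is the PERT idea the lecture mentions. The task names and numbers below are made up for illustration; this is not the course's own code.

```python
from collections import defaultdict, deque

def earliest_starts(durations, precedences):
    """Earliest start of each task under precedence constraints (b, a),
    meaning a can only start once b has finished.  One longest-path
    (PERT) pass over a topological order of the precedence DAG."""
    succs = defaultdict(list)
    indeg = {t: 0 for t in durations}
    for b, a in precedences:
        succs[b].append(a)
        indeg[a] += 1
    start = {t: 0 for t in durations}
    queue = deque(t for t in durations if indeg[t] == 0)
    while queue:
        b = queue.popleft()
        for a in succs[b]:
            # a cannot start before b has completed
            start[a] = max(start[a], start[b] + durations[b])
            indeg[a] -= 1
            if indeg[a] == 0:
                queue.append(a)
    return start

# Hypothetical bridge-like instance: excavate before pillars before deck.
dur = {"excavate": 3, "pillars": 5, "deck": 2}
prec = [("excavate", "pillars"), ("pillars", "deck")]
s = earliest_starts(dur, prec)
makespan = max(s[t] + dur[t] for t in dur)
print(s, makespan)
```

Given the ordering of the machines (which just adds more precedence arcs), this pass assigns all starting dates automatically, which is exactly why only the machine orderings have to be searched for.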

You just have to find the right ordering for all the tasks on every one of the machines, okay?

So let me show you an example of a constraint programming model for this particular problem. Once again, I don't want you to understand this in detail. What I want to show you, and illustrate, is the kind of languages that people have built for modeling these scheduling applications at a high level, okay?

So at the beginning here, the first three lines are basically boring: they are the data. For every job and every task in that job, you have a duration, and then you have the machine on which that particular task of that particular job is executing. We compute a time horizon, which is basically the latest time by which we can schedule all these activities. So this is basically data; it's basically boring.

What is a little bit more interesting here, okay, is the description, at a high level, of the decision variables and some of the resources that are going to be used for stating the constraints. What you see there is that we are basically defining a schedule, and then we are defining these activities, okay? For every job, and every task in the job, we are defining an activity, and that activity has a particular duration, okay? This activity will have a starting date and an ending date. And this is the level at which we want to reason here, okay? We want to be thinking about this activity, its starting date, its ending date, possibly its duration if it's not fixed, okay? And we are basically expressing that as a high-level concept on which we want to express the model, okay?

We also have an activity which is the makespan; that's essentially the completion time of the project, and it has duration zero. It is basically a dummy activity that we are putting at the end, and we put a precedence constraint from all the tasks, essentially, to that particular activity.

Then what we have here is a unary resource. This is basically something that tells you that if there are various tasks which are actually using that resource, they won't be able to overlap in time; you will have to schedule one before the other, okay? So this is basically the description of your decision variables and the various resources that you will use for stating the problem, okay?

The rest is very much like the constraint programs that we have seen before. What we are doing is minimizing the end date of the makespan, okay? It's the same as its starting date, because the duration is zero, okay? So that's essentially modeling the duration of the project.

And then what you see there are basically the various constraints, okay? The first set of constraints are basically the precedence constraints inside the jobs, okay? You look at all the jobs and all the tasks in each job, okay, and you are stating that task t in job j has to precede task t+1 in that same job j, okay? That's basically what you are seeing. Of course, the system here understands what it means to be a precedence constraint, and it will translate that into a simple inequality; I'll show you that on the next slide, okay?

Then, in between the two, you have the precedence constraints saying that the makespan is after everything. And the last part that you see here is basically the resource constraints: for every one of the tasks in every one of the jobs, you are specifying which resource it is using, okay? And since you know that no tasks can overlap on these resources, you are basically specifying implicitly the kind of disjunctive constraints that this problem has to satisfy, okay?

So that's, at a high level, the kind of modeling that you have in these scheduling applications. Of course, in practice, the resources are typically more complicated, the precedence constraints are more complicated. There may be additional side constraints, and things like this.

But this is kind of the simplest scheduling problem; it's the TSP of scheduling problems, okay?

Now, behind the scenes, this high-level model is compiled into a particular set of decision variables and constraints, okay? In a sense, every activity has three variables: the starting date of that particular activity, the duration of that activity, and the end date of that activity. So every activity can be thought of as three variables: the starting date, the end date, and the duration. Now, there is redundancy, especially if the duration is fixed, but we typically use these three because it's really convenient for expressing some of the algorithms. Of course, you have a constraint linking them: the constraint is basically stating that the starting date plus the duration is equal to the end date. So in a sense, an activity has to be thought of as giving you three interdependent decision variables linked by this constraint.

Then the other thing is how to express the precedence constraints. If you have a precedence constraint between b and a, it simply becomes a very simple, tiny inequality: s_a has to be greater than or equal to e_b, okay? It's very simple, okay? The most interesting part here, and

that's what I want to focus on a little bit in the last part of the lecture, is the resource constraints. These resource constraints are going to be compiled into global constraints which are disjunctive, okay? And they are basically specifying that all the activities there, t1 to tn, cannot overlap in time, okay? So it's this disjunctive constraint which is really important.

So, if you think in terms of the constraint programming model that I've presented earlier in this class, okay, every one of these precedence constraints is one of these guys, okay? And then you also have the global constraints: the disjunctive constraint for every one of the resources is going to be one of these constraints. And of course, they are going to interact through the decision variables, which are the starting dates and the end dates. Not the durations, because they are fixed; but essentially the starting and end dates of every one of the activities, okay? So that's basically the overall model, what's happening behind the scenes.
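As a rough sketch of what that compilation produces (the class and function names below are hypothetical, not the actual solver internals): each activity carries bounds on its start, the end is tied to the start by start + duration = end, and a precedence (b, a) tightens the bounds on both sides.

```python
class Activity:
    """An activity as three linked variables: the solver keeps bounds on
    the start, and start + duration = end ties the end bounds to them."""
    def __init__(self, duration, horizon):
        self.dur = duration
        self.s_min, self.s_max = 0, horizon - duration
    @property
    def e_min(self):           # earliest possible end
        return self.s_min + self.dur
    @property
    def e_max(self):           # latest possible end
        return self.s_max + self.dur

def propagate_precedence(b, a):
    """Precedence (b, a): a starts after b ends, i.e. s_a >= e_b.
    Tighten the start bounds of both activities accordingly."""
    a.s_min = max(a.s_min, b.e_min)          # a cannot start before b can end
    b.s_max = min(b.s_max, a.s_max - b.dur)  # b must end by a's latest start

excavate, pillars = Activity(3, horizon=20), Activity(5, horizon=20)
propagate_precedence(excavate, pillars)
print(pillars.s_min, excavate.s_max)
```

This is only the precedence half; the interesting propagation, as the lecture says next, happens inside the disjunctive constraint.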

What I want to focus on in the rest of the lecture, because this is truly beautiful, is the disjunctive constraint, and how you can actually implement algorithms that effectively prune the search space for this disjunctive constraint. So I'm going to illustrate some of these issues on a very simple example, okay?

So we have three activities that are requesting that resource. One has duration six, the other one four, and the last one three. And what you see there is essentially the time window in which they have to be scheduled, okay? So this, for instance, for the task of duration three, is the earliest starting time at which that task can start, okay? Let's say one, okay? And this is the latest finishing date for that particular task, okay? So in a sense, this is the minimum value in the domain of the starting date, and this is the largest value in the domain of the ending variable of the activity, okay? So every one of these guys can be scheduled anywhere in its time window, okay? And what we have to do is choose where to place them such that they don't overlap in time, okay? And we have to find out if this is feasible, okay? And how fast we can do that.

Now, there is one very interesting thing here: even this simple one-machine problem is NP-hard, okay? So I did this beautiful model with this disjunctive global constraint, and there is no way I can test its feasibility in polynomial time. So what are we going to do?

Okay, so we could totally decompose this thing into very tiny disjunctive constraints, okay, and you could verify those in polynomial time, okay? But that basically loses the entire globality of the problem. So you really don't want to do that, because you will get nowhere in trying to solve practical problems. So what we are going to do is that, instead of solving this problem exactly, we will relax it. And I am going to show you that in a moment, okay? There are a lot of algorithms to do that. I'm just going to make you curious here, okay, and give you basically the intuition behind some of them, and point you to an entire set of algorithms in this space, okay?

And I'm going to use a little notation. I think it's intuitive, but I'm going to go slowly, because we are doing something which is a little bit fancy here, okay?

So omega here, in all the formulas on the subsequent slides, is a set of tasks, okay? And I'm going to denote by s(omega), think starting date of omega, the earliest time at which one of the tasks inside omega can start, okay. And the way I can compute that is that I look at all the tasks in omega and I take the one which has the smallest possible value in the domain of its starting date. That's what I'm doing.

So, when you see s(omega), it's the earliest time at which any of the tasks in omega can start, okay? Then e(omega) is just the opposite, for the ending dates, okay? e(omega) is the latest finishing time of all the tasks in omega. So I am taking the max, over all the activities, of the maximum value of the end variable of every one of them, okay?

And then the duration of these activities is a little bit interesting as well, okay? The duration d(omega) is essentially the sum of the durations of all the tasks in omega, okay?

So remember this notation, because we are going to use it over and over and over again, okay? s(omega) is the earliest starting date of this set of tasks. e(omega) is basically the latest finishing date of any of the tasks in omega. And d(omega) is basically the total duration of all the tasks in omega, okay?

So at this point, what we are trying to find out is whether this constraint is feasible. That's one of the things that the constraint has to do, this disjunctive constraint: is it feasible? We know that this is a hard problem, and we are not going to try to solve it exactly. What we are going to do now is approximate it efficiently, okay?

Well, approximate it somehow. And so this is the first test that I'm proposing, okay? To check if a set of tasks T can be scheduled on this machine, what I'm going to do is check this little inequality: that s(T) plus the duration d(T) is smaller than or equal to e(T), okay? s(T) is basically the earliest starting date of T, d(T) is the total duration of all the tasks, and I make sure that when I add these two things, I can fit them before the latest finishing date e(T) of all these tasks, okay?

So when you look at this picture here: this is the earliest starting time, okay? This is the latest finishing date, okay? And then you add up all the durations of these tasks; that's basically the solid line inside every one of these tasks. And I want to be sure that when I sum the durations and add that to the starting date, I'm ending before the latest finishing date, okay? That's basically what you're testing here, okay?
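In code, the test is one line once s, e, and d are written down. A sketch, with tasks represented as (earliest start, latest finish, duration) triples; the concrete numbers are illustrative, loosely following the durations 6, 4, 3 from the example:

```python
def s(omega):  # earliest time at which any task in omega can start
    return min(est for est, lft, dur in omega)

def e(omega):  # latest time by which any task in omega may finish
    return max(lft for est, lft, dur in omega)

def d(omega):  # total work that has to fit in between
    return sum(dur for est, lft, dur in omega)

def basic_test(T):
    """Necessary condition for feasibility: all the work must fit
    between the earliest start and the latest finish of the set."""
    return s(T) + d(T) <= e(T)

# Three tight tasks (earliest start, latest finish, duration):
T = [(0, 8, 6), (0, 8, 4), (0, 8, 3)]
print(basic_test(T))                   # False: 0 + 13 > 8

# One isolated task with a huge window fools the whole-set test:
print(basic_test(T + [(0, 100, 1)]))   # True, although T alone cannot fit
```

The second call is exactly the weakness the lecture demonstrates next: the test is necessary but far from sufficient on the whole set.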

So the question I have for you is: is this an effective test? Is this something that's going to detect infeasibility reasonably well, okay? One of the things that I told you in the first lecture of this class, right, is that when you have a question like this, what you have to do is exaggerate, okay? So I'm going to exaggerate, okay?

So this is a set of tasks. Here the durations are fixed, and the windows are fixed, okay? And these tasks seem to be really tight; they don't have much space to be scheduled, okay? So this is not feasible, okay? If you look at these tasks, you can't schedule them. But I also have this little guy who is all alone, okay? Because I added this very isolated guy, notice that the earliest starting date is here and the latest finishing date is way over there. Obviously I have a lot of space here, so the test will say I am able to schedule.

So essentially, with this test I'm always going to say: yes, I believe it's feasible. Of course I don't ever prove it, but my test is going to say yes; it can never prove infeasibility, okay? So in a sense, this is a terrible test, okay? I can make it arbitrarily bad by adding some tasks that are basically doing nothing. So we don't like that; we want the test to be robust. [LAUGH] It's like with the relaxations, right? You try to annoy them. So what we want is a test which you cannot annoy so easily. And what we do, this is another version of the test: this time, instead of just looking at the set T, I want to look at all the subsets of the tasks inside T, okay?

So once again, if I look at my basic set that I've shown you, okay, I want to check whether this subset is feasible, whether that subset is feasible, and so on. For every possible subset of the tasks, I will apply the test that I've shown you on the previous slide, okay? That's what I want to do. I don't want the test to always say yes just because there is one task in my project which is scheduled very, very late. Here I want to be sure that the test is looking at all the possible subsets and detecting infeasibility for any of these subsets, okay? Good, it looks good, right?

No, what is the issue? How many subsets do you have in a set? Well, you have exponentially many, okay? So now we would have to run an exponential number of tests, and that doesn't sound too good, okay?

But there is something which is really interesting in the structure of these sets, okay? In practice, you only have to look at a quadratic number of them. And I'm going to give you the intuition, right? So these are all the tasks of my problem again, okay? But now I put some dashed lines for every one of them, okay. These dashed lines are denoting the earliest starting date and the latest finishing date, or end date, of every one of these tasks, okay? And what I'm going to show you is that you basically only have to consider a quadratic number of subsets, which are basically obtained by choosing a starting date and choosing an end date, okay? That choice corresponds to two tasks, and I'm going to show you how to compute the subsets, okay?

So this is the intuition: two red tasks, okay? You see this one and that one, okay? You see the starting date there, and you see the end date there, okay? And now look at this task in the middle, okay? I am not considering it yet, and what I am claiming is that I can always put it in, okay? I will never have to test taking this task and that task together without it; it's completely useless to consider that subset. Why?

Because if I add this middle task to these two tasks, what's going to happen? I'm not going to change the starting date, and I'm not going to change the end date, because this one is in the middle. The only thing that I'm doing is essentially increasing the duration. So I'm making the test stronger, okay?

So essentially, what I'm saying is that when I have these three tasks, I don't have to look at the subsets of them, because the test on all three of them is going to subsume all of them; it is going to be stronger. It has the same start, the same end, but I'm increasing the duration, okay?

So in a sense, to find out which subsets we need to look at: I take a starting date, I take an end date, and then I pack everything in between, okay? Which means every task whose earliest starting date is after the one, and whose latest end date is before the other. Those are the subsets that I have to look at, okay?

So, this is something that Caseau and Laburthe called task intervals, okay? If you take two tasks, t1 and t2, you essentially fit as many tasks as you can inside, okay? You take all the tasks t whose earliest starting date is after the earliest starting date of t1, and whose latest end date is before the latest end date of t2, okay? So you have these task intervals, which are basically the strongest subsets that you want to look at, okay? And now the feasibility test that you see here can focus only on these task intervals; it never has to look at anything else, okay?

So you look at all the task intervals for every pair of tasks, and it's on these task intervals that you apply the test. You don't have to look at any other subset, okay? And the complexity here is that you have a quadratic number of these task intervals, and the test itself is linear time. So you have an algorithm which is in cubic time.
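A direct sketch of that cubic check, with the same illustrative (earliest start, latest finish, duration) triples as before: enumerate the task interval of every pair, and apply the basic fit test to each one.

```python
def task_interval(t1, t2, tasks):
    """Tasks packed between t1's earliest start and t2's latest finish."""
    lo, hi = t1[0], t2[1]
    return [t for t in tasks if t[0] >= lo and t[1] <= hi]

def feasible_by_task_intervals(tasks):
    """Cubic relaxation check: the basic test on every task interval.
    Quadratically many intervals, each tested in linear time."""
    for t1 in tasks:
        for t2 in tasks:
            omega = task_interval(t1, t2, tasks)
            if not omega:
                continue
            est = min(t[0] for t in omega)
            lft = max(t[1] for t in omega)
            work = sum(t[2] for t in omega)
            if est + work > lft:
                return False   # this subset cannot fit on the machine
    return True

tight = [(0, 8, 6), (0, 8, 4), (0, 8, 3)]
slack = (0, 100, 1)   # the isolated task that fooled the whole-set test
print(feasible_by_task_intervals(tight + [slack]))   # False
```

Unlike the whole-set test, the isolated task no longer hides the infeasibility: the task interval spanned by the tight tasks alone still fails.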

Now, cubic is pretty high. So we ought to try to do better than this, and I'm going to show you some of the intuition on how you can do better, okay? I'm only going to show you one algorithm here, which is going to be quadratic, okay? But you can do better in practice, and in general you can get down to n log n, okay?

So let me give you the intuition behind the quadratic algorithm that we can use for testing all these task intervals, and the key idea is very, very simple, okay? Let's put it this way: you want to look at one ending date, and in one sweep compute all the task intervals that have e as the latest finishing time, okay? In essence, you fix e, and then you're going to look at all the starting dates, computing the feasibility of all the task intervals which have e as the ending date, okay? So in this particular case, you are going to fix e, and then we are going to look at the starting dates S3, S2, and S1, and we are going to compute the test incrementally, by adding the durations and then performing the test, okay? So let me show you the algorithm, because the algorithm is going to make a couple of points clear.

So we fix e, right? e is fixed. And then we're going to look at the first task, okay? Now, since we want a task interval with e as the finishing date, we will only consider the task at S3 if it is actually finishing before e, okay? The latest finishing time of the task which starts at S3 has to be earlier than e; otherwise it's not in the task interval, okay? So if we have this guy, this is the test that you see here: we add its duration to the accumulated duration d, okay? Think of d as the accumulation of the durations of all the tasks that we are putting inside this task interval, okay? At this point we have nothing, so we just put in the duration of the task at S3, okay? And then we perform the test. The only thing that we are looking at now is something which starts at S3, so we do the test: S3 plus the duration has to be smaller than or equal to e, okay? If it is, great, we are good, okay?

Then you move to the next one, okay? And you're going to do exactly the same

kind of test, okay? So what you are going to do is gonn, but,

but now you are, you, you increase the duration so you have two tasks now, okay?

And what you do is, once again it has to finish before e and, and, the key point

here is that the only thing that you have to do from moving from S3 to S2, is to

add the duration. And now we have another test and we know

that S2 is the already starting date, so we don't have to make any computation to

find which one is the minimum. We know that this guy is the minimum and

thought we test essentially this task interval, can we actually put these two

tasks be and complete them before e, okay?

And we keep doing that for all the tasks, going through all the starting dates in that particular order, okay?

Now, if the test fails at any point, you fail, okay? Otherwise, you have a success for that particular value of e, okay?

The other thing that I need to mention here is that you will do that for all the values of e, all these finishing times, okay? And that will give you an algorithm which is quadratic: it's linear for every one of these e, and it becomes quadratic because you have a linear number of values of e, okay?

So basically the intuition is: you fix e, and then you just do a sweep, okay? From right to left, okay? And that is basically how you reduce the complexity from cubic to quadratic, okay? Now, can we do better than that?
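As a rough sketch of this sweep, here is what it could look like in Python. The task representation, triples of (earliest start, duration, latest finishing time), and the function names are my own assumptions for illustration, not the lecture's actual code:

```python
def fits_before(tasks, e):
    """Linear sweep for one fixed end e: scan the candidate tasks by
    decreasing earliest start, accumulate durations, and check that
    every suffix of tasks can still complete before e."""
    # keep only tasks whose latest finishing time is at most e
    inside = [(s, d) for (s, d, lft) in tasks if lft <= e]
    inside.sort(key=lambda t: t[0], reverse=True)  # e.g. S3, S2, S1
    total = 0
    for s, d in inside:
        total += d              # the only work per step: add the duration
        if s + total > e:       # the tasks from s onward overflow e
            return False
    return True

def overload_check(tasks):
    """Quadratic overall: repeat the sweep for every candidate value of e.
    (A real implementation would sort the tasks once, not per call.)"""
    return all(fits_before(tasks, e) for (_, _, e) in tasks)
```

Each call to `fits_before` does constant work per task once the tasks are sorted, which is exactly the "just add the duration" point above; repeating it for every candidate e gives the quadratic bound.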

Is it possible to actually improve on that again? Yes, you can, okay? I'm not going to prove all the results, but there is a beautiful connection with something which is called preemptive scheduling, okay?

I just want to mention this because it is another relaxation, okay? It plays a little bit the same role as the linear relaxation, although it's computed completely differently, but anyway, never mind, okay? So far I have implicitly assumed, and this is the case in many problems, that none of these tasks can be interrupted.

But what happens if you can actually interrupt a task? If you can interrupt tasks, checking feasibility becomes a very simple problem: you can solve it in n log n time, okay?

And the algorithm there is very simple. It looks at all the tasks that can be scheduled at any point. Initially there is only one, so you're going to schedule it, A1, and then you're going to take the one which has the tightest deadline, and that's the one you are going to schedule, okay? If at some point a new task comes in and can be scheduled, you will take the one, at that point, which can be scheduled and which has the earliest latest finishing time, okay?


So in a sense, what you do at every step is look at the tasks you can schedule, and you take the one with the tightest deadline; that's the one you're scheduling. This greedy algorithm for preemptive scheduling can be implemented in n log n, and it basically solves feasibility for the preemptive problem, okay? Cool, right?
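Here is a small sketch of that greedy Earliest-Deadline-First simulation in Python. The (release, duration, deadline) triple format is an assumption for illustration; with a heap this runs in O(n log n):

```python
import heapq

def edf_feasible(tasks):
    """Greedy Earliest-Deadline-First simulation for the preemptive
    one-machine relaxation. tasks: list of (release, duration, deadline)
    integer triples. Returns True iff EDF meets every deadline."""
    pending = sorted(tasks)            # by release time
    ready = []                         # heap of (deadline, remaining work)
    t, i, n = 0, 0, len(tasks)
    while i < n or ready:
        if not ready:                  # machine idle: jump to next release
            t = max(t, pending[i][0])
        while i < n and pending[i][0] <= t:
            _, d, dl = pending[i]      # task becomes available
            heapq.heappush(ready, (dl, d))
            i += 1
        dl, rem = heapq.heappop(ready)  # tightest deadline runs first
        # run it until it finishes or the next release (possible preemption)
        step = rem if i == n else min(rem, pending[i][0] - t)
        t += step
        if rem > step:
            heapq.heappush(ready, (dl, rem - step))  # preempted, resumes later
        elif t > dl:
            return False               # it completed after its deadline
    return True
```

Note how preemption is just pushing the interrupted task back into the heap with its remaining work; that is what makes the preemptive check so much easier than the non-preemptive one.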

And actually, you can show that this is exactly equivalent to all the task-interval tests that I've shown you. Nice connection, right?

Okay, so now we have a good test for feasibility. Of course, it's not a complete feasibility test: we can say we believe it's feasible when actually it's not, okay? But what I want to show you now are the rules for pruning, and they are truly beautiful, okay?

So they are called the edge-finding rules. They are only one subset of the rules that we can apply; there are other subsets of rules you can apply for pruning. But once again, I just want to make you curious, okay?

And the key idea is that you're going to select a set omega of tasks, and then you're going to select another task, i, which is not in omega, okay? And the key point is, look at this: you have the tasks here in omega, and you have this task i. What you are wondering is: can this task be scheduled before the other tasks, does it have to be scheduled always after them, or always before them, okay? That's what you're going to try to determine, okay?

And how do you do that, okay? What you're going to do is simply add the task to the set omega and perform the same test, okay? But with one variation: you do not use the end date of task i in the test. So you take the earliest starting date over omega and task i; if i starts earlier, you take its starting date. You take the duration of all the tasks in omega and of i. But then you test whether all these tasks can finish before the latest finishing time of omega, okay? Because you want to see if i can be scheduled inside, in the middle, or before, that is, before at least one task inside the set omega.

Now, if this test tells you no, that basically means that the only way to satisfy the disjunctive constraints is to make sure that i is actually scheduled after all the tasks; otherwise you would have to increase the finishing date. That means i has to be the last task of this set; it has to be scheduled after all the tasks of omega, okay? So, in a sense, once you know that, you can update the starting time, and that's a big update, you'll see, okay?

You know that the task i has to start after all the tasks in omega. So what you can do is take the starting date over these tasks, add the total duration, and you know that i has to start after that. Actually, you can do something even better: you can take the maximum over all the subsets. Because once again, there can be pathological cases where a subset of these tasks gives you the best update, not the set of all the tasks in omega, okay?
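To make the rule concrete, here is a deliberately naive Python sketch. It enumerates every subset of omega explicitly, which is exponential; the point of the lecture is precisely that real edge-finding propagators reach the same conclusions in polynomial time. The task format, the dict layout, and the function name are assumptions:

```python
from itertools import combinations

def edge_find_update(tasks, i):
    """Edge-finding sketch for task i against omega = all other tasks.
    tasks: dict name -> (est, dur, lct) with earliest start, duration,
    latest completion time. Returns a new earliest start for i, or None
    if the rule deduces nothing."""
    omega = {k: v for k, v in tasks.items() if k != i}
    est_i, dur_i, _ = tasks[i]
    est_o = min(v[0] for v in omega.values())
    dur_o = sum(v[1] for v in omega.values())
    lct_o = max(v[2] for v in omega.values())
    # the test: could i still finish before at least one task of omega?
    if min(est_o, est_i) + dur_o + dur_i <= lct_o:
        return None                     # maybe: no deduction possible
    # no: i must come after all of omega, so push its earliest start
    # past the best earliest completion over every non-empty subset
    names = list(omega)
    best = est_i
    for r in range(1, len(names) + 1):
        for sub in combinations(names, r):
            s = min(omega[k][0] for k in sub)
            d = sum(omega[k][1] for k in sub)
            best = max(best, s + d)
    return best
```

The subset loop is exactly the "maximum over all the subsets" refinement mentioned above: a small subset can sometimes give a larger earliest-completion bound than omega as a whole.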

So once you have something like this, you can apply these rules for the starting date, really pushing a task much later, or for the end date, pulling a task's end much earlier, in a sense, okay?

And what is beautiful is that you can compute a fixed point of all these edge-finding rules, of the propagation algorithm, in strongly polynomial time. So you have a strongly polynomial-time algorithm for applying all these rules, for all the sets omega and all the other tasks i, okay?

Which is kind of remarkable as well, because once again we have exponentially many of these sets, okay? So that's the beauty of the algorithm: you always have the impression that you are computing exponentially many things, but you can do it in polynomial time, okay?

So let me show you an example, the same example as before, with A1, A2, A3, okay? What we are wondering is: can A1 be scheduled before one of these two tasks, okay?

So what do we do? Well, the test that I've just shown you basically assumes that A1 cannot be scheduled after them, so we are essentially reducing, removing all these values, okay? I'm reducing the finishing time of A1 to the same as A3's, and now I'm looking at these three tasks. Can they be scheduled in these time windows? And here it's very easy to see that you cannot: you have essentially eleven spots and the total duration is bigger than that, so you are going to say, ooh, I can't schedule these guys there.

That means that A1 cannot be scheduled before either of the two, okay? It has to be scheduled after everybody. So you push it by computing the earliest finishing time of these two: they take a duration of seven, and before that point A1 cannot be scheduled, so this guy is pushed, and it has to be in this interval only, okay?

So these update rules are very strong, because they are really pushing the starting date or pushing the end date of a particular task, okay? Now, this is basically the type of pruning that happens in scheduling algorithms. There are many other resource types, but they do essentially the same thing: pushing the starting date, pushing the end date, okay?

Now, the search is very interesting.

One of the things I told you in the beginning is that you don't need to actually find the starting times in this application, right? In jobshop, or as long as you only have disjunctive constraints, what you need to do is basically order the tasks on each machine. So the search strategy, most of the time, is choosing a machine, then ordering the tasks on that machine, and then repeating that for the other machines, okay? That is a typical search procedure for this.

Now you are going to say: which machine? And once again, you're going to apply the first-fail principle that we always use in constraint programming. Which means you're going to look at the machine which is tight, okay? So you look at the sum of the durations of the tasks and at the flexibility that you have on the machine: you pick the machine whose tasks are really tightly packed inside it, okay?

And then, which task? Well, one of the things the search algorithms are going to do is try to find out which task can be scheduled first. And once again, they will look at tasks which are really tight, or which have to be scheduled very early on. And that's what the heuristic is going to choose, for deciding which task goes first on that machine, or before the other tasks, okay? So once again, what you do in this search procedure is exploit the structure of the problem, okay?
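A minimal sketch of the first-fail machine choice could look like this in Python. The slack measure used here, window length minus total duration, is one plausible way to quantify "tightly packed", not necessarily the exact heuristic from the lecture, and the data layout is an assumption:

```python
def tightest_machine(machines):
    """First-fail sketch: pick the machine with the least slack, i.e.
    the time window spanned by its tasks minus the total duration that
    has to fit inside it. machines: dict name -> list of (est, dur, lct)."""
    def slack(ts):
        # window = latest completion over the tasks minus earliest start
        window = max(t[2] for t in ts) - min(t[0] for t in ts)
        return window - sum(t[1] for t in ts)
    return min(machines, key=lambda m: slack(machines[m]))
```

The machine with the smallest slack is the one most likely to fail first, so branching on it early prunes the search tree sooner.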

So, this is it. What I'm going to do next time is show you some very interesting search strategies for exploring the search space of these scheduling problems. And I'm also going to give you a beautiful hybridization of constraint programming with local search, which is really useful in practice, okay?

See you next time.