0:01

Discrete Optimization, Constraint-based Scheduling.

So what I'm going to do today is talk about one of the most fascinating areas of Discrete Optimization, which is Scheduling. There are entire books on this, there are journals just on this topic, but what I'm going to try to do is to make you curious, so that you can learn more about these things. There are beautiful algorithms, beautiful search techniques, and you'll see some of them. So I hope I'll get you excited about this area, because it's at the same time beautiful scientifically, and there's a lot of application in practice.

Now, I'm going to talk only about scheduling using Constraint Programming. There are also entire branches of scheduling using mixed integer programming, local search, and so on. But this is one of the areas where constraint programming has been really, really successful. I'm going to talk a little bit about modeling, and about global constraints, and you'll see some very interesting issues arising in this context, and a lot of very nice techniques behind the scenes. Okay?

So as I said, it's a very successful application area for constraint programming, and one of the reasons is that you can use all the flexibility of constraint programming. These problems are typically very complex. You have a lot of different kinds of constraints, and you can put them together using constraint programming, okay?

So this is a typical example of what you can have in this area. You are minimizing the duration of a big project, and that project has a lot of things to schedule. And there are different precedence constraints between them, so some tasks have to come before others, and so on. And also, some of these tasks are going to use the same resources, and so sometimes they cannot overlap in time, or only several of them can overlap in time, and things like this. So this is the kind of problem that we are going to look at, okay? So this is an example, okay?

So we want to build this bridge. [SOUND] Okay? So typically we can't do it in one step. There is a variety of tasks, oops. No, I want to do this again, because you missed the best part here. Okay, so we don't want to build that bridge like this. You see that little truck? Okay, anyway. So you have various tasks that you see here, okay? Obviously you have to build the bottom, you have to do the excavation before you can actually do the pillars, and so on. So you have precedence constraints between some of these tasks: you can't build the top of the bridge before the bottom. And then obviously some of these tasks are going to use exactly the same resources, and they have to share them. And that's going to put some constraints on how you can actually schedule all these different activities for building this bridge, okay?

So that's the kind of problem that we are looking at. Of course you have other problems in the steel industry, the paper industry, all these things; they typically are scheduling problems like that.

So typically, when you model these problems these days, what constraint programming languages or local search algorithms are doing is basically building abstractions just for that particular area. The area is so big that you want a dedicated language that talks about activities, and resources, and things like that. We sometimes call this model-based computing, okay? It's going to make it easier to actually state the problem. So you'll see concepts of activities, resources, precedence constraints, and so on and so forth, and I'm going to give you a sense of what this looks like, although I won't go into the details.

Now, behind the scenes, these are going to be compiled into decision variables and global constraints, and things like this. So essentially it's a high-level language which is compiled into the typical language of constraint programming behind the scenes. And these concepts are also going to support dedicated search algorithms, because we can exploit some of the structure that you have in these applications. I will talk a little bit about that, but there is much more behind the scenes, and I won't be able to talk about all the techniques that we can use here, okay?

So, you know, one of the things that is the TSP of scheduling is called the jobshop scheduling problem. It's probably the simplest scheduling problem, but it's also very hard. It's very simple to state, and people have been competing on it forever. So there are a lot of standard benchmarks on this, and there are also a lot of open problems. It's not very difficult to actually generate instances that nobody can solve optimally, okay? So what I'm going to give you now, in two slides, is the specification of the problem.

I'm abstracting some of the information on this slide, but essentially what we have is a set of tasks that we have to schedule, and every one of them has a duration. We're going to assume that we know the duration; it's fixed in this particular case. Now, we also know that every task is actually using one resource. This is a simplification, right? In real life they typically use several resources, but here they only use one resource, one machine, and we specify which machine every task is using. And then we have a set of precedence constraints of the type (b, a), which basically means that a can only start after b has completed execution. So you have to finish b before you can start a, okay? Think of it like excavating before you start building the bridge, right?

So these are essentially the data and the various constraints. And what you want to do is schedule all these tasks so that you minimize the project completion time; you want this project to finish as early as possible.
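To make that specification concrete, here is a minimal sketch of how the data could be laid out in code. The names and the tiny instance are my own, not from the lecture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    duration: int   # fixed and known in advance
    machine: int    # the single machine this task runs on

# A tiny made-up instance: two tasks and one precedence (b, a),
# meaning b must finish before a can start.
tasks = {
    "b": Task(duration=4, machine=0),
    "a": Task(duration=3, machine=1),
}
precedences = [("b", "a")]
```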

Now, in the jargon of scheduling, this is called the makespan, okay? And this is one of the very famous objectives; there are many others, but it's one of the most studied objective functions for scheduling, okay?

So let me give you a picture, so you can really see what the jobshop is. The precedence constraints are coming from the jobs: every one of the horizontal lines you see on this slide is a job, okay? So this is, let's say, the start of the project, and then you see the sequence of tasks inside a job. The semantics here is that this task has to be completed before the second one starts, then the second one has to be completed before the third starts, and so on and so forth. So this is where the precedence constraints are coming from: they are coming from these jobs, which are basically sequences of activities.

And then the colors that you see in this picture are basically the machine that every one of these activities, every task, has to use. So basically all the tasks which are colored in yellow have to use the same machine, okay? I am also showing you here all the tasks of a particular machine, what I basically call a machine clique. All these tasks are using basically the same machine, and therefore they cannot overlap in time: for every two of them, we have to satisfy the fact that one has to be before or after the other, okay?

So when you think about that, for solving these problems, one of the things that you have to find is an order for every one of these machines. What I have shown you on this slide is basically all the possible links between every two tasks using that machine, and what we have to do is to choose a subset of them which basically makes a sequence. So in a sense, the way to think about this is that a machine has to handle all these tasks sequentially, and they can be ordered in an arbitrary fashion. Of course, you have to satisfy the precedence constraints of the jobs, but mostly what you're doing is trying to find, for every one of these machines, an ordering of all these tasks, okay?

So this is one of them, for instance. You start over there, and then you go over, ooh, where do we go? We go over, let me see. This is really difficult to see. We go here, and then [LAUGH] we go down, and then we go up, and so on and so forth. So you basically have one ordering for that particular machine: all the tasks here are ordered, and the machine has a well-defined ordering for every one of the tasks, okay?

At this point, if you do that for all the machines, if you order all the machines, the only thing that you are left with is what? You are only left with precedence constraints. You have this big graph now, with all the original precedence constraints, plus every one that comes from the orderings of the machines. And at this point I have a problem where the only thing I have to do is to minimize the project duration subject to precedence constraints. And the beautiful thing is that this is a polynomial-time problem. We can solve it in polynomial time, using topological sorting, which in the popular literature is called the PERT algorithm. Or, if the precedence constraints are more sophisticated and also carry distances, you would use a transitive closure, but essentially this problem can be solved fast, obviously in polynomial time, okay?

So in a sense, the only thing we have to do in this particular problem is order the machines. You don't have to choose the starting dates: this step here at the end is going to do that automatically, okay?
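Once every machine is ordered, that last step can be sketched as a plain topological-sort pass, which is the PERT idea just mentioned. This is my own illustrative code, with a made-up three-task instance:

```python
from collections import defaultdict, deque

def earliest_starts(durations, precedences):
    """Process tasks in topological order; a precedence (b, a) means
    b must finish before a starts, so est[a] >= est[b] + dur[b]."""
    succs = defaultdict(list)
    indeg = defaultdict(int)
    for b, a in precedences:
        succs[b].append(a)
        indeg[a] += 1
    est = {t: 0 for t in durations}
    ready = deque(t for t in durations if indeg[t] == 0)
    while ready:
        b = ready.popleft()
        for a in succs[b]:
            est[a] = max(est[a], est[b] + durations[b])
            indeg[a] -= 1
            if indeg[a] == 0:
                ready.append(a)
    return est

# Job precedences plus the arcs added by one chosen machine ordering:
durations = {"t1": 6, "t2": 4, "t3": 3}
est = earliest_starts(durations, [("t1", "t2"), ("t2", "t3")])
makespan = max(est[t] + durations[t] for t in durations)  # 13
```

Each task starts as early as its predecessors allow, and the makespan falls out as the latest end date, with no search at all.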

So you just have to find the right ordering for all the tasks on every one of the machines, okay? So let me show you an example of a constraint programming model for this particular problem. Once again, I don't want you to understand this in detail. What I want to show you here, and illustrate, is the kind of languages that people have built for modeling these scheduling applications at a high level, okay?

So the beginning here, the first three lines, is boring: these are basically the data. For every job, and every task in that job, you have a duration; then you have the machine on which that particular task of that particular job is executing; and we compute a time horizon, which is basically the longest time over which we could schedule all these activities. So this is basically data; it's basically boring.

What is a little bit more interesting here is the description, at a high level, of the decision variables and some of the resources that are going to be used for stating the constraints. So what you see there is that we are basically defining a schedule, and then we are defining these activities. For every job, and every task in the job, we are defining an activity, and that activity has a particular duration. So this is a concept: an activity will have a starting date and an ending date, and this is the level at which we want to reason here. We want to be thinking about this activity, its starting date, its end date, possibly its duration if it's not fixed, and we are expressing that as a high-level concept on which we want to express the model.

We also have an activity which is the makespan; that's essentially the completion time of the project, and it has duration zero. It is basically a dummy activity that we are putting at the end, and we put a precedence constraint from all the tasks, essentially, to that particular activity.

Then what we have here is a unary resource. This is basically something that tells you that if there are various tasks which are actually using that resource, they won't be able to overlap in time; you will have to schedule one before the other, okay? So that's basically the description of your decision variables and the various resources that you will use for stating the problem.

The rest is very much like the constraint programs that we have seen before. What we are doing is minimizing the end date of the makespan, which is the same as its starting date because the duration is zero. So that's essentially modeling the duration of the project.

And then what you see there are the various constraints, okay? The first set of constraints are basically the precedence constraints inside the jobs: you look at all the jobs and all the tasks in each job, and you state that task t of job j has to precede task t+1 in that same job. That's basically what you are seeing. And of course, the system here understands what it means to be a precedence constraint, and it will translate that into a simple inequality; I'll show you that on the next slide, okay? Then there is the last part that you see here, okay?

In between the two, these are the precedence constraints saying that the makespan comes after every task. And the last part, these are basically the resource constraints: for every one of the tasks, in every one of the jobs, you are specifying which resource it is using. And since you know that no tasks can overlap on these resources, you are implicitly specifying the kind of disjunctive constraints that this problem has to satisfy, okay?

So that's, at a high level, the kind of modeling that you have in these scheduling applications. Of course, in practice, the resources are typically more complicated, the precedence constraints are more complicated, and there may be additional side constraints, and things like this. But this is kind of the simplest scheduling problem; it's the TSP of scheduling problems, okay?

Now, behind the scenes, this high-level model is compiled into a particular set of decision variables and constraints. In a sense, you have to think about it this way: every activity has three variables. The starting date of that particular activity, the duration of that activity, and the end date of that activity. So every activity can be thought of as three variables: the starting date, the end date, and the duration. Now, they are redundant, especially if the duration is fixed, but we typically use these three because it's really convenient for expressing some of the algorithms. Of course, you have a constraint linking them: the constraint is basically stating that the starting date plus the duration is equal to the end date. So in a sense, an activity gives you three interdependent decision variables linked that way by this constraint.

Okay, then the other thing is how to express the precedence constraints. If you have a precedence constraint between b and a, it simply becomes a very simple, tiny inequality: the starting date of a has to be greater than or equal to the end date of b, okay?
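As a rough sketch of what that compilation produces, here is a toy version with domains kept as [min, max] bounds. The class and the numbers are mine, not the actual compiled form:

```python
class Activity:
    """Three linked variables: start, duration (fixed here), end."""
    def __init__(self, duration, horizon):
        self.duration = duration
        self.start = [0, horizon - duration]  # [min, max] of the start
        self.end = [duration, horizon]        # [min, max] of the end

def propagate_precedence(b, a):
    """Precedence (b, a) compiles to start(a) >= end(b); tighten the
    bounds on both sides, keeping end = start + duration."""
    a.start[0] = max(a.start[0], b.end[0])
    a.end[0] = a.start[0] + a.duration
    b.end[1] = min(b.end[1], a.start[1])
    b.start[1] = b.end[1] - b.duration

b, a = Activity(duration=4, horizon=10), Activity(duration=3, horizon=10)
propagate_precedence(b, a)   # a cannot start before 4; b must end by 7
```

The point of the three-variable view shows up here: the inequality tightens bounds in both directions at once, on a's start and on b's end.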

It's very simple, okay? The most interesting part here, and that's what I want to focus on in the last part of the lecture, is the resource constraint. These resource constraints are going to be compiled into global constraints, which are disjunctive here, and they are basically specifying that all the activities there, t1 to tn, cannot overlap in time. So it's this disjunctive constraint which is really important.

If you think in terms of the constraint programming model that I presented earlier in this class, every one of these precedence constraints is one of these guys, and then you also have the global constraints: the disjunctive constraint for every one of the resources is going to be one of these constraints. And of course, they are going to interact through the decision variables, which are the starting dates and the end dates. Not the durations, because they are fixed, but essentially the starting and end dates of every one of the activities, okay? So that's basically the overall model, what's happening behind the scenes.

What I want to focus on in the rest of the lecture, because this is truly beautiful, is the disjunctive constraint, and how you can actually implement algorithms that effectively prune the search space for this disjunctive constraint. So I'm going to illustrate some of these issues on a very simple example, okay?

So we have three activities that are requesting the resource. One has duration six, the other one four, the last one three. And what you see there is essentially the time window in which they have to be scheduled. So for instance, for the task of duration three, this is the earliest starting time at which that task can start, let's say one, and this is the latest finishing date for that particular task. So in a sense, this is the minimum value in the domain of the starting date, and this is the largest value in the domain of the ending variable of the activity. Every one of these guys can be scheduled anywhere in that time window, okay? And what we have to do is to choose where to place them such that they don't overlap in time. And we have to find out if this is feasible, and how fast we can do that.

Now, there is one very interesting thing here: even this simple, one-machine problem is NP-hard. Which is, like, so I built this beautiful model with this disjunctive global constraint, and there is no way I can test its feasibility in polynomial time. So what are we going to do? Okay, so we could totally decompose this thing into very tiny pairwise disjunctive constraints, and you could verify those in polynomial time, but that basically loses the entire globality of the problem. So you really don't want to do that, because you will get nowhere in trying to solve practical problems.

So what we are going to do, instead of solving this problem exactly, is relax it, and I'm going to show you that in a moment. There are a lot of algorithms to do that. I'm going to just make you curious here, give you basically the intuition behind some of them, and point you to the connections to an entire set of algorithms in this space, okay? And I'm going to use a little bit of notation. I think it's intuitive, but I'm going to go slowly, because we are doing something which is a little bit fancy here, okay?

So, omega here, in all the formulas on the subsequent slides, is a set of tasks, okay? And I'm going to denote by s of omega, think "starting date of omega", the earliest time at which one of the tasks inside omega can start. The way I can compute that is that I look at all the tasks in omega and I take the smallest possible value in the domain of their starting dates. That's what I'm doing. So when you see s omega, it's the earliest time at which any of these tasks in omega can start. Then e omega is just the opposite, for the ending dates: e omega is the latest finishing time of all the tasks in omega. So I'm taking a max, over all the activities, of the maximum value of the end variable of every one of the activities, okay? And then the duration of these activities is a little bit interesting as well: the duration of omega is essentially the sum of all the durations of the tasks in omega, okay?

So remember this notation, because we are going to use it over and over and over again. s omega: the earliest starting date of this set of tasks. e omega: the latest end date of any of the tasks in omega. And d omega: the total duration of all the tasks in omega, okay?

So at this point, what we have to find out is whether this constraint is feasible. That's one of the things that this disjunctive constraint has to do: is it feasible? We know that this is a hard problem, and we are not going to try to solve it exactly. What we are going to do is approximate it efficiently. Well, approximate it somehow. And so this is the first test that I'm proposing, okay? To check if a set of tasks T can be scheduled on this machine, what I'm going to do is check this little inequality: that s(T) plus the duration d(T) is smaller than or equal to e(T). So s(T) is the earliest starting date of T, d(T) is the total duration of all the tasks, and I want to be sure that when I add these two things, I can fit them before the latest finishing date of all these tasks, okay?

So in a sense, when you look at this picture here: this is the earliest starting time, this is the latest finishing date, and then you add up all the durations of these tasks; that's basically the solid line inside every one of these tasks. And I want to be sure that when I sum the durations, and add that to the earliest starting date, I'm ending before the latest end, okay? That's basically what you're testing here, okay?
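In code, with tasks represented as (earliest start, latest end, duration) triples, a representation of my own choosing, the test reads:

```python
def s(omega): return min(t[0] for t in omega)  # earliest start of the set
def e(omega): return max(t[1] for t in omega)  # latest end of the set
def d(omega): return sum(t[2] for t in omega)  # total duration of the set

def overload_test(T):
    """The slide's test: the set can only fit if s(T) + d(T) <= e(T)."""
    return s(T) + d(T) <= e(T)

# The three activities from the example, with made-up windows:
# durations 6, 4 and 3 inside [0, 13].
T = [(0, 13, 6), (1, 13, 4), (1, 13, 3)]
overload_test(T)   # 0 + 13 <= 13, so the test passes
```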

So the question I have for you is: is this an effective test? Is this something that's going to detect infeasibility reasonably well? And one of the things that I told you in the first lecture of this class, right, is that when you have a question like this, what you have to do is exaggerate. So I'm going to exaggerate, okay? So this is a set of tasks; the durations are fixed, and they are scheduled very tightly. They don't have much space to be scheduled, so this is not feasible: if you look at these tasks, you can't schedule them. And then I have this little guy who is all alone, way over there. But because I added this very isolated guy, notice what happens: the earliest starting date is here, the latest finishing date is there. Obviously I have a lot of space in between, so the test says I will be able to schedule. So essentially the test is always going to say, yes, it's feasible, I believe it's feasible. Of course it doesn't prove anything; my test is going to say yes, and it can never prove infeasibility, okay?

So, in a sense, this is a terrible test. I can make it arbitrarily bad by adding some tasks that are basically doing nothing. We don't like that; we want the system to be robust. So one of the things that we can do, you know [LAUGH], you tried to annoy me, it's like with the relaxations, right? What we want is a test which you cannot annoy so easily. And this is another version of the test: this time, instead of just looking at the set T, I want to look at all the subsets of the tasks inside T, okay? So once again, if I look at the basic set that I've shown you, I want to look at whether this subset is feasible, whether that subset is feasible, and so on. For every possible subset of the tasks, I will apply the test that I've shown you on the previous slide, which is, you know, obviously here as well, okay?
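A direct, brute-force reading of this stronger test (my own sketch, with tasks again as (earliest start, latest end, duration) triples) applies the same inequality to every nonempty subset:

```python
from itertools import combinations

def subset_test(T):
    """Fail as soon as some subset omega cannot fit inside its own
    window, i.e. s(omega) + d(omega) > e(omega)."""
    for k in range(1, len(T) + 1):
        for omega in combinations(T, k):
            if (min(t[0] for t in omega) + sum(t[2] for t in omega)
                    > max(t[1] for t in omega)):
                return False
    return True

# Two tight tasks plus one isolated late one: the single-set test is
# fooled by the big window, but the subset of the two tight tasks
# (duration 8 in a window of 5) reveals the overload.
T = [(0, 5, 4), (0, 5, 4), (20, 30, 1)]
subset_test(T)   # False
```

The isolated task can no longer hide the overload, because the overloaded pair is tested on its own.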

So that's what I want to do. I don't want the test to always say yes just because there is one task in my project which is scheduled very, very late. Here I want to be sure that the test is looking at all the possible subsets and detecting infeasibility for any of these subsets, okay? Good, it looks good, right?

No. What is the issue? How many subsets do you have in a set? Well, you have exponentially many, okay? So now we would have to run an exponential number of tests, and that doesn't sound too good, okay? But there is something really interesting in the structure of this set, and in practice you only have to look at a quadratic number of them. And I'm going to give you the intuition, right?

So this is all the tasks of my problem again, but now I put some dashed lines for every one of them. These dashed lines are denoting the earliest starting date and the latest finishing date, or end date, of every one of these tasks, okay? And what I'm going to show you is that you basically only have to consider a quadratic number of subsets, which are basically obtained by choosing a starting date and choosing an end date. That's going to give you two tasks, and I'm going to show you how to compute the subsets from those two tasks, okay?

So this is the intuition. Two red tasks, okay, so you see this one and that one. You see the starting date there, and you see the end date there. So when you see this pair, obviously you have the starting date and you have the end date, okay? And now look at this task in the middle. I am not considering it, right? And what I am claiming is that I will never have to test taking only this task and that task together; it's completely useless to consider that subset. Why? Because if I add this middle task to these two tasks here, what's going to happen? I'm not going to change the earliest starting date, and I'm not going to change the latest end date, because this one is in the middle. The only thing that I'm doing is essentially increasing the duration. So I'm making the test stronger, okay? So essentially what I'm saying here is that when I have these three tasks, I don't have to look at the subsets of them, because the subset that I'm considering now, with all three of them, is going to subsume them all. It is going to be stronger, because it has the same start, the same end, but I'm increasing the duration, okay?

So in a sense, to actually find out which subsets I need to look at, I take a starting date, I take an end date, and then I pack everything in between, which means the starting date has to be after, and the end date has to be before. Those are the subsets that I have to look for, okay?

So, this is something that Caseau and Laburthe called task intervals, okay? If you take two tasks, t1 and t2, then you fit essentially as many tasks as you can inside: you take all the tasks t whose earliest starting date is after the starting date of t1, and whose latest end date is before the end date of t2, okay? So you have these task intervals, which are basically the strongest subsets that you want to look at. And now the feasibility test that you see here can focus only on these task intervals; it never has to look at anything else. So you look at all the task intervals, for every pair of tasks, and it's on these task intervals that you apply the test. You don't have to look at any other subset, okay?
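Here is a small sketch of that idea (tasks as (earliest start, latest end, duration) triples, my own representation, not the authors' code): for each pair of tasks, pack in every task whose window fits inside, and apply the same overload test.

```python
def task_interval_test(T):
    """For every pair (t1, t2), build the task interval: all tasks
    whose earliest start is >= that of t1 and whose latest end is
    <= that of t2. If their total duration overflows the window
    [lo, hi], the set is infeasible."""
    for lo, _, _ in T:           # earliest start taken from t1
        for _, hi, _ in T:       # latest end taken from t2
            packed = sum(dur for st, en, dur in T
                         if st >= lo and en <= hi)
            if packed > 0 and packed > hi - lo:
                return False
    return True

# The naive whole-set test passes on this instance, but the task
# interval [0, 5] packs duration 8 into a window of 5.
task_interval_test([(0, 5, 4), (0, 5, 4), (20, 30, 1)])   # False
```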

Â And so the complexity of [UNKNOWN] here is that, you have a quadratic number of

Â this, you know, inter-, task intervals. And the test, itself, is linear time.

Â So you have an algorithm which is in cubic time.

Â Now, cubic is pretty high. So, we ought to try to, to, to do better

Â than this and I'm going to show you one you know, one in, one algor, one, some of

Â the intuition on how you can do better, okay?

Â So, I'm only going to show you one algorithm here which is going to be

Â quadratic, okay? But, you can do better in practice and,

Â and here you can do and log and in general, okay?

Â So let me give you the intuition behind the quadratic algorithm that we can use

Â for testing or making all these tasks, and the key idea is very, very simple,

Â okay? So, what you want is that you want, so,

Â so, so, let's put it this way, you want to look at one ending date and in one

Â sweep, okay? So you want to compute all these task

Â intervals that if e is the latest, the finishing time, okay?

Â So in essence you look at this e, you fix e and then you're going to compute all.

Â You're going to look at all the starting dates and take it, computing the

Â feasibility for all the task intervals. You know, which if e, as the, as, as the

Â ending date, okay? So in this particular case, what you are

Â going to do, you are going to do e and then we going to, we going to look at

Â this S, S3, S2, and S1. And we going to compute incrementally

Â this task, by adding the duration and then, you know, performing the task,

Â okay? So let me show you the algorithm, because

Â the algorithm is going to make clear a couple of points.

Â So we fix e, right? So e is fixed.

Â And then we're going to look at the first task, okay?

Â Now since we want a task interval, okay? And we say that e is the finishing date,

Â we, will only consider S3, S3 is actually finishing before e, okay?

Â So the latest finishing time of The task which started, you know, at S3, okay?

Â Is, actually, you know, earlier than e. Otherwise, you know, it's not a task

Â interval, okay? So if we have this guy, okay?

Â So this is the test that you see here. We add the duration, okay?

Â To the duration d, okay? So that, d, think of d as the

Â accumulation of all the tasks that we are putting inside this task interval, okay?

Â So this point, okay? So we have nothing, so we put the

Â duration of S3, okay? So in other words, 3 is there, okay?

And then we perform the test. The only thing that we are looking at now is the task which starts at S3, so we do the test: S3 plus the duration has to be smaller than or equal to e, okay? If it is, great, we are good, okay? Then you move to the next one, and you're going to do exactly the same kind of test, okay? But now you increase the duration, so you have two tasks, okay? Once again everything has to finish before e, and the key point here is that the only thing you have to do in moving from S3 to S2 is add the duration. And now we have another test, and we know that S2 is the earliest starting date so far, so we don't have to make any computation to find which one is the minimum. We know that this guy is the minimum, and so we essentially test this task interval: can we actually put these two tasks in and complete them before e, okay?

And so we keep doing that for all the tasks, going through all the starting dates in that particular order, okay?

Now, obviously, if the test fails at any point, you fail, okay? Otherwise you have succeeded for that particular value of e, okay? The other thing I need to mention here is that you will do that for all the values of e, okay? All these times, okay? And that will give you an algorithm which is quadratic, because it's linear for every one of these e, and it becomes quadratic because you have a linear number of values of e, okay?
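As a rough sketch (not the lecturer's exact code), the sweep can look like this in Python, assuming each task is given as a `(release, deadline, duration)` triple; the function name and representation are my own:

```python
def task_intervals_feasible(tasks):
    """Necessary feasibility test for one disjunctive machine in O(n^2).

    tasks: list of (release, deadline, duration) triples.
    For every candidate ending date e (a deadline), sweep the starting
    dates from latest to earliest, accumulating the durations of the
    tasks that must fit inside [start, e]; if the accumulated work ever
    exceeds the room e - start, that task interval is overloaded.
    """
    by_release = sorted(tasks, key=lambda t: t[0], reverse=True)
    for e in {d for (_, d, _) in tasks}:       # fix one ending date e
        work = 0                               # durations packed so far
        for (r, d, p) in by_release:           # sweep starts right to left
            if d <= e:                         # task must finish before e
                work += p                      # add it incrementally
                if r + work > e:               # the test fails: overload
                    return False
    return True                                # every task interval fits
```

Note that, as the lecture says, this is only a relaxation: returning `True` does not guarantee a feasible non-preemptive schedule, but `False` proves infeasibility.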

So basically that's the intuition: you fix e, and then you just do a sweep from right to left, okay? And that is basically how you reduce the complexity from cubic to quadratic, okay? Now, can we do better than that? Is it possible to actually improve on that again?

Yes you can, okay? I'm not going to prove all the results, but there is a beautiful connection with something which is called preemptive scheduling, okay? Now, I just want to mention this because this is another relaxation, okay? It's a little bit the same idea as the linear relaxation, although it's completely different, but never mind, okay?

So far I have implicitly assumed, and this is the case in many problems, that none of these tasks can be interrupted. But what happens if you can actually interrupt a task? If you can interrupt tasks, checking feasibility becomes a very simple problem.

You can solve that in n log n time, okay? And the algorithm there is very simple. It looks at all the tasks that you can schedule at any point, okay? Initially there is only one, so you're going to schedule it, A1, and then you're going to take the one which has the tightest deadline, and that's the one that you are going to schedule, okay? If at some point a new task comes in which can be scheduled, you will take, among the tasks that can be scheduled at that point, the one which has the earliest latest finishing time, okay?


So in a sense, at every step you look at the tasks you can schedule, and you take the one with the tightest deadline, and that's the one you're scheduling.

So this greedy algorithm for preemptive scheduling can be implemented in n log n, and it basically solves feasibility for the preemptive problem, okay? Cool, right? And actually, you can show that this is exactly equivalent to all the task intervals that I've shown you. Heck of a connection, right?
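The greedy preemptive check can be sketched as follows; this is my own illustration with hypothetical names, with tasks again given as `(release, deadline, duration)` triples:

```python
import heapq

def preemptive_feasible(tasks):
    """Earliest-deadline-first check for the preemptive relaxation.

    At every moment, run the released task with the tightest deadline,
    interrupting it if a tighter one gets released.  Feasible iff no
    task finishes after its deadline.  O(n log n) using a heap.
    """
    events = sorted(tasks)                        # by release date
    heap, time, i = [], 0, 0
    while i < len(events) or heap:
        if not heap:                              # machine idle: jump ahead
            time = max(time, events[i][0])
        while i < len(events) and events[i][0] <= time:
            r, d, p = events[i]                   # newly available task
            heapq.heappush(heap, (d, p))          # keyed by deadline
            i += 1
        d, p = heapq.heappop(heap)                # tightest deadline wins
        next_r = events[i][0] if i < len(events) else float("inf")
        run = min(p, next_r - time)               # run until done or preempted
        time += run
        if p > run:
            heapq.heappush(heap, (d, p - run))    # put the remainder back
        elif time > d:                            # finished past its deadline
            return False
    return True
```

The heap keyed on deadlines is what gives the n log n bound: each task is pushed and popped at most once per release event.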

Okay, so now we have an essentially good test for feasibility; of course it's not a complete test. We can say, oh, we believe it's feasible, when it's not, okay? But what I want to show you now are the rules for pruning, and they are truly beautiful, okay?

They are called the edge-finding rules, okay? They are only one subset of the rules that we can apply; there are other subsets of rules that you can apply for pruning. But once again, I just want to make you curious, okay?

And the key idea is that you're going to select a set omega of tasks, and then you're going to select another task, i, which is not in omega, okay? And the key point is, look at this: you have the tasks here in omega, and you have this task i. And what you are wondering is: can this task be scheduled among the others, or does it have to be scheduled always after, or always before, the other tasks, okay? That's what you're going to try to find out, okay?

And how do you do that, okay? What you're going to do is simply add the task i to the set omega and perform the same test, okay? But with one variation, right? You do not use the latest finishing date of task i for the end date, okay? So you take the earliest starting date over omega and the task i. Here, i starts earlier, so you can take its starting date. You take the duration of all the tasks in omega and of i. But then you test whether all these tasks can finish before the latest finishing time in omega, okay? Because you want to see if i can be scheduled in the middle, or first, or at least before one task inside the set omega.

Now, if this test tells you no, that basically means that the only way to satisfy the disjunctive constraints is to make sure that i is actually scheduled after all the tasks. It has to be; otherwise you would have to increase the finishing date. That means that i has to be the last task of this set: it has to be scheduled after all the tasks of omega, okay? So, in a sense, once you know that, you can update the starting time, and that's a big update, you'll see, okay? You know that the task i has to start after all the tasks in omega. So what you can do is take the earliest starting date over these tasks, add the total duration, and you know that i has to start after that. Actually, you can do something even better: you can take the maximum over all the subsets. Because, once again, there can be pathological cases where it's a subset of these tasks which gives you the best update, not the set of all the tasks in omega, okay?
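One edge-finding deduction can be illustrated naively as follows. Real propagators reach the fixed point of all the rules in strongly polynomial time, as mentioned below, whereas this sketch enumerates subsets for clarity; the names and task representation are my own:

```python
from itertools import combinations

def edge_finding_update(omega, task_i):
    """Apply one edge-finding rule to (omega, task_i) and return the
    (possibly updated) release date of task_i.

    omega: list of (release, deadline, duration) triples; task_i likewise.
    Test: can omega plus task_i all finish before omega's latest deadline?
    If not, task_i must come after every task of omega, and its release
    date is pushed to the best bound over all subsets of omega.
    """
    r_i, d_i, p_i = task_i
    r_all = min([r_i] + [r for r, _, _ in omega])   # earliest start overall
    p_all = p_i + sum(p for _, _, p in omega)       # total work to place
    if r_all + p_all <= max(d for _, d, _ in omega):
        return r_i                       # the test succeeds: no deduction
    new_r = r_i                          # i is last: push its release date
    for k in range(1, len(omega) + 1):
        for sub in combinations(omega, k):
            bound = min(r for r, _, _ in sub) + sum(p for _, _, p in sub)
            new_r = max(new_r, bound)    # best update over all subsets
    return new_r
```

For instance, with omega holding two tasks of duration 3 released at times 0 and 1 with deadlines 5 and 6, a third task of duration 3 cannot fit before them, so its release date is pushed to 6.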

So once you have something like this, you can apply these rules for the starting date, really pushing a task very far back, or for the end date, pushing a task forward, in a sense, okay? And what is beautiful is that you can compute a fixed point of all these edge-finding rules, of the propagation algorithm, in strongly polynomial time. So you have a strongly polynomial time algorithm for applying all these rules, for all the sets omega and all the other tasks i, okay? Which is kind of remarkable as well, because once again we have exponentially many of these sets, okay? So that's the beauty of the algorithm: you always have the impression that you are computing exponentially many things, but you can do it in polynomial time, okay?

So let me show you an example, the same example as before, with A1, A2, A3, okay? What we are wondering is: can A1 be scheduled before one of these two tasks, okay? So what do we do? Well, the test that I've just shown you basically assumes it cannot be scheduled after, so we are basically removing all these values, okay? I'm reducing the finishing time of A1 to the same as A3, and now I'm looking at these three tasks: can they be scheduled in these time windows? And here it's very easy to see that you cannot, because you have essentially eleven spots and the total duration is larger, so you are going to say, ooh, I can't schedule these guys there. That means that A1 cannot be scheduled before either of these two tasks; it has to be scheduled after everybody. So you push it, by computing the earliest finishing time of these two: they take a duration of seven, and before that the first one cannot be scheduled, so this guy is pushed and has to be only in this interval, okay? So these update rules are very strong, because they are really pushing the starting date, or pushing the end date, of a particular task, okay? Now, this is basically the type of pruning that happens in these scheduling algorithms. There are many other resource types, but they do essentially the same thing: pushing the starting date, pushing the end date, okay? Now, the search is very interesting.

One of the things I told you in the beginning is that you don't need to actually find the starting times in this application, right? In jobshop, or as long as you have disjunctive constraints, what you need to do is basically order the tasks on the machines. So the search strategy, most of the time, is choosing a machine, then ordering the tasks on that machine, and then repeating that for the other machines, okay? That is a typical search procedure here. Now you are going to say: which machine? And once again you're going to apply the first-fail principle that we always use in constraint programming, which means you're going to look at the machine which is tight, okay? So you look at the sum of the durations of the tasks, and the flexibility that you have on the machine: you choose the machine where the tasks are really tightly packed, okay?
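The first-fail machine choice can be sketched as picking the machine with the least slack; this is a rough illustration with made-up names, measuring slack as the machine's time window minus its total work:

```python
def tightest_machine(machines):
    """First-fail: return the machine whose tasks are most tightly packed,
    i.e. with the least slack between its time window and its total work.

    machines: dict mapping a machine name to a list of
    (release, deadline, duration) task triples.
    """
    def slack(tasks):
        window = max(d for _, d, _ in tasks) - min(r for r, _, _ in tasks)
        return window - sum(p for _, _, p in tasks)
    return min(machines, key=lambda m: slack(machines[m]))
```

A machine with little slack is where a bad ordering decision will fail fastest, which is exactly what first-fail wants.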

And then, which task? Well, one of the things the search algorithms are going to do is try to find out which task can be scheduled first. And once again, they will look at tasks which are really tight, or which have to be scheduled very early on. And that's the heuristic that is going to be chosen for deciding which task is first on that machine, or comes before the other tasks, okay? So once again, what you do in this search procedure is exploit the structure of the problem, okay?

So, this is it. What I'm going to do next time is actually show you some very interesting search strategies for exploring the search space of these scheduling problems. And I'm also going to give you a beautiful hybridization of constraint programming with local search, which is really useful in practice, okay? See you next time.
