
So it's a pretty good aspect of Constraint Programming. Most of the time you try to capture this set of solutions as tightly as possible by expressing the constraints as best as you can, okay? So I'm going to use a couple of simple examples to illustrate this.

You know, remember the magic series, okay? We want to find numbers S0 to Sn such that Si represents the number of occurrences of i inside the series, okay? So I asked you last time whether we could actually find this series, and this is a beautiful one, okay? You see there are two instances of 0; there is one instance of 1, itself; there are two instances of 2, itself and the first one, okay? And there are zero occurrences of 3 and zero occurrences of 4, okay?

Now, this is the model that we saw last time. And one of the things that I'm asking you is: can you find properties that the solutions have to satisfy? When you look at the array of decision variables, series, can you find a constraint that will actually help, in addition to the constraints that we have stated? The constraints we have stated capture exactly what the solutions are. But can I find another constraint which is always true for every solution, but which will give the solver much more information? That basically means the solver will be able to remove more values from the domains of the variables, because that's the rule of the game, essentially, okay?

So, typically, the way you do this is that you exaggerate, right? Look at this thing, okay? This is my magic series, and I'm asking you: can I put 17 there, right? I'm exaggerating; nobody would do that. I could put 2000 there; it makes no sense. Why? Because what is 17 denoting there? 17 would denote the number of occurrences of 4 in the series, but that series is of length 5. There is no way to fit 17 occurrences of 4 into a series of length 5, okay?

So, in a sense, what do I know? Can I get something interesting on the sum of the numbers in this series? Yes: they are all occurrence counts. How many occurrences can I have? Well, the length of the series, okay? So I can have this beautiful constraint, essentially, which says that the sum of all the elements in the series has to be equal to the length of the series, which is n, okay? This constraint has to be true; it must be true, there is no other way. The total number of occurrences is n, okay? And since these decision variables are denoting numbers of occurrences, the sum of them cannot be more than n. So this is a constraint which is redundant from a semantic standpoint. It will never, never remove any solutions; every solution will satisfy it. But it will be able to prune the search space better, okay?
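As a sanity check, here is a small brute-force sketch (plain Python, no solver) that enumerates the magic series of length 5 and confirms that every solution satisfies the redundant constraint that the elements sum to n:

```python
from itertools import product

def magic_series(n):
    """Enumerate all magic series of length n by brute force:
    series[i] must equal the number of occurrences of i."""
    return [list(s) for s in product(range(n + 1), repeat=n)
            if all(s[i] == s.count(i) for i in range(n))]

for sol in magic_series(5):
    # every solution sums to n: the elements are occurrence counts,
    # and there are exactly n occurrences in total
    print(sol, "sum =", sum(sol))
```

For n = 5 this finds the series shown in the lecture, [2, 1, 2, 0, 0], whose elements indeed sum to 5.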

So, can I do better than this, okay? Can I find something which is actually stronger than that? Look at this thing, okay? What am I writing? Assume that series[2] is equal to 3. What does that really mean? It basically means that the value here, okay, I'm going to put the 3 over there, okay? It means, essentially, that there are three 2's in this array, okay? So far so good; that's what we know. But saying that there are three 2's in this array means what? How many occurrences does that account for? Well, essentially, each of these 2's denotes two occurrences, and there are three of them, so that means there are six occurrences, okay? So essentially what we know is that there is another way of counting the number of occurrences. Instead of summing every element in the series, you look at every element of the series, say element i, and you multiply it by i. Because what I had before was: series[2] was 3, so I multiply by 2, and that makes six occurrences, okay? So it's another way of computing the same information, okay?
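A quick check of the two counting schemes on the length-5 series from the lecture:

```python
series = [2, 1, 2, 0, 0]   # the magic series of length 5
n = len(series)

# Count 1: each element is itself an occurrence count.
count1 = sum(series)

# Count 2: series[i] cells hold the value i, and a cell holding
# value i stands for i occurrences, so value i contributes
# i * series[i] occurrences in total.
count2 = sum(i * series[i] for i in range(n))

print(count1, count2)   # both ways of counting give n = 5
```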

So I have another redundant constraint, which is basically that the summation of i times series[i] has to be equal to the number of occurrences, which is n, okay? So that's another constraint. Which one of the two that I've shown you is stronger, okay? I will tell you in a little bit which one is stronger, okay? But it doesn't matter, right? We could put the first one and the second one inside the solver and let the solver propagate these two constraints at the same time. Okay, so let me show you the kind of

deduction that we can do. This is essentially taking only the second one here, which is stronger in almost all cases; there's one case where it is not, but in almost all cases it is stronger. So you see essentially the constraints from before, but you also see this redundant constraint which is over there. Okay, what is interesting is what? You see a 5 there, and you see that this guy is multiplied by 4, right? What does that mean? It means that the maximum value that series[4] can take is, what? Well, it's 1, right? And the same thing for series[3]; and for series[2] the maximum value is going to be 2, okay? Otherwise I would exceed this 5, okay? And for series[1] it's going to be, you know, 5, okay?

So immediately, I can deduce very strong information here on the values of these variables, okay? And I can use that for pruning the search space. Very quickly, many of these reified constraints can disappear, because I know that they can't be true, okay? So essentially, here, I have removed about a third of these constraints, okay?

Then, assume that I make a very simple choice. I say that series[0] is equal to 2, okay? That basically means that I put a 2 over there, right? So you see the 2 over there, okay? It also means that the array contains a 2, right? And therefore, for series[2], I already have one occurrence over there, okay? And then you look at the redundant constraint that I have added, okay? So, what do you see there? I know that series[2] is greater than 0 now, okay? And therefore, when you look at this constraint, essentially the 5 has become a 3, okay? And therefore, I can immediately deduce that series[4] here has to be equal to 0, because it is multiplied by a coefficient of 4 and can be at most 3; and that the other term, 3 times series[3], can be at most 3, so series[3] can be at most 1, okay? So once again, I can use that information and start propagating it, and my search space becomes much smaller, okay?
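That propagation step can be sketched as interval reasoning on sum(i * series[i]) == n: the lower bounds already forced consume part of the right-hand side, and the remaining slack caps each variable. This is a hand-rolled illustration, not a real solver:

```python
def tighten(lower, n):
    """Upper bounds on series[1..n-1] implied by sum(i*series[i]) == n:
    for each i, the slack left by the other variables' lower bounds,
    divided by the coefficient i."""
    return {i: (n - sum(j * lb for j, lb in lower.items() if j != i)) // i
            for i in lower}

n = 5
# Before any choice: no occurrences are forced yet.
print(tighten({1: 0, 2: 0, 3: 0, 4: 0}, n))   # {1: 5, 2: 2, 3: 1, 4: 1}

# After deciding series[0] = 2, the array contains a 2, so
# series[2] >= 1, which consumes 2 units of the right-hand side.
print(tighten({1: 0, 2: 1, 3: 0, 4: 0}, n))   # {1: 3, 2: 2, 3: 1, 4: 0}
```

After the choice, series[4] drops to 0 and series[3] is capped at 1, exactly the deductions made above.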

So, if you use the simple model that I've just shown you, you can solve magic series up to really, really large numbers. And if you actually implement this beautiful model, you will see a very nice pattern in the way these solutions are constructed, and the similarities that they have with each other. Okay, so the first role of Redundant Constraints is to express properties of the solutions. And because the redundant constraint is looking at the problem from a different angle, it's going to boost the propagation. So in a sense, what you are adding is this new constraint inside the overall model. You had all these constraints which were already there; they were pruning the search space independently. You add the new one, and that one will also reduce the domains and therefore will have an impact on the other ones. That's the first role of redundant constraints, okay?

So, I'm going to show you another role, the second role, which is to provide a global view, a more global view, okay? What I'm going to show you is that you can actually combine existing constraints, and that combination will improve the communication between the various variables, okay? So, I'm going to illustrate this with the market split problem, okay?

Â 8:59

And essentially, once again, you can think of it as a multi-knapsack, except that the constraints, instead of being smaller-or-equal, are going to be equalities, okay? So what you have is a set of constraints indexed by C and a set of variables indexed by V, okay? For every pair c in C and v in V, you have essentially a weight; you can think of an item having a weight in multiple dimensions, and that's w[c,v], okay? And then you have a right-hand side; you can think of it as a capacity, okay? The decision variables are going to be these values x; there is one variable for every element of V, okay? And then the constraints are essentially knapsack constraints, except with an equality and not a smaller-or-equal. What you do is basically multiply the weights and the variables, okay, for every one of these constraints, and you want that to be equal to the right-hand side, okay?

Now, these constraints are actually pretty, pretty weak from a propagation standpoint. They don't communicate with each other, only through the domains. And in this particular case, you have these linear constraints that are basically doing their work independently, and they communicate very late, okay? So one of the things you can do, okay, is use the concept of surrogate constraints. You take all these constraints and you add them up, every one with a different coefficient, okay? I'm basically using here alpha to the power of c, okay? By essentially multiplying the first constraint by alpha, the second constraint by alpha squared, the third constraint by alpha cubed, and so on, I can combine all these constraints. I also multiply the right-hand sides, sorry, the right-hand sides here. And therefore, this is essentially a linear combination of all the existing constraints, and it's valid, obviously. It's not going to remove any solution, and I can add it to the solver.
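Here is a sketch of the surrogate construction on made-up data (the weights, right-hand sides, and alpha below are illustrative, not from the lecture): multiply constraint c by a power of alpha and add everything up, on both sides.

```python
# Two equality knapsack constraints over x[0..2] (illustrative data):
#   3*x0 + 2*x1 + 4*x2 == 9
#   1*x0 + 5*x1 + 2*x2 == 8
w = [[3, 2, 4],
     [1, 5, 2]]
rhs = [9, 8]
alpha = 10

# Surrogate: multiply constraint c by alpha^(c+1) and sum, on both sides.
surrogate_w = [sum(alpha ** (c + 1) * w[c][v] for c in range(len(w)))
               for v in range(len(w[0]))]
surrogate_rhs = sum(alpha ** (c + 1) * rhs[c] for c in range(len(rhs)))

# Any x satisfying the original equalities satisfies the surrogate too.
x = [1, 1, 1]   # 3+2+4 == 9 and 1+5+2 == 8
assert all(sum(w[c][v] * x[v] for v in range(3)) == rhs[c] for c in range(2))
assert sum(surrogate_w[v] * x[v] for v in range(3)) == surrogate_rhs
print(surrogate_w, surrogate_rhs)   # [130, 520, 240] 890
```

The surrogate ties all the variables together in a single constraint, which is how it lets the solver propagate across the original constraints at once.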

So what I did was essentially take constraints that are completely independent and merge them together. And now they are working on the same set of variables, and they are going to prune the search space more, okay? So, surrogate constraints are used in Constraint Programming, and they are used in Mathematical Programming as well; we might talk about that later on, okay. But essentially what I'm doing is a simple combination, in this particular case a linear combination, of existing constraints, okay? So in a sense, a surrogate constraint is just a new constraint that I add, like the properties of the solutions that I showed you before. Except that this time I'm not expressing a property of the solutions per se, as something that I discover; I'm simply taking existing constraints, combining them, and getting this new constraint that I am putting inside the constraint solver. Once again, by definition, if I add new constraints, the only thing that can happen is that I prune more; in the worst case, I prune the same, okay? And so in some applications, it can make a big difference. Thank you.
