
So, in the last lecture, we saw that it was possible to pick the eigenvalues of the closed-loop system in such a way that they became equal to some desired eigenvalues, for the particular case of the point-mass robot. Now, today, I want to be a little bit more general in terms of how we go about doing this for arbitrary linear systems. And again, the whole idea here is that we pick our control gains such that the actual eigenvalues of the closed-loop system become equal to the desired eigenvalues. So, this is the whole idea, the philosophy, behind this approach.

So, let's say that the characteristic equation associated with my closed-loop system, which is again the determinant of (lambda*I - (A - BK)), is what this equation here says. Well, let's assume that if I compute this, I get the following expression. What I now do is pick my favorite eigenvalues, and in this case the lambda_i's are my favorite eigenvalues. So what I have is the actual characteristic equation, which is this thing, and the one I would have had, had the eigenvalues indeed been equal to my favorite eigenvalues. And all I do now is line up the coefficients. So I take b_(n-1), which has to be equal to a_(n-1). Now, I'm writing a_(n-1) as a function of K because it really is one: K is the control knob, or the control gain, that we have to solve these n equations for.
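As a small numerical sketch of this coefficient line-up (the A, B, K, and eigenvalues below are made-up placeholders, not the matrices from the slide):

```python
import numpy as np

# Made-up example system (not the lecture's matrices): a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[1.0, 2.0]])   # a gain that happens to place both poles at -1

# Actual characteristic polynomial coefficients of A - BK,
# highest power first: [1, a_{n-1}(K), ..., a_0(K)].
actual = np.poly(A - B @ K)

# Desired characteristic polynomial from the chosen eigenvalues:
# (lambda + 1)^2 = lambda^2 + 2*lambda + 1.
desired = np.poly([-1.0, -1.0])

# Pole placement means making these coefficient vectors line up.
print(actual)    # [1. 2. 1.]
print(desired)   # [1. 2. 1.]
```

With this particular K the coefficients already match, which is exactly the condition the n equations encode.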

So this is the general procedure, and it seems like magic; it seems we can control anything. So the first question we should ask is: is this always possible? The second question is: where do these desired eigenvalues come from? And the third is, you know, determinants are not that fun to compute. 2-by-2, fine. But when we go to higher-order systems, it becomes a pain.

Now, the first answer is, unfortunately, that this isn't magic; we can't always do it. So what we need to understand is, in fact, when we can do it and when we cannot do it. The answer to the second question is, unfortunately, that there isn't a recipe book that says here are the eigenvalues you need to use. In fact, it's a little bit of an art and a science, and the design choices that we make ultimately boil down to the choices of eigenvalues. So we have to discuss a little bit about how we pick our eigenvalues. Now, luckily for us, there's a good answer to the third question: we don't have to actually compute this. In MATLAB, we can very easily use something called the place command. So if I have my A and B matrices and I write down a vector P of my favorite eigenvalues, then I simply run K=place(A,B,P) to compute the K matrix that gives us the desired eigenvalues.
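Outside MATLAB, the same step can be sketched with SciPy, where place_poles plays the role of place (the matrices below are arbitrary placeholders, since no specific numbers are fixed at this point in the lecture):

```python
import numpy as np
from scipy.signal import place_poles

# Arbitrary controllable placeholder system (not from the lecture).
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
B = np.array([[0.0],
              [1.0]])
P = np.array([-1.0, -2.0])        # vector of favorite eigenvalues

result = place_poles(A, B, P)     # MATLAB equivalent: K = place(A, B, P)
K = result.gain_matrix

# The closed-loop eigenvalues should now be the desired ones.
print(np.sort(np.linalg.eigvals(A - B @ K).real))   # -> [-2. -1.]
```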

Cool. Okay, let's do some examples. Here, x-dot is whatever times x plus whatever times u. These are two arbitrary A and B matrices. Well, we're going to have to pick a K, and again, K in this case has to be 1-by-2. And the reason we see that is that A is 2-by-2 and B is 2-by-1. That means that K has to be 1-by-2, because the inner dimensions have to be the same; they cancel out, and what I end up with has to be of the right dimension. So that's why K, in this situation, has to be a 1-by-2 matrix.

So if I compute this, I get the following matrix. Well, here is the characteristic equation, or this determinant, which should be equal to 0. But we're actually not going to solve it. All we're going to do is compute this determinant. And again, the way we compute the determinant is this times this, minus this times this. And if you do that in this particular case, you get the following equation. Here is one coefficient that we're going to have to mess with, and here is the other coefficient that we're going to have to mess with.

Cool. So, moving on, this is our characteristic equation. Let's say again, for the sake of argument, that we want to place both our eigenvalues at negative 1; then this is what we would have had, had this indeed been true. So now I'm going to have to solve for these two coefficients being equal, and these two being equal. Those are the equations that we are forced to solve. So, you do that, first of all, for the coefficient in front of lambda, and after a while we get that k1+k2=5. Well, let's look at the coefficient in front of lambda^0, which means no lambdas. If we do that, we get k1+k2=1. Hey, wait a second. This is trouble, isn't it? k1+k2=5 and k1+k2=1? This doesn't seem all that promising; in fact, it's impossible. It can't be two things at the same time.
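The two conditions k1+k2=5 and k1+k2=1 form a linear system with no solution, which a rank check confirms (a small sketch of just these two equations, independent of the particular A and B on the slide):

```python
import numpy as np

# Coefficient matching gave: k1 + k2 = 5 and k1 + k2 = 1.
M = np.array([[1.0, 1.0],
              [1.0, 1.0]])    # left-hand sides in the unknowns k1, k2
b = np.array([5.0, 1.0])      # right-hand sides

# An exact solution exists only if appending b does not raise the rank.
rank_M = np.linalg.matrix_rank(M)                           # 1
rank_aug = np.linalg.matrix_rank(np.column_stack([M, b]))   # 2
print(rank_M, rank_aug)   # 1 2  -> inconsistent: no gain K exists
```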

Â So here, all of a sudden, we failed. We can't actually solve this.

Â And, what's really at play here is a lack of something called controllability.
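To see concretely how this failure can happen, here is a hypothetical uncontrollable pair (made-up matrices, not the ones from the lecture): the input never touches the second state, so that state's eigenvalue is stuck no matter what gain we pick.

```python
import numpy as np

# Hypothetical uncontrollable system: u only drives the first state,
# and the second state evolves on its own with eigenvalue 2.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
B = np.array([[1.0],
              [0.0]])

for K in (np.array([[3.0, 0.0]]), np.array([[10.0, -7.0]])):
    # The eigenvalue at 2 survives every choice of gain.
    print(np.sort(np.linalg.eigvals(A - B @ K)))
```

No matter how K is chosen, one closed-loop eigenvalue stays at 2, so pole placement on this pair is impossible.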

And controllability is this key term that describes whether we have enough influence, or control authority, over our system. And when I said that it is not always possible to do pole placement, this is exactly what I meant. If we don't have enough control authority, we can't do anything; in fact, you can do nothing in that case. You just have to hope for the best, or hold your nose. But we'll talk about that a little bit later. So for now, this is what can go wrong: lack of controllability. We don't know what controllability is yet, but that's something to be aware of. Now, let's pretend that we could do it

though. How do we pick the eigenvalues? Well, it's not clear, like I said. But there are some things we should keep in mind. Well, if the eigenvalues have a non-zero imaginary part... all right, first of all, let's say that sigma + j*omega is an eigenvalue. Then there has to be another eigenvalue.
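The reason is that the characteristic polynomial of a real matrix like A - BK has real coefficients, and real coefficients force complex roots to come in conjugate pairs. A quick check (the numbers are just an illustration):

```python
import numpy as np

# A conjugate pair gives a real-coefficient polynomial:
pair = np.poly([-0.5 + 1j, -0.5 - 1j])
print(pair)   # real coefficients: lambda^2 + lambda + 1.25

# A lone complex eigenvalue does not:
lone = np.poly([-0.5 + 1j, -2.0])
print(lone)   # complex coefficients: impossible for a real A - BK
```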

In this case I call it lambda_j: there has to be another one that is the complex conjugate of it, because eigenvalues have to show up in complex-conjugate pairs if indeed they are complex. So that's the first thing we should keep in mind: we can't just assign one complex eigenvalue; we have to assign two, if that's the case.

The other thing is, of course, that we need the real parts of the eigenvalues to be strictly negative, because otherwise we don't have asymptotic stability.

The next thing to know is that if, indeed, we keep an imaginary part around, we get oscillations. So, oscillations: that may be something we would like, though typically we don't. So if you don't like oscillations, the eigenvalues we pick are all real. If, for some reason, we want oscillations, then we have to introduce imaginary parts.

And the last thing is that the choice of eigenvalues actually affects the rate of convergence, meaning how quickly the system is stabilized. And in fact, the rate of convergence is dominated by the slowest eigenvalue. So let's say that I've actually picked a bunch of eigenvalues; I've done pole placement, so here are my eigenvalues, my lambdas. And they happen to be here, here, here, here, and here in the complex plane. Notice that here I have a complex-conjugate pair. Well, the eigenvalue that's closest to the imaginary axis is the slowest eigenvalue.
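Pushing the eigenvalues far into the left half-plane does speed things up, but the price shows up in the size of the gains. A sketch with a placeholder double-integrator system (hypothetical matrices, not the lecture's):

```python
import numpy as np
from scipy.signal import place_poles

# Placeholder system (not from the lecture): a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

K_slow = place_poles(A, B, [-1.0, -2.0]).gain_matrix
K_fast = place_poles(A, B, [-40.0, -50.0]).gain_matrix

# Faster poles -> much larger gains -> much larger control signals u = -Kx,
# which is what eventually saturates a physical actuator.
print(np.linalg.norm(K_slow))   # modest
print(np.linalg.norm(K_fast))   # orders of magnitude larger
```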

And this eigenvalue actually dominates the performance, in terms of how quickly the thing converges. So what you could do is make all your eigenvalues equal to minus a million; then you have a really, really fast system. The problem is that if you make them really, really fast, you get really large control signals, which means that any physical actuator is going to saturate. So you don't want to go super fast, because then you saturate your actuators. So what we need to do is somehow play around with these things to balance how fast or slow you want your system to be versus what size the control signal should be. So let's investigate these concepts a little bit, and we're going to do it in MATLAB.

So, I've picked some matrices randomly; these are my system matrices, and then I've picked some poles, or eigenvalues. In this case, I've picked a complex-conjugate pair, -0.5 +- j, where the imaginary unit has to be written as 1i in MATLAB. And then I run pole placement, K=place(A,B,P). And then all I do is compute the solution. So instead of me chitchatting about this, why don't we switch over to MATLAB here and actually see it happen.
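A Python stand-in for this MATLAB session, for readers following along without MATLAB (the matrices are placeholders, since the lecture's were generated randomly; the pole values mirror the clip: an oscillatory slow pair, then a pair with the same imaginary part but a larger negative real part):

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import place_poles

# Placeholder system matrices (the lecture's were random).
A = np.array([[0.0, 1.0],
              [-1.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([1.0, 0.0])

def closed_loop_response(poles, t_final=10.0, steps=200):
    """Place the poles, then simulate x_dot = (A - BK) x from x0."""
    K = place_poles(A, B, poles).gain_matrix
    Acl = A - B @ K
    ts = np.linspace(0.0, t_final, steps)
    return np.array([expm(Acl * t) @ x0 for t in ts])

slow = closed_loop_response(np.array([-0.1 + 1j, -0.1 - 1j]))
fast = closed_loop_response(np.array([-1.0 + 1j, -1.0 - 1j]))

# Both responses oscillate (non-zero imaginary parts), but the larger
# negative real part drives the state to zero much faster.
print(np.linalg.norm(slow[-1]), np.linalg.norm(fast[-1]))
```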

So, here is the same piece of code that you just saw, and these are the eigenvalues. And if you run this, we see that we have a system that slowly, slowly decays down to 0, possibly. But there are oscillations going on there, right? Clearly, because I have imaginary eigenvalues. Now, the real parts here, -0.1, determine how quickly the system is converging. So if I instead use this P matrix here, with the same imaginary part but a larger negative real part, then I should get an oscillatory response, but faster. And if I do that, we see here what's happening: this is the new system; it's still oscillating, but it's quicker getting down to 0, which is what we would expect.

Now let's get rid of the oscillations altogether. So if I now pick two purely real eigenvalues... and, in fact, the slowest one is negative 0.5, so that's going to determine how quickly we're moving. We run this, then, bam! See here: no oscillations. We're decaying down to zero quite nicely. But maybe we're thinking that this is a little bit too slow. So let's pick some other eigenvalues; in fact, negative 5 and negative 4. This should be dramatically quicker. And if I do that, bam, I get this: very quickly down to 0, a tiny bit of overshoot, and then we're stabilizing. So this is how we're going to have to play around a little bit with the eigenvalues to get the system performance that we're interested in. Next time, we're going to investigate a little bit more what exactly it was that broke when we couldn't place the eigenvalues the way we wanted.
