0:00

So, in the last lecture, we saw that it was possible to pick the eigenvalues of the closed-loop system in such a way that they became equal to some desired eigenvalues, for the particular case of the point-mass robot. Today, I want to be a little bit more general, in terms of how we go about doing this for arbitrary linear systems. And again, the whole idea here is that we pick our control gains such that the actual eigenvalues of the closed-loop system become equal to the desired eigenvalues. So, this is the whole idea, the philosophy, behind this approach.

So, let's say that the characteristic equation associated with my closed-loop system, which is, again, the determinant of (lambda*I - (A - BK)), is the one this equation here shows. Well, let's assume that if I compute it, I get the following expression.

Well, what I now do is pick my favorite eigenvalues, whatever I would like, and in this case the lambda_i's are my favorite eigenvalues. So what I have is an actual characteristic equation, which is this thing, and one that I would have had, had the eigenvalues indeed been equal to my favorite eigenvalues. And all I do now is line up the coefficients. So I take b_{n-1}, which has to be equal to a_{n-1}(K). Now, I'm writing a_{n-1} as a function of K because it really is one: K is the control knob, or the control gain, that we have to solve these n equations for.
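To make the coefficient matching concrete, here is a small sketch, in Python/NumPy rather than the lecture's MATLAB, using the point-mass robot from the last lecture as the assumed system:

```python
import numpy as np

# Point-mass (double-integrator) robot, assumed from the previous lecture:
# x_dot = A x + B u, with state feedback u = -K x.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# The closed-loop characteristic polynomial of A - B K with K = [k1, k2] is
#   lambda^2 + k2*lambda + k1.
# Lining up coefficients against the desired polynomial
#   (lambda + 1)^2 = lambda^2 + 2*lambda + 1
# gives the two equations k2 = 2 and k1 = 1.
K = np.array([[1.0, 2.0]])

# Verify: the actual closed-loop eigenvalues equal the desired ones.
eigs = np.linalg.eigvals(A - B @ K)
print(np.sort(eigs.real))   # both eigenvalues at -1
```

The same matching works for any order n; you just get n equations in the n entries of K.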

So this is the general procedure, and it seems like magic; it seems we can control anything. So the first question we should ask is: is this always possible? The second question is: where do these desired eigenvalues come from? And the third is: you know what, determinants are not that fun to compute. 2-by-2, fine, but when we go to higher-order systems, it becomes a pain.

Now, the first answer is, unfortunately: this isn't magic, and we can't always do it. So what we need to understand is, in fact, when we can do it and when we cannot. The answer to the second question is that, unfortunately, there isn't a recipe book that says here are the eigenvalues you need to use. In fact, it's a little bit of an art as well as a science, and the design choices that we make ultimately boil down to the choice of eigenvalues. So we will have to discuss a little bit how we pick our eigenvalues. Now, luckily for us, there is a good answer to the third question: we don't have to actually compute this determinant.

In MATLAB, we can very easily use something called the place command. So if I have my A and B matrices and I write down a vector of my favorite eigenvalues, then I simply run K=place(A,B,P) to compute the K matrix that gives us the desired eigenvalues.
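If you are working outside MATLAB, SciPy offers the same functionality through scipy.signal.place_poles. A sketch with made-up matrices (not the lecture's):

```python
import numpy as np
from scipy.signal import place_poles  # SciPy's counterpart to MATLAB's place

# Made-up A and B matrices for illustration.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])   # open-loop eigenvalues: 1 and -2 (unstable)
B = np.array([[0.0],
              [1.0]])
P = np.array([-1.0, -2.0])    # vector of favorite (desired) eigenvalues

K = place_poles(A, B, P).gain_matrix   # same role as K = place(A, B, P)

# The closed-loop matrix A - B K now has the desired eigenvalues.
print(np.sort(np.linalg.eigvals(A - B @ K).real))   # [-2. -1.]
```

For a single-input system like this one, the gain that achieves a given set of eigenvalues is unique, so both commands return the same K.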

Cool. Okay, let's do some examples.

Here is x-dot equals whatever times x plus whatever times u; these are two arbitrary A and B matrices. Well, we're going to have to pick a K, and again, K in this case has to be 1-by-2. The reason we see that is that A is 2-by-2 and B is 2-by-1, which means that K has to be 1-by-2: the inner dimensions have to match, they cancel out, and what I end up with, BK, has to be of the same dimension as A. So that's why K, in this situation, has to be a 1-by-2 matrix.

So if I compute this, I get the following matrix. Well, here is the characteristic equation, or rather this determinant, which should be equal to 0. But we're actually not going to solve it; all we're going to do is compute this determinant. And again, the way we compute a 2-by-2 determinant is this times this minus this times this. If you do that in this particular case, you get the following equation. Here is one coefficient that we're going to have to mess with, and here is the other coefficient that we're going to have to mess with.

Cool. So, moving on, this is our characteristic equation. Let's say, again for the sake of argument, that we want to place both our eigenvalues at negative 1; then this is the characteristic equation we would have had, had that indeed been true. So now I'm going to have to solve for these two coefficients being equal, and these two being equal. Those are the equations that we are forced to solve. So, you do that, first of all for the coefficient in front of lambda, and after a while we get that k1+k2=5. Well, let's look at the coefficients in front of lambda^0, which means no lambdas. If we do that, we get k1+k2=1. Hey, wait a second. This is trouble, isn't it? k1+k2=5 and k1+k2=1: this doesn't seem all that promising; in fact, it's impossible. The sum can't be two things at the same time. So here, all of a sudden, we have failed. We can't actually solve this.
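The same kind of contradiction can be reproduced with a small made-up system and checked numerically; with two states, the matrix [B, AB] must have full rank for the coefficient matching to be solvable, which is the rank test behind the term introduced next. A hedged sketch:

```python
import numpy as np

# Hypothetical system for which pole placement fails. With u = -K x, matching
# coefficients against (lambda + 1)^2 gives k1 + k2 = 4 from the lambda term
# and k1 + k2 = 0 from the constant term: a contradiction, just like the
# lecture's k1 + k2 = 5 versus k1 + k2 = 1.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
B = np.array([[1.0],
              [1.0]])

# The culprit is a rank-deficient matrix [B, A B] (two states, so we need rank 2).
ctrb = np.hstack([B, A @ B])
rank = np.linalg.matrix_rank(ctrb)
print(rank)   # 1, not 2: we cannot place the eigenvalues freely
```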

And what's really at play here is a lack of something called controllability. Controllability is the key term that describes whether we have enough influence, or control authority, over our system, and when I said that it is not always possible to do pole placement, this is exactly what I meant. If we don't have enough control authority, we can't do just anything; in fact, in that case we may be able to do nothing, and we just have to hope for the best or hold our noses. But we'll talk about that a little bit later. So, for now, this is what can go wrong: a lack of controllability. We don't know precisely what controllability is yet, but that's something to be aware of. Now, let's pretend that we could do it, though. How do we pick the eigenvalues?

It's not clear, like I said, but there are some things we should keep in mind. Well, if the eigenvalues have a non-zero imaginary part... all right, first of all, let's say that sigma + j*omega is an eigenvalue. Then there has to be another eigenvalue, in this case I call it lambda_j, that is the complex conjugate of it, because eigenvalues have to show up in complex-conjugate pairs if indeed they are complex. So that's the first thing we should keep in mind: we can't just assign one complex eigenvalue; we have to assign two, if that's the case. The other thing is, of course, that we need the real parts of the eigenvalues to be strictly negative, because otherwise we don't have asymptotic stability. The other thing to know is that if, indeed, we keep an imaginary part around, we get oscillations. So, if oscillations are something we would like (typically we don't want them), fine; but if we don't like oscillations, the eigenvalues we pick should all be real. If, for some reason, we do want oscillations, then we have to introduce

imaginary parts. And the last thing is that the choice of eigenvalues actually affects the rate of convergence, meaning how quickly the system is stabilized. In fact, the rate of convergence is dominated by the smallest eigenvalue, meaning the one whose real part is closest to zero. So let's say that I've actually picked a

bunch of eigenvalues here. I've done pole placement, so here are my eigenvalues, my lambdas, and they happen to be here, here, here, here, and here in the complex plane. Notice that here I have a complex-conjugate pair. Well, the eigenvalue that's closest to the imaginary axis is the smallest eigenvalue, and this eigenvalue actually dominates the performance, in terms of how quickly

the thing converges. So what you could do is make all your eigenvalues equal to minus a million; then you have a really, really, really, really fast system. The problem is that if you make them really, really fast, you get really large control signals, which means that any physical actuator is going to saturate. So you don't want to go super fast, because then you saturate your actuators. What we need to do is somehow play around with these things to balance how fast or slow you want your system to be against what size the control signals should be. So,

let's investigate these few concepts a little bit, and we're going to do it in MATLAB. So I've picked some matrices randomly; these are my system matrices. And then I picked some poles, or eigenvalues: in this case, a complex-conjugate pair, -0.5 ± j, which you have to write using 1i in MATLAB. And then I run pole placement,

K=place(A,B,P), and then all I do is compute the solution. So, instead of me chit-chatting about this, why don't we switch over to MATLAB and actually see it happen.
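Since the lecture switches to MATLAB here, a rough Python equivalent of the script (with stand-in matrices, since the lecture's were picked at random) looks like this:

```python
import numpy as np
from scipy.signal import place_poles
from scipy.linalg import expm

# Stand-in system matrices for the demo.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
P = np.array([-0.5 + 1.0j, -0.5 - 1.0j])   # complex-conjugate pair, as in the demo

K = place_poles(A, B, P).gain_matrix        # pole placement

# Simulate the closed-loop system x_dot = (A - B K) x from x(0) = [1, 0]
# using the matrix exponential.
x0 = np.array([1.0, 0.0])
ts = np.linspace(0.0, 10.0, 101)
traj = np.array([expm((A - B @ K) * t) @ x0 for t in ts])

print(abs(traj[-1]).max())   # the oscillatory response has decayed toward 0
```

Plotting traj against ts would show exactly the slowly decaying oscillation described below.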

So, here is the same piece of code that you just saw, and these are the eigenvalues. If you run this, we see that we have a system that slowly, slowly decays down to 0, eventually. But there are oscillations going on there, right? Clearly, because I have eigenvalues with nonzero imaginary parts. Now, the real parts here, -0.1, are determining how quickly the system is converging. So if I, instead, use this P here.

Same imaginary part, but a larger negative real part. Then I should get an oscillatory response, but a faster one. And if I do that, we see here what's happening: this is the new system; it's still oscillating, but it gets down to 0 quicker, which is what we would expect.

Now let's get rid of the oscillations altogether. So if I now pick two purely real eigenvalues, where, in fact, the smallest one is negative 0.5, that's going to determine how quickly we're moving. We run this, then, bam! See here: no oscillations. We're decaying down to zero quite nicely. But maybe we're thinking that this is a little bit too slow. So let's pick some other eigenvalues, in fact, negative 5 and negative 4; this should be dramatically quicker. And if I do that, bam, I get this: very quickly down to 0, a tiny bit of overshoot, and then we're stabilizing.

So this is how we're going to have to play around a little bit with the eigenvalues to get the system performance that we're interested in.
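One way to see the speed-versus-actuator trade-off numerically (again a Python sketch with made-up matrices): asking for much faster poles makes the entries of K, and hence the control signals, explode.

```python
import numpy as np
from scipy.signal import place_poles

# Made-up system; two choices of closed-loop poles, one slow and one very fast.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

K_slow = place_poles(A, B, np.array([-0.5, -1.0])).gain_matrix
K_fast = place_poles(A, B, np.array([-50.0, -100.0])).gain_matrix

# Larger gains mean larger u = -K x for the same state, which is what
# eventually saturates a physical actuator.
print(np.abs(K_slow).max())   # modest gains
print(np.abs(K_fast).max())   # orders of magnitude larger
```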

Next time, we're going to investigate a little bit more what exactly it was that broke when we couldn't place the eigenvalues the way we wanted.