0:00

Now we have lots of really good building blocks, but we haven't yet put them together, because we don't fully know how they fit together. We have controllability, which tells us whether or not it's possible to control the system if we have access to the state, and the way we do that is using state feedback. We have this notion of observability, which tells us whether or not it is possible to figure out the state from the output, and the way we do that is by building observers. And we have this tool that seems remarkably strong, which is pole placement, which basically allows us to place the closed-loop eigenvalues wherever we want.

So we make them equal to the desired eigenvalues. And the big question now is: how do we put everything together? The answer is known as the separation principle. In a nutshell, the separation principle, which by the way is quite wonderful, tells us that we can actually decouple observer design and control design from each other, meaning we can actually control the system as if we had x, even though we don't, and then we can get the estimate x-hat using an observer structure. So this is the topic of today's lecture, and it really is the reason why we're able to effectively control linear systems.

So, here's the game plan. I have x-dot = Ax + Bu, y = Cx. So this is a standard linear time-invariant system. Now, I'm going to assume that this system is both completely controllable and completely observable. If it's not then, to be completely frank, we're toast. What that means is we need to go and buy new sensors, which is fancy speak for "get a new C matrix," or we need to buy more actuators, which means "get a better B matrix." So let's assume that we have complete controllability and complete observability.

Well, the first step in our game plan is: let's ignore the fact that we don't have x. So I'm going to design the state feedback controller as if I had x, meaning I'm going to pick u = -Kx, which means that I get my closed-loop dynamics to be x-dot = (A - BK)x. Now, this is what I designed for, and I have my favorite pole placement tool to do this. In reality, I don't have that. Reality is that I have u = -K x-hat, where the hat denotes my estimated state. So that's what I actually have, even though that's not what I designed for.
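This first step can be sketched in a few lines. Continuing the hypothetical double-integrator example (the desired eigenvalues below are my own arbitrary choice, not the lecture's):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical double integrator (an assumption, not the lecture's system)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop eigenvalues: arbitrary, but strictly negative real parts
desired = [-2.0, -3.0]
K = place_poles(A, B, desired).gain_matrix

# Closed-loop dynamics under the (idealized) state feedback u = -K x
closed_loop = A - B @ K
print(np.sort(np.linalg.eigvals(closed_loop).real))  # approx [-3., -2.]
```

The eigenvalues of A - BK land exactly where we asked, which is the pole-placement guarantee under complete controllability.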

Now step two, of course, is that I'm going to estimate x using an observer, in order to get this x-hat and to make it be as pleasant as it can. The big thing that we should note now is that previously we didn't have a u term in the observer dynamics. Now we do have a u term that we need to take into account, but it turns out that it's very simple to do that. I build my predictor, and the predictor part now contains both an A x-hat term and a Bu term, because a predictor is just a copy of the dynamics. And then I have my corrector part, which is the error between the actual output and what the output would have been if I had x-hat instead of x. Well, this structure again gives me the same error dynamics as before. So what we do is pick L so that my estimation error is stabilized. And recall, the error is the actual state minus my estimated state: e = x - x-hat. So this is my game plan.
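Step two can be sketched the same way. One common trick (an assumption on my part; the lecture just says "pick L") is to place the observer poles by duality, since A - LC and its transpose A^T - C^T L^T share eigenvalues:

```python
import numpy as np
from scipy.signal import place_poles

# Continuing the hypothetical double-integrator example
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# By duality, placing the eigenvalues of A - LC is the same problem as
# placing those of A^T - C^T L^T.  Observer poles chosen faster than the
# controller poles (an arbitrary but common choice).
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T

print(np.sort(np.linalg.eigvals(A - L @ C).real))  # approx [-9., -8.]

# One Euler step of the observer: predictor (copy of the dynamics,
# including the Bu term) plus corrector L(y - C x_hat)
def observer_step(x_hat, u, y, dt=1e-3):
    x_hat_dot = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    return x_hat + dt * x_hat_dot
```

With the real parts of the eigenvalues of A - LC strictly negative, the error dynamics e-dot = (A - LC)e drive the estimation error to zero.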

Now, let's see if this game plan is any good. In fact, it should be good, right? Because otherwise I'm wasting everyone's time with these slides, but let's make sure that it indeed is worthwhile. What do we want this system to do? We want to drive x to zero, because we're stabilizing it, and we want to drive e to zero, because we want the estimate to be good.

So what I need to do is analyze the joint dynamics together. So x-dot = Ax + Bu, but u, if you remember, is -K times, not x, but x-hat, which is why I get my x dynamics to be x-dot = Ax - BK x-hat. Meanwhile my e dynamics, that is, the error dynamics, are what they have always been: e-dot = (A - LC)e. Okay, let's simplify this a little bit. I know that the error is e = x - x-hat, so I can replace this x-hat with x minus the error. Then I get my x dynamics, after some pushups, to be x-dot = (A - BK)x + BKe. So now I have something that involves x and e, while the error dynamics only involve e. So now I can actually write everything in a joint way: [x-dot; e-dot] equals a large matrix, not n-by-n but 2n-by-2n, times [x; e]. And now, our joint strategy works if and only if this new joint system is an asymptotically stable system, which means that we need to check the eigenvalues of this new system matrix. Now, here is where the separation principle comes into play. This is my dynamics.

Now, this matrix here is a rather special matrix, because it's block triangular: it has a block here, a block here, and a zero block there. And triangular matrices, or block triangular matrices, be they upper or lower triangular, have a particularly nice structure. So this is an upper block-triangular matrix, and its eigenvalues are given by the diagonal blocks. Which means that the eigenvalues of this whole matrix are the eigenvalues of this block, A - BK, together with the eigenvalues of this block, A - LC. Or, another way of writing it, the characteristic equation is the characteristic equation of the first block times the characteristic equation of the second block. All this means is that the eigenvalues are given by the eigenvalues of the diagonal blocks. And here is the wonderful part.
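This block-triangular fact is easy to verify numerically. Here is a minimal sketch with placeholder 2-by-2 blocks of my own choosing (standing in for A - BK, A - LC, and the coupling block BK):

```python
import numpy as np

# Placeholder blocks with known eigenvalues (an illustration, not the
# lecture's system).  M11 plays the role of A - BK, M22 of A - LC,
# and M12 of the coupling block BK.
M11 = np.array([[-2.0, 1.0],
                [ 0.0, -3.0]])   # eigenvalues -2, -3
M22 = np.array([[-8.0, 0.0],
                [ 5.0, -9.0]])   # eigenvalues -8, -9
M12 = np.array([[1.0, 2.0],
                [3.0, 4.0]])     # coupling block: affects transients only

# Upper block-triangular joint matrix [[M11, M12], [0, M22]]
M = np.block([[M11, M12],
              [np.zeros((2, 2)), M22]])

# The 4 eigenvalues of M are exactly the eigenvalues of the diagonal blocks
print(np.sort(np.linalg.eigvals(M).real))  # [-9. -8. -3. -2.]
```

Note that M12 has no influence on the spectrum; change it to anything you like and the eigenvalues stay the same.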

If we haven't been stupid in how we did the design, then this block has been stabilized, because we did pole placement to make sure that the real parts of its eigenvalues are strictly negative. This block we have also made sure is well behaved, because we have designed our observer in such a way that the real parts of its eigenvalues are strictly negative. Which means that we haven't messed anything up. Everything works. What that means is: control design people, we can design our controllers as if we had the state, and then we rely on our clever sensing people to estimate the state for us. And thanks to the separation principle, everything works. Now, the one thing to keep in mind is that we still have this coupling term, the BK block, which basically tells you something about what happens to transients. But after a while, this term doesn't really matter, and everything works. So now we are ready to state the separation principle. The separation principle tells us that we can in fact design controllers as if we have x. And then we can design the observers independent of the control actions, because all we're doing is adding a +Bu term in the observer dynamics, so the control actions are actually just canceled out. In other words, control and observer designs can be completely separated.

So, if you put everything together in a final, glorious block diagram, this is what the world looks like. We have our system. This is physics; this is what the system does. Now, we have modeled it using A, B, and C matrices, but what comes out of this thing is y, meaning our measurements, and what we push into this system is u, our control action. Now, we're taking y and feeding it into the observer. So the observer now is x-hat-dot = A x-hat + Bu + L(y - C x-hat), and the one thing to note is that we need both y and u to feed into the observer. Now, out of the observer comes x-hat, meaning our estimate of what the system is actually doing. And now we use x-hat in the feedback to get our u. And the beautiful thing here is that these two blocks together constitute the controller. So these two blocks are what's being done in software, and this is the physics of the world. So this is the plant, there's nothing we can do about that, and the controller now consists of two pieces: one piece that estimates the state and another piece that computes the control action.
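The whole block diagram can be simulated in a few lines. Here is a minimal sketch of the full loop on the hypothetical double integrator used above (gains, poles, and Euler discretization are all my own assumptions): the plant evolves as physics, the software only ever sees y, and yet both the state and the estimation error go to zero.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical double-integrator plant (an assumption, not the lecture's)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Controller and observer gains via pole placement (duality for L)
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T

dt, T = 1e-3, 10.0
x = np.array([[1.0], [0.0]])   # true state (unknown to the software)
x_hat = np.zeros((2, 1))       # observer starts with no information

for _ in range(int(T / dt)):
    u = -K @ x_hat             # control from the estimate, not from x
    y = C @ x                  # only the output is measured
    x = x + dt * (A @ x + B @ u)                           # plant (physics)
    x_hat = x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))  # observer

print(np.linalg.norm(x), np.linalg.norm(x - x_hat))  # both near zero
```

Despite the observer starting completely wrong, the transient coupling through BK dies out and both x and e = x - x-hat converge, which is the separation principle in action.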

So now we have everything we need to do effective control design, and what we'll do in the next lecture, which is the final lecture of this module, is that we'll actually deploy it. And in fact, we're going to see it in action on a humanoid robot where we're doing simultaneous control and state estimation.
