0:00

So now we have seen that control deals with dynamical systems in general, and robotics is one facet of this. What we haven't done is try to make sense of what this means in any precise or mathematical way. One thing we're going to need in order to do this is models. A model is an approximation, an abstraction, of what the actual system is doing. The control design is done relative to that model and then deployed on the real system. Without models we can't really do much in terms of control design; we would just be stabbing in the dark without knowing what's going on.

So models are actually key when it comes to designing controllers, because, if you remember, the question in control theory is really how we pick the input signal u. The input u takes the reference, compares it to the output, the measurement, and comes up with a corresponding course correction to what the system is doing.

Now, when you pick this kind of control input, there are a number of different objectives. The first one is always stability. Stability, loosely speaking, means that the system doesn't blow up. If you design a controller that makes the system go unstable, then no other objectives matter, because you have failed: your robots drive into walls, your aerial robots fall to the ground. Stability is always control objective number one.

Once you have that, and the system doesn't blow up, you may want it to do something more than not blow up. Something we're going to deal with a lot is tracking. Tracking means: here is a reference, say the value 14; how do we make our system produce 14? Or: here is a path; how do I make my robot follow this path, or how do I make my autonomous self-driving car follow a road? So tracking reference signals is another kind of objective.

A third important type of objective is robustness. Since we are dealing with models when we do our design, and models are never going to be perfect, we can't overcommit to a particular model, and we can't have our controller be too highly dependent on the particular parameters in the model. What we need to do is design controllers that are somewhat immune to variations in the model parameters, for instance. This is very important, and I'm calling it robustness.

A companion to robustness, and in fact one can argue that it's an aspect of robustness, is disturbance rejection. At the end of the day, we are going to be acting on measurements, and sensors have measurement noise. Things always happen: if you're flying in the air, all of a sudden you get a wind gust; that's a disturbance. If you're driving a robot, all of a sudden you go from a linoleum floor to carpet, and now the friction has changed. All of a sudden you have these disturbances entering the system, and your controllers have to be able to overcome them, at least reasonable disturbances, for the controllers to be effective.

Now, once you have that, we can wrap other questions around it, like optimality, which is not only how do we do something but how do we do it in the best possible way. And best can mean many different things: it could mean how do I drive from point A to point B as quickly as possible, or using as little fuel as possible, or while staying as close to the middle of the road as possible. So optimality can mean different things, and it is typically something we can layer on top of all these other objectives.

And in order to do any of this, we really need a model. Effective control strategies rely on predictive models, because without them we have no way of knowing what our control actions are actually going to do.

So, what do these models look like? Let's start in discrete time, which means that things happen at distinct time instances. In discrete time, what we typically have is that the state of the system (remember that x is the state) at time instance k + 1 is some function of what the state was yesterday, at time k, and what we did yesterday: x(k + 1) = f(x(k), u(k)). So the position of the robot tomorrow is a function of where the robot is today and what I did today, and f somehow tells me how to go from today's state and control to tomorrow's state. This is known as a difference equation, because it relates the values of the state across different time instances.

So that's discrete time, and here's an example: the world's simplest discrete-time system, a clock (or a calendar). Here x(k) is the time right now; add one second, and x(k + 1) = x(k) + 1 is the time one second later. This is clearly a completely trivial discrete-time system, but it is a difference equation. It's a clock, so if you plot it, you see the state, which is what time it is, as a function of time. It's silly: at 1 o'clock the state is one, at 2 o'clock the state is two, and so forth, and you get a line with slope one. This would be the world's simplest model. There are no control signals or anything in there, but at least it is a dynamic, discrete-time model that describes how a clock would work.

Now, the problem with this is that the laws of physics are all in continuous time, and when we're controlling robots we are going to have to deal with the laws of physics. Newton is going to tell us that force equals mass times acceleration; or, if we're doing circuits, Kirchhoff's laws are going to relate various quantities to the capacitances and resistances in the circuit. So we're going to have to deal with things in continuous time, and in continuous time there is no notion of "next." But we have something almost better, and that's the notion of a derivative, which is not "next," but which tells us the change with respect to time.

So in continuous time we don't have difference equations; what we have are differential equations. Here the derivative of the state with respect to time is some function of x and u: dx/dt = f(x, u). This is not telling me what the next value is; it's telling me the instantaneous change. And this is the same equation, but now I've written x-dot instead of dx/dt: time derivatives are often written with a dot, and I'm going to use that notation in this class. This actually traces back to the slight priority controversy between Newton and Leibniz, who both arrived at the idea of the differential at around the same time (Leibniz published his version in 1684). dx/dt is Leibniz's notation and the dot is Newton's notation, and we're going to use the dot for time derivatives here. The point is that these are both the same equation, and they are differential equations because they relate changes to the values of the states.

So if you go back to our clock, what would the differential equation of a clock look like? Well, it would be very, very simple: the rate of change of the time is one, x-dot = 1, which basically means the clock changes a second every second. So when I drew that picture of the discrete-time clock and drew the diagonal line across it, what I was really describing was this: the continuous-time clock, x-dot = 1.
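As a sketch (Python, with a hypothetical step size), marching the continuous-time clock x-dot = 1 forward in small time steps reproduces the same slope-one line as the discrete clock:

```python
dt = 1e-3                  # hypothetical small time step, in seconds
x, t = 0.0, 0.0
while t < 2.0 - dt / 2:    # integrate from t = 0 to t = 2
    x += dt * 1.0          # x-dot = 1: the clock gains a second every second
    t += dt
print(x)                   # after two seconds, x is (numerically) 2
```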

We are going to need, almost always, continuous-time models for our systems, and in the next couple of lectures we're going to start developing models of particular systems. But before we do, I want to say a few words about how to go from continuous time to discrete time, because our models are going to be continuous-time differential equations, but when we put them on a robot we put them on a computer, and the computer runs in discrete time. So somehow we need to map continuous-time models onto discrete-time models.

Now, if I say that sample x(k) is the state x at time k·Δt, where Δt is some sample time, then sample x(k + 1) is the state at the (k + 1)-th sample time, x((k + 1)Δt), and I need to relate this somehow to the differential equation. How do I do this? Well, it's not always easy to do exactly, but what you can note is a Taylor approximation: x(kΔt + Δt), which is exactly the next sample, is roughly what it was at the last sample plus the length of the time interval times the derivative, x(kΔt) + Δt·x-dot(kΔt). This is an approximation, but the cool thing is that x(kΔt) is just x(k), and x-dot(kΔt) is f(x(k), u(k)). So these are the same things, and I just have to multiply by Δt: x(k + 1) ≈ x(k) + Δt·f(x(k), u(k)). This is a way of getting a discrete-time model from the continuous-time model, and this is how we're going to take the things we do in continuous time and map them onto the actual implementations on computers that ultimately run in discrete time.
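Putting that Taylor-approximation step into code gives the standard forward-Euler discretization. The scalar model x-dot = -x + u below is a hypothetical example, chosen because its exact solution is known and lets us see the size of the approximation error:

```python
import math

def euler_discretize(f, x0, u_seq, dt):
    """Discrete-time model obtained from x-dot = f(x, u) via the
    forward-Euler step x[k+1] = x[k] + dt * f(x[k], u[k])."""
    xs = [x0]
    for u in u_seq:
        xs.append(xs[-1] + dt * f(xs[-1], u))
    return xs

# Hypothetical test model: x-dot = -x + u with u = 0, so exactly x(t) = x(0)*exp(-t).
dt, steps = 1e-3, 1000
xs = euler_discretize(lambda x, u: -x + u, 1.0, [0.0] * steps, dt)
print(abs(xs[-1] - math.exp(-1.0)))  # small discretization error at t = 1
```

Shrinking Δt shrinks the error, at the cost of more steps per second of simulated time, which is exactly the trade-off a robot's onboard computer faces.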
