And then if you're off a little bit, you might have to add more thrust, or

if you're too high, you might reduce the thrust slightly, right?

So this is the reference part, the 10 Newtons.

The delta u becomes the feedback part, because you can solve for the actual u.

The control then is ur plus delta u.

The feedforward part plus the feedback part.

That's kind of the structure.
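That u = ur + delta u structure can be sketched in code. This is a minimal illustration, not from the lecture: the altitude state, the hover thrust, and the proportional feedback gain are all assumed for the example.

```python
# Sketch of the feedforward + feedback control structure u = u_r + delta_u.
# All numbers and the gain are illustrative assumptions.

def control(x, x_ref, u_ref, k=2.0):
    """Total control = feedforward (reference) part + feedback part.

    x     : measured altitude (m)  -- hypothetical state
    x_ref : reference altitude (m)
    u_ref : reference thrust (N), e.g. 10 N to hover
    k     : assumed proportional feedback gain
    """
    delta_x = x - x_ref      # departure from the reference motion
    delta_u = -k * delta_x   # feedback part: push back toward the reference
    return u_ref + delta_u   # u = u_r + delta_u

# On the reference, the feedback part vanishes and only feedforward remains:
u_on_ref = control(x=5.0, x_ref=5.0, u_ref=10.0)   # 10.0 N, pure feedforward
# Too high: thrust is reduced slightly below the hover value:
u_too_high = control(x=5.1, x_ref=5.0, u_ref=10.0) # just under 10 N
```

The feedback term here is just a proportional gain for illustration; any stabilizing law for delta u fits the same structure.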

So the reference trajectories we have generate our feedforward part

of the control.

If the UAV were to accelerate, well, to accelerate it has to have more thrust, but

you can compute that.

So nominally, you'd ramp up from 10 to 20 Newtons over the time period.

That'll give you the history that you want.

That will again be your reference.

So you as a user design this, and

this is the reference control that would achieve the reference motion.

Hover, start to rise, stop.

You can come up with an open loop force history, that would have to happen.
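That hover-rise open-loop force history might look like the sketch below. The 10 and 20 Newton values follow the numbers mentioned above; the breakpoint times are assumed for illustration.

```python
# Hypothetical open-loop (feedforward) thrust history for hover -> rise:
# hover at 10 N, ramp to 20 N over a chosen interval, then settle back.
# The breakpoint times t0, t1 are assumed, not from the lecture.

def u_ref(t, t0=1.0, t1=2.0, u_hover=10.0, u_max=20.0):
    """Reference thrust in Newtons at time t (seconds)."""
    if t < t0:
        return u_hover                             # hover
    if t < t1:
        frac = (t - t0) / (t1 - t0)                # 0 -> 1 across the ramp
        return u_hover + frac * (u_max - u_hover)  # ramp 10 N -> 20 N
    return u_hover                                 # back to hover thrust
```

Evaluating u_ref along the trajectory gives the open-loop force history that, nominally, produces the reference motion.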

So that's what we are linearizing about.

So if you look at an actual Dynamical System that's nonlinear,

we have control applied.

We want to linearize the departures, delta x is x minus xr.

So you put dots on everything and say, okay,

my delta x dot is going to be x dot minus xr dot.

And xr dot is just f of x r and u r.

Right, we've already got that one.

But the x dot, we are now going to linearize with the Taylor series.

So if you remember Taylor series, we have y equal to f of x and

you want to linearize about x equal to five.

You put in f of five, plus the first partial times the delta,

plus the second partial terms and so on.

And that's what we're seeing here.
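As a concrete scalar instance of that expansion (the function f(x) = x squared is an assumed example, not from the lecture):

```python
# First-order Taylor expansion of an assumed function f(x) = x**2
# about the reference point x = 5, as in the scalar example above.

def f(x):
    return x**2

def f_linearized(x, x_ref=5.0):
    """f(x_ref) + f'(x_ref) * (x - x_ref), with f'(x) = 2x here."""
    return f(x_ref) + 2.0 * x_ref * (x - x_ref)

# Near the reference, the linear model is accurate to first order:
exact = f(5.1)              # about 26.01
approx = f_linearized(5.1)  # 26.0; the 0.01 gap is the second-order term
```

The dropped second-order term scales with the square of the departure, which is why the approximation is only trustworthy near the reference.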

We're doing first order stuff, so your Taylor series expansion of a function is

going to be the function evaluated at the reference plus the first partial

with respect to the states, evaluated at the reference.

That's why it's at xr and ur, multiplied by the small departure.

But we not only have departures in states, we also have departures in controls.

We won't be putting in just the open loop control, we may have to stabilize it, so

we have a delta u that happens as well, so we take the partial of

f with respect to your control variable and then apply the small departures.

And everything else is higher ordered terms.

So if you do this, this minus this always cancels, and

you're left with an equation here that's basically delta x dot

equal to this partial times delta x plus another partial times delta u.
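Written out, the cancellation just described is:

```latex
\dot{x}_r = f(x_r, u_r), \qquad
\dot{x} \approx f(x_r, u_r)
  + \left.\frac{\partial f}{\partial x}\right|_{x_r, u_r} \delta x
  + \left.\frac{\partial f}{\partial u}\right|_{x_r, u_r} \delta u,

\delta\dot{x} = \dot{x} - \dot{x}_r
  = \left.\frac{\partial f}{\partial x}\right|_{x_r, u_r} \delta x
  + \left.\frac{\partial f}{\partial u}\right|_{x_r, u_r} \delta u
```

The f(xr, ur) term subtracts out because it is exactly xr dot, leaving only the first-order departure dynamics.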

This form, some of you may not have seen.

Many of you will have seen this form, though.

Basically, you've got x dot equal to A x plus B u, right?

That's the classic form: that's the plant matrix, that's the control matrix,

that's where they come together.

But for a nonlinear system, this is how you find that A and that B.

It's the partial of the actual nonlinear f function,

with respect to the states, at the reference.

That's what gives you the A matrix.

And the other partial, that's what gives you the B matrix.
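A quick numerical sketch of how those partials give A and B, using finite differences on an assumed one-dimensional vertical-motion model (the mass, gravity constant, and hover reference are illustrative, not from the lecture):

```python
import numpy as np

# Jacobians A = df/dx and B = df/du at the reference, approximated with
# central finite differences. Assumed toy model of vertical UAV motion:
# state x = [height, velocity], control u = thrust (N).

m, g = 1.0, 9.81   # assumed mass (kg) and gravity (m/s^2)

def f(x, u):
    """Nonlinear dynamics x_dot = f(x, u) for the toy UAV."""
    return np.array([x[1], u / m - g])

def jacobians(f, x_r, u_r, eps=1e-6):
    """A = df/dx and B = df/du evaluated at (x_r, u_r)."""
    n = len(x_r)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A[:, i] = (f(x_r + dx, u_r) - f(x_r - dx, u_r)) / (2 * eps)
    B = (f(x_r, u_r + eps) - f(x_r, u_r - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

x_r = np.array([5.0, 0.0])   # hover at 5 m, zero velocity
u_r = m * g                  # hover thrust balances gravity
A, B = jacobians(f, x_r, u_r)
# A is approximately [[0, 1], [0, 0]]: height changes with velocity.
# B is approximately [[0], [1/m]]: thrust enters the velocity equation.
```

For this toy model the Jacobians could be written down by hand; the finite-difference version just shows that A and B are nothing more than those partials evaluated along the reference.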

So I expect you to do the Taylor series expansion.

It is also in vector form here.

We've done this kind of partials before, when we did that one-over-r-cubed expansion in class with

the gravity gradient derivation and so forth.

So it's the same math as being applied here.

That's one of the homeworks: you solve this for some systems as well.

So I'll let you guys work with that.

But you can always do this as well, and then you can argue linear stability.

But just realize, if you do this approach,

at best you've only argued local stability.

Not global.