0:06

So let's cover Davenport's q-method.

It's a classic method.

You don't see it flown much any more, but

many formulations are built on the answers that come from this.

So it's a good one to be aware of.

And so we'll talk about the benefits, this magic math that happens, but also the challenges of implementing this.

QUEST is the one that's probably flown the most out there.

But there are all kinds of other modifications these days that people have

done to even that.

So this research continues; people still look at this stuff.

So with the q-method, this comes from the quaternion.

Our book, at least, uses betas.

And the ordering, like I said, depends on which source you look at:

the scalar element might be first or it might be last,

so it just kind of depends on those things.

So for that cost function, we needed the norm of this squared.

The norm of a vector squared is the same thing as v dotted with v, all right?

That gives you the norm of v squared, or

in matrix form, v transpose v is equivalent to the dot product.

So I'm just taking that residual vector, transposing it with itself.

That's going to give me the norm squared of the residuals.

I'm summing them up.

Now with this matrix math, if you carry it out, there'll be v in the B frame transpose,

v in the B frame.

Well, these are all unit vectors, and a unit vector representation dotted with itself,

the transpose with itself, always gives you 1.

It has to, for a unit vector.

Same thing here: if you do this transpose with this, you have BN transpose BN.

Because of orthogonality, BN transpose BN just gives you identity,

and you end up with v in the N frame transpose, v in the N frame again.

A unit vector norm squared is just 1.

So, 1 plus 1 is 2, minus these cross terms:

this one transposed with that, and that one transposed with this.

Looks like they're opposites.

But the answer's a scalar.

That's a convenient trick in matrix math:

if the answer's a scalar, you can always transpose that term.

Which of course reverses the matrix math order, and

every element gets transposed, all right?

(a times b) transpose is b transpose times a transpose.

That identity, that's what you use,

because now you can group them together and you get minus 2 times that one term;

they're actually identical.

If you do that, the factor of 2 comes out.

I have a one-half, so that's going to completely drop out, and

you end up with this.

So this is exactly the cost function.

I've just done some matrix math to manipulate it a little bit.

But it will turn out this is a pretty convenient form that we have.
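Written out, the manipulation just described is, as a sketch in the book's notation (with [BN] the attitude matrix, hats for unit vectors, and w_k the weights):

```latex
J = \frac{1}{2}\sum_k w_k
    \left(\hat{\mathbf v}^B_k - [BN]\,\hat{\mathbf v}^N_k\right)^T
    \left(\hat{\mathbf v}^B_k - [BN]\,\hat{\mathbf v}^N_k\right)
  = \frac{1}{2}\sum_k w_k\left(
    \underbrace{\hat{\mathbf v}^{B\,T}_k \hat{\mathbf v}^B_k}_{=\,1}
    - 2\,\hat{\mathbf v}^{B\,T}_k [BN]\,\hat{\mathbf v}^N_k
    + \underbrace{\hat{\mathbf v}^{N\,T}_k [BN]^T [BN]\,\hat{\mathbf v}^N_k}_{=\,1}
    \right)
  = \sum_k w_k - \sum_k w_k\,\hat{\mathbf v}^{B\,T}_k [BN]\,\hat{\mathbf v}^N_k
```

The two underbraced terms are the unit-vector norms that give the 1 plus 1, and the two cross terms combine into the minus 2 term after the scalar-transpose trick.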

2:57

But this cost function has two terms.

This term,

the weights times 1 summed up, is just going to be the sum of the weights.

And you pick the weights, so nothing is going to happen with that.

The second term, this is the part you adjust:

we're finding the correct BN matrix such that this is minimized.

So minimizing J is equivalent to maximizing this g function.

This g function is that second term, the weights times this other stuff.

And because it's minus g, that's what's in there.

So if you look back here, these weights times this is g.

So this is the sum of the weights minus g.

If you want to make J as small as possible, the sum of the weights is fixed.

You have to make G as big as possible.

So we've replaced a minimization

of J with an equivalent maximization of g.

Okay, so keep that in mind: we want to make g as big as possible.

That will be the best fit of the attitude measure.

Now how do we do that?

Well, Davenport found a way to do it with quaternions.

So we're going to take this BN matrix and write it in terms of quaternions.

You've seen it element by element.

This is a nice matrix compact way to write that thing.

And so this is the BN matrix in terms of the quaternion.

Where epsilon is the vectorial part, and beta nought is the scalar part, right?

Now this is where the magic happens.

We're not doing this in class.

But Davenport proves that this quantity, written out in

terms of quaternions, this g function,

can be rewritten into this beautiful, elegant quadratic form.

4:38

Quadratic functions are amazing for optimizations.

They make life much easier.

There are whole fields on convex optimizations.

So we do this, but there's a K matrix, which is a 4 by 4, because this beta set is

a 4 by 1, and so the answer has to be a scalar still.

We want to find the beta set, the quaternion set, that pre- and post-

multiplied on the K matrix makes this g function as big as possible.

Right, this is a maximization now of g to minimize the cost function, j.

Now how is this K defined?

First thing you do is take the observations, the v hats in the B frame,

and do an outer vector product with the same observations that you know.

You know your environment;

you know which way they're supposed to be pointing in the N frame, right?

That's all the stuff that's given.

This is the part that we measure.

And you multiply times the weights.

So the weights are embedded inside the B matrix.

This vector is a three by one, and this would be a one by three,

so the answer gives you a three by three.

So with the vector outer product,

every element you're summing is a three by three,

and the answer B is a three by three.
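As a quick sketch of that accumulation (hypothetical helper names, pure Python with vectors as length-3 lists, no libraries):

```python
# Sketch of building B = sum_k w_k * vB_k * vN_k^T from the observations.
# Names (outer, build_B) are illustrative, not from the lecture.

def outer(a, b):
    """Vector outer product: 3x1 times 1x3 gives a 3x3 (list of lists)."""
    return [[a[i] * b[j] for j in range(3)] for i in range(3)]

def build_B(weights, vB, vN):
    """Accumulate the 3x3 B matrix from weighted unit-vector pairs.

    vB holds the measured unit vectors in the body frame,
    vN the known unit vectors in the inertial frame.
    """
    B = [[0.0] * 3 for _ in range(3)]
    for w, b, n in zip(weights, vB, vN):
        M = outer(b, n)
        for i in range(3):
            for j in range(3):
                B[i][j] += w * M[i][j]
    return B
```

With two perfectly matched observations along the first two axes and unit weights, this returns diag(1, 1, 0), since each outer product of an axis with itself puts a 1 on that diagonal.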

5:47

Now we use that as a stepping stone.

You can see the K matrix is decomposed like this.

Sigma is a scalar, Z here is a 3 by 1, and S is a 3 by 3, and

of course this identity operator is also a 3 by 3.

So there's some partitioning of how you assemble this.

The S matrix is simply B plus B transpose,

which makes it a symmetric matrix.

Sigma, that appears here and here, is simply the trace of the B matrix.

So that's just the sum of the three diagonal terms of this B matrix.

And the Z is defined as differences of these off-diagonal terms.

That's kind of how the math works out.
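A sketch of that partitioning in pure Python, assuming the scalar-first quaternion convention (the lecture notes the ordering varies by source; the function name is made up for illustration):

```python
# Assemble Davenport's 4x4 K matrix from a given 3x3 B matrix,
# assuming the scalar-first (beta0 first) quaternion convention.

def build_K(B):
    """Partition K as [[sigma, Z^T], [Z, S - sigma*I]]."""
    sigma = B[0][0] + B[1][1] + B[2][2]          # trace of B
    # S = B + B^T, a symmetric 3x3
    S = [[B[i][j] + B[j][i] for j in range(3)] for i in range(3)]
    # Z from differences of the off-diagonal terms of B
    Z = [B[1][2] - B[2][1],
         B[2][0] - B[0][2],
         B[0][1] - B[1][0]]
    K = [[0.0] * 4 for _ in range(4)]
    K[0][0] = sigma
    for i in range(3):
        K[0][i + 1] = Z[i]
        K[i + 1][0] = Z[i]
        for j in range(3):
            K[i + 1][j + 1] = S[i][j] - (sigma if i == j else 0.0)
    return K
```

For example, feeding in B equal to the 3 by 3 identity gives sigma = 3, S = 2I, Z = 0, so K comes out diagonal with entries (3, -1, -1, -1).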

So Davenport proved this.

I even asked Landis, who worked at Goddard,

where he was a colleague of Davenport at some point:

well, how did you ever come up with this?

And even Landis was saying, I don't know.

[LAUGH] He quietly went away and worked on something, and shows up and

goes, I think I've got something interesting.

Holy shit, that's cool.

Because now, if you can do this, let's look at the magic math that happens.

How do we get here?

If it seems mystical, it is.

It's even worse than the Euler parameter properties.

It takes all this math to prove it.

But once you find it, it's like, wow, this is very powerful.

So fundamentally, we're taking g, and we want to maximize it.

And we have to find the attitude measure:

what is the right quaternion set that makes g as big as possible?

So let's look at this further.

7:17

Let's look.

If I have a function y(x), and

you look at this function y(x), it does stuff.

If you want to find the extreme points of this function y,

what's the classic operation you have to do? How do we find these points?

Differentiate, right, with respect to x.

So you say the partial of y with respect to x.

And that derivative has to be what for an extreme point?

Zero, right; that finds all the flat spots.

7:53

That could happen.

Now, you don't know if they're maximums or minimums.

You have to look at local curvatures or

different numerical techniques to find that kind of stuff.

But to find the extremums, it's just taking a cost function and

taking the derivatives with respect to those things that you're estimating.

And then you're looking for all the places where those could be zero, right?

Now that's fine if it's an unconstrained cost function.

Here, the MRPs are limited in that I

cannot just have an MRP of 000.

Or here, if you want to maximize this, the answer would be make beta infinity,

infinity, infinity, infinity.

8:30

I challenge anybody to come up with a bigger number than that:

infinity squared, summed up.

Why is that not the correct answer?

Well, that's cheating, because we know

betas live on this four-dimensional sphere, right?

And it's a unit sphere.

So this is not just an unconstrained optimization problem,

this is in fact a constrained optimization.

Now, I'm just going to show you how to do this;

not everybody in this class will have seen this stuff.

So some of you may have seen Lagrange multipliers, some of you may not have.

But essentially this is what's called an augmented cost function.

You take the original cost function for which we want to find an extremum,

and you add, or subtract, really, a multiple of the constraint.

9:11

The constraint is written in a form where, if it's satisfied,

that constraint function is zero.

That's always how these things are formulated.

That's why you see basically the sum of the beta squared, minus 1,

has to be equal to zero.

And then times lambda.

Lambda is your Lagrange multiplier that you have to find.
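As a sketch, the augmented cost function being described can be written:

```latex
\tilde g(\boldsymbol\beta, \lambda)
  = \boldsymbol\beta^T [K]\, \boldsymbol\beta
  - \lambda \left(\boldsymbol\beta^T \boldsymbol\beta - 1\right)
```

The second term vanishes whenever the unit constraint is satisfied, so on the constraint surface, extremizing the augmented function extremizes g itself.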

So if you've seen this before hopefully it makes sense.

If you haven't seen it it's not a big deal, not important for

the class otherwise.

But this is how you can do it.

So if you find a set of betas subject to this being satisfied,

then this extra term actually doesn't matter in the end, right?

But if you pick infinity, infinity,

infinity, it's going to greatly impact your answer,

and mathematically we can't find an extremum.

So good, so we want to find the extremum but it's a constrained extremum function.

So now we take the derivative of this.

For this part with the matrix, I can find the derivatives.

And yeah, this is kind of like x squared:

the derivative with respect to x ends up being 2 times x.

Instead of a scalar, I have a matrix here, and this is kind of like your x squared.

So the derivative of this just gives you 2 times the matrix times x.

And if you want to, you can just carry it out as a simple math problem in

component form,

and prove this to yourself if you haven't seen that.

Great, same thing here: beta transpose beta is really just x squared,

1 times x squared.

That'll just give you 2 times x,

and I have a lambda scalar in front of it.

The derivative of the lambda times 1 term, a constant, vanishes in this case.

10:44

So we've taken derivatives with respect to beta.

Sorry, the 1 and the lambda don't depend on beta.

This is what you end up with.

We set this partial equal to 0, like y prime equal to 0.

That's what gives us the flat spots.

So the 2s we can cancel out.

And you end up with this relationship.

K times beta estimated has to be equal to lambda times beta estimated.
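In symbols, the stationarity condition just derived is:

```latex
\frac{\partial \tilde g}{\partial \boldsymbol\beta}
  = 2[K]\boldsymbol\beta - 2\lambda\boldsymbol\beta = \mathbf 0
  \quad\Longrightarrow\quad
  [K]\,\hat{\boldsymbol\beta} = \lambda\,\hat{\boldsymbol\beta}
```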

11:13

Yeah, this is an eigenvector, eigenvalue problem.

But instead of on a 3 by 3, as we did earlier when we related it to the e hats,

where the eigenvector with a plus 1 eigenvalue of the DCM was the principal

rotation axis, this is now a 4 by 4 matrix.

So we can show that the extremums of this, not just the maximums, I should say,

which could be maximums or minimums,

the answers are going to be these eigenvectors.

Now, a four by four matrix like this will have four possible answers.

There will be four possible lambda values and four possible eigenvectors.

Remember, eigenvectors are not unique in scale.

If one zero zero is an eigenvector, then so is two times one zero zero, right?

We are always going to pick the ones that are normalized, because we know those

are the answers we're looking for.

12:23

But I don't feel lucky, so I want to have some math to prove which one to use.

If we go back to the original cost function,

it was basically beta transpose times K times beta that we had, right?

That was the quadratic measure.

And I know this is the condition for

an extremum, so the K times beta I can plug in as lambda times beta.

12:43

The Lagrange multiplier is your eigenvalue of the K matrix.

So we plug that one in, beta transpose lambda times beta.

The lambda is a scalar.

So you can move that anywhere in that matrix math.

I'm moving it up front.

Now I have beta transpose beta.

I know that has to be one, because it's a unit quaternion set.

So if I apply this condition to my original cost function,

it turns out this cost function will always just be lambda.
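That substitution, written out:

```latex
g = \hat{\boldsymbol\beta}^T [K]\, \hat{\boldsymbol\beta}
  = \hat{\boldsymbol\beta}^T \left(\lambda\, \hat{\boldsymbol\beta}\right)
  = \lambda\, \underbrace{\hat{\boldsymbol\beta}^T \hat{\boldsymbol\beta}}_{=\,1}
  = \lambda
```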

It's a four by four matrix, I will have four eigenvalues.

Which lambda do I pick now of those four?

>> The biggest.

>> The biggest, right?

Because the goal is, I had to maximize g to minimize my cost function J, right?

So it's a maximization.

So, out of all the possible answers, the answer is it's the eigenvector

corresponding to the maximum eigenvalue of that K matrix.

And that's it.

Yes?

>> Why don't we just differentiate

it again to find the [INAUDIBLE] multiplication?

13:53

>> Because you're basically done at this point.

Because numerically, I mean, I could do that, but then I come up with curvatures.

You need to figure out what's a minimum, what's a maximum, and so

forth, if the second order is precise enough.

But in this case, numerically, you could even do the higher order and

then apply this for every one of the four possible answers.

I have to find the four possible answers first.

And here you must solve the eigenvalue, eigenvector problem.

And just knowing which one is the largest tells me that's it,

because I want to make g as big as possible.

So you could do that, but once you have this insight,

it would just slow down your algorithm.

14:38

So these are basically the steps, and we'll go through the mathematics once.

You set up your observations.

In the homework, you do the same.

But then you do those outer products to get the B matrix, and

then from the B matrix, the trace and transpose,

you assemble the K matrix, the 4 by 4.

Once you have it, you find the eigenvalues, the eigenvectors.

Don't do this by hand, please.

That'd be a complete waste of time at this point.

Use a computer, use MATLAB, Mathematica, something, and

ask it to solve eigenvector, eigenvalues.

It'll be very happy to do this for you,

and it'll spit out four answers, all right?

Then you find the largest eigenvalue and the associated eigenvector.

Remember, eigenvalues and eigenvectors come in pairs.

If MATLAB spits out minus 1, plus 1, 0.5, minus 0.5,

it was the second element that was the largest.

Pick the second eigenvector MATLAB gave you; that's the one.
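The steps just listed can be sketched end to end in plain Python. This is a minimal illustration, not flight code: the function name and example data are made up, and instead of calling a library eigensolver (as the lecture suggests doing with MATLAB) it uses a shifted power iteration, a substitution of mine, to pull out the largest eigenvalue of K.

```python
# End-to-end sketch of Davenport's q-method, scalar-first quaternion convention.
# Inputs: weights, body-frame unit vectors vB, inertial-frame unit vectors vN.

def q_method(weights, vB, vN, iters=200):
    """Return (lambda_max, beta), the largest eigenvalue of K and its
    normalized eigenvector, which is the best-fit quaternion."""
    # B = sum_k w_k * vB_k * vN_k^T
    B = [[sum(w * b[i] * n[j] for w, b, n in zip(weights, vB, vN))
          for j in range(3)] for i in range(3)]
    sigma = B[0][0] + B[1][1] + B[2][2]
    S = [[B[i][j] + B[j][i] for j in range(3)] for i in range(3)]
    Z = [B[1][2] - B[2][1], B[2][0] - B[0][2], B[0][1] - B[1][0]]
    K = [[sigma] + Z] + \
        [[Z[i]] + [S[i][j] - (sigma if i == j else 0.0) for j in range(3)]
         for i in range(3)]
    # Every eigenvalue of K satisfies |lambda| <= sum(w), since g is bounded
    # by the weights. Shifting by sum(w)*I makes the matrix positive
    # semidefinite, so power iteration converges to the eigenvector of
    # K's largest eigenvalue (barring a starting guess exactly orthogonal
    # to it, which a robust implementation would guard against).
    s = sum(weights)
    beta = [1.0, 0.0, 0.0, 0.0]
    for _ in range(iters):
        y = [sum((K[i][j] + (s if i == j else 0.0)) * beta[j]
                 for j in range(4)) for i in range(4)]
        norm = sum(v * v for v in y) ** 0.5
        beta = [v / norm for v in y]
    # Recover lambda_max as beta^T K beta, which also equals g at the optimum.
    lam = sum(beta[i] * K[i][j] * beta[j] for i in range(4) for j in range(4))
    return lam, beta
```

For two matched observations with unit weights and no rotation between the frames, this returns lambda near 2 (the sum of the weights) and beta near the identity quaternion. Note the iteration may converge to either sign of beta; as discussed next, both are valid under the unit constraint.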

16:26

>> The positive-

>> Yeah, the positive scalar, right?

So MATLAB may give you one with a positive first term, or not.

It has no idea.

It has some internal algorithm that just flips a coin and says, of the two vectors,

this is one I'm giving you.

Right, it's still you as the user who gets to decide: this is the one I want to use.

They're both perfectly valid.

It's just one might be giving you 359 degrees and

the other one tells you it's minus 1, all right?

So you do get two possible answers, as you would expect, with the unit constraint.