0:06

The QUEST method.

The biggest difference with QUEST is speed, as we just mentioned earlier.

It's solving the same cost function.

So really, if you fully implement QUEST, there should be no numerical difference in the answers.

I should get exactly the same accuracy that I saw earlier.

But the effort to get to that answer is going to be much less.

So it's all about avoiding an eigenvalue/eigenvector problem; we do other things instead.

That's what Malcolm Shuster came up with in the QUEST algorithm many, many years ago.

And it's a very famous algorithm, so it's flown a lot.

So here's this cost function, right.

We had J: the sum of the weights minus this second part, which was basically the g function.

And with Davenport's q-method, we said the minimization of J is replaced with the maximization of g.

And Davenport proved that these g's are all eigenvalues of this K matrix, and picked the biggest one to maximize it, great.

So it was the optimal eigenvalue, which was the largest eigenvalue in our case.

That was the answer.

So now let's rewrite this.

This J function is really the sum of the weights minus the second part, which is the g function.

And we know the g function should be just the optimal eigenvalue.

So the cost function is the sum of the weights minus an optimal eigenvalue.

1:25

Let's solve for that optimal eigenvalue.

That was Malcolm's idea, which was quite brilliant.

Because his insight says: okay, the J comes over here, the optimal eigenvalue comes over there.

So the optimal eigenvalue will simply be the sum of the weights minus J.

1:44

If you have reasonably good measurements, we're not talking about heading errors that are 80 degrees or something; usually you hopefully have a fraction of a degree, and the magnetic field might be a degree or two.

Those are all pretty small errors compared to the full four-pi range of attitudes that we have to consider.

J ends up typically being pretty darn small.

2:15

So this is the fun part, okay.

If we look at this, and you evaluate the cost, look at this lambda value: 1.99967. That was the optimal eigenvalue.

What was the sum of the weights?

1 plus 1 is 2. It's like, whoa, this is really, really close, right.
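To make those numbers concrete, here's a small numeric check (the weights, reference vectors, and error values are made up for illustration; the K matrix is assembled from the standard sigma, S, and Z building blocks): with small measurement errors, the largest eigenvalue of Davenport's K matrix sits just below the sum of the weights.

```python
import numpy as np

# Illustrative setup: two unit-vector measurements with weights of 1 each.
w = [1.0, 1.0]
r_vecs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

# "Measured" body vectors: truth plus a small made-up error, renormalized.
b_vecs = [np.array([1.0, 0.002, 0.0]), np.array([0.001, 1.0, 0.003])]
b_vecs = [b / np.linalg.norm(b) for b in b_vecs]

# Davenport's K matrix from its sigma, S, Z partitions.
B = sum(wi * np.outer(b, r) for wi, b, r in zip(w, b_vecs, r_vecs))
S, sigma = B + B.T, np.trace(B)
Z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
K = np.block([[np.array([[sigma]]), Z[None, :]],
              [Z[:, None], S - sigma * np.eye(3)]])

# The optimal eigenvalue is just shy of sum(w) = 2.
lam_max = np.linalg.eigvalsh(K).max()
print(lam_max)
```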

So here's Malcolm's idea.

2:40

To find the optimal eigenvalue, yes, I can make the K matrix and solve a complete eigenvalue/eigenvector problem, which you can do, but it's a lot of math, it's a lot of CPU requirements.

Yes?

>> Maybe I'm just confused with the math, but I thought g would depend on the sum. The K matrix has v's, if we could go back a slide.

3:06

>> Okay, what's the question?

>> Shouldn't g have a subscript on it, in the very bottom equation?

>> This g has the summation already in it.

>> It's the sum of them.

>> Right, so it's the sum of the weights minus g.

And we've shown that g is the optimal eigenvalue, the largest eigenvalue.

So that's that; so what is this eigenvalue?

If I don't want to solve an eigenvalue/eigenvector problem, Malcolm just says: well, look, as a first guess (and I'll show you how good this guess is; for some people the first guess is good enough), you can say it's just the sum of the weights.

If you only have two measurements, it's pretty darn close.

And there will be an iterative way, now, to find the optimal one.

Instead of solving an eigenvalue/eigenvector problem, we actually have to solve a root-solving problem.

But we have a guess at where that root has to be that's pretty darn good.

Your guess was 1.999-something and it's supposed to be 2.

Man, you're going to be all over it, right?

So you can get there to the same accuracy; there's no accuracy lost doing this if you do these iterations.

If you have multiple measurements, like in your homework, it's more sensitive.

If you just take that first guess, it's not that good.

You have to get it really down to machine precision.

So those iterations I'm about to show you are really important if you have four measurements, as you do in this one homework.

4:24

So what is this iteration?

Well, to find an eigenvalue, you go back and you have to look at that, right.

You did this once by hand.

That was your warm-up for this stuff.

You take that matrix minus a scalar times the identity of the same dimension.

And the determinant of that matrix has to be zero.

So for the three-by-three, you end up with a third-order polynomial.

And you had to find the roots of that, which you could use MATLAB for at the time.

But that's essentially what you have to do.

Here, I end up with a fourth-order polynomial, because it's a four-by-four matrix.

And I need to find what the four roots of that are.

Fortunately, I have a really good guess of where the root is that I care about.

I will have four zero crossings, because there are four roots to this K matrix.

But I need the one for the largest eigenvalue.

And I have a really good guess that it's going to be somewhere around two if my weights are one and one.

So you basically start your guess at that root as simply the sum of the weights.

This is my f function for which I have to find roots.

This is the fourth-order polynomial.

And then I'm using here a Newton-Raphson method, which, I'm assuming, people have seen.

Otherwise, go look it up: how to do root solving.

There are different secant methods, there's Newton-Raphson, there's a bunch of different methods.

This one finds the local slope and says, okay, you're not at zero; zero seems to be more to your left or your right.

And it projects you down, and you solve it again; that's the math for that.

So I do this math and evaluate how close I am to zero.

I get the gradient, and the derivative of a fourth-order polynomial is pretty trivial.

It gives you a cubic polynomial.

And then I get an updated value, and then you repeat this process.

If this isn't zero, you do it again and again.

And this will iteratively get you to where you have to be.

This is the step that completely replaces the eigenvalue/eigenvector evaluation, and it's way faster.
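A minimal sketch of that replacement step (function and variable names are my own; Shuster's implementation uses closed-form expressions for the quartic's coefficients, while here `np.poly` extracts them from K as a convenient stand-in): Newton-Raphson on the characteristic polynomial, started at the sum of the weights.

```python
import numpy as np

def quest_lambda(K, weight_sum, tol=1e-12, max_iter=25):
    """Largest eigenvalue of K via Newton-Raphson on det(s*I - K) = 0."""
    coeffs = np.poly(K)            # monic quartic characteristic polynomial
    dcoeffs = np.polyder(coeffs)   # its cubic derivative
    s = weight_sum                 # first guess: the sum of the weights
    for _ in range(max_iter):
        step = np.polyval(coeffs, s) / np.polyval(dcoeffs, s)
        s -= step                  # Newton update toward the largest root
        if abs(step) < tol:
            break
    return s

# Example K from two weighted, slightly inconsistent unit-vector pairs.
b1 = np.array([1.0, 0.01, 0.0]); b1 /= np.linalg.norm(b1)
b2 = np.array([0.0, 1.0, 0.01]); b2 /= np.linalg.norm(b2)
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
B = np.outer(b1, r1) + np.outer(b2, r2)          # weights of 1 each
S, sigma = B + B.T, np.trace(B)
Z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
K = np.block([[np.array([[sigma]]), Z[None, :]],
              [Z[:, None], S - sigma * np.eye(3)]])

lam = quest_lambda(K, weight_sum=2.0)
```

Because the starting guess already sits within a fraction of a percent of the root, a handful of Newton steps reaches machine precision, which is the payoff over a full four-by-four eigendecomposition.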

So, let's wrap this up then.

So, now, how does this work?

The QUEST method, though, doesn't directly find the quaternion set.

6:18

What we have to do is: now I know the eigenvalue, and K times beta bar is equal to the optimal eigenvalue times beta bar.

Great.

That's still a four-by-four.

And we have to actually find an analytic answer now for how to come up with the proper attitude measure.

And he uses, it's a q here, but in our notation, q is the classical Rodrigues parameters, right?

That was e-hat times tangent of phi over two.

So it's basically the vectorial part of the quaternion divided by beta naught.

So when we had earlier beta naught, beta one, two, three, and we divide everything by beta naught, the first element just becomes one, and then the rest of them become ratios: beta one over beta naught, beta two over beta naught, and so on, which are nothing but your CRPs.

Those are the classical Rodrigues parameters we just learned about last week.

7:07

So that's what Malcolm does.

He rewrites this into this form.

That's your K matrix times beta over beta naught, at the optimal lambda, which we have now found to numerical precision, times this.

The first equation we can somewhat ignore.

It's the second one that we use: it's Z times one, plus S minus sigma times the identity, times q bar, equal to lambda optimal times q bar.

That gives you this last equation.

It's a simple equation in that you can group the q bars together.

Bring them all to one side.

And then invert the stuff that pre-multiplies q bar, and that's the answer.
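The algebra just described can be sketched like this (the helper name and example values are my own; the perfect-measurement case is used so the optimal lambda is exactly the weight sum): solve the second block row for the CRPs, then rescale to recover the quaternion.

```python
import numpy as np

def quest_quaternion(S, sigma, Z, lam_opt):
    """Second block row of K*beta = lam*beta reads
    Z + (S - sigma*I) q_bar = lam_opt * q_bar, so
    q_bar = [(lam_opt + sigma) I - S]^{-1} Z; then rescale to a quaternion."""
    q = np.linalg.solve((lam_opt + sigma) * np.eye(3) - S, Z)  # CRPs
    b0 = 1.0 / np.sqrt(1.0 + q @ q)                            # beta naught
    return np.concatenate(([b0], b0 * q))                      # [b0, b1, b2, b3]

# Two perfect measurements of a 0.3 rad rotation about the third axis.
th = 0.3
C = np.array([[np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
r_vecs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
b_vecs = [C @ r for r in r_vecs]

B = sum(np.outer(b, r) for b, r in zip(b_vecs, r_vecs))  # weights of 1 each
S, sigma = B + B.T, np.trace(B)
Z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])

beta = quest_quaternion(S, sigma, Z, lam_opt=2.0)  # perfect data: lam = weight sum
```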

7:54

So, CK, if you hear CRPs, what issue comes to mind right away?

>> They're close to a singularity.

>> You could be close to a singularity.

What if your attitude is upside down?

Then you might have an issue.

Now, Malcolm can fix that.

If this is my body frame on this spacecraft, let's say I'm lining it up with the pen, right?

That's great.

And the pen, instead of pointing up, which would be zero attitude, happens to be pointing down.

This algorithm will have issues, because the coordinates blow up to infinity, right?

But the body frame you picked is one of an infinity of body frames you could have picked.

Instead of picking this one, let's say we pick one where you've rotated 90 degrees.

Now, when this one is upside down, that frame is upside down plus 90 degrees.

8:37

So when one frame is going singular, since you have a second, alternate body frame, the other one is guaranteed not to go singular.

This is a similar mentality to the MRP switching that we do.

So that's what is done with QUEST.

You have this primary body frame, and you'll always see that you can use sequential rotations.

You just have an alternate body frame that is not the same as the first.

If one is singular at 180 degrees, the other one is guaranteed not to be at 180.

And you can estimate either attitude, and then you pull out the 90-degree rotation again at the end.

And you do it once you've mapped to something nonsingular, like quaternions.

So there's that hidden step in here.

In the homework you don't have to do that; there will just be nonsingular attitudes. But if you implement this, be aware of that.
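Here's a small sketch of that sequential-rotation safeguard (all names and the choice of a 90-degree rotation about the third axis are my own; a flight implementation would include more of Shuster's case handling): the true attitude is exactly 180 degrees, where the direct CRP solve breaks down, so we solve in a rotated body frame and pull the rotation back out afterward.

```python
import numpy as np

def crp_to_dcm(q):
    """Direction cosine matrix from classical Rodrigues parameters."""
    qt = np.array([[0.0, -q[2], q[1]],
                   [q[2], 0.0, -q[0]],
                   [-q[1], q[0], 0.0]])
    return ((1 - q @ q) * np.eye(3) + 2 * np.outer(q, q) - 2 * qt) / (1 + q @ q)

def quest_crp(b_vecs, r_vecs, lam):
    """CRP attitude estimate from unit-weight vector pairs (sketch)."""
    B = sum(np.outer(b, r) for b, r in zip(b_vecs, r_vecs))
    S, sigma = B + B.T, np.trace(B)
    Z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    return np.linalg.solve((lam + sigma) * np.eye(3) - S, Z)

# True attitude: 180 degrees about the third axis -- the CRP singularity.
BN_true = np.diag([-1.0, -1.0, 1.0])
r_vecs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
b_vecs = [BN_true @ r for r in r_vecs]

# Alternate body frame: rotate all body-frame measurements 90 deg about axis 3.
C90 = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
b_alt = [C90 @ b for b in b_vecs]

# Solve in the alternate frame (perfect data, so lam = sum of weights = 2),
# then pull the 90-degree rotation back out at the end.
q_alt = quest_crp(b_alt, r_vecs, lam=2.0)
BN_est = C90.T @ crp_to_dcm(q_alt)
```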

Â