0:00

[MUSIC]

Hopefully, you will all now have a reasonable feeling for what an eigen-problem looks like geometrically. So in this video, we're going to formalise this concept into an algebraic expression, which will allow us to calculate eigenvalues and eigenvectors whenever they exist. Once you've understood this method, we'll be in a good position to see why you should be glad that computers can do this for you.

If we consider a transformation A, what we have seen is that if it has eigenvectors at all, then these are simply the vectors which stay on the same span following a transformation. They can change length, and even point in the opposite direction entirely, but if they remain on the same span, they are eigenvectors. If we call our eigenvector x, then we can write the following expression: Ax = lambda x.

Here, on the left-hand side, we're applying the transformation matrix A to a vector x, and on the right-hand side we are simply stretching the vector x by some scalar factor lambda. So lambda is just some number. We're trying to find values of x that make the two sides equal. Another way of saying this is that, for our eigenvectors, applying A to them just scales their length (or does nothing at all, which is the same as scaling the length by a factor of 1).
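As a quick numerical sanity check (this NumPy snippet is my own illustration, not part of the lecture), we can verify Ax = lambda x for a matrix whose eigenvectors we already know:

```python
import numpy as np

# An illustrative 2x2 transformation: a vertical scaling by a factor of 2.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# A vector on the vertical axis stays on its own span under A,
# so it is an eigenvector; A just doubles its length (lambda = 2).
x = np.array([0.0, 1.0])
lam = 2.0

print(np.allclose(A @ x, lam * x))  # True
```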

So in this equation, A is an n-dimensional transform, meaning it must be an n by n square matrix. The eigenvector x must therefore be an n-dimensional vector. To help us find the solutions to this expression, we can rewrite it by putting all the terms on one side and then factorising.

So (A - lambda I) x = 0.

If you're wondering where the I term came from, it's just an n by n identity matrix: a matrix the same size as A, but with ones along the leading diagonal and zeros everywhere else. We didn't need this in the first expression we wrote, as multiplying vectors by scalars is defined. However, subtracting scalars from matrices is not defined, so the I just tidies up the maths without changing the meaning.

Now that we have this expression, we can see that for the left-hand side to equal 0, either the contents of the brackets must be 0, or the vector x must be 0. We're not actually interested in the case where the vector x is 0: that's when it has no length or direction, and is what we call a trivial solution. Instead, we must find when the term in the brackets is 0. For a non-zero x to be mapped to the zero vector, the matrix (A - lambda I) must itself be degenerate, which means its determinant must be zero: det(A - lambda I) = 0. If we write A as a general 2 by 2 matrix with elements a, b, c, d, this becomes det(a - lambda, b, c, d - lambda) = 0.

Â 3:54

Evaluating this determinant, we get what is referred to as the characteristic polynomial, which looks like this: lambda^2 - (a + d) lambda + (ad - bc) = 0. Our eigenvalues are simply the solutions of this equation, and we can then plug these eigenvalues back into the original expression to calculate our eigenvectors.
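For the 2 by 2 case, the quadratic formula solves the characteristic polynomial directly. Here is a minimal sketch (the function name is my own, not from the lecture):

```python
import numpy as np

def eigenvalues_2x2(a, b, c, d):
    """Roots of the characteristic polynomial
    lambda^2 - (a + d) lambda + (ad - bc) = 0, via the quadratic formula."""
    trace = a + d
    det = a * d - b * c
    # Taking the square root as a complex number lets this handle
    # matrices with no real eigenvalues (e.g. rotations) too.
    sqrt_disc = np.sqrt(complex(trace**2 - 4 * det))
    return (trace + sqrt_disc) / 2, (trace - sqrt_disc) / 2

print(eigenvalues_2x2(1, 0, 0, 2))  # eigenvalues 2 and 1
```

The same function applied to a rotation matrix, eigenvalues_2x2(0, -1, 1, 0), returns the purely imaginary pair +i and -i.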

Rather than continuing with our generalised form, this is a good moment to apply this to a simple transformation for which we already know the eigensolution. Let's take the case of a vertical scaling by a factor of two, which is represented by the transformation matrix A = (1, 0, 0, 2). We can then apply the method that we just described: take the determinant of A minus lambda I, set it to zero, and solve. So det(1 - lambda, 0, 0, 2 - lambda) = (1 - lambda)(2 - lambda), which we set equal to 0. This means that our equation must have solutions at lambda = 1 and lambda = 2.

Thinking back to our original eigen-finding formula, (A - lambda I) x = 0, we can now substitute these two solutions back in. So, taking the case where lambda = 1, we can say that (1 - 1, 0, 0, 2 - 1) times the vector (x1, x2) must equal zero. That is, (0, 0, 0, 1)(x1, x2) = (0, x2), so x2 must equal 0. Now, taking the case where lambda = 2, we get (1 - 2, 0, 0, 2 - 2) times (x1, x2),

Â 6:39

which equals (-x1, 0), and this must equal zero, so x1 = 0.

So what do these two expressions tell us? Well, in the case where our eigenvalue lambda equals 1, we've got an eigenvector where the x2 term must be zero, but we don't know anything about the x1 term. This is because, of course, any vector that points along the horizontal axis could be an eigenvector of this system. So we write that by saying that at lambda = 1, our eigenvector x = (t, 0): anything along the horizontal axis, as long as it's 0 in the vertical direction, where t is an arbitrary parameter.

Similarly, for the lambda = 2 case,

Â 7:30

we can say that our eigenvector must equal (0, t), because as long as it doesn't move at all in the horizontal direction, any vector that's purely vertical would also be an eigenvector of this system, as they would all lie along the same span. So now we have two eigenvalues, and their two corresponding eigenvectors.

Let's now try the case of a rotation by 90 degrees anticlockwise, to ensure that we get the result that we expect, which, if you remember, is no eigenvectors at all. The transformation matrix corresponding to a 90-degree rotation is A = (0, -1, 1, 0). Applying the formula once again, we get det(0 - lambda, -1, 1, 0 - lambda), which, if you calculate it through, comes out to lambda^2 + 1 = 0. This doesn't have any real solutions at all; hence, no real eigenvectors. We can still calculate some complex eigenvectors using imaginary numbers, but this is beyond what we need for this particular course.
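NumPy confirms this (again, my own check): asked for the eigensolutions of the rotation matrix, it returns purely imaginary eigenvalues.

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # 90-degree anticlockwise rotation

vals, vecs = np.linalg.eig(R)
print(vals)  # purely imaginary (+1j and -1j): no real eigenvectors
```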

Despite all the fun we've just been having, the truth is that you will almost certainly never have to perform this calculation by hand. Furthermore, we saw that our approach required finding the roots of a polynomial of order n, i.e., the dimension of your matrix, which means that the problem very quickly stops being solvable by analytical methods alone. So when a computer finds the eigensolutions of a 100-dimensional problem, it's forced to employ iterative numerical methods.
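To give a flavour of what such iterative methods look like, here is a minimal sketch of the simplest one, power iteration: repeatedly apply A and renormalise, and the vector settles onto the eigenvector with the largest eigenvalue. (This is an illustrative sketch of one such method, not an algorithm from the lecture.)

```python
import numpy as np

def power_iteration(A, iters=200):
    # Start from a random vector, then repeatedly apply A and renormalise;
    # the component along the dominant eigenvector grows fastest, so the
    # vector converges onto it.
    rng = np.random.default_rng(42)
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    # The Rayleigh quotient of a unit vector estimates its eigenvalue.
    return x @ A @ x

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
print(power_iteration(A))  # approximately 2, the largest eigenvalue
```

Real eigensolvers are considerably more sophisticated, but the underlying idea of iterating towards a solution, rather than solving a degree-n polynomial exactly, is the same.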

However, I can assure you that developing a strong conceptual understanding of eigen-problems will be much more useful than being really good at calculating them by hand.

In this video, we translated our geometrical understanding of eigenvectors into a robust mathematical expression, and validated it on a few test cases. But I hope that I've also convinced you that working through lots of eigen-problems, as is often done in engineering undergraduate degrees, is not a good investment of your time if you already understand the underlying concepts. This is what computers are for. Next video, we'll be referring back to the concept of basis change to see what magic happens when you use eigenvectors as your basis. See you then.

[MUSIC]