Now that we've walked through the theory of eigenbases and diagonalisation,
let's have a go at a simple 2D example,
where we can see the answer graphically,
so we'll know whether or not our method has worked as expected.
Consider the transformation matrix, T equals 1 1 0 2.
Hopefully, you'll feel fairly comfortable at this point
drawing the transformed square and vectors that we used in the previous examples.
As the first column is just 1 0,
this means that our i hat vector will be unchanged.
However, the second column tells us that j hat,
the second vector will be moving to the point 1 2.
Let's also consider the orange diagonal vector pointing at 1 1.
Multiplying through gives us 1 1 0 2 multiplied by 1 1.
So thinking about rows times columns,
we're going to get 1 plus 1 and 0 plus 2,
which gives us 2 2.
It's interesting to consider that this particular
transform could be decomposed into a vertical scaling by a factor of 2,
and then a horizontal shear by a half step.
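If you'd like to check this working by machine, here's a minimal sketch using Python and NumPy (my addition, not part of the lecture), verifying where the basis vectors and the diagonal vector land, and the scale-then-shear decomposition:

```python
import numpy as np

# T from the example: rows (1, 1) and (0, 2).
T = np.array([[1, 1],
              [0, 2]])

# i hat is unchanged; j hat moves to (1, 2); the diagonal vector (1, 1) maps to (2, 2).
print(T @ np.array([1, 0]))  # [1 0]
print(T @ np.array([0, 1]))  # [1 2]
print(T @ np.array([1, 1]))  # [2 2]

# Decomposition: vertical scaling by 2 first, then a horizontal shear by half
# a step. Applying the scaling first means it sits on the right of the product.
scale = np.array([[1, 0],
                  [0, 2]])
shear = np.array([[1, 0.5],
                  [0, 1]])
assert np.allclose(shear @ scale, T)
```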
Because we've chosen such a simple transformation, hopefully,
you've already spotted the eigenvectors and can state their eigenvalues.
These are: at lambda equals 1,
our eigenvector is 1 0,
and at lambda equals 2,
our eigenvector is 1 1.
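If you'd rather let the computer confirm those eigenvalues and eigenvectors, a quick sketch with NumPy's `linalg.eig` (again, my addition):

```python
import numpy as np

T = np.array([[1., 1.],
              [0., 2.]])

# eig returns the eigenvalues and unit-length eigenvectors (as columns);
# the ordering is not guaranteed, so check the defining property T v = lambda v.
vals, vecs = np.linalg.eig(T)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(T @ v, lam * v)

assert np.allclose(sorted(vals), [1., 2.])
```

Note that NumPy normalises its eigenvectors, so you'll see scaled versions of 1 0 and 1 1 rather than those exact coordinates.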
Now, let's consider what happens to the vector
minus 1 1 when we apply T. So 1 1 0 2 applied to minus 1 1.
This time, it's going to equal, rows times columns, minus 1 plus 1,
and 0 plus 2 which equals 0 2.
And if we apply T again,
we're going to get the following, 1 1 0 2 applied to 0 2,
which is going to equal rows times columns again,
0 plus 2 and 0 plus 4, so this thing finally,
is going to equal 2 4.
Now instead, if we were to start by finding T squared,
so T squared is going to
equal this matrix multiplied by itself.
So applying rows times columns,
we're going to get one times one plus one times zero, that's one.
Rows times columns here, we're going to get three.
Rows times columns here, we're going to get zero.
Rows times columns here, we're going to get a four.
Now, we can apply this to our vector and see if we get the same result.
So, 1 3 0 4 multiplied by minus 1 1,
is going to equal, rows times columns,
so we're going to get minus 1 plus 3 and 0 plus 4,
which of course, equals 2 4.
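The two routes above — applying T twice versus applying T squared once — can be compared directly; here's a short NumPy sketch of that check (my addition):

```python
import numpy as np

T = np.array([[1, 1],
              [0, 2]])
v = np.array([-1, 1])

T_squared = T @ T
assert np.allclose(T_squared, [[1, 3], [0, 4]])  # matches the hand working

# Applying T twice agrees with applying T squared once.
assert np.allclose(T @ (T @ v), T_squared @ v)
print(T_squared @ v)  # [2 4]
```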
We can now try this whole process again,
by using our eigenbasis approach.
We've already built our conversion matrix,
C from our eigenvectors.
So, C is going to equal 1 1 0 1.
But we are now going to have to find its inverse.
However, because we picked such a simple 2 by 2 example,
it's possible just to write this inverse down directly by considering that
C would just be a horizontal shear, one step to the right.
So, C inverse must just be the same shear back to the left again.
So, C inverse is going to equal 1 minus 1 0 1.
It's worth noting that despite how easy this was,
I would still always feed this to the computer instead of risking silly mistakes.
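In that spirit, here's a one-line machine check of the hand-derived inverse (a sketch assuming NumPy, not part of the lecture):

```python
import numpy as np

C = np.array([[1., 1.],
              [0., 1.]])

C_inv = np.linalg.inv(C)
assert np.allclose(C_inv, [[1., -1.], [0., 1.]])  # the shear back to the left
assert np.allclose(C @ C_inv, np.eye(2))          # and it really is an inverse
```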
We can now construct our answer.
So T squared is going to equal C D squared C inverse,
which of course in our case,
is going to equal 1 1 0 1 multiplied by our diagonal matrix,
which is going to be 1 and 2,
and that's all squared,
multiplied by C inverse which is 1 minus 1 0 1.
Working this through,
we'll keep this first matrix,
1 1 0 1, and work out this bit first.
So we'll say, D squared is going to be 1 and 4 on the diagonal, multiplied by C inverse, 1 minus 1 0 1.
And let's work out these two matrices here.
So you've got 1 1 0 1 multiplied by,
so we're doing rows times columns in each case.
So for example, 1 0 times 1 0,
you get a 1 here.
And doing the second row and the first column, we get a zero there.
First row and second column we're going to get a minus 1 there.
And the second row and the second column,
we're going to get four here.
Okay. Working it through one more step,
with a bit more grinding, we're going to see the following.
We get first row, first column, one.
Second row, first column, zero.
First row, second column is three.
And second row, second column is four.
And applying this to the vector,
minus 1 1, we're going to get something like this,
so 1 3 0 4 applied to minus 1 1,
is going to be rows times columns,
so minus 1 plus 3,
and 0 plus 4, which equals 2 4.
Which pleasingly enough, is the same result as we found before.
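The whole eigenbasis route can be sketched end to end; this is my own NumPy illustration of the C D squared C inverse construction, not code from the course:

```python
import numpy as np

T = np.array([[1., 1.],
              [0., 2.]])
C = np.array([[1., 1.],   # columns are the eigenvectors (1, 0) and (1, 1)
              [0., 1.]])
D = np.diag([1., 2.])     # the matching eigenvalues on the diagonal

# T^2 = C D^2 C^-1, which should agree with squaring T directly.
T_squared = C @ (D @ D) @ np.linalg.inv(C)
assert np.allclose(T_squared, T @ T)

# And applied to (-1, 1) it gives (2, 4), as before.
assert np.allclose(T_squared @ np.array([-1., 1.]), [2., 4.])
```

The payoff of this route is that D squared is trivial to compute — you just square the diagonal entries — which is what makes the eigenbasis approach attractive for high powers of T.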
Now, there is a sense in which for much of mathematics,
once you're sure that you've really understood a concept,
then because of computers,
you may never have to do this again by hand.
However, it's still good to work through a couple of
examples on your own just to be absolutely sure that you get it.
There are of course many aspects of eigen-theory that we haven't covered in
this short video series, including non-diagonalisable matrices and complex eigenvectors.
However, if you are comfortable with the core topics that we've discussed,
then you're already in great shape.
In the next video, we're going to be looking at
a real-world application of eigen-theory to finish off this linear algebra course.
This is a particularly famous application which requires
abstraction away from the sort of geometric interpretations
that we've been using so far,
which means that you'll be taking the plunge and just trusting the maths. See you then.