2:01

But on the other axis, I am going to move e2-hat to (b, d). I'm going to move it over to some place where e2-prime is equal to (b, d).

So what I've done then is I've taken my original grid here, and as well as stretching it out by a and up by d, I've also sheared it over by b.

But this area here is just the base times the perpendicular height. The perpendicular height is still d, so the area here is still ad, and so the determinant is still ad. I've still changed the scale of the space, which is what the determinant measures, by a factor of ad.
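As a quick numerical sanity check (my own sketch, not part of the lecture), we can compute the area of the sheared parallelogram directly with the shoelace formula and see that the shear b really does drop out:

```python
def shoelace(points):
    """Area of a simple polygon from its corner coordinates (shoelace formula)."""
    total = 0.0
    for i in range(len(points)):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % len(points)]
        total += x0 * y1 - x1 * y0
    return abs(total) / 2.0

# e1 goes to (a, 0) and e2 goes to (b, d): a stretch by a and by d, plus a shear by b
a, b, d = 2.0, 1.5, 3.0
corners = [(0, 0), (a, 0), (a + b, d), (b, d)]
print(shoelace(corners))  # 6.0, which is a*d; the shear b doesn't change the area
```

Changing b to any other value leaves the printed area at a*d.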

But if I have a general matrix a, c, b, d, then what that is going to do to the space, to my original axis vectors, is turn them into something like this: the first one is going to go to something like that, say, and the second one is going to go to something like that, say. So my new area is going to be that parallelogram.

If I want to find that area, I have to do a little bit of maths. So I've done a bit of maths, and here it is. What I've done is find the area of this parallelogram by finding the area of the whole box here and taking off all the little bits around it, and I found that the area is ad minus bc. Pause for a moment if you like and verify that that's correct.
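If you'd rather check it with numbers than with algebra, here's a small sketch of that box-minus-bits argument (my own illustration; the decomposition as drawn assumes a, b, c and d are all positive):

```python
# First column of the matrix (where e1 lands) and second column (where e2 lands)
a, c = 3.0, 1.0
b, d = 1.0, 2.0

# Enclose the parallelogram in a box of width (a + b) and height (c + d), then
# take off the little bits: two a-by-c triangles, two b-by-d triangles, and
# two b-by-c rectangles in the corners.
box = (a + b) * (c + d)
bits = 2 * (a * c / 2) + 2 * (b * d / 2) + 2 * (b * c)
area = box - bits

print(area)           # 5.0
print(a * d - b * c)  # 5.0, the same: the area is ad - bc
```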

And if this is the matrix A, I am going to denote this area, this determinant, with vertical lines, so the determinant of A for a two-by-two matrix is just ad minus bc.

Now, in school when you looked at matrices, you probably saw that you could find the inverse in the following way. If you've got a matrix a, b, c, d, then you probably said that you could find the inverse by multiplying by a matrix where you exchange the a and the d, and put a minus sign on the b and the c, the off-diagonal terms. And you also multiplied by another number.

Let's just multiply that out and see what we get. When we multiply that row by that column, we'll get ad minus bc. Interesting. And we'll get minus ab plus ab, so that's zero. Here we'll get cd minus cd, so that's zero. And when we multiply that row by that column, we'll get minus bc plus ad, which is ad minus bc again.

Now, if I divide that through by a number, ad minus bc, that is, if I divide by the determinant, then these diagonal entries will turn into one. So now I've got the identity matrix, and this guy divided by the determinant is in fact the inverse of the two-by-two matrix.
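To see that multiplication concretely, here's a short check with NumPy (my own sketch, with arbitrary numbers; it assumes ad - bc is not zero):

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b],
              [c, d]])
adj = np.array([[d, -b],
                [-c, a]])  # swap a and d, put minus signs on b and c

det = a * d - b * c        # -2.0 for these numbers
print(A @ adj)             # det times the identity: [[-2, 0], [0, -2]]

A_inv = adj / det          # divide through by the determinant...
print(A @ A_inv)           # ...and we get the identity matrix
```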

And that's how you would have done it at school. And this is the determinant here, and that's really what the determinant is: it's the amount that the original matrix stretched out space. By dividing by the determinant, we're normalizing the space back to its original size. That's what that determinant bit does.

Now, we could spend another video looking at an extension of the idea of elimination and back substitution, that row-echelon idea, to find out how to compute determinants. But that is both tricky to show and derive, and it's kind of pointless: knowing how to do the operations by hand isn't a useful skill anymore, because we just type det(A) into a computer and it gives us the answer, done.
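For example, in Python with NumPy (just one of many tools that will do it):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.det(A))  # close to -2.0, i.e. ad - bc = 1*4 - 2*3
```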

From a learning perspective, it doesn't add much. Row echelon does, which is why we went through it. So I'm not going to teach you how to compute determinants by hand. If you want to know, then look up a QR decomposition online, or better yet, look in a linear algebra textbook.

Now, let's think about this matrix. Think about a matrix A, which is 1, 1, 2, 2. What this guy does is take space and send our first basis vector here to (1, 1), up to there, and send our second basis vector to (2, 2), over here. So what he's done is taken a space with areas and collapsed everything onto a line: all of my y's here are going to collapse onto this line, and all my x's are going to collapse onto this line. So he's reduced the dimensionality of the space. Every point in space is going to map onto some point on this line here.

Now, notice that the determinant of this matrix is going to be zero. The area enclosed by the new basis vectors is zero, and if I do ad minus bc, I've got one times two minus one times two, so the determinant of A is nought. So graphically we can see that it's zero, and we can compute that it's zero.
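Both views can be sketched in a few lines of NumPy (my own check, not from the lecture):

```python
import numpy as np

# Columns are where the basis vectors land: e1 -> (1, 1) and e2 -> (2, 2)
A = np.array([[1.0, 2.0],
              [1.0, 2.0]])

print(np.linalg.det(A))          # 0.0: the enclosed area has collapsed
print(A @ np.array([3.0, 5.0]))  # [13. 13.]: this point lands on the line y = x
```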

If I had a three-by-three matrix describing a 3D space, then linear dependence of one of the new basis vectors on the other two would mean that the new space was either a plane, or, if there was only one independent basis vector, a line. In either case, the volume enclosed would be zero, so the determinant would be zero.

Now, let's turn back to our row-echelon form. Let's take this set of simultaneous equations here: 1, 1, 2; 1, 2, 3; 3, 4, 7, times some vector a, b, c, is equal to 12, 17, 29.

Now, you'll notice here that if I take the sum of the first two rows, I get the third row; that is, row one plus row two is equal to row three. And also, if I take the columns, if I take two times column one plus one times column two, then I get two plus one is three, two plus two is four, four plus three is seven. So that's equal to column three.

So if I think of these as the new basis vectors of my matrix, then they're not linearly independent: this one guy is a linear combination of these two, twice this one plus one times the middle one. So this is going to be a problem. It's a thing that collapses my vector space from being 3D to 2D; it's going to collapse every point in space onto a plane. That's going to be tricky. Let's see how.

So when I try to reduce this to row-echelon form, I take the first row: 1, 1, 3, times a, b, c equals 12. If I take that off the second row, then I'm going to get a zero there, take one off of there and have one, take three off of there and have one, and take 12 off of 17 and have five. And if I take row one and row two off of row three, then I'm going to get 0, 0, 0 all along the bottom row, and if I take 12 and 17 off of 29, I'm going to get zero as well.

So this is row-echelon form, but I don't have a one here. I find that I'm now getting zero times c equals zero. And that's true, but it's not very useful: I don't have a solution for c. So now I can't back substitute, I can't solve my system of equations anymore. I don't have enough information.
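A computer hits exactly the same wall. As a sketch with NumPy (writing the matrix out by rows, (1, 1, 3), (1, 2, 4), (2, 3, 7), which matches the columns read out above): the determinant comes out as zero, and the solver refuses, because elimination runs into the same zero row we found by hand.

```python
import numpy as np

# The third column (3, 4, 7) is two times the first plus one times the second
A = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 4.0],
              [2.0, 3.0, 7.0]])
s = np.array([12.0, 17.0, 29.0])

print(np.linalg.det(A))  # zero: the transformation collapses 3D onto a plane

try:
    np.linalg.solve(A, s)
except np.linalg.LinAlgError as err:
    print("no unique solution:", err)  # the library reports a singular matrix
```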

When I went into the shop, my mistake was that, when I went in to buy apples and bananas and carrots the third time, I ordered just a copy of my first two orders, the sum of my first two orders. So I didn't get any new information; I just found out that the cost of my first two orders combined was 29. So I don't have enough data to find the solution for how much apples and bananas and carrots cost. In matrices-and-vectors language, my third order wasn't linearly independent of my first two.

So we've shown that where the basis vectors describing the matrix aren't linearly independent, the determinant is zero, and I can't solve the system. Because these aren't linearly independent, the determinant doesn't enclose any volume; space has collapsed onto a plane, so the determinant is nought. And that means that when I try my row-echelon form, I can't solve the problem, and that means I can't invert the matrix, which means I'm stuck. So in fact this matrix has no inverse: it's what's called singular.

Now, there are situations where I might want to do a transformation that collapses the number of dimensions in a space. I might want to do that sometimes, but it will come at a cost.

Another way of looking at this is that the inverse matrix lets me undo my transformation: it lets me get from the new vectors back to the original vectors. But if I have dumped a dimension, if I have scrapped a dimension by turning 2D space into a line, or 3D space into a plane or a line, I can't undo that anymore. I don't have enough information, because I've lost some of it in doing the transformation; I've lost that third dimension.

So in general, before you propose a new basis vector set and then use a matrix to transform your data vectors, it's worth checking that this is a transformation you can undo. And you do that by checking that your proposed new basis vectors are linearly independent.
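One way to sketch that check (my own helper, assuming NumPy): stack the proposed basis vectors as the columns of a matrix; they are linearly independent exactly when the matrix has full rank, or equivalently a non-zero determinant.

```python
import numpy as np

def columns_are_independent(M):
    """True when the columns of M (the proposed basis vectors) are linearly
    independent, i.e. the transformation can be undone."""
    M = np.asarray(M, dtype=float)
    return bool(np.linalg.matrix_rank(M) == M.shape[1])

print(columns_are_independent([[1, 2],
                               [1, 2]]))     # False: everything collapses to a line
print(columns_are_independent([[1, 1, 3],
                               [1, 2, 4],
                               [2, 3, 7]]))  # False: 3D collapses onto a plane
print(columns_are_independent([[1, 0],
                               [0, 1]]))     # True: this one can be undone
```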
