Lecture 13
Properties of Determinants

Direct computation of the determinant of a square matrix of size larger than 3 x 3 can be very tedious. But we already know that the determinant of an upper triangular matrix is simple, and we know that Gaussian elimination provides a means of transforming any matrix into an upper triangular one. If we know the effect of the elementary row operations on determinants, we will have a more efficient method of computing determinants.

Theorem Let A be an n x n matrix.
(a) If B is a matrix obtained by interchanging two rows of A, then det B = -det A.
(b) If B is a matrix obtained by multiplying a row of A by the scalar k, then det B = k det A.
(c) If B is a matrix obtained by adding a multiple of some row of A to a different row, then det B = det A.

These results may also be expressed in terms of the elementary matrices that accomplish the row operations.

Theorem For any n x n matrix A, and any n x n elementary matrix E, we have det(EA) = (det E)(det A).

To establish these results, first notice that (a) follows by comparing the determinant of A, computed by expanding along the first of the two interchanged rows and then along the second, with the determinant of B, expanded along the corresponding rows in that order. Similarly, (b) can be deduced by computing the determinants of A and B by expanding along the row that is multiplied by the scalar. Finally, to establish (c), expand the determinant along the row "j" that is being modified by a multiple of row "i". We see that the determinant of B is the sum of the determinant of A and the determinant of the matrix that has the multiple of row "i" in place of row "j". By (b), this latter determinant is the same multiple of the determinant of the matrix that has row "i" in place of row "j", which is therefore a matrix with two identical rows. But such a matrix must have determinant zero: interchanging its two identical rows negates the determinant, yet leaves the matrix unchanged, so the determinant equals its own negative. The only number equal to its own negative is zero, so the determinant of this matrix is zero, and the determinant of B is therefore the same as the determinant of the original matrix A.

Since the elementary matrix corresponding to the row operation in (b) is diagonal with determinant k, and the one corresponding to (c) is triangular with 1's on the diagonal and hence has determinant 1, it is simple to check that (b) and (c) imply the corresponding statements about EA. It is also easy to compute directly that the determinant of an elementary matrix corresponding to a row interchange is -1, so that (a) implies the corresponding statement about EA.

Examples
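
As a numerical illustration, the following NumPy sketch (the 4 x 4 test matrix, the rows involved, and the scalar k are arbitrary choices) checks the three row operation rules and the identity det(EA) = (det E)(det A):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)  # arbitrary 4 x 4 test matrix
d = np.linalg.det(A)

# (a) interchanging two rows negates the determinant
B = A.copy()
B[[0, 2]] = B[[2, 0]]
print(np.isclose(np.linalg.det(B), -d))

# (b) multiplying a row by k multiplies the determinant by k
k = 3.0
B = A.copy()
B[1] *= k
print(np.isclose(np.linalg.det(B), k * d))

# (c) adding a multiple of one row to another leaves the determinant unchanged
B = A.copy()
B[3] += 2.5 * A[1]
print(np.isclose(np.linalg.det(B), d))

# det(EA) = (det E)(det A) for an elementary matrix E (here: swap rows 0 and 2)
E = np.eye(4)
E[[0, 2]] = E[[2, 0]]
print(np.isclose(np.linalg.det(E @ A), np.linalg.det(E) * d))
```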

Now, when any matrix A is transformed to one in upper triangular form using Gaussian elimination, the process can be completed using only row operations of types (a) and (c) above. (Operations of type (b) are needed only to make the pivots equal to 1.) The upper triangular matrix has determinant given by the product of its diagonal entries. Thus the determinant of the original matrix is the same as this product, except for a factor of -1 corresponding to each row interchange used in the Gaussian elimination process.

Theorem Let A be an n x n matrix which is transformed into an upper triangular matrix U using only elementary operations of the type "interchange two rows" and "replace row j by row j plus a multiple of row i", for i different from j. Then

det A = (-1)^r u_11 u_22 · · · u_nn,

where u_11, . . . , u_nn are the diagonal entries of U and r is the number of row interchanges performed in the reduction of A to U.

Examples
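
The theorem translates directly into an algorithm. Below is a minimal sketch, assuming NumPy and using partial pivoting to choose the row interchanges (the notes do not prescribe a pivoting strategy), which computes det A as (-1)^r times the product of the diagonal entries of U:

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via row reduction to upper triangular form,
    using only row interchanges and row replacement operations."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    r = 0  # number of row interchanges performed
    for j in range(n):
        # choose a pivot row (partial pivoting, for numerical stability)
        p = j + np.argmax(np.abs(U[j:, j]))
        if np.isclose(U[p, j], 0.0):
            return 0.0           # no pivot in this column: determinant is 0
        if p != j:
            U[[j, p]] = U[[p, j]]
            r += 1
        # eliminate entries below the pivot (type (c) operations)
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    return (-1) ** r * np.prod(np.diag(U))

A = np.array([[2., 1., 3.], [4., 2., 1.], [1., 5., 2.]])
print(det_by_elimination(A), np.linalg.det(A))  # the two values should agree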

We can use this result to deduce four essential properties of determinants.

Theorem Let A and B be n x n matrices. Then
(a) A is invertible if and only if det A is nonzero.
(b) det(AB) = det(A) det(B).
(c) det(A^T) = det A.
(d) If A is invertible, then det(A^-1) = 1/(det A).

For (a), notice that A is invertible if and only if its row echelon form has n pivots, which occurs if and only if the product of the diagonal entries of that (upper triangular) row echelon form is nonzero. For (b), notice that if A is invertible, we can write A = E_k · · · E_1 as a product of elementary matrices, and the result

det(AB) = det(E_k · · · E_1 B) = det(E_k) · · · det(E_1) det(B) = det(A) det(B)

follows by repeated application of the earlier theorem. On the other hand, if A is not invertible then we know that its determinant is zero, and we also know that AB is not invertible. (If it were, then letting its inverse be C, we would have ABC = I_n, and hence BC would be the inverse of A, contradicting the assumption that A is not invertible.) Thus in this case part (a) implies that both det(A) det(B) and det(AB) are zero. In order to check (c), notice that if A is invertible we can again write it as a product A = E_k · · · E_1 of elementary matrices. The transpose is then

A^T = E_1^T · · · E_k^T.

Since an elementary matrix and its transpose are easily seen to have the same determinant, the result det(A^T) = det A follows in this case from repeated application of (b). On the other hand, if A is not invertible, then neither is its transpose, so in this case both det A and det(A^T) are zero by part (a). Finally, for (d), notice from part (b) that det(AA^-1) = det(A) det(A^-1). But AA^-1 = I_n, so det(AA^-1) = 1. Hence det(A) det(A^-1) = 1, and therefore det(A^-1) = 1/(det A).
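
As a quick numerical check of parts (a) through (d), assuming NumPy (the random 4 x 4 matrices here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# (a) a random matrix is invertible (with probability 1), so det A is nonzero
print(not np.isclose(np.linalg.det(A), 0.0))

# (b) det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))

# (c) det(A^T) = det(A)
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))

# (d) det(A^-1) = 1 / det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A)))
```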

Notice that the statement about the transpose of a matrix allows us to deduce that the determinant of a matrix can be computed using a cofactor expansion along any column of the matrix. Indeed, transposing the matrix interchanges its rows and columns without changing the determinant, so the earlier result on computing the determinant by cofactor expansion along any row applies.
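
To make this concrete, here is a hypothetical recursive helper (written with NumPy for convenience) that expands det A along whichever column is requested; it is meant only to illustrate the cofactor formula and is far too slow for matrices of any real size.

```python
import numpy as np

def det_by_column_expansion(A, col=0):
    """Cofactor expansion of det(A) along the column with index `col`."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for i in range(n):
        # minor: delete row i and the chosen column
        minor = np.delete(np.delete(A, i, axis=0), col, axis=1)
        total += (-1) ** (i + col) * A[i, col] * det_by_column_expansion(minor)
    return total

A = [[1., 2., 3.], [0., 4., 5.], [1., 0., 6.]]
# expansion along any column gives the same value as np.linalg.det
print(det_by_column_expansion(A, col=2), np.linalg.det(np.array(A)))
```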

Finally, we have one of the original applications of determinants - to the computation of solutions to invertible linear systems. Although not much used these days, it was the state of the art in 1750!

Cramer's Rule Let A be an invertible n x n matrix. The solution to the system Ax = b is the vector u with components u_j = det(M_j) / (det A). Here M_j is the matrix with the j'th column of A replaced by b.

To see this, note that the solution to the system can be written as u = A^-1 b. For each j, let U_j denote the matrix obtained by replacing the j'th column of the identity matrix with u. Then, by cofactor expansion along row j, we see that det(U_j) = u_j. On the other hand,

AU_j = A [e_1 . . . e_(j-1) u e_(j+1) . . . e_n] = [a_1 . . . a_(j-1) Au a_(j+1) . . . a_n],

where a_k is the k'th column of A. But Au = b, and hence AU_j = M_j. Therefore det(M_j) = det(AU_j) = (det A) u_j, and the result u_j = det(M_j) / (det A) follows by division.
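
A minimal sketch of Cramer's rule, assuming NumPy (the 2 x 2 system at the end is an arbitrary example):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b for invertible A by Cramer's rule: u_j = det(M_j) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    u = np.empty(A.shape[0])
    for j in range(A.shape[0]):
        M_j = A.copy()
        M_j[:, j] = b          # replace the j'th column of A with b
        u[j] = np.linalg.det(M_j) / d
    return u

A = np.array([[2., 1.], [1., 3.]])
b = np.array([3., 5.])
print(cramer_solve(A, b), np.linalg.solve(A, b))  # both give the same solution
```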
