What did I do? I wrote a "random" 3-by-3 matrix, and then proceeded to check the textbook's formula for A^(-1). The formula I refer to is on p. 383 (Theorem 8.18). I believe engineering students should be aware that such an explicit formula exists, so that they can use it if they ever need it.

I had students compute the entries of the adjoint of A, and we checked (at least a little bit!) that the result was A^(-1). I decided for the purposes of this diary to have Maple find minors, then get determinants, then adjust the signs, and then take the transpose: the result should be (after division of the entries by det(A)) the inverse. And it was. I should mention that it took me longer to write the darn Maple code than it did to have people do the computations in class. The # indicates a comment in Maple and is ignored by the program.

> with(linalg):
Warning, the protected names norm and trace have been redefined and unprotected
> A:=matrix(3,3,[1,-2,3,2,1,0,2,1,3]); #defines a matrix, A.
                                   [1    -2    3]
                                   [            ]
                              A := [2     1    0]
                                   [            ]
                                   [2     1    3]

> det(A); #Finds the determinant
                                      15

> minor(A,2,1);  #This gets the (2,1) minor, apparently.
                                   [-2    3]
                                   [       ]
                                   [ 1    3]

>B:=matrix(3,3,[seq(seq((-1)^(i+j)*det(minor(A,i,j)),j=1..3),i=1..3)]);
#This mess, when parsed correctly, creates almost the adjoint of A.
                                  [ 3    -6     0]
                                  [              ]
                             B := [ 9    -3    -5]
                                  [              ]
                                  [-3     6     5]
> evalm(A&*transpose(B)); #We need to divide by 15, which is det(A).
# evalm and &* do matrix multiplication.
                               [15     0     0]
                               [              ]
                               [ 0    15     0]
                               [              ]
                               [ 0     0    15]

> inverse(A);
                            [1/5     3/5     -1/5]
                            [                    ]
                            [-2/5    -1/5    2/5 ]
                            [                    ]
                            [ 0      -1/3    1/3 ]

> evalm((1/15)*transpose(B));
                            [1/5     3/5     -1/5]
                            [                    ]
                            [-2/5    -1/5    2/5 ]
                            [                    ]
                            [ 0      -1/3    1/3 ]
> B:=matrix(3,1,[y1,y2,y3]);
                                        [y1]
                                        [  ]
                                   B := [y2]
                                        [  ]
                                        [y3]

> evalm(inverse(A)&*B); # How to solve the matrix equation AX=B
# Left multiply by A^(-1).
                            [  y1    3 y2    y3  ]
                            [ ---- + ---- - ---- ]
                            [  5      5      5   ]
                            [                    ]
                            [  2 y1    y2    2 y3]
                            [- ---- - ---- + ----]
                            [   5      5      5  ]
                            [                    ]
                            [      y2     y3     ]
                            [   - ---- + ----    ]
                            [      3      3      ]

> A2:=matrix(3,3,[1,y1,3,2,y2,0,2,y3,3]); # Preparing for Cramer's rule
                                   [1    y1    3]
                                   [            ]
                             A2 := [2    y2    0]
                                   [            ]
                                   [2    y3    3]

> det(A2)/det(A); # The same result as the second entry of A^(-1)&*B computed above.
                               2 y1    y2    2 y3
                             - ---- - ---- + ----
                                5      5      5
I "verified" Cramer's rule in a specific case. A similar computation is shown above done by Maple. Thie is on p. 392, Theorem 8.23, of the text.

I have never used Cramer's rule in any significant situation. But, again, should you need such an explicit formula, you should be aware that it exists. I also mention that, using my Maple on a fairly new computer, over half a second was needed to compute the inverse of a 5-by-5 symbolic matrix. So these things should be done with some care. Sigh.
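If you want to see the sort of timing I mean, something like this ought to do it (a sketch, not a transcript I actually ran; the name A5 is mine, the entries a[i,j] are just symbols, and time() reports CPU seconds):

> A5:=matrix(5,5,(i,j)->a[i,j]):        # a 5-by-5 matrix of symbolic entries a[i,j]
> st:=time(): inverse(A5): time()-st;   # how many CPU seconds the symbolic inverse takes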

So the stuff last time was really background. I don't think you need to know everything I discussed, but I do honestly believe that engineers should have some feeling for the real definition, even if it is very painful (and it is, indeed it is). In any case, you need to change your paradigm about determinants! The cases n=2 and n=3 are too darn simple to give you intuition for the general case.

The Oxford English Dictionary lists the first appearance of paradigm in 1483 when it meant "an example or pattern", as it does today.

But for the purposes of this course you must know some standard computational methods of evaluating determinants. So I'll tell you about row operations and cofactor expansions.

Row operations and their effects on determinants

Row operation                   What it does to det
Multiply a row by a constant    Multiplies det by that constant
Interchange adjacent rows       Multiplies det by -1
Add a row to another row        Doesn't change det
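If you want Maple to confirm the table, the linalg package has (if I am remembering the command names correctly) mulrow, swaprow, and addrow. A sketch, using the 3-by-3 matrix A from the transcript above, whose determinant is 15:

> det(mulrow(A,1,2));   # row 1 multiplied by 2: should be 2·15=30
> det(swaprow(A,1,2));  # rows 1 and 2 interchanged: should be -15
> det(addrow(A,1,2,1)); # row 1 added to row 2: should still be 15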

Examples
Suppose A is this matrix:

( -3  4  0 18 ) 
(  2 -9  5  6 )
( 22 14 -3 -4 )
(  4  7 22  5 )
Then the following matrix has determinant twice the value of det(A):
( -6  8  0 36 ) 
(  2 -9  5  6 )
( 22 14 -3 -4 )
(  4  7 22  5 )
because the first row is doubled.
Also, the following matrix has determinant -det(A)
( -3  4  0 18 ) 
( 22 14 -3 -4 )
(  2 -9  5  6 )
(  4  7 22  5 )
because the second and third rows are interchanged. Notice that you've got to keep track of signs: interchanging, say, the second and fourth rows amounts to three adjacent interchanges, so the determinant gets multiplied by (-1)^3=-1; it is certainly not left unchanged.
The following matrix has the same determinant as A
( -3  4  0 18 ) 
(  2 -9  5  6 )
( 24  5  2  2 )
(  4  7 22  5 )
because I got it by adding the second row to the third row and placing the result in the third row.

Silly examples (?)
Look:

   ( 1 2 3 )    ( 1 2 3 )
det( 4 5 6 )=det( 3 3 3 ) (row2-row1)
   ( 7 8 9 )    ( 3 3 3 ) (row3-row2)
Now if two rows are identical, the det is 0, since interchanging them both changes the sign and leaves the matrix unchanged. So since det(A)=-det(A), det(A) must be 0.
Look even more at this:
 
   (   1   4   9  16 )    (  1  4   9  16 )                (  1  4  9 16 )
det(  25  36  49  64 )=det( 24 32  40  48 ) (row2-row1)=det( 24 32 40 48 )
   (  81 100 121 144 )    ( 56 64  72  80 ) (row3-row2)    ( 32 32 32 32 ) (row3-row2)
   ( 169 196 225 256 )    ( 88 96 104 112 ) (row4-row3)    ( 32 32 32 32 ) (row4-row3)
so since the result has two identical rows, the determinant of the original matrix must be 0.
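A one-line sanity check in Maple (a sketch; both determinants should come out 0 if the argument above is right):

> det(matrix(3,3,[1,2,3,4,5,6,7,8,9]));                                     # should be 0
> det(matrix(4,4,[1,4,9,16,25,36,49,64,81,100,121,144,169,196,225,256]));   # should be 0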

There are all sorts of tricky things one can do with determinant evaluations, if you want. Please notice that the linear systems gotten from, say, the finite element method applied to important PDE's definitely give coefficient matrices which are not random: they have lots of structure. So the tricky things above aren't that ridiculous.

Use row operations to ...
One standard way of evaluating determinants is to use row operations to change a matrix to either upper or lower triangular form (or even diagonal form, if you are lucky). Then the determinant will be the product of the diagonal terms. Here I used row operations (actually I had Maple use row operations!) to change this random matrix (well, the entries were produced sort of randomly by Maple) to an upper-triangular matrix.

[1 -1 3 -1] And now I use multiples of the first row to create 0's 
[4  4 3  4] below the (1,1) entry. The determinant won't change:
[3  2 0  1] I'm not multiplying any row in place, just adding
[3  1 3  3] multiples of row1 to other rows.
                              
[1 -1  3 -1] And now multiples of the second row to create 0's 
[0  8 -9  8] below the (2,2) entry. 
[0  5 -9  4] 
[0  4 -6  6]

[1 -1    3  -1] Of course, multiples of the third row to create 
[0  8   -9   8] 0's below the (3,3) entry.
[0  0 -27/8 -1] 
[0  0  -3/2  2]

[1 -1    3   -1 ] Wow, an upper triangular matrix!
[0  8   -9    8 ]
[0  0 -27/8  -1 ]
[0  0    0  22/9]
The determinant of the original matrix must be 1·8·(-27/8)·(22/9). Sigh. This should be -66, which is what Maple told me was the value of the determinant of the original matrix. And it is!
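Here is roughly how to get Maple to do those row operations itself (a sketch; I am assuming linalg's addrow(A,r1,r2,m), which should add m times row r1 to row r2, and the names A1, A2, A3 are mine):

> A:=matrix(4,4,[1,-1,3,-1,4,4,3,4,3,2,0,1,3,1,3,3]):
> A1:=addrow(addrow(addrow(A,1,2,-4),1,3,-3),1,4,-3):  # clear column 1 below the (1,1) entry
> A2:=addrow(addrow(A1,2,3,-5/8),2,4,-1/2):            # clear column 2 below the (2,2) entry
> A3:=addrow(A2,3,4,-4/9): evalm(A3);                  # should now be upper triangular
> mul(A3[i,i],i=1..4);                                 # product of the diagonal: should be -66, agreeing with det(A)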

Minors
If A is an n-by-n matrix, then the (i,j)th minor of A is the (n-1)-by-(n-1) matrix obtained by throwing away the ith row and jth column of A. For example, if A is

[1 -1 3 -1]
[4  4 3  4]
[3  2 0  1]
[3  1 3  3]
Then the (2,3) minor is gotten by deleting the second row and the third column:
>minor(A,2,3);
       [1 -1 -1]
       [3  2  1]
       [3  1  3]
Of course I had Maple do this, with the appropriate command.

Evaluating determinants by cofactor expansions
This field has a bunch of antique words. Here is another. It turns out that the determinant of a matrix can be evaluated by what are called cofactor expansions. This is rather weird. When I've gone through the proof that cofactor expansions work, I have not really felt enlightened. So I will not discuss proofs. Here is the idea. Suppose A is an n-by-n matrix. Each (i,j) position in this n-by-n matrix has an associated minor which I'll call M_(i,j). Then, for any fixed row i,

det(A) = SUM_(j=1..n) (-1)^(i+j)·a_(i,j)·det(M_(i,j))

(and there is a similar expansion down any fixed column). The (-1)^(i+j) pattern is an alternating pattern of +/-1's starting with +1 at the (1,1) place (think again about {checker|chess} boards).

Here: let's try an example. Suppose A is

[1 -1 3 -1]
[4  4 3  4]
[3  2 0  1]
[3  1 3  3]
as before. I asked Maple to compute the determinants of the minors across the first row.
Here are the results:
> det(minor(A,1,1));                                 
                                      -3
> det(minor(A,1,2));
                                       6
> det(minor(A,1,3));
                                      -16
> det(minor(A,1,4));
                                      -21
Remember that the first row is [1 -1 3 -1]. Now the sum, with the +/- signs, is
+1·(-3) - (-1)·6 + 3·(-16) - (-1)·(-21) = -3 + 6 - 48 - 21 = -66. But I already know that det(A)=-66.
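In fact the whole first-row expansion can be done in one line of Maple (a sketch, with A the 4-by-4 matrix above):

> add((-1)^(1+j)*A[1,j]*det(minor(A,1,j)),j=1..4);  # should give -66, the same as det(A)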

Recursion and strategy
You should try some examples, of course. This is about the only way I know to learn this stuff. If I had to give a short definition of determinant, and if I were allowed to use recursion, I think that I might write the following:
Input A, an n-by-n matrix.
     If n=1, then det(A)=a_(1,1)
     If n>1, then det(A)=SUM_(j=1..n) (-1)^(j+1)·a_(1,j)·det(M_(1,j)), where M_(1,j) is the (n-1)-by-(n-1) matrix obtained by deleting the first row and the jth column.
This is computing det(A) by repeatedly expanding along the first row. I've tried to write such a program, and if you have the time and want some amusement, you should try this also. The recursive nature rather quickly fills up the stack (n! is big big big) so this isn't too practical. But there are certainly times when the special form of a matrix allows quick and efficient computation by cofactor expansions.
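If you do want the amusement, here is roughly what such a program could look like in Maple (a sketch I have not polished; rowdim comes from linalg, and cofdet is just my name for the procedure):

> cofdet:=proc(A) local n,j;
    n:=rowdim(A);                  # the size of the (square) matrix
    if n=1 then A[1,1]             # a 1-by-1 determinant is the single entry
    else add((-1)^(j+1)*A[1,j]*cofdet(minor(A,1,j)),j=1..n)  # expand along the first row
    fi;
  end:
> cofdet(matrix(4,4,[1,-1,3,-1,4,4,3,4,3,2,0,1,3,1,3,3]));   # should agree with det: -66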

More formulas
You may remember that we had a
Decision problem Given an n-by-n matrix, A, how can we decide if A is invertible?
Here is how to decide:
       A is invertible exactly when det(A) is not 0.
Whether this is practical depends on the situation.
There was also a
Computational problem If we know A is invertible, what is the best way of solving AX=B? How can we create A^(-1) efficiently?
Well, this has an answer, too. The answer is on page 383 of the text. The inverse of A is the constant (1/det(A)) multiplied by the adjoint of A. I have to look this up. The adjoint is the transpose (means: flip over the main diagonal, or, algebraically, interchange i and j) of the matrix whose entries are (-1)^(i+j)·det(M_(i,j)).
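In Maple this is exactly the computation done in the transcript at the top of this diary; condensed, it is something like this (a sketch, reusing only commands that appear in that transcript, with adjA my name for the adjoint of the 3-by-3 A there):

> adjA:=transpose(matrix(3,3,[seq(seq((-1)^(i+j)*det(minor(A,i,j)),j=1..3),i=1..3)])):  # the adjoint of A
> evalm((1/det(A))*adjA);   # should agree with inverse(A)

(I believe linalg even has a built-in adjoint command that produces the same matrix, but check that yourself.)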

I think this formula is hideous and the only example I have seen worked out in detail (enough to be convincing!) is n=2. So here goes:

A is (a b) and M_(1,1)=d and M_(1,2)=c and M_(2,1)=b and M_(2,2)=a
     (c d)
Then the adjoint (put in +/- signs, put in transpose) is ( d -b)
                                                         (-c  a).
Since the det is ad-bc, the inverse must be 
( d/(ad-bc) -b/(ad-bc) )  
(-c/(ad-bc)  a/(ad-bc) )
If you mentally multiply this matrix by A you will get I2, the 2-by-2 identity matrix. I hope! Check this, please!
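Or let Maple do the mental multiplication (a sketch; AA and AAinv are my names for the symbolic 2-by-2 matrix and the claimed inverse):

> AA:=matrix(2,2,[a,b,c,d]):
> AAinv:=matrix(2,2,[d/(a*d-b*c),-b/(a*d-b*c),-c/(a*d-b*c),a/(a*d-b*c)]):
> map(simplify,evalm(AA&*AAinv));   # should simplify to the 2-by-2 identity matrix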

So there will be times you might need to decide between using an algorithmic approach and trying to get a formula. Let me show you a very simple example where you might want a formula. This example itself is perhaps not too realistic, but maybe you can see what real examples might look like.

Suppose we need to understand the linear system
2x+7y=6
Qx+3y=Q
Well, if the parameter Q is 0, then (second equation) y=0 and so (first equation) x=3. We could think of Q as some sort of control or something. I tried to convey a sort of physical problem that this might model, but the effort was perhaps not totally successful. What happens to x and y when we vary Q, for example, move Q up from 0 to a small positive number? I don't think this is clear. But we can in fact find a formula for x and y. This is sort of neat, actually. You may remember such a formula from high school, even.

   det( 6 7 )     det( 2 6 )              
      ( Q 3 )        ( Q Q ) 
x= ----------  y= ----------    
   det( 2 7 )     det( 2 7 )
      ( Q 3 )        ( Q 3 )
so that x=(18-7Q)/(6-7Q) and y=-4Q/(6-7Q). Now there are formulas, and I can ask questions about them. What is the 6-7Q in the bottom? There must be some sort of trouble when Q is 6/7. What trouble is there? On one level, hey, the trouble is that we aren't supposed to divide by 0. On another level, the trouble is that the system "collapses": the coefficient matrix is singular, not invertible, drops down in rank from 2 to 1. So we shouldn't expect nice things to occur. But if Q is 0, and then increases a bit, you know we could find out what happens to x (and y) by looking at dx/dQ and dy/dQ etc. So there is a place for formulas.
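Maple is happy to do this symbolically, too (a sketch; solve and diff are standard commands, and the derivative of x with respect to Q should work out to 84/(6-7Q)^2, which is positive near Q=0, so x creeps up as Q does):

> solve({2*x+7*y=6,Q*x+3*y=Q},{x,y});   # should reproduce x=(18-7Q)/(6-7Q) and y=-4Q/(6-7Q)
> normal(diff((18-7*Q)/(6-7*Q),Q));     # dx/dQ: should be 84/(6-7*Q)^2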

Cramer's Rule
I think this is Theorem 8.23 (page 392) of the text. It discusses a formula for solving AX=B where A is an n-by-n matrix with det(A) not equal to 0, and B is a known n-by-1 matrix, and X is an n-by-1 matrix of unknowns. Then x_j turns out to be det(A_j)/det(A), where A_j is the matrix obtained by replacing the jth column of A by B.

Well, the QotD was computing part of an example when n=3 of this.
3x-5y+z=2
2x+5y-z=0
x-y+z=7
What is z? According to Cramer's Rule, z is

   (3 -5 2)
det(2  5 0)
   (1 -1 7)
------------
   (3 -5  1)
det(2  5 -1)
   (1 -1  1)
I think I computed the determinant on the top in several ways, once with row operations, and once by cofactor expansions. Both gave 161. And, of course, there is this method:
>det(matrix(3,3,[3,-5,2,2,5,0,1,-1,7]));
                                      161
The bottom determinant was what I asked people to compute. It is
> det(matrix(3,3,[3,-5,1,2,5,-1,1,-1,1]));
                                      20
uhhhh ... twenty, yes, that's it, twenty. That's the answer to the QotD. Or, actually, we can check another way:
> A:=matrix(3,4,[3,-5,1,2,2,5,-1,0,1,-1,1,7]);
                                [3    -5     1    2]
                                [                  ]
                           A := [2     5    -1    0]
                                [                  ]
                                [1    -1     1    7]
> rref(A);
                             [1    0    0    2/5]
                             [                  ]
                             [               29 ]
                             [0    1    0    -- ]
                             [               20 ]
                             [                  ]
                             [               161]
                             [0    0    1    ---]
                             [               20 ]
This is the reduced row echelon form of the augmented matrix corresponding to the original system, and so z must be 161/20.

I first merely stated two facts which are sometimes really useful in computing determinants:

  1. If A and B are n-by-n matrices, then det(AB)=det(A)det(B).
    Comments This is best understood if one realizes that det also measures a volume distortion factor, so A in effect "maps" R^n to R^n by matrix multiplication, and det(A), it turns out, is the factor by which n-dimensional volumes are stretched. So multiplying by A and then by B composes the effects. Notice, as we observed in class, that det(A+B) is not det(A)+det(B) in general. Mr. Shah's example: A=I_2 and B=2I_2, so A+B=3I_2 and det(A)=1 and det(B)=4 and det(A+B)=9.
  2. If A is an n-by-n matrix, with A^t its transpose, then det(A)=det(A^t).
    Comments Here the transpose is the result of flipping the matrix over its main diagonal: rows and columns get interchanged. Algebraically, the (i,j)th entry in A^t is the (j,i)th entry in A. The reason these determinants are the same is that when the flip is done, the sign of each rook arrangement is preserved. (-1)^# counts, for each pair of elements in the arrangement, whether one is in the upper-right quadrant of the other. But one element is in the upper-right quadrant of a second exactly when the second is in the lower-left quadrant of the first, and transposing turns upper right into lower left, so the count # doesn't change. Whew -- this is actually a correct explanation, but perhaps not totally clear! (A quick Maple check of both facts is sketched below.)
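Here is the quick check promised above (a sketch; the 2-by-2 matrices are just ones I made up):

> AA:=matrix(2,2,[1,2,3,4]): BB:=matrix(2,2,[5,6,7,8]):
> det(evalm(AA&*BB))-det(AA)*det(BB);    # should be 0
> det(transpose(AA))-det(AA);            # should be 0
> det(evalm(AA+BB))-(det(AA)+det(BB));   # generally NOT 0 (for these matrices it should be -4)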
Now here is what I showed on the screen as I attempted to catch up with the last decade or so of the twentieth century in terms of technical pedagogy.