Lecture 1

Algebra may be roughly described as the study of number systems, or more generally of sets together with operations defined on those sets, and of functions (involving "variables") constructed out of the natural operations. The typical questions are about the structure of the sets themselves and the solution of equations involving the functions in question.

What about Linear Algebra? In this case, there are natural operations that correspond to the addition of real numbers and the multiplication of a number by a real number. If the basic set is the real numbers, as in the case of one-variable calculus, then the basic questions are about the solutions of the equation ax = b, for fixed numbers a and b, and the "variable" x. All of the possibilities are illustrated by separating into three cases: if a is different from 0, then there is exactly one solution; if a = 0 and b is different from 0, there are no solutions; and if a and b are both zero, then every real number x is a solution.
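The three cases can be sketched in a short program (a minimal illustration; the function name solve_ax_eq_b is ours, not standard):

```python
def solve_ax_eq_b(a, b):
    """Describe the solution set of a*x = b for fixed real a, b."""
    if a != 0:
        return f"unique solution x = {b / a}"  # exactly one solution
    elif b != 0:
        return "no solutions"                  # 0*x = b is impossible for b != 0
    else:
        return "all real numbers"              # 0*x = 0 holds for every x

print(solve_ax_eq_b(2, 6))  # unique solution x = 3.0
print(solve_ax_eq_b(0, 5))  # no solutions
print(solve_ax_eq_b(0, 0))  # all real numbers
```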

This categorization is the end of the story in one (real) variable, so of course the real subject of linear algebra is what happens if there are more variables. Then both the computations and the theory become substantially more complicated. (In more abstract settings, where the universe of "numbers" is no longer tied to the real numbers, one must re-examine the one variable case as well!) The basic functions under study are the linear ones, corresponding to the sums of real multiples of the variables. But even one-variable calculus indicates why an understanding of linear algebra is so important: the linear functions are the best simple approximations to many of the more complicated functions encountered in the mathematical modelling of the physical universe.

We will provide a general category of object of study - the matrix - and find that virtually all of our spaces, and equations, can be dealt with via operations on matrices.

Matrices

As spreadsheets become more ubiquitous in analyzing information, the importance of arranging information in 2-dimensional arrays becomes more self-evident.

Def. A matrix is a rectangular array of real numbers (scalars). Such an array is formed of rows (horizontal) and columns (vertical), and the size (or shape) of the matrix is called m x n if there are m rows and n columns. If m = n, the matrix is called square. The scalar in row i, column j, is called the (i,j) entry. Rows and columns are always numbered consecutively starting at the upper left-hand corner.

Notation: A = (aij) denotes the matrix with (i,j) entry aij.
O denotes the matrix all of whose entries are 0.

Examples

Def. A submatrix of a matrix A is the (new) matrix obtained by deleting some (or none) of the rows and/or columns of A.

Def. The transpose of a matrix A is the matrix A^T, with the rows and columns of A interchanged. In other words, the (i,j) entry of A^T is the (j,i) entry of A. If A is m x n, then A^T is n x m.

Examples

Operations: Addition and Scalar Multiplication

Let A = (aij) and B = (bij) both be m x n matrices. Then A + B (the sum) is the m x n matrix with (i,j) entry aij + bij. If c is a scalar, then cA (the scalar product) is the matrix with (i,j) entry c aij.
Notation: -A denotes (-1)A.
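These entrywise definitions translate directly into code. A minimal sketch, representing an m x n matrix as a list of m rows (the helper names mat_add, scalar_mul, and transpose are ours):

```python
def mat_add(A, B):
    """Entrywise sum of two m x n matrices (lists of rows)."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scalar_mul(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * A[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def transpose(A):
    """The (i,j) entry of the result is the (j,i) entry of A."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix
B = [[0, 1, 0],
     [1, 0, 1]]

print(mat_add(A, B))     # [[1, 3, 3], [5, 5, 7]]
print(scalar_mul(2, A))  # [[2, 4, 6], [8, 10, 12]]
print(transpose(A))      # [[1, 4], [2, 5], [3, 6]], a 3 x 2 matrix
```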

The following rules are easily verified, given that they hold for addition and scalar multiplication of the individual entries. The rules involving transposition are simple to verify, and the last one is very important.

Properties
Addition: matrix addition is commutative and associative; O is the additive identity, and -A is the additive inverse of A.
Scalar Multiplication: For all matrices and scalars,
1A = A, 0A = O, (st)A = s(tA), s(A + B) = sA + sB, (s + t)A = sA + tA.

Transposition: (A + B)^T = A^T + B^T, (sA)^T = sA^T, (A^T)^T = A.
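The transposition rules can be spot-checked numerically (a sketch using NumPy, which is assumed available; a numerical check on one example is not a proof, of course):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
s = 2

# (A + B)^T = A^T + B^T
assert ((A + B).T == A.T + B.T).all()
# (sA)^T = s A^T
assert ((s * A).T == s * A.T).all()
# (A^T)^T = A
assert (A.T.T == A).all()
```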

Examples

Vectors

"Vectors" are as important as matrices in their own right, but in fact can be viewed as a subclass of matrices, so it makes sense to define matrices first, and then carry over their already considered operations and properties.

Def. A vector is a matrix with one row (row vector) or one column (column vector). The entries of a vector are called components. The set of vectors (column or row, depending on the context) with n components is denoted R^n.

Notation: Vectors are typically denoted by lower-case letters: boldface u in print, underlined u when typed, or overlined (with ¯) when written by hand.

Note that vector addition and scalar multiplication are subexamples of the matrix case. Also note that the transpose of a row vector is a column vector, and vice versa.

Geometric Interpretations:
Vectors, Vector Addition, and Scalar Multiplication in R^2 and in R^3: the usual.
The vector corresponds to the ray connecting the origin with the point whose coordinates are the vector's components; addition corresponds to lining up the rays end to end (parallelogram law), and scalar multiplication corresponds to a stretching or shrinking of the ray in the given, or the opposite, direction.

Applications: e.g. vector addition of velocities, as in boating on a river (the velocity of the boat relative to the water, plus the velocity of the water relative to the shore, gives the velocity of the boat relative to the shore).
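The boating example amounts to componentwise addition (a sketch with hypothetical numbers; units and values are ours):

```python
# Velocity of the boat relative to the water, plus velocity of the water
# relative to the shore, gives velocity of the boat relative to the shore.
boat_rel_water = [4.0, 0.0]   # heading straight across the river, 4 m/s
water_rel_shore = [0.0, 3.0]  # current carries the boat downstream, 3 m/s

boat_rel_shore = [u + v for u, v in zip(boat_rel_water, water_rel_shore)]
print(boat_rel_shore)  # [4.0, 3.0]
```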

Linear Combinations

Addition and scalar multiplication are the basic rules of the game for vectors, and if we have a given collection of vectors (of the same size) we can consider all of the vectors that result from such operations.

Def. A linear combination of the vectors u1, ..., uk in R^n is a sum of scalar multiples of these vectors, i.e., c1 u1 + · · · + ck uk. The scalars c1, ..., ck are called the coefficients.
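Computing a linear combination is just repeated scalar multiplication and addition, componentwise. A minimal sketch (the function name linear_combination is ours):

```python
def linear_combination(coeffs, vectors):
    """Return c1*u1 + ... + ck*uk for vectors in R^n (lists of floats)."""
    n = len(vectors[0])
    result = [0.0] * n
    for c, u in zip(coeffs, vectors):
        for i in range(n):
            result[i] += c * u[i]
    return result

u1 = [1.0, 0.0, 2.0]
u2 = [0.0, 1.0, -1.0]
print(linear_combination([3.0, 2.0], [u1, u2]))  # [3.0, 2.0, 4.0]
```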

Examples

Def. u, v in Rn are parallel if one is a scalar multiple of the other.

It is not hard to show, as in the examples, that if u and v in R^2 are NOT parallel, then EVERY vector in R^2 is a linear combination of u and v.
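Finding the coefficients explicitly in R^2 comes down to solving a 2 x 2 system; the closed-form solution below is Cramer's rule, whose denominator is nonzero exactly when u and v are not parallel (the function name coefficients is ours):

```python
def coefficients(u, v, w):
    """Solve a*u + b*v = w in R^2, assuming u and v are not parallel."""
    det = u[0] * v[1] - u[1] * v[0]   # nonzero exactly when u, v not parallel
    a = (w[0] * v[1] - w[1] * v[0]) / det
    b = (u[0] * w[1] - u[1] * w[0]) / det
    return a, b

u, v, w = [1.0, 1.0], [1.0, -1.0], [5.0, 1.0]
a, b = coefficients(u, v, w)
print(a, b)  # 3.0 2.0
# check: 3*[1, 1] + 2*[1, -1] = [3 + 2, 3 - 2] = [5, 1] = w
```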
