Annotated and linked table of linear algebra terms


Vector and scalar Vectors can be added and subtracted, and there is a zero vector. Vector addition is associative and commutative. Vectors can be multiplied by scalars. Several different kinds of vectors and scalars of interest in scientific and engineering problems are described here. See pages 331-332 of the textbook for the formal definition, where vector space is also defined.
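As a concrete illustration outside the lecture notes (the particular vectors are my own), numpy arrays model vectors in R^3, and addition, commutativity, scalar multiplication, and the zero vector behave exactly as described:

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])
    w = np.array([4.0, -1.0, 0.5])
    zero = np.zeros(3)

    print(v + w)        # vector addition
    print(w + v)        # commutativity: same result as v + w
    print(2.5 * v)      # multiplication by a scalar
    print(v + zero)     # adding the zero vector gives back v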
Linear equation A linear equation is a sum of scalar multiples of unknown quantities equal to a scalar. First officially mentioned here. See p.351 in the textbook.
System of linear equations A system of linear equations is a collection of linear equations. Defined here. See p.351 of the textbook.
Coefficient matrix; augmented matrix. A coefficient matrix is the rectangular array of coefficients of a linear system. An augmented matrix includes an additional column for the constants of the system. Defined here. The augmented matrix is mentioned on p.354 of the text.
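To make the definitions concrete (this small system is my own example, not the one from lecture), here is a Python/numpy sketch of a system, its coefficient matrix, and its augmented matrix:

    import numpy as np

    # The system   x + 2y = 5
    #             3x -  y = 1
    A = np.array([[1.0,  2.0],
                  [3.0, -1.0]])            # coefficient matrix
    b = np.array([5.0, 1.0])               # constants on the right-hand side
    augmented = np.column_stack([A, b])    # augmented matrix [A | b]

    print(augmented)
    print(np.linalg.solve(A, b))           # the solution (x, y) = (1, 2)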
Matrix algebra Sums of matrices whose shapes agree are defined. Products of matrices whose "inner" dimensions agree are defined (this turns out to be repeated dot products of rows and columns). Much of what we will do in linear algebra can be described in terms of matrix manipulation. Here is the beginning of the discussion of simple matrix algebra. See section 8.2 of the textbook.
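A minimal numpy sketch (matrices chosen by me) of the shape rules: sums need identical shapes, a product needs agreeing "inner" dimensions, and each entry of the product is a row-column dot product.

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])       # 2-by-3
    B = np.array([[1, 0],
                  [0, 1],
                  [2, 2]])          # 3-by-2

    print(A + A)                    # sum: shapes must agree exactly
    print(A @ B)                    # product: (2-by-3)(3-by-2) gives 2-by-2
    print(A[0, :] @ B[:, 0])        # the (1,1) entry: row 1 of A dotted with column 1 of B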
Solution A solution to a system of linear equations is an n-tuple of scalars which, when substituted into each equation of the system, makes the equation true. "n-tuple" is used here since the order of the variables is important. First mentioned here but discussed constantly. I think this is first mentioned in the text on p.352.
Homogeneous system A collection of linear equations which are all set equal to 0. See here. See p.359.
Trivial and non-trivial solutions. A homogeneous system always has a solution where all of the variables are 0. This is the trivial solution. A non-trivial solution is a solution where at least one variable is not 0. See here again, and p.359 of the text.
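As a quick computational aside (the matrix is mine), sympy's nullspace method produces the non-trivial solutions of a homogeneous system AX = 0 when they exist:

    import sympy as sp

    A = sp.Matrix([[1, 2, 3],
                   [2, 4, 6]])       # the second equation is a multiple of the first

    print(A.nullspace())             # non-trivial solutions of A*X = 0
    print(A * A.nullspace()[0])      # substituting one of them gives the zero vector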
Equivalent systems Systems of linear equations are equivalent when they have the same set of solutions. Equivalence is sometimes noted with ~. See here.
Row operations Row operations are three ways of producing equivalent systems. Row operations are described here and in section 8.2 of the textbook.
Reduced row echelon form (RREF) An equivalent form of the augmented matrix of a linear system from which deductions about the existence and form of solutions can be made easily. The definition is here but there is an instructive example here, followed by further discussion. See section 8.2 of the text.
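Sympy can compute the RREF directly; a hedged sketch with a small system of my own choosing (not the example from the notes):

    import sympy as sp

    # Augmented matrix of   x +  y +  z =  6
    #                           2y + 5z = -4
    #                      2x + 5y -  z = 27
    M = sp.Matrix([[1, 1,  1,  6],
                   [0, 2,  5, -4],
                   [2, 5, -1, 27]])

    rref_M, pivot_columns = M.rref()
    print(rref_M)            # the RREF: the last column shows the solution x=5, y=3, z=-2
    print(pivot_columns)     # columns containing the leading 1's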
Linear combination A linear combination of vectors is a sum of scalar multiples of the vectors. First mentioned here and used constantly later.
Consistent; inconsistent; compatibility condition A linear system is consistent if it has solutions. A system is inconsistent if it has no solutions. The compatibility conditions are homogeneous linear equations which must be satisfied by the constants (right-hand sides) of the original system for the system to be consistent. These terms are defined in the discussion of the first major RREF example.
Linear independence A collection of vectors is linearly independent if any linear combination of the vectors which is equal to 0 must have all of its scalar coefficients equal to 0.
That is, w_1, w_2, ..., w_k are linearly independent if whenever the linear combination v = c_1 w_1 + c_2 w_2 + ... + c_k w_k = 0, then all of the scalars c_j MUST be 0.
See here for the first mention, but this will be used constantly in what follows. The official definition appears on p.334 of the text but there are many other references. Linear dependence and linear independence are also discussed in several examples in what follows.
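One practical test (my own sketch, not necessarily the method emphasized in the notes): put the vectors in the columns of a matrix; they are linearly independent exactly when the rank of that matrix equals the number of vectors.

    import numpy as np

    w1 = np.array([1.0, 0.0, 2.0])
    w2 = np.array([0.0, 1.0, 1.0])
    w3 = np.array([1.0, 1.0, 3.0])     # w3 = w1 + w2, so the set is dependent

    W = np.column_stack([w1, w2, w3])
    print(np.linalg.matrix_rank(W))                            # 2 < 3: dependent
    print(np.linalg.matrix_rank(np.column_stack([w1, w2])))    # 2 = 2: w1, w2 are independent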
Spanning A collection of vectors is spanning if every vector can be written as a linear combination of vectors in the collection. Defined here and discussed during the lecture.
Basis A collection of vectors is a basis if the vectors are linearly independent and if every vector is a linear combination of these vectors (that is, the span of these vectors is everything). First defined here.
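A hedged numerical check (vectors chosen by me): in R^3, three vectors form a basis exactly when the 3-by-3 matrix with those vectors as columns has rank 3; then every vector is a linear combination of them, with coefficients found by solving a linear system.

    import numpy as np

    B = np.column_stack([[1.0, 0.0, 0.0],
                         [1.0, 1.0, 0.0],
                         [1.0, 1.0, 1.0]])    # the three basis vectors as columns

    print(np.linalg.matrix_rank(B))           # 3: independent and spanning, so a basis
    v = np.array([2.0, 7.0, -1.0])
    coefficients = np.linalg.solve(B, v)      # coefficients of v in this basis
    print(coefficients, B @ coefficients)     # B @ coefficients reproduces v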
Consistent and inconsistent systems A system of equations is consistent if it has a solution. If there are no solutions, then the system is inconsistent. A homogeneous system is always consistent, since it always has the trivial solution. These terms are used in the text, and are prominent in the diagram used to describe the qualitative aspects of solutions of linear systems.
Rank The rank is the number of non-zero rows in the RREF of the matrix. First mentioned here and used frequently afterwards.
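numpy will report the rank numerically (it uses a singular value computation rather than the RREF, but the answer is the same number); the matrix below is my own example:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 9.0]])   # the third row is 2*(second row) - (first row)

    print(np.linalg.matrix_rank(A))   # 2: only two non-zero rows survive in the RREF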
Inverse of a matrix; identity matrix The inverse of a square n-by-n matrix A is another square n-by-n matrix, usually called A^(-1), so that A·A^(-1) = I_n. Here I_n, the n-by-n identity matrix, is the square n-by-n matrix whose diagonal entries are all 1's and whose off-diagonal entries are all 0's. If B is any n-by-n matrix, then B·I_n = B and I_n·B = B. The definition is given here. A method for finding A^(-1) is discussed immediately after, with some analysis of the work involved and of how it can fail. The matrix A has an inverse exactly when the rank of A is n.
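A short numpy sketch (the matrix is mine): when the rank is n the inverse exists and A·A^(-1) really is the identity; for a rank-deficient matrix np.linalg.inv raises an error instead.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [5.0, 3.0]])      # rank 2, so an inverse exists
    A_inv = np.linalg.inv(A)

    print(A_inv)                    # [[ 3. -1.] [-5.  2.]]
    print(A @ A_inv)                # the 2-by-2 identity matrix (up to rounding)
    print(np.eye(2))                # I_2 for comparison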
Determinant The determinant is a function of all n^2 entries of an n-by-n matrix. It is very complicated, but has some wonderful properties. It can be useful in getting formulas for inverses and for solutions of linear equations if you absolutely need them. Well, for all the good it does, here is the official definition of determinant. Please use it with care, because it is so unwieldy.
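For numerical work the definition is rarely used directly; numpy evaluates determinants for you (a sketch with a matrix of my own):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0],
                  [3.0, 1.0, 4.0],
                  [0.0, 2.0, 1.0]])
    print(np.linalg.det(A))          # -13 (up to rounding)

    B = np.array([[1.0, 2.0],
                  [2.0, 4.0]])       # dependent rows
    print(np.linalg.det(B))          # 0: B has no inverse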
Transpose If B is a p-by-q matrix, then B^t, the transpose of B, is a q-by-p matrix, and the (i,j)th entry of B^t is the (j,i)th entry of B. Here, along with the fact that det(A) = det(A^t) when A is a square matrix.
Minor If A is an n-by-n matrix, then the (i,j)th minor of A is the (n-1)-by-(n-1) matrix obtained by throwing away the ith row and jth column of A. Here, followed by evaluating determinants using cofactor expansions.
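A short recursive sketch (code and matrix are my own, not from the notes) of cofactor expansion along the first row, using the minors defined above:

    import numpy as np

    def minor(A, i, j):
        # Delete row i and column j of A (0-indexed) to get the (i,j)th minor.
        return np.delete(np.delete(A, i, axis=0), j, axis=1)

    def det_by_cofactors(A):
        # Determinant by cofactor expansion along the first row.
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        return sum((-1) ** j * A[0, j] * det_by_cofactors(minor(A, 0, j))
                   for j in range(n))

    A = np.array([[1.0, 2.0, 0.0],
                  [3.0, 1.0, 4.0],
                  [0.0, 2.0, 1.0]])
    print(det_by_cofactors(A), np.linalg.det(A))   # both give -13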
Eigenvalue; eigenvector If A is an n-by-n matrix, an n-dimensional vector X is called an eigenvector of A if X is not zero and if there is a scalar λ so that AX = λX. The scalar λ is called an eigenvalue and X is an eigenvector associated to that eigenvalue. Also called characteristic value, proper value, etc. See here.
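numpy computes eigenvalues and eigenvectors together (the 2-by-2 matrix below is my own example); column j of the returned eigenvector matrix goes with eigenvalue j:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigenvalues, eigenvectors = np.linalg.eig(A)

    print(eigenvalues)                 # 3 and 1 (possibly listed in the other order)
    X = eigenvectors[:, 0]             # eigenvector paired with the first eigenvalue
    print(A @ X, eigenvalues[0] * X)   # AX equals (lambda)X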
Characteristic polynomial If A is an n-by-n matrix, and we let I_n be the n-by-n identity matrix (1's on the diagonal with 0's elsewhere) then the characteristic polynomial of A is det(A - λI_n). This is a polynomial of degree n in λ. The roots of the characteristic polynomial are the eigenvalues of A. See here.
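The coefficients and roots of the characteristic polynomial can also be found numerically; a hedged sketch (same matrix as above, code is mine):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    coefficients = np.poly(A)       # coefficients of the characteristic polynomial of A
    print(coefficients)             # [1, -4, 3], i.e. lambda^2 - 4*lambda + 3
    print(np.roots(coefficients))   # its roots, 3 and 1, are the eigenvalues of A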
Diagonalization Suppose A is a square matrix. Then a diagonalization of A is an invertible matrix C and a diagonal matrix D so that C^(-1)AC = D. We could also write A = CDC^(-1). See here. Reasons for diagonalizing are discussed afterwards, and conditions and examples are then shown.
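Assuming A has a full set of independent eigenvectors, the eigenvector matrix from np.linalg.eig can play the role of C and the eigenvalues fill D; a sketch continuing the same example:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigenvalues, C = np.linalg.eig(A)    # columns of C are eigenvectors of A
    D = np.diag(eigenvalues)             # diagonal matrix of eigenvalues

    print(np.linalg.inv(C) @ A @ C)      # equals D (up to rounding)
    print(C @ D @ np.linalg.inv(C))      # reassembles A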
Symmetric; skew-symmetric A matrix A is symmetric if A = A^t (A is equal to its own transpose). A matrix A is skew-symmetric if A = -A^t (A is equal to minus its own transpose). See here.
Orthogonal A matrix C is orthogonal if C^(-1) = C^t: its inverse is equal to its transpose. See here.
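A quick numerical check (the rotation matrix is my own choice): for an orthogonal matrix the transpose really does act as the inverse.

    import numpy as np

    theta = 0.3
    C = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])    # a rotation matrix is orthogonal

    print(C.T @ C)                               # the identity matrix (up to rounding)
    print(np.allclose(C.T, np.linalg.inv(C)))    # True: C^t equals C^(-1)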


Maintained by greenfie@math.rutgers.edu and last modified 10/4/2005.