Lecture 15
Basis and Dimension

We want to consider basic properties of subspaces that allow us to see that the examples we have already considered (spans of sets and solutions of homogeneous linear systems) are different ways of describing all the subspaces of R^n. When we consider spanning sets, we will want to eliminate redundancies in the vectors used to span the subspace.

Definition
Let V be a subspace of R^n. A basis for V is a linearly independent set that spans V.

Examples:

(1) The standard unit vectors {e1, ..., en} are a basis for R^n.

(2) A basis for the column space of a matrix can be found by identifying a linearly independent subset of the columns which spans the same space as the columns. (Gaussian elimination allows us to find such a subset, by picking out linearly independent columns from among the original ones.)

Theorem (Reduction Principle) Let V be a subspace of R^n spanned by a finite set S. Then there is a basis for V contained in S.

Indeed, if we form the matrix whose columns are the vectors in S, then the procedure above constructs the corresponding linearly independent spanning subset.
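As a sketch of this procedure (an illustrative implementation, not from the lecture, using exact rational arithmetic): row-reduce the matrix whose columns are the vectors of S; the pivot columns of the reduced form tell us which of the original columns to keep.

```python
from fractions import Fraction

def pivot_columns(rows):
    """Row-reduce the matrix (given as a list of rows) and return the
    indices of its pivot columns; the corresponding columns of the
    ORIGINAL matrix form a basis for the column space."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        # Find a row at or below r with a nonzero entry in column c.
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue                      # no pivot in this column
        m[r], m[pr] = m[pr], m[r]         # swap it into position
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):           # clear the rest of column c
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# Made-up example: column 2 is the sum of columns 0 and 1, so it is
# redundant and should be dropped from the spanning set.
A = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 2]]
print(pivot_columns(A))  # [0, 1]: columns 0 and 1 are a basis
```

The dependent column lands in a non-pivot position, so the surviving columns are a linearly independent subset of S that still spans the column space.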

Theorem (Extension Principle) Let V be a subspace of R^n. Every linearly independent subset of V is contained in a basis for V.

To see this, start with any linearly independent subset of V. If that subset spans V, then it is a basis. If not, then by our earlier results there is a vector in V that is not in the span of this subset, and adjoining that vector keeps the subset linearly independent. Repeat the argument with the enlarged subset. When this process terminates, we will have a linearly independent subset which spans all of V. And the process must terminate, because we cannot have a set with more than n linearly independent vectors in R^n.
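The extension process can be sketched in code (a hypothetical illustration: here we enlarge the set by offering it the standard unit vectors, each of which either lies in the current span or extends it).

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by Gaussian elimination over
    exact rationals, avoiding floating-point pitfalls."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def extend_to_basis(indep, n):
    """Extend a linearly independent list of vectors in R^n to a
    basis of R^n by trying the standard unit vectors one at a time
    and keeping each one that enlarges the span."""
    basis = list(indep)
    for j in range(n):
        e = [0] * n
        e[j] = 1
        if rank(basis + [e]) > rank(basis):  # e is not in span(basis)
            basis.append(e)
        if len(basis) == n:   # n independent vectors already span R^n
            break
    return basis

# Start from a single independent vector in R^3 (a made-up example):
B = extend_to_basis([[1, 1, 0]], 3)
print(B)  # [[1, 1, 0], [1, 0, 0], [0, 0, 1]]
```

The loop terminates after at most n steps, mirroring the argument above: no more than n linearly independent vectors fit in R^n.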

Basis Theorem Let V be a subspace of R^n other than the zero-space {0}. Then V has a basis. Moreover, every basis of V has the same number of vectors.

Notice that any single nonzero vector in V is a linearly independent subset of V. The extension principle then allows us to find a basis containing any such starting vector. Now, suppose we have two bases for V, one with j vectors and one with k vectors. We already know that the number of linearly independent vectors in V is at most the number of vectors in a spanning set. But this means that j ≤ k and k ≤ j, and hence j = k.

Definition. The dimension of a subspace V of R^n other than {0} is the number of vectors in any basis of V. It is denoted dim(V). The dimension of the subspace {0} is defined to be zero.

Notice that another way of stating the earlier theorem (that a set spanned by k vectors contains at most k linearly independent vectors) is to say that every linearly independent set in a subspace V has at most dim(V) elements.

How do we recognize that a subset of a subspace V is a basis for V ?

Theorem Let V be a subspace of R^n with dim(V) = k. Then any two of the following three conditions on a subset S of V imply that S is a basis for V.
(a) S is linearly independent.
(b) S spans V.
(c) S has exactly k vectors.

Notice that the combination of (a) and (b) is exactly the definition of S being a basis for V. If (a) holds, then the extension principle means that S can be extended to a basis of V, but if (c) also holds then S must already be a basis (because extending it would result in a basis with more elements than the dimension of V). If (b) holds, then the reduction principle means that S can be reduced to a basis of V, but if (c) also holds then S must already be a basis (because reducing it would result in a basis with fewer elements than the dimension of V).
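For instance (an illustrative check, not from the lecture), take V = R^3, so dim(V) = 3. A set of exactly 3 vectors satisfies (c), and condition (a) can then be tested by checking that the matrix whose rows are those vectors has rank 3.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# S has exactly 3 vectors, so (c) holds; rank(S) == 3 means S is
# linearly independent, so (a) holds, and by the theorem S is a
# basis for R^3.
S = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(rank(S) == len(S))  # True

# Three vectors that are NOT independent (row 2 = row 0 + row 1),
# so (a) fails and T is not a basis even though (c) holds:
T = [[1, 1, 0], [0, 1, 1], [1, 2, 1]]
print(rank(T) == len(T))  # False
```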

Examples
