The standard unit vectors in Rn are the keys to relating algebraic notions to geometric ones. Their algebra is particularly simple because they are mutually perpendicular (and because each has length one). When dealing with subspaces of Rn, it is useful to find similar collections of vectors.
Definition. A nonempty subset of nonzero vectors in Rn is called an orthogonal set if every pair of distinct vectors in the set is orthogonal.
Examples
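As a quick numerical illustration (a minimal sketch using NumPy; the vectors below are my own choices, not from the notes), we can verify that a set is orthogonal by checking every pair of distinct vectors:

```python
import numpy as np

# Hypothetical example vectors in R^3, chosen so each pair is orthogonal.
vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, -1.0, 0.0]),
           np.array([0.0, 0.0, 2.0])]

def is_orthogonal_set(vs, tol=1e-12):
    """Check the definition: no zero vectors, and every pair of
    distinct vectors has dot product zero."""
    if any(np.linalg.norm(v) < tol for v in vs):
        return False  # the definition excludes the zero vector
    return all(abs(np.dot(vs[i], vs[j])) < tol
               for i in range(len(vs)) for j in range(i + 1, len(vs)))

print(is_orthogonal_set(vectors))  # True
```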
Orthogonal sets are automatically linearly independent.
Theorem. Any orthogonal set of vectors is linearly independent.
To see this result, suppose that v1, . . ., vk are in this orthogonal set, and there are constants c1, . . ., ck such that c1 v1 + · · · + ck vk = 0. For any j between 1 and k, take the dot product of vj with both sides of this equation. Since vj·vi = 0 whenever i ≠ j, every term on the left vanishes except the j-th, and we obtain cj ||vj||2 = 0. Since vj is not 0 (an orthogonal set contains no zero vector, by definition), this forces cj = 0. Thus the only linear combination of vectors in the set which equals the 0 vector is the one in which all of the coefficients are zero, which means that the set is linearly independent.
Definition. If V is a subspace of Rn and S is an orthogonal set which spans V (and hence is a basis for V), S is called an orthogonal basis for V.
Now, if {v1, . . ., vk } is an orthogonal basis for V and v is any element of V, we know that we can write
v = c1 v1 + · · · + ck vk
for some choice of coefficients. The advantage of orthogonality is that the coefficients are easy to determine. Again, by taking the dot product of vj with both sides of this equation we obtain v·vj = cj ||vj||2, and therefore cj = (v·vj) / ||vj||2. Thus we have
Theorem (Representation of Vectors in terms of Orthogonal Bases)
Let {v1, . . ., vk} be an orthogonal basis for a subspace V of Rn, and let v be any vector in V. Then

v = (v·v1 / ||v1||2) v1 + · · · + (v·vk / ||vk||2) vk.
Examples
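For instance, the coefficient formula can be checked numerically (a minimal sketch; the orthogonal basis and the vector v are assumed examples of my own):

```python
import numpy as np

# An orthogonal basis for a subspace V of R^3 (here V = R^3 itself).
basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, -1.0, 0.0]),
         np.array([0.0, 0.0, 2.0])]

v = np.array([3.0, 1.0, 4.0])  # an arbitrary vector in V

# c_j = (v . v_j) / ||v_j||^2, as in the theorem.
coeffs = [np.dot(v, b) / np.dot(b, b) for b in basis]

# Reconstructing v from the coefficients reproduces v exactly.
reconstruction = sum(c * b for c, b in zip(coeffs, basis))
print(coeffs)          # [2.0, 1.0, 2.0]
print(reconstruction)  # [3. 1. 4.]
```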
Orthogonal bases are computationally easier to deal with than general bases. Is it true that every subspace of Rn has an orthogonal basis? The answer is yes: starting with any basis, we can construct an orthogonal one via the following algorithm. The algorithm follows the same procedure we used to decompose a vector into a component parallel to a given line and a component perpendicular to that line; now we decompose each successive vector into a component in the span of the vectors already constructed and a component perpendicular to that span.
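The building block is this one-vector decomposition; here is a minimal sketch of it (the function name and test vectors are my own, for illustration):

```python
import numpy as np

def project(u, v):
    """Component of u parallel to v: (u . v / ||v||^2) v."""
    return (np.dot(u, v) / np.dot(v, v)) * v

u = np.array([2.0, 3.0])
v = np.array([1.0, 0.0])         # direction of the line

parallel = project(u, v)         # [2. 0.]
perpendicular = u - parallel     # [0. 3.]
print(np.dot(perpendicular, v))  # 0.0, so it is orthogonal to v
```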
Theorem (Gram-Schmidt Algorithm)
Let {u1, . . ., uk} be a linearly independent subset of Rn. Then the set {v1, . . ., vk} constructed as follows is orthogonal, and has the same span as the original set: set v1 = u1, and having constructed v1, . . ., vj-1, set

vj = uj - {(uj·v1 / ||v1||2) v1 + · · · + (uj·vj-1 / ||vj-1||2) vj-1}.
Indeed, the set {v1, . . ., vk} has the same span as {u1, . . ., uk}, because each element of it is a linear combination of the corresponding element of the original set (with nonzero coefficient) and its predecessors. Assuming we have already proved that {v1, . . ., vj-1} is orthogonal, then {v1, . . ., vj} is orthogonal, because for i < j we have

vi·vj = vi·uj - {(uj·v1 / ||v1||2) vi·v1 + · · · + (uj·vj-1 / ||vj-1||2) vi·vj-1} = vi·uj - (uj·vi / ||vi||2) vi·vi = 0,

since the induction hypothesis makes every dot product vi·vl with l ≠ i vanish, leaving only the i-th term. By induction, then, the set {v1, . . ., vk} is orthogonal.
Example
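A direct transcription of the algorithm into NumPy looks like this (a sketch; the function name and the input vectors are my own, not from the notes):

```python
import numpy as np

def gram_schmidt(us):
    """Given linearly independent vectors u_1, ..., u_k, return the
    orthogonal vectors v_1, ..., v_k constructed in the theorem."""
    vs = []
    for u in us:
        # Subtract the projections of u onto the v's built so far;
        # for the first vector nothing is subtracted, so v_1 = u_1.
        v = u - sum((np.dot(u, w) / np.dot(w, w)) * w for w in vs)
        vs.append(v)
    return vs

us = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
vs = gram_schmidt(us)
# Every pair of distinct output vectors is orthogonal:
print([round(np.dot(vs[i], vs[j]), 10)
       for i in range(3) for j in range(i + 1, 3)])  # [0.0, 0.0, 0.0]
```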
It is convenient to deal with orthogonal vectors each of which has norm 1; such vectors are called unit vectors. Notice that any orthogonal set can be transformed into one in which each vector has norm 1 by simply dividing each vector by its norm: the orthogonality is unaffected, and the new norms are all 1. An orthogonal basis consisting of unit vectors is called an orthonormal basis, and the Gram-Schmidt procedure and the earlier representation theorem yield the following result.
Theorem
Every subspace W of Rn has an orthonormal basis. If {w1, . . ., wk} is an orthonormal basis for W, and v is a vector in W, then the coordinates of v with respect to the basis are given by dot products:

v = (v·w1) w1 + · · · + (v·wk) wk.
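A short numerical sketch of this (the orthogonal basis below is an assumed example, the output of the Gram-Schmidt computation above):

```python
import numpy as np

# Assumed example: an orthogonal basis of R^3.
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([0.5, -0.5, 1.0]),
      np.array([-2/3, 2/3, 2/3])]

# Dividing each vector by its norm yields an orthonormal basis.
ws = [v / np.linalg.norm(v) for v in vs]

v = np.array([2.0, 5.0, 7.0])
# With an orthonormal basis the denominators ||w_j||^2 are all 1,
# so the coordinates are just the dot products c_j = v . w_j.
coords = [np.dot(v, w) for w in ws]
reconstruction = sum(c * w for c, w in zip(coords, ws))
print(np.allclose(reconstruction, v))  # True
```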