Some definitions discussed in sections 1 and 5 of Math 250
Lecture 1 (1.1 & 1.2) [1/19/2011]
- Matrix
- Matrix addition; scalar multiplication
- Transpose
- Row vector
- Column vector
- Linear combination
Lecture 2 (1.3) [1/24/2011]
- System of linear equations
- Consistent and inconsistent system
- Solution and solution set
- Equivalent systems of linear equations
- Elementary row operations
- Basic variables and free variables; general solution
- Coefficient matrix of a linear system; augmented matrix of a
linear system
- Reduced row echelon form (RREF)
Lecture 3 (1.4) [1/31/2011]
- Again, row echelon form
- RREF
- Pivot and pivot column (done only in one class!)
- Rank and nullity (not done due to forgetfulness)
Lecture 4 (1.6) [2/2/2011]
- Pivot, pivot column, rank, nullity (from 1.3)
- Linear combination (again)
- Span
- Generating set
Lecture 5 (1.7) [2/7/2011]
- Homogeneous system of linear equations
- Linearly dependent set of vectors
- Linearly independent set of vectors
Lecture 6 (2.1) [2/9/2011]
Lecture 7 (2.1) [2/14/2011]
- Nilpotent is defined in the text, although much later (p. 221)
- Identity matrix, I_m (not done due to forgetfulness)
The two concepts which follow are not explicitly
mentioned in the text but people should know about them. They will
come up later in the course and corresponding ideas/facts are used in
many other places in math and its applications.
- Associated homogeneous system
- Particular solution, general solution
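Here is a tiny numerical illustration of these two ideas (my own
example, in Python with numpy; nothing here is the textbook's
notation): every solution of A x = b is a particular solution plus a
solution of the associated homogeneous system.

    import numpy as np

    # If x_p solves A x = b and x_h solves the associated homogeneous
    # system A x = 0, then x_p + (any multiple of) x_h also solves A x = b.
    A = np.array([[1.0, 2.0, 1.0],
                  [0.0, 1.0, 1.0]])
    b = np.array([4.0, 1.0])

    x_p = np.array([2.0, 1.0, 0.0])   # a particular solution
    x_h = np.array([1.0, -1.0, 1.0])  # a homogeneous solution

    print(A @ x_p)              # [4. 1.]  -- solves A x = b
    print(A @ x_h)              # [0. 0.]  -- solves A x = 0
    print(A @ (x_p + 3 * x_h))  # [4. 1.]  -- still solves A x = b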
Lecture 8 (2.3) [2/16/2011]
- Identity matrix, I_m
- Invertible matrix
- Inverse of a matrix
- Elementary matrix
Lecture 9 (2.4) [2/21/2011]
This is not an official part of the course!
- Unimodular matrix
These are invertible matrices with integer entries whose inverse also
has only integer entries. I gave an example in class. The matrix
[ 2  1  0  0]
[ 1  1  1  0]
[ 0  2  2 -1]
[ 1  0  2  1]
has inverse
[ 2  -4   1   1]
[-3   8  -2  -2]
[ 1  -3   1   1]
[-4  10  -3  -2]
See the relevant Wikipedia entry on unimodular matrices.
Such matrices initially seem very strange and "rare". They are used
in many mathematical areas, and they arise in applications such as
geometric problems from materials science and in operations research.
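If you want to check the example above, here is a short Python/numpy
sketch (my code, not something we did in class):

    import numpy as np

    # The matrix from class and its claimed inverse, all integers.
    A = np.array([[2, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 2, 2, -1],
                  [1, 0, 2, 1]])
    B = np.array([[ 2, -4,  1,  1],
                  [-3,  8, -2, -2],
                  [ 1, -3,  1,  1],
                  [-4, 10, -3, -2]])

    # Both products should be the 4 x 4 identity matrix.
    print(np.array_equal(A @ B, np.eye(4, dtype=int)))  # True
    print(np.array_equal(B @ A, np.eye(4, dtype=int)))  # True

    # The telltale sign: det(A) = +/-1, so the cofactor formula for
    # the inverse divides integers by +/-1 and yields only integers.
    print(round(np.linalg.det(A)))  # 1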
Lecture 10 (2.5, 2.6) [2/23/2011]
- Partitioned matrices and block multiplication
- Upper triangular matrix
- Lower triangular matrix
- LU decomposition
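Here is a bare-bones sketch of the elimination behind an LU
decomposition (my code, not the textbook's; it ignores row
interchanges and assumes no zero pivot turns up, which real software
does not do):

    import numpy as np

    def lu_no_pivot(A):
        # Returns L (unit lower triangular) and U (upper triangular)
        # with A = L @ U. Illustration only: real software permutes
        # rows (partial pivoting) for numerical stability.
        n = A.shape[0]
        L = np.eye(n)
        U = A.astype(float).copy()
        for k in range(n):
            for i in range(k + 1, n):
                L[i, k] = U[i, k] / U[k, k]     # the row multiplier
                U[i, k:] -= L[i, k] * U[k, k:]  # eliminate below pivot
        return L, U

    A = np.array([[2.0, 1.0],
                  [6.0, 8.0]])
    L, U = lu_no_pivot(A)
    print(np.allclose(L @ U, A))  # True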
This is not an official part of the course!
- See Strassen's original 1969 paper, entitled "Gaussian
Elimination is not Optimal". The expressions I wrote in class and did
not explain all appear in exactly the same form in his short (3-page)
paper and ... errr ... are not further explained there either. The
other aspect of his algorithm, a recursive block approach to
multiplication, also appears in this very neat paper.
The Wikipedia article on matrix multiplication has a section on the
faster methods.
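For the curious, here is one level of Strassen's recursion written
out in Python/numpy. The seven products below are the standard modern
rendering of the expressions in his paper; the variable names are
mine.

    import numpy as np

    def strassen_once(A, B):
        # One level of Strassen: split into 2 x 2 blocks and form the
        # product with seven block multiplications instead of eight.
        # Recursing on the seven products gives the O(n^2.81) bound.
        n = A.shape[0] // 2
        A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
        B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
        M1 = (A11 + A22) @ (B11 + B22)
        M2 = (A21 + A22) @ B11
        M3 = A11 @ (B12 - B22)
        M4 = A22 @ (B21 - B11)
        M5 = (A11 + A12) @ B22
        M6 = (A21 - A11) @ (B11 + B12)
        M7 = (A12 - A22) @ (B21 + B22)
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4, M1 - M2 + M3 + M6]])

    A = np.random.rand(4, 4)
    B = np.random.rand(4, 4)
    print(np.allclose(strassen_once(A, B), A @ B))  # True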
Here are relevant parts of various e-mail messages I got about matrix
multiplication. The quotations have been slightly edited.
I consulted the first expert to walk into my office *** and he said
that he believed no one used Strassen for floating point because it's
not numerically stable. Instead, doing fast matrix multiplies is all
about cache sizes. The standard O(n^3) algorithm is very cache
inefficient: every time you move to a new row, the column you're
multiplying by has been evicted, so clever people spend a lot of time
working in smaller blocks so that this doesn't happen.
So serious people use BLAS, and if you want BLAS to perform well,
you download ATLAS, a program that figures out the cache sizes of
your machine and autotunes BLAS. (Similar libraries exist for FFTs.)
If you can talk intelligently about numerical stability I think that
would be a great thing to mention. Even knowing there are potential
land mines in the area would be good. If you're doing integer
arithmetic Strassen's might be used but we don't know of an
application that uses very large integer matrices. [But note: another
message mentioned Operations Research applications which use large
integer matrices.]
I believe Coppersmith-Winograd has the best known asymptotic bound
and is completely impractical, and I think smart people believe you
can get arbitrarily close to O(n^2).
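To make the "working in smaller blocks" remark concrete, here is a
toy cache-blocked multiply in Python/numpy (my sketch; the block size
and everything else here is made up, and real BLAS kernels are far
more elaborate):

    import numpy as np

    def blocked_matmul(A, B, bs=64):
        # Same O(n^3) arithmetic as the standard algorithm, but done
        # tile by tile so each bs x bs tile of A, B, and C can stay
        # in cache while it is being reused.
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(0, n, bs):
            for j in range(0, n, bs):
                for k in range(0, n, bs):
                    C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
        return C

    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    print(np.allclose(blocked_matmul(A, B), A @ B))  # True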
- I only mentioned the Fast Fourier Transform, which is used in
very many real-world applications. As the Wikipedia article declares,
"[The discrete Fourier transform,] using the definition, takes O(N^2)
arithmetical operations, while an FFT can compute the same result in
only O(N log N) operations. The difference in speed can be
substantial, especially for long data sets where N may be in the
thousands or millions -- in practice, the computation time can be
reduced by several orders of magnitude in such cases ...".
The FFT algorithm seems to have been first published by Cooley and
Tukey in 1965, but later investigation revealed that Gauss, who was a
fantastic calculator (yes, I mean with arithmetic), used similar
methods in 1805.
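For the curious, here is a minimal radix-2 Cooley-Tukey FFT next to
the naive O(N^2) definition (my code; it assumes the length is a
power of two):

    import numpy as np

    def dft_naive(x):
        # Direct O(N^2) evaluation straight from the definition.
        N = len(x)
        n = np.arange(N)
        return np.exp(-2j * np.pi * np.outer(n, n) / N) @ x

    def fft_recursive(x):
        # Radix-2 Cooley-Tukey: an N-point DFT from two N/2-point
        # DFTs, giving O(N log N) work. N must be a power of two.
        x = np.asarray(x, dtype=complex)
        N = len(x)
        if N == 1:
            return x
        even = fft_recursive(x[0::2])
        odd = fft_recursive(x[1::2])
        twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
        return np.concatenate([even + twiddle * odd,
                               even - twiddle * odd])

    x = np.random.rand(8)
    print(np.allclose(fft_recursive(x), dft_naive(x)))   # True
    print(np.allclose(fft_recursive(x), np.fft.fft(x)))  # True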
Lecture 12 (3.1) [2/28/2011]
- Minor of a matrix
- Cofactor of a matrix
- Determinant
Lecture 13 (3.2) [3/7/2011]
I may be mistaken, but I think there were no definitions (!) in
this lecture: shocking. We discussed various methods of computing
determinants and some properties of determinants.
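One of those methods, cofactor expansion along the first row, is
short enough to write out in code (my sketch; it is hopelessly slow
beyond tiny matrices, since the work grows roughly like n!, while row
reduction gets the same number in O(n^3) steps):

    import numpy as np

    def det_cofactor(A):
        # Determinant by cofactor expansion along the first row.
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0
        for j in range(n):
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
            total += (-1) ** j * A[0, j] * det_cofactor(minor)
        return total

    A = np.array([[2, 1, 0],
                  [1, 1, 1],
                  [0, 2, 2]])
    print(det_cofactor(A))          # -2
    print(round(np.linalg.det(A)))  # -2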
Lecture 14 (4.1) [3/9/2011]
- Subspace
- Null space of a matrix (Nul)
- Column space of a matrix (Col)
Lecture 15 (4.2) [3/21/2011]
- Basis of a subspace
- Dimension of a subspace
Lecture 16 (4.3) [3/23/2011]
Lecture 17 (5.1) [3/28/2011]
- Eigenvector
- Eigenvalue
- Eigenspace
This is not an official part of the course!
- The very basics of Markov chains are discussed in the first
pages of section 5.5 of the textbook. You would get a far better idea
of the uses of Markov chains if you glanced at the Wikipedia
article. Even if you do not, the list of applications includes
discussions relevant to "Physics, Chemistry, Testing, Information
sciences, Queueing theory, Internet applications, Statistics,
Economics and finance, Social sciences, Mathematical biology, Games,
Music, Baseball, Markov text generators": lots of stuff. The
motivating idea is what we discussed in class.
The example I showed "from the web" was taken straight from http://www.sosmath.com/matrix/eigen0/eigen0.html.
The example I asked students to discuss was copied from a Maple help page on the eigenvects command.
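Here is a tiny made-up illustration of the motivating idea (my own
example, not the one from class): the long-run behavior of a Markov
chain is governed by an eigenvector of the transition matrix for the
eigenvalue 1.

    import numpy as np

    # A made-up two-state weather chain. Column j holds tomorrow's
    # probabilities given today's state j, so each column sums to 1.
    P = np.array([[0.9, 0.5],    # sunny -> sunny, rainy -> sunny
                  [0.1, 0.5]])   # sunny -> rainy, rainy -> rainy

    # The steady state is the eigenvector for eigenvalue 1, rescaled
    # so its entries sum to 1.
    vals, vecs = np.linalg.eig(P)
    v = vecs[:, np.argmin(np.abs(vals - 1))].real
    print(v / v.sum())  # approximately [0.8333 0.1667]

    # Iterating the chain from any start converges to the same thing.
    x = np.array([0.0, 1.0])  # start out rainy
    for _ in range(50):
        x = P @ x
    print(x)  # approximately [0.8333 0.1667]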
Lecture 18 (5.2) [3/30/2011]
- Characteristic polynomial
- Multiplicity of an eigenvalue
Lecture 19 (5.3) [4/4/2011]
Lecture 20 (5.3) [4/6/2011]
This is not an official part of the course!
Lecture 21 (6.1) [4/11/2011]
- {Dot|scalar|inner} product
- Unit vector
- {Orthogonal|perpendicular|normal} vectors
Lecture 22 (6.2) [4/18/2011]
This is not an official part of the course!
- Coordinates of a vector in a subspace corresponding to a basis of
that subspace
- Orthogonal basis
- Orthonormal basis
- Gram-Schmidt process or algorithm
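The Gram-Schmidt process is short enough to write out (my code; this
is the "classical" classroom version, which numerical software
replaces with a QR factorization for stability):

    import numpy as np

    def gram_schmidt(V):
        # Orthonormalizes the columns of V, which are assumed to be
        # linearly independent. Q has orthonormal columns spanning
        # the same subspace as the columns of V.
        Q = np.zeros_like(V, dtype=float)
        for j in range(V.shape[1]):
            v = V[:, j].astype(float)
            for i in range(j):
                v -= (Q[:, i] @ V[:, j]) * Q[:, i]  # remove the projection
            Q[:, j] = v / np.linalg.norm(v)         # normalize
        return Q

    V = np.array([[1.0, 1.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])
    Q = gram_schmidt(V)
    print(np.allclose(Q.T @ Q, np.eye(2)))  # True: orthonormal columns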
Lecture 23 (6.3) [4/20/2011]
- Orthogonal complement
- Orthogonal projection
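A small numerical sketch of orthogonal projection onto a column space
(my example; it uses the normal equations A^T A x = A^T b):

    import numpy as np

    # Project b onto Col(A). For A with linearly independent columns,
    # the projection is p = A x_hat where A^T A x_hat = A^T b.
    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([6.0, 0.0, 0.0])

    x_hat = np.linalg.solve(A.T @ A, A.T @ b)
    p = A @ x_hat   # the orthogonal projection of b onto Col(A)
    e = b - p       # the part of b in the orthogonal complement

    print(p)        # [ 5.  2. -1.]
    print(A.T @ e)  # approximately [0. 0.]: e is orthogonal to Col(A)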
Lecture 25 [4/27/2011]
This is not an official part of the course!
I discussed a symmetric "tridiagonal" matrix which resembles matrices
used in spline approximation. Splines are discussed fairly accessibly
in a paper by Sky McKinley and Megan Levine of the College of the
Redwoods.
I also remark that a recent article in the American Mathematical
Monthly ("Francis's Algorithm" by David Watkins, May 2011, available
online) describes a modern numerical technique for computing
eigenvalues and eigenvectors. The method is very different from what
we've done in class "by hand" and uses ideas related to the QR
decomposition.