Math 373 Fall 2000


General Information

There is only one section (01) of 373 this semester. It is taught by Prof. Bumby and meets MW8 (7:40 - 9:00 PM) in SC-205.  Information about that section appears on this page.  For general information about the course, see the Math 373 page.

A handout distributed at the first lecture contained a list of three supplementary problems, one for each of the sections 2.1, 2.2, 2.3. Some parts of these problems will be used as examples in lecture; other parts as homework.

There is also a second handout, entitled Pi and the AGM and distributed on September 18. It contains problems S4 and S5 dealing with the algorithms described in the handout and problem S6 that supplements Section 2.5.

Taylor series appear in our 152 course, and copies of slides that I used in lectures in that course during the Fall 1999 semester are available. The lectures can be obtained from the lecture section home page for that course. Series were in Chapter 10, with Taylor series featured mainly in Lecture 12.

The promised notes proving quadratic convergence of Steffensen's acceleration are available for those who want them. Because of the delay in producing the notes, they will only be available from this page. No paper copies have been prepared for distribution in class.

A supplement on interpolation was prepared for the class of October 11. It includes Supplementary problem 7 illustrating the loss and regaining of accuracy when interpolation formulas are obtained by divided differences.

Notes entitled An introduction to numerical integration were prepared for the class of November 1.

There are also notes on Euler's method that were prepared for Math 252 in the Spring term, 2000. One feature of those notes is a simple derivation of a slightly weaker form of the error estimate. The simplicity of derivation seems more valuable than a slight sharpening of the bound through a formula that is much harder to interpret.
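For reference, Euler's method itself is only a few lines. Here is a minimal Python sketch (no language was mandated in the course, and the test equation y' = y is a standard illustration, not one taken from the notes):

```python
def euler(f, t0, y0, h, n):
    """Advance y' = f(t, y) by n steps of size h:
    y_{k+1} = y_k + h * f(t_k, y_k)."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1 has exact solution e at t = 1; the global error of
# Euler's method is of order h, so h = 0.001 gives roughly three
# correct decimal places.
import math
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

Halving h should roughly halve the error, which is the first-order behavior the error estimate in the notes describes.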

A summary of the course (with exercises) has been prepared for distribution on December 11.

More information about homework can be found in the details section of this page.


Richard L. Burden & J. Douglas Faires; Numerical Analysis (sixth edition); Brooks/Cole, 1997 (776 pp.); ISBN 0-534-95532-0

The course will cover (almost all of) Chapters 1 through 5, as described below.


This list will grow during the semester.
BISECTION: A ROOTFINDING method that starts with an interval on which a function changes sign and repeatedly cuts the interval in half, selecting the half on which the function changes sign. After n steps the size of the interval has been multiplied by 2^(-n). Since this estimate on the root is guaranteed, this rate of convergence is the minimal acceptable performance of a numerical method. It corresponds to LINEAR CONVERGENCE for an ITERATIVE method. This process gives a proof of the intermediate value theorem. The assumption of continuity on the theoretical side simply asserts that the function doesn't change much on a small enough interval, which is an abstract way of expressing that the function can be computed.
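The process above can be sketched in a few lines of Python (the function and interval are illustrative, not from an assignment):

```python
def bisect(f, a, b, tol=1e-10):
    """Halve [a, b] repeatedly, keeping the half where f changes sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must change sign on [a, b]")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm   # sign change in the left half
        else:
            a, fa = m, fm   # sign change in the right half
    return (a + b) / 2

# After n steps the bracket has length (b - a) * 2**(-n).
root = bisect(lambda x: x * x - 2, 0.0, 2.0)   # approximates sqrt(2)
```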
BLUNDER: If you say that six times nine is forty-two, that is not an error (as we use the word in this course); it is a blunder. The analysis of methods only works if every calculation does what it claims to do within known limitations. Hand calculations should be checked and programs tested with data for which the results can be predicted before answers can be trusted. (See also ERROR)
DECIMAL PLACES: When we say that pi is 3.1416 to four decimal places, we mean that we have identified the closest rational number with denominator 10000 to pi. When more places are computed, the 6 at the end may (and, in fact, does) change to 5 followed by a large digit (e.g., 9 in this case). More loosely, when we describe a computation as being accurate to n decimal places, we mean that the true result lies in an interval of diameter 10^(-n) centered at the computed value (or, equivalently, the computed value lies in an interval of diameter 10^(-n) centered at the true result). To emphasize the level of accuracy, all n digits are written, even if the last digit is a zero. (See also SIGNIFICANT FIGURES)
ERROR: This is the bread and butter of Numerical Analysis. Every time an exact formula is replaced by an approximate one, or a real number replaced by something that can be represented on your computational device, there is an error. The subject seeks to find provable upper bounds for such differences and to find methods for which these bounds are small. (See also BLUNDER)
INTERPOLATION: A process for using values of a function at a few points to derive a formula for computing the function at an arbitrary point of an interval. Typically, this uses the given data to find a polynomial taking the same values, since polynomials are easily evaluated at any given point.
ITERATION: Literally, this means repeating a computation. Typically, it is used to find a fixed point of a function g, i.e., a solution of g(x)=x. Rootfinding and location of fixed points are equivalent problems, although the roots of one function are fixed points of a different function. If the function g has a derivative of absolute value less than 1 (and bounded away from 1) on an interval, then it is contracting on that interval. If g also maps the interval into itself, there will be a unique fixed point on the interval and every sequence of iterates starting in the interval will converge to it.
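A minimal Python sketch of fixed-point iteration (g(x) = cos x is a standard contracting example on [0, 1], not one from the course):

```python
def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x -> g(x) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# cos is contracting near its fixed point (|sin x| < 1 there) and maps
# [0, 1] into itself, so iteration from any start in [0, 1] converges.
import math
p = fixed_point(math.cos, 1.0)   # fixed point of cos, about 0.739085
```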
LINEAR CONVERGENCE: Usually applied to an ITERATION. It signifies that the distance from the current approximation to the goal shrinks by a factor less than 1 at each step, in the sense that one can prove that this distance after a step is at most k times its value before, for some k strictly less than 1. A consequence of this is that a fixed amount of work is needed to gain each additional decimal place of accuracy. Compare to QUADRATIC CONVERGENCE.
NEWTON'S METHOD: An iterative process that finds a root of a function f, described geometrically as following the tangent at a point of the graph until it meets the axis. This leads to the simple formula x(n+1) = x(n) - f(x(n))/f'(x(n)). If the graph is not tangent to the axis at the root, ITERATION of this formula exhibits QUADRATIC CONVERGENCE. It also provides an automatic way of creating a fixed point problem from a ROOTFINDING problem.
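As a Python sketch (the function and starting value are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Follow the tangent at each point to where it meets the axis:
    x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of x**2 - 2 starting from x0 = 1: quadratic convergence gives
# full double precision in a handful of steps.
r = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```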
QUADRATIC CONVERGENCE: Usually applied to an ITERATION. It signifies that the distance from the current approximation to the goal after a step is less than a fixed multiple of the square of the distance before the step. Once the distance is small enough, each additional step gets you much closer to the goal. With a suitable scale of measurement, the number of correct digits doubles with each step. Compare to LINEAR CONVERGENCE. NEWTON'S METHOD is a familiar example. We have also met STEFFENSEN ACCELERATION as a method to produce quadratic convergence from a linearly convergent process.
ROOTFINDING: Any process that uses the ability to compute a function f to solve an equation f(x)=0. Methods described in this course are BISECTION and ITERATION.
ROUNDOFF: The error that results from representing real numbers in finite space. It typically arises when the decimal or binary representation of a number is limited to a fixed accuracy to be stored in a register of a computer or calculator. It may also arise when a book has chosen to print only a few decimal places of a tabulated function. Errors caused by the inability to store intermediate results in a computation or loss of precision in computer arithmetic are often included under this term. The effect of this error is to add a small quantity to values represented by variables in formulas. Contrast with TRUNCATION error.
SIGNIFICANT FIGURES: A number x is said to be known to n significant figures if x is between 10^(k-1) and 10^k and you have an approximation to within (1/2)·10^(k-n). (See also DECIMAL PLACES)
STEFFENSEN ACCELERATION: A process that turns a LINEARLY CONVERGENT ITERATION into a process with QUADRATIC CONVERGENCE. When you have three consecutive values from a process having linear convergence, it is possible to compute an estimate of the limit of the process. If this estimate is used as the starting point for finding three more values and a new estimate of the limit, then the sequence of estimates will be quadratically convergent.
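The three-values-then-restart recipe can be sketched in Python; the limit estimate below is Aitken's delta-squared formula, and g(x) = cos x is an illustrative linearly convergent iteration, not one from the course:

```python
def steffensen(g, x0, tol=1e-12, max_iter=50):
    """From three consecutive iterates x, x1 = g(x), x2 = g(x1),
    estimate the limit by Aitken's delta-squared formula, then
    restart the iteration from that estimate."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2 * x1 + x
        if denom == 0:           # iterates already (numerically) converged
            return x2
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

import math
p = steffensen(math.cos, 1.0)    # fixed point of cos, quadratically
```

Plain iteration of cos needs dozens of steps for this accuracy; the accelerated process needs only a few restarts.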
TRUNCATION: The type of error that results from cutting off (that is what truncation means) a series expansion after a limited number of terms. This is the error term that is part of the analysis of methods that are based on series or interpolation formulas. This analysis assumes that the remaining calculations can be performed exactly, so when such formulas are used in practice, it is necessary to add another component of the error due to ROUNDOFF of data or intermediate results.


A record of the topics covered in class and homework assigned will grow in this spot. The Examples column refers to textbook exercises used in class. They are recorded here just because it is easy to do so, and the information may be of some use. In some cases, the discussion may be completed in a later class or with a description linked to this page. The Problems column identifies those textbook exercises which should be done as homework. Neatly written solutions of these problems should be submitted two class meetings after the class in which the textbook section was discussed. The submitted solution should be a report describing the method used to solve the problem and the main features of the calculation. A short program may be included, but a transcript of an interactive computation should be edited so that only a direct path to the answer remains. In either the Examples or Problems column, a number like S1 refers to a problem on a supplement distributed in class. Some of the Examples or Problems entries contain links to additional material, including tables or graphs that aim to clarify the intent of the exercise.
Lecture Homework
Date Section Examples Due Page Problems
Sept. 6 2.1 4 (a,d) Sept. 13 53 4 (b,c), 6; S1 (all)
Sept. 11 2.2 1, 2, S2(a, b), 16 Sept. 18 63 S2 (c, d, e), 11a
Sept. 13 2.3 S3a, 5a Sept. 20 75 S3(b,c), 5b
Sept. 18 2.4 1d, 2d, 3 Sept. 25 86 S4, S5
Sept. 20 2.5 7 Sept. 27 90 S6, 10a
Sept. 25 1.1 8 Oct. 02 13 9a,b
Sept. 27 3.1 1 Oct. 04 119 15b, 16
Oct. 02 3.2 4a Oct. 09 132 4b
Oct. 04 3.3 3 Oct. 11 141 1(a,b), 2(a,b)
Oct. 09 3.4 3a, 3c Oct. 16 154 11
Oct. 11 Supplement S7 (degree 1) Oct. 18   S7 (higher degree)
Oct. 25 4.3 1 - 6 (a,b) Nov. 01 197 1 - 6 (e,f)
Oct. 30 General discussion
Nov. 01 4.4 and Supplement 8, 11 Nov. 08 205 7, S8
Nov. 06 & 08 4.1 1a, 2a, 3d, 4d Nov. 15 178 3b, 4b
Nov. 08 & 13 4.5 1f Nov. 20 213 1b
Nov. 15 5.2 1b, 2b Nov. 27 267 3a, 4a
Nov. 20 5.3 1b, 2b, 5 Nov. 29 274 3d
Nov. 27 & 29 5.4 1b, 10b Dec. 06 284 4a, 11a
Dec. 4 & 6 5.6 4c Dec. 13 304 5a
The midterm exam is scheduled for Monday, October 23. It will cover chapters 1 through 3. If we run out of review topics in the week before the exam, there will be an introduction to numerical integration as an application of interpolation. For the benefit of those who are not able to make my regular office hours, there will be special office hours in Hill 438 from 5 to 7 PM on Tuesday, October 17. Also, on Friday, October 20, I will not be able to be on Busch Campus for my regular office hours in the early afternoon, but should be available from 4 to 7PM.

We will begin chapter 4 with sections 4.3 and 4.4 to give an immediate application of the results on interpolation from chapter 3. This will show how this subject applies to formulas that you may have seen in other courses. Then, we discuss numerical differentiation from the beginning of the chapter before returning to the question of efficiently calculating integrals. Note that "integral" means "definite integral" since we need a problem with a numerical answer in order to use numerical methods. The numerical results can be used to produce a graph of an indefinite integral, but will not be useful for the techniques of integration that caused such grief in Calc II.

In the lecture of November 13, adaptive quadrature was introduced, along with the completion of the discussion of the Romberg integration method. Adaptive quadrature requires some programming to show what the method can do, and it may well be the first topic in the course that exceeds the ability of a pocket calculator. Since there is no fixed programming requirement for this course, no regular assignment will be given from this section. However, those who are comfortable with programming are encouraged to implement an adaptive method and test it. Work done on this project can be used to partially offset low grades on other graded work in the course.
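One common adaptive scheme is adaptive Simpson's rule; the textbook's version may differ in details, so treat this Python sketch as an illustration rather than a template for the project:

```python
def adaptive_simpson(f, a, b, tol=1e-8):
    """Compare Simpson's rule on [a, b] with its two-half refinement;
    recurse only on subintervals where the two disagree."""
    def simpson(a, fa, b, fb, fm):
        return (b - a) / 6 * (fa + 4 * fm + fb)

    def recurse(a, fa, b, fb, fm, whole, tol, depth):
        m = (a + b) / 2
        lm, rm = (a + m) / 2, (m + b) / 2
        flm, frm = f(lm), f(rm)
        left = simpson(a, fa, m, fm, flm)
        right = simpson(m, fm, b, fb, frm)
        # For a fourth-order rule, the refinement is about 15 times
        # more accurate, so the difference serves as an error estimate.
        if depth > 50 or abs(left + right - whole) < 15 * tol:
            return left + right + (left + right - whole) / 15
        return (recurse(a, fa, m, fm, flm, left, tol / 2, depth + 1)
                + recurse(m, fm, b, fb, frm, right, tol / 2, depth + 1))

    fa, fb, fm = f(a), f(b), f((a + b) / 2)
    return recurse(a, fa, b, fb, fm, simpson(a, fa, b, fb, fm), tol, 0)

import math
val = adaptive_simpson(math.sin, 0.0, math.pi)   # exact value is 2
```

A full project program would also report the recursion depth and the number of intervals used, as described below.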

Note that Adaptive Quadrature is the only topic for this project, since adaptive methods lead to interesting program challenges.

The project should be finished by the end of classes, and the program should be able to get values of integrals of irregularly oscillating functions. In addition to evaluating the integral, there should be a report about how the work was done. My program reported the depth of the subdivision and the total number of intervals considered. Program output may be supplemented by a report on the testing of the program. The program need not be production grade, but it should give evidence that it was tested for correctness.

The course moves into chapter 5 on November 15, although we may return to Gaussian quadrature at the end of the course.

The last week of class will be devoted to a general review while previously assigned homework is completed. The role of roundoff in numerical work only became significant in the second half of the course, but the review will give a chance to show how it affects rootfinding and interpolation problems as well.

Final Exam and Course Grades

Many of the goals of this course did not lend themselves to an in-class exam, so the homework component of the course contributed 5 points per assignment for a total of 125, while the midterm and final exams were taken at their graded values of 100 and 200 points, respectively. This gave a ranking of the class, but the grades do not necessarily measure the proportion of the course mastered. There were some clear clusters of grades that formed the basis of the letter grades for the course. The course grades are those that I prepared on December 22. On reviewing the grades, I found some adjustments that will change one C+ to a B and one B to a B+. Forms requesting those changes have been submitted.

The median grade on the final was 118. While the exam questions were somewhat open-ended, the grades emphasized the identification of an appropriate method of solution and using it to get a numerical answer. Some things that should have been noticed are mentioned below the tables of grades.

Those that can count will notice that there is a difference between the total counts in the two tables. Everyone taking the exam is included in the first table, but the one who did not have other grades is excluded from the second. Also note that these tables report clusters of grades. In particular, there are fairly large gaps between the lowest score for a particular letter grade and the highest score for the next lower grade.

Final Exam (score range: number of students)
168 - 182: 4
156 - 163: 4
144 - 151: 4
134: 1
108 - 123: 5
78 - 100: 5
48 - 73: 6
Course Grades (letter: total-score range, number of students)
A: 352 - 376, 3
B+: 316 - 341, 7
B: 292 - 309, 3
C+: 240 - 262, 3
C: 171 - 222, 7
D/F/TF: 86 - 143, 5

To aid in interpreting the following comments, you can view the final exam. This copy of the exam includes links to full solutions of the problems. The same material has also been prepared in a single file with colored headings. This is a test of the dvipdfm program. Comments will be appreciated.

Problem 1. Some calculators will give about the requested accuracy from the graphics screen. If you can see a point where two graphs meet, the intersection can be computed. The description in the manual suggests that bisection is used. Stepping through more than 30 bisection steps by hand is awkward, but a program can do it quickly. The question was phrased in terms of an iteration, and several hundred iterations from any starting value will lead to the fixed point. Steffensen's method accelerates the convergence enough that the required accuracy can be obtained with no more than a dozen evaluations of the function. The fixed point question can also be rephrased as finding a root of the difference of the two expressions (as the calculator did when it applied bisection), and Newton's method used on this expression. One student found it "awesome" that three iterations of Newton's method did as well as 600 simple iterations. You were also asked whether you could prove that iteration would converge on some interval. There was only one satisfactory response to this. The answer is that there is a very small interval on which the theorem applies. You need the absolute value of the derivative to be less than 1, and you can find an exact description of the interval where this holds; but you also need an interval that is taken into itself, and this only happens if the first iteration does not give a value larger than the largest value allowed by the other condition. If you start inside this interval, you can prove that about 100 iterations will suffice to get the requested accuracy. A slight adjustment of the coefficients would lead to a problem where iteration would not converge, although the first few hundred steps would look just like the given iteration; you might even get good accuracy before settling into a cycle that alternates between two numbers separated by a small amount.
All alternative methods (bisection, Steffensen, Newton) would continue to give accurate answers at the same rate seen for the given problem.

Problem 2. First, it was necessary to use the data in the form in which it was given. Since x is a polynomial function of y, there is no difficulty interpolating it exactly by a polynomial. The inverse function is a different story. With the given restriction on the interval of definition, there is no doubt that a function has been defined, but there is no useful formula for that function. Tabulating that function at equally spaced points would be a chore, but that is not necessary. The given values were easily obtained, and you would have no difficulty getting more exact pairs of values if you needed them. What we have here is a suggestion that interpolation can be used as a rootfinding method. With a little more work, you could use further implicit differentiation to estimate the error term in the interpolation formula, but a different way to estimate the error is proposed. First, you need to get a value to test. The nature of the error term for interpolation suggests that it is always best to use a few values close to the value of x at which the function is to be evaluated. This means that you should ignore most of the table. In the days before calculators, you evaluated standard functions to 5 decimal places by using linear interpolation on a table. You simply found the nearest values to the number you were given and used one or another version of the linear interpolation formula to estimate the value of the function at the value you had. With a little practice, this could be done without any written scratch work because the calculation was easy. A table could extend for several pages, and the page containing values close to the one you had was always considered more relevant for getting an answer than the first page of the table. For the given values of this function, calculus assures us that the difference of x values is a slight magnification of the difference of the y values.
Computing (exactly, because it only involves evaluating a polynomial) the x corresponding to your approximate y and comparing it to the given value gives a reasonable estimate of the error. Knowing the order of the interpolation error allows you to refine this approach to compute the function to any desired degree of accuracy.

Problem 3. The last term in the displayed equation was, as usual, the error term when the expression on the left is approximated by the first three terms on the right side of the equation. For all x less than 0.12, this shows that the truncation error is as small as required. In particular, this means that the truncation error can be ignored in using this formula to compute the function at the given values to 10 decimal places. Also, the roundoff in computing a polynomial with only three terms will be entirely due to the register size of your calculator. Since the terms are of vastly different sizes, the worst that will happen is that later terms have no effect when added to earlier ones. For very small x, this means that you will only get the limiting value of 0.5. Unfortunately, the given selection of values only illustrated effects that were easy to miss. You should try smaller values. You may never trust your calculator again! Direct evaluation of the expression on the left involves an approximate computation of the cosine and storing the result. However accurate this computation was, when it is subtracted from 1, there will be a significant roundoff error, and you will never regain the accuracy. Any difference between the first 10 decimal places of the two expressions in the given region is entirely due to this roundoff.

Problem 4. Using the numerical differentiation formulas is easy, but you were supposed to interpret the answers. In particular, you were told to assume that the five-point formula was much more accurate than the three-point formula. What this was intended to mean is that the difference between the two formulas for the derivative at 0.5 could be taken as a good guess of the true error in the less accurate three-point formula. Since the truncation error is second order, dividing the step size by 10 should divide the truncation error by 100. It's as simple as that! Roundoff error behaves differently. The simple combinations of function values in the numerator of the differentiation formula will have the same 6 decimal place absolute accuracy as the given data, but dividing by the step size will reduce the accuracy by one decimal place in the given table and two decimal places in the hypothetical refined table. The competition between these two errors to spoil your day is the main theme of the study of numerical differentiation.
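A Python sketch of the comparison, using the standard centered three- and five-point formulas; the smooth test function and point stand in for the exam's table, which is not reproduced here:

```python
import math

def three_point(f, x, h):
    """Centered three-point formula, truncation error O(h**2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def five_point(f, x, h):
    """Centered five-point formula, truncation error O(h**4)."""
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

# Since the five-point value is far more accurate, the difference
# between the two serves as an estimate of the three-point error.
d3 = three_point(math.sin, 0.5, 0.1)
d5 = five_point(math.sin, 0.5, 0.1)
est_error = d5 - d3
true_error = math.cos(0.5) - d3   # exact derivative minus approximation
```

With six-place data the numerator would also carry roundoff of a few units in the sixth place, magnified by the division by h, which is the competing error described above.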

Problem 5. It is fairly easy to differentiate the integrand in this integral and put it in the error estimate for any of the composite rules discussed in the text. Once you know how many terms you need, it is easy to make an excuse for not completing the computation using one of those rules. However, if you have practiced using the integration routines in your calculator, you may find that there is no difficulty letting the firmware do the walking through 128 or 256 function evaluations. The Romberg method was suggested as an alternative because it is self-estimating. All you need to know is that the error estimates behave reasonably, which is easy to see in this case, and then answers that agree to 8 decimal places are both correct to that accuracy. Unfortunately, many attempts to use the Romberg method ignored the extrapolation step and simply calculated a few trapezoidal rules with unimpressive results. Four rows of a complete Romberg table should be adequate. This means only 16 function values and a handful of simple extrapolation steps.
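The full Romberg table, including the extrapolation step that many attempts skipped, can be sketched in Python (the integrand here is a stand-in for the exam's):

```python
def romberg(f, a, b, rows=4):
    """Romberg table: trapezoidal rules on 1, 2, 4, ... panels in the
    first column, Richardson extrapolation filling in the rest."""
    R = [[0.0] * rows for _ in range(rows)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for i in range(1, rows):
        h /= 2
        # Trapezoidal rule with the step halved, reusing old values:
        # only the new midpoints need fresh function evaluations.
        R[i][0] = R[i-1][0] / 2 + h * sum(
            f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        for j in range(1, i + 1):
            R[i][j] = R[i][j-1] + (R[i][j-1] - R[i-1][j-1]) / (4 ** j - 1)
    return R[rows-1][rows-1]

import math
val = romberg(math.sin, 0.0, math.pi, rows=5)   # exact value is 2
```

Each extrapolation column removes another power of h^2 from the error, which is why a small table can match hundreds of plain trapezoidal evaluations.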

Problem 6. As in problem 4, the intent here was to have a comparison between a second order method and a fourth order method on the same interval. The exact solution could also be used to confirm that the fourth order method was much more accurate. The order of the method tells you the power of the step size appearing in the error term, so multiplying the step size by a factor multiplies the error by a known power of that factor. If you know what size the error needs to be, you can find the factor you need. In addition, once you select a particular method, you know the effect of halving the step size, so you know how small the difference between those values must be for the more accurate one to meet your requirements.
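The power-of-the-step-size behavior is easy to check numerically. The Python sketch below uses the classical fourth-order Runge-Kutta method on y' = y; the exam's particular methods and equation are not reproduced here, so this is an illustration of the principle only:

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y, y(0) = 1 has exact value e at t = 1. For a fourth-order
# method, halving h should divide the global error by about 2**4 = 16.
err_h = abs(solve(lambda t, y: y, 0.0, 1.0, 0.1, 10) - math.e)
err_h2 = abs(solve(lambda t, y: y, 0.0, 1.0, 0.05, 20) - math.e)
ratio = err_h / err_h2   # close to 16
```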

The textbook exercises often emphasized working with the formulas at a level that could be seen not to give useful answers. This did not seem to be in the spirit of a mathematics course, so supplementary problems were also provided. A better way to enforce a deeper study of the topics in the course needs to be developed. I thank all who participated in class. Your efforts will help improve the future of this course.

Comments on this page should be sent to: R.T. Bumby
Last updated: January 03, 2001