There is only one section (01) of 373 this semester. It is taught by Prof. Bumby and meets MW8 (7:40-9:00 PM) in SC205. Information about that section appears on this page. For general information about the course, see the Math 373 page.
A handout distributed at the first lecture contained a list of three supplementary problems, one for each of the sections 2.1, 2.2, 2.3. Some parts of these problems will be used as examples in lecture; other parts as homework.
There is also a second handout, entitled Pi and the AGM and distributed on September 18. It contains problems S4 and S5 dealing with the algorithms described in the handout and problem S6 that supplements Section 2.5.
Taylor series appear in our 152 course, and copies of slides that I used in lectures in that course during the Fall 1999 semester are available. The lectures can be obtained from the lecture section home page of that course. Series were in chapter 10, with Taylor series featured mainly in lecture 12.
The promised notes proving quadratic convergence of Steffensen's acceleration are available for those who want them. Because of the delay in producing the notes, they will only be available from this page. No paper copies have been prepared for distribution in class.
A supplement on interpolation was prepared for the class of October 11. It includes Supplementary problem 7 illustrating the loss and regaining of accuracy when interpolation formulas are obtained by divided differences.
Notes entitled An introduction to numerical integration were prepared for the class of November 1.
There are also notes on Euler's method that were prepared for Math 252 in the Spring term, 2000. One feature of those notes is a simple derivation of a slightly weaker form of the error estimate. The simplicity of the derivation seems more valuable than a slight sharpening of the bound through a formula that is much harder to interpret.
A summary of the course (with exercises) has been prepared for distribution on December 11.
More information about homework can be found in the details section of this page.
The course will cover (almost all of) Chapters 1 through 5, as described below.
Date | Section | Lecture examples | Homework due | Page | Problems
Sept. 6 | 2.1 | 4 (a,d) | Sept. 13 | 53 | 4 (b,c), 6; S1 (all)
Sept. 11 | 2.2 | 1, 2, S2 (a,b), 16 | Sept. 18 | 63 | S2 (c,d,e), 11a
Sept. 13 | 2.3 | S3a, 5a | Sept. 20 | 75 | S3 (b,c), 5b
Sept. 18 | 2.4 | 1d, 2d, 3 | Sept. 25 | 86 | S4, S5
Sept. 20 | 2.5 | 7 | Sept. 27 | 90 | S6, 10a
Sept. 25 | 1.1 | 8 | Oct. 02 | 13 | 9a,b
Sept. 27 | 3.1 | 1 | Oct. 04 | 119 | 15b, 16
Oct. 02 | 3.2 | 4a | Oct. 09 | 132 | 4b
Oct. 04 | 3.3 | 3 | Oct. 11 | 141 | 1 (a,b), 2 (a,b)
Oct. 09 | 3.4 | 3a, 3c | Oct. 16 | 154 | 11
Oct. 11 | Supplement | S7 (degree 1) | Oct. 18 | - | S7 (higher degree)
Oct. 25 | 4.3 | 1-6 (a,b) | Nov. 01 | 197 | 1-6 (e,f)
Oct. 30 | General discussion | | | |
Nov. 01 | 4.4 and Supplement | 8, 11 | Nov. 08 | 205 | 7, S8
Nov. 06 & 08 | 4.1 | 1a, 2a, 3d, 4d | Nov. 15 | 178 | 3b, 4b
Nov. 08 & 13 | 4.5 | 1f | Nov. 20 | 213 | 1b
Nov. 15 | 5.2 | 1b, 2b | Nov. 27 | 267 | 3a, 4a
Nov. 20 | 5.3 | 1b, 2b, 5 | Nov. 29 | 274 | 3d
Nov. 27 & 29 | 5.4 | 1b, 10b | Dec. 06 | 284 | 4a, 11a
Dec. 4 & 6 | 5.6 | 4c | Dec. 13 | 304 | 5a
We will begin chapter 4 with sections 4.3 and 4.4 to give an immediate application of the results on interpolation from chapter 3. This will show how this subject applies to formulas that you may have seen in other courses. Then, we discuss numerical differentiation from the beginning of the chapter before returning to the question of efficiently calculating integrals. Note that "integral" means "definite integral" since we need a problem with a numerical answer in order to use numerical methods. The numerical results can be used to produce a graph of an indefinite integral, but will not be useful for the techniques of integration that caused such grief in Calc II.
In the lecture of November 13, adaptive quadrature was introduced and the discussion of the Romberg integration method was completed. Adaptive quadrature requires some programming to show what the method can do, and it may well be the first topic in the course that exceeds the abilities of a pocket calculator. Since there is no fixed programming requirement for this course, no regular assignment will be given from this section. However, those who are comfortable with programming are encouraged to implement an adaptive method and test it. Work done on this project can be used to partially offset low grades on other graded work in the course.
Note that adaptive quadrature is the only topic for this project, since adaptive methods lead to interesting programming challenges.
The project should be finished by the end of classes, and the program should be able to compute values of integrals of irregularly oscillating functions. In addition to evaluating the integral, there should be a report about how the work was done. My program reported the depth of the subdivision and the total number of intervals considered. Program output may be supplemented by a report on the testing of the program. The program need not be production grade, but it should give evidence that it was tested for correctness.
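As a starting point, here is a minimal sketch of one adaptive method, recursive adaptive Simpson quadrature, which also records the recursion depth and the number of accepted subintervals of the kind mentioned above. The function names, the test integrand, and the standard factor of 15 in the acceptance test are illustrative choices, not the program actually used in class.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8, depth=0, stats=None):
    """Recursive adaptive Simpson's rule: split an interval whenever the
    two-panel estimate disagrees with the one-panel estimate by more than
    the scaled tolerance.  `stats` records max depth and interval count."""
    if stats is None:
        stats = {"max_depth": 0, "intervals": 0}
    c = (a + b) / 2
    fa, fb, fc = f(a), f(b), f(c)
    whole = (b - a) / 6 * (fa + 4 * fc + fb)        # Simpson on [a, b]
    d, e = (a + c) / 2, (c + b) / 2
    left = (c - a) / 6 * (fa + 4 * f(d) + fc)       # Simpson on [a, c]
    right = (b - c) / 6 * (fc + 4 * f(e) + fb)      # Simpson on [c, b]
    stats["max_depth"] = max(stats["max_depth"], depth)
    if abs(left + right - whole) < 15 * tol:        # standard acceptance test
        stats["intervals"] += 2
        return left + right + (left + right - whole) / 15, stats
    lv, _ = adaptive_simpson(f, a, c, tol / 2, depth + 1, stats)
    rv, _ = adaptive_simpson(f, c, b, tol / 2, depth + 1, stats)
    return lv + rv, stats

value, stats = adaptive_simpson(math.sin, 0, math.pi)
```

A real test of such a program should use an irregularly oscillating integrand, where the depth report shows the method concentrating its effort where the function misbehaves.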
The course moves into chapter 5 on November 15, although we may return to Gaussian quadrature at the end of the course.
The last week of class will be devoted to a general review while previously assigned homework is completed. The role of roundoff in numerical work only became significant in the second half of the course, but the review will give a chance to show how it affects rootfinding and interpolation problems as well.
Many of the goals of this course did not lend themselves to an in-class exam, so the homework component of the course contributed 5 points per assignment for a total of 125, while the midterm and final exams were taken at their graded values of 100 and 200 points, respectively. This gave a ranking of the class, but the grades do not necessarily measure the proportion of the course mastered. There were some clear clusters of grades that formed the basis of the letter grades for the course. The course grades are those that I prepared on December 22. On reviewing the grades, I found some adjustments that will change one C+ to a B and one B to a B+. Forms requesting those changes have been submitted.
The median grade on the final was 118. While the exam questions were somewhat open-ended, the grades emphasized the identification of an appropriate method of solution and using it to get a numerical answer. Some things that should have been noticed are mentioned below the tables of grades.
Those who can count will notice that there is a difference between the total counts in the two tables. Everyone taking the exam is included in the first table, but the one student who did not have other grades is excluded from the second. Also note that these tables report clusters of grades. In particular, there are fairly large gaps between the lowest score for a particular letter grade and the highest score for the next lower grade.


To aid in interpreting the following comments, you can view the final exam. This copy of the exam includes links to full solutions of the problems. The same material has also been prepared in a single file with colored headings. This is a test of the dvipdfm program. Comments will be appreciated.
Problem 1. Some calculators will give about the requested accuracy from the graphics screen: if you can see a point where two graphs meet, the intersection can be computed. The description in the manual suggests that bisection is used. Stepping through more than 30 bisection steps by hand is awkward, but a program can do it quickly. The question was phrased in terms of an iteration, and several hundred iterations from any starting value will lead to the fixed point. Steffensen's method accelerates the convergence enough that the required accuracy can be obtained with no more than a dozen evaluations of the function. The fixed-point question can also be rephrased as finding a root of the difference of the two expressions (as the calculator did when it applied bisection), and Newton's method applied to that expression. One student found it "awesome" that three iterations of Newton's method did as well as 600 simple iterations.

You were also asked whether you could prove that the iteration would converge on some interval, and there was only one satisfactory response. The answer is that there is a very small interval on which the theorem applies. You need the absolute value of the derivative to be less than 1, and you can find an exact description of the interval where this holds; but you also need an interval that is mapped into itself, and this only happens if the first iteration does not give a value larger than the largest value allowed by the other condition. If you start inside this interval, you can prove that about 100 iterations will suffice to get the requested accuracy. A slight adjustment of the coefficients would lead to an iteration that does not converge, although the first few hundred steps would look just like the given iteration; you might get good accuracy before the iterates settle into a cycle that alternates between two numbers separated by a small amount.
All alternative methods (bisection, Steffensen, Newton) would continue to give accurate answers at the same rate seen for the given problem.
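The exam's iteration function is not reproduced here, so the following sketch uses g(x) = cos(x) as a hypothetical stand-in. It shows the mechanics of Steffensen's acceleration, which applies the Aitken delta-squared formula to three successive fixed-point iterates and reaches the fixed point in a handful of steps where simple iteration needs many.

```python
import math

def steffensen(g, x0, tol=1e-12, max_iter=50):
    """Steffensen acceleration of the fixed-point iteration x -> g(x),
    using the Aitken delta-squared formula on three successive iterates."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2 * x1 + x
        if denom == 0:              # already (numerically) at the fixed point
            return x2
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = cos(x) is a stand-in for the exam's iteration function.
root = steffensen(math.cos, 0.5)
```

Each pass costs two evaluations of g, so the quadratic convergence claimed in the notes shows up as full accuracy after roughly a dozen function evaluations.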
Problem 2. First, it was necessary to use the data in the form in which it was given. Since x is a polynomial function of y, there is no difficulty interpolating it exactly by a polynomial. The inverse function is a different story. With the given restriction on the interval of definition, there is no doubt that a function has been defined, but there is no useful formula for it. Tabulating that function at equally spaced points would be a chore, but that is not necessary: the given values were easily obtained, and you would have no difficulty getting more exact pairs of values if you needed them. What we have here is a suggestion that interpolation can be used as a root-finding method. With a little more work, you could use further implicit differentiation to estimate the error term in the interpolation formula, but a different way to estimate the error is proposed. First, you need a value to test. The nature of the error term for interpolation suggests that it is always best to use a few values close to the value of x at which the function is to be evaluated; this means that you should ignore most of the table. In the days before calculators, standard functions were evaluated to 5 decimal places by linear interpolation on a table: you found the nearest tabulated values and used one or another version of the linear interpolation formula to estimate the value of the function at the point you had. With a little practice, this could be done without any written scratch work because the calculation was easy. A table could extend for several pages, and the page containing values close to the one you had was always considered more relevant for getting an answer than the first page of the table. For the given values of this function, calculus assures us that the difference of x values is a slight magnification of the difference of the y values.
Computing (exactly, because it only involves evaluating a polynomial) the x corresponding to your approximate y and comparing it to the given value gives a reasonable estimate of the error. Knowing the order of the interpolation error allows you to refine this estimate and compute the function to any desired degree of accuracy.
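The procedure can be illustrated with a hypothetical polynomial, x = p(y) = y**3 + y, standing in for the one on the exam: tabulate exact (x, y) pairs, interpolate y as a function of x using only the few nearest table entries, and then check the answer by evaluating p at the approximate y.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

# Hypothetical stand-in for the exam's polynomial: x = p(y) = y**3 + y.
def p(y):
    return y**3 + y

ys = [0.8, 0.9, 1.0, 1.1, 1.2]           # chosen y values
xs = [p(y) for y in ys]                   # exact x values from the polynomial
target_x = 2.2                            # solve p(y) = 2.2 by inverse interpolation

# use only the three tabulated points whose x is closest to target_x
near = sorted(range(len(xs)), key=lambda i: abs(xs[i] - target_x))[:3]
y_approx = lagrange_eval([xs[i] for i in near], [ys[i] for i in near], target_x)
residual = p(y_approx) - target_x         # exact error check by re-evaluating p
```

The residual plays exactly the role described above: it is computed exactly, so it measures the interpolation error without needing a formula for the inverse function.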
Problem 3. The last term in the displayed equation was, as usual, the error term when the expression on the left is approximated by the first three terms on the right side of the equation. For all x less than .12, this shows that the truncation error is as small as required. In particular, the truncation error can be ignored in using this formula to compute the function at the given values to 10 decimal places. Also, the roundoff in computing a polynomial with only three terms will be entirely due to the register size of your calculator. Since the terms are of vastly different sizes, the worst that can happen is that later terms have no effect when added to earlier ones; for very small x, this means that you will only get the limiting value of .5. Unfortunately, the given selection of values only illustrated effects that were easy to miss. You should try smaller values. You may never trust your calculator again! Direct evaluation of the expression on the left involves an approximate computation of the cosine and storing the result. However accurate that computation is, when it is subtracted from 1 there will be a significant roundoff error, and you will never regain the accuracy. Any difference between the first 10 decimal places of the two expressions in the given region is entirely due to this roundoff.
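A sketch of the effect, assuming the expression behaves like (1 - cos x)/x**2 (the exact exam formula is not reproduced here): direct evaluation loses most of its digits to the subtraction for small x, while the algebraically equivalent form using 1 - cos x = 2 sin^2(x/2) keeps full accuracy.

```python
import math

def naive(x):
    """Direct evaluation: subtracting cos(x) from 1 cancels leading digits."""
    return (1 - math.cos(x)) / x**2

def stable(x):
    """Rewrite 1 - cos(x) = 2*sin(x/2)**2 to avoid the cancellation."""
    s = math.sin(x / 2) / (x / 2)
    return 0.5 * s * s

x = 1e-6
bad = naive(x)      # cancellation: only a few correct digits remain
good = stable(x)    # agrees with the series value 0.5 - x**2/24 to full precision
```

Trying successively smaller x in the naive form shows the answer drifting away from .5 and eventually collapsing to 0, exactly the calculator behavior described above.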
Problem 4. Using the numerical differentiation formulas is easy, but you were supposed to interpret the answers. In particular, you were told to assume that the five-point formula was much more accurate than the three-point formula. What this was intended to mean is that the difference between the two formulas for the derivative at 0.5 could be taken as a good guess of the true error in the less accurate three-point formula. Since the truncation error is second order, dividing the step size by 10 should divide the truncation error by 100. It's as simple as that! Roundoff error behaves differently: the simple combinations of function values in the numerator of the differentiation formula will have the same 6 decimal place absolute accuracy as the given data, but dividing by the step size will reduce the accuracy by one decimal place in the given table and two decimal places in the hypothetical refined table. The competition between these two errors to spoil your day is the main theme of the study of numerical differentiation.
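The exam's data table is not reproduced here, so the following sketch uses f = sin as a hypothetical function whose derivative is known. It shows the comparison described above: the gap between the five-point and three-point values is a good guess at the true error of the three-point formula.

```python
import math

def three_point(f, x, h):
    """Centered three-point formula, O(h**2) truncation error."""
    return (f(x + h) - f(x - h)) / (2 * h)

def five_point(f, x, h):
    """Centered five-point formula, O(h**4) truncation error."""
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

x, h = 0.5, 0.1
d3 = three_point(math.sin, x, h)
d5 = five_point(math.sin, x, h)
error_guess = d5 - d3                 # treat the five-point value as "exact"
true_error = math.cos(x) - d3         # available here because f' is known
```

With exact function values the second-order behavior is clean; with 6-decimal-place data the division by h would start feeding roundoff into the answer, as described above.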
Problem 5. It is fairly easy to differentiate the integrand in this integral and put it in the error estimate for any of the composite rules discussed in the text. Once you know how many terms you need, it is easy to make an excuse for not completing the computation using one of those rules. However, if you have practiced using the integration routines in your calculator, you may find that there is no difficulty letting the firmware do the walking through 128 or 256 function evaluations. The Romberg method was suggested as an alternative because it is self-estimating. All you need to know is that the error estimates behave reasonably, which is easy to see in this case; then answers that agree to 8 decimal places are both correct to that accuracy. Unfortunately, many attempts to use the Romberg method ignored the extrapolation step and simply calculated a few trapezoidal rules with unimpressive results. Four rows of a complete Romberg table should be adequate. This means only 16 function values and a handful of simple extrapolation steps.
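A minimal sketch of a complete four-row Romberg table, using the integral of sin over [0, pi] as an illustrative integrand in place of the one on the exam. Column 0 contains the trapezoidal values many students stopped at; the later columns are the Richardson extrapolation steps that were skipped.

```python
import math

def romberg(f, a, b, rows=4):
    """Build a Romberg table: column 0 is the composite trapezoidal rule
    with 1, 2, 4, ... panels; later columns are Richardson extrapolations."""
    R = [[0.0] * rows for _ in range(rows)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for i in range(1, rows):
        h /= 2
        # trapezoid refinement: halve the old value, add the new midpoints
        new = sum(f(a + (2*k - 1) * h) for k in range(1, 2**(i-1) + 1))
        R[i][0] = R[i-1][0] / 2 + h * new
        for j in range(1, i + 1):
            R[i][j] = R[i][j-1] + (R[i][j-1] - R[i-1][j-1]) / (4**j - 1)
    return R

table = romberg(math.sin, 0, math.pi, rows=4)
# table[3][0] is the plain trapezoidal value; table[3][3] is fully extrapolated
```

The contrast is the point of the problem: the last trapezoidal value is correct to about 2 decimal places, while the fully extrapolated entry from the same function values is correct to about 5.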
Problem 6. As in problem 4, the intent here was to have a comparison between a second order method and a fourth order method on the same interval. The exact solution could also be used to confirm that the fourth order method was much more accurate. The order of the method tells you the power of the step size appearing in the error term, so multiplying the step size by a factor multiplies the error by a known power of that factor. If you know how small the error needs to be, you can find the factor you need. In addition, once you select a particular method, you know the effect of halving the step size, so you know what difference between those values is small enough to make the more accurate one meet your requirements.
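The exam's differential equation is not reproduced here, so this sketch compares a second-order method (the midpoint method, a representative choice) and the classical fourth-order Runge-Kutta method on the hypothetical test problem y' = y, y(0) = 1, whose exact solution at t = 1 is e.

```python
import math

def midpoint_step(f, t, y, h):
    """One step of the (second-order) midpoint method."""
    k1 = f(t, y)
    return y + h * f(t + h/2, y + h/2 * k1)

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def solve(step, f, t0, y0, t_end, n):
    """Advance from t0 to t_end in n equal steps with the given method."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y                      # hypothetical test problem y' = y
exact = math.exp(1.0)                   # y(1) = e
y2 = solve(midpoint_step, f, 0.0, 1.0, 1.0, 10)
y4 = solve(rk4_step, f, 0.0, 1.0, 1.0, 10)
```

Halving the step size should cut the midpoint error by about 4 and the RK4 error by about 16, which is exactly the order-of-the-method reasoning the problem asked for.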
The textbook exercises often emphasized working with the formulas at a level that could be seen not to give useful answers. This did not seem to be in the spirit of a mathematics course, so supplementary problems were also provided. A better way to enforce a deeper study of the topics in the course needs to be developed. I thank all who participated in class. Your efforts will help improve the future of this course.