A pseudo-diary for Math 291:01, spring 2003


As I remarked on other pages, I taught Math 291 during the fall 2002 semester. I wrote a daily diary of what I covered during the meetings of the course. The diary sometimes included additional material, interesting pictures, and useful links. What I will do here is just provide a link for each meeting of our class to the corresponding entry of the diary last semester. This will not be totally satisfactory, since at times I will cover material differently (rarely better!). I will try to include useful comments if I think that the differences are marked. Many of the pictures displayed were made by Maple. In each case I give the command which created the picture. This may be useful in your own work, in this course and on other occasions.

Lecture date, with link (as appropriate) | Comments, with links (as appropriate)
Monday, May 5 I hope to discuss the derivation of the heat equation today. Most of this is available here. I hope to add material on the fundamental solution of the heat equation: K(x,t) = (1/(4 Pi t)^(3/2)) exp(-||x||^2/(4t)), and on the "transmission speed" of heat, and thus its macroscopic suitability as a model. There are 76,200 Google references for the phrase "fundamental solution of the heat equation"!
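If you want to check symbolically that this K really satisfies the heat equation u_t = u_xx + u_yy + u_zz, a short Maple computation along the following lines should confirm it (my own check, written in the three space variables x, y, z):
K := (4*Pi*t)^(-3/2)*exp(-(x^2+y^2+z^2)/(4*t)):
simplify(diff(K,t) - (diff(K,x$2) + diff(K,y$2) + diff(K,z$2)));   # should simplify to 0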
Wednesday, April 30 I haven't bothered to keep up with this pseudo-diary, since much of what I have done is a duplicate of last semester's presentations. However, the discussion on flow boxes to illustrate the definition of divergence is new, and I also promised an image of a complicated surface. So here is a picture of Alexander's horned sphere (what's inside of it?). More information about insides and outsides is available.
Monday, March 31 A wonderful polar coordinate integral and computation of (1/2)!.
Change of variable in polar coordinates.
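For the record, Maple will happily check the numbers involved (this is only a numerical confirmation, not the change-of-variables argument itself):
int(int(exp(-r^2)*r, r=0..infinity), theta=0..2*Pi);   # the double integral of exp(-x^2-y^2) in polar coordinates: Pi
int(exp(-x^2), x=-infinity..infinity);                 # so this one-variable integral is sqrt(Pi)
GAMMA(3/2);                                            # "(1/2)!" is GAMMA(3/2) = (1/2)*sqrt(Pi)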
Thursday, March 27 Conversion of double integrals to polar coordinates; the gravity of a flat earth.
Wednesday, March 26 The instructor survived his experience.

Please see the diary entry for last semester. A useful assertion was made: the existence of the double integral and the truth of Fubini's Theorem (conversion of double integrals into iterated integrals) hold not only for functions which are continuous on rectangles, but also for functions which have discontinuities on lower-dimensional sets. Several examples were done using this fact.

This material is covered in section 15.3 of the text. I will ask students to do one homework problem from this section tomorrow in class. There will be an opportunity for students to ask questions before the request.

Monday, March 24 I reviewed Riemann sums, partitions, sample points, and the definition of the definite integral for one variable. I pointed out a scheme which allows approximation of the definite integral of a differentiable function together with an error bound.

We then extended integration to two variables, and described the formal setup for integrating a function defined over a rectangle in R^2. This is section 15.1, the definition of the double integral. I even tried to estimate a definite integral of a "horrible" function over a rectangle. Now something new does come in: Fubini's Theorem asserts that under certain general circumstances, this double integral is equal to iterated (repeated) integrals in each of the variables separately. I then did a problem from section 15.2.
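If you would like to see Fubini's Theorem in action with Maple, here is a small made-up example (my own choice of function and rectangle, not a problem from class); both orders of iterated integration should give the same number:
f := x*y^2 + sin(x+y):
int(int(f, x=0..1), y=0..2);    # integrate in x first, then in y
int(int(f, y=0..2), x=0..1);    # the other order; Fubini says the answers agree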

Monday, March 10
Wednesday, March 12
I worked on Lagrange multipliers. I mostly did examples much like those done in the previous semester. Please see the links to the right. I gave out an analysis of two non-routine problems, along with some pictures.

We did several problems from 14.5 in preparation for the exam tomorrow.
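For reference, here is a sketch of how Maple can be asked to solve a Lagrange multiplier system. The function and constraint below are made up for illustration (maximize xy on the unit circle), not taken from the class examples:
f := x*y:  g := x^2 + y^2 - 1:
solve({diff(f,x) = lambda*diff(g,x),
       diff(f,y) = lambda*diff(g,y),
       g = 0}, {x, y, lambda});    # the four constrained critical points, with their multipliers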

Thursday, March 6
I would like people to work on:
1. "Their" review problems. Please send me plain text e-mail. You certainly may request hints. I will be happy to supply some.
2. Look through the textbook problems for sections 14.1-14.7, and send me e-mail about any you would like me to go over.

I discussed the definitions of {local|absolute|strict} m{axi|ini}ma. (That's a total of six definitions!) I stated the second derivative test for functions of two variables today and tried to do two examples. Let me tell you about them.

Example 1 "Leonhard Euler (1707--1783) was a great and very prolific mathematician. He published Institutiones Calculi Differentialis (Methods of the Differential Calculus) in 1755. It was an influential text, and was the first source of criteria for discovering local extrema of functions of several variables. In it Euler investigated the following specific example:"
V = x^3 + y^2 - 3xy + (3/2)x.
He asserted that V has a minimum both at x=1 and y=3/2 and at x=1/2 and y=3/4. Was Euler correct?

In fact, this was a problem on my first exam in Math 291 in the fall 2002 semester, and a complete solution is available. It turns out that Euler was wrong. One of the critical points is a local minimum and the other is a saddle. The picture to the right is the output from Maple of the following sequence of instructions:

with(plots):
V:=x^3+y^2-3*x*y+(3/2)*x;
                           3    2
                     V := x  + y  - 3 x y + 3/2 x
plot3d(V,x=-2..3,y=-1..5,color=red,axes=normal);
If you try this in a Maple window, you should be able to rotate the graph enough to perhaps convince yourself that the assertion about the critical points is possibly true. Graphs of surfaces can be subtle.
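If you would like to check the assertion about Euler's example without trusting the picture, you can continue the same Maple session (V has already been assigned above):
solve({diff(V,x), diff(V,y)}, {x,y});   # should return the two critical points x=1, y=3/2 and x=1/2, y=3/4
H := diff(V,x$2)*diff(V,y$2) - diff(V,x,y)^2;   # the Hessian determinant; here it works out to 12*x - 9
eval(subs(x=1, y=3/2, H));     # 3, and diff(V,x$2)=6>0 there, so a local minimum
eval(subs(x=1/2, y=3/4, H));   # -3, so a saddle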

I tried another example, which I had found in a calculus book.

Example 2 If f(x,y) = (x^2+2y^2)e^(-x^2-y^2), locate and characterize (max, min, saddle) the five critical points of f.
In fact, I was able to locate the five critical points. I found the partial derivatives:
f_x = 2x e^(-x^2-y^2) + (x^2+2y^2) e^(-x^2-y^2) (-2x)
f_y = 4y e^(-x^2-y^2) + (x^2+2y^2) e^(-x^2-y^2) (-2y)
Since I wanted to know when both of these were 0, I could divide by the exponential (always positive, never 0) and collect terms to get the following polynomial equations:
2x(1-(x^2+2y^2)) = 0
2y(2-(x^2+2y^2)) = 0
Since we have a product of two factors in each equation, and I need both equations to be 0, we just need to examine 4 cases. One can't happen (the second factors can't both be 0 because of the 1 and the 2). However, the other combinations lead to these critical points: (0,0), (0,1), (0,-1), (1,0), and (-1,0).
I then tried to compute the Hessian by hand: f_xx f_yy - (f_xy)^2 and discovered that it was a mess. Or, at least, I could not see any way to simplify the computation. I had Maple draw the graph. You might be able to see that two of the c.p.'s are saddles, two are local maxima, and one is a local minimum.

So I asked Maple to do the work.

W:=(x^2+2*y^2)*exp(-x^2-y^2): Defines the function f of Example 2 as the Maple variable W.
solve({diff(W,x),diff(W,y)});
  {y = 0, x = 0}, {y = 0, x = 1}, {y = 0, x = -1}, {y = 1, x = 0},{x = 0, y = -1}

H:=simplify(diff(W,x$2)*diff(W,y$2)-(diff(W,x,y))^2);
                  2      2        2  2       4  2       2  4
  H := -4 exp(-2 x  - 2 y ) (-26 x  y  + 10 x  y  + 16 x  y  - 2

               2       2       4      4      6      6
         + 14 y  + 11 x  - 24 y  - 9 x  + 8 y  + 2 x )

eval(subs(x=0,y=0,H));
                                  8
eval(subs(x=0,y=0,diff(W,x$2)));
                                  2
According to the second derivative test, this computation verifies that f has a local minimum at (0,0): H is 8>0 there and f_xx is 2>0 there.

The next topic is constrained max/min in several variables. Here the 1-dimensional case is considerably easier than the several-variable case. So first I tried to review the 1-dimensional case.

Suppose we are given a function of 1 variable on the interval [a,b]. Where should we search for the absolute maximum and the absolute minimum of the function? This is a well-known topic in 1-variable calculus, and most students have an almost Pavlovian response: look for critical points inside the interval (where the derivative is 0 or doesn't exist). Examine the values of the function at these points, and compare the values at the end-points of the interval. This list of values will contain the max and the min of the function.

Of course, that might bring up the question, "Does a function on a closed, bounded interval always attain its maximum and minimum?" Here are a few examples.
f(x)=1/x if x is not 0, and f(0)=36 on the interval [0,1]. This function does not attain its maximum. For example, 36 is not its maximum, since f(1/37)=37>36. And if x>0, f(x)=1/x is not its maximum, because f(x/2)=2/x is greater than f(x). ("Attain" the max is the important thing here: f has no value which is largest.)
Here is an example which is quite a bit more subtle. f(x)=x if 0<x<1, and f(0)=f(1)=1/2. The graph of this function has "holes" at 0 and at 1. On the interval [0,1], the function neither attains a max nor a min.

The missing word here is continuity. A continuous function on [a,b] always attains its max and its min. For many "nice" functions, then, checking to get the max and min means checking a short list of interior critical points and the endpoints, a and b. In several variables the situation is dominated by the following theoretical idea:

Theorem Suppose we look at a region in R^n which is bounded (that is, can be put inside a large ball) and which also contains all of its boundary. Then any continuous function on the region must attain its maximum and its minimum.
A proof of this takes considerable theory, and is not the business of this course. Learning some ways of finding such max and min values is some of the business of the course. If the max/min are "interior", then we already have made a start: the max/min must occur at critical points. But max/min can also be found on the boundary, and here is where the higher dimensional situation is far more "interesting" than the 1 dimensional one. The boundary in one dimension is a rather short list (just a and b!). In higher dimensions, the boundary can itself be quite complicated, and locating boundary "extrema" can be very difficult. This is the next topic, and the last one in chapter 14.

Wednesday, March 5 I first discussed Taylor's Theorem in one variable. A discussion similar to this was done last semester and I refer to that. I then tried to discuss Taylor's Theorem in two variables. This calls for some imagination. I wanted to consider F(a+h,b+k) and did it by looking at the one-dimensional "slice" function, f(t)= F(a+ht,b+kt). So f(0)=F(a,b) and f(1)=F(a+h,b+k). To apply the one-variable Taylor's Theorem, I need to compute f'(0) and f''(0), etc. So with some luck, using the Chain Rule very carefully, we did this:
f'(t) = D_1F(a+ht,b+kt)h + D_2F(a+ht,b+kt)k so that
   f'(0) = D_1F(a,b)h + D_2F(a,b)k
f''(t) = D_1D_1F(a+ht,b+kt)h^2 + D_2D_1F(a+ht,b+kt)hk + D_1D_2F(a+ht,b+kt)kh + D_2D_2F(a+ht,b+kt)k^2.
   f''(0) = D_1D_1F(a,b)h^2 + 2D_1D_2F(a,b)hk + D_2D_2F(a,b)k^2.

The last equation results from the equality of "mixed partial derivatives" and the commutativity of multiplication (hk=kh). In fact, one can now see the general pattern. The same formation rule as in the Binomial Theorem comes into effect here. Every time we differentiate a derivative of F, the Chain Rule takes effect. It "spits out" an h with every D_1, and a k with every D_2. The binomial coefficients occur because the rule of formation is the same as in Pascal's Triangle. In any case, what we have is (when t=1):

Taylor's Theorem, in a sort of two-variable version:
F(a+h,b+k) = sum_{j=0}^{n} (1/j!) ( sum_{i=0}^{j} (j!/(i!(j-i)!)) [partial^j F/((partial x)^(j-i) (partial y)^i)](a,b) h^(j-i) k^i ) + Error, where the Error depends on F and a and b and h and k and n, and goes to 0 faster than order n as h and k go to 0.
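Maple has a command, mtaylor, for multivariable Taylor expansions, which you can use to check computations like this. A small example of my own (in very old versions of Maple you may need readlib(mtaylor) before using it):
F := exp(x)*cos(y):
mtaylor(F, [x=0, y=0], 3);   # all terms of total degree less than 3: 1 + x + (1/2)*x^2 - (1/2)*y^2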

I very quickly reminded people of what local max and min were, and the fact that such must occur at critical points. Again, please see last semester's notes. A critical point occurs where the gradient is 0 or where the gradient doesn't exist (such can happen: the vertex of a cone, for example). But if we are at a point where grad F = 0, how can we tell if we are at a local max, a local min, or a saddle?


Sometimes you can just "see" it. For example, we considered the almost silly example a^44 + b^144 + c^20 + d^100 + e^3. The only critical point of this function is (0,0,0,0,0) (the only point where all of the 5 first partials are 0), which must be a saddle because of the odd degree in the variable e.

But let us look at a classical example, two variables. Here a critical point (if the function is differentiable) is where the two partials of F are 0. So let us assume that is true at the point (a,b). How does the behavior of the second-degree Taylor polynomial influence the local "geometry" of the graph of the function? I restricted myself initially to asking how to get picture II, a local minimum. The second-degree Taylor polynomial is:
(partial^2/partial x^2)F(a,b)h^2 + 2(partial^2/(partial x partial y))F(a,b)hk + (partial^2/partial y^2)F(a,b)k^2
which I abbreviated to Ah^2+2Bhk+Ck^2.
Mr. Elkholy made the suggestion that certainly A>0, because a 1-dimensional slice with fixed y would imply that the second x-derivative should be positive (think of x^2). Then I asked how I can get Ah^2+2Bhk+Ck^2>0. Well, I can't always get that: for example, if (h,k)=(0,0). But except for that case, I would like to get it to be positive. If k is 0 and h is not 0, then it is guaranteed to be positive by A>0. Now what happens if k is not 0? We could divide by k^2 (which is positive). We would get:
A(h/k)^2+2B(h/k)+C. If h/k is a new variable (joke [heh heh] since I used the Greek letter nu in class, I think) w, we then want to know when Aw^2+2Bw+C is always positive. We did something much the same when we proved the Cauchy-Schwarz inequality (there are only a few great tricks, really). The graph of this function of w is a parabola. Since A>0, it opens up. It can't cross the horizontal axis, since otherwise it would change sign. So it can't have any real roots! That means the discriminant (the thing under the square root sign in the quadratic formula) must be negative (so there will be no real roots). What is the discriminant? It is 4B^2-4AC, and this must be negative.

Wow! We have verified/motivated part of what is called the second derivative test for two variables. I will discuss this further tomorrow, but: if A>0 and if AC-B^2>0, then the function has a local minimum. Wow!
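In case the algebra seems mysterious, completing the square shows why these two conditions do the job. The Maple line below just confirms the algebraic identity (my own check, not something from class):
q := A*h^2 + 2*B*h*k + C*k^2:
simplify(q - (A*(h + B*k/A)^2 + ((A*C - B^2)/A)*k^2));   # returns 0, so q = A*(h+B*k/A)^2 + ((A*C-B^2)/A)*k^2
If A>0 and AC-B^2>0, both terms in that rewritten form are nonnegative, and they are both 0 only when h=k=0, so q>0 away from (0,0).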

Of possible interest is this web page from Georgia Tech, discussing Taylor's Theorem in two variables.

I tried to find a "dictionary" of pictures on the web of what happens in two variables when one considers cubic polynomials but I haven't been able to find a good link. I will continue looking. The pictures can get quite complicated.

Monday, March 3 I first explained a part of a Maple dialog:
showtime(true); Turns on timing information.
vv:=x^2*arctan(exp(sin(3*y))); Defines a function.
                        2
                       x  arctan(exp(sin(3 y)))
time = 0.00, bytes = 62706   Time/storage needed.
vvd:=diff(vv,y$50): Asks for 50 y derivatives.
time = 4.19, bytes = 114534450   "Expression swell."
diff(vvd,x$3); Three x derivatives.
                                  0
time = 0.28, bytes = 3794046
diff(vv,x$3); The three x derivatives first.
                                  0
time = 0.00, bytes = 6086
off; Turns off timing information.
So the order that derivatives are taken doesn't seem to matter. This is called Clairaut's Theorem in your text. If partials exist and are continuous, then the order in which they are taken does not matter (at least theoretically!). Of course, what I tried to show above is that such things might well matter computationally! The verification of Clairaut's Theorem is in an appendix in your text, and uses the Mean Value Theorem a few times. I also tried to give an analogy with taking marginal "things" with respect to different variables on a spreadsheet, but I am not sure I was too successful.

Then I discussed traveling through a nebula in a rocket ship, the temperature that was recorded, and the rate of change of the temperature. The most important chain rule computation is to "decouple" the effect of the nebula's temperature change and the rocket ship's travel. Discussion similar to what we did today is recorded here.
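If you want to experiment with this kind of chain rule computation in Maple, here is a completely made-up nebula temperature and flight path (my own illustration, not the functions used in class):
T := (x,y,z,t) -> (x^2 + y^2 + z^2)*exp(-t):      # a made-up temperature for the nebula
x := t -> cos(t): y := t -> sin(t): z := t -> t:  # a made-up flight path for the rocket ship
diff(T(x(t),y(t),z(t),t), t);                     # the rate of temperature change recorded on board
Maple just differentiates through the composition; the chain rule organizes the same answer into the part caused by the nebula's own temperature change and the parts caused by the ship's motion.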

Thursday, February 27 Much the same as the corresponding meeting the previous semester.

The first exam will be two weeks from this class. Review material and other information will be handed out next week.

Wednesday, February 26 Please look here for a discussion fairly similar to what I did today. It is complicated stuff, and really discussed some material which does not occur in 1-dimensional calculus.

Applications include "linearization" and equations for the tangent plane.

I tried to find an equation for a plane tangent to z=4x^2+3xy when x=2 and y=4 (there z=40). The normal vector is supposed to be -(dz/dx)i-(dz/dy)j+k. I would like to show you that what I found algebraically "looks" correct. So here is the output of some Maple commands. First, I typed
with(plots):
RED:=plot3d(4*x^2+3*x*y,x=-2..6,y=0..8,axes=normal,color=red):

The first instruction loaded the plots package, which has many different plotting routines, including ways of plotting space curves and surfaces. The instruction beginning RED asked Maple to plot z=4x^2+3xy with -2<=x<=6 and 0<=y<=8. Note the colon (:) at the end of the instruction, and the RED:= at the beginning. This asks Maple to (temporarily) store the picture in the variable RED and not (with :) to display it. The "axes=normal" requests that Maple add in to the picture the axes shown more or less the same as in our text. The "color=red" asks that the graph be colored red, instead of a certain kind of variable coloring which I don't think shows up very well.
I then typed
BLUE:=plot3d(28*(x-2)+6*(y-4)+40,x=-2..6,y=0..8,axes=normal,color=blue):
Of course, this is the candidate for the tangent plane. (z is not needed -- Maple assumes for this command that it is plotting a function of x and y and the value is the third coordinate). The tangent plane is plotted in blue, and stored in the variable BLUE (I am not very imaginative). I finally plotted both with
display3d({RED,BLUE});
and then I exported what was shown. These graphs take some educated intuition to appreciate. Two views are shown. This is actually a fairly "average" three-dimensional situation. The plane really is tangent at the point, but it cuts through the surface at the point -- we will understand this better in a little bit. I tried to show a more sideways view and a slightly oblique view. I urge you to try this yourself and rotate the graph and understand it.
I also remark that the exported colors are somewhat unreliable!
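If you want to see where the 28, the 6, and the 40 in the BLUE instruction come from, Maple can redo the hand computation:
z := 4*x^2 + 3*x*y:
eval(diff(z,x), {x=2, y=4});   # 28, the coefficient of (x-2)
eval(diff(z,y), {x=2, y=4});   # 6, the coefficient of (y-4)
eval(z, {x=2, y=4});           # 40, the height of the point of tangency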

Monday, February 24 I remarked that this week students should read and do problems in sections 14.1-14.5. Please.

Example I wanted to do one more example about continuity, and it will be a bit more complicated than the others. The example is the following: f(x,y)=2x+3y if x>=0 and f(x,y)=0 for x<0. The picture shown here is an effort to graph this function, which sort of looks like two mutually tilted half planes. There is half a plane (where z=0) over the "left" half of the xy-plane, and a tilted plane (part of z=2x+3y) over the right half of the xy-plane.
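If you would like to reproduce a picture like this yourself, Maple's piecewise command should do the job (this is my reconstruction of the plot, not necessarily the exact command used for the picture):
f := piecewise(x >= 0, 2*x + 3*y, 0):   # 2x+3y on the right half-plane, 0 on the left
plot3d(f, x=-3..3, y=-3..3, axes=normal);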

I asked where this function should and should not be continuous. We decided that it would be continuous where x>0 and where x<0. On the line x=0, it fails to be continuous everywhere except at (0,0).

We first explored what happened at (-3,7). In order to verify that f is continuous there, we wanted to restrict input values so that |f(x,y)-f(-3,7)| would be small. Well, since f(-3,7)=0, the simplest restriction would be one which would guarantee that f(x,y) also would be 0. But for that to be true, (x,y) would have to be in the left half-plane, where x<0. So the input tolerance, H, should be 3. Then any point (x,y) of R^2 inside the circle of radius 3 centered at (-3,7) would have to be in the left half-plane, where x<0, so f(x,y)=0. Thus if we always take H=3, then |f(x,y)-f(-3,7)|=0 and this is less than any positive K. That's pretty easy. In fact, to show that f is continuous at (x_0,y_0) where x_0<0, just take H=|x_0|, and a similar logic applies to show that |f(x,y)-f(x_0,y_0)|=0<K for any positive K. This is definitely the easiest case.

Then we looked at (3,7). Here and at nearby (x,y)'s, f(x,y) is given by the formula 2x+3y. We would expect that the correspondence we considered last time (take H=(1/sqrt(13))K) would play some part. It does, but the logic is a bit tricky. In order to apply the previous result, we need to be sure that every point inside the circle with radius H is in the right half-plane (a mirror image of the previous consideration!). But that means here that H should never be bigger than 3. So, golly, H should be the minimum of 3 and (1/sqrt(13))K: H=min(3,(1/sqrt(13))K).

There was a slight amount of discussion about min(A,B). I defined it by the following: min(A,B)=B if A>=B and =A if A<B. I also observed that min(A,B)=(A+B-|A-B|)/2 if this "formula" pleases you (?). There is a similar formula for the function max.
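If that formula amuses you, Maple can confirm it; the assuming clause tells Maple which of A and B is smaller (my own check, and the second line is the analogous formula for max):
simplify((A + B - abs(A-B))/2) assuming A <= B;   # should return A, which is min(A,B) in this case
simplify((A + B + abs(A-B))/2) assuming A <= B;   # should return B, which is max(A,B)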

Anyway, if the distance from (x,y) to (3,7) is less than H, then x>0, so f(x,y)=2x+3y. Then |f(x,y)-f(3,7)| is covered by the discussion last time, so that this difference is less than (1/sqrt(13))K·sqrt(13)=K, and we have verified continuity. Indeed, if (x_0,y_0) is in the right half-plane (so x_0>0) then the specification H=min(x_0,(1/sqrt(13))K) for the input tolerance will guarantee the output will be within K of f(x_0,y_0).

What happens when x=0 is more subtle. We looked first at (0,7). Here we suspected f was not continuous. We computed f(0,7)=21. Mr. McGowen suggested that if we tried K=20, we would not be able to find any H which would guarantee "if ||(x,y)-(0,7)||<H, then |f(x,y)-f(0,7)|<K" (that is, 20). In fact, if H>0 (as it is supposed to be) then (-H/2,7) will be inside the circle, and since -H/2<0, f(-H/2,7)=0 so that |f(x,y)-f(0,7)|=|0-21| which is not less than 20, and we have shown that the definition of continuity cannot be fulfilled. This "negation" argument is quite complex logically. It needs to be thought about. And the specification of K=20 is very neat, of course. I did not look at the lack of continuity of f at (0,y) (with y not 0), generally. The next part was worrying me!

f is continuous at (0,0). Given an "output" tolerance K>0, how should we specify H? Here we decided that H=(1/sqrt(13))K. We must analyze |f(x,y)-f(0,0)|. Since f(0,0)=0, we don't need to worry about the (x,y)'s with x<0, because then f(x,y)=0, so |f(x,y)-f(0,0)|=0<K always. But when x>=0, we need "yesterday's result" (the one that needed the Cauchy-Schwarz inequality), and that will actually verify what we want. The whole story of continuity is very involved. Many special techniques have been invented to deal with it. I've just looked at the beginning!

Then we discussed differentiability. There is similar material here and here. Please look at these entries.

Thursday, February 20 Stirling's formula allows one to approximate factorials. Here's a link with fairly readable exposition. One version of the formula is:
For large N, N! is approximately (N/e)^N*sqrt(2*Pi*N).
This is a really handy fact. For example, Maple tells me in another window that the exact value of 100! divided by the approximation above (with N=100) is about 1.000833661 -- quite close to 1. Stirling's formula can be obtained by considering an improper integral: int_0^infinity x^n e^(-x) dx, which equals n!. The equality is a nice exercise using mathematical induction and integration by parts.
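Both statements are easy to check with Maple (the decimal below is the ratio quoted above):
evalf(factorial(100)/((100/exp(1))^100*sqrt(2*Pi*100)));   # about 1.000833661
int(x^n*exp(-x), x=0..infinity) assuming n::posint;        # Maple should answer GAMMA(n+1), which is n!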

Much more than almost anyone would want to know about binomial coefficients is just a click away!

Mr. Bergknoff gave a nice presentation of problem #4 of workshop #2. This is an intricate problem, and I thank him for his effort. After this I tried to give some background on the problem.

Now how about continuity? Consider the following statement:
Fix (x_0,y_0) in R^2. If any K>0 is given, then there is a positive real number H, which may depend on (x_0,y_0) and on K, so that if ||(x,y)-(x_0,y_0)||<H, then |f(x,y)-f(x_0,y_0)|<K.
This is an intricate definition. It essentially asks that the limit of f(x,y) as (x,y)-->(x_0,y_0) exist, and that this limit equal f(x_0,y_0). (That is, we can evaluate the limit by "plugging in".) This is what is meant by "f is continuous at (x_0,y_0)." The K is usually called epsilon and the H is usually called delta. I think of K as a specified output tolerance, and H as an input tolerance chosen so that the inputs which satisfy it will automatically be within the output tolerance of the desired value of f. The way to begin to understand this complicated implication is with some examples.

Example Suppose f(x,y) assigns the number 1 if (x,y)=(0,0) and otherwise its value is 0. Where is f continuous and where is f not continuous?
We decided that f should not be continuous at (0,0). To verify that f is not continuous, we must "exhibit" some K>0 so that no H>0 will make |f(x,y)-f(0,0)|<K when sqrt((x-0)^2+(y-0)^2)<H. That is, if the "output error" is K, there is no input tolerance which will force the outputs to be within K of the ideal value, f(0,0). We decided to try K=.5 (this was motivated by the difference between f(0,0)=1 and f anywhere else, which was 0). So we would like |f(x,y)-f(0,0)|<.5, and since f(0,0)=1, this is |f(x,y)-1|<.5, but this will be impossible if we can specify a pair (x,y) satisfying sqrt((x-0)^2+(y-0)^2)<H which has f(x,y)=0. But if, say, (x,y)=(H/2,H/2), we have such an (x,y). I commented that this choice of (x,y) is not the only one which will work, but it does work. Since H is not 0, this (x,y) is not (0,0) and thus f(x,y)=0, and the difference |f(x,y)-1| is 1, and 1 is greater than .5, and we have verified the claim that f is not continuous at (0,0).

What about at other points? I first looked at (3,4). Well, if we get (x,y) close to (3,4) (what does "close" mean?) then probably f(x,y) and f(3,4) are both equal to 0, so that |f(x,y)-f(3,4)| will be 0 which is certainly less than ... well, less than any output tolerance K. Wow! So the K won't matter here, but a good choice of H is important. That is, we want H so that if sqrt((x-3)^2+(y-4)^2)<H, then f(x,y) will be 0. But the only way f will not be 0 is if (x,y)=(0,0). So we need to choose H so that (0,0) is "avoided". Here, with (3,4), "clearly" (well, almost clearly if you take the distance to (0,0)!) if H=5, we will avoid (0,0).

More generally, I would like to prove that f is continuous at (x_0,y_0) when (x_0,y_0) is not (0,0). Well, if I choose H=sqrt(x_0^2+y_0^2), then H is positive (since the point is not (0,0)!) and then |f(x_0,y_0)-f(x,y)|=|0-0|, and this will always be less than any positive output tolerance, K.

Example I considered the more complicated function f(x,y)=2x+3y and asked where this function is continuous. Almost everyone agreed that it would be continuous at every point. I decided to try to verify that the function was continuous at (4,5), so f(4,5)=23. Therefore I need to check the following intricate logical statement:
Given K>0, there is H>0 so that if sqrt((x-4)^2+(y-5)^2)<H, then |f(x,y)-f(4,5)|<K.
I asserted that I would like to rewrite f(x,y)-f(4,5) a bit. So it becomes 2x+3y-23=2x+3y-(2·4+3·5)=2(x-4)+3(y-5). This last formula gives me a breath of hope, because it shows that if x is "close to" 4 and y is "close to" 5, then I can hope/expect f(x,y)-f(4,5) to be "close to" 0. Of course, the whole purpose of the definition is to make all the "close to" phrases precise. I notice that I can use the Cauchy-Schwarz Inequality, with v=2i+3j and w=(x-4)i+(y-5)j. (This is not an accident: it is one reason I chose to work with this function!) Then |v·w|<=||v|| ||w|| becomes exactly |2(x-4)+3(y-5)|<=sqrt(13)sqrt((x-4)^2+(y-5)^2). Well, now we can put the pieces together. I claim that if "you" give me K, then if I take H=(1/sqrt(13))K, the implication I want will be true. And the reason is that the Cauchy-Schwarz Inequality allows me to "control" |f(x,y)-f(4,5)| by sqrt(13) multiplied by the distance between (x,y) and (4,5). So corresponding to a given "output tolerance" is the "input tolerance" which is just 1/sqrt(13) multiplied by the output tolerance. I remarked that I bet if K=1/10, then the input tolerance H=10^(-1000) would also work, but that almost everyone who actually uses or works with such things would find such a specification of H to be ugly or wasteful or both.

If we understand this sufficiently, we can now see how to prove that this f is continuous at any point (x_0,y_0). Given a K>0, the same choice of H works at every point. So this function is actually in a certain logical sense "simpler" than the previous example, where H needed to change from point to point.

We'll do more next time, before moving on to derivatives.

Wednesday, February 19 Again discussed limits. One idea I tried to suggest is analyzing functions by "reducing dimension". One way is with contour lines, and another way is by slicing. We looked at a sequence of increasingly frustrating examples. The idea is to contrast the two-dimensional limit of f(x,y) as (x,y)-->(x_0,y_0) with the existence of various 1-dimensional limits.

So we might look at the limit of f(x_0+at, y_0+bt) as t-->0+ for various values of a and b: these are one-dimensional slices at various "angles". If a=0 this is the slice with only y "moving" and if b=0 this is the slice with x "moving".

If the two-dimensional limit exists, then the one-dimensional slice limits must all exist and must all be the value of the two-dimensional limit. But we saw that there are examples of functions with two-dimensional domains whose slices had various different values, and there are even examples of functions whose one-dimensional limit slices all exist and all have the same value, but which do not have a two-dimensional limit. So things can really be weird.
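A standard example of the last (weirdest) phenomenon is f(x,y)=x^2y/(x^4+y^2) near (0,0). I don't claim this is the example used in class, but Maple can check the slice limits:
f := x^2*y/(x^4 + y^2):
limit(eval(f, y=m*x), x=0);   # 0 along every line y=mx (m treated as a generic nonzero constant; f is 0 on the y-axis too)
limit(eval(f, y=x^2), x=0);   # 1/2 along the parabola y=x^2
Every straight-line slice through the origin has limit 0, yet along the parabola the values approach 1/2, so the two-dimensional limit cannot exist.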

Please look at the diary entries here and here. There are Maple pictures in each, and each picture is accompanied by the Maple command which produced it.

Thursday, February 13 I again stated the exact definition of limit and worked with it a bit. This material is covered, more or less, here. We also analyzed the alien limit problem and decided that lim_{x-->a} f(x) (after the values are changed) always exists for every a, and is always equal to a. To me this fact is not totally obvious.

Then I discussed graphs of functions whose domains are in R^2 and whose ranges are in R. The link just given has Maple pictures and commands which may prove useful. Students should look at them.

I began the study of limits in R^2, and contrasted the 2-dimensional limits with limits of "slices" of functions, where only one of the variables is allowed to change and the other is fixed.

Examples included the function which is always 0 for (x,y) in R^2 except for (0,0) where it is 1, and the function which is always 0 for (x,y) in R^2 except for where xy=0 (the coordinate axes) where it is 1. I will continue with further examples next time.

Wednesday, February 12 Mr. Brown presented problem 3 of workshop 3 at the board. I thank him for his efforts.

I tried to encourage people in this rather small class to

  • Make oral presentations
  • Proofread their "teammates'" papers.
  • Write clearly on workshop problems.
All of these skills are valuable in almost any workplace I know.

I presented problem #4 of workshop #3 because no students did it, and I think that it is irritating enough to be useful to see.

I did some problems from chapter 13.

A review of limits. I stated the limit definition for real-valued functions of one real variable:
We say lim_{x->a} f(x)=b if, given any eps>0, there is delta>0 so that if 0<|x-a|<delta, then |f(x)-b|<eps.
I asked how people computed limits in most calculus courses, and mentioned two strategies:

  1. Plug in. So lim_{x->2} x^3 was 8 (=2^3). This idea is valid (heck it is the definition!) if the function involved is continuous at a. Many functions defined by simple formulas satisfy this.
  2. L'Hopital's rule. If a quotient has a specific "indeterminate form", such as 0/0, we can evaluate the limit if a corresponding limit, using the quotient of derivatives, itself exists. So lim_{x->Pi} (3x-3Pi)/sin(x) can be evaluated this way, as lim_{x->Pi} 3/cos(x) = -3. One "problem" is that the hypotheses must be verified (the indeterminate form checked!), otherwise it is easy to make bad mistakes.
One can also try, as Mr. Brown suggested, to recognize certain limits as coming from the definition of, say, derivative (or integral, also!) and then "evaluate" these limits by using the appropriate derivative algorithm.
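For what it's worth, Maple knows both of these limits (a quick check, not a substitute for the strategies above):
limit(x^3, x=2);                    # "plug in": 8
limit((3*x - 3*Pi)/sin(x), x=Pi);   # the 0/0 example above: -3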

I posed the following question for students to think about:

The aliens arrive!

Consider the function f(x)=x. When the aliens arrived, what they did was permute exactly 10,000 values of this function, and left all the other values fixed. (Further explanation: if the aliens had, for example, permuted 2 values of f they could have had f(x)=x except for x=sqrt(2) and x=Pi, and then f(sqrt(2))=Pi and f(Pi)=sqrt(2).) All that you know is that 10,000 values are affected. You know nothing else, and nothing about these 10,000 numbers.

Question for tomorrow For which numbers a does lim_{x->a} f(x) exist, and what is the value of the limit for those numbers?

Monday, February 10 I stated a general theorem about curves being determined (roughly) by their curvature and torsion, and then remarked that it is quite difficult to determine interesting or useful "properties" of the curve from just this information.
For example, although the theorem I described was first proved around 1850 (actually, a little bit later), a useful application describing a necessary condition for a closed curve to be a knot was only obtained about a century later. This is a result called the Fary-Milnor Theorem, and states that a curve whose total curvature (integrate the curvature around the whole curve, using an idea of integration over a curve which we will see later in the course) is less than 4Pi must be unknotted. There has, in fact, been much more "progress" in recent years. Since detailed knowledge of space curves has turned out to be important in robotics, control theory, material science, and molecular biology, lots of new things have been studied.

I did some problems from chapter 12.

Thursday, February 6 Much as I did it last semester. I didn't quite finish, though, so I will have more to say Monday.
Wednesday, February 5 I computed a simple example of a tangent line to a curve. I also mentioned that actually "computing" the arc length of a curve by nice antiderivatives would likely fail much of the time.
The balance of the lecture duplicated most of what the link has, except for the last few sentences, which I will begin with next time.


Maintained by greenfie@math.rutgers.edu and last modified 2/6/2003.