Math 152 diary, spring 2007 |
---|
In reverse order: the most recent material is first. |
Wednesday, April 4 | (Lecture #20) |
---|
Google gives more than four million pages in response to the word fibonacci. The highest ranking page has a great deal of information. Fibonacci's original publication was in 1202. The web site does not seem to discuss how the numbers occurred a thousand years earlier in Sanskrit literature, where they were used in the analysis of how long and short vowel sounds can be combined to form phrases!
This concludes presentation of material which will be included on the exam next week. But perhaps the most important single result of the course will now be presented, Taylor's Theorem.
I know about a half-dozen ways to discuss this material. The method I used in class, and will discuss here, is very slick and sneaky. Its chief advantage is that it is very fast, but its chief disadvantage is that students may just be astounded. The discussion is not well motivated (why the heck is he writing that thing?). Right now, I think the speed is useful. I will use the time "saved" to present many computations using Taylor's Theorem. So here we go.
Choose a and x, two numbers, and then define a constant K by the equation
f(x)=f(a)+f´(a)(x-a)+(f´´(a)/2)(x-a)2+K(x-a)3/6.
So K is the number which makes that equation true. If a is not equal to x, then
a-x is not equal to 0 and you can solve for K in the equation, so there will be
exactly one K satisfying the equation.
The object of what we're doing is to discover another description
of K. Well, now define a function F(w) by the equation
F(w)=f(w)+f´(w)(x-w)+(f´´(w)/2)(x-w)2+K(x-w)3/6
and w is a variable between a and x. The colors in the pieces
of the formula (if you can see them) are used so that the next step
may be more understandable.
Differentiate this function F(w)
with respect to the variable w. Be sure to differentiate
everything that contains a w. The product rule must be used
twice, and the Chain Rule gives some minus signs.
F´(w)=f´(w)+f´´(w)(x-w)-f´(w)+
(f(3)(w)/2)(x-w)2-(f´´(w)/2)·2(x-w)+K[3(x-w)2(-1)]/6
There are f´(w) and -f´(w) which cancel. And there are two
more terms which also cancel:
f´´(w)(x-w) and -(f´´(w)/2)·2(x-w) (since the two 2's cancel!). F´(w) is
actually given by
F´(w)=(f(3)(w)/2)(x-w)2-K3(x-w)2/6=(f(3)(w)/2)(x-w)2-K(x-w)2/2.
Let's learn a bit more about F(w). If we substitute x for w, we
get:
F(x)=f(x)+f´(x)(x-x)+(f´´(x)/2)(x-x)2+K[(x-x)3]/6=f(x)
because of the various appearances of (x-x).
Now consider F(a), where we substitute a for w in the formula for
F:
F(a)=f(a)+f´(a)(x-a)+(f´´(a)/2)(x-a)2+K(x-a)3/6
But we chose K so that this would be f(x).
Therefore we have a function, F(w), so that F(a)=f(x) and
F(x)=f(x). F(w) has the same value at both a and x. Let's use
the Mean Value Theorem. It states that there is some c between a and x
so that
F´(c)=[F(x)-F(a)]/[x-a].
Since F(x) and F(a) have the same value, then there is some c between
a and x so that F´(c)=0.
But F´(w) is
described above so:
F´(c)=[f(3)(c)/2](x-c)2-K(x-c)2/2.
This is 0. We can solve for K now. Do it (multiply by 2, divide by
(x-c)2, and put the K on the other side of the equation). This
gets us exactly what we wanted, another description of K. So
K=f(3)(c) and now we know that
f(x)=f(a)+f´(a)(x-a)+[f´´(a)/2](x-a)2+[f(3)(c)/6](x-a)3.
This is a version of Taylor's Theorem. Here is a more general statement.
Taylor's Theorem with Lagrange's form of the remainder
There is some number c between a and x so that f(x)=j=0n[f(j)(a)/j!](x-a)j + [f(n+1)(c)/(n+1)!](x-a)n+1.
This is neat and easy to remember because the remainder or error term looks almost like the next piece of what we are adding up. There is, of course, yet more vocabulary.
How to say it with infinite series | The Taylor language |
---|---|
(Here a is fixed and x is a variable and n is a positive integer.) j=0n[f(j)(a)/j!](x-a)j is the partial sum of j=0infinity[f(j)(a)/j!](x-a)j, a power series centered at a. |
(Again, a is fixed, x is a variable, and n is a positive integer.) j=0n[f(j)(a)/j!](x-a)j is the nth degree Taylor polynomial for f centered at a. The series j=0infinity[f(j)(a)/j!](x-a)j is the Taylor series for f centered at a. |
j=n+1infinity[f(j)(a)/j!](x-a)j is the infinite tail of the infinite series, and represents the difference between the sum of all the terms (a limit) and the sum of finitely many terms. | [f(n+1)(c)/(n+1)!](x-a)n+1 (for some c between a and x) is the difference between f(x) and the value at x of the nth degree Taylor polynomial centered at a. It is called the remainder or error term. |
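To see the theorem doing something concrete, here is a small numerical check of the degree-2 case (my own illustrative sketch in Python, not something from the lecture; the choice f=sin and the numbers a and x are arbitrary). It computes the degree-2 Taylor polynomial, solves for the K defined at the start of this discussion, and confirms that K lands among the values of f(3) on the interval from a to x, just as the theorem promises.

import math

# Illustrative check of the degree-2 case of Taylor's Theorem.
# Assumptions: f = sin, a = 0.3, x = 1.1 (all arbitrary choices).
f  = math.sin
f1 = math.cos                    # f'
f2 = lambda t: -math.sin(t)      # f''
f3 = lambda t: -math.cos(t)      # f'''

a, x = 0.3, 1.1
T2 = f(a) + f1(a)*(x - a) + (f2(a)/2)*(x - a)**2   # degree-2 Taylor polynomial at a
K  = 6*(f(x) - T2)/(x - a)**3                      # the K defined by f(x) = T2 + K(x-a)^3/6

# Taylor's Theorem says K = f'''(c) for some c between a and x,
# so K must lie between the smallest and largest values of f''' there.
values = [f3(a + (x - a)*k/1000) for k in range(1001)]
print("K =", K)
print("f''' on [a,x] ranges over [", min(values), ",", max(values), "]")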
How is a proof or idea discovered?
So how did the original ideas about Taylor's Theorem occur? What I
just showed you is very contrived ("showing effects of planning
or manipulation"). Certainly it seems to need knowledge of the final
statement in order to prove the statement! I suspect that many
specific examples were considered when people wanted to find "nice"
polynomial approximations. I also suspect that reasoning using
l'Hopital's rule occurred. I will discuss this next time and try to
show how one could "guess" Taylor's Theorem, maybe.
History
I am not an expert. Brook Taylor lived in England from 1685 to
1731. Here
is a link
to a biography of Taylor, and here is one paragraph from that
biography:
We must not give the impression that this result [Taylor's Theorem and Taylor series] was one which Taylor was the first to discover. James Gregory, Newton, Leibniz, Johann Bernoulli and de Moivre had all discovered variants of Taylor's Theorem. Gregory, for example, knew that
arctan x=x-(x3/3)+(x5/5)-(x7/7)+...
and his methods are discussed in [13]. The differences in Newton's ideas of Taylor series and those of Gregory are discussed in [15]. All of these mathematicians had made their discoveries independently, and Taylor's work was also independent of that of the others. The importance of Taylor's Theorem remained unrecognised until 1772 when Lagrange proclaimed it the basic principle of the differential calculus. The term "Taylor's series" seems to have been used for the first time by Lhuilier in 1786.
I did not know until fairly recently that, five hundred years ago, there was detailed knowledge of Taylor series away from western Europe. A friend of mine, Professor David Bressoud of Macalester College in Minnesota, wrote an article whose title is Was Calculus Invented in India? (in the College Math Journal, 33 1, Pages 2-13, 2002). Here is the opening paragraph of his article:
No. Calculus was not invented in India. But two hundred years before Newton or Leibniz, Indian astronomers came very close to creating what we would call calculus. Sometime before 1500, they had advanced to the point where they could apply ideas from both integral and differential calculus to derive the infinite series expansions of the sine, cosine, and arctangent functions:
sin x=x-(x3/3!)+(x5/5!)-(x7/7!)+...
cos x=1-(x2/2!)+(x4/4!)-(x6/6!)+...
arctan x=x-(x3/3)+(x5/5)-(x7/7)+...
The article concludes with the following paragraphs.
The traditional introduction of calculus is as a collection of algebraic techniques that solve essentially geometric problems: calculation of areas and construction of tangents. This was not the case in India. There, ideas of calculus were discovered as solutions to essentially algebraic problems: evaluating sums and interpolating tables of sines. Geometry was well developed in pre-1500 India. As we will see, it played a role, but it was, at best, a bit player. The story of calculus in India shows us how calculus can emerge in the absence of the traditional geometric context. This story should also serve as a cautionary tale, for what did emerge was sterile. These mathematical discoveries led nowhere. Ultimately, they were forgotten, saved from oblivion only by modern scholars.
There is no evidence that the Indian work on series was known beyond India, or even outside Kerala, until the nineteenth century. Gold and Pingree assert ... that by the time these series were rediscovered in Europe, they had, for all practical purposes, been lost to India. The expansions of the sine, cosine, and arc tangent had been passed down through several generations of disciples, but they remained sterile observations for which no one could find much use.
No. Calculus was not invented in India. Much of what we call calculus had been discovered, but the context for understanding these discoveries was never constructed. I am left wondering how much important mathematics today is known but not yet understood, passed among a coterie of tightly knit disciples as an intriguing yet seemingly useless insight, lacking the context, the fertilizing connections, that will enable it to blossom and produce its fruit.
I think it is useful to recognize that all sorts of human beings have intelligence and talent. Some study of history makes this clear.
Let me use Taylor's Theorem. So:
Let's see some numbers.
The error overestimate is |x|n+1/(n+1)!. This depends on
both x and n, and you can see the effect of the dependence in the
computations above when n=11
and x=.5 and x=2. What is perhaps more interesting is the following,
which I certainly hope you believe by now:
If x is fixed, and n-->infinity, then |x|n+1/(n+1)!-->0.
This is true because eventually n gets larger than x, and then the
effect of multiplying by |x| on top compared to multiplying by stuff
bigger than x on the bottom shows that the terms must approach 0.
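If you would like numerical evidence, here is a tiny Python sketch (my own illustration, not from the lecture) that prints |x|n+1/(n+1)! for a few fixed values of x as n grows. Even for x=10 the factorial in the bottom eventually wins.

from math import factorial

# The error overestimate |x|^(n+1)/(n+1)! for fixed x, as n grows.
for x in (0.5, 2.0, 10.0):
    print("x =", x)
    for n in (5, 10, 20, 40):
        print("   n =", n, "  |x|^(n+1)/(n+1)! =", abs(x)**(n + 1)/factorial(n + 1))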
It may not be clear what the heck is going on. I promise that I
will do many examples and applications. But here are three conclusions
which I can write:
Monday, April 2 | (Lecture #19) |
---|
A collection of examples
Power series | Coefficients | Interval of convergence | Radius of convergence |
---|---|---|---|
n=0infinityxn/nn | cn=1/nn | All real numbers: (-infinity,infinity) | R=infinity |
One way to see this is to use the Root Test. Then the nth root of |xn/nn| is |x/n| and as n-->infinity, this-->0 for all x. | |||
n=0infinityxn/n2 | cn=1/n2 | [-1,1] | R=1 |
The Ratio Test results in considering |x|[n2/(n+1)2] and as n-->infinity, this-->|x|. So we get convergence for |x|<1. If x=+/-1, the series of absolute values is a p-series with p=2>1, so the series converges at both endpoints. |||
n=0infinityxn/n | cn=1/n | [-1,1) | R=1 |
Use the Ratio Test again. The resulting ratio is |x|[n/(n+1)]. As n-->infinity, this-->|x|. So we get convergence for |x|<1. If x=1, divergence (Harmonic Series). If x=-1, convergence (Alternating Harmonic Series). |||
n=1infinityx2n/n | c2n=1/n and codd=0 | (-1,1) | R=1 |
The Ratio Test. Now when x=+/-1, both endpoints give the Harmonic Series, so the power series diverges at both endpoints. | |||
n=0infinitynnxn | cn=nn | Only {0} | R=0 |
Use the Root Test. |
It turns out that the qualitative behaviors displayed above are all that is possible for power series. That is, they must converge inside an interval, which can be as large as all of R or as small as just the center of the power series. They may or may not, depending on the specific coefficients, converge or diverge at the end points of the interval. But no other types of behavior are possible.
Powerful general facts
These results say that power series, inside their intervals of convergence, can be treated, for the purposes of calculus, just like "big" polynomials: differentiation, integration, etc.
"Reverse engineering" the result of the previous lecture
Last time we needed to compute a "payoff" or expectation or fair entry
fee for a gambling game: n=1infinityn/2n. How can we
think about this? I'd like to show you how I "invented" what I did last time.
Here is what I would do. I'd say to myself that this is a series which results from substituting x=1/2 into the power series n=1infinitynxn=x+2x2+3x3+4x4.... but this series is the result of multiplying the series 1+2x+3x2+4x3+... by x. So what is the sum of 1+2x+3x2+4x3+...=n=1infinitynxn-1? Well, nxn-1 is the derivative of xn, so the series 1+2x+3x2+4x3+... is the derivative of x+x2+x3+x4+... which is a geometric series with first term=x and ratio=x, so that its sum is x/(1-x).
Now go backwards! The sum of the series n=1infinityn/2n is the result of differentiating x/(1-x), then multiplying the result by x, and then substituting in x=1/2. And that's what we did.
And now another one ...
A slightly more involved process would get the sum of n=1infinityn2/2n. Here
look at the power series n=1infinityn2xn. Pull
out an x to get n=1infinityn2xn-1
which is the derivative of n=1infinitynxn. But, hey, we
just got the sum of that series (look at the previous paragraph). So
we can get the sum of this series.
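Carrying out the differentiation described above gives closed forms for both sums: n=1infinitynxn=x/(1-x)2 and, repeating the process, n=1infinityn2xn=x(1+x)/(1-x)3, so at x=1/2 the two sums are 2 and 6. Here is a tiny Python check of both claims (my own sketch, not something done in class).

# Compare partial sums with the closed forms at x = 1/2.
x = 0.5
s1 = sum(n*x**n for n in range(1, 60))        # partial sum of the series with terms n x^n
s2 = sum(n*n*x**n for n in range(1, 60))      # partial sum of the series with terms n^2 x^n
print(s1, "should be close to", x/(1 - x)**2)          # both are (very nearly) 2
print(s2, "should be close to", x*(1 + x)/(1 - x)**3)  # both are (very nearly) 6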
Connecting the function and the coefficients
If f(x)=n=0infinitycnxn=c0+c1x1+c2x2+c3x3+c4x4+... then:
Set x=0 and get f(0)=c0.
Differentiate the previous series. The result is f´(x)=n=0infinityncnxn-1=0+c1+2c2x+3c3x2+4c4x3+... then:
Set x=0 and get f´(0)=c1.
Differentiate the previous series. The result is f´´(x)=n=0infinityn(n-1)cnxn-2=0+0+2c2+6c3x+12c4x2+... then:
Set x=0 and get f´´(0)=2c2 so that c2=f´´(0)/2.
Differentiate the previous series. The result is f(3)(x)=n=0infinityn(n-1)(n-2)cnxn-3=0+0+0+6c3+24c4x+... then:
Set x=0 and get f(3)(0)=6c3 so that
c3=f(3)(0)/6.
By now most people in the class recognized the pattern:
cn=f(n)(0)/n!
Please note that most people like this formula and
therefore accept that 0! should be 1, and that f(0), the
zeroth derivative of f, should be the original function. Then the
formula, as stated, is certainly correct for n=0.
This says several things. First, if a function has a power
series expansion, then it has exactly one power series expansion,
because the coefficients are given by that formula. This means that
any (valid!) way we get the power series expansion, the result must be
the correct answer. Any trick, any contrivance is good.
How useful is this formula? For certain functions, it can be very
useful. But, in general, if you give me a "random" function (say, a
quotient or a composition) then computing high derivatives is tedious
and difficult because the expressions for the derivatives begin to
expand more and more ("expression swell"). So maybe for such functions
the formula just given is not so useful.
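For functions a computer algebra system can differentiate, though, the formula can be applied directly. Here is a sketch (assuming the sympy library is available; the example function is my own arbitrary choice). As a bonus, the coefficients of this particular function turn out to be the Fibonacci numbers, which come up elsewhere in this diary.

import sympy as sp

x = sp.symbols('x')
f = 1/(1 - x - x**2)            # an arbitrary example function

# c_n = f^(n)(0)/n!  --  the formula just derived
coeffs = [sp.diff(f, x, n).subs(x, 0)/sp.factorial(n) for n in range(8)]
print(coeffs)                   # 1, 1, 2, 3, 5, 8, 13, 21: Fibonacci numbers
print(sp.series(f, x, 0, 8))    # sympy's own expansion, for comparison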
Some examples
We do know one nice formula for a series and the sum of a series. That's:
a/(1-r)=a+ar+ar2+ar3+..., the geometric series.
If we can "rearrange" a function so that it fits the geometric series
template, then maybe we can get a geometric series. Here are some examples.
1. f(x)=3/(x+5). Actually, 3/(x+5)={3/5}/(1+{x/5}) (dividing the top
and bottom by 5) and this is the same as {3/5}/(1-[-{x/5}]). Now I
hope you can see the a and the r: a={3/5} and r=-{x/5}. So
f(x)={3/5}-{3/5}{x/5}+{3/5}{x/5}2-{3/5}{x/5}3+...=
n=0infinity(-1)n(3/5n+1)xn.
2. (problem #8 in section 11.9) f(x)=x/(4x+1). Only a little bit of
"rearrangement" is needed: x/(4x+1)=x/(1-{-4x}) so a=x and r=-4x.
f(x)=x-4x2+42x3-43x4+...=n=1infinity(-1)n+14n-1xn.
3. (problem #9 in section 11.9) f(x)=x/(9+x2). Here look at
x/(9+x2)=(x/9)/(1+{x2/9})=(x/9)/(1-[-{x2/9}])
so that a=x/9 and r=-{x2/9}.
f(x)=x/9-x3/92+x5/93-x7/94+...=n=0infinity(-1)nx2n+1/9n+1.
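Here is a quick numerical check of all three expansions (my own sketch). Each partial sum is compared with the function itself at x=0.2, a point inside all three intervals of convergence (|x|<5, |x|<1/4, and |x|<3 respectively).

# Compare partial sums of each geometric-series expansion with the function itself.
def partial(term, x, N=60):
    return sum(term(n, x) for n in range(N))

x = 0.2
print(partial(lambda n, x: (-1)**n * 3/5**(n + 1) * x**n, x),     3/(x + 5))
print(partial(lambda n, x: (-1)**n * 4**n * x**(n + 1), x),       x/(4*x + 1))
print(partial(lambda n, x: (-1)**n * x**(2*n + 1)/9**(n + 1), x), x/(9 + x**2))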
A real example
I began a discussion of the Fibonacci
numbers, which occur almost everywhere.
Wednesday, March 28 | (Lecture #18) |
---|
Computation
How to actually compute values of functions is a serious question, and
ideas concerning computation are the object of much pure mathematics
and most of applied mathematics. It's generally agreed that
polynomials are computable. Evaluating a polynomial involves
multiplication and addition. Just to be clear, a polynomial is a
function defined by a formula such as
22.45-9x2+101x5. It is a sum of monomials ("x to
a positive integer power"), each possibly multiplied by some
coefficient. The degree of a polynomial is the highest integer
exponent where the coefficient is not zero. So the degree of the
polynomial just written is 5.
A polynomial solution to an initial value problem
Since polynomials are easy to compute, we should try to use them in
calculus as much as possible. In calculus we integrate and
differentiate. Certainly we should try to solve Initial Value Problems
(IVP's) for Ordinary Differential Equations (ODE's) with Initial
Conditions (IC's). Here is a simple example:
ODE: dy/dx=y
IC: y(0)=1
Let's suppose that
y=A+Bx+Cx2+Dx3+Ex4+Fx5+... is a
solution. What can be said about the coefficients A, B, C, D, E, F,
...?
So the IC, y(0)=1 says that when we insert x=0, the formula should
give us the value 1. But all of the terms with x's in them are 0, so
that 1=A.
What does the differential equation tell us? The formula for y tells
us that y´=B+2Cx+3Dx2+4Ex3+5Fx4+... so we
can compare the two "presentations". Since two polynomials will be
equal exactly when their coefficients agree, we learn that:
A=B (the constant term), which (using the previous value of A) tells
us that 1=B.
B=2C (the coefficient of the linear or x term), which (using the
previous value of B) tells us that 1/2=C.
C=3D (the coefficient of the quadratic or x2 term), which
(using the previous value of C) tells us that 1/6=D.
D=4E (the coefficient of the cubic or x3 term), which
(using the previous value of D) tells us that 1/24=E.
I'll bet that the alert student could answer this question:
ETC.
That is, I hope you (by now!) recognize the pattern of the
coefficients, so that
y=1+1x+(1/2)x2+(1/6)x3+(1/24)x4+(1/120)x5+...
Right now I will define 0! to be 1, so that my notation will be
easier to use. With the definition (or understanding!) y can be
written as n=0infinity(1/n!)xn.
We learned a while ago that solutions for Initial
Value Problems should be unique. So we see clearly that
ex=n=0infinity(1/n!)xn.
This is certainly not clear. I sort of weaseled (?) a reason why the
infinite sum should be considered a solution of the IVP. So maybe the
infinite sum "is" such a solution. But there is no reason to suspect
that it has to represent ex. In fact, everything I've done
is actually correct, and the implied conclusions (that the series
converges, and that the sum of the infinite series is ex,
and that computing partial sums can be efficiently done and provides a
great way to compute values of the exponential function) are all
totally correct.
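Here is the sort of computation that makes the conclusion believable (an illustrative Python sketch of my own): partial sums of the series next to the value of ex.

from math import exp, factorial

# Partial sums of the series with terms x^n/n!, compared with e^x.
x = 3.45
for N in (5, 10, 15, 20):
    s = sum(x**n/factorial(n) for n in range(N + 1))
    print("N =", N, "  partial sum =", s, "  exp(x) =", exp(x))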
Definition of power series
A power series centered at a is a series of the form
n=0infinitycn(x-a)n.
Here the numbers cn are the coefficients of the series, and
the whole thing looks like some sort of infinitely long polynomial.
An example
Here is a power series centered at a=0:
n=0infinityxn/(3n+1)
For which x's does this series converge? Well, we just return to the
thinking mode of the previous lecture, with
an=xn/(3n+1), and we try to think of a strategy
to investigate convergence. The prominence of xn in the
formula for an makes me want to use either the Ratio Test
or the Root Test. I always feel more at home (?) with the Ratio Test
(personal preference) so I'll try that.
|an+1|/|an| = [|x|n+1/(3(n+1)+1)] / [|x|n/(3n+1)] = |x|n+1(3n+1)/[(3n+4)|x|n] = |x|·(3n+1)/(3n+4). Now we need to have n-->infinity. The result is |x|, which is what we called L in the previous lecture. The Ratio Test gives this information: the series converges (absolutely) when |x|<1 and diverges when |x|>1.
What happens if |x|=1? We need to use other methods. There are two x's which must be considered.
To the right is a picture (yes, I work diligently to "translate" almost any information into pictures, so just ignore if you are not a picture person!) of the real line. I have tried to indicate how the line is divided into two collections of points. The red points (with D) are where this series diverges, and the green points (with C) are where this series converges. The behavior at the boundary of the intervals is sort of difficult to draw (!) but I hope the use of ] and [ and ( and ) help a bit.
Another example ...
Here is a power series centered at a=4:
n=0infinity(x-4)n/(3n+1)
For which x's will this series converge?
My hope was that students
would make their own mental connections between the work needed to
detect where this series converges and the work we just did. For
example:
The chunk of text on the left should change to the
chunk of text on the right.
... Well, we just return to the
thinking mode of the previous lecture, with
an=xn/(3n+1), and we try to think of a strategy
to investigate convergence. The prominence of xn in the
formula for an makes me want to use either the Ratio Test
or the Root Test. I always feel more at home (?) with the Ratio Test
(personal preference) so I'll try that. | |an+1|/|an| = [|x|n+1/(3(n+1)+1)] / [|x|n/(3n+1)] = |x|·(3n+1)/(3n+4). Now we need to have n-->infinity. The result is |x| ...
... Well, we just return to the
thinking mode of the previous lecture, with
an=(x-4)n/(3n+1), and we try to think of a strategy
to investigate convergence. The prominence of (x-4)n in the
formula for an makes me want to use either the Ratio Test
or the Root Test. I always feel more at home (?) with the Ratio Test
(personal preference) so I'll try that. | |an+1|/|an| = [|x-4|n+1/(3(n+1)+1)] / [|x-4|n/(3n+1)] = |x-4|·(3n+1)/(3n+4). Now we need to have n-->infinity. The result is |x-4| ... |
The logic should go totally in parallel, with x changing to x-4. Another way of seeing what's going on is to consider the series n=0infinity(x-4)n/(3n+1) and to ask, as I tried to do, what's the simplest (?) value of x to worry about? I claimed that this value of x is 4 because x=4 makes all of the (x-4)some power equal to 0. So I hope these considerations make the following conclusion believable.
The series converges if |x-4|<1, which is the same as -1<x-4<1, which is the same as 3<x<5. The series diverges if |x-4|>1, which is the same as x<3 or x>5. The series diverges for x=5 and converges for x=3.
And maybe another ...
How about n=0infinity(x+20)n/(3n+1)?
I hoped that students would "see" the answer, which is shown
graphically to the left.
With more definiteness, I declare that this
series converges if |x+20|<1, which is the same as
-1<x+20<1, which is the same as -21<x<-19. The series diverges
if |x+20|>1, which is the same as x<-21 or x>-19. The series
diverges for x=-19 and converges for x=-21.
Actually, the coefficients know it all
The information about how a series n=0infinitycn(x-a)n
converges is really contained in its collection of coefficients, the
cn's. The "center" x=a just moves it around. Therefore here
(and in almost all of the applications I know!) I will almost always
consider series whose center is 0, n=0infinitycnxn,
and not even worry about the x=a case. If we need to be concerned with
a=500 or a=-sqrt(2), well, then, I will do the computations around x=0
and then just translate the results to
make things work.
The example is (more or less!) generally correct
I'll try to illustrate with some general reasoning here why people
like power series. I'll give an argument to show that convergence
questions follow some straightforward logic. Suppose that I have
a power series n=0infinitycnxn
which converges at x1, shown on the real line to the
right. What can I say about convergence at x2, also shown
to the right. All that I know is that x2 is closer to the
origin than x1. And I do not tell you anything more about
the coefficients of the series. What follows is a very important and
also slightly tricky bit of reasoning.
I know n=0infinitycnx1n converges. Then (look at the first entry in the left-hand column of the table which began the previous lecture!) cnx1n-->0 because the individual terms in any series which converges must approach 0. But this means that there is some (maybe big!) number M so that |cnx1n|<M (if there were not such an M, some collection of the terms of the series would grow and grow, and they couldn't approach 0).
I want to know if the series converges at x2. I'll consider
absolute convergence. Maybe that's a stricter condition, but it
is easier to work with. Look:
n=0infinity|cn|·|x2|n=n=0infinity|cn|·|x1|n(|x2|/|x1|)n<n=0infinityM(|x2|/|x1|)n.
The equality occurs because
|x2|=|x1|·(|x2|/|x1|),
and the inequality occurs because we're overestimating
|cn|·|x1|n by M on each term. Please realize that these sorts of estimates
are commonly used in real computations! But the series we ended up
with is a geometric series with ratio equal to
|x2|/|x1| and this
is less than 1 and therefore it must converge. The Comparison
Test then implies that the series we began with, the power series at
x2, must also converge. The only relationship we needed
between x1 and x2 to make this work is that the
distance to 0 of x2 is less than the distance to 0
of x1. The x2 can be on the other side of the
origin (because only absolute values appear in the discussion!).
If you'd like to put the idea in terms of disease or contagion or
something, then all of the x2's closer to the origin than
x1 "catch" convergence from x1. Maybe the
picture to the right sort of illustrates the situation. (The only
peculiarity is that the "opposite" point, -x1, may not have
convergence, but that's because a geometric series with ratio=1 does
not converge, so the argument presented here does not allow any conclusion.)
Back to the example
In the case of the example just presented, n=0infinityxn/(3n+1), the
interval of convergence is -1<=x<1, with 0, the center of the
power series, sitting in the middle of the interval.
What sort of function is the sum?
It turns out that the function which is sum of the power series is
very nice inside the interval of convergence: it is continuous (no
breaks or jumps) and it is differentiable (very smooth). But before this is discussed in detail, I wanted to show people ...
How to gamble
What I discussed is presented here.
Monday, March 26 | (Lecture #17) |
---|
Convergence of series | Facts to know |
---|---|
If an converges, then
an-->0. So if the limit is not 0 or if the limit does not exist, the series must diverge.
Series with positive terms
Alternating series
Absolute convergence implies convergence.
Ratio Test
Root Test |
Sequence facts: rn-->0 if |r|<1. a1/n-->1 if a>0. n1/n-->1. (1+{1/n})n-->e.
Series facts: Geometric series. |
Remarks
What was done
I believe we discussed
11.6: 6 (look also at 5), 8, 12, 23, 27 (look also at 25) and
11.7: 1, 6, 13, 15, ...
We tried to determine if these series converge absolutely, converge
conditionally, or diverge.
The QotD was 11.7: 17: does 1infinity(-1)n21/n converge or diverge?
A possible vocabulary word: retrograde, meaning "move backward in an orbit" (under certain circumstances, planets being observed seem to move backwards in their orbits) and then also "move in a direction contrary to the usual one".
Wednesday, March 21 | (Lecture #16) |
---|
Comparison
We've been dealing with results which really depend on the terms of
the series being positive. For example, the Comparison
Test. Here's a version:
Hypothesis Suppose 0<=aj<=bj.
Conclusion If j=1infinitybj converges, then j=1infinityaj converges.
If j=1infinityaj diverges, then j=1infinitybj diverges.
I tried to be more poetic (?) in class. Somehow I said something about child and parent, but now when I think about the analogy, it seems rather silly. Perhaps this is true about many of the things I say in class.
Another comparison result
Please look in the text for some good examples using the Limit
Comparison Test which can be useful. There are numerous textbook
questions which you can try. Here is a statement:
Hypothesis Suppose aj and bj are both always positive, and that limj-->infinityaj/bj exists and is positive.
Conclusion Either both j=1infinityaj and j=1infinitybj converge or
both j=1infinityaj and j=1infinitybj diverge.
The limit hypothesis states that, more or less, when j is very large, the terms aj and bj have the same size. To me this makes the conclusion believable, since convergence/divergence is really about whether the infinite tails are finite or not, and that depends on the sizes of the terms when j is very large. The Limit Comparison Test serves as motivation for some further results we will see.
Series with terms of varying signs
So now we will complicate things a bit, and look at series whose signs
vary. Let me start really easily but things will get more intricate
rapidly.
(Varying
stop signs)
1-1+1-1+1-...
This is just about the simplest example I could show. We got a formula
for the jth term. We need the sign to alternate, and that
will be given by (-1)something here. The sign will
alternate if the "something here" is either j or j+1. The first
term will be +1 and the second term will be -1 if we use j+1. So an
explicit formula is aj=(-1)j+1. Next I asked
about convergences of the series
j=1infinity(-1)j+1. For this
we must consider the sequence of partial sums.
s1=1; s2=1-1=0;
s3=1-1+1=1; s4=0, etc.
It isn't too difficult to see that sn=0 when n is even and
sn=1 when n is odd. The partial sums flip back and
forth. This is exactly the kind of behavior we did not get when
we considered series with all positive terms. There the partial sums
just traveled "up", to the right. Well, this particular infinite
series does not converge, since the partial sums do not
approach a unique limit.
j=1infinity(-1)j+1 diverges.
2-(1+{1/2})+(1+{1/3})-(1+{1/4})+(1+{1/5})-...
This is a more complicated series. I suggested that we try to "guess"
a formula by first getting a formula for the sign, and then a formula
for the absolute value. In this case, the sign is surely given by
(-1)j+1, just as before. The magnitude (these are
1-dimensional vectors, after all!) or absolute value is 1+{1/j} (some
students suggested {j+1}/j, which is also a good formula). So putting
these together, aj=(-1)j+1(1+{1/j}). And now we
looked at the {con|di}vergence of
j=1infinity(-1)j+1(1+{1/j}).
The partial sums are more complicated and more interesting.
s1=2;
s2=2-(1+{1/2})=1/2=.5;
s3=2-(1+{1/2})+(1+{1/3})=11/6=1.8333;
s4=2-(1+{1/2})+(1+{1/3})-(1+{1/4})=7/12=.58333
This is where I stopped computing in class, but, golly, I have a friend who
could compute s17 either exactly ({4218925/2450448}) or
approximately (1.72169). This is nearly silly. Richard
Hamming, one of the twentieth century's greatest applied
mathematicians, remarked that "the purpose of computing is insight, not numbers."
From s1 to s2, we move left since the
second term in the series is negative. From s2 to
s3 we move right, because the third term in the series is
positive. But notice that we don't get back up to s1, because the
jump right has magnitude 1+{1/3} and this is less than 1+{1/2}, the
magnitude of the previous jump left.
I hope you are willing to believe that what's described persists in general.
Does this series converge? Students in both classes had interestingly
varied opinions about this, and I will admit that I tried to encourage
discussion. But the collection of partial sums does not
approach a unique limit. Why? Well, the distance between any odd
partial sum and any even partial sum will be at least 1, since the
magnitude of the jth term is 1+{1/j}, which is certainly
>1. The partial sums can't get close.
j=1infinity(-1)j+1(1+{1/j}) diverges.
1-1/2+1/3-1/4+1/5-...
Here aj has sign (-1)j+1 again, and the absolute
value or magnitude is 1/j. Does j=1infinity(-1)j+1(1/j) converge?
The partial sums are more complicated and more interesting.
s1=1;
s2=1-(1/2)=1/2=.5;
s3=1-(1/2)+(1/3)=5/6=.8333;
s4=1-(1/2)+(1/3)-(1/4)=7/12=.58333
Here's a picture of these partial sums. Things are a bit more crowded
(that's good for convergence!) than in the previous picture.
The previous three qualitative properties still hold. Since the signs
alternate, the partial sums wiggle left and right. Since the absolute
values decrease, the odd sums decrease and the even sums increase, and all of
the even sums are less than all of the odd sums. But now the distance
between the odd and even sums-->0 since the magnitude of the terms is
1/j, and this-->0. So here is a rather subtle phenomenon:
j=1infinity(-1)j+1(1/j) converges.
The theorem on alternating series (Alternating Series Test)
The following is the major result of section 11.5 of the text.
Hypotheses Suppose that the terms of a series alternate in sign (+ to - to + etc.), the absolute value or magnitude of the terms decreases, and the limit of the absolute values of these terms is 0.
Conclusion The series converges.
This is a very nice result, and is useful for some special series. The most famous example is the alternating harmonic series, j=1infinity(-1)j+1(1/j), which we just saw. There are other examples in section 11.5.
Finally, here is some "experimental evidence" which might help you believe that the alternating harmonic series converges.
Some partial sums of the alternating harmonic series |
---|
s10=0.6456349206 s100=0.6881721793 s1,000=0.6926474305 s10,000=0.6930971829 s1,000,000=0.6931466807 |
In fact, as several people guessed, the sum of the alternating harmonic series is ln(2). But the convergence is actually incredibly slow. The one millionth partial sum (which took almost 8 seconds for a moderately fast PC to compute) only has 5 accurate decimal digits. This is not the best and fastest way to compute things!
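The table above is easy to reproduce by brute force (a Python sketch of my own; as promised, it is not a fast way to compute ln(2)).

from math import log

# Partial sums of the alternating harmonic series, compared with ln(2).
s, n = 0.0, 0
checkpoints = {10, 100, 1000, 10000, 1000000}
while n < 1000000:
    n += 1
    s += (-1)**(n + 1)/n
    if n in checkpoints:
        print("n =", n, "  s_n =", s, "  ln(2) =", log(2))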
But what if ...
The sign distribution of terms in an infinite series could be more
complicated. I suggested that we consider something like
j=1infinity[7cos(36j7-2j2)+2sin(55j+88)]/2j. Here the sign distribution of the top of the fraction defining aj is quite complicated. The first 20 signs are here:
Please notice that with a few modifications, the corresponding question can be answered very easily. Look at:
j=1infinity[7|cos(36j7-2j2)|+2|sin(55j+88)|]/2j. Absolute value signs have been put around the cosine and sine functions. Now the series has all non-negative terms and we can use our comparison ideas. How big is the top? Since the values of both sine and cosine are in [-1,1], the top can't be any bigger than 9. The bottom is 2j. Therefore each term of this series is at most 9/2j. But this larger series is a geometric series with ratio 1/2<1 and therefore it must converge.
Proof via manipulative
There was a spectacular demonstration in class! It was inspired by
thinking about old-fashioned folding carpenter's rulers. If we have an
infinite series j=1infinityaj, we could
consider the associated series j=1infinity|aj|,
where we have stripped off the signs of the terms, and are just
adding up the magnitudes. This is sort of like an unfolded carpenter's
rule, stretched out as long as possible. It may happen that the series
of absolute values, a series of positive terms, may converge. So when
"the ruler" is stretched out as long as possible, it has finite
length. Well, if we then fold up the ruler, so some segments point
left (negative) and some point right (positive), then the resulting
net length is also finite -- it can be no longer than the fully stretched-out ruler.
The picture to the left is an attempt to show this statement and to
duplicate the physical effect of what I displayed in class. The top
has the segments stretched out as far as possible. The next picture
shows some of the segments rotating, aimed backwards (negatively). The
last picture shows in red the segments which are negative and in green the
other segments, oriented positively. I hope this makes sense, and
justifies the following:
If j=1infinity|aj|
converges, then j=1infinityaj must converge
also (and, actually, |j=1infinityaj|<=j=1infinity|aj|).
Proof via algebra
There is a verification of these statements in the textbook, using
algebra, on p.741 in section 11.6, if you would like to read it. Sigh.
And conversely?
Notice that the converse of
the assertion about absolute values and series may not be
correct. That is, a series may converge, and the series of absolute
values of its terms may not. The simplest example, already verified,
is the alternating harmonic series, which converges, while the harmonic
series (the associated series of absolute values) diverges.
Vocabulary
A series j=1infinityaj which has j=1infinity|aj|
converging is called absolutely convergent. Then the correct
implication above is:
If a series is absolutely convergent, then it is a convergent series.
A series for which j=1infinityaj converges but j=1infinity|aj| diverges is called conditionally convergent. The alternating harmonic series is conditionally convergent.
Another example
Consider j=1infinity{sin(5j+8)}37/j5. I
don't know very much about {sin(5j+8)}37 except that, for
any j, this is a number inside the interval [-1,1]. Therefore j=1infinity|{sin(5j+8)}37/j5| has terms which are no larger than those of j=1infinity1/j5 (a p-series
with p=5>1, so convergent). The Comparison Test asserts that j=1infinity|{sin(5j+8)}37/j5| converges, and therefore j=1infinity{sin(5j+8)}37/j5
itself must be a convergent series.
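Numerically the comparison looks like this (my own sketch): the partial sums of the absolute values stay below the corresponding partial sums of the p-series with p=5.

from math import sin

# Term-by-term comparison with the p-series, p = 5.
N = 2000
weird = sum(abs(sin(5*j + 8)**37)/j**5 for j in range(1, N + 1))
pser  = sum(1/j**5 for j in range(1, N + 1))
print(weird, "<=", pser)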
Given a series, take absolute values
The result just stated is a very powerful and easily used method. If
you "give" me a series with random signs, the first thing I will do is
strip off or discard the signs and try to decide if the series of
positive (non-negative, really) terms converges.
Many of the classical "tests" are about how the series resembles the geometric series. So we think about how aj might look like arj for j large, and make conclusions based on this. Let's look at an example.
An example from the text
Here is problem #3 from section 11.6. It asks if this series is absolutely convergent, conditionally convergent, or divergent:
n=0infinity(-10)n/n!.
The text uses n as the index of summation here, which shouldn't be too
distressing. The text starts the sum from n=0 rather than n=1, but,
again, this shouldn't be too bad, since {con|di}vergence depends on
the infinite tail, and the initial terms don't affect that at
all. More subtly, if you inspect the formula for the term in the
series, the text uses 0!, which I don't think has been previously
mentioned. 0! means 1, and the reason for that convention will be
given later.
The answer It turns out that this series does converge
(absolutely!) and, actually, its sum is e-10. This is not
obvious, and, again, the reasoning will be given later. Forget this
premature statement of the conclusion and try to look at the sum,
first using a "primitive" approach.
Let's strip off the signs. Then
|an|=(10)n/n!. For example,
|a7|=107/7!=10,000,000/5040=1984
(approximately). This is large. But what if we consider how
|a20| and |a21| are related? Then we need to
investigate how (10)20/20! changes into
(10)21/21!. The top multiplies by 10, of course, but the
bottom changes by a factor of 21. The change from |a20| to
|a21| is a multiplication by 10/21. And further thought
should show that the next change involves multiplication by 10/22, and
then 10/23, etc. All of the changes after n=20 use multiplication by
numbers less than 1/2. Certainly the series converges (compare with a
geometric series whose ratio is 1/2<1). There is a way to
"mechanize" these comparisions and it is called the Ratio Test.
The Ratio Test
Suppose we are given an infinite series with nth term
an and suppose that
limn-->infinity|an+1/an| exists. If
this limit is less than 1, then the original series converges
absolutely and therefore must converge. If the limit is greater than
1, the original series diverges. If the limit happens to be equal to
1, the Ratio Test does not supply any information about convergence or
divergence.
Return to the example from the text
Let's use the Ratio Test on the series n=0infinity(-10)n/n!. So we
need to consider a limit. Here it is:
limn-->infinity |(-10)n+1/(n+1)!| / |(-10)n/n!| = limn-->infinity 10n+1n!/[(n+1)!10n] = limn-->infinity 10/(n+1) = 0. Since this limit exists and is 0, and 0<1, the original series converges absolutely, and must therefore converge.
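A small Python sketch (my own) shows both things at once: the ratios 10/(n+1) heading to 0, and the partial sums settling down to e-10.

from math import exp, factorial

a = lambda n: (-10)**n/factorial(n)     # the nth term of the series

for n in (5, 20, 40):
    print("n =", n, "  |a(n+1)/a(n)| =", abs(a(n + 1)/a(n)))   # this is 10/(n+1)

for N in (20, 40, 60):
    print("partial sum through N =", N, ":", sum(a(n) for n in range(N + 1)), "  e^(-10) =", exp(-10))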
QotD
Simplify:
[23(n+1)/((n+1)!)3] divided by [23n/(n!)3]. I noted that all of the parentheses were important -- they could not be ignored in the simplification process. The answer should be 8/(n+1)3.
Monday, March 19 | (Lecture #15) |
---|
Series with positive terms
Today we will consider series whose terms are positive or, at worst,
non-negative. (In the next lecture we'll discuss what happens if we
allow different signs, but for now, only +'s.) What can we say in
general about series with positive (or even just non-negative) terms?
Well, the sequence of partial sums is increasing, since
we're just adding more and more non-negative terms. What can happen?
One thing is that the sequence of partial sums can tend to infinity
(hey, this is what happens to the silly infinite series j=1infinity1):
the sequence of partial sums is unbounded and the series
diverges. Another thing that can happen to a positive series is
that the sequence of partial sums can tend to a limit (a non-negative
finite limit). This happens, for example, with positive geometric
series with ratio less than 1. Then the sequence of partial
sums is bounded and the series converges. This is a
consequence of the fact that "Bounded monotonic sequences converge".
This theoretical alternative (either the partial sums are bounded and the series converges, or they are unbounded and the series diverges) is everything: nothing else can happen for a series with non-negative terms.
Comparing series with non-negative terms
Suppose we have two infinite series, j=1infinityaj and j=1infinitybj. We also will
assume that both of the series have non-negative terms. This means
that all of the aj's and all of the bj's are
>=0.
Hypothesis Suppose we know that the terms of one series are always smaller than the terms of the other series. So, specifically, suppose we know that for all j, aj<=bj.
Conclusion Suppose sn=j=1naj is the nth partial sum of the series with smaller terms, and tn=j=1nbj is the nth partial sum of the series with larger terms. Then sn<=tn.
Comparing the series and inheriting {con|di}vergence
Easy examples
Making the bottoms bigger in fractions shrinks the value. So the
fraction 1/(56+7j+2j) is always less than
1/2j. I know that j=1infinity1/2j converges,
and therefore the series
j=1infinity1/(56+7j+2j) must also converge.
Making the tops larger makes the value larger. Since I know that j=1infinity1/j, the harmonic series, diverges, I must know that j=1infinity[3+sin(j)]/j also diverges. This is because 3+sin(j)>=2 always.
The prototypes
Bounds on tails with integrals
The examples you need to know are these:
Geometric series a+ar+ar2+ar3+ar4+... converges if |r|<1. When |r|>=1 (and a is not zero!) then the series diverges.
p-series 1+1/2p+1/3p+1/4p+1/5p+1/6p+... converges if p>1 and diverges if p<=1.
You should be able to do most of the problems in section 11.4 with
these in mind. You should practice. I want to
discuss some useful but more complicated applications.
Look at j=1infinity1/(arctan(j)+j3). This is a
horrible series. I don't know much about arctan(j) when j is a
positive integer except that the values are between 0 and Pi/2.
Compare this series with j=1infinity1/j3 whose
individual terms are each larger than the original series. The
integral test tells me that this series converges since 1infinity(1/x3)dx is
finite. (Its value is 1/2: please check this!).
So I know that j=1infinity1/(arctan(j)+j3)
converges. Well, that's fine but suppose I really need to know
what the sum is? For example, suppose I want to know the sum to 3
decimal places (error less than 0.001). Here is a strategy. I write
the infinite sum as a sum of two pieces, a partial sum and an infinite
tail.
j=1infinity1/(arctan(j)+j3)=j=1n1/(arctan(j)+j3)+j=n+1infinity1/(arctan(j)+j3)=sn+the nth "infinite tail"
The infinite tail is j=n+1infinity1/(arctan(j)+j3). But
each piece of that sum is less than j=n+1infinity1/j3. Look
again at the picture here. So we know:
0<=j=n+1infinity1/(arctan(j)+j3)<j=n+1infinity1/j3<ninfinity(1/x3)dx.
Here's a computation of the improper integral:
ninfinity(1/x3)dx= limA-->infinitynA(1/x3)dx= limA-->infinity-1/(2x2)]nA= limA-->infinity-1/(2A2)+1/(2n2)= 1/(2n2).
So the error between the true sum and the nth partial sum
is less than 1/(2n2). How can I get this less than
0.001=1/1,000? Let's take n to be, say, 25. Then
2(25)2=1,250. That works.
Therefore the sum of the series j=1infinity1/(arctan(j)+j3) is the same as j=1251/(arctan(j)+j3) to 3 decimal digit accuracy. The following Maple command and its answer took less than a hundredth of a second:
add(evalf(1/(arctan(j)+j^3)),j=1..25); 0.7440955743I think the sum is approximately 0.744.
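The same computation is just as short in Python (my own sketch, if you don't have Maple handy): the 25th partial sum together with the integral overestimate 1/(2n2) of the tail.

from math import atan

# s_25 plus the bound: tail < integral from n to infinity of 1/x^3 dx = 1/(2n^2).
n = 25
s_n = sum(1/(atan(j) + j**3) for j in range(1, n + 1))
print("s_25 =", s_n, "  tail is at most", 1/(2*n**2))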
Bounds on tails with geometric series
The partial sums here are 1, then 1+(2.5/1!), then 1+(2.5/1!)+((2.5)2/2!), then 1+(2.5/1!)+((2.5)2/2!)+((2.5)3/3!), then 1+(2.5/1!)+((2.5)2/2!)+((2.5)3/3!)+((2.5)4/4!), and so on. I hope you get the idea. The infinite tail corresponding to this series is j=n+1infinity(2.5)j/j!.
The sum of the terms up to and including the term with
(2.5)5/5! is 11.6705. The sum of the terms up to and including the term with (2.5)10/10! is 12.1817. The sum of the terms up to and including the term with (2.5)15/15! is 12.1824. The sum of the terms up to and including the term with (2.5)20/20! is 12.1824. |
I bet this series converges, and that its sum is 12.18 (two decimal place accuracy). I'll verify this by looking at the tail j=11infinityaj. Here aj=(2.5)j/j! The first term is a11 which is 0.000597. I computed this -- the value is not "obvious" from the formula. This is not too interesting, because I need information about all of the terms, and there are infinitely many. Let me investigate how the terms are related to one another in the following way:
aj+1/aj = [(2.5)j+1/(j+1)!] / [(2.5)j/j!] = (2.5)j+1j!/[(2.5)j(j+1)!] = 2.5·j!/[(j+1)j!] = 2.5/(j+1). There's a whole bunch of algebraic things happening in this collection of equations. First, when we write the formulas for the terms, we get a compound fraction (a fraction of fractions) which then is converted (carefully!) into a simple fraction (just one division). Then the powers mostly drop out. Finally, we need the definition of factorial in order to see how (j+1)! relates to j!.
If j is at least 11, then 2.5/(j+1) will be at most 2.5/12 and this is less than 0.209. Therefore we know that if j is at least 11, then aj+1/aj<0.209, so that aj+1<aj(0.209). We can replace each term by 0.209 multiplied by the term before it, and this is an overestimate.
Now what? Look at the tail more closely:
j=11infinityaj=a11+a12+a13+a14+a15+a16+a17+...<
a11+a11(0.209)+a12(0.209)+a13(0.209)+a14(0.209)+a15(0.209)+a16(.209)+...<
a11+a11(0.209)+a11(0.209)2+a12(0.209)2+a13(0.209)2+a14(0.209)2+a15(0.209)2+...<
a11+a11(0.209)+a11(0.209)2+a11(0.209)3+a12(0.209)3+a13(0.209)3+a14(0.209)3+...<
a11+a11(0.209)+a11(0.209)2+a11(0.209)3+a11(0.209)4+a12(0.209)4+a13(0.209)4+...<
a11+a11(0.209)+a11(0.209)2+a11(0.209)3+a11(0.209)4+a11(0.209)5+a12(0.209)5+...<
a11+a11(0.209)+a11(0.209)2+a11(0.209)3+a11(0.209)4+a11(0.209)5+a11(0.209)6+...<
ETC.
Here what I mean by "etc." is that we can keep pushing the subscripts
"down" until they reach 11, each time "paying" by multiplying by
another power of 0.209. And these are successive
overestimates. So we finally end up with a geometric series
whose first term is a11=0.000597, with ratio=0.209. This
series has sum equal to 0.000597/(1-0.209) which is about
0.00076. Hey! This number is less than .001, and I bet that the true
sum of the series therefore has leading decimal
expansion 12.181 or 12.182.
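Here is the whole estimate done by machine (a Python sketch of my own): the partial sum through j=10, the first tail term a11, and the geometric overestimate of the tail.

from math import exp, factorial

term = lambda j: 2.5**j/factorial(j)
s10  = sum(term(j) for j in range(0, 11))     # terms j = 0, 1, ..., 10
a11  = term(11)
tail_bound = a11/(1 - 0.209)                  # geometric series overestimate of the tail
print("s10 =", s10, "  tail is at most", tail_bound, "  e^2.5 =", exp(2.5))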
Another example
Here we go:
The partial sums here are 1, then 1+(9.8/1!), then 1+(9.8/1!)+((9.8)2/2!), then 1+(9.8/1!)+((9.8)2/2!)+((9.8)3/3!), then 1+(9.8/1!)+((9.8)2/2!)+((9.8)3/3!)+((9.8)4/4!), and so on. I hope you get the idea. The infinite tail corresponding to this series is j=n+1infinity(9.8)j/j!.
The sum of the terms up to and including the term with
(9.8)10/10! is 10965.326. The sum of the terms up to and including the term with (9.8)20/20! is 18011.195. The sum of the terms up to and including the term with (9.8)30/30! is 18033.744. The sum of the terms up to and including the term with (9.8)40/40! is 18033.744. |
I bet this series converges, and that its sum is 18033.74 (two decimal place accuracy). I'll verify this by looking at the tail j=31infinityaj. Here aj=(9.8)j/j! The first term is a31 which is 0.000650. I computed this -- the value is not "obvious" from the formula. This is not too interesting, because I need information about all of the terms, and there are infinitely many. Let me investigate how the terms are related to one another in the following way:
aj+1/aj = [(9.8)j+1/(j+1)!] / [(9.8)j/j!] = (9.8)j+1j!/[(9.8)j(j+1)!] = 9.8·j!/[(j+1)j!] = 9.8/(j+1). There's a whole bunch of algebraic things happening in this collection of equations. First, when we write the formulas for the terms, we get a compound fraction (a fraction of fractions) which then is converted (carefully!) into a simple fraction (just one division). Then the powers mostly drop out. Finally, we need the definition of factorial in order to see how (j+1)! relates to j!.
If j is at least 31, then 9.8/(j+1) will be at most 9.8/32=0.30625, which is certainly less than 0.307. Therefore we know that if j is at least 31, then aj+1/aj<0.307, so that aj+1<aj(0.307). We can replace each term by 0.307 multiplied by the term before it, and this is an overestimate.
Now what? Look at the tail more closely:
j=31infinityaj=a31+a32+a33+a34+a35+a36+a37+...<
a31+a31(0.307)+a32(0.307)+a33(0.307)+a34(0.307)+a35(0.307)+a36(0.307)+...<
a31+a31(0.307)+a31(0.307)2+a32(0.307)2+a33(0.307)2+a34(0.307)2+a35(0.307)2+...<
a31+a31(0.307)+a31(0.307)2+a31(0.307)3+a32(0.307)3+a33(0.307)3+a34(0.307)3+...<
a31+a31(0.307)+a31(0.307)2+a31(0.307)3+a31(0.307)4+a32(0.307)4+a33(0.307)4+...<
a31+a31(0.307)+a31(0.307)2+a31(0.307)3+a31(0.307)4+a31(0.307)5+a32(0.307)5+...<
a31+a31(0.307)+a31(0.307)2+a31(0.307)3+a31(0.307)4+a31(0.307)5+a31(0.307)6+...<
ETC.
Here what I mean by "etc." is that we can keep pushing the subscripts
"down" until they reach 31, each time "paying" by multiplying by
another power of 0.307. And these are successive
overestimates. So we finally end up with a geometric series
whose first term is a31=0.000650, with ratio=0.307. This
series has sum equal to 0.000650/(1-0.307) which is about
0.000938. Hey! This number is less than .001, and I bet that the true
sum of the series therefore has leading decimal
expansion 18033.744 or 18033.745.
Random (?) facts |
---|
e2.5 is approximately 12.18249396 e9.8 is approximately 18033.74493 |
Not at all random, and this will be explained later.
QotD
The series j=1infinity1/(j5+3j)
converges. I told students that j=1101/(j5+3j)=0.27951. Then
I asked that students find an error estimate. So some sort of
overestimate of j=11infinity1/(j5+3j)
is needed. Several strategies can be used to get valid answers. Here
are two strategies.
Compare to a p-series and then estimate with an integral
Certainly for any positive integer j,
1/(j5+3j)<1/(j5).
Therefore
j=11infinity1/(j5+3j)<j=11infinity1/(j5). This sum
is, in turn, less than the improper integral 10infinity(1/x5)dx. Computation
of the integral:
10infinity(1/x5)dx=limA-->infinity10A(1/x5)dx=limA-->infinity-1/(4x4)]10A=limA-->infinity-1/(4A4)+1/(4·104)=1/(4·104).
This is a fine answer just as it is!
Compare to a geometric series and then get the sum of the geometric series
Certainly for any positive integer j,
1/(j5+3j)<1/3j. Therefore j=11infinity1/(j5+3j)<j=11infinity1/3j. But this
infinite series is a geometric series whose first term is
1/311 and whose ratio is 1/3. The sum of this series is
{1/311}/(1-{1/3}). This is a fine answer just as it is!
Remarks about the QotD solutions
Wednesday, March 7 | (Lecture #14) |
---|
Again, a series is a formal sum, a1+a2+a3+... Your text writes this using sigmas, j=1infinityaj.
One comment Please realize that the j doesn't really matter. It is what is called a dummy variable. The j is a logical placeholder. The sum k=1infinityak represents the same thing, and the sum q=1infinityaq represents the same thing, and so, I suppose, the sum which follows represents the same thing: TOAD=1infinityaTOAD (why can't we use words like TOAD as index variables?).
Associated with each series are two different sequences. One sequence is the sequence of terms of the series: {aj}. Another sequence is the sequence of partial sums of the series: this sequence looks like this: a1, a1+a2, a1+a2+a3, a1+a2+a3+a4. So the 400th partial sum would be j=1400aj.
A specific example: the harmonic series
The harmonic series 1+1/2+1/3+1/4+1/5+...=j=1infinity1/j
has these two series associated with it:
The sequence of individual terms
The first five terms: 1, 1/2, 1/3, 1/4, 1/5,
.... I hope that the asymptotic behavior of this sequence whose
jth term is 1/j as j gets large is obvious: the sequence of
individual terms -->0.
The sequence of partial sums
The first five terms: 1, 1+1/2=3/2 (1.5), 1+1/2+1/3=11/6 (about
1.8333), 1+1/2+1/3+1/4=25/12 (about 2.0833), 1+1/2+1/3+1/4+1/5=137/60
(about 2.2833). The asymptotic behavior of this sequence is
certainly not obvious or clear or ... We used some quite tricky
reasoning last time to see that this sequence of partial sums is not
bounded. They get really really large. I'll try to show this again
later in today's lecture using a different approach which is maybe
more systematic.
A series converges if ...
A series converges if the sequence of partial sums converges.
The limit of the sequence of partial sums is frequently called the
sum of the series.
It can be difficult to decide if a series converges (heck, the harmonic series does not, even though the terms go to 0). Even if you know a series converges, it can be difficult to approximate the sum of the series. This is interesting, because most of the computations that are commonly done using calculators and computers (such as sin(.567) or e3.45) consist of numerical approximations to sums of infinite series. So what we are doing is interesting and useful.
The infinite tail
An infinite series j=1infinityaj can be thought
of as j=1naj+j=n+1infinityaj. So there is
the nth partial sum plus the "other" terms of the series. I
like to think of this maybe as some sort of animal. The partial sum is
the body, and the infinite tail is ... well, the tail. The question of
whether the series converges or not maybe is analogous to whether the
weight of the animal is finite (this is a good analogy only with
series whose terms are all positive -- we will deal later with series
whose terms change sign). The weight will be finite exactly when the
infinite tails-->0 as n-->infinity. In fact, the first "few" terms of
a series have nothing to do with convergence!
For example, we already know that the harmonic series 1+1/2+1/3+1/4+1/5+... does not converge. Then the series 56+37.8+409+1/4+1/5+1/6+... (all other terms are the terms in the harmonic series, unchanged) also does not converge. The first few terms have changed (1-->56, 1/2-->37.8, 1/3-->409), so the partial sums all shift by a fixed amount, but the fact that the partial sums eventually do not approach one fixed finite number is still correct.
In a few minutes (geometric series) we will see that the series 1/2+1/4+1/8+1/16+... (here the nth term is 1/2n) does converge, and its sum is 1. If I change the first term to 15 and the third term to 88, the series still will converge. Its sum will be 1+(15-1/2)+(88-1/8) (the changes are made to the old sum).
So really when we study convergence we are looking at infinite tails.
Geometric series
One kind of series has nice, simple formulas for partial sums and for
sums, and that's geometric series. A geometric series consists of some
sort of starting term, and each further term is formed by multiplying
the previous term by some constant factor, r, so that the series looks like a+ar+ar2+ar3+.... Then:
Formula for the nth partial sum of a geometric series |
---|
sn=(a-arn)/(1-r) (valid when r is not 1) |
This is a neat formula which is sometimes very useful. Please realize
that an explicit formula for the partial sum of a series is very, very
rare. It should not be expected.
We saw last time (look in section 11.1, page 707) that if |r|<1,
then rn-->0 as n-->infinity. Therefore:
Formula for the sum of a geometric series when |r|<1 |
---|
s=a/(1-r) |
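Both formulas are easy to check numerically (a Python sketch of my own; the values of a, r, and n are arbitrary, and sn is taken to mean the sum of the first n terms).

# Check the partial-sum formula and the |r|<1 sum formula for a geometric series.
a, r, n = 3.0, 0.6, 12
sn = sum(a*r**j for j in range(n))              # a + ar + ... + ar^(n-1), i.e. n terms
print(sn, "vs the formula", (a - a*r**n)/(1 - r))
print(sum(a*r**j for j in range(200)), "vs the formula", a/(1 - r))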
Use #1 of geometric series
Here is something which many people are supposed to see (!) before
college. The infinite repeating decimal 0.731731731...
represents a rational number (a quotient of integers). What rational
number does it represent?
Here the most interesting problem is recognizing the implied geometric
series. Decimal notation is very clever, and conceals some true
subtleties. So 0.731 itself means 731·(.001) which is
731/1000. What about 0.000731? This is 731·(.000001) which is
731·(.001)·(.001). That is,
10-3·10-3=10-6. Therefore
0.000731 is 731/(1000)2. And similarly
0.000000731 is 731/(1000)3. So we see (maybe not so
"clearly"!) that:
0.731731731...=[731/1000]+[731/(1000)2]+731/(1000)3]+....
We therefore recognize that the repeating decimal indicates an
infinite series whose first term, a=731/1000, and whose constant ratio
between successive terms is r=1/1000.
The sum is then
a/(1-r)=[731/1000]/(1-[1/1000])=[731/1000]/[999/1000]=731/999.
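A quick check with exact rational arithmetic (a sketch of mine, not part of the lecture) confirms the sum, using Python's fractions module.

```python
from fractions import Fraction

# The geometric series hiding in 0.731731731...: a = 731/1000, r = 1/1000.
a, r = Fraction(731, 1000), Fraction(1, 1000)
print(a / (1 - r))                 # 731/999, the sum of the series
print(float(a / (1 - r)))          # 0.731731731..., the repeating decimal

# A few partial sums already agree with 731/999 to many decimal places.
s, term = Fraction(0), a
for k in range(4):
    s += term
    term *= r
    print(k + 1, float(s))
```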
Digression: how maybe this is done in earlier "grades"
A teacher might say the following:
Let's consider the number Q=0.731731731... and try to
figure out another way of looking at Q. Well,
1,000Q=(1,000)·(0.731731731...)=731.731731731...
so then:
1,000Q=731.731731731... and subtract
Q=0.731731731...
The result is 999Q=731, so that Q=731/999. Ain't that nice! (Thanks to
Ms. Panova for pointing out this approach.)
My "excuse" for pointing out the geometric series approach is that I
want to show you a use of geometric series, and also to maybe expose a
bit of the structure of the decimal system, which is actually a very
clever intellectual construction.
Use #2 of geometric series
A square of side length 5 has another square, whose side length is half of that, placed outside it but so that corners and an edge coincide. Another square, whose side length is half of that, is placed outside of both squares but so that corners and an edge coincide. And ...
My language is perhaps not too precise. A sort of picture of this
object (just the first 6 squares) is
shown to the right. The object is an example of a fractal. General
information about fractals is here and a source
which is very accessible is here
The question What is the total area of all of the squares?
The first square has area 5·5=5^2. | The second square has area (5/2)·(5/2)=5^2/4. | The third square has area (5/2/2)·(5/2/2)=(5/2^2)·(5/2^2)=5^2/4^2 |
---|---|---|
The pattern may convince you that the total area is the sum of
5^2+5^2/4+5^2/4^2+...
This is a geometric series whose first term is a=5^2, and the
constant ratio between successive terms is r=1/4. The sum is then
a/(1-r)=5^2/(1-[1/4])=100/3. This is the total area.
I remarked to students that questions about some other geometric
quantities can easily be asked. For example,
The question What is the total perimeter of all of the squares?
The first square has perimeter 4·5=20. | The second square has perimeter 4·(5/2)=20/2. | The third square has perimeter 4·(5/2/2)=4·(5/2^2)=20/2^2 |
---|---|---|
The pattern may convince you that the total perimeter is the sum of
20+20/2+20/2^2+...
This is a geometric series whose first term is a=20, and the
constant ratio between successive terms is r=1/2. The sum is then
a/(1-r)=20/(1-[1/2])=40. This is the total perimeter.
And here's another question. We could think about the region inside all of the squares as one part of the plane. Then the boundary between this region and the remainder of the plane could be analyzed. It has a border which is one relatively long horizontal line segment on the bottom, and one vertical line segment on the left. The region is shown to the right. I hope you "see" that it sort of fades off in a complicated way to the right. There's lots of border turnings on the right.
The question What is the total length of the border of this region?
One way to analyze this is to take the total perimeter of the squares, 40, which we got above, and to subtract the interior border lengths. Look at these pictures.
The first inner wall has length 5/2. | The second inner wall has length (5/2)/2=5/2^2. | The third inner wall has length (5/2^2)/2=5/2^3. |
---|---|---|
The pattern may convince you that the total length of the inner walls
is the sum of
(5/2)+5/2^2+5/2^3+...
This is a geometric series whose first term is a=(5/2), and the
constant ratio between successive terms is r=1/2. The sum is then
a/(1-r)=(5/2)/(1-[1/2])=5. This is the total length of the
inner walls, so the perimeter of the region is 40-5=35. Whew! I think
that's enough for this.
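All three answers come from the same formula a/(1-r). Here is a tiny Python sketch (mine, just for checking the arithmetic) that reproduces 100/3, 40, and 35.

```python
# A sketch checking the three geometric-series answers for the stack of squares.
def geometric_sum(a, r):
    """Sum of a + ar + ar^2 + ... when |r| < 1."""
    return a / (1 - r)

side = 5.0
total_area      = geometric_sum(side**2, 1/4)   # 5^2 + 5^2/4 + 5^2/4^2 + ... = 100/3
total_perimeter = geometric_sum(4*side, 1/2)    # 20 + 20/2 + 20/2^2 + ...   = 40
inner_walls     = geometric_sum(side/2, 1/2)    # 5/2 + 5/2^2 + 5/2^3 + ...  = 5
print(total_area, total_perimeter, total_perimeter - inner_walls)   # 33.33..., 40.0, 35.0
```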
Use #3 of geometric series
Bruno and Igor have a loaf of bread. Bruno eats half the loaf and
passes what remains to Igor. Igor eats half of what he is given and
passes what remains to Bruno. Bruno eats half of what he is given and
passes what remains to Igor. Igor eats half of what he is given and
passes what remains to Bruno...
The question How much bread (the total amount) does Bruno eat?
As I mentioned in class, I know people who can somehow "solve" these problems by inspection, that is, they read or listen to the problem, and ZAP!!! the answer is clear. (The same is true for the geometric problems mentioned in Use #2.) I am not one of these "zap" people -- some of the students seem to be! I would probably solve the problem by computing the amount of bread Bruno and Igor eat, for at least a few rounds. I would try to discover the pattern, and then I'd use the observed pattern.
Round # | Bruno eats | Igor eats |
---|---|---|
1 | 1/2 | 1/4 |
2 | 1/8 | 1/16 |
3 | 1/32 | 1/64 |
I filled out this table dynamically in class, with explanations being given as I did it. For example, I remarked that after Bruno ate half the loaf, Igor would receive the other half loaf. Igor would eat half of that, which is 1/4 loaf, and pass 1/4 loaf to Bruno. Bruno would eat half of a 1/4, which is 1/8 loaf, and pass the remaining 1/8 loaf to Igor, etc. It seems apparent ("clear") that Bruno eats 1/2+1/8+1/32+..., a quantity which we recognize as a geometric series. The first term, a, is 1/2, and the constant ratio between successive terms is 1/4. Therefore Bruno must eat a/(1-r)=(1/2)/(1-[1/4])=2/3 of the loaf. Poor Igor will eat 1-2/3=1/3 of the loaf.
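For the non-"zap" people (me included), a short simulation also reveals the 2/3 to 1/3 split. This Python sketch is just my own illustration of the table above.

```python
# A sketch: Bruno and Igor alternately eat half of whatever bread they receive.
bread = 1.0                                  # one whole loaf
eaten = {"Bruno": 0.0, "Igor": 0.0}

for bite_number in range(60):                # after 60 bites almost nothing is left
    who = "Bruno" if bite_number % 2 == 0 else "Igor"
    bite = bread / 2
    eaten[who] += bite
    bread -= bite

print(eaten)                                 # Bruno: about 0.6667, Igor: about 0.3333
```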
It is easy to change this problem. You could imagine the named people eating different quantities, or you could imagine there being more people, etc. Sums like this do arise in real applications, and I hope that you will be able to recognize them and cope with them.
Integrals and sums
Section 11.3 contains a full presentation of what I'll discuss here. I
want to study the harmonic series again, and this time really
convince you that it diverges. After that I want to study another
series, convince you it converges, and then actually get a fairly good
decimal approximation for its sum.
The harmonic series again
Let's look at 1+1/2+1/3+1/4+1/5+... but accompany this with a picture.
In fact, let me begin with a specific partial sum, the fifth partial
sum: s_5=1+1/2+1/3+1/4+1/5 (look, there are no
...'s!). I've sketched part of the graph of y=1/x to the right. I only
sketched the graph on an interval beginning a bit less than 1 and
ending a bit more than 6. The function f(x)=1/x is strictly
decreasing. If I draw the boxes shown (this is the Riemann sum
approximation to the area under the curve from x=1 to x=6 using a
partition equally spaced with 5 subintervals and sample points at the
left-hand endpoints), it just happens (NO!
This is arranged very precisely.) that the Riemann sum
approximation is equal to the fifth partial sum: the boxes all have
widths equal to 1 and they have heights 1, 1/2, 1/3, 1/4, and
1/5. Therefore int_1^6 1/x dx < s_5. But I
"know" the integral. It is ln(x)]_1^6=ln(6) since
ln(1)=0. Therefore ln(6)<s_5.
If you are curious, ln(6) is approximately 1.791 and s_5 is approximately 2.28333. So the arithmetic reinforces the picture.
Now I would like you to imagine a similar picture for y=1/x, where the interval goes from slightly to the left of 1 up to slightly to the right of n+1. Again, think of boxes over the graph, indicating a Riemann sum with left-hand endpoints at the integers, a bunch of boxes with width equal to 1. The function is still strictly decreasing, so the tops of the boxes are above the graph. The sum of the areas is s_n = sum_{j=1}^{n} 1/j, the nth partial sum of the harmonic series. Therefore s_n > int_1^{n+1} 1/x dx = ln(n+1). But as n-->infinity, ln(n+1)-->infinity, and since s_n is bigger than ln(n+1), s_n-->infinity also, and the harmonic series diverges.
I asked in class how big n should be for the nth partial sum of the harmonic series to be larger than, say, 200. I can be sure that s_n>200 if I know that ln(n+1)>200 since s_n>ln(n+1). But ln(n+1)>200 happens when (exponentiate both sides of the inequality) n+1>e^200. How big is e^200? It is about 7·10^86. If you could do 10^10 additions each second, then (there are 31,556,926 seconds in a year) to compute this partial sum would take about 2·10^69 years. One estimate for the age of the universe is 13.7 billion years, or 1.37·10^10 years. You'd need a lot of universes. I think it is amazing we can estimate such a partial sum.
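Here is a small Python sketch (mine, not from class) showing how slowly the partial sums grow, and that they always stay a bit above ln(n+1).

```python
import math

# A sketch: partial sums of the harmonic series versus the lower bound ln(n+1).
s = 0.0
for n in range(1, 100001):
    s += 1.0 / n
    if n in (5, 100, 10000, 100000):
        print(n, round(s, 5), round(math.log(n + 1), 5))   # s_n is always the larger one
```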
A convergent series
Let's look at
1+1/2^4+1/3^4+1/4^4+1/5^4+....
It will turn out that this series converges. Let me first convince you
of this with a geometric argument whose computations are based in
calculus. So let me look at the graph of y=1/x^4 on an
interval from slightly to the left of 1 to slightly to the right of
6. Now let me consider Riemann sums with right-hand endpoints. Again
each of the widths is 1 and the heights come from f(x)=1/x^4
at x=2, 3, 4, 5, and 6. What we have (again, f(x) is strictly
decreasing) is int_1^6 1/x^4 dx > 1/2^4+1/3^4+1/4^4+1/5^4+1/6^4. Again,
let's compute the integral:
int_1^6 1/x^4 dx = -1/(3x^3)]_1^6 = -1/(3·6^3)+1/(3·1^3), which is about 0.33179.
The sum
1/2^4+1/3^4+1/4^4+1/5^4+1/6^4
is 0.08112. This isn't exactly s_6 for this series, since we left out the first term, which is 1. In fact, though,
s_6 < 1 + int_1^6 1/x^4 dx
A (possible) complaint!? |
---|
This time we put the boxes under the curve. Before we put the
boxes on top of the curve.
To overestimate an infinite tail with an integral, put the boxes
underneath the curve. |
Now imagine the situation from 1 to n+1 with the graph of
f(x)=1/x^4. Similar consideration of boxes under the graph leads to
s_n < 1 + int_1^{n+1} 1/x^4 dx
Now int_1^{n+1} 1/x^4 dx = -1/(3x^3)]_1^{n+1} = -1/(3(n+1)^3) + 1/3
As n-->infinity, this-->1/3. Therefore all of the s_n's are
bounded above by 1+(1/3). Since the s_n's form an increasing
sequence (hey, everything we're adding is positive!) we know
("Bounded monotonic sequences converge") that the sequence of partial
sums {s_n} converges, and therefore the infinite series converges!
Yeah, but what is the sum?
It happens that with more advanced methods the sum sum_{j=1}^{infinity} 1/j^4 can be
computed exactly. You won't believe me but the value is
Pi^4/90, approximately 1.0823! The sums of most
series can't be computed exactly in terms of "standard" constants (for
example, if I changed the 4 to a 3 the series will still converge, and
no one knows the exact value of the sum). Let me show you how to
estimate an infinite tail.
The partial sum sum_{j=1}^{n} 1/j^4 has an infinite
tail beginning with the term 1/(n+1)^4 and going on and on
from there. If you consider the picture to the right, the infinite
tail is overestimated by the improper integral int_n^{infinity} 1/x^4 dx=(I will omit
the details!)=1/(3n^3).
The infinite tail is the error between the partial sum and the true
sum. If I want the partial sum to be accurate to 3 decimal places, we
just need the infinite tail to be less than .001. I should find an n
so that 1/(3n^3) is less than 1/1,000. This will certainly
happen when n=10, since then 3n^3=3,000. Therefore the true value of the sum will be
approximated to 3 decimal places by s_10. My silicon friend
can compute this easily, and the value is 1.082.
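My own quick check of this estimate in Python: the partial sum, the tail bound 1/(3n^3), and the exact value Pi^4/90.

```python
import math

# A sketch: s_10 for the series sum 1/j^4, the tail bound, and the exact sum pi^4/90.
s_10 = sum(1.0 / j**4 for j in range(1, 11))
tail_bound = 1 / (3 * 10**3)                 # the infinite tail is smaller than this
exact = math.pi**4 / 90

print(s_10)                                  # about 1.08204
print(exact)                                 # about 1.08232
print(exact - s_10 < tail_bound)             # True: the error is within the promised bound
```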
The integral test and p-series
The textbook discusses the integral test in section 11.3. Here is a
version of the integral test.
The integral test |
---|
Suppose f(x) is a positive decreasing function, defined for x>=1. Then the series sum_{j=1}^{infinity} f(j) converges exactly when the improper integral int_1^{infinity} f(x) dx converges. |
The logic behind the integral test is what we just used: integrals can be arranged to underestimate partial sums (which shows divergence) and to overestimate infinite tails (which shows convergence), as previously discussed. The best-known application of the Integral Test is the following fact:
p-series |
---|
Consider the infinite series sum_{j=1}^{infinity} 1/j^p, a p-series. This series converges exactly when p>1, and diverges when p<=1. |
When p=1 this is the harmonic series which diverges. We analyzed the series for p=4 and showed that it converged.
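A numerical illustration (a sketch of mine, not part of the lecture): for p=1 the partial sums keep creeping upward, while for p=2 they settle down near Pi^2/6=1.6449....

```python
# A sketch: partial sums of the p-series for a divergent (p=1) and a convergent (p=2) case.
def partial_sum(p, n):
    return sum(1.0 / j**p for j in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, round(partial_sum(1, n), 4), round(partial_sum(2, n), 6))
```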
Monday, March 5 | (Lecture #13) |
---|
Definition of the sequence | First five elements of the sequence | Limits and behavior |
---|---|---|
a_n=1/n | 1., 0.5000000000, 0.3333333333, 0.2500000000, 0.2000000000 | |
a_n=(-1)^n | -1., 1., -1., 1., -1. | |
a_n=(1/2)^n | 0.5000000000, 0.2500000000, 0.1250000000, 0.06250000000, 0.0312500000 | |
a_n=10^n | 10., 100., 1000., 10000., 100000. | |
a_n=[-1/2]^n | -0.5000000000, 0.2500000000, -0.1250000000, 0.06250000000, -0.03125000 | |
a_n=5^(1/n) | 5., 2.236067977, 1.709975947, 1.495348781, 1.379729661 | |
a_n=n^(1/n) | 1., 1.414213562, 1.442249570, 1.414213562, 1.379729661 | |
a_n=n! | 1., 2., 6., 24., 120. | |
a_n=(50)^n/n! | 50., 1250., 20833.33333, 260416.6667, 2604166.667 | |
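These numbers are easy to reproduce; here is a little Python sketch (mine, not from class) for some of the rows.

```python
import math

# A sketch reproducing the first five elements of several sequences from the table.
sequences = {
    "1/n":      lambda n: 1 / n,
    "(1/2)^n":  lambda n: 0.5 ** n,
    "5^(1/n)":  lambda n: 5 ** (1 / n),
    "n^(1/n)":  lambda n: n ** (1 / n),
    "50^n/n!":  lambda n: 50.0 ** n / math.factorial(n),
}
for name, a in sequences.items():
    print(name, [round(a(n), 9) for n in range(1, 6)])
```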
Big ideas
Sequence
Convergence and limit
Increasing
Decreasing
Bounded
Monotonic & bounded implies ...
Series
The sequence of partial sums ...
The harmonic series
The partial sums
1., 1.500000000, 1.833333333, 2.083333333, 2.283333333
Primitive idea #1
It diverges because we're adding up infinitely many numbers, and
therefore things get to be too darn large.
Primitive idea #2
It converges because, although we are adding up lots of numbers, the
steps between them get smaller and smaller and smaller, and so the sum
can't get very large.
So what does happen?
Wednesday, February 28 | (Lecture #12) |
---|
There will be a quiz tomorrow. It will cover improper integrals
and differential equations.
Exponential decay
dy/dt=ry
y=Ae^(rt)
(here r is negative!)
radioactive tracers in blood
radiocarbon dating
http://njpaleo.com/articles/article14.html
Exponential growth dy/dt=ry y=Ae^(rt) (here r is positive!)
QotD Bacteria
Exponential growth with a "carrying capacity"
Consider the differential equation
dy/dt=y(2-y). If y is close to 0 but positive, then ... If y is close
to 2 but less than 2, then ...
Example of the Logistic differential equation, numbers selected
to make everything easy.
Let me try to solve this equation with the initial condition y(0)=1.
Separate and integrate. The result (after use of partial fractions!)
is
-(1/2)ln(2-y)+(1/2)ln(y)=x+C
Here C=0 (wow!). And then ln(y/[2-y])=2x, so y/[2-y]=e^(2x) and then
y=(e^(2x))[2-y]=2e^(2x)-e^(2x)y, so
(1+e^(2x))y=2e^(2x) and, finally,
y=[2e^(2x)]/[1+e^(2x)]. This may be in a form that "real people" can understand.
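A quick numerical check of this formula (my sketch, not the class presentation): it satisfies the initial condition, its slope agrees with y(2-y), and it approaches 2 for large x and 0 for very negative x.

```python
import math

# A sketch: check that y = 2e^(2x)/(1+e^(2x)) solves dy/dx = y(2-y) with y(0) = 1.
def y(x):
    return 2 * math.exp(2 * x) / (1 + math.exp(2 * x))

print(y(0.0))                                   # 1.0, the initial condition
h = 1e-6
for x in (-2.0, 0.0, 1.0, 3.0):
    slope = (y(x + h) - y(x - h)) / (2 * h)     # numerical derivative
    print(x, round(slope, 6), round(y(x) * (2 - y(x)), 6))   # the two columns should agree
print(y(10.0), y(-10.0))                        # close to 2 and close to 0
```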
A picture of this curve is shown to the right.
For large x,
For x negative,
Direction fields, a useful qualitative tool
[Five pairs of pictures appeared here: each pair showed a direction field, and the same direction field with some solution curves drawn in.]
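If you want to draw such a picture yourself, here is a minimal sketch (mine, assuming numpy and matplotlib are available) for the logistic equation dy/dx=y(2-y).

```python
import numpy as np
import matplotlib.pyplot as plt

# A sketch: direction field for dy/dx = y(2 - y).
x, y = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-0.5, 2.5, 20))
slope = y * (2 - y)
length = np.sqrt(1 + slope**2)                  # normalize so every little segment has the same length
plt.quiver(x, y, 1/length, slope/length, angles="xy", pivot="middle")
plt.xlabel("x"); plt.ylabel("y")
plt.title("Direction field for dy/dx = y(2-y)")
plt.show()
```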
Monday, February 26 | (Lecture #11) |
---|
The 20% of the students who were not in class can come to my office before or after class to pick up their exams. They also can come to my office during an office hour to pick up their exams. Finally, they can come to my office at any other mutually convenient time to pick up their exams. (I did spend more than 12 hours this weekend grading the exams, in order to get them back to you in a timely manner.)
Differential equations
Differential equations, more of a definition
Sample scenarios
First story Salt dissolves in water. Put more salt in the
water. It will dissolve, although perhaps less rapidly. Add more and
more salt. Eventually the solution becomes saturated, and the salt
just drops to the bottom (precipitates?). Differential equations can
model the salt/water interaction.
A second story Probably we have all been told that bacteria
(usually) reproduce by, say, binary fission. This is more or less
correct, and more or less the fact means that the rate of increase of
bacteria at any time is directly proportional to the number of
bacteria at that time. So twice as many bacteria "now" means that
twice as many bacteria are being born now. This is certainly
dreadfully simplified, but this approximation works in many
circumstances. I wondered, when I first heard this fact, why, if, say,
E. coli doubles rather rapidly, shouldn't the world be covered very
soon by a layer of E. coli which is 40 feet thick? In fact, "A single cell of the bacterium E. coli would, under ideal circumstances, divide every twenty minutes." (From Michael Crichton (1969) The Andromeda Strain, Dell, N.Y. p.247) But of course anything growing so rapidly in the real world (mold in a petri dish) enters a situation where the growth challenges the ability of the environment to support the thing. Most environments have a carrying capacity -- some sort of upper limit to the amount of the thing which can live in the environment. Differential equations, combined with "exponential growth", can model this sort of situation fairly well. We will see this.
What is a solution of a differential equation?
Solving an example
What is going on?
Separable differential equations and another example
Initial conditions and initial value problems
A solution
Is it useful?
One example
dy/dx=e^(x^2)
Another example
dy/dx=2x with the initial condition (0,0): when x=0, then y=0.
A last example (today!)
dy/dx=xy^2 with the initial condition y(0)=1.
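For the curious, here is how that last example works out (my sketch, not the class presentation): separating gives -1/y = x^2/2 + C, the initial condition gives C = -1, so y = 2/(2-x^2), valid while x^2 < 2. A crude Euler's-method check:

```python
# A sketch: dy/dx = x*y^2 with y(0) = 1; the separated solution is y = 2/(2 - x^2).
def f(x, y):
    return x * y * y

x, y, h = 0.0, 1.0, 1e-4
while x < 1.0 - 1e-12:            # march from x = 0 to x = 1 in tiny Euler steps
    y += h * f(x, y)
    x += h

print("Euler estimate at x=1:", y)
print("formula 2/(2-x^2) at x=1:", 2 / (2 - 1.0**2))   # exactly 2
```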
Monday, February 19 | (Lecture #10) |
---|
Two silly (?) formulas
The object of this lecture is to tell you about two formulas, one for
arc length (section 8.1) and one for surface area (section 8.2). I
called the formulas silly because of their limited usefulness, at
least limited in the sense that "hand computation" is not very
practical. Both arc length and surface area will be revisited in calc
3, where much better perspectives can be given for both.
The philosophy behind the definite integral and its use
Maybe the formulas are not totally silly. Both of them are
illustrations of how definite integrals can be used to compute various
quantities. The procedure (which we have already used in various area
and volume situations, and also with work) represents an attempt to compute "something" complicated: chop it into many small pieces, approximate each piece by something simple, add up the approximations, and then recognize the limit of those sums as a definite integral.
Arc length
We're given a function, f(x), defined on the interval [a,b].
The quantity to be computed is the length of the graph, the curve
y=f(x). This is called arc length. Here is the idea.
Break up [a,b] into many little subintervals, whose length we will
call dx (or delta x). "Above" each little subinterval is a little
piece of the curve. The usual name for a little piece of curve is
ds. If you magnify the little piece, as shown, well, the result is
almost a right triangle. The curve length is still somewhat curvy,
but, well, maybe I can approximate it by a straight line segment. The
resulting picture is just about a right triangle. dy is the change in
y (the function) when the input variable, x, changes by dx. Pythagoras then
declares that (ds)^2 should be the same as
(dx)^2+(dy)^2. Therefore
ds=sqrt((dx)^2+(dy)^2). Let's rewrite what's
inside the square root:
(dx)^2+(dy)^2=(dx)^2(1+{dy/dx}^2).
So sqrt((dx)^2(1+{dy/dx}^2))=dx·sqrt(1+{f´(x)}^2).
Now we should add up these pieces and take limits. In this context, this is all done by writing a definite integral. So the arc length formula is int_a^b sqrt(1+[f´(x)]^2)dx. This is the official formula. Let's see how well it works with some examples.
Line segment
Maybe the simplest curve is a straight line segment. Let me "find" the
length of the line segment joining (1,1) and (4,3). This should be the
same as the distance from (1,1) to (4,3), which is (square root of the
sum of the squares!) sqrt(13). Let's find this number using the
calculus formula above.
We need a formula for the line segment. The slope will be (3-1)/(4-1), which is 2/3. So f(x)=(2/3)x+something. What will the "something" be? Since the line should pass through (1,1), when we put x=1, the result should be 1. Therefore (2/3)(1)+something=1, so something is 1/3. The formula is f(x)=(2/3)x+(1/3). The derivative is f´(x)=(2/3). Now the arc length is int_a^b sqrt(1+[f´(x)]^2)dx which is int_1^4 sqrt(1+[2/3]^2)dx. The integrand is a constant, so the result is sqrt(1+[2/3]^2)x]_1^4=sqrt(1+[2/3]^2)·4-sqrt(1+[2/3]^2)·1=sqrt(1+[2/3]^2)·3. This is the same as sqrt(13), since sqrt(1+[2/3]^2)=sqrt(13/9)=sqrt(13)/3.
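A numerical check (my own sketch): a Riemann sum for the arc length integral lands on sqrt(13), as it must.

```python
import math

# A sketch: midpoint Riemann sum for int_1^4 sqrt(1 + (2/3)^2) dx versus the distance formula.
def integrand(x):
    return math.sqrt(1 + (2/3)**2)            # f'(x) = 2/3 is constant here

n, a, b = 1000, 1.0, 4.0
dx = (b - a) / n
approx = sum(integrand(a + (k + 0.5) * dx) for k in range(n)) * dx
print(approx, math.sqrt(13))                  # both are about 3.6056
```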
Circle
Maybe the next curve to look at is a circle, but we need the graph of
a function so let's try to find the arc length of a
semicircle. The semicircle I considered in class was the upper
semicircle, radius 5, center at (0,0). For this curve,
f(x)=sqrt(5^2-x^2). Now I need
sqrt(1+[f´(x)]^2). So:
f´(x)=(1/2)(5^2-x^2)^(-1/2)·(-2x) using
the Chain Rule. The 2's cancel, and we need to square the derivative
(so the minus sign goes away):
(f´(x))^2=(5^2-x^2)^(-1)x^2
but this is the same as
x^2/(5^2-x^2), to which we must add 1:
1 + x^2/(5^2-x^2) = (5^2-x^2+x^2)/(5^2-x^2) = 5^2/(5^2-x^2). Finally we are supposed to take the square root of this result, so that the integral we need to compute is int_{-5}^{5} 5/sqrt(5^2-x^2) dx.
This should look slightly familiar. The trig substitution x=5sin(theta) makes this integral into int 5 dtheta = 5·theta+C = 5arcsin(x/5)+C. I am skipping the details because I've done many of these integrals already. Now evaluate the definite integral: 5arcsin(x/5)]_{-5}^{5}=5arcsin(1)-5arcsin(-1), and (since I know arcsin(1)=Pi/2 and arcsin(-1)=-Pi/2) this works out to 5Pi, which is indeed half the circumference of a circle of radius 5.
Problems in the book
These two curves work out fairly well. But let's look at section 8.1,
and some of the problems there. The problems mostly have the form,
"Find the length of the graph of the function defined by the following
formula" and here are some of the formulas:
(2/3)(x^2-1)^(3/2)
[x^3/6]+[1/(2x)]
[x^5/6]+[1/{10x^3}]
[x^2/2]-[ln(x)/2]
ln([e^x+1]/[e^x-1])
These are all very weird and ludicrous formulas. Let me try one, and maybe you will see some reason for the structure of the formulas.
Doing a book problem
Let's compute the length of f(x)=[x^3/6]+[1/(2x)] as x goes
from 1 to 3. We will need sqrt(1+[f´(x)]^2). So:
f´(x)=[x^2/2]-[1/(2x^2)]
(f´(x))^2=[x^2/2]^2+2[x^2/2](-[1/(2x^2)])+[1/(2x^2)]^2=[x^4/4]-1/2+[1/(4x^4)]
You should examine this computation with a bit of suspicion. The
"middle" term has lots of coincidences. (The original formula of the
function is designed so that these "coincidences" happen!) 2's cancel
and x^2's cancel. Now look:
1+(f´(x))^2=1+[x^4/4]-1/2+[1/(4x^4)]=[x^4/4]+1/2+[1/(4x^4)]
The darn original -1/2 has somehow changed to +1/2. And the -1/2 came
from 2[x^2/2](-[1/(2x^2)]) so the +1/2 could be
replaced by 2[x^2/2](+[1/(2x^2)]). A "miracle" has
occurred! (Ehhh ... not very much of a miracle. It is really a
designed algebraic event.) Therefore, let us replace the +1/2 by the
stuff suggested and then see:
1+(f´(x))^2=[x^4/4]+1/2+[1/(4x^4)]=[x^4/4]+2[x^2/2](+[1/(2x^2)])+[1/(4x^4)]=the square of [x^2/2]+[1/(2x^2)]
Therefore, sqrt(1+[f´(x)]^2) becomes (take the square
root of the thing that is squared):
[x^2/2]+[1/(2x^2)].
To find the arc length, integrate this from x=1 to x=3. The
antiderivative is (relatively!) easy, and the result is
[x^3/6]-[1/(2x)]]_1^3 etc. etc. (I'm really
not too interested in the actual numbers here!)
A parabola
Let me find the arc length of y=x^2 from x=0 to x=1. Here
the arc length formula, int_a^b sqrt(1+[f´(x)]^2)dx,
becomes (since f(x)=x^2 and f´(x)=2x) int_0^1 sqrt(1+4x^2)dx. I can
compute this, sigh, using a trig substitution. I could "try"
2x=tan(theta) etc., etc. Indeed, I am advised by a friend that the antiderivative is
x(1+4x^2)^(1/2)/2 + (1/4)ln(2x + (1+4x^2)^(1/2))
I think you know the friend I mean. Actually, since we are sophisticated now, if you are curious you can spot the ln(sec(theta)+tan(theta)) in that mess, etc. I don't feel like finishing. Let's try another example.
I decided to save on pictures. The picture shown here will work just as well for the example below. They look sort of the same.
Cubic curve
Let me find the arc length of y=x^3 from x=0 to x=1. Here
the arc length formula, int_a^b sqrt(1+[f´(x)]^2)dx,
becomes (since f(x)=x^3 and f´(x)=3x^2) int_0^1 sqrt(1+9x^4)dx. Now for the
Hot News: there is no antiderivative
of this function in terms of the standard functions. This actually
can be proved. Therefore, I can't go any further "by hand". If I
actually wanted to know the length, I would need to approximate
the definite integral using some sort of numerical technique.
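Here is what such an approximation might look like (a sketch of mine, using a simple midpoint Riemann sum; the helper name arc_length is just for illustration). It also reproduces the parabola answer hiding in the messy antiderivative above.

```python
import math

def arc_length(fprime, a, b, n=100000):
    """Midpoint Riemann sum for int_a^b sqrt(1 + fprime(x)^2) dx."""
    dx = (b - a) / n
    return sum(math.sqrt(1 + fprime(a + (k + 0.5) * dx)**2) for k in range(n)) * dx

# Parabola y = x^2 on [0,1]: compare with the antiderivative evaluated at the endpoints.
exact = math.sqrt(5) / 2 + 0.25 * math.log(2 + math.sqrt(5))
print(arc_length(lambda x: 2 * x, 0, 1), exact)        # both about 1.4789

# Cubic y = x^3 on [0,1]: no elementary antiderivative, but the number is easy to approximate.
print(arc_length(lambda x: 3 * x**2, 0, 1))            # about 1.5479
```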
"Truth"
The truth for arc length is that, more or less, computing the arc
length integral exactly using FTC is impossible almost all of
the time! Therefore, from the elementary, student point of view, maybe
this is all a waste of time. But, really, it isn't. As soon as you
give me a definite integral and want to approximate the values, there
are all sorts of strategies. So what's important is that arc length
can be computed by a definite integral, and what's important for you
to try to understand is the philosophy of going from the vague idea of
arc length to the integral formula for the arc length. And that
philosophy will now be displayed again as we get an integral formula
for a certain type of surface area.
Surface area
Suppose we are again given a function y=f(x) defined on an interval [a,b]. I would like to "compute" (the quotes are because we will get a definite integral formula which will share the benefits and defects of the previous result) the surface area which results when the graph of y=f(x) is revolved around the x-axis.
We will get our formula using the same philosophical approach. We can chop up [a,b] into many little pieces, each having length, say, dx. Then (the picture!) the little piece of arc length lying over dx,
which we called ds, will be revolved around the x-axis. This gets us a
sort of ribbon. What is the area of that ribbon? We won't be able to
compute it exactly, but maybe we can approximate the area of the
ribbon nicely. Well, we can take the
magic scissors (hey: I was able to draw the darn scissors almost
correctly this time!) and cut the ribbon and then, sort of, almost,
lay it out flat. The result will sort of, almost, be a rectangle. What
are the dimensions of this rectangle? One side is the length of the
piece of arc, ds. The other side is the circumference of a circle
whose radius is f(x), the height of that part of the curve away from
the x-axis. (The reason for the repeated "sort of, almost" is that
this is actually a distortion of the true value - the ribbon really
would not lie flat, and the ribbon really would not be more than an
approximate rectangle. I will try later to address these sorts of
slight (?) distortions.) So a piece of the surface area is
2Pi f(x) ds. We use a definite integral to get the total
surface area and add everything up. The result for the area when the
curve is revolved around the x-axis is int_a^b 2Pi f(x)sqrt(1+[f´(x)]^2)dx.
Notice that sqrt(1+[f´(x)]^2)dx is exactly ds (this uses what we had for arc length).
Sphere
Here is a result from a long time ago: the surface area of a sphere of radius R is 4Pi R^2. (This is the area of four "great circles" of the sphere, circles made by intersecting the sphere with a plane through its center.) I would like to verify this result using the surface area formula. I'll use the same semicircle as before: f(x)=sqrt(5^2-x^2), with a=-5 and b=5. Please note that revolving this semicircle around the x-axis gets the area of the whole sphere of radius 5, so that the answer should be 4Pi(5^2)=100Pi.
We need to compute int_a^b 2Pi f(x)sqrt(1+[f´(x)]^2)dx. Notice that sqrt(1+[f´(x)]^2)dx is what we called ds before, and we did compute ds in a previous example. We saw that ds was equal to [5/sqrt(5^2-x^2)]dx. But f(x)=sqrt(5^2-x^2) so, wow! (yeah, wow) there is cancellation and the surface area integral becomes int_{-5}^{5} (2Pi)·5 dx, which does indeed work out to 100Pi as it should.
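A numerical check of the sphere computation (my sketch, not from class): the integrand really is the constant 2·Pi·5, and the integral is 100Pi.

```python
import math

# A sketch: approximate int_{-5}^{5} 2*Pi*f(x)*sqrt(1 + f'(x)^2) dx for f(x) = sqrt(25 - x^2).
def integrand(x):
    f = math.sqrt(25 - x * x)
    fprime = -x / f
    return 2 * math.pi * f * math.sqrt(1 + fprime**2)   # simplifies to 2*pi*5 for every x

n, a, b = 100000, -5.0, 5.0
dx = (b - a) / n
approx = sum(integrand(a + (k + 0.5) * dx) for k in range(n)) * dx   # midpoints avoid x = +/-5
print(approx, 100 * math.pi)                            # both about 314.159
```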
Parabola
We can try to find the surface area which happens when y=x^2 from x=0 to x=1 is revolved around the x-axis. So the formula int_a^b 2Pi f(x) sqrt(1+[f´(x)]^2)dx becomes int_0^1 2Pi x^2 sqrt(1+4x^2)dx and, oh my goodness! I can find an antiderivative of this. It is 2Pi multiplied by the following:
x(1+4x^2)^(3/2)/16 - x(1+4x^2)^(1/2)/32 - (1/64)ln(2x + (1+4x^2)^(1/2))
(Maybe you can tell where I got this from! If you wish, again the substitution 2x=tan(theta) will "work".) I don't feel like finishing.
The picture supplied is sort of the same both for this example and for
the next one. (I can't tell the difference too well!)
Cubic curve
We can try to find the surface area which happens when y=x^3 from x=0 to x=1 is revolved around the x-axis. So the formula int_a^b 2Pi f(x) sqrt(1+[f´(x)]^2)dx becomes int_0^1 2Pi x^3 sqrt(1+9x^4)dx and, oh my goodness! I can find an antiderivative of this. If u=1+9x^4 then du=36x^3 dx, so (1/36)du=x^3 dx and we have
int (2Pi/36) u^(1/2)du = (2Pi/36)(2/3)u^(3/2)+C = (2Pi/36)(2/3)(1+9x^4)^(3/2)+C. Then the surface area is (2Pi/36)(2/3)(1+9x^4)^(3/2)]_0^1 and I won't bother to finish.
More "truth"
x^2 and x^3 are two of only a few simple powers of x which give me integrands in the surface area formula that I can find antiderivatives of. (That's a horrible sentence!) If I want to compute surface areas for almost any "random" function defined by a formula, I'll need to use numerical approximations.
QotD
I asked people to write an integral for the arc length of something like f(x)=5x^7+2x as x goes from 1 to 3: just the integral, and do nothing with it.