0:00

Welcome to calculus.

I'm professor Ghrist, and we're about to begin lecture 27 on improper integrals.

The fundamental theorem of integral calculus is great, but

it's not without its limitations.

In this lesson, we'll consider what happens when we encounter a difficulty

with limits in a definite integral.

The fundamental theorem of integral calculus is great, but

it does have its limitations.

There are a few things that you must be careful about.

0:49

And within this statement lie two dangers.

The first is that of continuity.

A discontinuous integrand can cause problems.

Here's an example.

Consider the integral as x goes from -1 to 1 of (1/(x squared))dx.

If we simply apply the fundamental theorem and

take the antiderivative, negative 1 over x, and

evaluate that at 1, what do we get?

-1.

Then we subtract what we get when we evaluate at -1.

-1- 1= -2.

Perfect.

Except for the fact that our integrand is 1 over x squared.

All those terms are positive, and if we go to the definition

of the definite integral as a Riemann sum, adding up the values,

there's no way that we can add up positive values to get a negative integral.

The problem is this integrand is not continuous.

It's not even defined at x equals zero.
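As a quick numerical sketch (an aside, not from the lecture itself): midpoint Riemann sums for this integrand are all positive and grow without bound as the partition is refined, so the naive answer of -2 cannot be right.

```python
def midpoint_riemann(f, a, b, n):
    """Midpoint Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# every sample of 1/x^2 is positive, so every Riemann sum is positive;
# even n keeps the sample points away from x = 0
sums = [midpoint_riemann(lambda x: 1.0 / x**2, -1.0, 1.0, n)
        for n in (10, 100, 1000)]
# the sums grow without bound: the naive FTC answer of -2 is impossible
```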

The second hypothesis is that of having

an interval from a to b, a finite interval.

Consider the integral as x goes from negative infinity

to positive infinity of (2x/(1+x squared)) dx.

Clearly, the antiderivative is log of (1+x squared).

What happens when we evaluate this at the limits?

You might think we get infinity minus infinity, which is not defined.

Or you might think, well, this is an odd integrand

over a symmetric domain, therefore it must be zero.

Which, if either, is correct?

These improper integrals are dangerous.

3:04

In all cases, we're going to use the technique of taking

a limit to make sense of these integrals.

There are two cases.

The first we might call a blow-up.

This is what happens when you have an integrand

that is not well defined at some point.

Let's say at one of the endpoints, a.

In this case, the way to make sense of the integral is to integrate

from some constant, t, to the other endpoint,

and then take a limit as t goes to the singular input.
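As a small illustration of this limiting procedure (my example, not the lecture's), take the integrand x to the -1/2 on the interval from zero to one, whose antiderivative is 2 times the square root of x:

```python
import math

# ∫ from t to 1 of x^(-1/2) dx = 2 - 2*sqrt(t), using the antiderivative 2*sqrt(x)
def partial_integral(t):
    return 2.0 - 2.0 * math.sqrt(t)

# as t shrinks toward the singular endpoint 0, the partial integrals
# approach a finite limit, so this improper integral converges (to 2)
values = [partial_integral(10.0**-k) for k in (1, 3, 6, 9)]
```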

4:14

The following class of examples are crucial to this subject.

These are the p integrals, or

the integral of ((1/x) to the p)dx.

Let's consider first the example of a tail singularity,

where we integrate, let's say, from x equals one to infinity.

Of course the value is going to depend on p, but let's do all of them at once.

If we integrate (x to the -p)dx, that is easy enough.

That's going to equal, as long as p is not equal to 1,

(x to the 1- p)/(1- p).

We need to evaluate this as x goes from one to t,

and then take the limit as t goes to infinity.

That is, we're taking the limit,

as t goes to infinity, of (t to the (1-p))

/ (1-p) - (1 / (1-p)).

Now this limit is going to be infinite sometimes.

But when p is bigger than 1, then the first term goes to 0 and

we're left with 1 / (p- 1).

If p is less than one, then that first term dominates and

goes to infinity, and we say the integral diverges.
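Both cases can be checked directly from the antiderivative; here's a sketch in Python (mine, not the lecture's):

```python
# tail p-integral: ∫ from 1 to T of x^(-p) dx, via the antiderivative (p != 1)
def tail_p_integral(p, T):
    return (T**(1.0 - p) - 1.0) / (1.0 - p)

# p = 2 > 1: partial integrals approach 1/(p-1) = 1
conv = [tail_p_integral(2.0, 10.0**k) for k in (2, 4, 6)]

# p = 1/2 < 1: partial integrals grow without bound
div = [tail_p_integral(0.5, 10.0**k) for k in (2, 4, 6)]
```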

6:19

Let's do it the other way and look at a blow-up instead of a tail singularity.

Consider the same integrand x to the -p, but

now integrated as x goes from zero to one.

Well, doing the integral yields the same anti-derivative,

we just need to evaluate our limits differently.

So, evaluating as x goes from t to one, then taking a limit as t goes to zero.

This will break up into two cases when p is not equal to one, or

when p is equal to one.

6:59

In these cases, again, we get sometimes convergence, sometimes divergence.

But note what is happening to the p's.

When p is bigger than one, we get a divergent integral.

When p is less than one, then the (t to

the (1-p))/(1-p) term drops out and

we're left with an answer of 1/(1-p).

Again, in the case where p equals one,

log of t is not going to converge as t goes to zero.
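Again this is easy to confirm from the antiderivative; a quick sketch (my own, not the lecture's):

```python
import math

# blow-up p-integral: ∫ from t to 1 of x^(-p) dx, via the antiderivative
def blowup_p_integral(p, t):
    if p == 1.0:
        return -math.log(t)
    return (1.0 - t**(1.0 - p)) / (1.0 - p)

# p = 1/2 < 1: partial integrals approach 1/(1-p) = 2
conv = [blowup_p_integral(0.5, 10.0**-k) for k in (2, 4, 6)]

# p = 1: -log(t) grows without bound as t goes to zero
div = [blowup_p_integral(1.0, 10.0**-k) for k in (2, 4, 6)]
```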

7:39

Let's summarize what we've found with these p integrals.

If we integrate along a tail going to infinity,

then the p integral converges when p is strictly bigger than one.

It diverges when p is less than or equal to one.

For a blow-up singularity at zero, these are reversed.

And we get a convergent integral when p is less than one, and

a divergent integral when p is bigger than or equal to one.

Note that at the particular value p equals one, it's always divergent.

No matter whether you're going from zero to one or from one to infinity.

Now you don't need to remember the actual values of the convergent p integrals.

But you do need to remember this chart, this listing

of when the integral converges and when it diverges.

And the reason you need to remember this is because it will help

you determine convergence or divergence of other integrals.

Integrals whose antiderivatives may not be so easy to compute.

Let's look at an example.

Consider the integral of dx/(square root of x

squared + x) as x goes from zero to one.

This is a finite domain, however there is a singularity, or

a blow up, at x equals zero.

So how shall we proceed?

I don't know the anti-derivative to this.

It doesn't look like it's going to be terribly easy.

So, let us consider what the leading order behavior for this integrand is.

We're going to think in terms of Taylor series.

Now, this integrand doesn't have a well-defined Taylor series at x equals

zero, since the function is not even defined, and it blows up.

But notice that if we factor out a square root

of x from the denominator, then we're left with

1/(square root of (1+x)) as a factor.

Now let's rewrite that,

thinking that we are going to be looking at what happens as x is near zero.

I can write this as (x to the -1/2) times quantity (1+x) to the -1/2.

And this then is helpful, why?

Because the binomial series says that whenever you have (1+x) to the alpha,

as x is small, then this is of

the form one plus something in big O of x.

Now if we apply that to the (1+x) to the -1/2 term,

then we get the integral as x goes zero to one of x to the -1/2

times quantity 1 + something in O(x).

And we see that the leading order term

in this integrand is x to the -1/2.

So I'm going to split this up into two integrals.

The first is a p integral with p equals 1/2.

The second is an integral whose precise form I haven't written down,

but it's something that is in O(square root of x), and indeed, is bounded.

And I'm integrating it over the domain from zero to one.

The integral on the right definitely converges.

What about the integral on the left?

Well remember, I said you had to memorize some of these.

Let's go back and recall that when p equals one half,

for a blow up p integral, it converges.

Therefore, we know that this entire integral converges.

We may not know the value, but we know it converges.
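Here's one way to check that conclusion numerically. The substitution x = u squared is my own device, not part of the lecture's argument, but it removes the singularity entirely and lets a plain Riemann sum settle down to the value:

```python
import math

def midpoint(f, a, b, n):
    """Midpoint Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# substituting x = u^2 turns ∫ from 0 to 1 of dx/sqrt(x^2 + x)
# into ∫ from 0 to 1 of 2 du/sqrt(u^2 + 1), which has no singularity at all
approx = [midpoint(lambda u: 2.0 / math.sqrt(u * u + 1.0), 0.0, 1.0, n)
          for n in (10, 100, 1000)]
# the approximations settle down near 2*log(1 + sqrt(2)) ≈ 1.76: convergence
```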

12:10

Now, what happens if we take that same integrand and

integrate as x is going to infinity?

For very large x, we need to

consider that x squared term in the denominator as the lead.

Therefore, factoring out, we get (1/(square root of x

squared)) times (1/(square root of 1+(1/x))).

To simplify that a little bit, we see

that we have again something to which the binomial series applies.

Namely, (1+(1/x)) to the -1/2.

Expanding that out gives (1 + something in O(1/x)).

And now, splitting this up into two integrals,

we see that the leading order term is a p integral, with p equals one.

And I don't think I need to remind you that when p equals one,

you always have divergence.

Therefore, because one piece of this integral diverges,

the entire integral diverges.

Same integrand, different behavior as you go to infinity.
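You can watch that logarithmic divergence numerically. The substitution x = e to the u below is my own device for taming the infinite domain, not part of the lecture's argument:

```python
import math

def midpoint(f, a, b, n):
    """Midpoint Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# substituting x = e^u turns the tail integral ∫ from 1 to T of dx/sqrt(x^2 + x)
# into ∫ from 0 to log(T) of du/sqrt(1 + e^(-u))
def partial(T, n=2000):
    return midpoint(lambda u: 1.0 / math.sqrt(1.0 + math.exp(-u)),
                    0.0, math.log(T), n)

# each two extra decades of T adds roughly log(100) ≈ 4.6 to the total:
# logarithmic growth, i.e. divergence
growth = [partial(10.0**k) for k in (2, 4, 6)]
```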

13:52

Let's see what happens in this case.

We already know that the antiderivative is log of quantity (1 + x squared).

And if we evaluate that at negative T and T, well,

we get something that cancels to zero.

And so it would seem as though our limit is zero.

That is not correct, and we have failed to be careful.

What do we need to do?

We need to take two independent limits.

One, the integral from s to zero, as s goes to negative infinity.

The other, the integral from zero to T, as T goes to positive infinity.

When we have two tails, we need two limits.

And I don't think it's too hard to see

what the behavior of each of these is going to be.

If we consider what happens when x gets very large,

then the leading order term is 2/x.

And what is left over is 1/(1+(1 / x squared)).

That means that using, say, the geometric series,

we see that the leading order term is a p integral with p equals one.

That connotes divergence.

And this means that each of these limiting integrals diverges.

And hence the net integral does not converge to zero, it diverges.
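The contrast between the misleading symmetric cancellation and the honest one-tail-at-a-time limits is easy to see from the antiderivative (a sketch of my own, not from the lecture):

```python
import math

# the antiderivative of 2x/(1 + x^2) is log(1 + x^2)
def definite(a, b):
    return math.log(1.0 + b * b) - math.log(1.0 + a * a)

# symmetric limits cancel exactly, suggesting (misleadingly) an answer of zero
symmetric = [definite(-T, T) for T in (10.0, 1e3, 1e6)]

# but each tail taken on its own grows without bound, so the integral diverges
one_tail = [definite(0.0, T) for T in (10.0, 1e3, 1e6)]
```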