0:00

Welcome to Calculus. I'm Professor Greist.

We're about to begin Lecture 7 on Limits.

>> In many respects, calculus can be defined as the mathematics of limits. In this lesson, we'll review the concept and definition of a limit, consider a few examples, and see that one of the most effective tools for computing limits involves Taylor series.

>> In your previous exposure to calculus, you have certainly seen limits. But what does it mean to say the limit, as x approaches a, of f of x equals L? Well, I'm sure you have an image in your head: as x gets closer and closer to a, f of x gets closer and closer to L. Perhaps you remember that it doesn't matter whether you approach from the left or from the right. Perhaps you remember that it doesn't matter what the actual value of the function is at x equals a; what matters is the limit.

Well, this picture is the intuition behind the limit, but it is not the definition. The definition is another thing altogether.

What is it? The limit of f of x, as x goes to a, equals L if, and only if, for every epsilon greater than 0 there exists some delta greater than 0 such that, whenever x, not equal to a, is within delta of a, then f of x is within epsilon of L.

That is a bit of a mouthful, and a lot of students have difficulty with it. Why? Because as a logical statement, it is complex. As a grammatical statement, it is complex.

How does one make sense of this? Well, a picture is not a bad way to go. If we think of L as a target that we are trying to hit, then we are allowed some tolerance on the output. This tolerance is in the form of an epsilon: you have to get within epsilon of L. Using your function, you can set the input to be as close to a as you like, but there's going to be some tolerance on the input, some degree of error, that is bounded by delta. In order to have the limit of f of x, as x approaches a, equal L, anything within the input tolerance has to hit the target within the output tolerance.

Â 3:09

This must be true no matter how small the output tolerance epsilon is: you can find some sufficiently small input tolerance to guarantee always striking within range of the limit.

In the context of an actual function f, one can visualize this epsilon-delta definition as follows. You choose an output tolerance, epsilon. Then there must be some input tolerance, delta, so that any input within delta of a has an output within epsilon of L. Now, many students get confounded here, trying to find the optimal delta. It does not need to be optimal; you can choose something smaller, that is not a problem. The critical part of the definition is that, as you change epsilon, you need to be able to update delta. If you make epsilon smaller still and decrease your level of acceptable error on the output, you need to find some amount of acceptable error on the input. And this has to continue for every possible non-zero value of epsilon.
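This epsilon-delta game can be made concrete with a small numerical sketch (not from the lecture; the function f, the point a, and the choice delta = epsilon/5 below are my own illustrative assumptions):

```python
# Illustrative check of the epsilon-delta definition for
# f(x) = x^2 at a = 2, where the limit is L = 4.
# Near x = 2, |x^2 - 4| = |x - 2| * |x + 2| <= 5 * |x - 2|,
# so responding with delta = epsilon / 5 works; it need not be optimal.
def f(x):
    return x * x

a, L = 2.0, 4.0

for epsilon in [0.5, 0.1, 0.01, 0.001]:
    delta = epsilon / 5  # our response to this epsilon; smaller would also do
    # sample inputs within delta of a (excluding a itself), on both sides
    for i in range(1, 1000):
        step = delta * (i / 1000.0)
        assert abs(f(a + step) - L) < epsilon  # right of a
        assert abs(f(a - step) - L) < epsilon  # left of a
print("delta = epsilon/5 hits within epsilon of L for every epsilon tested")
```

The point is exactly the one in the lecture: one fixed recipe for delta keeps working as epsilon shrinks.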

That is what captures what the limit is.

This view of the definition is extendable to other contexts. Consider the limit, as x goes to infinity, of f of x. What does it mean for that to be equal to L? Well, we're going to think about infinity as something like an endpoint of the real line, modifying its topology so that it looks like a closed interval. Now, this is a dangerous thing to do if you don't know what you're doing, but let's think about it from the perspective of our interpretation of a limit. Given any output tolerance, epsilon, there must be some tolerance on the input that guarantees striking within epsilon of L. Now, how do we take a neighborhood of infinity? How do we talk about a tolerance on that input? Well, what it becomes in this context is some lower bound M, so that, whenever your input is greater than M, your output is within epsilon of L. As before, this must be true no matter what epsilon you choose: if you make the tolerances on the output tighter and tighter, then we must be able to make the tolerances on the input tighter and tighter. In this case, instead of talking about being within delta of infinity, since we are only looking at a one-sided limit, we can speak in terms of an explicit lower bound on x; the same intuition and picture holds.
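Here is a small numerical sketch of that lower-bound version of the definition (not from the lecture; the function and the recipe M = 5/epsilon are illustrative choices of mine):

```python
# Illustrative check of the limit-at-infinity definition for
# f(x) = (2x + 1)/(x + 3), whose limit as x -> infinity is L = 2.
# Since |f(x) - 2| = 5/(x + 3), the lower bound M = 5/epsilon suffices:
# any input greater than M lands within epsilon of L.
def f(x):
    return (2 * x + 1) / (x + 3)

L = 2.0
for epsilon in [0.5, 0.1, 0.01, 0.001]:
    M = 5 / epsilon  # our response to this epsilon
    for x in [M + 1, 2 * M, 10 * M]:  # sample inputs beyond the bound M
        assert abs(f(x) - L) < epsilon
print("inputs beyond M = 5/epsilon land within epsilon of L, for every epsilon tested")
```

Note that as epsilon shrinks, M grows: a tighter output tolerance forces the inputs further out toward infinity.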

To be sure, not all limits exist; not all functions are well behaved. There are several ways in which things can go wrong. You could have a discontinuity in a function; the limit would not exist at that point. You could have what is called a blow-up, that is, the function goes to infinity as x gets closer and closer to a. Or, worse still, the limit can fail to exist because of an oscillation, where the function oscillates so badly that the limit at a does not exist.

On the other hand, most of the time you're not going to have to worry about this, because most functions are continuous. And we say that f is continuous at an input a if the limit, as x goes to a, of f of x exists and equals f at a. We say that f is continuous everywhere if this statement is true for all inputs a in the domain. Now, most of the functions that we're used to seeing are continuous functions, and one doesn't have to worry so much about limits in this case.

There's a bit of a technicality: one has to be very explicit about which points are in the domain of the function. Some functions which look discontinuous may actually be continuous, if the discontinuous-looking point is not actually in the domain. If, however, the function is defined there, then the discontinuity presents itself.

There are certain rules associated with limits that you may know by heart, even if you don't remember where they come from. If the limit of f of x, as x approaches a, and the limit of g of x, as x approaches a, both exist, then the following rules are in effect.

There's a summation rule: the limit of the sum of f plus g is, in fact, the sum of the limits. There is, likewise, a product rule: the limit of the product of f and g is, in fact, the product of the limits. There is, likewise, a quotient rule: the limit of f divided by g is the limit of f divided by the limit of g. Now, at this point, you've got to be a bit careful: if that denominator is zero, well, then this limit may not exist. There's, likewise, a chain rule, or composition rule, that says the limit of f composed with g, as x goes to a, can be realized as f of the limit of g as x approaches a. Once again, this too has some conditions: f, in this case, needs to be continuous at the appropriate point in order for this to hold.
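These rules can be spot-checked numerically. A quick sketch (not from the lecture; the functions, the point a = 0, and the crude evaluate-very-near-zero stand-in for a limit are my own illustrative choices):

```python
import math

# Numerical spot-check of the limit laws at a = 0 for
# f(x) = cos(x) (limit 1) and g(x) = x + 2 (limit 2).
def limit_at_zero(h):
    """Crude numerical stand-in for a limit: evaluate h very near 0."""
    return h(1e-8)

f = math.cos
def g(x):
    return x + 2

lf, lg = limit_at_zero(f), limit_at_zero(g)

assert abs(limit_at_zero(lambda x: f(x) + g(x)) - (lf + lg)) < 1e-6  # sum rule
assert abs(limit_at_zero(lambda x: f(x) * g(x)) - (lf * lg)) < 1e-6  # product rule
assert abs(limit_at_zero(lambda x: f(x) / g(x)) - (lf / lg)) < 1e-6  # quotient rule (lg != 0)
assert abs(limit_at_zero(lambda x: f(g(x))) - f(lg)) < 1e-6          # composition (f continuous)
print("sum, product, quotient, and composition rules all check out numerically")
```

Note that the quotient check is only legitimate because the limit of g is nonzero, and the composition check because cosine is continuous, matching the conditions just stated.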

Now, at this point, I think we're going to have a little quiz to test your knowledge of limits. What is the limit, as x approaches zero, of sine of x over x? This is a quotient. Can we apply our quotient rule for limits? No, I'm afraid we cannot, because the denominator is going to 0, and so is the numerator, and 0 over 0 presents some difficulties.

Now, I bet that most of you know the answer is 1, but why do you know this? Well, you may say: I remember this. This is something that I had to memorize when I took high school calculus; it was very useful on exams, and I just know it. Well, that's not a very satisfying answer, is it? Some of you may say: I wield the mighty sword of L'Hopital's Rule, and I know that if I differentiate the top and the bottom, then I get 1. That's great, and I'm glad you remember L'Hopital's Rule. But do you know why it works? Do you have a good reason for your belief in this rule? Well, if not, then let's take a method that we do trust: namely, Taylor series.

If we consider the limit, as x goes to 0, of sine of x over x, we know what sine of x is: that's x, minus x cubed over 3 factorial, plus higher-order terms. Now, thinking of this, as we do, as a long polynomial, what are we tempted to do? Well, I look at that and say: hey, we could factor out an x from the numerator and cancel it with the x in the denominator, yielding the limit, as x goes to 0, of 1 minus x squared over 3 factorial plus higher-order terms in x. Sending x to 0 gives us an answer of 1, and the limit makes perfect sense. Likewise, you might recall that the

limit, as x goes to zero, of 1 minus cosine of x, over x, is... now, what was that? Oh well, I don't remember, but I do remember what cosine of x is. And I note that here, the ones cancel, and I'm left with the limit of x squared over 2 factorial, minus x to the fourth over 4 factorial, plus higher-order terms. When I divide that by x, I get the limit of x over 2 factorial, minus x cubed over 4 factorial, plus higher-order terms. There's no 0 over 0 ambiguity any more; this limit is precisely 0.
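A quick sketch of both computations in code (not from the lecture; the truncation to two series terms is my own choice). After the cancellation done on paper, what remains is an honest polynomial-like expression that can simply be evaluated at x = 0:

```python
import math

# Truncated Taylor series from the lecture, after cancelling the x:
# sin(x)/x       = 1 - x^2/3! + x^4/5! - ...  ->  1 as x -> 0
# (1 - cos x)/x  = x/2! - x^3/4! + ...        ->  0 as x -> 0
def sin_over_x(x):
    return 1 - x**2 / math.factorial(3) + x**4 / math.factorial(5)

def one_minus_cos_over_x(x):
    return x / math.factorial(2) - x**3 / math.factorial(4)

# Plugging in x = 0 is now perfectly safe; no 0/0 anywhere:
print(sin_over_x(0.0))            # 1.0
print(one_minus_cos_over_x(0.0))  # 0.0

# For small nonzero x the truncated series track the actual quotients:
for x in [0.1, 0.01]:
    assert abs(sin_over_x(x) - math.sin(x) / x) < 1e-7
    assert abs(one_minus_cos_over_x(x) - (1 - math.cos(x)) / x) < 1e-7
```

The assertions near the end confirm that truncating to two terms is already accurate to the higher-order error the lecture sweeps into "plus higher-order terms."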

Now, one of the wonderful things about this Taylor series approach to limits is that it works even in cases where you might not have memorized the limit, and where the limit is, indeed, not so obvious. Well, let's look at the cube root of 1 plus 4x, minus 1, over the fifth root of 1 plus 3x, minus 1. It is clear that evaluating at zero is not going to work; that yields 0 over 0. So, what do we do? Well, rewriting this a little bit allows us to use the binomial series, with alpha equal to one third in the numerator and one fifth in the denominator. Applying that gives us, in the numerator, 1 plus one third times 4x plus higher-order terms, subtract 1; in the denominator, 1 plus one fifth times 3x plus higher-order terms, subtract 1. Those subtractions get rid of the constant terms; we're left with terms that all have an x in them. We factor that out, and then the leading-order terms are four thirds in the numerator and three fifths in the denominator, yielding an answer of twenty ninths.
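As a sanity check on that computation (this snippet is mine, not part of the lecture), one can evaluate the quotient at small x and watch it settle toward 20/9:

```python
# Numerical sanity check of the binomial-series computation:
# ((1 + 4x)^(1/3) - 1) / ((1 + 3x)^(1/5) - 1)  ->  (4/3)/(3/5) = 20/9  as x -> 0
def q(x):
    return ((1 + 4 * x) ** (1 / 3) - 1) / ((1 + 3 * x) ** (1 / 5) - 1)

for x in [0.1, 0.01, 0.001]:
    print(x, q(x))  # values approach 20/9 = 2.222...

assert abs(q(1e-6) - 20 / 9) < 1e-4
```

Direct substitution at x = 0 would divide 0 by 0, but the binomial series told us exactly how fast the numerator and denominator each vanish, and the ratio of those rates is the limit.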

That is beautiful.

>> There's a vast gulf between knowing how to compute something and knowing what that thing is.

Â