0:00

Welcome to calculus. I'm Professor Greist. We're about to begin Lecture 6 on expansion points.

We've seen that the Taylor series provides a good approximation to a function near zero. What happens if we wish to focus our attention at some other point? In this lesson, we'll consider changing the expansion point. This will lead us to a broader definition and interpretation of Taylor series.

As we have seen, Taylor expansion gives an excellent way to approximate a function for inputs near zero. However, in many applications, zero is not the most interesting input you can think of. There are many examples of this, in finance and economics among other fields, where we care about nonzero values. What would be nice is a way to Taylor-expand a function that works well for inputs that are not necessarily close to zero.

There certainly is, and this is an important definition. The Taylor series of f at x equals a is the sum, k going from 0 to infinity, of the kth derivative of f evaluated at a, divided by k factorial, times the quantity x minus a to the k. This is not a polynomial in x, but rather a polynomial in the quantity x minus a, where the coefficient in front of each monomial term is the kth derivative of f evaluated at a, divided by k factorial. A little bit of terminology is in order.

The constant term is called the zeroth-order term. Next comes the first-order term, the second-order term, the third-order term, and so on. These correspond to the degree of the quantity x minus a.

If we perform the change of variables defining h to be the quantity x minus a, then the above series becomes a polynomial series in h. However, it is not giving you f of h, but rather f of x, which is f of a plus h. This series in h is telling you that if you want to know values of f close to a, you can substitute in a small value for h. And if you're looking for an approximation, you can ignore some of the higher-order terms.
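The h-substitution can be sketched numerically. As an illustration of my own (not from the lecture), take f to be the exponential: every derivative of exp at a equals e to the a, so f of a plus h is the sum over k of e to the a, times h to the k, over k factorial.

```python
import math

# Illustration (not from the lecture): Taylor expansion of exp about x = a,
# written as a series in h = x - a. Every derivative of exp at a is e^a,
# so f(a + h) = sum over k of e^a * h^k / k!.
def exp_taylor(a, h, n):
    """Partial sum of the series through order n, approximating exp(a + h)."""
    return sum(math.exp(a) * h ** k / math.factorial(k) for k in range(n + 1))

# For a small h, even a few terms approximate exp(2.1) well.
print(exp_taylor(2.0, 0.1, 4), math.exp(2.1))
```

Ignoring the higher-order terms just means stopping the sum at a small n.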

Let's look at an example: compute the Taylor series of log of x. Now, we know that we cannot do that about x equals 0, so let us do it about a more natural value, say x equals 1. To compute this Taylor expansion, we're going to need to start taking some derivatives. For the function log of x, the zeroth-order term is obtained by evaluating at x equals 1, and that gives us simply 0. The derivative of log of x, you may recall, is one over x; evaluating that at x equals 1 gives us a coefficient of 1. Now, taking the derivative of one over x gives us minus x to the negative 2. We evaluate that and continue differentiating, evaluating each term at x equals 1 as we go. After computing sufficiently many derivatives, you start to see a pattern. It requires a little bit of thinking, and a particularly formal way of thinking called induction. But with a bit of effort, one can conclude that the kth derivative of log of x is negative 1 to the k plus 1, times k minus 1 factorial, times x to the negative k. Evaluating that at x equals 1 gives us simply the coefficient: negative 1 to the k plus 1, times k minus 1 factorial.

Now, to get the full Taylor series, we need to divide this coefficient by k factorial. In so doing, we see a not-too-unfamiliar series for log of x: x minus 1, minus the quantity x minus 1 squared over 2, plus x minus 1 cubed over 3, et cetera. The coefficient in front of the degree-k term is negative 1 to the k plus 1, over k, and our summation goes from 1 to infinity. Indeed, if we let h be x minus 1, then we obtain the very familiar series: log of 1 plus h equals the sum, k going from 1 to infinity, of negative 1 to the k plus 1, times h to the k, over k. We've seen that before with an x instead of an h, but it works the same.
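The series can be checked numerically; this short sketch (my own, with the helper name log_taylor) sums the truncation for an input near the expansion point.

```python
import math

# Sketch: the degree-n Taylor polynomial of log(x) about x = 1,
# namely the sum over k from 1 to n of (-1)^(k+1) * (x - 1)^k / k.
def log_taylor(x, n):
    return sum((-1) ** (k + 1) * (x - 1) ** k / k for k in range(1, n + 1))

# Near the expansion point, the truncation matches the true logarithm closely.
print(log_taylor(1.5, 20), math.log(1.5))
```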

Do keep in mind that Taylor series are not guaranteed to converge everywhere. Indeed, if we look at the terms for log of x and take a finite Taylor polynomial, then the higher and higher degree terms only provide a reasonable approximation to log of x within the domain of convergence. We know from our last lesson that that domain is going to be values of x between 0 and 2. Outside of that, these terms provide worse and worse approximations of log of x.
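One can watch this happen in a quick numeric experiment (a sketch of my own): inside the domain of convergence, the error of the partial sums shrinks as terms are added; at x equals 3, it explodes.

```python
import math

# Sketch: errors of partial sums of the log(x) Taylor series about x = 1.
def log_taylor(x, n):
    return sum((-1) ** (k + 1) * (x - 1) ** k / k for k in range(1, n + 1))

# Inside the domain of convergence (0 < x < 2), more terms mean less error.
inside = [abs(log_taylor(1.9, n) - math.log(1.9)) for n in (5, 20, 60)]
# Outside it, the terms (x - 1)^k / k grow without bound and the sums blow up.
outside = [abs(log_taylor(3.0, n) - math.log(3.0)) for n in (5, 20, 60)]
print(inside)
print(outside)
```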

What do we do if we want to approximate log of x outside of this domain? Well, you need to do a Taylor expansion about a different point, someplace close to where you want to approximate.

It's easy to get confused with all of the different notation associated with Taylor series, so let's review. One way to think about the Taylor expansion about x equals a is to write f of x as a series in the quantity x minus a. Another way is to write it as a series in the quantity h, where h is equal to x minus a; in that case, you're coming up with an approximation for f of a plus h. One way to think about this is that the more derivatives of f you know at a point a, the better an approximation you get at x, or at a plus h. This is a perspective that you're going to want to keep with you for the remainder of this course. The principle is that successive polynomial truncations of the Taylor series approximate the function increasingly well. We've seen that before. In this lesson, the point is that where you do the Taylor expansion matters. If you are expanding way over here and trying to get information about way over there, then you're going to need a lot of derivatives to do that. On the other hand, if you approximate about the correct expansion point, you might not need so many derivatives to get the job done. Let's look at an explicit example.

What would it take to estimate the square root of 10? Well, we would have to look at the Taylor series of the function square root of x. If we expand that about a point x equals a, then I'll leave it to you to check that the first few derivatives work out to a Taylor series of: root a, plus one over 2 root a times the quantity x minus a, minus 1 over 8 times the square root of a cubed, times the quantity x minus a squared, plus some higher-order terms in x minus a. Well, to compute the square root of 10, what are we going to do? Let's say I expand about 1, because 1 is a simple value. I know the square root of 1; that is simply 1. That makes the coefficients of the Taylor expansion easy to compute. On the other hand, when I actually try to estimate the square root of 10 based on this, I get 1, plus one half times 9, minus one eighth times 9 squared, or 81, plus higher-order terms. How good of an approximation is that?

Well, that gives me a value of negative 4.625. Now, I know this is not the square root we're looking for; that is a bad approximation. So, what if we were to compute an expansion about x equals 9? This is something for which I also know the square root: the square root of 9 is 3. The coefficients are easy to work with, and if I plug a value of x equals 10 into this Taylor series, I get an approximation of 3, plus one sixth, minus 1 over 216. This gives an approximation of 3.1620-something. The true answer agrees with this up to the first four digits. The expansion point, in this case, certainly matters.
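The comparison is easy to replicate. This sketch (the helper name sqrt_taylor2 is mine) evaluates the second-order truncation of the square-root series at both expansion points.

```python
import math

# Sketch: second-order Taylor truncation of sqrt(x) about x = a:
# sqrt(a) + (x - a)/(2 sqrt(a)) - (x - a)^2 / (8 a^(3/2)).
def sqrt_taylor2(a, x):
    h = x - a
    return math.sqrt(a) + h / (2 * math.sqrt(a)) - h ** 2 / (8 * a ** 1.5)

bad = sqrt_taylor2(1.0, 10.0)   # about a = 1: h = 9 is far too large
good = sqrt_taylor2(9.0, 10.0)  # about a = 9: h = 1 gives 3 + 1/6 - 1/216
print(bad, good, math.sqrt(10))
```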

Now, one thing to be cautious of: if you're computing a Taylor series of a composition, you must expand about the correct values. If you have a function f composed with g, and you want to expand it about some input x, then you must expand g about x. But you must expand f not about x, but about g of x. And it is that term in particular that causes problems. In an explicit example, we'll be able to

see how this works. Compute the Taylor series of e to the cosine of x, about x equals zero. Well, e to the cosine of x is a composition.

What do you do first? First you take the cosine of x; then you exponentiate it. If we are to compute the Taylor series about x equals 0, then we must expand cosine about 0, but not e to the x. Cosine of x about 0 is very simple; this one, we know. However, for the second term, the exponential, we must expand it about an input of 1, because that is what gets fed into it: cosine of 0 is 1. Well, the Taylor series of e to the u about u equals 1 is easy to compute; there's nothing difficult there. However, what we must then do is substitute into it the series for cosine of x. That is, u is 1, minus x squared over 2, plus x to the 4th over 4 factorial, plus higher-order terms. That is a little bit too much algebra to fit on this slide, so I'll leave you to do it. Though I might point out a little bit of help here: this can be rewritten as e times e to the cosine of x minus 1. You'll find that a little bit easier to compute the Taylor series of.
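That algebra can also be done by machine. This sketch (the helper names are mine) multiplies out truncated coefficient lists: it expands e to the u about u equals 1 and substitutes u minus 1 equals cosine of x minus 1, keeping terms through degree 4.

```python
import math

def poly_mul(p, q, order):
    """Multiply coefficient lists (index = degree), dropping terms past `order`."""
    r = [0.0] * (order + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= order:
                r[i + j] += a * b
    return r

ORDER = 4
# cos x about 0, through degree 4: 1 - x^2/2 + x^4/24
cos_series = [1.0, 0.0, -0.5, 0.0, 1.0 / 24]
h = cos_series[:]
h[0] -= 1.0                      # h(x) = cos x - 1, the small input to exp
# e^u about u = 1 is e * sum over k of (u - 1)^k / k!; substitute u - 1 = h(x).
result = [0.0] * (ORDER + 1)
power = [1.0] + [0.0] * ORDER    # running powers h(x)^k, starting with h^0 = 1
for k in range(ORDER + 1):
    for d in range(ORDER + 1):
        result[d] += math.e * power[d] / math.factorial(k)
    power = poly_mul(power, h, ORDER)
# result holds the coefficients of e * (1 - x^2/2 + x^4/6 + ...)
print(result)
```

Note how the degree-4 coefficient, e over 6, mixes the x to the 4th term of cosine with the square of its x squared term; expanding the exponential about the wrong input would scramble every coefficient at once.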
