
In this lesson, we'll continue with our applications of discrete calculus to the differential, this time in the context of the definite integral. Instead of using the fundamental theorem of integral calculus, we sometimes need to take a different approach since, as you know, some integrals are hard to compute. However, even the very definition of the definite integral in terms of a limit of Riemann sums invokes the language of discrete calculus, using what looks like a forward difference. Let's explore how sequences enter the picture.

Let's assume that one is given a sequence of x values, x_n, going from, say, x_0 to x_N. This bounds our domain of integration. Then one is given a sequence f of function values, f_n, thought of as a sample of some smooth or continuous function f. Now, the question is: given only these values, how can we approximate the integral, even though we do not know the actual function f but only the sample values? Well, of course, we can't get the exact answer, but we can approximate.

The simplest approximation is the Riemann sum itself. If we consider our sequence of x values and our sequence of f values as determining the partition and the function values, then we can write out a Riemann sum of the form: the sum, as n goes from 0 to N - 1, of f_n times (Δx)_n, the forward difference of x. We might call this the left Riemann sum, because if we take a partition starting with x_0 and use the forward difference of our sequence x, then the sample point we have chosen on each subinterval is the leftmost point. Notice that the sum only goes up to N - 1; we do not include the right endpoint.

Now, we can reverse things and use what we might call a right Riemann sum, ignoring the first sample point at x_0 and summing up all the rest using the backward difference of x.
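As a concrete sketch, here are the two sums in Python (the function names are ours, not from the lesson); nothing here assumes a uniform grid, so the partition x may be arbitrary:

```python
def left_riemann(x, f):
    """Sum of f_n * (x_{n+1} - x_n) for n = 0 .. N-1: forward differences,
    leftmost sample point on each subinterval, right endpoint excluded."""
    return sum(f[n] * (x[n + 1] - x[n]) for n in range(len(x) - 1))

def right_riemann(x, f):
    """Sum of f_n * (x_n - x_{n-1}) for n = 1 .. N: backward differences,
    rightmost sample point on each subinterval, left endpoint excluded."""
    return sum(f[n] * (x[n] - x[n - 1]) for n in range(1, len(x)))
```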

As approximations, these Riemann sums are not terribly good. On any given subinterval, you're probably going to estimate too low or too high. There's a better way to do things. Instead of using rectangles, one could use trapezoids. This is the basis for what is called the trapezoid rule of numerical integration. What's the area of one of these trapezoidal elements? Well, of course, it's the same as taking the midpoint and evaluating the area of the rectangle defined thereby. When one works out what that means in terms of the sequences, it is the midpoint height, that is, f_n plus one-half (Δf)_n, times the width (Δx)_n, where here we're using the forward difference. If we sum this up over all of the subintervals, as n goes from 0 to N - 1, then one obtains a much better estimate for the definite integral of f.
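In symbols, the trapezoidal element and its sum read:

```latex
\int_{x_0}^{x_N} f\,dx \;\approx\;
\sum_{n=0}^{N-1}\left(f_n + \tfrac{1}{2}(\Delta f)_n\right)(\Delta x)_n
\;=\; \sum_{n=0}^{N-1}\frac{f_n + f_{n+1}}{2}\,(x_{n+1} - x_n),
```

where (Δf)_n = f_{n+1} - f_n is the forward difference of the sequence f.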

A few remarks are in order. First, the trapezoid rule is really just the average of the left and right Riemann sums. Second, if you're in the context of a uniform grid, that is, the spacing between the x values is a constant h, then the trapezoid rule takes on a very nice form. It is this step size h times the following weighted sum of the function values f: the first point, f_0, is weighted with coefficient one-half; the last point, f_N, likewise; all of the other interior sample points have weight one, so one simply takes the sum. This is a very simple formula, both to remember and to use.
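A sketch of that uniform-grid formula in Python (the name trapezoid_uniform is ours):

```python
def trapezoid_uniform(f, h):
    """h * (f_0/2 + f_1 + ... + f_{N-1} + f_N/2) on a uniform grid of step h."""
    return h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])
```

Since the trapezoid rule interpolates with line segments, it is exact on linear functions, as the test below with f(x) = x illustrates.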

There's an even better approximation called Simpson's rule, whose derivation tends to go unexplained in most calculus courses, but we're going to take the time and do it. Simpson's rule is a third-order method. That means we approximate the integrand f by a polynomial of degree three. How do you approximate a function with a polynomial? Well, you already know the answer to that: we do a Taylor expansion. Let's set things up so that we're expanding about x = 0 for convenience. We keep all terms of order three and below and pack everything else into big O of x to the fourth. We're going to focus on the range as x goes from -h to h, where h is our step size. Therefore, the higher-order terms can be written as big O of h to the fourth. That's going to be important later.

Now, what do we want to do? We want to integrate from -h to h, and so we can integrate each term in this Taylor expansion. Keeping in mind that these derivatives are all evaluated at zero, and hence constant, we see that, because we're integrating over a symmetric domain, all of the odd-degree terms in x integrate out to give 0. We're left with a much simpler integral. How do we do that? Well, the first term is a constant, f evaluated at 0; that integrates to that constant times x. The second term has an x squared in it; that integrates to x cubed over 3, with the constants out in front. The error term, big O of h to the fourth, when integrated from -h to h, still gives us something in big O of h to the fourth. Now, when we evaluate from -h to h, we can simply evaluate from 0 to h and pull out a factor of 2 if we like. This gives us an expression for the integral that requires knowing the function value at 0 and the second derivative of the function at 0.
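Collecting the steps just described:

```latex
\int_{-h}^{h} f\,dx
= \int_{-h}^{h}\left(f(0) + f'(0)\,x + \tfrac{1}{2}f''(0)\,x^2 + \tfrac{1}{6}f'''(0)\,x^3 + O(x^4)\right)dx
= 2h\,f(0) + \frac{h^3}{3}\,f''(0) + O(h^4).
```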

Now, the first term is going to be easy. If I'm given sample points, say at x_n, x_{n-1}, and x_{n+1}, then evaluating at the middle point, that is, at x_n, is simple: that function value is simply f_n. But how do we estimate the second derivative of our integrand when all we know is the function value where we are and to the left and the right? Well, we have to be a little bit careful here. Remember, the second derivative is d²f/dx². To estimate that, we'll use the second central difference of f in the numerator, and in the denominator, we'll use (Δx)_n squared. Of course, that Δx is our uniform step size h, and so we get an h squared in the denominator. Applying the second central difference, we obtain, after a little bit of algebraic simplification, an estimate for the integral from -h to h as h times the quantity one-third f_{n-1} plus four-thirds f_n plus one-third f_{n+1}. That is the basis of Simpson's rule.
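Explicitly, with the Taylor expansion centered at the middle sample point x_n and the second central difference standing in for the second derivative, that simplification is:

```latex
\int_{-h}^{h} f\,dx \;\approx\; 2h\,f_n + \frac{h^3}{3}\cdot\frac{f_{n+1} - 2f_n + f_{n-1}}{h^2}
\;=\; h\left(\tfrac{1}{3}f_{n-1} + \tfrac{4}{3}f_n + \tfrac{1}{3}f_{n+1}\right).
```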

That's a lot of work, but let's see how it all comes together.

We're going to assume that our integration domain is divided into equally spaced subintervals of width h, set up so that there is an even number of these subintervals, so that we can apply Simpson's rule on adjacent pairs. On each such pair of subintervals, we weight the function values by the coefficients one-third, four-thirds, and one-third that we derived in the previous few slides. Now, we do this on all pairs of adjacent subintervals and then add the results of these integrals together, being careful to account for what happens on the overlaps. When we do so, we obtain a system of weights for the function values, and this comprises Simpson's rule: we can approximate the integral of f as h, the width, times one-third f_0, plus four-thirds f_1, plus two-thirds f_2, and so on, with four-thirds and two-thirds alternating, until the very end, where the last point f_N has weight one-third.
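Composite Simpson's rule in Python (a sketch; the name simpson_uniform is ours). The slice f[1:-1:2] picks out the odd-indexed interior points (weight 4/3) and f[2:-1:2] the even-indexed ones (weight 2/3):

```python
def simpson_uniform(f, h):
    """h * (f_0/3 + 4f_1/3 + 2f_2/3 + ... + 4f_{N-1}/3 + f_N/3);
    the number of subintervals, len(f) - 1, must be even."""
    if (len(f) - 1) % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    return h / 3 * (f[0] + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2]) + f[-1])
```

Consistent with the degree-three approximation behind it, the rule is exact on cubics; the test below integrates x³ over [0, 1] and recovers 1/4 exactly.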

Let's summarize our integration methods under the assumption of a uniform grid of step size h. In this case, all of the methods can be described in terms of the weights placed on the sampled function values f_n. In the left and right Riemann sums, we simply ignore one of the endpoints, whereas in the trapezoid rule, we're averaging things together. Simpson's rule, as we saw, has a more complicated system of weights.

Now, all of these methods have an order associated with them. We used a third-order curve to approximate the integrand in Simpson's rule. The trapezoid rule is a first-order method: we used line segments. The left and right Riemann sums are zeroth-order methods: we just took constants and built rectangles from them. Because all of these methods really use a Taylor series to approximate the integral, they all have an error term that can be expressed in big O of the step size h, and one can describe the accuracy of the method in terms of big O. Here, Simpson's rule is best, with error in big O of h to the fourth. The trapezoid rule: big O of h squared. The left and right Riemann sums have error in big O of h.
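These orders can be checked empirically: halving h should cut the trapezoid rule's error by about 2² = 4. A minimal sketch in Python (function names are ours):

```python
import math

def trapezoid(g, a, b, n):
    """Trapezoid rule for g on [a, b] with n equal subintervals."""
    h = (b - a) / n
    fs = [g(a + k * h) for k in range(n + 1)]
    return h * (0.5 * fs[0] + sum(fs[1:-1]) + 0.5 * fs[-1])

# Errors in integrating 1/x over [1, e] (true value 1) at step h and h/2:
e1 = abs(trapezoid(lambda x: 1 / x, 1.0, math.e, 8) - 1.0)
e2 = abs(trapezoid(lambda x: 1 / x, 1.0, math.e, 16) - 1.0)
ratio = e1 / e2   # close to 4, consistent with an O(h^2) error term
```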

Let's put these methods to the test in the context of an example whose answer we already know. Consider the integral of 1/x as x goes from 1 to e. This is, of course, log of e minus log of 1, that is, 1. If we take our domain and divide it into six equally spaced subintervals, computing the sample points at each, then how do we proceed? Well, in this case, h is (e - 1)/6. For the left Riemann sum, we multiply the sample values f_n by this h and then add them all up, not taking that last endpoint. When we do so, we get an answer that is not too bad: it's within 10% of the correct answer. If we do a right Riemann sum, then, because of the convexity of the curve 1/x, we get a very different value. It is under the true value instead of over, again by about 10%. The trapezoid rule is a great improvement over these methods, returning an answer within 1% of the correct answer. And notice that we're using almost exactly the same values, just balancing out what happens at the left and right endpoints. But Simpson's rule beats them all. When we use that system of weights, those third-order approximations return a value of the integral that is much better than any of the other methods we have studied.
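The whole comparison can be reproduced in a few lines of Python, a sketch under the lesson's setup: six equal subintervals, samples of 1/x on [1, e], true value 1:

```python
import math

n = 6
h = (math.e - 1) / n
f = [1 / (1 + k * h) for k in range(n + 1)]   # samples f_n of 1/x

left  = h * sum(f[:-1])                                 # over, roughly 10%
right = h * sum(f[1:])                                  # under, roughly 10%
trap  = h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])   # within 1%
simp  = h / 3 * (f[0] + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2]) + f[-1])
```

Running this shows exactly the ranking described above: Simpson's estimate beats the trapezoid rule, which in turn beats both Riemann sums.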
