0:00

Welcome to Calculus. I'm Professor Ghrist, and we're about to begin Lecture 9, Bonus Material.

In our main lesson, we considered growth rates of functions and ordered them in a hierarchy as x goes to infinity. This led to a language for discussing and controlling that growth, both as x goes to infinity and as x goes to 0. This language is called Big-O.

Before we get to that, there's one thing we need to reconsider a little bit. One of the things we discussed was the factorial function, or factorial growth. But what exactly do we mean by factorial functions? What is x factorial when x is not an integer? Well, there is a real definition for it. It's going to look a little complicated: the definition of x factorial is the integral, as t goes from zero to infinity, of t to the x times e to the minus t, dt. Oh, what in the world is that? Well, we're not going to explain that now; we'll get to it later in this course, maybe in Chapter Three. But for the moment, one thing to know is that this definition satisfies the property that factorials have, namely that x factorial equals x times quantity x minus 1 factorial, at least for x positive. And also, n factorial for an integer n agrees with this definition. Now, I'm not going to prove that now; we'll talk about that later in the course.
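If you'd like to see this definition in action, here is a small Python sketch (my addition, not part of the lecture). It leans on the fact that the integral above is the gamma function shifted by one, so `math.gamma(x + 1)` computes x factorial; it also checks the integral directly for x = 3 with a crude Riemann sum.

```python
import math

# x! as defined by the integral: math.gamma computes
# Gamma(x) = integral of t^(x-1) e^(-t) dt, so x! = Gamma(x + 1).
def fact(x):
    return math.gamma(x + 1)

# Agrees with the ordinary factorial at integers:
for n in range(6):
    assert math.isclose(fact(n), math.factorial(n))

# Satisfies x! = x * (x - 1)!, even at non-integer x:
x = 2.5
assert math.isclose(fact(x), x * fact(x - 1))

# Crude left-endpoint Riemann sum of the integral for x = 3
# (truncated at t = 50, where the integrand is negligible):
total, dt, t = 0.0, 0.001, 0.001
while t < 50.0:
    total += t**3 * math.exp(-t) * dt
    t += dt
print(total)  # close to 3! = 6
```

The step size and truncation point here are arbitrary choices; any reasonably fine quadrature gives a value near 6.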

What I want to argue is that this factorial growth beats exponential growth. In the case where x is an integer, a positive integer, we get x factorial being the familiar x times x minus 1 times x minus 2, et cetera, all the way down. If we look at e to the x, well, this is some number of copies of e multiplied together. Now, for values of x very large, we see that terms in the numerator and terms in the denominator pair perfectly, and almost all of the terms in the numerator are much bigger than the terms in the denominator. This means that in the limit, as x goes to infinity, the ratio x factorial over e to the x is infinite. That means that x factorial grows much faster than e to the x.
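You can watch this happen numerically; here is a quick check (my own illustration) of the ratio n factorial over e to the n for a few values of n.

```python
import math

# x!/e^x: pair the factors x, x-1, x-2, ... in the numerator with the
# copies of e in the denominator; almost every numerator factor is bigger.
ratios = [math.factorial(n) / math.e**n for n in (5, 10, 20, 30)]
print(ratios)  # the ratio blows up as n grows
```

Already at n = 30 the ratio is astronomically large, consistent with the limit being infinite.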

If you're curious about this unusual sort of definition, look up information about the gamma function. We'll return to that a little bit later in this course.

What I'd like to spend some time on is a bit of discussion about Big-O. How is this language used? Why is this language used? It is ubiquitous in applied mathematics, and you're going to see it at various points throughout this course. So let's take some time to review and extend what we know.

Recall that a function f is said to be in Big-O of the function g if, in the appropriate limit, f in absolute value is less than g in absolute value times some constant C. We don't care what the constant is; we just want to know that there is some upper bound.

Again, this holds in the limit as x is going to wherever we're interested in looking: maybe x is going to zero, or maybe x is going to infinity. One of the reasons Big-O is so useful is that it forms an efficient language. For example, saying f is in Big-O of 1 means the same thing as saying that f is bounded in the limit, as x is going to zero or infinity or whatever.

There are other applications as well, real applications. For example, in computer science one is often interested in specifying how much effort it takes to execute a certain algorithm. This is the subject of computational complexity. Consider the following.

Let's say you have two n-digit numbers and you want to multiply them together. How much work are you going to have to do? How many individual mathematical computations is that going to take? Well, let's consider what it takes to multiply two numbers together using the standard algorithm that you learned in primary school, and let's count this as a function of n, considering what happens as n is going to infinity. To multiply these two numbers together, I need to take each digit of the second number and multiply it by the entire first number. That is n elementary mathematical operations. Well, maybe it's not n; maybe we should call it 2n, because of the fact that you have to multiply 6 times 3, 18, write the 8, carry the 1, et cetera. We'll pick some constant times n. But we need to do that for each of the n digits in the second number. Now, we're still not done, because we need to take all of those resulting products and add them together. And I'll let you try to figure out how much effort is involved in that addition. It's, you know, some number times n, times maybe n, maybe n minus 1, I don't know. I'm not a computer scientist, I'm a mathematician. But neither I nor the computer scientist needs to worry about these exact constants. If all we care about is what happens as n is going to infinity, we can say that this algorithm is in Big-O of n squared. It's going to take a quadratic amount of effort. So, as your numbers get bigger and bigger, you're doing a lot more work to multiply them together.
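The counting argument above can be sketched in code. The particular constants here, two operations per digit-by-digit multiply and partial products about 2n digits long, are rough modeling assumptions of mine; the point is only that the ratio of operations to n squared settles at a constant.

```python
def op_count(n):
    # Model the primary-school algorithm for two n-digit numbers.
    ops = 0
    for _ in range(n):        # each digit of the second number...
        for _ in range(n):    # ...against each digit of the first:
            ops += 2          # one multiply plus carry handling (assumed cost)
    ops += n * (2 * n)        # then add n partial products, ~2n digits each
    return ops

for n in (10, 100, 1000):
    print(n, op_count(n), op_count(n) / n**2)  # ratio is constant (4 here)
```

With these modeling choices the count is exactly 4n², but any other reasonable constants would still leave the algorithm in Big-O of n squared.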

Is there an algorithm that is better, that is, say, n times log n? Well, that is the kind of question Big-O is good for.

Error analysis in engineering, physics, and other sciences is another excellent application. Let's say that you know the kinetic energy of your body is one half the mass times the velocity squared, but let's say that your measurement of the velocity has some error associated with it. Epsilon, we'll call that error, and we'll assume that epsilon is very small. What is the induced error in our kinetic energy? Well, of course, if we expand out one half m times quantity v plus epsilon squared, we get one half m v squared, plus m times v times epsilon, plus something that is in Big-O of epsilon squared. What we really care about is the leading-order term in epsilon. That is going to dictate what the error in the kinetic energy is, and we can see that it is really the momentum, m times v, in this case.
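A short numerical check of this expansion, with an arbitrary mass and speed of my choosing:

```python
m, v = 70.0, 10.0              # hypothetical mass (kg) and speed (m/s)

def KE(u):
    return 0.5 * m * u**2      # kinetic energy (1/2) m v^2

for eps in (0.1, 0.01, 0.001):
    induced = KE(v + eps) - KE(v)   # error induced by measuring v + eps
    leading = m * v * eps           # leading-order term: momentum times eps
    print(eps, induced - leading)   # remainder is (1/2) m eps^2, i.e. O(eps^2)
```

Shrinking epsilon by a factor of 10 shrinks the remainder by a factor of 100, exactly the Big-O of epsilon squared behavior.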

Now, other examples are relevant to what we began this lesson with. Let's consider x factorial. We argued that its growth rate is pretty big, that x factorial is not in Big-O of e to the x. But we can say more: there is a wonderfully useful and powerful formula in mathematics called Stirling's Formula that says exactly how fast x factorial grows. One form of this says that log of x factorial equals x times log of x, minus x, plus some remainder term that is in Big-O of log of x. Perhaps the stronger version of Stirling's Formula will be a bit more transparent. It says that x factorial equals square root of 2 pi x, times quantity x over e to the x power, times 1 plus something in Big-O of 1 over x.
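Here is a numerical look at how good Stirling's Formula is, using `math.lgamma` to get log of n factorial without overflow; comparing against the known fact that the gap behaves like 1 over 12n is my addition.

```python
import math

# Compare log(n!) with the sharper Stirling approximation in log form:
# log(n!) ~ (1/2) log(2 pi n) + n log(n/e), with a gap of about 1/(12 n).
for n in (10, 100, 1000):
    exact = math.lgamma(n + 1)   # log(n!)
    approx = 0.5 * math.log(2 * math.pi * n) + n * math.log(n / math.e)
    print(n, exact - approx, 1 / (12 * n))
```

The gap shrinks toward zero, which is the 1 plus Big-O of 1 over x factor made visible.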

Now, you don't need to memorize either of these. They're wonderfully useful in probability, in combinatorics, in lots of areas. But what I want you to see is that the appropriate language for expressing how fast x factorial grows, as x goes to infinity, is Big-O.

Pure mathematics uses Big-O as well. If we define the function pi of x to be the number of prime numbers less than x (let's restrict to the positive primes, of course, and x is going to be, let's say, integer valued; it doesn't matter), then one thing we could say is that the number of primes is in Big-O of x. But that's a vacuous statement. Not every integer is prime, and we'd like some idea of how many primes there are. Well, of course there are infinitely many primes, but we'd like to know a bit finer information: what kind of density do the prime numbers have within the natural numbers? Well, we can use the language of Big-O to state the prime number theorem, which says that pi of x is, to leading order, x over log of x. That's not exactly what it is, it's just the leading-order term, but pi of x is x over log x times quantity 1 plus something in Big-O of 1 over log of x.
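You can see the prime number theorem emerging numerically; this sketch (my addition) counts primes with a sieve of Eratosthenes and compares against x over log x.

```python
import math

def count_primes(N):
    # Sieve of Eratosthenes: pi(N) = number of primes <= N
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(N**0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, N + 1, p):
                is_prime[q] = False
    return sum(is_prime)

for N in (10**3, 10**4, 10**5, 10**6):
    ratio = count_primes(N) / (N / math.log(N))
    print(N, ratio)   # the 1 + O(1/log x) factor, drifting slowly toward 1
```

The drift toward 1 is slow, and that slowness is exactly what the Big-O of 1 over log x correction term is capturing.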

And that's one example of how this language appears in everything from pure mathematics to applied mathematics, to the physical and computational sciences. It'll take a little while to get used to this language, and it's going to take a bit of practice. Most students find it a bit unlike things that they've learned in the past, because of this arbitrary constant. But do a few practice problems, and you'll see Big-O cropping up later in the course.
