0:00
Welcome to calculus. I'm professor Ghrist.
We're about to begin lecture 24 on partial fractions.
>> The previous three lessons have solved integrals by means of running
differentiation rules in reverse. We're going to change our perspective now
and give an introduction to methods based on algebraic simplification.
The broadest and most generally useful such method is called the method of
partial fractions. >> In this lesson we'll look at certain
algebraic methods for simplifying integrands.
Indeed, certain integrals respond well to a pre-processing step.
This is especially true for rational functions, that is, ratios of polynomials. For
example, if you are asked to integrate 3x squared minus 5 over x minus 2,
then, naturally, you would perform the polynomial division and rewrite this as
3x plus 6 with a remainder of 7 over x minus 2.
And while the former integral seems intimidating, the latter integral is
simple: 3x integrates to three halves x squared, 6 integrates to 6x, and 7 over x minus 2
yields 7 log of x minus 2; then add the constant.
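Written out in LaTeX notation, that computation is:

    \frac{3x^2 - 5}{x - 2} = 3x + 6 + \frac{7}{x - 2},
    \qquad
    \int \frac{3x^2 - 5}{x - 2}\,dx = \frac{3}{2}x^2 + 6x + 7\ln|x - 2| + C.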
Now, this is a trivial example, but what happens when you have a higher-degree
denominator in your rational integrand? We're going to rely on a certain result
from algebra, which states the following. Consider a rational function P of x over Q of x,
where the degree of the denominator is greater than the degree of the numerator
and for which the denominator has distinct real roots, r sub i.
That is, we're looking at an integrand of the form: a polynomial P
over x minus r1, times x minus r2, etcetera, all the way up to x minus r sub
n, where Q has n distinct real roots. Then
this can be factored, or decomposed, as a sum of constants A sub i over quantity x
minus r sub i. This fact is something that we're going
to take for granted; we're not going to prove it, but we're going to use it
extensively. This integral of P over Q seems very
difficult in general. However, the right-hand side can easily be
integrated: since all of these A sub i's are constants, we simply get
logarithms. Now, what do we do to come up with this
decomposition? How do we determine these constants A sub
i? This is sometimes called the method of
partial fractions.
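Written out in LaTeX notation, the algebraic fact we're relying on is:

    \frac{P(x)}{(x - r_1)(x - r_2)\cdots(x - r_n)}
    = \frac{A_1}{x - r_1} + \frac{A_2}{x - r_2} + \cdots + \frac{A_n}{x - r_n},

so that, upon integrating,

    \int \frac{P(x)}{Q(x)}\,dx = \sum_{i=1}^{n} A_i \ln|x - r_i| + C.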
Let's look at an example: simplify 3x minus 1 over x squared minus 5x minus 6.
We can factor the denominator as quantity x plus 1 times quantity x minus 6.
By our algebraic fact, we know that this must be A1 over x plus 1 plus A2 over x minus 6.
For simplicity, let's call these constants A and B instead.
And now, to determine A and B, we're going to multiply out the right-hand side and put
it over a common denominator, where we'll obtain A times quantity x
minus 6 plus B times quantity x plus 1 in the numerator.
And now we're going to expand that multiplication out and collect terms, so
that the first-degree coefficient is A plus B and the constant coefficient is B
minus 6A. Now, these must match up with the
coefficients of the numerator, 3x minus 1: if we have two polynomials that are the
same, then all of their coefficients must match up.
And so we see we're reduced to two equations in two unknowns, namely A plus
B equals 3 and B minus 6A equals negative 1.
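In symbols, the coefficient matching reads:

    3x - 1 = A(x - 6) + B(x + 1) = (A + B)\,x + (B - 6A)
    \quad\Longrightarrow\quad
    A + B = 3, \qquad B - 6A = -1.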
Now, we can solve such a linear system of equations for A and B.
That might take a bit more work than I'm willing to put on this slide.
So, let me show you a more direct method for computing A and B.
If, as before, we write out 3x minus 1 equals A times x minus 6 plus B times x
plus 1, then, instead of expanding and collecting, we can do the following.
We can say that this is a true statement, and it must be true for all values of x.
Therefore, it must be true when x equals negative 1. What happens when we
substitute that in? On the left, we get negative 4; on the right, we get A
times negative 7 plus B times 0. And we have eliminated that variable from
consideration. Solving, we obtain A equals four sevenths.
Likewise, if we substitute x equals 6 into the above equation, we get on the
left-hand side, 17, and on the right-hand side, A times 0 plus B
times 7. We have therefore eliminated A, and we can conclude that B is seventeen
sevenths. And I'll let you check that these two
satisfy the two equations in the lower left hand corner.
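Carrying out those two substitutions explicitly:

    x = -1:\quad 3(-1) - 1 = -4 = -7A \;\Longrightarrow\; A = \tfrac{4}{7},
    \qquad
    x = 6:\quad 3(6) - 1 = 17 = 7B \;\Longrightarrow\; B = \tfrac{17}{7},

so that

    \frac{3x - 1}{(x + 1)(x - 6)} = \frac{4/7}{x + 1} + \frac{17/7}{x - 6}.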
Now, let's put this method to work. Compute the integral of x squared plus 2x
minus 1 over 2x cubed plus 3x squared minus 2x.
7:24
Factoring the denominator gives us x, quantity 2x minus 1, quantity x plus 2
all multiplied together. Now, we hope to split this up into A
over x plus B over 2x minus 1 plus C over x plus 2.
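That is, we're positing a decomposition of the form:

    \frac{x^2 + 2x - 1}{x(2x - 1)(x + 2)} = \frac{A}{x} + \frac{B}{2x - 1} + \frac{C}{x + 2}.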
Let's color code these and then multiply everything out and see what we get.
A times 2x minus 1 times x plus 2, plus B times x times x plus 2, plus C times x
times 2x minus 1 must equal the numerator of the integrand x squared plus 2x minus
1. Let's use our direct method of
substitution. First we'll substitute in x equals 0.
When we do that, we eliminate all of the terms except the one involving A.
We obtain negative 2A equals negative 1. Solving, we get A equals one half.
Likewise, if we substitute in x equals one half, we eliminate all the terms
except the one involving B, which we solve to get B equals one fifth.
What's our last root? If we substitute in x equals negative 2,
then we obtain a value for C of negative one tenth.
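Explicitly, clearing denominators and substituting each root in turn gives:

    A(2x - 1)(x + 2) + Bx(x + 2) + Cx(2x - 1) = x^2 + 2x - 1,

    x = 0:\;\; -2A = -1 \Rightarrow A = \tfrac{1}{2}, \qquad
    x = \tfrac{1}{2}:\;\; \tfrac{5}{4}B = \tfrac{1}{4} \Rightarrow B = \tfrac{1}{5}, \qquad
    x = -2:\;\; 10C = -1 \Rightarrow C = -\tfrac{1}{10}.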
9:03
Substituting these coefficients back into the integrand gives us, for our
integral, the integral of one half over x, plus one fifth over 2x minus 1, minus
one tenth over x plus 2. These are all going to give logarithms.
We're going to get one half log of x, plus one tenth log of 2x minus 1 (being
careful with the factor of one half that integrating 2x minus 1 introduces), minus one tenth log of x plus 2.
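Putting it all together:

    \int \frac{x^2 + 2x - 1}{2x^3 + 3x^2 - 2x}\,dx
    = \int \left( \frac{1/2}{x} + \frac{1/5}{2x - 1} - \frac{1/10}{x + 2} \right) dx
    = \tfrac{1}{2}\ln|x| + \tfrac{1}{10}\ln|2x - 1| - \tfrac{1}{10}\ln|x + 2| + C.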
Let's put this to use in a differential equation.
Consider the deflection x, as a function of time t associated to a thin beam that
we've applied force to. The question is: how does this deflection
evolve? Well, if the load is proportional to a
positive constant lambda squared, then one simple model for this beam would be
dx dt equals lambda squared x minus x cubed.
We can factor that as x times lambda minus x times lambda plus x.
And now applying the separation method, we get on the left dx over x times lambda
minus x times lambda plus x. On the right, dt.
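In symbols, the model and the separated equation are:

    \frac{dx}{dt} = \lambda^2 x - x^3 = x(\lambda - x)(\lambda + x)
    \quad\Longrightarrow\quad
    \frac{dx}{x(\lambda - x)(\lambda + x)} = dt.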
Integrating both sides prompts the use of a partial fractions method.
I'll leave it to you to show that the coefficients in front of the terms are,
respectively, 1 over lambda squared, 1 over 2 lambda squared and negative 1 over
2 lambda squared. So that, when we perform the integral, we
get a collection of natural logs with coefficients in front: namely, log of x
minus one half log of lambda minus x, minus one half log of lambda plus x
equals lambda squared t plus a constant, where I've moved the lambda squareds to
the right-hand side for simplicity.
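For the record, the decomposition and the integrated result (after multiplying through by lambda squared) are:

    \frac{1}{x(\lambda - x)(\lambda + x)}
    = \frac{1/\lambda^2}{x} + \frac{1/(2\lambda^2)}{\lambda - x} - \frac{1/(2\lambda^2)}{\lambda + x},

    \ln|x| - \tfrac{1}{2}\ln|\lambda - x| - \tfrac{1}{2}\ln|\lambda + x| = \lambda^2 t + C.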
Now, what we would do next is try to solve for x as a function of t. This looks like it's going to involve a
lot of algebra, and I'm not really in the mood, so let's
use linearization and solve for the equilibria to see if we can understand
what's happening. If we plot x dot versus x, we see that
there are three roots. One root at zero, one root at negative
lambda and one root at positive lambda. By looking at the sign of x dot or by
linearizing the right hand side of the equation, we can see that zero is going
to be an unstable equilibrium. That makes physical sense: if you're
compressing the beam and it were perfectly symmetric, then it would not deflect.
However, that situation is inherently unstable, and if you start off with just a
little bit of deflection in either direction, you will quickly move to one
of the stable equilibria at lambda or negative lambda, depending on which side you buckle.
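As a sketch of the linearization mentioned above, writing f for the right-hand side of the equation:

    f(x) = \lambda^2 x - x^3, \qquad f'(x) = \lambda^2 - 3x^2,
    \qquad
    f'(0) = \lambda^2 > 0 \;(\text{unstable}), \qquad f'(\pm\lambda) = -2\lambda^2 < 0 \;(\text{stable}).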
Now, there are some complications
involved with applying the method of partial fractions.
We've assumed existence of real distinct roots.
That is a bit of a luxury; in general, you don't have that.
Sometimes you have repeated roots, something of the form of polynomial over
x minus root r to the nth power. This decomposes but it decomposes into
something a bit more complicated: there are n terms, each a constant
over x minus r raised to a power ranging from one up to n.
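In symbols, the repeated-root decomposition takes the form:

    \frac{P(x)}{(x - r)^n} = \frac{A_1}{x - r} + \frac{A_2}{(x - r)^2} + \cdots + \frac{A_n}{(x - r)^n}.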
Now, all of these individual terms are integrable, but some might take a little
more work than others. What is potentially worse is the case of complex
roots, where you don't necessarily have real roots.
Let's say, we look at a polynomial over a quadratic, ax squared plus bx plus c.
Then if the discriminant b squared minus four ac is negative, we have complex
roots. This does break up, but it breaks up into
a linear polynomial in the numerator and then the quadratic in the denominator.
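Schematically, such an irreducible quadratic factor contributes a term of the form

    \frac{Bx + C}{ax^2 + bx + c}, \qquad b^2 - 4ac < 0,

with constants B and C to be determined.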
And if you have repeated complex roots, well then it gets even more difficult
still. All of these examples are doable, but
very cumbersome. And I'm not going to test you on these in
this course; in the end, it's really just algebra.
14:43
And in the end, it's really just roots; the roots control everything, whether
they're real, whether they're complex, whether they're distinct or repeated.
As we saw in the example of the buckling beam, the roots have physical
interpretations in differential equation models.
Always pay attention to your roots, not just in calculus when trying to solve an
integral, but in differential equations, or even later still when you take linear
algebra; you will see that the roots influence everything.
>> We've seen in this lesson how algebraic methods can be incredibly
helpful. And incredibly complicated.
We're not going to descend any further into that underworld of integration
techniques. Rather, we're going to turn our gaze to a
new character in our story: that of the definite integral.