From the course by University of Minnesota

Statistical Molecular Thermodynamics

This introductory physical chemistry course examines the connections between molecular properties and the behavior of macroscopic chemical systems.

From the lesson

Module 6

This module introduces a new state function, entropy, that is in many respects more conceptually challenging than energy. The relationship of entropy to extent of disorder is established, and its governance by the Second Law of Thermodynamics is described. The role of entropy in dictating spontaneity in isolated systems is explored. The statistical underpinnings of entropy are established, including equations relating it to disorder, degeneracy, and probability. We derive the relationship between entropy and the partition function and establish the nature of the constant β in Boltzmann's famous equation for entropy. Finally, we consider the role of entropy in dictating the maximum efficiency that can be achieved by a heat engine based on consideration of the Carnot cycle. Homework problems will provide you the opportunity to demonstrate mastery in the application of the above concepts.

- Dr. Christopher J. Cramer, Distinguished McKnight and University Teaching Professor of Chemistry and Chemical Physics


Alright, well, I think it's time for us to try to connect entropy with the partition function. So remember that we've derived partition functions for monatomic, diatomic, and polyatomic ideal gases, and shown that there are many properties of ideal gases that we can compute directly from these partition functions. Let's make a connection between entropy and the partition function. And so, let me remind you that in an earlier video this week we established that the entropy of an ensemble was Boltzmann's constant times A log A, minus the sum over j of a_j log a_j, where capital A was how many systems there are in the ensemble and little a_j was the number of systems in state j.
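In symbols (writing $k_B$ for Boltzmann's constant), that ensemble entropy is

$$S_{\text{ensemble}} = k_B \Big( A \ln A - \sum_j a_j \ln a_j \Big).$$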

So, those being the definitions, you can also talk about the average entropy of a given system, and that is just going to be the total entropy of the ensemble divided by the number of systems. You can also talk about the probability p_j of choosing a system in state j, and that is how many there are in state j divided by how many there are total. So, in that case, I could just say a_j is equal to p_j times capital A.
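That is,

$$\langle S \rangle = \frac{S_{\text{ensemble}}}{A}, \qquad p_j = \frac{a_j}{A} \;\Longrightarrow\; a_j = p_j A.$$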

So let me substitute those expressions into the ensemble entropy; that is, I'll go from little a's to little p's times capital A. So I get A log A minus this expression. I'll expand this out a bit, so multiply through: Boltzmann's constant times A log A, minus k times the sum over j of p_j A log(p_j A). This is a log of a product, so I'll just keep playing this game: the log of a product is a sum of logarithms. So I'll get a p log p term and I'll get a log A term, and A is just a number, it's a constant. So what comes out is k A log A times a sum over j of all the probabilities. But that sum is just the number 1: if I consider the probability over all of the systems, that adds up to 1, because I will pick a system. So the first term, k A log A, and the last term, k A log A, those drop out, and I'm left with: the entropy of the ensemble is minus k times A times the sum over j of p_j log p_j.
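Spelled out, that expansion is

$$S_{\text{ensemble}} = k_B \Big( A \ln A - \sum_j p_j A \ln(p_j A) \Big) = k_B A \ln A - k_B A \sum_j p_j \ln p_j - k_B A \ln A \sum_j p_j = -k_B A \sum_j p_j \ln p_j,$$

where the first and last terms cancel because $\sum_j p_j = 1$.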

Okay, and that just expresses, yes indeed, that the sum of all those probabilities is one. Now, if I were to divide both sides by A, this A drops out on this term, and the entropy of the ensemble divided by the total number of systems in the ensemble, that's the average system entropy. And so the system entropy is equal to minus k times the sum over the individual states of p log p. So this is another way to write entropy. We've seen a lot of ways to write entropy: k log W, k log Ω, and here we have minus k times the sum of p log p. This is the probability form of the entropy.
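Dividing through by $A$, then, the per-system (probability) form is

$$S = -k_B \sum_j p_j \ln p_j.$$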

And a couple of things: if you're worried about the fact that the probability could go to zero, and the log of zero is negative infinity, and that doesn't seem very good, you can actually use L'Hôpital's rule to establish that in the limit as x goes to zero, x times log x is equal to 0. It does not go to negative infinity, so that's nice.
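That limit works out as follows:

$$\lim_{x \to 0^+} x \ln x = \lim_{x \to 0^+} \frac{\ln x}{1/x} = \lim_{x \to 0^+} \frac{1/x}{-1/x^2} = \lim_{x \to 0^+} (-x) = 0.$$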

You'll also see that if all the probabilities are 0 except for one, then for that single one it'll be 1 times the log of 1, and the log of 1 is 0, so I'll get that the entropy is 0. And that's what I expect, right? There's no disorder if everything is one thing. In addition, you can show (you'd have to use calculus to show this, plus the special little trick that only n minus 1 of the probabilities are independent, since that last probability depends on all the others) that the entropy is maximized when all the probabilities are equal for all possible states. If you play around with that, you might be able to prove it to yourself.
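One way to set up that calculus (a sketch, using a Lagrange multiplier to enforce $\sum_j p_j = 1$ over $n$ states): extremize $-k_B \sum_j p_j \ln p_j + \lambda \big( \sum_j p_j - 1 \big)$. Setting the derivative with respect to each $p_j$ to zero gives $-k_B(\ln p_j + 1) + \lambda = 0$, so every $p_j$ must be the same, and the constraint then forces $p_j = 1/n$.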

But what I want to focus on now is: remember, in the N, V, beta ensemble, or N, V, T (remember that beta is just 1 over kT), we had a way to define the probability. It's e to the minus beta times the energy E_j, divided by the partition function, which is the sum over all possible exponentials, all possible energies that is. And so if I now swap that in for p, I get: entropy is equal to minus Boltzmann's constant, sum over j; here's my probability, and here's the log of my probability. And this is the log of a quotient, so I'll take a difference of logs. So I get minus k_B, here's this prefactor term, and the log of an exponential. That just annihilates both those functions, and I'm just left with the argument of the exponential, minus beta E_j. And then meanwhile, I've got a minus log Q here, minus log Q.
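With the canonical probability $p_j = e^{-\beta E_j}/Q$, where $Q = \sum_j e^{-\beta E_j}$, that substitution reads

$$S = -k_B \sum_j p_j \ln\!\Big( \frac{e^{-\beta E_j}}{Q} \Big) = -k_B \sum_j p_j \big( -\beta E_j - \ln Q \big).$$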

So, given this expression for the entropy, I can manipulate it a little bit more. So recall that here I've got a probability, and here I've got a beta, so that's a 1 over kT. So if I pull all this out front, the k's will cancel and the negative signs cancel, and I'm left with a 1 over T, and here's this energy term. Meanwhile, I've got a k times a log Q, and an e to the minus beta E_j over Q from this term. So you can do the algebra yourself. But what is this, the sum of the probability-weighted energies? That is the internal energy; that defines the internal energy. Meanwhile, what's this? The sum of e to the minus beta E_j over all possible j's, that's the partition function Q. So this Q cancels this Q. So I have this relatively simple expression: S is equal to U over T plus k log Q.
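Carrying the algebra through, with $\beta = 1/k_B T$ and $U = \sum_j p_j E_j$:

$$S = k_B \beta \sum_j p_j E_j + k_B \ln Q \sum_j p_j = \frac{U}{T} + k_B \ln Q.$$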

Now let me write that in a slightly more traditional form, which recognizes that U depends on the partition function, as we've already derived. So we get: S is equal to kT times the partial of log Q with respect to T, plus k log Q. So: probability-weighted energy is the internal energy; that was a key step we used. And this sum is equal to the partition function; another key step we used.
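That is, using the earlier result $U = k_B T^2 \left( \partial \ln Q / \partial T \right)_{N,V}$,

$$S = k_B T \left( \frac{\partial \ln Q}{\partial T} \right)_{N,V} + k_B \ln Q.$$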

And the take-home message, which is particularly important, is that entropy can be computed directly from the partition function, just as we have been successful in doing with internal energy, with pressure, and with heat capacity.

So let me consider, then, the entropy of a monatomic ideal gas. So remember, this is Q, capital Q, for a monatomic ideal gas: it's got something coming from translation, it's got something coming from the electronic degeneracy of the ground state, and it's got an N factorial term.
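This is the form derived earlier in the course (writing $g_{e1}$ for the ground-state electronic degeneracy):

$$Q(N, V, T) = \frac{1}{N!} \left[ \left( \frac{2\pi m k_B T}{h^2} \right)^{3/2} V \, g_{e1} \right]^N.$$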

So, if I take the partial derivative of the log of Q with respect to T, well, when I take the log, all these things will separate out, because logs take products and quotients and make individual terms. The only thing that'll be left with a T in it is the log of T, and there's a 3N over 2 power, so 3N over 2 will come out. I'll get the derivative of log T with respect to T, and that's 1 over T. So there's the log term I need to worry about.
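Since $T$ enters $\ln Q$ only through the $T^{3N/2}$ factor,

$$\left( \frac{\partial \ln Q}{\partial T} \right)_{N,V} = \frac{3N}{2} \cdot \frac{\partial \ln T}{\partial T} = \frac{3N}{2T}.$$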

Meanwhile, log Q itself takes a little longer to work with. When I take this log, it's convenient to remember that there's a 1 over N factorial term here, so I'll put that over here as minus log N factorial. I'm going to take this N out of these two exponents and multiply the logarithm by it, and just leave behind this argument. I'll use Stirling's approximation to simplify the log factorial term, and then I will take this log of N: it's minus N log N. Well, I've got a log minus another log, so I can divide by N inside the logarithm; they're both already multiplied by N. So that's why N appears here in the denominator: I've put it underneath the volume. So this is a convenient way to have the log expressed.
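Written out, using Stirling's approximation $\ln N! \approx N \ln N - N$:

$$\ln Q = N \ln\!\left[ \left( \frac{2\pi m k_B T}{h^2} \right)^{3/2} V g_{e1} \right] - \ln N! \approx N \ln\!\left[ \left( \frac{2\pi m k_B T}{h^2} \right)^{3/2} \frac{V}{N} \, g_{e1} \right] + N.$$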

Because now I can work with this expression for the entropy. Here's my partial of log Q with respect to T that I'm going to need, and here's my log Q that I'm going to need on this side. I've run out of space on this slide, so let me try to pack all that back in on another slide to finish the derivation. If I take the molar entropy, that is, my N values here are going to be Avogadro's number.

Well, then I will get a k times log Q. Well, here's N as Avogadro's number, so that's going to introduce some R's. So, if you carry the multiplication all the way out, here's the R times the log of all this quantity: I'll get a k times Avogadro's number. Another factor of R, so that's just a plain old R sitting off by itself. What do I get here? I get 1 over T multiplying kT; the T's go away. So I get Boltzmann's constant times 3 times Avogadro's number over 2. Well, Avogadro's number times Boltzmann's constant is R, so I get three halves R. We get three halves R from this term, and another factor of R that came from this term; that's where this five halves R comes from. And meanwhile, the remaining piece here is this part of log Q being multiplied times k, using Avogadro's number. So Avogadro's number is now what appears here in the denominator.
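Putting the pieces together for one mole ($N = N_A$, $k_B N_A = R$, $U = \tfrac{3}{2}RT$), the molar entropy is

$$\bar{S} = \frac{5}{2} R + R \ln\!\left[ \left( \frac{2\pi m k_B T}{h^2} \right)^{3/2} \frac{\bar{V}}{N_A} \, g_{e1} \right],$$

which, for $g_{e1} = 1$, is the Sackur-Tetrode equation. As a numerical sanity check, here is a short Python sketch of that formula; the choice of argon at 298.15 K and 1 bar is an illustrative example, not from the lecture:

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23    # Boltzmann's constant, J/K
h = 6.62607015e-34    # Planck's constant, J*s
N_A = 6.02214076e23   # Avogadro's number, 1/mol
R = k_B * N_A         # Gas constant, J/(mol*K)

def molar_entropy(m, T, V_molar, g_e1=1):
    """Molar entropy of a monatomic ideal gas from the partition function.

    m       -- mass of one atom (kg)
    T       -- temperature (K)
    V_molar -- molar volume (m^3/mol)
    g_e1    -- ground-state electronic degeneracy
    """
    # Translational piece per particle: (2*pi*m*k_B*T/h^2)^(3/2) * (V/N_A) * g_e1
    q_over_N = (2 * math.pi * m * k_B * T / h**2) ** 1.5 * (V_molar / N_A) * g_e1
    return 2.5 * R + R * math.log(q_over_N)

# Argon (39.948 amu) at 298.15 K and 1 bar
m_Ar = 39.948 * 1.66054e-27              # kg per atom
V_m = R * 298.15 / 1.0e5                 # ideal-gas molar volume, m^3/mol
print(molar_entropy(m_Ar, 298.15, V_m))  # ~154.8 J/(mol K), near the tabulated value
```

Note how the result grows with m, T, and V, exactly the trends discussed next.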

So, a couple of things to notice about this expression. One is, look, let's pull all the way back to chemistry again and think about concepts. What dictates whether the entropy is large (lots of disorder) or small (not very much disorder)? Okay, well, let's just look at some of the terms that can vary. One term is the mass; this is the mass of the gas. And what we see here is that if it gets larger, the entropy will be larger. And is that consistent with what we expect? Well, if you recall deriving the translational partition function: as the mass gets larger, the density of translational levels becomes greater. The levels get closer and closer together, so they're more accessible. There are more ways to distribute the gas (the individual molecules, that is) among their translational levels, and that is greater disorder. So that's consistent with the way we should think about entropy.

What else can we control? We can control temperature: as we raise the temperature, the entropy will increase. And once again, the way to think about that is population of levels. Now, I haven't changed the spacing between the levels; they are whatever they are for the gas. But by using a higher temperature, I can access more of those levels. Right, the e to the minus something over kT: as T gets bigger, the probability goes up of getting into those levels, and that's more disorder. Volume: if we have a larger volume, the entropy goes up, and that again makes sense. The spacing between the levels in the particle-in-a-box solution depends on the volume those levels are in. The bigger the volume, the denser the spacing, all right? So this is all consistent.

And then finally, a given gas may have a larger electronic ground-state degeneracy, and that will also influence things. And that one, in a sense, is a little bit more trivial to see. You know, suppose I've got a ground state that can be spin up or spin down, let's say; maybe it's the hydrogen atom as an ideal gas, a slightly unusual ideal gas, but you can imagine it. So: up, down, same energy. There will be two possibilities, and that's greater disorder. So the entropy will increase by R log 2, as opposed to R log 1, which is zero if there is no degeneracy in the ground state.
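In the molar-entropy expression above, a doubly degenerate ground state ($g_{e1} = 2$) simply adds

$$\Delta \bar{S} = R \ln 2 \approx 5.76\ \text{J mol}^{-1}\,\text{K}^{-1}$$

relative to the nondegenerate case, where $R \ln 1 = 0$.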

Well, okay, so that is a look at the monatomic ideal gas. Let's pause here for a moment; I'm going to let you think about the implications for a diatomic ideal gas. Alright, hopefully the concepts we've talked about tie the partition function and the entropy together, and moreover weave in the molecular concepts and the molecular behavior of a gas, so that all of this is a little more clear. We always should approach these things and ask sanity questions: is the formula consistent with what I expect, given an understanding of the physics of the molecules, the way they interact, and their chemistry? Next, what I want to do is come back to beta. So we introduced beta some time ago, and I told you to just accept, essentially on faith, that it was 1 over Boltzmann's constant times the temperature. But next I want to effectively prove that, finally.

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.