0:12

So hello.

We continue with our class on simulation modeling of natural processes.

And now I'd like to discuss our second module, on the Monte-Carlo method, and this is what some people call Markov-Chain Monte-Carlo, MCMC.

So the main goal, again, of the Monte-Carlo method is to sample processes. And here we will just study a process, a stochastic process, which actually samples a state space. We first want to understand the properties of this process and see if we can tune it so that it actually samples something we want.

Â Okay so we consider a stochastic process

Â 1:02

which will move across a state space. I'm not specifying it precisely here, but I will give you an example. So what do I mean by exploring? It means that if I have some position x in this state space, I will jump to a position x' at the next time step, and the probability to pick x' for the jump is given by a transition probability W(x → x').

Â 1:37

And each time I've made an attempt to jump somewhere, I advance the system time by one unit. So that's the way time evolves in this process, and for those of you who remember your math classes, this is called a Markov chain.
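As a minimal sketch of this idea (the three states and their transition probabilities below are purely hypothetical choices for illustration), a Markov-chain explorer just repeatedly draws the next state x' with probability W(x → x'), advancing time by one unit per jump attempt:

```python
import random

# Hypothetical 3-state state space; W[x] lists pairs (x', W(x -> x')).
# Each row of probabilities must sum to 1.
W = {
    "A": [("A", 0.5), ("B", 0.3), ("C", 0.2)],
    "B": [("A", 0.1), ("B", 0.6), ("C", 0.3)],
    "C": [("A", 0.4), ("B", 0.4), ("C", 0.2)],
}

def step(x):
    """Jump from x to some x' with probability W(x -> x')."""
    targets, probs = zip(*W[x])
    return random.choices(targets, weights=probs)[0]

def explore(x0, n_steps):
    """Run the chain: one unit of system time per jump attempt."""
    path = [x0]
    for _ in range(n_steps):
        path.append(step(path[-1]))
    return path
```

Because the next state depends only on the current one, the sequence that `explore` produces is exactly a Markov chain.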

So, just to give a little example, you may have here a physical system of particles; for instance, they are somewhere in space. And basically, you may want to sample the space of configurations of these particles according to the physics they obey, and the idea is that from this black configuration I could make a modification: I can decide, for instance, to move two particles to the white spots, okay? So that's typically a jump in my state space. And the question is: with what probability should I accept this move, okay?

And actually, I have, of course, a constraint on accepting or rejecting this change: I would like to accept changes which correspond to a natural movement of these particles. So you know that a gas is, of course, made of particles. But it's certainly not as static as my picture suggests. The particles keep moving, but they don't explore all the configurations, because they are typically kept at a given temperature.

Â 3:18

So the configurations follow the Maxwell-Boltzmann distribution, which involves the energy of the configuration x, which depends on all the positions of the particles, and the temperature at which the system is kept. So the question is: is there a way to choose this transition probability so that I can move my system from one configuration to another while still respecting the probability distribution imposed by physics, for instance, or another one that I know?

Â 3:47

Okay, so let's do a little bit of math. The probability that our exploration process has position x in the state space at time t + 1 is basically the probability that it was somewhere else, at some x', at time t, times the probability to jump from x' to x, summed over all x':

p(x, t+1) = Σ_x' p(x', t) W(x' → x)

That's very simple probability theory for Markov processes.

Â 4:13

So now I will give you a simple example of that. We'll consider a one-D space, which is just a discretization of the horizontal line, meaning that I may have a particle that can jump from one cell to the next, left or right, or stay at rest. Or we can say that the discrete space is the set of integers, Z, and then I want to sample this state space.

Â 4:50

I can jump in the positive direction with a probability I call W+. Or I can move to the left, in the negative direction, with probability W−, or I can stay at rest with probability W0, and of course these should sum to 1. So now, with this reduced set of jumps, my equation becomes simply this one; there are only three possibilities for evolving: if I am at position x at time t + 1, I was either at position x − 1 and then jumped right, or I was at position x and didn't move, or I was at position x + 1 and moved left, okay?

p(x, t+1) = W+ p(x−1, t) + W0 p(x, t) + W− p(x+1, t)
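This three-move update can be sketched directly; the particular values of W+ and W− below are hypothetical, chosen only so that W+ + W− + W0 = 1:

```python
import random

def walk_step(x, w_plus, w_minus):
    """One time step on Z: move right with probability w_plus, left with
    w_minus, and stay at rest with w0 = 1 - w_plus - w_minus."""
    r = random.random()
    if r < w_plus:
        return x + 1
    if r < w_plus + w_minus:
        return x - 1
    return x  # stay at rest

# Hypothetical choice: W+ = W- = 0.25, so W0 = 0.5.
x = 0
for _ in range(100):
    x = walk_step(x, 0.25, 0.25)
```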

So now, suppose I want to use this stochastic process to sample the diffusion equation. You will learn more about the diffusion equation in the coming weeks.

Â 5:46

But just for now, let me tell you that mathematically it is a partial differential equation which says that the time derivative of the density ρ is the second spatial derivative of this density times some diffusion coefficient:

∂ρ/∂t = D ∂²ρ/∂x²

Â 6:01

And you will learn during week three that you can discretize such an equation using finite differences. It says simply that the evolution in time is the previous value plus some combination of your left and right neighbors:

ρ(x, t+Δt) = ρ(x, t) + (DΔt/Δx²) [ρ(x−Δx, t) − 2ρ(x, t) + ρ(x+Δx, t)]

So it's just math translating this equation into a discrete form. Okay?
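A minimal sketch of that discrete scheme (the grid size and the values of D, Δt, and Δx are hypothetical, chosen so that DΔt/Δx² stays small):

```python
# Explicit finite-difference scheme for d(rho)/dt = D * d2(rho)/dx2:
# new value = previous value + alpha * (left - 2*center + right)
D, dt, dx = 1.0, 0.1, 1.0        # hypothetical parameters; alpha = 0.1
alpha = D * dt / dx**2

def diffuse_step(rho):
    """Advance the density one time step (edges held fixed for simplicity)."""
    new = rho[:]
    for i in range(1, len(rho) - 1):
        new[i] = rho[i] + alpha * (rho[i-1] - 2*rho[i] + rho[i+1])
    return new

rho = [0.0]*5 + [1.0] + [0.0]*5  # initial spike of density in the middle
for _ in range(10):
    rho = diffuse_step(rho)      # the spike spreads out symmetrically
```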

But now, we know that our stochastic process has the following property, which tells us how the probability density evolves over time, and you see it's very similar to this discrete scheme. So basically, if I want p(x, t) to be equal to ρ, I can match the two equations term by term: the coefficient of the central value, 1 − 2DΔt/Δx², should be equal to W0, and in the same way I can match the left-neighbor term with W+ and the right-neighbor term with W−. So this gives you exactly this condition: provided that you choose W+ and W− as the quantity DΔt/Δx², and W0 as basically the complement, so that everything sums up to one, you get exactly a process which reproduces the diffusion equation. It's a stochastic process, so you have to run it many times and do statistics, but if you do so, you will see that your particles, which jump randomly, will be distributed exactly like the solution of the diffusion equation.
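As a rough check of this correspondence (all parameter values here are hypothetical), we can run many independent walkers with W+ = W− = DΔt/Δx² and verify that the variance of their positions grows like 2Dt, as the diffusion equation predicts:

```python
import random

D, dt, dx = 1.0, 0.1, 1.0
w = D * dt / dx**2          # W+ = W- = D*dt/dx^2; W0 is the complement
assert w <= 0.5             # otherwise W0 would be negative

def final_position(n_steps):
    x = 0
    for _ in range(n_steps):
        r = random.random()
        if r < w:
            x += 1
        elif r < 2 * w:
            x -= 1
    return x

random.seed(42)
n_steps, n_walkers = 200, 5000
finals = [final_position(n_steps) for _ in range(n_walkers)]

mean = sum(finals) / n_walkers
var = sum((x - mean) ** 2 for x in finals) / n_walkers
expected = 2 * D * n_steps * dt    # diffusion predicts <x^2> = 2*D*t
```

With enough walkers, `var` comes out close to `expected`; that is the statistics the lecture refers to.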

What's also interesting in this approach is this condition: since the Ws are probabilities, the quantity DΔt/Δx² should be smaller than one half; otherwise you would create negative probabilities, which of course is not very good. It's interesting to know that this condition is also the stability condition of the numerical scheme. So you see, there's a nice consistency between the two approaches.

Â 8:07

Okay, so we can actually replace the solution of the diffusion equation by a stochastic process, a random walk. Maybe it looks too simple to solve the diffusion equation with a random walk. But you should realize that in your random walk you can easily add obstacles, aggregation mechanisms, or all kinds of other features that might be difficult to insert into the differential equation. So when the process is naturally represented at the level of the particles, it's easier to do it as a Monte Carlo simulation rather than trying to solve a complicated diffusion equation.

Â 8:52

So now, let me go a little bit more into the general case, and I want to take again the equation telling me how the probability distribution of my explorer evolves in time; we already saw this equation. And now, I want to transform it in a very simple way. Basically, I will single out the situation where x' is equal to x, so that I get this term separately, and of course the rest is the sum over every x' not equal to x, okay? And then I take this term and use the fact that the sum of all the jump probabilities, Σ_x' W(x → x'), is 1, so I can rewrite it this way.

Â 9:36

Okay, and now I group these two sums into one, and I get this equation:

p(x, t+1) = p(x, t) + Σ_{x' ≠ x} [ p(x', t) W(x' → x) − p(x, t) W(x → x') ]

This tells me how the probability of my random explorer evolves in time, and of course, again, the goal here is to find the Ws so that this p will actually be a density that you know and that you want to sample.

Â 10:04

And if you want to impose that p is equal to some known ρ in the steady state, okay, it's obvious that you need this whole sum to be equal to zero; so basically, to have p = ρ, you need to find Ws that satisfy this equation.

Â 10:35

One way to make this sum vanish is to make each of its terms zero, which is exactly what we see here:

ρ(x') W(x' → x) = ρ(x) W(x → x')

And that condition is called, in physics, detailed balance,

10:48

because it means that the probability of being at x' and jumping to x is the same as the probability of being at x and jumping to x'; that's why it is called detailed balance. And detailed balance will certainly solve this condition. It's a sufficient condition, maybe not a necessary one, but we'll go with it for now.
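As a small illustration (the two-state chain below is hypothetical), detailed balance is easy to check numerically: for every pair of states, ρ(x)W(x → x') must equal ρ(x')W(x' → x):

```python
def satisfies_detailed_balance(rho, W, tol=1e-12):
    """True if rho(x)*W(x->x') == rho(x')*W(x'->x) for all state pairs.

    rho maps state -> probability; W maps (x, x') -> jump probability.
    """
    return all(
        abs(rho[x] * W.get((x, y), 0.0) - rho[y] * W.get((y, x), 0.0)) < tol
        for x in rho for y in rho
    )

# Hypothetical 2-state chain: rho(A)*W(A->B) = (2/3)*0.3
# equals rho(B)*W(B->A) = (1/3)*0.6, so detailed balance holds.
rho = {"A": 2/3, "B": 1/3}
W = {("A", "A"): 0.7, ("A", "B"): 0.3, ("B", "A"): 0.6, ("B", "B"): 0.4}
```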

Â 11:08

So, the famous Metropolis rule: Metropolis is the person who used the Monte Carlo approach in the Manhattan Project. He was interested in sampling the Maxwell-Boltzmann distribution which, again, I am repeating, is the exponential of minus the energy of the system divided by its temperature, ρ(x) ∝ exp(−E(x)/k_BT).

Â 11:29

And he showed that this transition rule is a good one: it obeys detailed balance, and it is well known as the Metropolis rule. It says that you go from x to x' with probability one if the new energy is smaller than the previous one, so the system is happy to go to a lower energy. Now, if the energy of your new configuration is higher than the previous one, you can still go, but with a probability exp(−(E' − E)/k_BT), which decreases, because of this minus sign, as the temperature gets low and as the jump in energy gets big, okay? But you still have a probability to go to a state of higher energy, and that is just the right way to sample this distribution, according to the theory we just saw. So in practice, how would you do that in our system of particles?

Â 12:22

You'd select a particle at random, let's say this black one here, and you'd decide to move it by a random amount to this position here. Now, since you've changed the position of the particle, you've changed the energy of the gas, and you have a new energy E'.

I will accept this change of the particle's position if

Â 12:44

it satisfies the Metropolis rule, which in practice you can write this way: you just take a random number between 0 and 1, and if it is smaller than the minimum of 1 and the quantity exp(−(E' − E)/k_BT), you accept the change; otherwise you reject it. So this expression is just a way to express the equation I had before.
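A minimal sketch of that recipe (the particle system, `energy_fn`, and displacement size are hypothetical placeholders; only the acceptance test itself is the Metropolis rule):

```python
import math
import random

def metropolis_accept(E_old, E_new, kT):
    """Accept with probability min(1, exp(-(E_new - E_old)/kT))."""
    if E_new <= E_old:
        return True       # moves to lower energy are always accepted
    return random.random() < math.exp(-(E_new - E_old) / kT)

def mc_move(positions, energy_fn, kT, max_disp=0.1):
    """Displace one random particle and accept or reject the change."""
    i = random.randrange(len(positions))
    old_pos, E_old = positions[i], energy_fn(positions)
    positions[i] = old_pos + random.uniform(-max_disp, max_disp)
    if not metropolis_accept(E_old, energy_fn(positions), kT):
        positions[i] = old_pos   # rejected: restore the old configuration
```

Repeating `mc_move` many times generates configurations distributed according to exp(−E/k_BT).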

Okay, and so what's the benefit of doing this? If I can sample the distribution of my gas, then I can compute properties like the pressure, or whatever physical property of the gas I'm interested in, and of course, that's what was interesting for the Manhattan Project.

Â 13:40

So, if I want to check detailed balance, I compute this quantity: my distribution at equilibrium, exp(−E/k_BT), times the transition rate, which is given by the Metropolis rule. Suppose E' is bigger than E. Then I can, of course, combine the two exponentials into one, and I get just exp(−E'/k_BT). But this is nothing but the equilibrium density of the configuration x' times one, if you want. And one is exactly the probability of jumping from x' back to x, because E' is bigger than E, so that jump is made with probability one. So you see that, of course, you obey detailed balance.

Â 14:23

And for those of you who are a bit more curious, there is another way to satisfy detailed balance, which is called the Glauber rule. It simply takes this as the expression for the transition probability, and in the case of an equilibrium system you get this as the transition rule. And of course, it also obeys detailed balance.
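For reference, the standard Glauber form takes W(x → x') = 1 / (1 + exp((E' − E)/k_BT)); as a sketch (the two energy levels and the temperature below are hypothetical), its detailed balance with the Boltzmann distribution can be checked numerically:

```python
import math

def glauber_prob(E_old, E_new, kT):
    """Glauber transition probability: 1 / (1 + exp((E_new - E_old)/kT))."""
    return 1.0 / (1.0 + math.exp((E_new - E_old) / kT))

# Detailed balance check for two hypothetical energy levels:
# with rho(x) ~ exp(-E/kT), rho(x)*W(x->x') should equal rho(x')*W(x'->x).
kT, E1, E2 = 1.0, 0.3, 1.7
lhs = math.exp(-E1 / kT) * glauber_prob(E1, E2, kT)
rhs = math.exp(-E2 / kT) * glauber_prob(E2, E1, kT)
```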

Â 14:43

So with this, I would like to close my discussion of the Markov-chain Monte-Carlo approach. And in the next module, we will discuss what is called the kinetic or dynamic Monte-Carlo method. Thank you for your attention.
