
Hi. We've been studying Markov models. We looked at that first model, where there's students that were alert and bored. And then we looked at the more realistic, more interesting model, where countries are free, partly free, or not free. In each of those cases, what we saw is that the process converged, right? It went to a nice equilibrium. What we want to do in these lectures is study something called the Markov convergence theorem. Sounds scary, and it's a little bit scary. What it's going to tell us is that, provided a few fairly mild assumptions are met, Markov processes converge to an equilibrium. This is a powerful result, because it tells us what's going to happen to a Markov process. Now remember, this is a statistical equilibrium, right? The process is going to keep churning, but the probability that you're in each state will stay fixed. What we want to do is understand what conditions must hold for that to be true.

So let's go back and think of our first example, where we had alert students and bored students, and we had some p that was the probability that you were alert. What we could do is say, well, 0.8p + 0.25(1 - p) has to equal p. That's an equilibrium. And we found that if p was equal to five-ninths, then that probability stayed the same.
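A quick way to check that five-ninths is the fixed point is to iterate the update numerically. Here's a minimal sketch in Python, using the 0.8 and 0.25 transition probabilities from the lecture:

```python
# Alert/bored chain from the lecture:
# an alert student stays alert with probability 0.8,
# a bored student becomes alert with probability 0.25.
p = 0.5  # start from any initial share of alert students
for _ in range(100):
    p = 0.8 * p + 0.25 * (1 - p)  # one period of the chain

print(p)      # converges to 5/9 = 0.5555...
print(5 / 9)
```

Solving 0.8p + 0.25(1 - p) = p directly gives 0.25 = 0.45p, so p = 0.25/0.45 = 5/9; the iteration lands on the same number from any starting point.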

If five-ninths of students were alert and four-ninths were bored, then we stayed in those proportions. That's what we mean by an equilibrium. So what we wanna do is ask: what has to be true of our Markov process for an equilibrium to exist? There are just four assumptions. The first one is, you've gotta have a finite number of states. Well, that's the definition of a Markov process, at least the ones we're considering, so that's always gonna be satisfied. Second, the transition probabilities have to be fixed. By that I mean that, from period to period, the probability of moving from one state to another doesn't change. Now, we'll talk in a minute about why that might not always be true, but for the moment let's just assume that's the case. Third, and this is sort of a big one: you can eventually get from any state to any other state. So it may not be that you can get from state A to state C right away; maybe you have to go through B. But as long as there's some way to get from state A to state C, that's fine, that'll satisfy assumption three. And the last assumption, the fourth one, is sort of a technicality: the process is not a simple cycle. If I wrote down a process where you automatically go from A to B, and automatically go from B to A, then the thing would, you know, churn, but it wouldn't necessarily go to this nice stochastic equilibrium. It could be all A's, then all B's, then all A's, then all B's, forever. So if you rule out simple cycles, and just assume finite states, fixed probabilities, and that you can get from any state to any other, then you get an equilibrium. This is the Markov convergence theorem: given A1 through A4, the Markov process converges to an equilibrium. And that equilibrium is unique, so you're gonna go where you're gonna go. No matter whether you start with all bored or all alert, all free or all not free, you're gonna end up at an equilibrium, and it's gonna be the same one, determined entirely by those transition probabilities. So if I write down some Markov process like this, and I go ahead and solve for it, there's only one answer.
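To see the uniqueness claim in action, here's a small sketch with a three-state chain (free, partly free, not free). The transition probabilities below are made up for illustration; they are not the ones from the lectures:

```python
# Hypothetical transition matrix: row i gives the probabilities of
# moving from state i to each state (free, partly free, not free).
T = [[0.9, 0.1, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]

def evolve(dist, periods=500):
    # Repeatedly apply the chain: new[j] = sum_i dist[i] * T[i][j].
    for _ in range(periods):
        dist = [sum(dist[i] * T[i][j] for i in range(3)) for j in range(3)]
    return [round(x, 6) for x in dist]

print(evolve([1, 0, 0]))  # start with all countries free
print(evolve([0, 0, 1]))  # start with all countries not free: same answer
```

Both starting points land on the same distribution, because this chain satisfies all four assumptions: finite states, fixed probabilities, every state reachable from every other, and no simple cycle.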

There's gonna be a unique answer for what that equilibrium is gonna be. So let's think about what this means, because this is incredibly powerful. The first thing is, the initial state doesn't matter. It doesn't matter where I start. If I start with all free, all not free, all alert, all bored, right? For any Markov process, the initial state will not matter. History doesn't matter either. It doesn't matter what happens along the way. If it's a Markov process, history doesn't matter: what's gonna happen is gonna happen, we're gonna go to that equilibrium. Now, the particular history could differ, you know, in terms of which students move from alert to bored. But the long-run percentages of alert and bored, the long-run percentages of free and not-free states, are gonna be the same regardless of how history plays out. And intervening to change the state doesn't matter. So I go in and change a state.

Like, suppose I go in and say, well, let's just take a country and move it from free to not free. Well, guess what? In the long run, that's gonna have no effect. Now, we've posed all of these as puzzles: the initial state doesn't matter? History doesn't matter? Intervening to change the state doesn't matter? And they're puzzles because that doesn't seem to make any sense. If you think about it, history matters a lot. Initial conditions matter a lot. Interventions can matter a lot. Whether you're running a small organization, a big business, or a government, you think: let's come in and intervene here, and we're gonna make the world better. But this Markov process seems very deterministic. It's sorta saying none of these make a difference. It doesn't matter where you start out. What happens along the way doesn't matter. And if you intervene, it's not gonna have any effect.

Let's see what we mean by that. Let's just think of the mechanics at work in relationships. A relationship could be happy. Other relationships could be tense. And suppose we're modeling hundreds of relationships, a whole community of people, and we're just keeping track of how many relationships are happy and how many are tense. Let's suppose that these relationships follow a Markov process, so there's fixed probabilities of moving between happy and tense. We might say, "Well, you know, there's a lot of tension in the community right now, so let's just buy a whole bunch of people dinners." So we move 50 couples from the tense state to the happy state by, like, giving them free dinners on the town. Well, if you do that, what's gonna happen? For a very short period of time, you'll make more people happy. But there's gonna be that movement back towards tense, and the transition probabilities, if they stay fixed, are gonna take you right back to the same equilibrium as before. So there's gonna be no effect in the long run on the system.

So what does this mean in general? We wanna take these models seriously, but not too seriously. Does this mean that interventions have no effect? That interventions are meaningless? Does it mean that we shouldn't even redistribute stuff? If we redistribute happiness, or give these people meals, you know, to make them happy, does that have absolutely no effect either? Does this mean we shouldn't do these things? Well, let's be careful. There's a number of reasons why, even though the Markov model tells us that history doesn't matter, interventions don't matter, and initial conditions don't matter, these things really could matter. And the first one is this: it could take a long time to get to the equilibrium. So let's go back to those happy and tense couples. It could be that if you make some of those tense couples happy, yeah, eventually you're going to go back to the old equilibrium.
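You can see both effects, the short-run boost and the long-run pull back to equilibrium, in a little sketch. The 0.9 and 0.3 probabilities here are invented for illustration:

```python
# Hypothetical happy/tense chain (these numbers are made up):
# a happy couple stays happy with probability 0.9,
# a tense couple becomes happy with probability 0.3.
# Equilibrium: 0.9h + 0.3(1 - h) = h, so h = 0.3 / 0.4 = 0.75.
h = 0.95            # after the intervention: extra couples pushed to happy
history = []
for period in range(30):
    history.append(round(h, 3))
    h = 0.9 * h + 0.3 * (1 - h)   # one period of the chain

print(history[:3])  # short run: still well above the 0.75 equilibrium
print(round(h, 4))  # long run: the boost has decayed back to 0.75
```

Each period the gap above 0.75 shrinks by a fixed factor, so the intervention's effect fades geometrically rather than all at once.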

But it could take twenty years. And if it takes twenty years, well, in those intervening twenty years there's a lot more happy couples. Or think about the countries: maybe only 60 percent of countries will be free in the long run. But if we artificially make more of them free, we could have 30, 40, 50 years of a whole bunch of countries remaining free that wouldn't have been free otherwise. So even if in the long run we end up at the same place, it could be that in the intervening years we still get some sort of benefit. But that idea, that it just takes a long time to get there and maybe we can get a little boost in between, is still taking the somewhat negative view that any intervention we do can't matter in the long run.

But yet we've got this darned theorem, right? We've got this theorem that says: finite number of states, fixed transition probabilities, can get from any state to any other, then none of these things matter. Well, let's look at these assumptions seriously and ask which of them maybe doesn't hold. The finite-state thing, that's kinda hard to argue with, because we can sort of bin reality into different states. Remember earlier we talked about categories? Well, these states are categories, so we can think about which categories we create to make sense of the world, and having a finite number of them doesn't seem like the big issue. This "can eventually get from any one state to any other", well, you know, maybe there's cases where that's not true.

Maybe there's cases where you can't get from one state to another. So that's something we want to look at. But the one we really wanna focus on is the fixed transition probabilities, because it could be that when we move from one state to another, when we move from tense to happy, when we move from not free to free, or as more countries move from not free to free, suddenly the transition probabilities in the system change. Something larger happens, and those transition probabilities change. So the thing we wanna focus on, when we think about why history may matter and why interventions may matter, is that those transition probabilities may change over time as a function of the state we're in. Now, this doesn't mean the Markov model's wrong. The Markov model's right; it's a theorem, it's always true. But if we want history to matter, if we want interventions to matter, then we've gotta focus on this. We've gotta focus on interventions or policies or histories that can change those transition probabilities. Let me phrase this in a slightly different way. If we think about changing the state of the process, moving from tense to happy, that's just gonna be a temporary effect. But if we think about changing the transition probabilities, then we can have a permanent effect. So the useful interventions are gonna be interventions that change the transition probabilities. And if you think about moments in history, it could be things like tipping points, which we've actually talked about before. Those are gonna be moments in history that change the transition probabilities. So if we have a tipping point, if we move from one likely history to another, what must be going on is that those transition probabilities are changing.
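The temporary-versus-permanent contrast can be put side by side in a sketch. This reuses the lecture's 0.8 and 0.25 student chain; the 0.9 "improved" stay-probability is made up for illustration:

```python
def long_run_share(stay, enter, start=0.5, periods=300):
    # Long-run share in the target state, given the probability of
    # staying in it (stay) and of entering it from outside (enter).
    p = start
    for _ in range(periods):
        p = stay * p + enter * (1 - p)
    return round(p, 6)

# Changing the STATE (the starting point) is temporary:
print(long_run_share(0.8, 0.25, start=0.10))  # 0.555556, i.e. 5/9
print(long_run_share(0.8, 0.25, start=0.99))  # 0.555556, same place

# Changing a TRANSITION PROBABILITY is permanent:
print(long_run_share(0.9, 0.25))              # 0.714286, a new equilibrium
```

Any push on the starting share washes out, while raising the stay-probability from 0.8 to 0.9 moves the equilibrium itself, from 5/9 to 5/7.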

[laugh] We learned something very powerful: that if we have a finite set of states, fixed transition probabilities, and you can get from any state to any other, then history doesn't matter, interventions don't matter, and initial conditions don't matter. Now, that's not to say that those things don't matter in the real world; they probably do. But if they do, then one of those assumptions has to be violated. Either the states aren't finite, and that's a sort of hard one to disagree with, so it must be that either we can't get from someplace to every place else, or those transition probabilities can change. The most likely one is that transition probabilities can change, and the interventions that really matter, the interventions that tip, the histories that matter, are events that change those transition probabilities. So what we see is not that everything's a Markov process. What we see is that this Markov model helps us understand why some results are inevitable, because they satisfy those assumptions, and why some results are not. Okay. Thank you.
