0:35

It is distributed. This is actually a rather fuzzy word; it's hard to pin down its definition exactly. We are going to see a very distributed solution today, and later in the course we'll see many other variants of distributed solutions. But roughly speaking, distributed means that the network elements, in this case the transmitters and the receivers, do not need to talk to each other too much with explicit message passing.

This is also an iterative algorithm. The near-far solution, for example, is not iterative: you just estimate the channel once and then tell the receiver what to do. It's one-shot. But most of the algorithms we'll be looking at will be iterative algorithms. That means there's a certain time index; we usually use k or t to index time. Sometimes it's discrete, sometimes it's continuous; it will most likely be discrete for what we'll be looking at. Okay, so I have to index this. And we hope that as time goes on and this index, k or t, becomes very large, something good will happen. For example, the algorithm actually stops: it converges, and it converges at a pretty good point, solving some optimization or game problem.

All right, so that is enough background for this one-line, iterative, distributed algorithm. So what is the algorithm? Let me write down the algorithm first. It's so simple to write down, just one single line, and then let's look at it intuitively before moving on to a mathematical analysis of it.
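The one-line update being described can be sketched as follows. This is a minimal sketch in my own notation, since the slide itself isn't visible in the transcript: `p` holds each user's transmit power, `gamma` each user's target SIR, and `sir` each user's currently measured SIR.

```python
def dpc_update(p, gamma, sir):
    """One step of distributed power control (DPC).

    Each transmitter i scales its power by the ratio of its target SIR
    (gamma[i]) to its currently measured SIR (sir[i]). No message passing
    between users is needed: each user only knows its own target and its
    own receiver's measurement.
    """
    return [p_i * (g_i / s_i) for p_i, g_i, s_i in zip(p, gamma, sir)]

# If my SIR is below target, the gain gamma/SIR > 1 and my power goes up;
# if it is above target, the gain < 1 and my power goes down.
print(dpc_update([1.0, 2.0], [2.0, 2.0], [1.0, 4.0]))  # [2.0, 1.0]
```

Note that the ratio gamma/SIR plays the role of the gain, so there is nothing else to configure.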

Â 3:42

Well, let's take a look. It is simple, first, in communication. [inaudible] You don't need to know anything about the network, except your own target SIR, which presumably you know and which doesn't vary over time, and the current SIR that your intended receiver is getting. That's all you need to know. Second, it is very simple in computation: there is only one multiplication or division, right? And then it is very simple in configuration.

What do you mean by configuration? A lot of times an algorithm has a lot of parameters. A gain parameter is a typical example; a step size is a special case of that. We'll see many parameterized algorithms throughout the course. But this one actually has no parameters, right? The ratio between gamma and SIR is itself the gain. You don't have to tune any algorithmic parameters, and we'll see that simplicity in configuration is often the hallmark of a successful technology transfer from research to adoption. Many an algorithm goes unused because there are too many parameters to tune, and people don't know what to do with those parameters. They aren't sure whether the parameters tuned today will still be robust and good tomorrow. But this DPC is very simple: in communication, in computation, and in configuration. But is it trying to do the right thing?

Well, intuitively it is. I'm going to look at both the equilibrium behavior and the convergence behavior intuitively. The equilibrium actually looks very good, all right. What is equilibrium here? We're going to look at at least two more definitions of equilibrium later, but in this case it just means that nobody changes from one time to the next anymore. In other words, there's a certain time T beyond which no one's transmit power is moving anymore. Whenever that is the case, clearly, it means that this ratio is one; that means everybody's SIR is exactly the target SIR, all right. So the question is: will you ever get to equilibrium? Will you ever stop?

Let's think about that. It's actually not trivial, right? You think: well, if my current SIR is lower than my target, this gain factor is bigger than one, which means my next transmit power will be bigger. That will presumably help lift my SIR closer to gamma next time. On the other hand, if my current SIR is already bigger than the target gamma, then I'd rather make my transmit power smaller, because, remember, we're talking about a 2G voice call. If I hit my target gamma, I don't care whether the SIR is bigger than that or not, so I'd rather conserve power. I want to achieve the target gammas in a transmit-power-minimal way, and this algorithm is doing the right thing. So intuitively, the direction is correct. It is basically sending a negative feedback signal: too much SIR, lower your power; too little SIR (too small relative to the target), increase your power.

This is all well and good, except you're not the only one in the room, or in this case, you're not the only one in the network. There are many other transmitter/receiver pairs operating here, and they all think the same way. So while you are moving, they are moving too; their transmit powers, that is. So it's not at all clear that this ensemble, this network, this collection of transmitter/receiver pairs, will collectively converge and stop moving together. But at least it sounds plausible, because the directions are somewhat correct. Now, proving that convergence will happen is not easy. In fact, convergence may not happen: you can run this algorithm and it doesn't stop at all.

And that's because if everybody requires a very big gamma (gamma 1 is big, gamma 2 is big), maybe there's no way to achieve these gammas. You can run through a simple example, right? I want p1 G11 / (p2 G12 + n1) to be really big. I also want p2 G22 / (p1 G21 + n2), which is the received SIR for user two, to be really big. And for certain G's and n's, you see that I cannot find any two numbers p1 and p2 that make both ratios very, very big. There is an intrinsic competition, a trade-off, among these users.
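This competition can be seen numerically. In the sketch below, the gain matrix G, the noise levels n, and all the numbers are hypothetical values chosen for illustration: scaling both powers up together stops helping, because each user's SIR saturates at the ratio of its direct gain to the cross gain (here 1.0/0.5 = 2), so any target above 2 is simply unachievable for both users at once.

```python
def sir(p, G, n, i):
    """Received SIR of user i: own received signal over interference plus noise."""
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i) + n[i]
    return G[i][i] * p[i] / interference

# Hypothetical symmetric two-user network (numbers chosen for illustration).
G = [[1.0, 0.5],
     [0.5, 1.0]]
n = [0.1, 0.1]

# Raising both powers together drowns out the noise but not the interference:
# both SIRs approach G[i][i] / G[i][j] = 2 and can never exceed it.
for scale in (1, 10, 1000):
    p = [scale, scale]
    print(round(sir(p, G, n, 0), 3), round(sir(p, G, n, 1), 3))
```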

So, for sufficiently large gammas, all of them big, you can't even get convergence. You can't prove convergence, because it's not true. But it turns out we can show that whenever the gammas are reasonable, that is, they are actually achievable (there exists a vector of p's that achieves them for the given G's and n's), then we can prove that this DPC algorithm will converge to this desirable equilibrium. That's provably true.

But to show that would take us into advanced material, and we'll come back to it after the normal period of this lecture.