1:00

So this is the physical manifestation, and this is the mathematical representation. Okay? And on each link l there is such a pricing signal p_l that is going to be fed back from the network to the sources to enable the negative feedback loop. And we're going to sum up all those p_l from the links along the path used by source i, keeping the set of links used by source i as given by the routing. Okay?

So by summing up all these p_l along the path of a particular source i, we get a number called q_i, because we're talking about a particular source i. This is the path price, because it's a sum of the link prices. Again, it's an interpretation, as mentioned at the beginning of the lecture, along the path used by source i.

And this y_l is the shorthand notation for the link load, okay? It is the sum of the loads from the sources that are using this link. Because routing is fixed and given, we don't get to change the summation index.
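As a small sketch of these two shorthands in code (the routing matrix R and the numbers here are hypothetical, not the lecture's figure): the link loads y and the path prices q are just R and its transpose applied to the rate vector x and the price vector p.

```python
import numpy as np

# Hypothetical routing matrix: R[l, i] = 1 if source i uses link l.
R = np.array([[1, 1, 0],   # link 1 carries sources 1 and 2
              [1, 0, 1]])  # link 2 carries sources 1 and 3

x = np.array([2.0, 1.0, 3.0])  # source rates x_i (example values)
p = np.array([0.5, 0.2])       # link prices p_l (example values)

y = R @ x    # link load  y_l = sum of x_i over the sources using link l
q = R.T @ p  # path price q_i = sum of p_l over the links used by source i

print(y, q)  # [3. 5.] [0.7 0.5 0.2]
```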

Okay. But we get to change x, and we will be varying p in a dynamic way. So just remember the shorthand notation: y_l is the link load and q_i is the path price. Skipping the very interesting derivation, here is where we end up. Each source i will autonomously, okay, run the following update iteratively, from discrete time slot t to t + 1. Okay? You're going to update your rate, x at time t, basically as the demand function, okay, where the price is the path price at this time t. And the path price at time t will be updated from the link prices from the last iteration t - 1; that will come down here in a minute.

Okay? So this says that if you are source i, just look at your total path price, and that is the price that you pay, and put it into your demand function; that should be your source rate.
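A minimal sketch of this source update, assuming (as an example, not the lecture's only choice) the log utility U(x) = log x, whose demand function is U'^{-1}(q) = 1/q:

```python
def demand(q):
    """Demand function for U(x) = log(x): U'(x) = 1/x, so U'^{-1}(q) = 1/q."""
    return 1.0 / q

# Source i plugs its current path price into its own demand function:
q_i = 0.5          # path price seen by source i at time t (example value)
x_i = demand(q_i)  # source rate update: x_i(t) = D(q_i(t))
print(x_i)         # 2.0
```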

Now, what is the demand function again? You may remember it is the inverse of the utility function's first derivative. Why is that? Because the demand function is derived by looking at the net utility maximization: my utility, minus the price I have to pay, which is the path price for the path I'm traversing, times the volume, which is x. You maximize this over x, and then you will see that x, as a function of the price, is U prime inverse. Okay? And that is the demand function, exactly as what we saw back in lecture 11. So this is done at each time slot t, at each source i. And each link l will also autonomously

execute the following computation. You will update the link price at time t from the link price at the last iteration t - 1, by adding the following term: you look at y_l at time t, the link load at this point, and at the supply. This is the demand; this is the supply. Supply is fixed, since we assume fixed capacities on each link l. Demand is dynamically changing, based on the prices that you have been generating, and the difference is the gap between demand and supply on your link. And this is the direction of change.

Does it make sense? Yes, it does. If the demand is bigger than the supply during this step of the transient, then you are going to increase your price next time.

When this p_l is increased, then all those q_i, okay, for the sources that use this link l, will get bigger. As q becomes bigger, U prime inverse, the demand function, being a decreasing function, will make the source rate lower. Exactly what happens in many marketplaces in an iterated way: when demand exceeds supply, they crank up the price. Once the price is cranked up, the demand will drop.

So in the next iteration, the sources using link l will see a smaller x, and therefore their total sum y, the link load, will be smaller. That's moving in the right direction: when y is bigger than c in this iteration, y will be smaller in the next iteration.

Conversely, if the current demand is actually less than the capacity, well, then you're not fully utilizing this link's capacity, so you can set a smaller p. And that will be reflected in a smaller q, which will be reflected in a bigger x, and then in a bigger y: in the next iteration, demand will go up. So it's a very smart and intuitive arrangement. So this direction is correct.

We just need two more modifications. Number one: while the direction is right, we need a little thing called a step size, denoted by beta here. Beta is some positive constant. It's called a step size because it says: even if you've got the right direction, you don't want to overshoot, okay? For those of you who have played golf (not me, but I have played golf on Wii, which is very difficult), you know that when you get close enough, here's your ball, okay, and here's the hole. Even if you get the right direction, if you give too big of a kick, okay, the force is too big, you'll overshoot and land over there. Then you say, all right, the right direction is this way. Right direction, but you overshoot again. And this may oscillate back and forth, unless you start to hit with smaller force as you get closer and closer to the hole. So the force used to hit the golf ball is the step size.

Â 7:42

You don't want to oscillate and overshoot back and forth, so you want to use a smart step-size rule. One rule says that if the step size is a constant, independent of the iteration t, and sufficiently small, then you can guarantee convergence, meaning you will get the ball into the hole. Another rule says: if I make beta smaller and smaller as I move along the time axis, okay, as I get closer and closer to the hole, I become more and more careful.

That can also guarantee convergence. So the selection of the step size in a distributed, decentralized way is both a science and an art; we just don't have time to talk much about it here. There is a little bit more in the advanced material, but for our problem there are simple and good choices of beta: simple in the sense that it's easy to come up with, and good in the sense that it will guarantee convergence.
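The two rules can be illustrated on a toy one-dimensional problem (this is only an illustration of constant versus diminishing step sizes, not the course's convergence proof): drive p toward the minimizer of (p - 1)^2 by stepping against the gradient.

```python
def iterate(step_size, steps=200):
    """Run p(t+1) = p(t) - beta_t * gradient for f(p) = (p - 1)^2."""
    p = 10.0
    for t in range(1, steps + 1):
        grad = 2.0 * (p - 1.0)      # derivative of (p - 1)^2
        p -= step_size(t) * grad
    return p

p_const = iterate(lambda t: 0.1)      # rule 1: small constant step size
p_dimin = iterate(lambda t: 0.5 / t)  # rule 2: diminishing step size

# Both land at the optimum p = 1; a constant step of 1.0 would overshoot
# back and forth forever, like the golf ball kicked too hard.
```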

The second modification we need is that these prices, okay, cannot be negative. So you can just say: if a price becomes negative, we'll just take it as zero. We usually use the symbol of a big bracket with a superscript plus to denote the fact that if whatever is inside the big bracket dips below zero, we just take zero as the floor.
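Putting the two modifications together, a single link's update can be sketched as follows (the value of beta here is a hypothetical constant step size):

```python
def link_price_update(p_old, y, c, beta=0.1):
    """p_l(t) = [ p_l(t-1) + beta * (y_l(t) - c_l) ]^+ :
    move by the demand-supply mismatch, scaled by the step size,
    and floor the result at zero."""
    return max(0.0, p_old + beta * (y - c))

print(link_price_update(0.5, y=12.0, c=10.0))  # 0.7: demand exceeds supply, price rises
print(link_price_update(0.1, y=8.0, c=10.0))   # 0.0: would go negative, floored at zero
```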

All right. Now, this is our algorithm. Okay? At each source i, you just plug your current path price into your demand function, driven by your own utility function. And at each link l, there is this price update. And as t goes on through the iterations, you are actually guaranteed, as long as the step size is smart enough, to converge. And converge to what? To an optimizer of this convex, decomposable optimization problem, called the basic network utility maximization.
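For reference, the basic network utility maximization problem just named can be written compactly as follows, in the lecture's own symbols (rates x, routing matrix R, link capacities c):

```latex
\begin{aligned}
\text{maximize}   \quad & \sum_i U_i(x_i) \\
\text{subject to} \quad & R\,x \le c \\
                        & x \ge 0
\end{aligned}
```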

Â 9:53

But the proof of that is in the advanced material. Here we just want to look at this and make sure we can intuitively understand why it makes sense. Number one, we know the direction is right: bigger demand will lead to smaller demand next time, and smaller demand now will lead to bigger demand next time, relative to the given, fixed link capacity. Number two, is it distributed?

Well, it is actually very distributed. The source rate updates are very distributed: all you need to know, as a source i, is your own path price q_i. Not only do you not need to know about the other sources (so if you are source 1, you don't need to know q_2, q_3, and so on); in fact, you don't even need to know the individual link prices along the links used by yourself. You only need to know the end-to-end path price.

The link price update computation is also distributed. You only need to know the load on yourself. So if you're link 1, with link load y_1, you do not need to know y_2, y_3, or what happens on the other links. And you don't even need to know the ingredients in your y_1: you don't need to know the individual source rates, just the total link load, the sum of the source rates among the sources using you. So this is a very distributed action.

As long as the sources can get these path prices with some message passing, okay, either explicit or implicit. As we'll see in a minute (actually in the next module, I guess), this can be done with implicit message passing, where these path prices are interpretable as the end-to-end loss rate or queuing delay.

Number three, there is a very nice economic interpretation. Just like back in lecture 11, when we used pricing to coordinate demand and supply, and back in lecture 2, when we used pricing to coordinate demand and supply in auctions, okay, now we're using pricing, at least as an interpretation of this machine protocol called TCP congestion control. Prices, again, coordinate demand and supply, on a very fast time scale, around the round-trip-time time scale, and in this case distributively over a network. We're looking at a more complicated version than the simpler ones before.

Now we have to run it over a network, and the good news is that it can be done distributively. Now, this can be summarized in a feedback loop. This is the negative feedback loop that we were looking for about one hour ago, at the beginning of this lecture. Okay? Each source, labeled 1, 2, 3, and so on, I draw this way because I want to highlight the fact that their actions will be autonomous and distributed. Each source decides its own x_i, and this induces a vector x into the network.

Okay? You can represent the network by this routing matrix R. Then each link only sees its link load y_l, again an autonomous action, distributed across the links. And then they will update these prices p_l, independently of each other, through the network, and the sources will only see the corresponding path prices q_i. Okay.

x goes up, p will go up; q will go up, then x will go down, and y will go down. And with a sufficiently smart step size beta, this is guaranteed to converge. Now, will they converge using the existing TCP protocols? We'll come back to that towards the end of the lecture. First, let's look at a small numeric example using the same topology that we saw.
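As a closing sketch, the whole feedback loop can be simulated end to end. The topology, utilities, and step size below are hypothetical choices for illustration (two links, three sources, log utilities, small constant beta), not the lecture's own numeric example:

```python
import numpy as np

R = np.array([[1, 1, 0],    # hypothetical routing: link 1 serves sources 1, 2
              [1, 0, 1]])   # link 2 serves sources 1, 3
c = np.array([2.0, 3.0])    # fixed link capacities c_l
p = np.ones(2)              # initial link prices
beta = 0.05                 # small constant step size

for t in range(2000):
    q = R.T @ p                              # path prices q_i
    x = 1.0 / q                              # source update for U(x) = log x
    y = R @ x                                # link loads y_l
    p = np.maximum(0.0, p + beta * (y - c))  # link update, floored at zero

print(np.round(y, 2))  # link loads settle at the capacities: [2. 3.]
```

With log utilities, the fixed point is the proportionally fair rate allocation; both links end up fully utilized, so both prices stay strictly positive.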
