1:00

So this is the physical manifestation, this is the mathematical representation.

Okay? And on each link there is such a pricing signal p_l that is going to be fed back from the network to the sources to enable the negative feedback loop. And we're going to sum up all those p_l from the links along the path used by source i, which we keep writing as S(i), the set of links given by routing. Okay?

So by summing up all these p_l along the path of a particular source i, we get a number called q_i, because we're talking about a particular source i. This is the path price, because it's a sum of the link prices.

Again, it's an interpretation, as mentioned at the beginning of the lecture: the price along the path used by source i.

And this y_l is the shorthand notation for the link load, okay? It is the sum of the loads from the sources using this link. Because routing is fixed and given, we don't get to change the summation index. Okay. But we get to change x, and we will be varying p in a dynamic way. So just remember the shorthand notation: y_l is the link load, and q_i is the path price. Skipping the very interesting derivation, here is where we end up. Each source i will autonomously, okay, run the following update iteratively, from discrete time slot t to t + 1. Okay? You're going to update your rate.

Okay? x at time t is basically given by the demand function, okay? where the price is the path price at this time t, and the path price at time t is built from the link prices updated at the last iteration, t - 1; that will be down here in a minute.

Okay? So this says that if you are source i, just look at your total path price, which is the price that you pay, and put it into your demand function; that should be your source rate.

Now, what is the demand function again? You may remember it is the inverse of the utility function's first derivative. Why is that? Because the demand function is derived by looking at net utility maximization.

Look at my utility minus the price I have to pay, which is the path price for the path I'm traversing, times the volume, which is x, and maximize this over x. Then you will see that x as a function of the price is U prime inverse. Okay?
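As a small concrete sketch of this derivation (the log utility below is an assumed example, not one fixed by the lecture): for U(x) = w log x, we have U'(x) = w/x, so maximizing w log x - q x over x gives x = U'^{-1}(q) = w/q.

```python
def demand(q, w=1.0):
    """Demand function for the (assumed) utility U(x) = w*log(x):
    maximizing w*log(x) - q*x over x > 0 gives x = U'^{-1}(q) = w/q."""
    return w / q

# A higher path price means a lower source rate:
# demand(1.0) -> 1.0, demand(2.0) -> 0.5
```

Note the demand function is decreasing in the price q, which is what makes the feedback loop push in the right direction.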

And that is the demand function, exactly what we saw back in lecture 11. So this is done at each time slot t, at each source i. And each link l will also autonomously execute the following computation. You update the link price at time t from the link price in the last iteration, t - 1, using the following term. You look at y_l, the link load at time t, and the supply: this is the demand, and this is the supply. The supply is fixed, since we assume a fixed capacity c_l on each link l. The demand is dynamically changing, based on the prices you have been generating, and the term is the difference between demand and supply on your link. That is the direction of change.
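In code, the raw direction of change on a link might look like this (a sketch; the step size and the nonnegativity floor are the two modifications discussed shortly):

```python
def price_direction(p_prev, y, c):
    """Raw link-price update: the excess demand (y - c), the gap between
    the link load y and the fixed capacity c, sets the direction of change."""
    return p_prev + (y - c)

# demand > supply: the price rises; demand < supply: the price falls
```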

Does it make sense? Yes, it does.

If the demand is bigger than the supply during this step of the transient, then

you are going to increase your price next time.

When this p_l is increased, then all those q_i, okay, for the sources that use this link l, will become bigger. And as q becomes bigger, U prime inverse, the demand function, being a decreasing function, will make the source rate lower. Exactly what happens in many marketplaces in an iterated way: when demand exceeds supply, the sellers crank up the price. The price is cranked up, and then the demand will drop.

So, in the next iteration, the sources using link l would see a smaller x, and therefore their total sum y, the link load, will be smaller. That's moving in the right direction. When y is bigger than c in this iteration, y in the next iteration will be smaller.

Conversely, if the current demand is actually less than the capacity, then you're not fully utilizing this link's capacity, so you will see a smaller p. And that will reflect in a smaller q, which will reflect in a bigger x, and then reflect in a bigger y.

Next iteration demand will go up. So it's a very smart and intuitive

arrangement. So this direction is correct.

We just need two more modifications. Number one: while the direction is right, we need a little thing called a step size, denoted by beta here. Beta is some positive constant.

It's called a step size because it says that even if you've got the right direction, you don't want to overshoot, okay? For those of you who have played golf (not me, but I've played golf on the Wii, which is very difficult), you know that when you get close enough, here's your ball, okay, and here's the hole. Even if you get the right direction, if you give too big a kick, okay, the force is too big, you'll overshoot and land over there. Then you say, alright, the right direction is this way. Right direction, but you overshoot again. And this may oscillate back and forth, unless you start to kick with smaller force as you get closer and closer to the hole. So the force used to hit the golf ball is the step size.

7:42

You don't want to oscillate and overshoot back and forth, so you want to use a smart step-size rule. One rule says that if the step size is a constant, independent of the iteration t, and sufficiently small, then you can guarantee convergence, meaning you will get the ball into the hole. Another rule says to make beta smaller and smaller as we move along the time axis. Okay? As I get closer and closer to the hole, I become more and more careful. That can also guarantee convergence. So the selection of the step size in a distributed, decentralized way is both a science and an art; we just don't have time to talk too much about it. There is a little bit more in the advanced material, but for our problem there are simple and good choices of beta. Simple in the sense that it's easy to come up with, and good in the sense that it will guarantee convergence.
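The two rules might be sketched like this (the particular constants and the 1/(t+1) decay schedule are illustrative assumptions, not the lecture's specific choices):

```python
def constant_step(t, beta0=0.05):
    """Rule 1: a small constant step size, independent of the iteration t."""
    return beta0

def diminishing_step(t, beta0=1.0):
    """Rule 2: a step size that shrinks along the time axis, e.g. beta0/(t+1),
    kicking more and more gently as we approach the hole."""
    return beta0 / (t + 1)
```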

The second modification we need is that these prices, okay, cannot be negative. So you can just say that if a price becomes negative, we'll just take it as zero. We usually use the symbol of a big bracket with a superscript plus to denote the fact that if whatever is in the big bracket dips below zero, we just take zero as the floor.
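The bracket with the superscript plus is just a projection onto the nonnegative half-line; in code it is a one-liner:

```python
def plus(v):
    """[v]^+ : if v dips below zero, take zero as the floor."""
    return max(0.0, v)
```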

Alright. Now, this is our algorithm. Okay? At each source i, you just put your current path price into your demand function, driven by your own utility function. And at each link l, there is this price update. And as t goes on through the iterations, you are actually guaranteed, as long as the step size is smart enough, to converge. And converge to what? To an optimizer of this convex, decomposable optimization problem called the basic Network Utility Maximization.
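Putting the two updates together, here is a minimal simulation on an assumed toy topology: two sources sharing one link of capacity c = 1, both with utility U(x) = log x, so the demand function is 1/q. The topology, the utilities, and the beta value are illustrative assumptions; at the optimum each source gets rate 1/2.

```python
def run_num(beta=0.1, steps=2000, c=1.0):
    """Iterate the source update x_i(t) = U'^{-1}(q_i(t)) and the projected
    link-price update p(t) = [p(t-1) + beta*(y(t) - c)]^+ on one shared link."""
    p = 1.0                                # initial link price
    for _ in range(steps):
        q1 = q2 = p                        # one-link paths: path price = link price
        x1, x2 = 1.0 / q1, 1.0 / q2        # demand function for U(x) = log(x)
        y = x1 + x2                        # link load: sum over sources on the link
        p = max(0.0, p + beta * (y - c))   # price moves with excess demand
    return x1, x2, p

x1, x2, p = run_num()
# converges toward x1 = x2 = 0.5 and p = 2, where the load y equals c
```

Note that each source only reads its own path price and each link only reads its own load, so the loop really is the distributed computation described above.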

9:53

But the proof of that is in the advanced material.

Here we just want to look at this and make sure we can intuitively understand

why it makes sense. Number one is that we know the direction is right: bigger demand will lead to smaller demand next time, and smaller demand now leads to bigger demand next time, given a fixed link capacity. Number two, is it distributed?

Well, it is actually very distributed. The source rate updates are very distributed: all you need to know, as a source i, is your own path price q_i. Not only do you not need to know the other sources' prices (so if you are source 1, you don't need to know q_2, q_3, and so on), in fact you don't even need to know the individual link prices along the links used by yourself. You only need to know the end-to-end path price. The link price update computation is also distributed.

You only need to know the load on yourself. So if you're link 1, with load y_1, you do not need to know y_2, y_3, or what happens on the other links. And you don't even need to know the ingredients in your y_1: you don't need to know the individual source rates, you just need to know the total link load, the sum of the source rates among the sources using you. So these are very distributed actions.

As long as the sources can get these path prices with some message passing, okay, either explicit or implicit. As we'll see in a minute, actually in the next module I guess, it can be done with implicit message passing, with these path prices interpretable as the end-to-end loss rate or queuing delay.

Number three, there is a very nice economic interpretation. Just like back in lecture 11, when we used pricing to coordinate demand and supply, and back in lecture 2, when we used pricing to coordinate demand and supply in auctions. Okay, now we're using pricing, at least as an interpretation of this machine protocol called TCP congestion control. Prices, again, coordinate demand and supply, on a very fast timescale, around the round-trip time, and in this case distributively over a network. We're looking at a more complicated version than the simpler ones before: now we have to run it over a network, and the good news is that it can be done distributively. Now this can be summarized in a feedback loop. This is the negative feedback loop that we were looking for about an hour ago, at the beginning of this lecture.

Okay? Each source, labelled 1, 2, 3, up to i: I draw it this way because I want to highlight the fact that their actions are autonomous and distributed. Their actions decide the x_i, and this induces a vector x into the network, okay? You can represent the network with this routing matrix R. Then each link only sees its link load y_l; again an autonomous action, distributed across the links. And then they will update these prices p_l, independently of each other, through the network, and the sources will only see the corresponding path prices q_i. Okay.
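In matrix form, the loop is: the sources pick x; the network forms the link loads y = R x; the links pick p; and the sources see the path prices q = R^T p. A small sketch, with a hypothetical 2-link, 2-source routing matrix (the topology is an assumed example):

```python
# Hypothetical routing: link 1 carries sources 1 and 2; link 2 carries source 2.
R = [[1, 1],
     [0, 1]]

def link_loads(R, x):
    """y = R x : each link sums the rates of the sources routed through it."""
    return [sum(R[l][i] * x[i] for i in range(len(x))) for l in range(len(R))]

def path_prices(R, p):
    """q = R^T p : each source sums the prices of the links on its path."""
    n = len(R[0])
    return [sum(R[l][i] * p[l] for l in range(len(R))) for i in range(n)]
```

The same fixed R appears in both directions, which is why neither the sources nor the links ever need global information.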

x goes up, then p will go up; q goes up, then x will go down, and y will go down. And with a sufficiently smart step-size

beta, this is guaranteed to converge. Now, will they converge using existing

TCP protocols? We'll come back to that towards the end

of the lecture. Well first let's look at a small numeric

example using the same topology that we saw.