This is used to measure the amount of uncertainty, but it can also be used to

measure the amount of variance. Well, speaking of variance, how about the

standard deviation, or maybe the normalized second-order moment of this vector,

okay?

And that's called Jain's Index in certain networking literature,

okay? A lot of these try to normalize this so

that you're going to map whatever vector into a number in the range between zero

and one, where one is most fair, okay?
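
As a concrete sketch of this normalization, here is Jain's index computed directly from its standard definition, the square of the sum divided by n times the sum of squares:

```python
def jains_index(x):
    """Jain's fairness index: (sum of x)^2 / (n * sum of x^2).

    Maps a non-negative, non-zero allocation vector to a value in
    (0, 1], where 1 is the most fair (perfectly equal) allocation."""
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

print(jains_index([1, 1, 1, 1]))  # equal allocation -> 1.0
print(jains_index([4, 0, 0, 0]))  # one user gets everything -> 1/n = 0.25
```

An allocation that starves all but one user scores 1/n, the minimum of the index, so the value also reflects roughly how many users effectively share the resource.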

Now, we have seen another related but subtly different approach, where we

provide an objective function in an optimization problem, and then try to

understand the properties of the resulting optimizer.

For example, the alpha-fair utility function, okay? Utility parameterized by the

parameter alpha between zero and infinity says that I'm going to look at

x to the power one minus alpha, divided by one minus alpha. That is the function if alpha is not one, and, by

L'Hopital's Rule, the function is log of x if alpha is one,

okay? If I maximize a summation of these utility functions, I get a utility

function for the entire vector. And the resulting maximizer, x star,

satisfies the so-called alpha-fair notion. Including proportional fairness that we

talked about, when alpha is one. Including max/min fairness

that says, you cannot make one user's rate or resource higher without

making someone who's already getting a smaller amount of resource get an even

smaller amount, okay? And that's called max/min fairness,

which happens as alpha becomes very large, close to infinity,

okay? These two are special cases. As you can see already, these two

different approaches, a normalized diversity index and an alpha fair

objective function are actually different,

right? For example, the treatment of efficiency

is different. For alpha-fair utility maximization,

efficiency is implicitly embedded in it, but still in there. A normalized

index is not affected by the magnitude at all,

okay? So, the vector one, one is the same as

the vector 100, 100. Can these two approaches be unified?
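
To make the two special cases concrete before going further, here is a small sketch of alpha-fair utility maximization on a toy two-link network. The network itself, the grid-search resolution, and the choice of alpha = 50 as a stand-in for "alpha close to infinity" are all illustrative assumptions, not part of the lecture: one long flow at rate x crosses both unit-capacity links, each link shared with a short flow at rate 1 - x.

```python
import math

def alpha_fair_utility(x, alpha):
    # standard alpha-fair utility: x^(1 - alpha) / (1 - alpha), or log(x) when alpha = 1
    if alpha == 1:
        return math.log(x)
    return x ** (1 - alpha) / (1 - alpha)

def best_long_flow_rate(alpha, steps=10000):
    """Maximize U(x) + 2 * U(1 - x) by brute-force grid search over x in (0, 1):
    one long flow at rate x, plus two short flows each at rate 1 - x."""
    grid = (i / steps for i in range(1, steps))
    return max(grid, key=lambda x: alpha_fair_utility(x, alpha)
               + 2 * alpha_fair_utility(1 - x, alpha))

print(best_long_flow_rate(1))   # proportional fair: x close to 1/3
print(best_long_flow_rate(50))  # large alpha: x approaches the max/min split of 1/2
```

At alpha = 1 the long flow is penalized for consuming two links' worth of resources (it gets about 1/3), while as alpha grows, the maximizer moves toward the equal, max/min split of 1/2.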

And can more approaches be discovered? In fact, how many more approaches are

there? Well, let's try something that we saw

back in Lecture 6, when we talked about the axiomatic construction of voting

theory and of bargaining theory. Back then, we looked at the axiomatic

treatment of Arrow's impossibility theorem, as well as

the axiomatic treatment of bargaining by Nash, in the advanced material part of the

lecture, okay? So, in one case, a set of axioms

led to an impossibility result, and in the other case, to a unique possibility

result. So, what kind of axioms are we talking

about here? We will see them in the advanced material

of this lecture. First, the axiom of continuity. Very briefly, it just says that the

function that maps a given vector of allocation into some scalar,

the fairness value, okay,

should be a continuous function, okay?

Then, of so-called homogeneity, which says, if I scale this x by, say, a factor of five,

the vector five times x, it should give the same value as if I was

looking at x. In fact, it doesn't matter if it was a five

or any positive constant. So, scale does not matter.
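
This homogeneity (of degree zero: scaling the input does not change the output at all) can be checked numerically, using Jain's index from earlier in the lecture as the fairness function:

```python
def jains_index(x):
    # Jain's index: (sum of x)^2 / (n * sum of x^2), scale-invariant by construction
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

x = [1.0, 2.0, 3.0]
scaled = [5.0 * v for v in x]  # scale the whole allocation by a factor of five
print(jains_index(x), jains_index(scaled))  # the same fairness value
```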

That kind of function is called a homogeneous function,

okay? So, efficiency is automatically taken

out of consideration for the moment. And then, the axiom of saturation.

This is a technical axiom, which we will skip until the advanced material. And the axiom

of partition, which says you can grow the population size and the notion of

fairness still remains well defined. And finally, the axiom of starvation, which says,

the allocation of half, half between two users, an equal allocation, should be no less

fair than the allocation of zero, one,

which starves one of the two users,

okay? Notice this is not saying that equal allocation should, by axiom, be viewed

as the most fair. It simply says, it is not less fair than

starvation, okay? So, it's a very weak statement,

therefore a very strong axiom. It turns out that if you believe these

five axioms, then, skipping the derivation, we will be able to derive a

unique family of functions F, mapping a vector of allocation to a

scalar representing the fairness value. And this scalar will allow us to do two

things. A, it allows us to do a quantitative

comparison, okay?

Between two vectors: which is more fair. And the second is scale.

It not only gives you an order comparison, but also provides a numerical scale to

talk about how much fairer one allocation is with respect to the other.

So, it is our job now to go through that unique family of fairness functions,

constructed based just on these five axioms.
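
As a preview of where this is going, the unified family can be sketched numerically. The exact formula below is an assumption taken from the axiomatic fairness literature, not something derived in the lecture so far: a parameter beta indexes the family, and beta = -1 recovers n times Jain's index.

```python
import math

def fairness(x, beta):
    """Sketch of the axiomatically constructed fairness family (formula assumed
    from the axiomatic fairness literature, beta != 0):
    F_beta(x) = sign(1 - beta) * [sum_i (x_i / sum_j x_j)^(1 - beta)]^(1 / beta)."""
    total = sum(x)
    inner = sum((v / total) ** (1 - beta) for v in x)
    return math.copysign(inner ** (1 / beta), 1 - beta)

def jains_index(x):
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

x = [1.0, 2.0, 3.0]
print(fairness(x, -1))                # equals len(x) * Jain's index of x
print(fairness([1.0, 1.0, 1.0], -1))  # equal allocation among n users scores n
```

Note how both required properties show up: the value is scale-invariant (homogeneity), and different beta values recover different known fairness measures as special cases.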