0:26

So now we're going to walk through a numerical example.

First we will define the lengths of the time slots.

Suppose we are using 802.11g WiFi, which transmits at 54 megabits per second.

And here are the relevant timing parameters. A single time slot is nine microseconds; everything here is in microseconds.

Okay? And then the SIFS is ten microseconds.

And the DIFS is the SIFS plus two time-slot units, which is 28 microseconds.

Okay, so now we can look at TB, that is, the slot length for a backoff slot with an idle channel. That is just the length of the DIFS,

so that's 28 microseconds. What about the slot length for a successful transmission? That consists of the data frame itself,

plus waiting a SIFS, plus the acknowledgement packet, plus waiting a DIFS. This total is TS. So let's look at the time it takes to send

the data. Now the data consists of a header and then the payload.

Okay. First of all, there is a 16-microsecond physical-layer preamble. You simply spend sixteen microseconds as a guard band at the start.

Then there is a 40-bit physical-layer header, with information about the physical-layer configuration. The first 24 bits are sent at a much lower speed: instead of 54, they are sent at six megabits per second, so that they have a much higher probability of being decoded correctly at the receiver end.

So that contributes 24 bits over six megabits per second, okay? Since the times are in microseconds and the rates in megabits per second, the unit factors cancel each other out, and we can just write 24 over 6. Then there are the sixteen remaining header bits,

plus the 240-bit MAC-layer, that is, link-layer, header, plus 32 bits of error-checking code in the link layer. All of these are sent at the rate of 54 megabits per second.

Plus, of course, the actual payload. This part is L bits, and we are later going to assume that L is 8,192 bits, a typical value for the number of bits in a payload.

The payload is also sent at 54 megabits per second. So this boils down to 25.33 plus the payload, L, over 54, in microseconds.

Okay. That's the length of TS.
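The slot-length arithmetic above can be checked with a short sketch. The field sizes and rates are the ones quoted in the lecture; since times are in microseconds and rates in megabits per second, bits divided by rate gives microseconds directly:

```python
SLOT = 9                  # one backoff time slot, microseconds
SIFS = 10
DIFS = SIFS + 2 * SLOT    # 28 microseconds

PREAMBLE = 16             # physical-layer preamble (guard band), microseconds

def data_time_us(L):
    """Time to send one data frame with an L-bit payload."""
    header_slow = 24 / 6                 # first 24 header bits at 6 Mbps
    rest = (16 + 240 + 32 + L) / 54      # remaining header bits, MAC header,
                                         # check bits, and payload at 54 Mbps
    return PREAMBLE + header_slow + rest

print(DIFS)                           # 28
print(round(data_time_us(8192), 2))   # 25.33 + 8192/54, about 177.04
```

With L = 8,192 bits this gives roughly 177 microseconds for the data frame alone, before adding the SIFS, acknowledgement, and DIFS.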

So finally, what about TC, the collision slot? [COUGH] Well, there are actually a couple of different variations on how people define TC, but I think the proper way is to define it as essentially the same as TS, because you have to wait until the DIFS is over before you can guarantee that the acknowledgement is indeed not being sent back to you. So we have defined these parameters.

Okay? Now let's also define some other parameters. Let's say the maximum number of backoff stages you can have in exponential backoff is three.

So you can multiply by two, by two, by two, and then you stop and declare the frame lost. The minimum window you need to wait over is, let's say, two to the fourth minus one.

Okay? In our analysis we ignored this minus-one factor, but here we can just incorporate it, and that is fifteen.
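A small sketch of the backoff windows these two parameters imply. The doubling convention used here, where W plus one doubles and then one is subtracted, is a common one but an assumption on my part, since the lecture only gives the minimum window and the number of stages:

```python
W_MIN = 2**4 - 1    # minimum contention window: 15 slots
MAX_STAGES = 3      # at most three doublings, then the frame is declared lost

def window(stage):
    """Contention window at a given backoff stage (stage 0 = first attempt)."""
    stage = min(stage, MAX_STAGES)      # window stops growing after stage 3
    return (W_MIN + 1) * 2**stage - 1

print([window(k) for k in range(MAX_STAGES + 1)])   # [15, 31, 63, 127]
```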

Â 4:54

We can plug into the formulas. First, we can plug into the formula for tau, and it turns out we can solve numerically for tau to be 0.0765.

That is, basically, 7.65% is the contention probability, the probability that you will transmit as a single station. Okay.

And then this leads to the PT and PS calculations, which, together with all those constants, lead to the S calculation. Okay.

The exact blown-out form of the formula is in the textbook, or you can just verify it through your own calculation, okay?
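A sketch of how such a tau can be obtained numerically, using the standard Bianchi-style fixed point between the transmission probability tau and the conditional collision probability p, with the lecture's W = 15 and three backoff stages. The exact formula in the textbook may differ in details, so the numbers here are illustrative rather than a reproduction of the 0.0765 figure:

```python
def solve_tau(n, W=15, m=3, iters=2000):
    """Damped fixed-point iteration for the per-slot transmission
    probability tau of one of n saturated stations."""
    tau = 0.1
    for _ in range(iters):
        p = 1 - (1 - tau)**(n - 1)            # collision probability a station sees
        tau_next = (2 * (1 - 2*p) /
                    ((1 - 2*p)*(W + 1) + p*W*(1 - (2*p)**m)))
        tau = 0.5*tau + 0.5*tau_next          # damping for stable convergence
    return tau

print(round(solve_tau(6), 4))   # around 0.076 with these parameters
```

Note that tau shrinks as the crowd grows: each station backs off more because it collides more often.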

So now we're going to look at this S as a function of a few things. First of all, as a function of N, the number of user stations, or in other words, the impact of crowd size.

So now I'm plotting S, in megabits per second, against N here. If you look at the aggregate throughput as a function of N, it first goes up, and then goes down.

Now, going up is easy to understand, because there are more stations. But it quickly starts to go down. This is the point where the tragedy of the commons kicks in so strongly that adding more users will reduce even the total throughput across all users.
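A sketch putting the slot lengths and the fixed point together into the throughput S(n). The ACK duration below is a placeholder assumption, since the lecture does not give it, so the exact numbers are illustrative; the shape is the point: aggregate throughput rises and then falls, while per-station throughput only falls:

```python
def solve_tau(n, W=15, m=3, iters=2000):
    """Bianchi-style fixed point for the transmission probability tau."""
    tau = 0.1
    for _ in range(iters):
        p = 1 - (1 - tau)**(n - 1)
        tau_next = (2 * (1 - 2*p) /
                    ((1 - 2*p)*(W + 1) + p*W*(1 - (2*p)**m)))
        tau = 0.5*tau + 0.5*tau_next
    return tau

def throughput_mbps(n, L=8192, t_ack=24.0):
    """Aggregate throughput S(n). Times are in microseconds, so bits/us = Mbps.
    t_ack is a hypothetical ACK duration, not a value from the lecture."""
    t_b = 28.0                                      # idle backoff slot = DIFS
    t_data = 16 + 24/6 + (16 + 240 + 32 + L)/54     # data frame time from above
    t_s = t_data + 10 + t_ack + 28                  # + SIFS + ACK + DIFS
    t_c = t_s                                       # collision costs as much as success
    tau = solve_tau(n)
    p_tr = 1 - (1 - tau)**n                         # some station transmits
    p_s = n * tau * (1 - tau)**(n - 1) / p_tr       # ...and it is a success
    avg_slot = (1 - p_tr)*t_b + p_tr*p_s*t_s + p_tr*(1 - p_s)*t_c
    return p_tr * p_s * L / avg_slot

for n in (2, 8, 15, 30):
    print(n, round(throughput_mbps(n), 2), round(throughput_mbps(n) / n, 2))
```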

And this happens at around eight users. And if you look at the total throughput divided by N, okay,

S over N, that's the average per-station throughput. Then it actually always goes down.

It never goes up. Why? Because you add more users and more interference.

What is important is that it goes down so rapidly. As you go from, like, two or three users up to ten or fifteen users,

it comes down from about 25 megabits per second. Notice, not 54, because 54 is the physical-layer speed.

Okay. After the overhead, it goes down to about 25 as the realistic speed. This is a theme we'll pick up in the next lecture. Okay.

In today's lecture, notice the shape of this drop. It drops very rapidly, to the point of going down all the way to only one or two megabits per second.

So no wonder, in a busy hot spot, the average per-station throughput is so low: despite all the smart ideas, the CSMA random access control tames the commons in a very inefficient way. Now, a few more charts. For example, we can also measure S as a function of aggressiveness.

One way to look at aggressiveness is to look at the minimum window size you have to wait up to. Now we see that, for different sizes of the crowd, initially, if you make W_min bigger, meaning you make the contention less aggressive, then the throughput actually goes up.

Okay? That's very good.

But at some point it goes down, because you are so non-aggressive that you're actually wasting idle resources in the network. So there's a point beyond which being more polite actually hurts your throughput.

Again, very typical of a cocktail party: you don't want to be too aggressive, but if you're too non-aggressive, then you're just wasting time slots.
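Using the same illustrative model as before, with the same hypothetical ACK duration, a sketch of this sweep over the minimum window for a fixed crowd:

```python
def solve_tau(n, W, m=3, iters=2000):
    """Bianchi-style fixed point for the transmission probability tau."""
    tau = 0.1
    for _ in range(iters):
        p = 1 - (1 - tau)**(n - 1)
        tau_next = (2 * (1 - 2*p) /
                    ((1 - 2*p)*(W + 1) + p*W*(1 - (2*p)**m)))
        tau = 0.5*tau + 0.5*tau_next
    return tau

def throughput_mbps(n, W, L=8192, t_ack=24.0):
    """Aggregate throughput; times in microseconds. t_ack is a placeholder."""
    t_s = (16 + 24/6 + (16 + 240 + 32 + L)/54) + 10 + t_ack + 28
    tau = solve_tau(n, W)
    p_tr = 1 - (1 - tau)**n
    p_s = n * tau * (1 - tau)**(n - 1) / p_tr
    avg_slot = (1 - p_tr)*28 + p_tr*t_s     # collision slot = success slot here
    return p_tr * p_s * L / avg_slot

# For a fixed crowd of 10 stations: politeness first helps, then hurts.
for W in (3, 15, 63, 1023):
    print(W, round(throughput_mbps(10, W), 2))
```

In this sketch, throughput peaks at a moderate window and collapses at a very large one, because almost every 28-microsecond backoff slot then goes idle.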

And as the crowd gets bigger and bigger, okay, we see that the range of W_min before the curve starts to bend down becomes longer and longer. That means that as the crowd gets bigger and bigger, it pays more and more to reduce aggressiveness.

Another way to look at aggressiveness is to look at the maximum number of backoff stages that you allow. As you make B bigger, you tend to increase the average contention window size, and therefore become less aggressive, and you'll see a similar behavior here.

Okay? As the crowd becomes bigger and bigger, the impact is more prominent. Okay. The throughput actually becomes bigger as you become less aggressive. Okay.

The impact of B, however, is less prominent than the impact of W_min there.

Finally, S as a function of the payload size L. Okay. We were talking about somewhere around 8,192 bits here. Okay.

So we get around 25 megabits per second. Now you see a monotonically increasing curve, because more payload means less overhead, relatively speaking.

But this is a misleading chart, because, remember, all the way back early in the lecture, we did not model the actual interference and collision phenomena accurately. As the payload gets bigger and bigger, the chance of collision actually goes up, because the chance of two packets overlapping in time goes up as it takes longer to transmit the payload. Okay.

If you incorporate that factor, this curve will actually start to bend over and turn downward. So, in summary, what we have seen is that

in WiFi, interference management is done through random access rather than through power control and so on. And a big part of the reason is that it's operating in the unlicensed spectrum.

There are a few very good ideas, including randomized and exponential backoff, including differentiated wait-and-listen intervals, and including limited explicit message passing; although, by the way, RTS/CTS is not always enabled, and that may also explain some of the inefficiency of throughput in hot spots. But it's got a big limitation.

Now, we went through a relatively simple, but still a little bit involved, approximation of the throughput, and we saw that this throughput per station, as a function of N, drops rapidly: the performance decreases very fast as the contention intensifies, even as we go up from several users to just, say, ten or fifteen users.

And this is the underlying reason why the performance in hot spots tends to be poor, unless it so happens that you don't have a large crowd.

And we will see a fundamentally different way to do distributed coordination in taming this tragedy of the commons. So now we're going to wrap up our wireless lectures with one more lecture on a very practical and important question: what is the actual speed that I can expect on my cellular, LTE or 3G, network?

That will be the next lecture. I'll see you then.
