0:00

In this lecture, we'll continue our formal treatment of asymptotic notation. We've already discussed big O notation, which is by far the most important and ubiquitous concept in asymptotic notation, but, for completeness, I do want to tell you about a couple of close relatives of big O, namely omega and theta. If big O is analogous to "less than or equal to," then omega and theta are analogous to "greater than or equal to" and "equal to," respectively. But let's treat them a little more precisely.

The formal definition of omega notation closely mirrors that of big O notation. We say that one function, T of N, is big omega of another function, F of N, if eventually, that is, for sufficiently large N, it's lower bounded by a constant multiple of F of N. We quantify the ideas of a constant multiple and "eventually" in exactly the same way as before, namely by explicitly giving two constants, C and N naught, such that T of N is bounded below by C times F of N for all sufficiently large N, that is, for all N at least N naught.
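Stated symbolically, the definition just given reads:

```latex
T(n) = \Omega(f(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 1 \ \text{such that}\ T(n) \ge c \cdot f(n) \ \text{for all } n \ge n_0.
```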

There's a picture just like there was for big O notation. Perhaps we have a function T of N that looks something like this green curve, and another function F of N that lies above T of N. But when we multiply F of N by one half, we get something that, eventually, is always below T of N. So this picture is an example where T of N is indeed big omega of F of N. As far as what the constants are, the multiple we use, C, is obviously just one half; that's what we're multiplying F of N by. And, as before, N naught is the crossing point between the two functions, that is, the point after which C times F of N always lies below T of N forevermore. So that's big omega.

Theta notation is the equivalent of "equal to," and it just means that the function is both big O of F of N and big omega of F of N. An equivalent way to think about this is that, eventually, T of N is sandwiched between two different constant multiples of F of N. I'll write that down, and I'll leave it to you to verify that the two notions are equivalent, that is, that each one implies the other. What do I mean by "T of N is eventually sandwiched between two multiples of F of N"? I just mean we choose two constants, a small one, C1, and a big constant, C2, and for all N at least N naught, T of N lies between those two constant multiples.
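Writing that down in symbols, the sandwich condition just described is:

```latex
T(n) = \Theta(f(n)) \iff \exists\, c_1, c_2 > 0,\ \exists\, n_0 \ge 1 \ \text{such that}\ c_1 \cdot f(n) \le T(n) \le c_2 \cdot f(n) \ \text{for all } n \ge n_0.
```

Checking that this is equivalent to T of N being both big O of F of N and big omega of F of N is the verification left to you above.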

One way that algorithm designers can be quite sloppy is by using big O notation instead of theta notation. That's a common convention, and I will follow it often in this class. Let me give you an example. Suppose we have a subroutine that does a linear scan through an array of length N: it looks at each entry in the array and does a constant amount of work with each entry. The merge subroutine would be more or less an example of a subroutine of that type.
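As a rough sketch (this code is illustrative only and isn't from the lecture; the function name and the per-entry work are made up for the example), such a subroutine might look like:

```python
def linear_scan(array):
    # Looks at each of the n entries exactly once and does a constant
    # amount of work per entry, so the running time is Theta(n) --
    # and therefore also O(n) and Omega(n).
    total = 0
    for entry in array:
        total += entry  # constant work per entry
    return total
```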

So even though the running time of such a subroutine is patently theta of N, it does constant work for each of N entries, so it's exactly theta of N, we'll often just say that it has running time O of N; we won't bother to make the stronger statement that it's theta of N. The reason we do that is that, as algorithm designers, what we really care about is upper bounds. We want guarantees on how long our algorithms are going to run, so naturally we focus on the upper bounds and not so much on the lower bound side. So don't get confused: once in a while, there will be a quantity which is obviously theta of F of N, and I'll just make the weaker statement that it's O of F of N.

The next quiz is meant to check your understanding of these three concepts: big O, big omega, and big theta notation.

3:29

So the final three responses are all correct, and I hope the high-level intuition for why is fairly clear. T of N is definitely a quadratic function; we know that the linear term doesn't matter as N grows large. Since T of N has quadratic growth, the third response is correct: it's theta of N squared. It is also omega of N. Omega of N is not a very good lower bound on the asymptotic rate of growth of T of N, but it is a legitimate one: as a quadratically growing function, T of N grows at least as fast as a linear function, so it's omega of N. For the same reason, big O of N cubed is not a very good upper bound, but it is a legitimate, correct one: the rate of growth of T of N is at most cubic. In fact, it's at most

quadratic, but it is indeed, at most, cubic. Now if you wanted to prove these three statements formally, you would just exhibit the appropriate constants. To prove that it's big omega of N, you could take N naught equal to one and C equal to one half. For the final statement, you could again take N naught equal to one and C equal to, say, four. And to prove that it's theta of N squared, you could do something similar, just using the two constants combined: N naught would be one, and you could take C1 to be one half and C2 to be four. I'll leave it to you to verify that the formal definitions of big omega, big theta, and big O would be satisfied with these choices of constants.
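To make that verification concrete, here is a small numeric check in Python. The specific quadratic T(n) = n^2/2 + 3n is only an assumed stand-in, since the quiz's exact function isn't restated in this transcript; the constants are the ones just mentioned (c = 1/2 with n naught = 1 for the omega bound, c = 4 for the cubic upper bound, and c1 = 1/2, c2 = 4 for the theta bound):

```python
def T(n):
    # Hypothetical quadratic running time; stands in for the quiz's function.
    return 0.5 * n**2 + 3 * n

# Check all three bounds for a range of n >= n0 = 1.
for n in range(1, 10_001):
    assert T(n) >= 0.5 * n                   # T(n) = Omega(n)   with c = 1/2
    assert T(n) <= 4 * n**3                  # T(n) = O(n^3)     with c = 4
    assert 0.5 * n**2 <= T(n) <= 4 * n**2    # T(n) = Theta(n^2) with c1 = 1/2, c2 = 4
print("All three bounds hold for n = 1, ..., 10000.")
```

A finite check like this is of course not a proof; the point is just that, with the constants above, the inequalities behind each definition are easy to state and verify.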

One final piece of asymptotic notation: we're not going to use this much, but you do see it from time to time, so I wanted to mention it briefly. It's called little o notation, in contrast to big O notation. While big O notation is, informally, a "less than or equal to" type of relation, little o is a strict "less than" relation; intuitively, it means that one function grows strictly more slowly than another. Formally, we say that a function T of N is little o of F of N if and only if, for all constants C, there is a constant N naught beyond which T of N is upper bounded by the constant multiple C times F of N.
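In symbols, this definition is:

```latex
T(n) = o(f(n)) \iff \forall\, c > 0,\ \exists\, n_0 \ge 1 \ \text{such that}\ T(n) \le c \cdot f(n) \ \text{for all } n \ge n_0.
```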

The difference between this definition and that of big O notation is that, to prove that one function is big O of another, we only have to exhibit one measly constant C such that C times F of N is eventually an upper bound on T of N. By contrast, to prove that something is little o of another function, we have to prove something quite a bit stronger: for every single constant C, no matter how small, there exists some large enough N naught beyond which T of N is bounded above by C times F of N. For those of you looking for a little more facility with little o notation, I'll leave it as an exercise to prove that, as you'd expect, for all polynomial powers K, N to the K minus one is little o of N to the K.
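If you'd like a hint for that exercise (this sketch is mine and isn't spelled out in the lecture): given any constant C, no matter how small, you can take N naught to be one over C, because for positive N,

```latex
n^{k-1} \le c \cdot n^{k} \iff 1 \le c \cdot n \iff n \ge \tfrac{1}{c}.
```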

There is an analogous notion of little omega notation, expressing that one function grows strictly more quickly than another, but you don't see that one very often, and I'm not going to say anything more about it.

So let me conclude this video with a quote from an article, back from 1976, about my colleague Don Knuth, widely regarded as the grandfather of the formal analysis of algorithms. It's rare that you can pinpoint why and where some kind of notation became universally adopted in a field. In the case of asymptotic notation, it's very clear where it came from: the notation was not invented by algorithm designers or computer scientists; it had been in use in number theory since the nineteenth century. But it was Don Knuth in '76 who proposed that this become the standard language for discussing rates of growth and, in particular, the running time of algorithms. So, in this article, he says, "On the basis of the issues discussed here, I propose that members of SIGACT," this is the special interest group of the ACM concerned with theoretical computer science, in particular the analysis of algorithms, "I propose that the members of SIGACT and editors in computer science and mathematics journals adopt the O, omega, and theta notations as defined above, unless a better alternative can be found reasonably soon." Clearly, a better alternative was not found, and ever since that time this has been the standard way of discussing the rate of growth of the running times of algorithms, and that's what we'll be using here.
