0:00

The next thing we're going to look at is how to compute with large modular integers. So the first question is how do we represent large integers in a computer? That's actually fairly straightforward. Imagine we're on a 64-bit machine: what we would do is break the number we want to represent into 32-bit buckets, so an n-bit number becomes n/32 buckets, and together these buckets represent the number that we want to store on the computer. Now, I should mention that I'm only giving 64-bit registers as an example. In fact, many modern processors have 128-bit registers or more, and you can even do multiplications on them, so normally you would actually use much larger blocks than just 32 bits. The reason, by the way, you want to limit yourself to 32 bits is so that you can multiply two blocks together and the result will still be less than 64 bits, less than the word size on the machine. So now let's look at particular arithmetic

operations and see how long each one takes. For addition and subtraction, basically what you would do is add with carry or subtract with borrow, and those are linear-time operations. In other words, if you want to add or subtract two n-bit integers, the running time is linear in n.

Multiplication naively will take quadratic time. This is what's called the high-school algorithm, the one you learned in school, and if you think about it for a minute you'll see that this algorithm is quadratic in the length of the numbers being multiplied. So a big surprise in the 1960s was an algorithm due to Karatsuba that actually achieves much better than quadratic time; in fact, it achieves a running time of n to the 1.585. There's no point in me showing you the whole algorithm here; I'll just mention the main idea. What Karatsuba realized is that when you want to multiply two numbers, you can take the first number x and write it as 2 to the b times x2, plus x1, where x2 and x1 are roughly the size of the square root of x. So we break the number x into the left half of x and the right half of x; basically, you're writing x as if it were written base 2 to the b, with two digits base 2 to the b. And you do the same thing with y: you write y base 2 to the b, again as the sum of a left half and a right half. Now, normally, if you do this multiplication and open up the parentheses, you see that it would require four multiplications, right? It would require x2 times y2, x2 times y1, x1 times y2, and x1 times y1. What Karatsuba realized is that there's a way to do this multiplication of x by y using only three multiplications involving x1, x2, y1, y2. So it's a big multiplication of x times y that takes only three little multiplications. You can then recursively apply exactly the same procedure to each of the three little multiplications, and so on and so forth, and you get a recursive algorithm. If you do the recursive analysis, you will see that you get a running time of n to the 1.585, where 1.585 is basically log of 3 base 2.
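To make the idea concrete, here is a minimal sketch of Karatsuba's recursion in Python; the function name is mine, and Python's built-in big integers stand in for the block arithmetic a real library would do in machine words:

```python
def karatsuba(x, y):
    # Base case: single-block numbers are multiplied directly.
    if x < 2**32 or y < 2**32:
        return x * y
    # Split at b bits, so x = 2^b * x2 + x1 and y = 2^b * y2 + y1,
    # where x1, x2 (and y1, y2) are roughly half the length of x (and y).
    b = max(x.bit_length(), y.bit_length()) // 2
    x2, x1 = x >> b, x & ((1 << b) - 1)
    y2, y1 = y >> b, y & ((1 << b) - 1)
    # Three recursive multiplications instead of four:
    hi = karatsuba(x2, y2)              # x2 * y2
    lo = karatsuba(x1, y1)              # x1 * y1
    mid = karatsuba(x2 + x1, y2 + y1)   # (x2 + x1) * (y2 + y1)
    # The cross term x2*y1 + x1*y2 equals mid - hi - lo.
    return (hi << (2 * b)) + ((mid - hi - lo) << b) + lo
```

Each call replaces one multiplication of n-bit numbers with three multiplications of roughly n/2-bit numbers, which is exactly where the exponent log of 3 base 2, about 1.585, comes from.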

Surprisingly, it turns out that Karatsuba's isn't even the best multiplication algorithm out there. It turns out that, in fact, you can do multiplication in about n log n time, so you can do multiplication in almost linear time. However, this is an extremely asymptotic result: the big-O here hides very big constants, and as a result the algorithm only becomes practical when the numbers are absolutely enormous. So that algorithm is actually not used very often. But Karatsuba's algorithm is very practical, and in fact most crypto libraries implement Karatsuba's algorithm for multiplication. However, for simplicity here, I'm just gonna ignore Karatsuba's algorithm and assume that multiplication runs in quadratic time. But in your mind, you should always be thinking that multiplication really is a little bit faster than quadratic.
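For contrast, here is what the quadratic high-school algorithm looks like over the 32-bit buckets from earlier; this is only a sketch (the helper names are mine), but it is essentially what a bignum library does with machine words:

```python
BASE = 2**32  # 32-bit buckets: a product of two buckets fits in a 64-bit word

def to_buckets(n):
    """Break a number into base-2^32 digits, least significant first."""
    buckets = []
    while n:
        buckets.append(n % BASE)
        n //= BASE
    return buckets or [0]

def schoolbook_multiply(x, y):
    xs, ys = to_buckets(x), to_buckets(y)
    out = [0] * (len(xs) + len(ys))
    for i, a in enumerate(xs):          # one pass per bucket of x ...
        carry = 0
        for j, b in enumerate(ys):      # ... times every bucket of y: quadratic
            t = out[i + j] + a * b + carry
            out[i + j] = t % BASE
            carry = t // BASE
        out[i + len(ys)] += carry
    # Reassemble the buckets into a single integer.
    return sum(d * BASE**k for k, d in enumerate(out))
```

The nested loop touches every pair of buckets once, which is exactly the quadratic behavior we're assuming.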

The next question after multiplication is what about division with remainder, and it turns out that's also a quadratic-time algorithm.

So the main operation that remains, one that we've used many times so far and that I've never actually told you how to compute, is exponentiation. Let's solve this exponentiation problem a bit more abstractly. Imagine we have a finite cyclic group G. All this means is that this group is generated by the powers of some generator little g. So, for example, think of this group as simply ZP*, and think of little g as some generator of big G. The reason I'm saying it this way is that I want you to start getting used to this abstraction where we deal with a generic group G, and ZP* really is just one example of such a group; in fact, there are many other examples of finite cyclic groups. And again, I want to emphasize that the group G is simply the powers of this generator, up to the order of the group.

Â I'll write it as G to the Q. So our goal now is given this element g, and some

Â exponent x, our goal is to compute the value of g to the x. Now normally what you

Â would say is, you would think well, you know, if x is equal to 3 then I'm

Â gonna compute you know, g cubed. Well, there's really nothing to do. All I do is

Â I just do g times g times g and I get g cubed, which is what I wanted. So that's

Â actually pretty easy. But in fact, that's not the case that we're interested in. In

Â our case, our exponents are gonna be enormous. And so if you try, you know,

Â think of like a 500-bit number and so if you try to compute g to the power of a

Â 500-bit number simply by multiplying g by g by g by g this is gonna take quite a

Â while. In fact it will take exponential time which is not something that we want

Â to do. So the question is whether even though x is enormous, can we still compute

Â g to the x relatively fast and the answer is yes and the algorithm that does that

Â is called a repeated squaring algorithm. And so let me show you how repeated

Â squaring works. So let's take as an example, 53. Naively you would have to do

Â 53 multiplications of g by g by g by g until you get to g by the 53 but I want to

Â show you how you can do it very quickly. So what we'll do is we'll write 53 in

Â binary. So here this is the binary representation of 53. And all that means

Â is, you notice this one corresponds to 32, this one corresponds to 16, this one

Â corresponds to 4, and this one corresponds to 1. So really 53 is 32

Â plus 16 plus 4 plus 1. But what that means is that g to the power of 53 is

Â g to the power of 32+16+4+1. And we can break that up, using again, the rules of

Â exponentiation. We can break that up as g to the 32 times g to the 16 times g to the

Â 4 times g to the 1, Now that should start giving you an idea for how to compute g to

Â the 53 very quickly. What we'll do is, simply, we'll take g and we'll start

Â squaring it. So what square wants, g wants to get g squared. We square it again to

Â get g to the 4, turn g to the 8. Turn g to the 16, g to the 32. So

Â we've computed all these squares of g. And now, what we're gonna do is we're simply

Â gonna multiply the appropriate powers to give us the g to the 53. So this is g to

Â the one times g to the 4 times g to the 16 times g to the 32, is basically

Â gonna give us the value that we want, which is g to the 53. So here you see that

Â all we had to do was just compute, let's see, we had to do one, two, three, four,

Â five squaring, plus four more multiplications so with 9 multiplications we computed g

Â to the 53. Okay so that's pretty interesting. And it turns out this is a

general phenomenon that allows us to raise g to very, very high powers very quickly. So let me show you the algorithm; as I said, this is called the repeated squaring algorithm. The input to the algorithm is the element g and the exponent x, and the output is g to the x. What we're gonna do is write x in binary notation; so let's say that x has n bits, and these are the actual bits of x as a binary number. And then we'll do the following. We'll keep two registers: y is a register that's constantly squared, and z is an accumulator that multiplies in the appropriate powers of g as needed. All we do is loop through the bits of x, starting from the least significant bit, and in every iteration we simply square y; y just keeps on squaring at every iteration. Whenever the corresponding bit of the exponent x happens to be 1, we accumulate the current value of y into the accumulator z. At the end, we simply output z. That's it; that's the whole algorithm, and that's the repeated squaring algorithm.

So let's see an example with g to the 53. You can see the two columns: y is one column, as it evolves through the iterations, and z is another column, again as it evolves through the iterations. y is not very interesting: all that happens to y is that at every iteration it simply gets squared, so it just walks through the powers of two in the exponent, and that's it. z is the more interesting register: it accumulates the appropriate powers of g whenever the corresponding bit of the exponent is 1. So, for example, the first bit of the exponent is 1; therefore, at the end of the first iteration, the value of z is simply equal to g. The second bit of the exponent is 0, so the value of z doesn't change after the second iteration. At the end of the third iteration, well, the third bit of the exponent is 1, so we accumulate g to the 4 into z. The next bit of the exponent is 0, so z doesn't change. The next bit of the exponent is 1, and so now we're supposed to accumulate the current value of y into the accumulator z. So let me ask you: what's gonna be the value of z?

Â 10:10

Well, we simply accumulate g to the 16 into z: we compute the sum of the exponents 16 and 5, and we get g to the 21. Finally, the last bit is also set to 1, so we accumulate it into z; we do 32 plus 21 and we finally output g to the 53. Okay, so this gives you an idea of how this repeated squaring algorithm works.
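Here is the whole two-register algorithm as a Python sketch (the function name is mine); since in our setting we work in a group like ZP*, it reduces modulo p at every step so the intermediate values stay small:

```python
def repeated_squaring(g, x, p):
    """Compute g^x mod p by scanning the bits of x from least significant up."""
    y = g % p   # register that is squared every iteration: g, g^2, g^4, g^8, ...
    z = 1       # accumulator for the powers whose exponent bit is 1
    while x > 0:
        if x & 1:            # current bit of the exponent is 1:
            z = (z * y) % p  # fold the current power of g into the result
        y = (y * y) % p      # square for the next bit
        x >>= 1              # move on to the next bit of x
    return z
```

Running repeated_squaring(g, 53, p) performs exactly the five squarings and four accumulations traced above: y walks through g, g squared, g to the 4, up to g to the 32, and z picks up g, g to the 4, g to the 16, and g to the 32.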

It's quite an interesting algorithm, and it allows us to compute enormous powers of g very, very quickly. The number of iterations here is essentially log base 2 of x: notice that the number of iterations simply depends on the number of binary digits of x, which is basically log base 2 of x. So even if x is a 500-bit number, then in 500 iterations, really 1,000 multiplications because we have to square and we have to accumulate, we'll be able to raise g to the power of a 500-bit exponent.

Okay, so now we can summarize the running times. Suppose we have an n-bit modulus N. As we said, addition and subtraction in ZN take linear time. Multiplication, as I said, Karatsuba actually makes more efficient, but for simplicity we'll just say it takes quadratic time. And then exponentiation, as I said, takes log of x iterations, and at each iteration we do at most two multiplications, so it's O(log x) times the time to multiply; and let's say that the time to multiply is quadratic, so the running time would really be n squared times log x. And since x is always gonna be less than N (by Fermat's theorem there's no point in raising g to a power larger than the modulus), let's suppose that x is also an n-bit integer; then exponentiation is really a cubic-time algorithm.

Okay, so that's what I wanted you to remember: exponentiation is actually relatively slow. These days, it takes a few microseconds on a modern computer, but still, microseconds on, say, a four-gigahertz processor is quite a bit of work. So just keep in mind that all the exponentiation operations we talked about, for example for determining whether a number is a quadratic residue or not, basically take cubic time.
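As a concrete illustration, Python's built-in three-argument pow implements exactly this kind of fast modular exponentiation, reducing mod p after every step. Here is a sketch of the quadratic-residue test just mentioned, via Euler's criterion (the Mersenne prime modulus is my choice for the example):

```python
p = 2**31 - 1                 # a Mersenne prime, so ZP* has order p - 1
a = (123456 * 123456) % p     # a known square, hence a quadratic residue mod p
# Euler's criterion: a is a quadratic residue mod p
# exactly when a^((p-1)/2) = 1 (mod p).
# pow(a, e, p) uses fast exponentiation with reduction at every step,
# so intermediate values never grow beyond the modulus.
print(pow(a, (p - 1) // 2, p))  # prints 1 for a quadratic residue
```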
