In this course, you will learn to design the computer architecture of complex modern microprocessors.

From the course by Princeton University

Computer Architecture

234 ratings

From the lesson

Vector Processors and GPUs

This lecture covers the vector processor and optimizations for vector processors.

- David Wentzlaff, Assistant Professor

Electrical Engineering

Today, we move on to our new topic: vector computers.

So, a little bit of introduction on vectors. A vector machine is a vector processor; broadly, it's a way to get at data-level parallelism. Many times, for, say, array operations, you're going to want to take one whole array and add it to another whole array, and let's say these arrays are large. Does it really make sense to have a processor sit in a tight loop doing load, add, store, load, add, store? The insight that comes out of this is that if you have computations that work on vectors or matrices, or even multi-dimensional matrices, you can think about building an architecture where you don't need as much instruction fetch and instruction decode bandwidth. You don't have to sit there fetching new instructions and continually operating on those new instructions; you can have one instruction which encodes some large amount of computation, because it's all the same operation. That's the insight.

Also in today's lecture, we're going to be talking about single instruction, multiple data (SIMD) architectures. This is kind of a degenerate case of vector architectures. A good example of this is something like the multimedia extensions, or MMX, in the Intel processors, or AltiVec in the PowerPC architecture. The newer thing that Intel has added is called SSE, the Streaming SIMD Extensions, and they also now have something they call AVX, which is even wider. They can basically keep adding more instructions to make the short-vector support better. And then finally today, if we have time, we'll be talking about graphics processing units.

So, I have some examples here. This is the ATI FirePro 3D V7800, and then we have the Nvidia competitor, which is the Nvidia Tesla; I think this is the C075. Both of these are very fast processors. What is interesting is that these started out as graphics processors. They started out to play video games, effectively, or to do some sort of rendering of three-dimensional data. You take some data, you operate on it, and there's massive parallelism there: lots of different triangles in a three-dimensional image, for instance, in three-dimensional rendering. And people had the insight that the same processing architecture that is good at rendering triangles might also be good at, say, dense matrix operations. We've seen this outgrowth, we've seen a whole programming model come up around it, and this is all very recent. To some extent, these architectures don't come from the same lineage as normal processors. They really come from fixed-function hardware that was there to render video games and three-dimensional sorts of scenes. So their architectures look quite a bit different, and the naming is very different; if you go pick up the manual that tells you how to program one of these things and you come from a computer architecture background, you're just not going to understand any of the words. Your book, actually, the Hennessy and Patterson book, has a very good table translating the terminology, and that makes life a lot easier.

Okay. So, let's get started looking at vector processors, and let's look at the programming model first, before we look at the architecture. So, this is the software model, not what the hardware looks like.

To start off here, a couple of things to note. In the traditional vector architecture, you're going to have some scalar registers. These are the registers like in a normal microprocessor: they just hold one value, maybe, let's say, 32 bits or 64 bits in width. And then you have a second register file, which holds vectors. When you go to access one of these vectors, it's the same thing as a register file: if you go to access, let's say, vector register three, that doesn't denote one value. Instead, it denotes many values at one time. We have a fixed width drawn here, but typically these things have very long widths. So, for instance, something like the Cray-1 had a maximum vector length of 64 elements, where each element was 64 bits. So, it's a lot of data that you're moving around at one time with one operation.

An important piece of the architecture, or at least the programming model, is the vector-length register. The vector-length register says how many of these elements are actually populated, and we'll see why that's important. But for right now, let's just think of the vector-length register as being equal to the maximum number of elements in the vector. So, think of it as having 64 elements, and the vector-length register says you're always operating on all 64 entries of data in parallel.
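To make this programming model concrete, here is a minimal C sketch of the architectural state just described. The names (`MAX_VL`, `scalar_regs`, `vector_regs`, `vl`) are mine, not from any real machine's manual; the sizes follow the Cray-1 numbers above.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_VL 64  /* Cray-1-style maximum vector length (illustrative name) */

/* Scalar registers: each holds a single value, here 64 bits wide. */
uint64_t scalar_regs[32];

/* Vector registers: naming one of these denotes MAX_VL elements at
 * once, 64 bits per element -- not a single value. */
uint64_t vector_regs[8][MAX_VL];

/* Vector-length register: how many elements are actually populated.
 * For now, assume it always equals the maximum. */
unsigned vl = MAX_VL;
```

The point of the sketch is only the shape of the state: one register name stands for a whole row of elements, and `vl` tells you how much of that row an operation touches.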

Now, if we go look at the programming model connected to this, we need to add some extra instructions. In our scalar processors, all the processors we've been talking about up to this point, an instruction operates on one register and one other register. That still exists in this model, but it operates only on the scalar registers. Now, the reason we still have the scalar registers around in this model is that we want to have things like branch conditions and address computation, things that are not vectorizable. You don't have 64 addresses; well, maybe you do in certain cases, but typically you're not going to have that lying around. You're just going to have one address that you need to load from, and for branches, you need to branch based on some single value, not all 64 values.

But now we add some special extra instructions. If you go look in your book, they develop an architecture they call VMIPS, or vector MIPS, and they add some extra instructions which look very similar to normal MIPS, except they put Vs at the end. So, VV means the instruction operates on a vector with another vector. They also develop instructions with a V and an S, where the S is for scalar, so you can do a vector plus a scalar. For example, with an add-vector-scalar instruction, you add one vector to a scalar register; if the scalar register is loaded with one, this add increments every element of the vector by one.
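As a sketch of what that vector-scalar add means, here's the semantics in C. The function name and argument layout are my own illustration, not VMIPS syntax:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative semantics of an ADDVS-style instruction: add one
 * scalar value to each of the first vl elements of a vector. */
void addvs(int64_t *dst, const int64_t *src, int64_t scalar, unsigned vl)
{
    for (unsigned i = 0; i < vl; i++)
        dst[i] = src[i] + scalar;
}
```

With the scalar register holding one, this increments every element of the vector by one, exactly as in the example above.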

You also have loads and stores, which can pull out very large chunks of memory, and put back very large chunks of memory, to and from the arrays in memory.

But if you look at what's going on in one of these instructions, we're taking one vector and another vector, putting them into some sort of arithmetic operation, and then storing the result into another register. This is a register-register vector architecture. There have been some register-memory and memory-memory vector architectures out there, where instead of naming vector registers you can name places in memory, but the register-register variants are the most popular, just like the register-register scalar computer architectures are now the most popular.

One thing I did want to point out here is that we've said nothing about how many ALUs there are in this architecture. This is just the abstract programming model. So, don't get this confused with having one, two, three, four, five, six functional units or something like that. This is just an abstract model right now; we have not talked about the hardware. So, this brings up: how do we get data?

We have an instruction here that we'll call load vector. Load vector has a destination, which is a vector register, and a source, which is a scalar register; you might have another offset register, but let's say there's only one register in our basic load vector operation here. That register holds the address pointing to the base of the vector in memory. When you go to do this load, it's actually going to pull data in from memory into our vector register.

You can also start to think about having interesting offsets or strides here. That's what this picture is trying to show: we have a base pointer, pointed to by register one, which is a scalar register. Note the different naming: the vector registers have Vs and the scalar registers have Rs. And then we have a stride here, which says where in memory to take from. So, you can think about loading from multiple locations in memory, but taking, say, every fifth element. You could load register two here with five and register one here with the base address, then do this load vector instruction, and it will take every fifth piece of memory, of some data size, and load it into the vector register. This is our abstract model, but at the beginning here, let's assume what's called unit stride, which basically means the stride is always one, so it's always getting the next value in a row. We'll talk about the more complicated non-unit-stride cases later.
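Here's a small C sketch of that strided load. The names are mine, and to keep it simple the stride counts elements rather than bytes, which a real machine would more likely use:

```c
#include <assert.h>
#include <stdint.h>

/* Strided vector load: gather vl elements from memory starting at
 * base, taking every stride-th element. stride == 1 is unit stride,
 * i.e. consecutive elements. */
void load_vector(int64_t *vreg, const int64_t *base, long stride, unsigned vl)
{
    for (unsigned i = 0; i < vl; i++)
        vreg[i] = base[i * stride];
}
```

With stride 5 this picks up every fifth element, as in the example above; with stride 1 it is the unit-stride case we assume for now.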

So, let's look at what this does to code. Here we have a basic code example; it's going to multiply, element-wise, the different elements of vectors A and B, and deposit the result into vector C. Now, these live in memory, because this is C code, so these are actually arrays. And obviously this is not a matrix multiplication, because matrix math is much more complicated; this is an element-wise multiplication.

If you go look at the scalar assembly code: well, first of all, we need to have a loop. We have to load the first value, load the second value, do the multiply, and do the store. This is showing code for double-precision floating-point multiplies. Then, you have to increment a bunch of pointers, check the boundary case, and loop around. On your vector architecture, life gets a little bit easier here, because we can do all 64 of these in one instruction; we don't have to loop. All you really have to do is load vector, load vector, multiply, and store.

And the instruction on the top here loads the vector-length register. We load the vector-length register with 64 because we're trying to do 64 multiplications. But if we were to load the vector-length register with, say, 32, we would only do the first 32 multiplications. And you can set the vector-length register anywhere up to the maximum vector length. So, there's this value we call the maximum vector length, which is the longest vector the machine supports; the vector-length register says, for the given operation we're about to compute, how many of those element operations we should do. So, you could easily have a machine with a maximum vector length of a thousand, but you only want to do, let's say, the first 64 operations; you load your vector-length register with 64 and only do 64 operations.

A good example of this, actually, is some of the supercomputers. The Cray machines have relatively short maximum vector lengths, but if you go look at something like NEC, the Japanese supercomputers, the NEC SX-8 or SX-9, or whatever the newest one is, I think it's the SX-9, which I believe was at some point the fastest computer in the world. They have very long maximum vector lengths, so they can actually have a vector length of a thousand. That means that, in one instruction, they can basically encode a thousand operations, which is pretty fancy. But you still need to be able to set the vector length, because maybe you don't want to do all thousand all the time.

Okay, so why does this vector approach have advantages? Machines like the Control Data Corporation 6600 or the Cray-1 have very deep pipelines. And if you think about the architecture we've been building up to this point, we had to add a lot of forwarding logic, a lot of bypasses, to be able to bypass one value to the next instruction. Well, if you have a very deep pipeline and you observe back-to-back multiplies or something like that, you're going to stall a lot. But in a vector computer, because you know you're operating on, let's say, 64 operations at a time anyway, you can take out a lot of that bypassing. So, these vector architectures often have no bypassing in them at all: if you're going to be operating on 64 things and your pipeline length is six anyway, there's no possibility that you'll ever actually have to forward data from an element back to itself; all the bypassing between different operations can happen through the register file itself.

Also, deep pipelines are good because you can have very fast clock rates. To give you an example, the old Cray-1 had an 80 megahertz clock. Now, you might say 80 megahertz is not very fast, but back in the mid-70s that was a very fast clock rate for a processor. I mean, these were supercomputers, mind you, but they were very aggressive, and they could do that because they had deep, deep pipelining and lots and lots of logic, and these things were physically large.

I mentioned the memory system, and vector computers have some interesting changes you have to think about in the memory system. One of the things you can do, because you have so many memory operations going on in one vector load, is actually overlap going out to main memory for one element with the load of the next element, even if you issue them sequentially. And most of these vector architectures have many, many memory banks. What's nice is that if you have unit stride, you know that your first load is going to go to this bank, the next operation to the next bank, then that bank, that bank, that bank, so you basically get very good bank distribution, or bank utilization. And this is assuming, right now, that we're actually only doing one memory load at a time.

I have a little note up here that says, okay, each load busies a bank for, let's say, four cycles, and you have a twelve-cycle latency to get out to memory in this Cray-1 machine. Well, on a normal architecture, this would be pretty bad, because you'd be stalling twelve cycles, let's say, to go out to your memory system. That's not the end of the world, but it's not great if you have a load and then a use, a load and then a use, and just keep going back and forth between those loads and uses. But in the vector architecture, because we have a long vector length, we're loading 64 different values, and we know they're going to be well distributed over many different memory banks, we can effectively do this one load and overlap the latencies of the memory banks with each other. So, we'll start one load here, and then one load here, and one load here. And if each has a four-cycle occupancy on its respective bank, and we have a 64-entry vector, then definitely, by the time we wrap around and get back to using this bank again, that first operation will be done. So, it's a relatively effective way to increase the bandwidth of your architecture and guarantee that you're not going to have bank conflicts.
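That wrap-around argument can be written down as a one-line check. This is my own formalization of the reasoning, assuming one new access is issued per cycle and unit stride, so element i hits bank i mod num_banks:

```c
#include <assert.h>

/* With unit stride, element i of a vector load goes to bank
 * i % num_banks, and one new access issues per cycle. A bank stays
 * busy for busy_time cycles after an access, so we return to a bank
 * num_banks cycles after last using it. The sequence is conflict-free
 * exactly when that gap covers the busy time. */
int unit_stride_conflict_free(int num_banks, int busy_time)
{
    return num_banks >= busy_time;
}
```

With, say, 16 banks and the four-cycle busy time from the note above, the load wraps back to its first bank 16 cycles later, twelve cycles after that bank went idle, so no conflict; with only 2 banks it would stall.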
