0:00
[SOUND].
So, welcome to Lecture 11. Where we are in the class: you know a lot
about logic synthesis and optimization and representation.
You know how to place the logic. We now know as of last week that there's
a magic step between the synthesis activity and the placement activity which
is called technology mapping. That takes the stuff that comes out of
synthesis and turns it into real gates available as standard cells in the
technology library. And you know how to place things, so
what's next? What's next is routing things.
You actually have to connect the wires. You actually have to connect the
logic, so the outputs go to the inputs of the next gates.
A large ASIC can easily have 10 million or 20 million wires.
That's a whole lot of work. It's an interesting algorithmic universe
for how we do that. And in this lecture sequence, we're
going to actually explore that. So, let's start with sort of a basic
overview of what's going on in the routing task.
So, we have figured out how to place all of the gates that are created by logic
synthesis and then technology mapping from our technology library, and we have
the new problem of figuring out how to connect the wires.
So this is the routing problem. And in our, you know, our big ASIC, you
could have thousands of macro blocks representing you know, memories and other
predefined chunks of things like processors and such.
Millions and millions of gates, and millions of wires literally.
A big ASIC could be 20 million wires or
40 million wires. You're going to end up with, literally,
kilometers of wire on the surface of the chip.
It is physically impossible to do any of that stuff manually.
We need tools, we need software to do this.
So, this is what we're going to start talking about.
So, let's talk about some of the basic problems that we have to think about.
Like I said, the first is scale. Big chips have an enormous number of wires,
millions of wires, and not every wire gets to take an easy path to connect its
pins. It would be nice if every wire just, you know, went in some nice simple
path, but it might have to do something kind of unpleasant to actually get
connected. You must connect them all.
You can't afford to embed any wires manually.
It's just too difficult, and we have a lot of geometric complexity now too at the
nanoscale. With, you know, wires that are some number of
tens of nanometers across, the geometry rules are incredibly complex just
to make the physics work. And that really makes the routing hard.
Nevertheless, we're going to do what everybody does when we start talking
about routing: we're going to use a simple grid representation of the layout.
It turns out that's the right place to start, and in fact the way real routers
work is that they have a sequence of tools.
So just like placement, we got rid of some detail.
We, for example, pretended the wires were two-point springs, we pretended the gates
had no physical size. And then we dealt with the massive
scale part of the problem, and then we circled back later and we did some other
work to deal with the physical problems like the gates overlapping.
When people build real routers we do the same thing.
We make geometric abstractions like everything's on a grid.
We deal with most of the problems with the simple geometry.
And then we go in very close and we deal with the detailed problems of,
you know, rectangles and nanoscale physics.
We're not going to be able to do that level of detail here.
We're going to do the big, macroscopic problem.
There's also a lot of electrical complexity to deal with.
It's not enough to just make sure you connect all the wires.
You have to ensure that the delays through the wires are not too big,
and that there aren't any bad wire-to-wire interactions, so, electrical crosstalk:
a signal on one wire very close to another wire can affect the electrical
outcome on that other wire.
Lots and lots of interesting rules to worry about.
We don't have time to talk about all of these things.
We're mostly going to try to deal with the scale problem: how do you actually route
a lot of wires with a simple geometric representation?
4:13
So, the first physical assumption to be aware of is that there are many layers
of wiring available for routing on any real ASIC.
They are made of metal; today, they're copper.
And we can connect wires across different layers with things called vias.
And so these are, you know, literally little plugs of metal that go from one
layer of metal wiring to another layer of metal wiring.
Here's a real simple view. This is a standard cell, a single
standard cell, something like a NAND gate drawn in a kind of a 3D
representation. And, you know, standard cells would
typically be using metal on layers 1 and 2 to sort of connect things on the
inside, right? So in this particular case, things that
are going in this direction are the metal 1.
So that's things like this. And things going in this direction, like
that, would be the metal 2. And then there's some other stuff
happening, going again in the other direction up here.
That's a metal 3. You know, the standard cell's internals
would be, you know, routed down in metal 1 and 2, maybe metal 3.
The routing of the wires that connect the logic gates would be up
on, you know, metal 3, 4, 5, 6, 7, 8, maybe even 9 and 10, depending on the
particular technology. The uppermost layers of the wiring are
often reserved for special things like the clocks, global signals, power
distribution, things like that. Here's a typical example from about 2000,
this is just from Wikipedia, so I'm allowed to show it.
This is a cross-section of the way an ASIC with five layers of routing might look.
Little bit of terminology if you look closely at the picture, and on the next
slide we'll zoom in here. FEOL, that's what it says over here.
That's Front End Of Line. And the line means the fabrication line,
the manufacturing line. The front end stuff are the chip
fabrication steps that make the transistors on the silicon wafer.
Now that's as opposed to this label over here, the back end of the line.
Those are the fabrication steps that make the wires.
And what you can see is that in terms of the vertical space, it's mostly back end
of line. Because all of these wires are, you know,
separate layers. This particular picture is a cross-sectional
view of both the front end and back end of line stuff for five layers of wiring.
There will always be stuff happening up at the top, which is actually some packaging stuff.
stuff. That's the big blob up at the top that's
the sort of bump. We are going to talk about this just a
little bit more on the next slide. So here's the same picture again but now,
let's just talk about this a little more carefully with a little more labeling.
So again, the stuff at the bottom, those are the transistors.
That's how we make the switches that actually make up the insides of our gate
level logic. And then what we have, the orange things,
are levels of metals. And so there's a layer 1 metal down at
the bottom and then on top of that a layer 2 metal, and then on top of that a
layer 3 metal. And if you look very closely it will say
something like CU1, CU2, CU3. CU means copper.
So this is copper layer 1 metal, copper layer 2 metal, copper layer 3
metal. This is the wiring for the logic; this is how you connect stuff to make the
logic. On top of that there are yet more layers
of metal, metal 4 and metal 5. And if you look very carefully at this,
one of the things you'll see is that the metal layers at the top are a little
wider and they're a little thicker. And there's a reason for that.
This is a designed part of all modern ASIC kinds of technologies.
They're designed to be a little wider and a little thicker so they offer less
electrical resistance. And the reason for that is because
you're going to be doing some special stuff up there.
Like routing the clock that connects all the flip flops.
And routing the power distribution and so you really care about things like
resistance being low. And the way you do that is by making
things thicker and wider, up at the top. Now there are also vias, and so what you
see is, between every layer where there's sort of a big orange thing, right, which
is some little piece of some wire, there's a small orange thing. And this is
basically a little plug: you know, more or less, you can imagine there is a
hole that goes from the layer above, layer k plus 1, to the layer below, layer k,
that you fill in with the appropriate metal, and so you actually
make a connection. So, there are vias that connect from
metal 1 to metal 2, metal 2 to metal 3, metal 3 to metal 4, metal 4 to metal 5.
That's what those little yellow circles are.
And, up at the top, we've actually got a little piece of something interesting,
this is package level interconnect. Because the wires, you know, on the chip
need to connect to electrical connections off the chip.
And they are at a much larger physical scale.
And so, what that thing is up at the top is a solder bump.
And that's just literally a big blob of metal.
Which is the appropriate electrical connection to a pin on a printed circuit
board somewhere so you can take this chip, put it in the appropriate package.
Okay, and you know, connect it to its package and connect the package to the
board. Okay?
So, cross-sectional view of five layers. The only difference between this and
today is that today, you know, you could have 10 layers of metal, maybe 12 layers
of metal, depending on the technology. So, a little bit about placement versus
routing. There are lots of different kinds of
placement algorithms. I mean, there are iterative methods.
They're mostly used for floorplanning these days, but, you know, they're around.
There are lots of different kinds of analytical methods.
I showed you one, the so-called quadratic method.
There's lots of different quadratic placement methods.
There are other methods that involve creating very large systems of non-linear
equations and then solving them with clever means.
Honestly, there are not quite so many routing algorithms.
There's a small selection of critical algorithms for the routing tasks.
What there really are, are lots of routing data structures; there's tons of
innovation, tons of interesting proprietary stuff, where people innovate around
how one represents the structure of the surface on which we're routing, to
represent the challenge efficiently.
But there's one very, very big idea, and I only get you for a week, for a
couple of hours, to talk about everything interesting in routing.
There's one very big idea and that's the right thing to talk about in this
lecture. And so this is the big idea called maze
routing. From one amazing early paper from EF
Moore, The Shortest Path Through a Maze from, wow, 1959.
EF Moore has a Wikipedia page. Very famous guy, worth taking a
look. And, as an aside, yes, it's that Moore.
The Moore of Moore state machines, as opposed to Mealy state machines.
It's that Moore. He's a really famous guy.
And it's a very famous method, and depending on what computer science kinds
of courses you've taken in the past, you might have actually built something
that finds a path through a maze, because it's a beautiful, simple idea.
And it's the basis for just tons of useful stuff in the chip design space.
So let's talk about how we get from a maze to a wire.
We're going to make a huge geometric assumption, which is called gridded
routing. The layout surface is a grid of regular squares, and a legal wire path
is a set of connected, unobstructed grid cells.
And so we can mark obstacles, things that obstruct, which we also call blockages,
which are places we're not allowed to route.
So, more or less simply put, we mark in the grid where we're not allowed to route,
and everything else in the grid is a place we are allowed to route.
And so, for example, here is an obstacle, you know, just a big black box that
blocks things. And here is a wire.
Okay, and the wire starts at a cell labeled with an s and it goes to a cell
labeled with a t. And it goes left and right, and then up, and then to the right
and up. Now, there's a reason the wire looks like this.
For us, wires are strictly horizontal and vertical. There are no diagonals or
funny angles, no 45 degrees.
And it's often the case that we'll describe paths by compass directions.
And so this is just a set of compass directions: north on the top, south on the
bottom, west on the left, east on the right.
And so sometimes I'll say top and bottom and left and right, but when I'm trying
to be more precise, I'll use compass directions, which is actually quite common
in the router business. So, north is the top.
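Just to make this gridded view concrete, here is a minimal sketch, not from the lecture itself, of a grid with blockages and a breadth-first search that finds a shortest path of connected, unobstructed cells from a source cell s to a target cell t using only north, south, east, and west moves. This is the flavor of the maze routing idea we are going to develop; the grid contents and names here are made up for illustration.

    from collections import deque

    # Compass moves on the grid: north, south, east, west. No diagonals, no 45s.
    MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

    def maze_route(blocked, s, t):
        """Shortest path of connected, unobstructed cells from s to t (plain BFS).

        blocked : 2D list of booleans, True where a cell is a blockage
        s, t    : (row, col) cells, assumed to be on the grid and unblocked
        Returns the list of cells from s to t, or None if no path exists.
        """
        rows, cols = len(blocked), len(blocked[0])
        parent = {s: None}                  # doubles as the visited set
        frontier = deque([s])
        while frontier:
            cell = frontier.popleft()
            if cell == t:                   # reached the target: walk back to s
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for dr, dc in MOVES.values():   # expand to the four compass neighbors
                nxt = (r + dr, c + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and not blocked[nxt[0]][nxt[1]] and nxt not in parent):
                    parent[nxt] = cell
                    frontier.append(nxt)
        return None                         # target is walled off

    # Tiny made-up example: a 4 x 5 grid with a blockage in the middle.
    B = [[False, False, False, False, False],
         [False, True,  True,  False, False],
         [False, False, True,  False, False],
         [False, False, False, False, False]]
    print(maze_route(B, s=(3, 0), t=(0, 3)))

Because any unobstructed cell is a legal place to put wire, path legality here is nothing more than staying on the grid and avoiding blocked cells.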
The grid assumption is a pretty critical assumption. It imposes a surprising
number of constraints on the wires. It implies that all the wires are roughly
the same size, in terms of their width. Or, more precisely, that the wires and
their vias, which connect from one layer to another, all fit in the grid without
any rule violations. So the spacing rules are appropriately met.
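As a rough illustration of why the spacing comes out right by construction (with made-up numbers, not values from any real technology): if the grid pitch is at least the wire width plus the minimum wire-to-wire spacing, then wires dropped into adjacent grid cells are automatically far enough apart.

    # Purely illustrative numbers, not from any real design rule deck.
    wire_width_nm  = 50
    min_spacing_nm = 50
    grid_pitch_nm  = wire_width_nm + min_spacing_nm    # one grid cell per 100 nm

    # Wires centered in adjacent grid cells sit one pitch apart, so the gap
    # between their edges is pitch minus width, at least the minimum spacing.
    edge_gap_nm = grid_pitch_nm - wire_width_nm
    assert edge_gap_nm >= min_spacing_nm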
So, for example, I can put you know, this wire on the left in this pair of grid
cells vertically and this wire on the right in this pair of grid cells
vertically. And so there's just this little grid
here, you know, four cells across, two cells high.
I've got two blue wires going up the middle two columns.
And, and then there are vias in the top of the middle two columns.
And that apparently connects to a red layer, which looks like it's below,
that goes to the left and the right. So let's say layer K plus one is the blue
layer. Layer K is the red layer.
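One way a router might represent this multi-layer picture (again, just an illustrative sketch with invented names, not something from the lecture): treat each layer as its own grid, so a routing node is a (layer, row, column) triple, a wire segment steps to a neighboring cell on the same layer, and a via is simply a step between the same cell on layer k and layer k plus 1.

    # A node on a layered routing grid is a (layer, row, col) triple.
    def neighbors(node, num_layers, rows, cols, blocked):
        """Legal single steps: N/S/E/W within a layer, or a via up/down one layer.

        blocked is a set of (layer, row, col) cells we are not allowed to use.
        """
        layer, r, c = node
        candidates = [
            (layer, r - 1, c), (layer, r + 1, c),    # north / south
            (layer, r, c + 1), (layer, r, c - 1),    # east / west
            (layer + 1, r, c), (layer - 1, r, c),    # via up / via down
        ]
        return [n for n in candidates
                if 0 <= n[0] < num_layers and 0 <= n[1] < rows and 0 <= n[2] < cols
                and n not in blocked]

    # From a cell on layer k+1 (the blue layer, index 1), a via drops to layer k (red, index 0).
    print(neighbors((1, 0, 2), num_layers=2, rows=2, cols=4, blocked=set()))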
One of the things that I could do right now is I could put another wire.
and I'm just sort of drawing it in the left hand column here.
I could put another wire on layer k plus 1, and that would be legal.
Right. And I know that would be legal because
the grid is set up so the spacing rules are all just satisfied.
So I can put a v anywhere I want in this grid.
I can put a wire anywhere I want in this grid.
I don't have to worry about leaving any blank grid cells when I route things.
The gird is set up so that I can use any unobstructed grid cell to put my wire
down and it'll all just work. Another thing that's true is all the pins
we want to connect to also have to be what is referred to as on grid, which
means they are in the center of one of those grid cells.
So if I route a wire into a grid cell and you tell me there's a pin there, we both
just agree that I connected it, because we agree that the pins are right in the
middle of the grid cells. So it implies a lot of sort of precise
low level geometry kinds of assumptions and it's often an approximation of
reality. but as it turns out it's a good
approximation of reality, because you know when you've got 20 million wires
you've got to simplify some stuff to be able to make some progress.
So we simplify some things. We solve most of the problem with a
little bit of geometric simplification. We'll cycle back later.
We'll put the extra detail at the end. It's a good engineering approach to
things. It works.