
So, in Lecture 3, we're going to continue talking about computational Boolean

algebra. But we're going to focus on real,

practical data structures and operators that are basically the foundational methods that everybody uses to do large, industrial-scale designs.

When we talked about the Unate Recursive Paradigm at the end of the last lecture,

we introduced a simple kind of representation and just one basic

operation, Tautology. URP is a historically important representational scheme, but it's not really the way we do things today.

So what I want to start talking about is one of the most powerful data structures

in the Boolean universe, and that's the binary decision diagram.

So they're called BDDs. And in this lecture, we're going to

introduce what a decision diagram is, and then we're going to start manipulating it

and restricting it and constraining it so that it becomes a really powerful data

structure on which we can do important operations.

So let's start. So, here we are again, focusing on representations and how important they are in our study of computational Boolean algebra, where the goal is to be able to manipulate Boolean objects as data structures that are operated on by operators that we can build in software. And by way of example, I'm just showing

what we ended up with at the end of the last lecture which was the URP tautology

algorithm. And this is really a great warm-up example. I've just got a little example on the right here. We've got an internal node in a tautology computation that has cubes b, c, and b bar c bar. So it would seem we've already set a to 1 before we got here.

We cannot tell anything about tautology for this particular collection of cubes.

So, we're going to have to go a little further: set b equal to 1 on the left-hand side, set b equal to 0 on the right-hand side. Do the cofactoring and we see, on the left-hand side, oh, I can tell it's a tautology because I've got an all-don't-cares cube; there's a 1 in there.

Now on the right-hand side, we can again see it's a tautology, because we've got two single-variable cubes, c and c bar.

And when we OR those together, we get a 1. This really does show the big idea of

boolean functions as things we manipulate with software.
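To make that concrete in code, here's a minimal sketch of that recursive tautology check by cofactoring. The cube encoding and helper names are my own, not from the lecture, and this skips URP's unate-variable shortcuts; it just shows the splitting idea from the example.

```python
# A cube maps variable names to required values (1 or 0); an empty cube is
# the all-don't-cares cube, i.e., the constant 1. A cover is a list of cubes.

def cofactor(cover, var, value):
    """Cofactor a cover with respect to var = value."""
    result = []
    for cube in cover:
        if var in cube:
            if cube[var] == value:  # literal satisfied: drop it from the cube
                result.append({v: b for v, b in cube.items() if v != var})
            # else: the cube vanishes under this assignment
        else:
            result.append(dict(cube))  # var doesn't appear: keep cube as-is
    return result

def is_tautology(cover):
    """Simplified URP-style tautology check: split on a variable, recurse."""
    if any(len(cube) == 0 for cube in cover):
        return True   # an all-don't-cares cube means the cover is constant 1
    if not cover:
        return False  # empty cover is the constant 0
    var = next(iter(cover[0]))  # pick any variable still in the cover
    return (is_tautology(cofactor(cover, var, 1)) and
            is_tautology(cofactor(cover, var, 0)))

# The lecture's example: cubes b, c, and b-bar c-bar (after a was set to 1).
cover = [{'b': 1}, {'c': 1}, {'b': 0, 'c': 0}]
```

Splitting this cover on b gives an all-don't-cares cube on the b = 1 side and the cubes c, c-bar on the b = 0 side, exactly as in the example.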

But I have to admit that URP is not the real way that people do it. It's not the most popular, the most influential, or the most widely deployed way; it's not any of those. So what we want to do in this sequence of

lectures is look at a really important, elegant way to do this, a really powerful and interesting way: these things called binary decision

diagrams. So let's talk about those.

Now, decision diagrams were actually studied by lots and lots of people, Lee

and then Boute and then Akers, way back when.

But they got seriously useful when Randy Bryant of Carnegie Mellon University made a really remarkable breakthrough. So these things are technically called

reduced ordered BDDs or ROBDDs, although honestly, everybody just calls them BDDs

these days. This was really the breakthrough that made these things useful as a remarkable data structure.

There's a big and interesting Wikipedia page on these that's certainly worth looking at. And I'll also say that Randy provided lots of the nice BDD pictures in this lecture, and some of the foundational structure of this lecture. So big thanks to Randy Bryant for letting us use that material. So, our first step in the development of reduced ordered BDDs is just to talk about what a decision diagram is.

So the first big idea is just to turn a Boolean function into a decision diagram.

And you can kind of think of this as taking the truth table, and I've got one shown at the upper left, and turning it on its side and putting it in the form of a tree. So this is a truth table for three binary variables, x1, x2, x3, and so it's got eight rows. The first four output values are 0, 0, 0, 1, and the second four are 0, 1, 0, 1.

And the big idea of a decision tree is that it answers the question "what is the value of the function?" as a sequence of decisions. So every vertex represents a variable, and so here is a vertex that represents variable x1, which I've just got shown here. And the edges that go out of those

vertices are decisions to set that variable to a value.

So, if you follow a dotted green line then you've decided to set the value to a 0,

and so I can say over here x1 equals 0. And if you go on a solid red line, you've

decided to set the value equal to a 1, so I can write x1 equals 1 over here.

Now, all of these arrows are really directed; they're all going down. But by convention we just don't draw them as arrows, because we know where they go, and the answer is that they all point down. Right, so I don't really need to draw

that. And then the idea is that you follow a

sequence of decisions as you move through the variables and out their edges until

you have visited all the variables you need to be able to make a decision.

And so, for example, the first row of the truth table, and I'm just going to

highlight it like this, on the first row of the truth table you'd say, well, if x1

is 0, now what? Okay, well, now I need to know something

else. Okay, well if x2 is 0, now what?

So, well now I need to know something else.

Well, what if x3 is 0? Oh, well, actually, I know enough to tell you the value of this function. The value of this function is a 0.

That's just that row of the truth table. And this very simple decision tree, which represents every single row of the truth table as a unique path through the tree, just kind of maps out the truth table. So it's really going to be 0, 0, 0, 1, like the first four rows of the truth table, and then 0, 1, 0, 1, like the next four rows of the truth table, as we map that decision tree out. So that's really all there is to a decision tree. It's kind of like a truth table turned on its side, rendered as a sequence of variable-value decisions until you can tell me what the value of the function is.
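That "follow decisions until you hit a leaf" idea is easy to render in code. In this sketch, which uses my own encoding rather than anything from the lecture, a tree node is either a constant 0/1 leaf or a (variable, low-child, high-child) tuple, and evaluation just walks edges downward:

```python
# A decision-tree node is either a constant (0 or 1), or a tuple
# (var, low, high): follow `low` when var = 0 and `high` when var = 1.

def evaluate(node, assignment):
    """Walk the decision tree under a {variable: value} assignment."""
    while not isinstance(node, int):
        var, low, high = node
        node = high if assignment[var] == 1 else low
    return node

# The lecture's truth table, outputs 0,0,0,1 then 0,1,0,1, as a full tree
# with x1 at the top, x2 in the middle, and x3 at the bottom:
tree = ('x1',
        ('x2', ('x3', 0, 0), ('x3', 0, 1)),
        ('x2', ('x3', 0, 1), ('x3', 0, 1)))
```

Each of the eight root-to-leaf paths corresponds to one row of the truth table, which is exactly the "truth table turned on its side" picture.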

So there's a little bit of terminology. We can just walk through that here.

We talked about the vertices, but we just tend to call them variables.

That just makes life easier, because they're all associated with a variable.

And we often talk about the edges that come out of a node as being pointers, because they are. And sometimes we talk about the low pointer, or the low child, as being what happens when we assign the variable the value 0. We also talk about the high pointer, or the high child, as being what happens when we assign the variable the value 1.

And a variable ordering is just the order in which the decisions about the variables are made. So I'm just circling that big one right here: x1 gets a decision, and then x2 gets a decision, and then x3 gets a decision, and then we know enough to actually tell you what the value of the function is. In this case it's a 1. That's called a variable ordering.

And the things at the bottom are called constants, right? No surprise, because they're not variables, and they're also the leaves of the tree. They're the things you get at the bottom when you can't go any further and you don't need to visit any more variables. Now, one of the things to note is that

different variable orders are possible. I mean, I haven't said anything about why

one would choose a particular variable order.

I did it in a somewhat obvious manner here: all of the x1s are at the top of the tree, and then the next level are x2s, and then the next level are x3s. But there's no reason why I had to do that.

I could have flipped that. So in this particular example, after I choose to, say, set x1 equal to 0, I could decide that x3 is the next variable that I'd like to look at, and then x2. And in this case I'm going to get a different tree. In this case, when x3 is a 0, it doesn't matter what x2 is; the value of the function is a 0. And when x3 is a 1, it also does not matter what the value of x2 is; the value of the function is a 1. And so this is a different tree.
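To see "same function, different tree" concretely, here's a small sketch using a hypothetical (variable, low, high) tuple encoding of my own, not the lecture's notation. One tree asks the variables in x1, x2, x3 order everywhere; the other looks at x3 before x2 under the x1 = 0 branch. The two trees are structurally different, yet they agree on all eight input rows:

```python
# Node encoding (my own): a constant 0/1, or (var, low, high), where
# `low` is the var = 0 child and `high` is the var = 1 child.

def evaluate(node, assignment):
    """Follow decisions from the root until a constant leaf is reached."""
    while not isinstance(node, int):
        var, low, high = node
        node = high if assignment[var] == 1 else low
    return node

# Truth-table outputs 0,0,0,1,0,1,0,1 built with the order x1, x2, x3:
tree_123 = ('x1',
            ('x2', ('x3', 0, 0), ('x3', 0, 1)),
            ('x2', ('x3', 0, 1), ('x3', 0, 1)))

# Same function, but under x1 = 0 we look at x3 first, then x2.
# When x3 = 0 there, the answer is already 0, so x2 is skipped entirely:
tree_132 = ('x1',
            ('x3', 0, ('x2', 0, 1)),
            ('x2', ('x3', 0, 1), ('x3', 0, 1)))
```

Two distinct data structures for one function is precisely the non-uniqueness problem the next slides dig into.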


No surprise, and that's going to be a problem.

So, let's make a few observations about this.

First, every path from the root to a leaf traverses the variables in some order,

that's not a big surprise. Each such path constitutes a row of the truth table; so it's a decision about what the output is when the variables take particular values. So, that's interesting.

It's a sort of very different way of evaluating rows of the truth table, but we have not yet specified anything about the order of the decisions. We have not constrained ourselves in any way.

And as a consequence, a big thing happens, and it turns out to be a not very good thing: the decision diagram is not unique for this function, and that's not helpful. And one of the big things that we're really going to be shooting for here is something that's called the canonical form: a representation that does not depend on the gate-level implementation of a Boolean function. That's what canonical means.

the same variables, it always produces the exact same representation.

In this case, a data structure. So for example, a truth table is

canonical, and I'll just say this, up to order.

Right, and what that really means is as long as you tell me the value of the

variables in the top row of the truth table.

As long as you tell me while there's a and then there's b and then there's c going

from left to right. If I make the function as the truth table

and you make the function as the truth table, we made the same truth table.

It's canonical. What we really, really want is a canonical

form data structure, so that if we both agree on what the boolean function is and

we both build it, we get exactly the same data structure.
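Here's a quick sketch of that "canonical up to order" claim in code: once we fix the variable order across the top of the table, any two gate-level implementations of the same function produce the identical table. The helper name and the two lambda implementations are my own illustrations, not from the lecture.

```python
from itertools import product

def truth_table(f, nvars):
    """A canonical (up to variable order) representation: the tuple of
    outputs, with input rows enumerated in one fixed, agreed-upon order."""
    return tuple(f(*row) for row in product((0, 1), repeat=nvars))

# Two different gate-level implementations of the function whose truth
# table (outputs 0,0,0,1,0,1,0,1) appears earlier in the lecture:
g1 = lambda x1, x2, x3: (x1 and x3) or (x2 and x3)  # two ANDs into an OR
g2 = lambda x1, x2, x3: (x1 or x2) and x3           # an OR into an AND
```

Different circuits, same function, so the canonical representation comes out bit-for-bit identical, which is exactly the property we want a data structure to have.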

So what's wrong with this diagram representation?

Well, several things. It's not canonical.

That's a big one. But honestly, the first thing that's really wrong with it is it's just way too big to be useful. I mean, it's as big as a truth table. If you had a function of a hundred variables, you'd have 2 to the 100th rows. That's just not happening. And in this particular decision diagram, you have 2 to the 100th leaf nodes down at the bottom, and that's not happening. I need a better idea.

And so I'm going to impose some structure on things, and we're going to see where that takes us. So the next big idea, after the first one, which was just "let's make a decision diagram," is ordering.

And in this particular case we're going to restrict the global order of the variables. And so what that's going to mean is that every path, from the root to a leaf, visits the variables in, and I'm going to write it big, the same order. Now there's a little technical point here,

which is that it's okay to omit a variable if you don't need to check it to decide

which leaf node to reach for a final value of the function.

So, you have to visit the variables in order, but if you skip some along the way

that's okay, that's not a bad thing. And we'll show that as a quick example

next. So what does it mean that there's

a global ordering of the variables in a BDD?

It means we're going to assign a total order to them. So for example, in this case, let's assume the variables are ordered x1 < x2 < x3, and that's what the less-than signs mean: x1 will appear before x2, and x2 will appear before x3.

The variables must appear in this specific order along all paths, but it's OK to skip variables. So on the left, we see a nice simple example: x1 appears, and then x2 appears, and then x3 appears. So that's perfectly OK.

But the one on the right-hand side is also OK. There's no x2 in there, so x1 appears, and then on a path from the root to a leaf, there is no x2. That's fine. x1 shows up before x3; everything is good.

The examples further to the right are examples of bad things. This one is just w-r-o-n-g, this one is just wrong, right? x3 can't come before x2, and x2 can't come before x1. And there's no other way to say what the one on the right is: this is just silly. How can I have a decision diagram where I make, in this case, x1 equal to a 1, and then change my mind and make x1 equal to a 0? That's just silly.

Right, so, the properties we are looking for here are these: there are no conflicting assignments along a path, which is what that rightmost example I just showed has; you see each variable at most once on a path, never twice; and you always see the variables in the same order.
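Those three properties can be checked mechanically. In this sketch, which uses my own node encoding of a constant 0/1 or a (var, low, high) tuple, requiring strictly increasing positions in the total order along every path enforces all three at once: no repeated variables, no conflicting reassignments, and skipped variables are fine.

```python
# Node encoding (my own): a constant 0/1, or (var, low, high).

def respects_order(node, order):
    """True if every root-to-leaf path visits variables at strictly
    increasing positions in `order` (skipping variables is allowed)."""
    rank = {v: i for i, v in enumerate(order)}

    def walk(n, last_rank):
        if isinstance(n, int):          # constant leaf: path is done
            return True
        var, low, high = n
        if rank[var] <= last_rank:      # out of order, or variable repeated
            return False
        return walk(low, rank[var]) and walk(high, rank[var])

    return walk(node, -1)

ok_tree  = ('x1', ('x3', 0, 1), ('x2', 0, ('x3', 0, 1)))  # skips x2: fine
bad_tree = ('x1', ('x3', 0, ('x2', 0, 1)), 0)             # x2 after x3: bad
```

The strictly-increasing check is doing all the work: a repeated variable would have an equal rank, and a conflicting reassignment is just a special case of a repeat.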

So suppose we go back and we say I insist on an x1, x2, x3 order. Now what, right? So this decision diagram over here is still okay; it's not doing anything illegal. But the diagram on the right is also okay, right? And it is equivalent. It is the same function, but a different diagram, and that's a problem.

Right, because note what happened here. What happened here was an observation. If x2 is a 0, we look at x3, and the value of the function is x3: x3 at 0 makes the function a 0; x3 at 1 makes the function a 1. If the value of x2 is a 1, the same thing happens: x3 at 0, the function's a 0; x3 at 1, the function is a 1. I don't really need x2; x2 is not really providing me any value. If I make it go away, I can substitute this very different diagram, which still has, on the left, the sort of full original tree, but replaces the entire right subtree, which was three nodes, with a single x3 with a 0 and a 1 underneath it.
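That observation, that a node whose 0-branch and 1-branch lead to the same thing adds no information, can be written as a small bottom-up pass. This is only a sketch with my own tuple encoding; formalizing reduction rules like this one is where the next lectures are headed.

```python
# Node encoding (my own): a constant 0/1, or (var, low, high).

def drop_redundant_tests(node):
    """Bottom-up pass: if a node's 0-child and 1-child end up identical,
    the decision doesn't matter, so replace the node by its child."""
    if isinstance(node, int):
        return node
    var, low, high = node
    low = drop_redundant_tests(low)
    high = drop_redundant_tests(high)
    if low == high:          # the test on `var` changes nothing
        return low
    return (var, low, high)

# The transcript's right subtree: an x2 whose two x3 children are the same.
subtree = ('x2', ('x3', 0, 1), ('x3', 0, 1))
```

Running the pass collapses the three-node subtree down to the single x3 node with a 0 and a 1 underneath it, exactly the substitution described above.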

Well, obviously this is not a canonical structure, because both of these decision diagrams represent the same Boolean function. And as I said previously, the big thing we were looking for was that no two different diagrams are allowed to represent the same Boolean function. So I need another idea, I need another

constraint. So in the next talk, we're going to add

another constraint and we're going to see how well that does in making our binary

decision diagrams canonical.