[SOUND] So welcome to lecture five. So far, what we've been doing in the class is representing stuff and manipulating stuff. And that was really the focus of all of our work on computational Boolean algebra. And so in the previous week we built and explored some powerful methods, Binary Decision Diagrams and SAT solvers, to be able to do this. But now we actually want to move on and design stuff. We actually want to synthesize stuff, and so the first thing to look at is two-level logic, and so that's a standard sum of products form: a lot of AND gates and an OR gate. And it turns out that there are some powerful synthesis techniques that we can use that are going to make use of all of the great computational Boolean algebra that we started with. So in this introductory lecture, where we just look at the basics, we're going to review the structure of two-level forms, the structure of optimal solutions to two-level forms, and introduce the idea that maybe we can solve this with a heuristic method, and not exactly, so that we can actually tackle big problems. So let's go look at them.

To date, we've been talking about computational Boolean algebra for representing things, for manipulating things, and one of the great applications was for verifying things. For example, are two blocks of logic implementing the same combinational function? It's all good stuff, but to move forward, we need to move from representation to synthesis. How do you actually design something? How do you optimize a design so that it has a minimum amount of hardware? And the first place to start for that is classical two-level logic. So let's go take a look at what those problems look like.

This I hope looks pretty familiar. Two-level minimization means we're trying to find minimal sum of products forms, where we have many AND gates each making a product term and a single OR gate summing them all together.
So this is SOP logic, as you probably saw at some point in your basic introductory digital design classes. What is our goal for minimization here? It means we want the fewest AND gates, and so I'm showing you all the AND gates here. We want the fewest AND gates in the design, and among all such designs that have the same number of AND gates, we would like the one with the fewest input wires. And there's a name for those input wires: they're called literals. A literal is the appearance of a single variable, in true or complemented form, as an input to an AND gate.

So I gave you this little Boolean function below, f is ab bar + ab bar c + abc + bcd + bcd bar, and then I've drawn it below as a logic network. There are five AND gates, one, two, three, four, five of them. The first one has ab bar as its inputs, then ab bar c as inputs, abc as inputs, bcd as inputs, bcd bar as inputs. And if I ask you how many literals are in this design, just pretend you drew it and count the wires, that's all this means. So how many literals are there? Let's just count the wires: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14. There are 14 literals in this design, and the reason this is worth knowing is that it's actually a pretty good metric for success. Let's try to minimize the number of literals. If we can minimize the number of literals, we're probably going to do a pretty good job of getting a minimum design.

Now the problem is that none of the methods you know is really effective or practical here. You know a lot of Boolean algebra, but boy, that's really hard with a lot of variables. It's hard enough with half a dozen variables, let alone 40 variables. You can't tell when you have a good solution when you do these things by hand. Karnaugh maps have the same problem. It's hard with many variables; you can do six variables, and if you're awfully, awfully persistent you could do seven or eight, but boy, you still can't tell when you're done.
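To make the wire counting concrete, here's a tiny Python sketch (the language and the cube encoding are my own choices, not from the lecture): each cube is a list of literal strings, with a trailing apostrophe marking a complemented variable.

```python
# Count literals in a sum-of-products cover: one literal per appearance of a
# variable, true or complemented. Every literal is one input wire into some
# AND gate. This encodes the lecture's f = ab' + ab'c + abc + bcd + bcd'.

def literal_count(cover):
    # Each cube contributes as many literals (wires) as it has entries.
    return sum(len(cube) for cube in cover)

f = [["a", "b'"],        # ab'
     ["a", "b'", "c"],   # ab'c
     ["a", "b", "c"],    # abc
     ["b", "c", "d"],    # bcd
     ["b", "c", "d'"]]   # bcd'

print(literal_count(f))  # 14, matching the wire count in the figure
```

Counting literals like this is exactly the "count the wires" exercise above, just automated.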
You can't do 20 variables. And there are tabular solution techniques, famous things like Quine-McCluskey. They have an unfortunate problem, which is that they're intrinsically exponential. So you can get a good result, you just can't afford it. We need a better strategy; we really need a very different idea here. And there are two big ideas that we're going to see in this lecture. The first big idea is that we are going to just stop trying for the best, the perfect answer. This is just going to go away. We're going to try to get a good answer, actually a very good answer. We just can't get the perfect answer. And the second is iterative improvement. What that means is that we're going to take an answer and then we're going to do something very interesting to that answer. We're going to reshape it in a way that allows us to produce another answer, possibly a better one. And we're just going to keep iterating that until basically it doesn't get any better. That's certainly very different from the way you've seen Boolean algebra used for things like minimization so far. So let's just go look at what that might involve.

So here's an example of the difference between best and good enough. On the left I've got a classical 4-variable Karnaugh map, ab on the horizontal axis, cd on the vertical axis. It's got three cubes, which is to say three product terms, that cover the function. It's got an a bar c bar term, which is clearly the 2 by 2 cover on the upper left. It's got an ad term, which is the rightmost two columns in the middlemost two rows. And then it's got a cd bar term, which is the entire bottom row. That's really the best you can do; there's just no better answer to this problem. The solution on the right is a pretty good answer. It retains the a bar c bar cube, the top left two by two, and it retains the cd bar cube, the bottom row. But it replaces the ad cube with two cubes that actually overlap some other stuff.
So there's now a c bar d cube, which is the whole second row of the Karnaugh map, and there's a new ac cube, which is the bottom right two by two corner of the Karnaugh map. And if you were to compare these two solutions, what would you notice? Well, they're both made of product terms, which we now know we can call cubes, that are as big as possible. That is to say, I cannot take any of these things that I've circled on the Karnaugh map and make them bigger. We insist on this; these things in this form are called prime implicants. So roughly speaking, an implicant is anything you can circle in a Karnaugh map. And a prime implicant is a product term, also a cube, that is as big as possible; I can't make it any bigger. And there's a very, very famous result from the 1950s that says that the best solution is always composed of a cover of primes, which is to say, a bunch of things you can circle on a Karnaugh map that are each as big as they can possibly be.

So what's going on with that solution on the right? It's actually got some subtle stuff that's worth talking about. So I'm again drawing the same best solution at the top of this slide, and then again showing the same good solution. There are a couple of things that I can say. The first thing, as I said on the previous slide, is that both are made of primes: every cube in each cover is as big as it can be. The second is that neither solution can be improved by removing anything. Clearly on the left, every 1 is covered by exactly one prime implicant; I can't do anything. On the right, it is not the case that every 1 is covered by a single cube. However, I can't take any single one of those cubes away without uncovering something. And so there's a technical term for those things. Both of those solutions are irredundant, which is to say they are not redundant. Which is to say, I cannot point at any single cube and say, you there, go away, I don't need you.
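The definition of a prime implicant is directly checkable by brute force for a small function. Here's a sketch in Python; the function f = a'c' + ad + cd' is my reconstruction of the "best" cover in the Karnaugh map example, so treat the specific cubes as illustrative assumptions rather than the lecture's own code.

```python
from itertools import product

VARS = "abcd"  # four variables, matching the example

def cube_points(cube):
    """All minterms (as var->bit dicts) matched by a cube like {'a': 0, 'c': 0}."""
    free = [v for v in VARS if v not in cube]
    for bits in product((0, 1), repeat=len(free)):
        yield {**cube, **dict(zip(free, bits))}

def f(p):
    # f = a'c' + ad + cd' (my reading of the "best" cover on the left)
    return ((not p["a"] and not p["c"]) or (p["a"] and p["d"])
            or (p["c"] and not p["d"]))

def is_implicant(cube):
    # An implicant: everything the cube circles is a 1 of the function.
    return all(f(p) for p in cube_points(cube))

def is_prime(cube):
    # Prime: an implicant that stops being one if any single literal is dropped,
    # i.e. the circle cannot be made any bigger.
    if not is_implicant(cube):
        return False
    return not any(is_implicant({v: b for v, b in cube.items() if v != drop})
                   for drop in cube)

best = [{"a": 0, "c": 0}, {"a": 1, "d": 1}, {"c": 1, "d": 0}]   # a'c' + ad + cd'
good = [{"a": 0, "c": 0}, {"c": 1, "d": 0},                     # a'c' + cd'
        {"c": 0, "d": 1}, {"a": 1, "c": 1}]                     # + c'd + ac
print(all(is_prime(c) for c in best + good))  # True: both covers are all primes
```

The same brute-force style extends to checking irredundancy: delete each cube in turn and test whether some minterm becomes uncovered.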
Now, this is also an important thing that we're going to strive for in the solutions we seek. We're going to look for solutions that are prime, each of the cubes is as big as it can be, and irredundant, we don't have any extra ones. But this is actually kind of interesting, because it really suggests something a little deeper. So imagine that you could have a little graph over here, and this is all conceptual. On the horizontal axis is every single solution, which is these sets of circles in the Karnaugh map that I'm illustrating here. You can't really do this, but just conceptually. And on the vertical axis, let's pretend that we could measure the cost of each one of these. We actually can: if we counted something like literals, that would actually be a pretty good model of the cost. And we see what turns out to be a really interesting, bumpy kind of cost surface there.

The point of the best solution is that it really is the one with the deepest minimum; that's the best solution right there. I can't do any better than that, I don't care where you look, that's the best I can do. The point of the good solution is that it's also a minimum, it is just not the best one. So the best solution is a global sort of minimum, and the good solution is a local sort of minimum. What local means in this case is that there's nothing simple I can do to this design to make it better. I can't just take one of those cubes and make it bigger; sorry, they're already prime, they're as big as they can be. I cannot take any of those cubes and get rid of them; sorry, this is irredundant. Wow, this looks tough. I need a different idea. In fact, I need a very different idea. And that's where all of the really interesting stuff in our lecture about two-level synthesis starts. So let's start by looking at what those very different ideas are. [SOUND]
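As a preview of what iterative improvement can look like mechanically, here is a toy brute-force sketch in Python. It is emphatically not the real algorithm this course builds toward (industrial tools like ESPRESSO are far cleverer); it just greedily expands each cube until it is prime and then strips redundant cubes, applied to the five-cube function from earlier. The encoding (variable indices 0..3 for a, b, c, d) is my own.

```python
from itertools import product

# Toy "iterative improvement" for a 2-level cover: expand cubes to primes,
# then delete redundant ones. Brute force over all 2^N minterms, so this is
# only viable for tiny N; it sketches the idea, it is not a real minimizer.

N = 4  # variables a, b, c, d mapped to indices 0..3

def covers(cube, point):
    # cube: dict {var index: required bit}; missing indices are don't-cares
    return all(point[i] == v for i, v in cube.items())

def onset(cover):
    # All minterms the cover circles; this set is the function we must preserve.
    return {p for p in product((0, 1), repeat=N)
            if any(covers(c, p) for c in cover)}

def expand(cube, target):
    # Drop literals one at a time while the enlarged cube stays inside target.
    cube = dict(cube)
    for i in list(cube):
        trial = {j: v for j, v in cube.items() if j != i}
        if {p for p in product((0, 1), repeat=N) if covers(trial, p)} <= target:
            cube = trial  # a bigger cube that is still legal: keep it
    return cube

def irredundant(cover, target):
    # Remove any cube whose minterms are already covered by the others.
    result = list(cover)
    for c in list(result):
        rest = [d for d in result if d is not c]
        if onset(rest) == target:
            result = rest
    return result

# Start from the earlier cover f = ab' + ab'c + abc + bcd + bcd' (14 literals).
start = [{0: 1, 1: 0}, {0: 1, 1: 0, 2: 1}, {0: 1, 1: 1, 2: 1},
         {1: 1, 2: 1, 3: 1}, {1: 1, 2: 1, 3: 0}]
target = onset(start)
improved = irredundant([expand(c, target) for c in start], target)
print(len(improved))  # 2: the cover shrinks to ab' + bc, same function
```

One pass of expand-then-irredundant already lands on a prime, irredundant cover here; on harder functions, real tools iterate reshaping moves like these until the cost stops improving, which is exactly the local-minimum picture above.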