So first, let's discuss a little about learning and planning in an organizational context. Planning isn't the only way to generate learning. You can do a lot with leadership, for instance. You can emphasize the importance of learning. You can try to build it into the culture. You can make sure you create a safe space for risk-taking. All of this is important, but it's not enough, because there are some deep challenges we need to overcome if learning is to become our central goal. There are two in particular that I want to emphasize.

The first challenge arises because people are just not that great at learning in complex situations. Now, it's true that people are very good at learning in some situations. When cause and effect are easily visible, when actions clearly lead to outcomes that provide clear, quick feedback, people learn well. Video games, for example, are a setting where this is true; people can learn very quickly. In Internet contexts, you hear about A/B testing, where you present different approaches to your users and quickly get information about which works better. This leads to quick learning that organizations can make use of. But often, feedback is incomplete, delayed, or ambiguous because the system is complex. The connection between actions and outcomes isn't clear, and it only emerges after large investments of resources and time. Here, learning takes sustained, disciplined attention, and people are subject to psychological and organizational biases that crowd learning out. So our approach needs to address the challenge that people are just not that great at learning.

It also needs to address a second challenge, which is that managers and executives are not used to valuing learning as the central goal. Executives, in particular, are typically used to valuing results. In the mainstream organization, of course, this is how things operate. So executives often get tense or anxious when the subject of learning comes up as a key outcome. It just sounds squishy. An executive won't find it satisfying to hear that we lost millions of dollars but learned an awful lot along the way. This has a clear connection to planning, because planning systems are typically set up to produce results, and we live inside planning systems. So we're going to have to figure out how to make learning a compelling goal and make it part of our planning process.

So let's start with exactly that challenge of making learning a compelling goal in organizations. Here's a way to think about doing that. Companies rarely get it right the first time in innovation efforts like those we're discussing here. That's no surprise, but let's look at the implications. Imagine two firms, one that takes a learning orientation and the other a results orientation, and think about what happens when results come in below target. At the results-driven firm, this clearly appears as a failure, something to be corrected. The pressure is to stick to the targets, because revising targets downward is frowned upon. Even if executives are supportive and know it's an innovation initiative, it's going to be hard for the leader not to get defensive and present ways to recover to the plan. Pressure to catch up means pressure to push harder at what we've been doing, rather than engage in the time-consuming and risky enterprise of changing direction and rethinking what we're doing.
So, unless the company gets fairly lucky and gets it pretty much right the first time, the chances of success are low. A results orientation poses big problems when we start out with a lot of unknowns and a lot of assumptions that could easily be wrong. At the learning-oriented firm, in contrast, results below plan prompt a reexamination of assumptions. Our predictions were just that; we knew they were predictions. So it's more natural to revise future predictions based on new data. It may be that we've learned this isn't an attractive opportunity. It could be that when we examine our predictions and assumptions, we realize we've learned something that undercuts the whole business. But if so, we're going to find that out more quickly and at a lower cost than at the results-oriented firm. This is the fail-fast ethos that you hear about. Or we may find we need to alter direction and make a new plan, one with an increased chance of success even if it takes time to reorient. Either way, the learning orientation wins compared to the results-oriented firm that sticks with the original plan too long. So the point is, the firm that learns fastest will win in challenging innovation situations like the ones we're talking about here. Executives who stick to a results focus, working to be hard-nosed, are being hard-nosed about the wrong thing. The thing to be hard-nosed about is learning. And that's the message you've got to bring to the discussions about how we're going to plan an innovation initiative.

So, let's say you're successful there, and learning emerges as a key goal, if not the key goal. The next step is to establish experimentation as the method, and the metaphor, by which we achieve that goal. Again, it's not controversial to suggest that innovation involves experimentation, but what is emerging from research in different places is that successful companies build their innovation planning around experimentation. They bake it in. So we're going to talk about that. But before we do, what do we mean here by experiments? Well, not science experiments, strictly speaking. The canonical example is A/B testing in Internet applications, where you present two different user interfaces, or two different offerings, to randomly selected users and see which does better. This is clearly an experiment, and it highlights the importance of finding activities whose purpose is learning and which provide clear feedback. One step removed from A/B testing is the idea of deploying a minimum viable product. This is a very rich experiment that can reveal fairly quickly whether and how customers will buy and use the product or service you're trying to develop. Of course, these kinds of low-cost, fast, customized experiments often aren't fully possible. You have to make the investment, develop the product, and release it in a way that preserves the brand, and so on. But the philosophy is to give a central place to working creatively and diligently to resolve key unknowns and examine key assumptions early. The guiding ethos of these experiments is the fast-failure approach, what Govindarajan and Trimble call "spend a little, learn a lot." This is the central metaphor you want to keep in mind as you're designing your innovation initiative.
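To make the A/B testing example concrete, here's a minimal sketch of how one might be evaluated. The course doesn't prescribe any particular statistics; the visitor counts, conversion numbers, and the choice of a two-proportion z-test here are all illustrative assumptions. The point is simply that this kind of experiment produces clear, quick feedback.

```python
import math

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: does variant B convert differently from A?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 10,000 randomly selected users see each variant.
p_a, p_b, z, p = ab_test(conversions_a=230, visitors_a=10_000,
                         conversions_b=289, visitors_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.4f}")
```

With made-up numbers like these, variant B's higher conversion rate is unlikely to be noise, which is exactly the fast, unambiguous feedback that contrasts with the complex, delayed-feedback settings discussed earlier.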
Now, let's take these ideas about learning and experimentation back into planning itself. The implication is that we need to plan the initiative not only as if it were a business with targets to hit, but also as if it were an experiment with a hypothesis to test, the hypothesis being that we've got a viable market and product opportunity here. We still have outcomes from that experimental focus, but they're learning outcomes.

What's progress, then? Here's a way to think about it. At the start, in some key areas, we can probably only make rough guesses, maybe even wild guesses, about key elements of our business. What is the size of the market? What will customers pay for the product? How much effort will it take to sell it, or to make it? Planning is about setting goals and devising a course of action that will get us to those goals. The goal here is to make our predictions better. That's an outcome that reflects learning. The method is to find ways to rigorously test our assumptions about how our actions lead to outcomes. That's the kind of plan we want, not just the traditional plan with resources and timelines and so on.

So what we need to do is develop a custom plan to do just this. And the key word here is custom. The part that makes it custom is that we're going to include a model of our business in it. Not a business model as such, but a diagram, a model of cause and effect, that identifies key actions and the outcomes we hope to achieve through them. The key outcomes are going to be specific to our initiative; that's why it's custom. The plan is also going to delineate the experiments that test our key assumptions and address the key unknowns. In this sense, we're going to work it into our planning process. We're going to use it as the baseline and as a way of evaluating how we're doing. But we're going to be evaluating our model, rather than numbers and targets alone.

So let me describe the kind of model I was just talking about. Here's a simplified version. It's not a spreadsheet; it's a diagram. Spreadsheets are good for calculations. A diagram of causes and effects, of actions and outcomes, is good for discussions of what we think we need to do, how these things lead to the outcomes we expect, what we know, and what we don't know. In the diagram here, and this is one common way of doing it, there are others if you search for them, the nodes, the circles, are first actions, and the actions lead to outcomes, which may chain into further outcomes. For instance, here we have a proposed product where the action, advertising spending, is assumed to increase an outcome, trial use. An increase in trial use is seen to affect a subsequent, more consequential outcome: sales. The diagram also shows an additional factor assumed to positively affect sales, which is product quality. A typical model, not shown here, is going to be more complicated than this. It might fill a whole page, and it might have further details, time delays, the type and strength of each relationship, marked next to the arrows. It's a free-form diagram. But the reason I'm showing you this example is for you to see how it focuses our attention on how the business works.
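As an aside, here's one way a team might capture a model like this in code, so it can be reviewed and updated alongside the plan. The course describes a free-form diagram, not code; the node names below follow the lecture's example, while the confidence labels and notes are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """One assumed cause-effect relationship in the business model."""
    cause: str
    effect: str
    direction: str      # "+" or "-": expected sign of the effect
    confidence: str     # "wild guess" | "rough guess" | "reliable"
    note: str = ""      # e.g. time delays, expected strength

# The lecture's example: advertising -> trial use -> sales,
# with product quality also feeding sales.
model = [
    Link("advertising spend", "trial use", "+", "rough guess",
         "assumes a delay of a few weeks before trials show up"),
    Link("trial use", "sales", "+", "wild guess",
         "conversion of trials into purchases is the big unknown"),
    Link("product quality", "sales", "+", "rough guess"),
]

# Surface the shakiest assumptions for discussion at the next review.
for link in model:
    if link.confidence != "reliable":
        print(f"{link.cause} -({link.direction})-> {link.effect}: "
              f"{link.confidence}. {link.note}")
```

The value is the same as the diagram's: it forces the team to write down each assumed relationship and how well they think they know it.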
We need to be able to make reliable predictions about how the business works if we're going to know how much we should invest in it, whether it can be successful, and so on. In this case, we need to be able to make reliable predictions about how advertising and trial use lead to sales. If we have only rough guesses here, we're in trouble, and the model draws attention to our lack of knowledge, because it becomes evident that our sales estimates are on very shaky ground.

One thing to note is that the benefit here does not come from capturing complexity in the cause-effect relationships. If you go back to research on learning and learning organizations, which is an enormous area, Senge, the dean of scholars on learning, was famous for models with feedback loops and so on. One of his points was that we live in complex systems, and complex dynamics can certainly arise here. But what I want to emphasize is that the benefit is not from complexity. The benefit is that the model helps the team identify and communicate, not only among themselves but with outsiders, the sponsoring executive, for example, the key cause-effect relationships and the assumptions they've made. It lets you discuss how well you know each relationship and makes a record of this idea of how the business works as you test and update the model over time. The model is going to evolve as you become more and more confident about the way your business works.

Now, not all of the assumptions and unknowns, not all of the relationships in the model, are equally consequential. We want to identify and learn about the most critical unknowns first. For some action-outcome relationships, we either know pretty well how they work, or they aren't really that important. But for others, if we're honest, we really only have a rough guess, and if we're wrong, the consequences could be severe. So it helps to map the different action-outcome relationships out on a chart. This will help identify the most critical unknowns, the make-or-break ones, and these are the ones to prioritize. On one dimension, you mark out how sure you are about the relationship; on the other, the importance of the consequences if you're wrong. The cells to concentrate on, of course, are the ones where you don't know much about the relationship and it's critical for the health of the business that you understand it. That's the idea at the center of what's different here. You're still going to plan out resources, timelines, and so on. But you're going to be focused on developing your knowledge of a model of the business to the point where you have reasonably reliable predictions about how it behaves.
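Here's a minimal sketch of that mapping exercise. The lecture describes a two-dimensional chart, not a scoring formula; the assumptions listed, the 1-5 scales, and the priority calculation below are all illustrative inventions.

```python
# Map each assumed relationship on two dimensions and surface the
# make-or-break unknowns. Entries and scales are illustrative only.
assumptions = [
    # (relationship, certainty 1-5: how sure we are it holds,
    #  consequence 1-5: how severe it is for the business if we're wrong)
    ("Advertising spend increases trial use",  4, 3),
    ("Trial users convert into paying sales",  1, 5),
    ("Product quality lifts sales",            3, 4),
    ("Unit costs fall as volume grows",        2, 2),
]

def priority(entry):
    """Low certainty combined with high consequence should be tested first."""
    _, certainty, consequence = entry
    return (5 - certainty) * consequence

for relationship, certainty, consequence in sorted(assumptions, key=priority,
                                                   reverse=True):
    print(f"priority {(5 - certainty) * consequence:2d}  {relationship}  "
          f"(certainty={certainty}, consequence={consequence})")
```

In this made-up list, the conversion of trials into sales comes out on top: we know the least about it and it matters most, so it's the first relationship to design an experiment around.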
Doing all this is going to take time, so let's move on to talk about how planning works over time in an innovation initiative. We need to iterate, experiment, learn, and adjust at a pretty fast pace. It's critical that we come quickly to the point where we know whether or not this is a viable business and how to compete. This has some implications. The first and most fundamental is that we want to establish a relatively fast review cycle: we meet to review and evaluate the plan and the model more frequently than we would with an established business. Partly for this reason, and partly for a second reason, we establish separate planning forums for innovation initiatives. The meetings with executives are separate from the planning meetings for the established businesses. The first reason is tempo. The second is orientation. It's hard to avoid a results orientation if you're in the same meeting. An established business comes up, where you can make reliable predictions and a results orientation makes perfect sense, and then the very next agenda item is your innovation initiative, where you can't make reliable predictions and it's about learning. The whole conversation feels very different, and it's hard to switch between the two, so you establish separate planning forums. It's really as if you've got one planning track for innovation initiatives and another for established businesses.

What you do in those planning tracks is also a little different. On the innovation side, we're discussing data, but we're really focused on our assumptions and the unknowns, as opposed to the established business, where we have a fairly reliable idea of what's happening and we're looking at the data coming in. On the innovation side, we're very focused on trends, which way things are going, rather than counting up totals. And on the innovation side, it's okay to revise predictions, because we're trying to align our predictions with outcomes when we don't yet know what the outcomes will be, whereas that's frowned upon in established businesses, where we can make reliable predictions. One point, though: when we make revisions in the innovation initiative, we do it through a formal process, not just conversations where somebody offers a plausible-sounding story about why something might have changed. We use that cause-effect model and keep it updated as a central part of the planning process. We use it as our anchoring device so that we can be rigorous about how we're developing the initiative. And in the same way that we change budgets and targets through discussion and a formal process, we change the model.

So you put all this together: a custom plan, with learning and experimentation baked into the planning process. And that's something pretty important. If you build experimentation and learning into the fabric of the innovation initiative in a way that's programmed and planned, it carries familiar themes for those of us who live in results-oriented environments where planning is a fact of life. Most of us, and executives in particular, will be able to relate to this process as something they can manage. And that's why you go through all this trouble. In the next video, we'll see similar themes, but we'll focus on evaluation in particular for innovation initiatives.