For users to be able to effectively make use of a system that you've designed, they need to be able to discover what the system can do and how to operate it, or how to make it do whatever it can do. This is how we bridge the Gulfs of Execution and Evaluation that we talked about previously. To be more specific, to bridge the Gulf of Execution, users need to be able to figure out what actions are possible, and to bridge the Gulf of Evaluation, users have to be able to discover whether the actions they completed were successful. We'll talk about a few things we can do to help bridge those gulfs and support the discoverability of our system. So in this lecture we'll talk about affordances, signifiers, feedback, constraints, and conceptual models.

An affordance is a feature of an object or an environment that indicates the possibility of action. To take this door knob, for example: because of the size and shape of the door knob, and its fit to the human hand, it provides the affordance of graspability. It looks like something that we can grab with our hand. If our hands were much larger, or much smaller, it wouldn't have the affordance of graspability, because it wouldn't fit into our hands. By providing that affordance, it suggests what we can do with it; it suggests an action that we can take. To provide another example, look at these buttons. They provide the affordance of pressability because they're about the right size for a human finger to press. And as you can see here, the stop button is raised, suggesting that it could potentially be depressed.

We often see these affordances, or suggestions of actions that can be taken, used in purely graphical interfaces as well. In this case they're not truly affordances, because they're not actually attributes of the object that suggest the actions that can be taken, but we use the visual cues that we associate with affordances to communicate the possibility of action. So in this example taken from a bank website, you can see that the six buttons across the navigation area (savings and checking, loans, and so forth) have a slightly raised effect to them, suggesting that they could be pressed in the way that a physical button could be pressed. By doing so, the design suggests where the user might be able to take actions of certain kinds using this system.

While affordances are aspects of physical objects or physical environments that suggest how a user might interact with them, they often don't go all the way toward telling us what will happen when the user interacts with them, or, in some cases, where specifically the user needs to interact in order to make an action take place with the system. So to augment what we can accomplish with affordances, we often use what are called signifiers, which are more explicit signs or messages that indicate to the user what will happen if they take a certain action. To return to this example, the affordance of pressability is provided by the raised aspect of the stop button, and also by the size of the button and its fit with human hands and fingers. However, the specifics of what will happen when the red button is pressed, as opposed to what will happen when the green button is pressed, are communicated through the words "Start" and "Stop". Those are, in this case, the signifiers: signs showing specifically what will happen when each of these buttons is pressed.
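As a concrete illustration of how these ideas carry over to graphical interfaces, here is a minimal sketch, in TypeScript against the browser DOM, of a navigation button that pairs an affordance cue (a raised, pressable look) with a signifier (a label saying what pressing it does). The function name, styling values, and section labels are illustrative assumptions, not code from the bank site shown in the example.

```typescript
// Hypothetical sketch: pairing an affordance cue with a signifier in a web UI.
function makeNavButton(label: string, onPress: () => void): HTMLButtonElement {
  const button = document.createElement("button");

  // Signifier: the label tells the user what will happen when they press it.
  button.textContent = label;

  // Affordance cue: a slightly raised look, borrowed from physical buttons,
  // suggests that this element can be pressed.
  button.style.padding = "8px 16px";
  button.style.border = "1px solid #888";
  button.style.borderRadius = "4px";
  button.style.boxShadow = "0 2px 2px rgba(0, 0, 0, 0.3)";
  button.style.cursor = "pointer";

  button.addEventListener("click", onPress);
  return button;
}

// Usage: a navigation area like the bank example, where every button shares
// the same affordance cues and only the signifiers differ.
const nav = document.createElement("nav");
for (const section of ["Savings & Checking", "Loans", "Credit Cards"]) {
  nav.appendChild(makeNavButton(section, () => console.log(`Go to ${section}`)));
}
document.body.appendChild(nav);
```

The point is not the particular styles but the division of labor: the shared visual treatment says "you can press here", while the labels say what each press will do.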
Signs are not always textual; they're not always written out the way they were in the previous example. Here's an example of a walking path through a field that shows where previous walkers have traveled and indicates to somebody coming into the field later what might be the best path to get through it. In this case, the sign is the worn area through the field, and it's provided simply by the actions of previous travelers through this area.

Affordances and signifiers often work together to communicate what actions are possible and what might happen when those actions are taken. Affordances, if done well, can communicate very intuitively where the action needs to occur with a system or an interface. But signifiers are often necessary, especially when many actions are possible and perhaps many actions share the same affordances. To use the example of the bank website again, we can see that there are many different actions available to go to different parts of the site: savings and checking, loans, credit cards, and so forth. All of those buttons share the same perceptual affordances, but the signifiers are needed in order to indicate what the function of each individual button would be.

It's worth noting that conventions and standards can emerge over time that reduce the need for explicit affordances. Most users have become accustomed to clicking on text in websites, even if that text is not called out with affordances like raised buttons, or underlines, or special coloring, or anything like that. If we look at the Amazon website, we can see that the navigation options like Browsing History, Today's Deals, Gift Cards & Registry, and so on are not called out in any particular way. In fact, they don't even look different from text on the page that isn't clickable. However, users have become accustomed to seeing text in certain places and knowing what it will do. So we don't always need to use both affordances and signifiers if we understand what our users expect. However, when we're working in a new domain, where maybe the interface standards aren't as well established, we would need to be more careful about using affordances and signifiers together to indicate what actions are possible and what might happen when we take them. Here's an example of that: the Amazon Dash, which is basically just a button associated with a product that you can reorder by pressing it. You'll notice that this device is very simple and provides very little in the way of instructions for how to use it, but it relies heavily on affordance, in this case the pressability of the button, to communicate how to operate it and what the functionality might be.

Another important design principle for supporting discoverability and helping users understand how a system works is providing feedback. Users need to know that the system received their input in the first place, but they also need to know what the system did with their input. Let's look at an example of an interface that does a nice job of providing feedback at multiple levels. This is an iPhone interface, and it does a very nice thing with the password input. When you have a password input, one of the things you want to do is cloak the input, usually by replacing the text that the user enters with dots or stars or something like that, so that somebody looking over their shoulder can't see it.
However, one of the problems you often have is that the user themselves isn't sure whether they've entered the correct key. So you'll notice that when the user enters a password on an iPhone, the key that they press is briefly shown, giving the user a chance to see that the system has received the correct input, or at least the input that they gave. If they type a wrong character, they can quickly go and correct it. However, after a couple of seconds, that character is replaced by a dot, cloaking the input so that no one else can see it. (A small code sketch of this show-then-mask pattern appears below.) But there are other levels of feedback that are important too. For example, if I now sign in, I receive feedback from the system saying that the password I entered was incorrect, and it gives me options for how to correct it, like trying another password or following the forgot-password dialog.

Another principle that's important for guiding users towards the correct actions is to provide constraints. The idea here is that unavailable actions should be disabled, so it's not possible to take an incorrect or unavailable action, and also that we should limit the total number of options, so it's easier for users to find the correct actions. Here we return to the stop and start button example to show the principle of constraints. What we see here is that, right now, the start button is not pressable: it's already been pressed in, and so that action is not available. This essentially constrains the user to choose only the stop action, because that's the only one available at this time. Here's another example, this one from Google Docs. We can see that, when we have a blank document and we select the Table menu, the only option available to us is to insert a table. The other actions, like inserting rows, inserting columns, and so forth, are not available to us, because those can only be taken once you have a table to operate on. So by constraining the actions that the user is able to take, we keep them from taking an action that won't lead toward the result that they want, and we encourage them to take the actions that are available. (This kind of constraint is also sketched in code below.)

Next, here's an example of limiting the total number of options, which makes it easier for people to be successful. A classic example of this is the Google home screen, which really gives you almost nothing to do except type a search in. It's very hard to make a mistake on this interface because there's really only one thing you can do, which is type in your search term. Then you have two options: you can either perform the usual Google search, or you can click I'm Feeling Lucky and just have it decide on a search result to give you. We can compare that to, for example, the Yahoo interface, which does provide a search box and a search button but also provides many other options, which could distract users or lead them down different paths and make it less likely that they would successfully complete the search they wanted to do.

And finally, it's important to support the formation of users' conceptual models, so that as they interact with the system, they learn not only how the specific actions they've already taken work, but how future actions might work using that system. Once they understand in a broader sense how the system works, they are able to simulate future actions and anticipate what will happen when they do other things with the system that they haven't done yet.
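Here is the code sketch of the show-then-mask feedback pattern mentioned above. This is a minimal browser-based illustration, not Apple's implementation; the one-second reveal delay and the helper names are assumptions.

```typescript
// Hypothetical sketch: briefly reveal the last typed character, then mask it.
const REVEAL_MS = 1000; // assumed delay before the last character is cloaked

function createMaskedPasswordField(): { element: HTMLInputElement; getValue: () => string } {
  const field = document.createElement("input");
  let actual = "";                    // the real password the user has typed
  let maskTimer: number | undefined;

  // Render the field, optionally leaving the most recent character visible
  // so the user can confirm the system received the key they meant to press.
  const render = (revealLast: boolean) => {
    field.value = revealLast && actual.length > 0
      ? "•".repeat(actual.length - 1) + actual[actual.length - 1]
      : "•".repeat(actual.length);
  };

  field.addEventListener("keydown", (event) => {
    if (event.key.length === 1) {
      actual += event.key;            // printable character
    } else if (event.key === "Backspace") {
      actual = actual.slice(0, -1);
    } else {
      return;                         // ignore arrows, tab, etc.
    }
    event.preventDefault();           // we manage the displayed text ourselves
    render(true);

    // After a short delay, cloak the last character too, so someone looking
    // over the user's shoulder cannot read the password off the screen.
    window.clearTimeout(maskTimer);
    maskTimer = window.setTimeout(() => render(false), REVEAL_MS);
  });

  return { element: field, getValue: () => actual };
}
```

Note that the two goals are in tension: the feedback of seeing the keystroke and the security of cloaking it. The delay is what balances them.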
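And here is the sketch of the constraint principle from the table-menu example, also mentioned above. The menu items and the documentHasTable flag are hypothetical; this is not Google Docs' actual code, just an illustration of disabling actions that cannot apply yet.

```typescript
// Hypothetical sketch: disable actions that have no valid target yet.
interface MenuItem {
  label: string;
  requiresTable: boolean;   // does this action only make sense once a table exists?
  element: HTMLButtonElement;
}

function buildTableMenu(specs: { label: string; requiresTable: boolean }[]): MenuItem[] {
  return specs.map(({ label, requiresTable }) => {
    const element = document.createElement("button");
    element.textContent = label;
    return { label, requiresTable, element };
  });
}

// Constraint: anything that cannot succeed right now is disabled, steering
// the user toward the only actions that are currently available.
function applyConstraints(menu: MenuItem[], documentHasTable: boolean): void {
  for (const item of menu) {
    item.element.disabled = item.requiresTable && !documentHasTable;
  }
}

// Usage: with a blank document, only "Insert table" stays enabled.
const tableMenu = buildTableMenu([
  { label: "Insert table", requiresTable: false },
  { label: "Insert row", requiresTable: true },
  { label: "Insert column", requiresTable: true },
  { label: "Delete table", requiresTable: true },
]);
applyConstraints(tableMenu, /* documentHasTable */ false);
```

Disabling, rather than hiding, the unavailable items also preserves discoverability, since the user can still see that row and column operations exist; they just can't take them yet.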
And the way that we can help users form effective conceptual models is to use affordances, signifiers, feedback, and constraints to shape those models so that they're accurate. Here's an example of an old-fashioned air conditioning interface for an automobile. You can see that the combination of these different interface elements makes it fairly easy for the user to anticipate what's likely to happen when they interact with the system. Here we have a temperature slider that fairly clearly indicates what you need to do in order to increase or decrease the temperature, and here we have a fan control that fairly clearly indicates what you need to do to turn the power of the fan up or down. The interface elements in the middle are probably a little more ambiguous, and a user might have to experiment with them, or have some prior knowledge, in order to figure out exactly what they're likely to do.

In contrast, here's an interface that does not provide an effective conceptual model for how to use it. This is an old-fashioned digital watch, and one thing you'll notice about it is that it has these four buttons around the outside, which are the way that you control the different modes of the watch. You could switch it into alarm mode, or turn on the stopwatch, or change the time, or things like that. However, there's no really clear way to map those buttons to the specific actions they're going to perform. So the only way that a user can form a conceptual model of this is by trial and error or by reading a manual.

Supporting the formation of effective conceptual models is challenging, and perhaps one of the most challenging aspects of designing interactive systems. The fundamental problem is that, as designers, we often have a very clear idea of how the system is supposed to work, and we use that when we design the system. However, the user doesn't have the benefit of what's inside our head. They only have access to whatever the system presents in terms of its interface, and through the affordances, the signifiers, the constraints, the feedback, and trial and error, they develop some model of how the system works. And it's often disconnected from what the designer originally thought. So coming up with ways to present a system image that supports the formation of an effective conceptual model is challenging, but it's one of the things we really need to do when we're designing these types of systems.

As I said, the key tools that we have to support the formation of conceptual models are affordances, signifiers, feedback, and constraints. However, there are two more that we can use to support the formation of more sophisticated, higher-level conceptual models. We need to think about consistency, and how to use consistency across different aspects of the system so that what users learn in one place, they can apply elsewhere. And we can also think about places to apply metaphor, where we can take users' knowledge of other systems and apply it to learning the system that we've designed for them. Metaphors in particular are a very common technique that user experience designers use to try to communicate complex system functionality in a way that users will rapidly understand. Here are some examples of metaphors that are quite common, so common in fact that we probably don't even think of them as metaphors anymore.
The notions of the file and the folder were instrumental in the early days of the desktop metaphor, in the early graphical user interfaces, in communicating to users how they could save and retrieve information, and how they could group that information in ways that were logical and organize it so that they could find it later. The trash can, or the recycling bin, is a metaphor that communicates to users how they can discard information but still get it back if they change their mind later. And finally, the most recent of these, the shopping cart, is a common metaphor that we use on e-commerce sites to communicate to users how they can select the things they want to purchase, and to separate that selection process from the process of purchasing, choosing shipping, and all of those kinds of things. The shopping cart helps to organize that by giving people a sense of how the shopping process is going to work based on their experience with real-world shopping.

In this lecture, we've talked about how we can bridge the Gulfs of Execution and Evaluation. We talked about the importance of discoverability, that is, supporting users' ability to discover what a system can do and how to do it, and the importance of conceptual models, which help users understand what else the system is likely to be able to do. And we've talked about the roles of affordances, signifiers, feedback, and constraints in helping to support both discoverability and the formation of conceptual models, so that we can design systems that will work well for the people who use them.