Hello, and welcome to the third week of the Mobile Interaction Design course. This week and the following one are dedicated to the discussion of usability evaluation. You need knowledge of usability evaluation not only because of the practical task you have to carry out to get the course certificate, but also because evaluation plays a key role in design activities. I'm talking not only about the necessity of validating your design decisions with users. When designing interactions, you need to choose among several alternative solutions. But how can you do that without knowing which one is better and why? You should learn to evaluate usability to be good at interaction design. That's where I stand. Actually, we slightly touched on the topic when discussing the goal-directed design process. I mentioned that a holistic assessment of product qualities can be done in different ways, for example, via mobile analytics set up to gather data in accordance with the product's KPIs, and some other methods. But during the design of a new app or feature, it's customary to evaluate separate properties. For example, you can determine whether the proposed solution is meaningful, that is, whether it solves the problem users have. As you know, that's a matter of the "what" level. In the previous week, I pointed out several methods that help to do it, like concept testing or context scenarios. You can even evaluate the aesthetics of interaction separately. For example, my friend and former colleague proposed an empirical method of pairwise comparisons that helps to choose among several visual design concepts. Of course, the aesthetics of interaction is concerned not only with the appearance of a user interface, but also with interface sounds, animations, the style of interface text, and other sensory elements. As you remember, for the purposes of this course, we chose to use usability as a measure of how well a product or service is doing on the "how" level.
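The lecture mentions an empirical method of pairwise comparisons for choosing among visual design concepts. As a minimal illustration of the general idea, not of the specific method mentioned, here is a sketch that tallies participants' pairwise preference judgments into a win rate per concept; the data and the scoring rule are made up for the example.

```python
from collections import Counter

# Hypothetical data: each participant sees every pair of three design
# concepts and picks the one they find more aesthetically pleasing.
concepts = ["A", "B", "C"]
# (winner, loser) judgments collected from three participants.
judgments = [
    ("A", "B"), ("A", "C"), ("B", "C"),   # participant 1
    ("A", "B"), ("C", "A"), ("B", "C"),   # participant 2
    ("B", "A"), ("A", "C"), ("C", "B"),   # participant 3
]

# Score each concept by the share of its comparisons that it won.
wins = Counter(winner for winner, _ in judgments)
appearances = Counter()
for winner, loser in judgments:
    appearances[winner] += 1
    appearances[loser] += 1

ranking = sorted(concepts, key=lambda c: wins[c] / appearances[c], reverse=True)
print(ranking)  # concepts ordered from most to least preferred
```

With enough participants, such tallies give an empirical basis for picking one visual concept instead of arguing from taste.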
These two weeks will help you dive deep into the topic. They cover a little bit of theory on human activity and interaction problems at the beginning, and address different usability evaluation methods, whose classification we are about to discuss. But before we start talking about the classification, there are a couple of things that you should learn about usability evaluation. Firstly, usability evaluation implies the existence of a specified user need. No matter what method you use, there always must be a goal. Without the goal, you have no basis to call an interaction successful or unsuccessful. Without the goal, there are no criteria to call any interaction phenomenon problematic. Secondly, to measure usability, the design of the method should take into account, or eliminate, confounding factors related to the achievement of the goal. For instance, when performing some task in the real world, a person may change her mind for external reasons and drop out of the interaction. When measuring usability, such things have to be kept in check. Another example: before renting a car, a user may visit an app several times in order to choose a car, check out prices, etc. When measuring the usability of the car rental task, such interactions should be removed from consideration. To classify usability evaluation methods, we will use a slightly different set of dimensions than in the classification of user research methods. For instance, I merged the qualitative/quantitative dimension with the research goals dimension, getting formative and summative subclasses. Formative evaluation is nearly always qualitative. The term originated in instructional design, and it means that this kind of evaluation helps to form the design of an app. Formative methods, such as observational field visits, scenario-based walkthroughs, heuristic evaluation, and others, are used for refinement purposes, to discover and gain a deep understanding of interaction problems.
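The point above about eliminating confounding factors, such as browsing visits that were never attempts at the rental task, can be sketched in code. The session records, field names, and filtering criterion below are all illustrative assumptions, not a real analytics API.

```python
# Hypothetical session log for a car rental app.
sessions = [
    {"user": "u1", "started_checkout": False, "task_time_s": 40},   # browsing only
    {"user": "u1", "started_checkout": True,  "task_time_s": 210},  # real rental attempt
    {"user": "u2", "started_checkout": True,  "task_time_s": 180},
    {"user": "u3", "started_checkout": False, "task_time_s": 25},   # price check
]

# Keep only sessions where the user actually pursued the goal
# (here, proxied by starting checkout), so exploratory visits
# don't distort the task-level usability metric.
goal_sessions = [s for s in sessions if s["started_checkout"]]
mean_time = sum(s["task_time_s"] for s in goal_sessions) / len(goal_sessions)
print(len(goal_sessions), mean_time)  # 2 sessions, 195.0 s on average
```

The important part is not the particular proxy chosen, but that the measurement design states explicitly which interactions count toward the goal.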
Summative methods are used to determine whether a design meets specific performance or satisfaction goals, to establish usability benchmarks, or to compare alternative solutions. As you may guess, summative evaluation is always quantitative. Examples of such methods are benchmark usability testing and methods of the GOMS family. By the way, all our conclusions about ways of gathering data and the number of designs and samples from the previous week hold true for these two separate classes of methods. Not all usability evaluation methods are user research methods. For example, no one calls cognitive walkthrough a user research method. When professionals talk about user research methods, they almost always mean empirical methods, the methods that involve users. Analytic methods, in contrast, do not imply user involvement. One can distinguish between two classes of analytic methods: model-based and inspection. As its name implies, the whole idea of a model-based method is to study interaction through creating an analytical model. The aforementioned methods of the GOMS family are modeling methods; we will discuss them later in this lecture. According to Jakob Nielsen, usability inspection is the generic name for a set of methods that are all based on having evaluators inspect an interface. Examples of such methods are cognitive walkthrough, perspective-based usability inspection, different checklist techniques, etc. All these methods are formative. You may wonder, why do we even need analytic methods if we have empirical ones? I'll give you two answers. Firstly, empirical methods cannot be used in some situations. The most prominent example of such a situation is the process of design creation itself. When you propose any design solution, you need to evaluate it, and to do it fast, because normally you will propose tons of alternative solutions within a short period of time. You can't use empirical methods here; they are too time-consuming for this.
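To make the model-based idea concrete: the simplest member of the GOMS family is the Keystroke-Level Model (KLM), which predicts an expert's task execution time by summing standard per-operator time estimates. Below is a sketch using the commonly cited operator values from Card, Moran, and Newell; the login task breakdown itself is a made-up illustration.

```python
# Keystroke-Level Model: predict expert task time by summing
# per-operator estimates. Commonly cited operator times:
OPERATOR_TIME_S = {
    "K": 0.28,  # keystroke or button press (average typist)
    "P": 1.10,  # point at a target on the screen
    "H": 0.40,  # home hands between devices
    "M": 1.35,  # mental preparation
}

def klm_predict(sequence):
    """Return the predicted execution time for a string of KLM operators."""
    return sum(OPERATOR_TIME_S[op] for op in sequence)

# Hypothetical login flow: think, point at the login field, type 8
# characters, think, point at the password field, type 8 characters,
# point at the button, tap it.
login = "MP" + "K" * 8 + "MP" + "K" * 8 + "PK"
print(f"{klm_predict(login):.2f} s")  # roughly 10.8 s for this flow
```

Because such a model needs no participants at all, you can compare dozens of alternative flows in minutes, which is exactly why analytic methods work as a first filter during design creation.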
In this case, analytic methods play the role of a primary filter, passing only the most promising solutions on to empirical validation. Another example of [INAUDIBLE] situation is when you don't have direct access to the target audience of your app for some reason. Secondly, analytic and empirical methods help to uncover different types of interaction problems. To get better results, you need to combine them in your design process. These dimensions stay the same. Note that, in fact, all analytic methods fall into the "no users" class. The say/do/make dimension makes no sense for usability evaluation methods, because all empirical usability evaluation methods allow you to collect data about what people do. In addition to this, formative empirical methods allow you to gather "what people say" data. The fourth dimension is concerned with what you can evaluate. The fact is that different evaluation methods require different levels of design readiness. What I mean by concepts and non-interactive prototypes are non-interactive artifacts of different levels of fidelity, from sketches to the high-fidelity wireframes and mockups presented on this slide. We'll discuss various representations of design in the fifth week. You can make all these representations interactive, without the need to write a single line of code, using specialized prototyping software, for example, Justinmind, Axure, InVision, and so on. In this case, they will become interactive prototypes. The last option is concerned with mobile applications released on the market, or at least ones that have reached the stage of alpha testing. [INAUDIBLE] methods, for instance split testing, can be applied only during this phase. All right, that's all with the classification. In the next part of this lecture, we will start an overview of usability evaluation methods in accordance with this classification. Thank you. [MUSIC]