We're going to be talking right now about how preference testing differs from A/B testing. A lot of people use the two terms interchangeably when they really mean one or the other, so it's important to have a good sense of which one you're actually talking about. For the purposes of this course, we're covering A/B testing in detail.

One of the primary differences between A/B testing and preference testing is that in A/B testing, you're actually running an experiment with statistically significant data, while in a preference test, you're really asking whether people prefer one version of something over another. Because an A/B test is a true experiment, success is judged by the numbers: you want to make sure you're testing at a level of statistical significance that gives your business stakeholders confidence in the performance of one variant over another.

In preference testing, you're typically running with a much smaller sample size, often 10 to 30 participants, and you're using a mixed-methods approach to get a sense not only of whether one version performs better, but why. So you might start with a usability test and then follow up with questions like, "Why did you prefer this version over the other one?" In another method, you might have people test different variants of a navigational structure, so you can see that, yes, they reached the point in the experience you wanted them to reach more quickly using a particular path, and then delve further into why by interviewing them about why they preferred that version over the other.

Next, A/B testing is really about understanding whether behaviors are being impacted by variations in how you approach resolving a design problem. A/B tests are very much KPI-based (Key Performance Indicator) or metrics-based: the desired changes in behavior can be traced to specific metrics that you care about, and those metrics could range from conversion to engagement. When you're looking at preference testing, rather than looking only at performance, so whether or not more people were successful using this version or that version, you're directly asking someone whether they prefer one version over another, or you're asking other types of questions that help you get a sense of whether something is better or worse. That might be: "Do you understand this? Why or why not? Is this clear to you? Do you like this one better?"

In A/B testing, one of the primary things you'll notice is that you're using the between-subjects method. That essentially means you could be looking at the behavior of up to hundreds of thousands of participants to understand whether a particular design approach works better than another one. With preference testing, you're typically using the within-subjects method: a single participant looks at two to three options at most of a particular design approach and responds to questions that help you understand whether they understood or preferred one option over another. That really is because people have difficulty comparing more than two to three things at once.
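To make that within-subjects idea concrete, here's a minimal sketch of one way to judge a preference result from a small sample: an exact binomial sign test against a 50/50 null. This isn't from the course; the function name and the 15-of-20 split are hypothetical assumptions for illustration.

```python
from math import comb

def sign_test_p(prefer_a, n):
    """Exact two-sided binomial (sign) test against a 50/50 null:
    how surprising is this split if participants had no real preference?"""
    k = max(prefer_a, n - prefer_a)              # the larger side of the split
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)                    # two-sided p-value

# Hypothetical study: 15 of 20 participants preferred version A.
print(f"p = {sign_test_p(15, 20):.3f}")  # ~0.041: unlikely to be pure chance
```

Even when a split like this clears a significance bar, the numbers only tell you *that* participants preferred one version; the follow-up interviews are what tell you *why*.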
Finally, A/B tests are typically run on a live site. That means you're letting people see different versions of your experience in real time. Preference testing typically enables you to get much more radical, because you have an opportunity to test things that might be so far outside of what's currently available in your experience that people may react really strongly to them. So you have a lot more latitude to test things that might be very controversial in that setting.

In summary, A/B testing is really about looking at an experience that currently exists and saying, "Based on behavior, we have confidence that people are going to respond better, at a statistically significant level, to this approach over that approach." Preference testing is really about enabling you to optimize an approach as much as you can before release, so that you have a good sense of, "This approach is at least going to answer the needs and questions of most of the people who are going to be using it." You can actually use A/B testing and preference testing together: take some approaches you want to A/B test, do some preference testing ahead of time to see which are the clear winners to include in that A/B test, and you'll have a better sense that what you release isn't going to fail when you put it out into wide release.
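And to ground the "success is judged by the numbers" side, here's a minimal sketch of one common significance check for an A/B test: a two-proportion z-test on conversion rates. The function name and the visitor counts are hypothetical assumptions, not figures from the course.

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate significantly
    different from variant A's? Returns the z statistic and two-sided p-value."""
    p_a = conv_a / n_a                          # observed rate, variant A
    p_b = conv_b / n_b                          # observed rate, variant B
    pooled = (conv_a + conv_b) / (n_a + n_b)    # shared rate under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical traffic: 1,200 of 50,000 visitors converted on variant A,
# 1,320 of 50,000 on variant B.
z, p = ab_significance(1200, 50_000, 1320, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p is about 0.016, below the common 0.05 bar
```

A p-value below whatever threshold you've agreed on with stakeholders (commonly 0.05) is what provides that statistical confidence in one variant over the other.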