I want to end by talking about some competing expectations between evaluators, implementers, and funders that may affect your planning or implementation of coverage surveys. Some things to be aware of as you think about your coverage survey.

There are a number of stakeholders that are often involved in these surveys, and they have distinct but overlapping interests. A typical set of stakeholders would be the program implementers, the folks who are implementing the program that you may be evaluating. Program evaluators: that may be one or several institutions charged with evaluating a program, and they may be the ones doing the survey. Sometimes there's no outside evaluator and the implementer is the one doing the survey. Then a third set of stakeholders are the funders, whoever is funding the program and/or the evaluation and coverage survey, and they have their own set of interests.

The program implementer's goal is generally to improve the program they're implementing. They're interested in surveys insofar as they provide information that will help them improve the program, or, if it's a program that hasn't started yet, better design the program. They also may want to obtain continued funding for a program, so they may want to use survey data as evidence that a program is working and should be continued. Then they may need to fulfill obligations to the funder. There are cases where the program funders require that an implementer conduct a household survey at baseline and/or endline to refine the program and also to provide evidence of impact, or lack of impact.

Outside evaluators have their own set of interests. Their aim usually is to rigorously assess whether the program achieved a population impact. They also usually want to improve the program that they're evaluating, again using survey data and other results. They also have obligations to the funder.
They are funded to evaluate a program, usually by the same folks who are funding implementation, and they need to fulfill those obligations; they need to do what they were funded to do. Funders have their own set of interests: they want to know whether the program that they funded achieved a population impact, and ideally they want to see that it did. Was this a good investment or not? Is this something that should be replicated? They also ideally want to improve the program and determine whether to continue funding it or replicate it elsewhere.

How might these stakeholders and their different interests come into conflict in ways that are relevant for your household survey? The first potential area of conflict is survey objectives and content: what the funder or the implementer wants to measure, or thinks is important to measure, versus what is measurable. Your role as the group or person designing or implementing the survey is to think about the distinction between those things. Another point is the cost to measure the indicators of interest. The stakeholders may have a long list of things that they want to measure. Some of them will require larger sample sizes and be more expensive to measure than others. Some of them might require biomarkers and be expensive to measure for that reason. Then there's survey design: how big or small a change should the survey be powered to detect? Essentially, how big of a sample size do you need? The bigger the sample size, the more money. There's also the rigor of survey sampling and implementation, and then the time and cost to implement the survey.

I want to address these in a little more detail. Starting with measurability: many, but not all, indicators are measured by survey programs or are able to be measured by household surveys. A recent review looked at the 58 interventions with evidence of effectiveness against cause-specific mortality and stillbirths.
It found that fewer than half of them are currently measured by household surveys. The periconceptional, antenatal, and intrapartum periods are particularly poorly represented, in part because respondent reporting is more difficult for those periods, especially for intrapartum interventions. And yet many of those interventions are delivered via health facilities. So there may be a number of things that the implementers or the donors ideally want to measure, things that they are doing in their program, where you have to say: actually, there's no way to measure this well in a household survey.

Coming back to the question of survey design: how big or small a change should the survey be powered to detect? The smaller the change to detect, the larger the sample size and the cost. If you want to detect a 10 percentage point change in ORS and zinc treatment for diarrhea, that will require a larger sample size and cost more than if you want to detect a 20 percentage point change. Some indicators also, as I mentioned, require particularly large sample sizes. These are primarily indicators that have small denominators, that are measured among small groups of people, where you have to go to more households to find enough of that group. This is true, for example, of exclusive breastfeeding, which is measured among infants zero to five months; care-seeking for suspected pneumonia, where only a very small percentage of children, maybe 2-3 percent, have had signs of [inaudible] respiratory infection in the last few weeks; and indicators measured among adolescents, where adolescents 15-19 represent a very small slice of the population.

The rigor of survey sampling and implementation is often another conflict point. There are many potential sources of error in surveys. We've talked about some of them, and we'll talk more about others in the coming lessons. So activities to minimize error are really important.
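To make the sample size point concrete, here is a minimal sketch using the standard normal-approximation formula for comparing two proportions. The 30 percent baseline coverage, 80 percent power, 5 percent significance level, and 2.5 percent pneumonia prevalence are illustrative assumptions, not figures from the lecture:

```python
import math

def sample_size_per_group(p1, p2):
    """Approximate n per survey round to detect a change from p1 to p2,
    two-sided test of two proportions at alpha=0.05 with 80% power."""
    z_a = 1.959964  # z for alpha/2 = 0.025
    z_b = 0.841621  # z for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Detecting a 10-point change takes roughly four times the sample
# of detecting a 20-point change (assumed 30% baseline coverage):
n_10pp = sample_size_per_group(0.30, 0.40)  # ~356 per round
n_20pp = sample_size_per_group(0.30, 0.50)  # ~93 per round

# Small-denominator indicators multiply this further: if only ~2.5%
# of children recently had suspected pneumonia (assumed prevalence),
# finding n_10pp eligible children means screening many more.
children_to_screen = math.ceil(n_10pp / 0.025)
```

The same two-step logic, required respondents divided by the share of the population that is eligible, is why indicators like care-seeking for suspected pneumonia drive survey costs up so sharply.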
These include a strong pilot, good training, intensive supervision, and data quality assurance. Those are really important to ensure that you have high-quality, accurate data. But they cost money and take time, and so there may be a conflict between what the survey implementers or the evaluators want to do in terms of rigorous implementation and what they have the money or time for. Finally, surveys take time and cost money to implement well. Often stakeholders want you to complete a survey quickly, at low cost, and still have high quality, and that's really not possible. Often you need to have difficult conversations about what is feasible to do, within what time frame, and at what cost.

How do you address these conflicts and potentially mitigate them? Probably the most important thing is to communicate early and often with all of the different stakeholders. Make sure that you understand the aims of the different stakeholders, particularly the program implementers. What are their aims? What do they want to get out of the survey? What do they want to understand? What does the [inaudible] want to understand? What do you need to do as an evaluator? Have clear, specific survey objectives that are shared with all the stakeholders and that everybody agrees on. Have a list of indicators to be measured that is agreed on by all parties, which may require some negotiation. Then have a budget and a timeline that you communicate about, that you may need to negotiate on, and keep the different stakeholders aware of potential changes to the budget and timeline.

Relatedly, you want to be realistic about the survey budget and timeline, so don't under-budget. Don't say you can do it in a less expensive way when you know you're going to get to the field and actually need another $100,000, or another month, that you hadn't told the other stakeholders about. That causes problems.
Even if there is pressure to reduce the budget or to do things faster, you don't want to reduce the budget or the timeline below what is feasible, because it will just cause conflict in the end. Finally, when you have to make trade-offs, and you will likely have to make trade-offs, you want to prioritize data quality. What do I mean by that? In this course, we're going to talk about a number of activities that are important for ensuring the quality of the data coming out of your survey. This includes high-quality sampling; high-quality training that covers everything it needs to cover, with sufficient time; intensive supervision; and data quality assurance. Those things cost money, and you really don't want to cut corners on them, because if you do, you won't be able to trust any of the data coming out of your survey.

What can you trade off? Often you can trade off some of the indicators that you want to measure. People may come in with a very long list of indicators, and you may be able to work with them to cut that list down to the ones that are really essential. By doing that, you reduce the length of the interview and therefore the cost of the survey. Another thing you'll see sometimes is a stakeholder who has one indicator that they're really set on measuring, but that is very expensive because, for example, it's measured among a small population group and therefore requires a very large sample size. By cutting that one indicator, you may be able to reduce your sample size and therefore your costs without affecting the quality of the rest of the data. Similarly, for any indicators that require biomarkers, cutting those indicators cuts all the costs associated with the biomarkers. When you are making trade-offs, you want to be thinking about what you can do that is not going to negatively affect the overall quality of the survey.

Then finally, keep in mind that usually everybody is acting in good faith.
Different stakeholders have different interests, but everybody's interest usually is to improve the health of the population: the health of children or mothers, or whoever the beneficiaries of the program are. Now, you may have particular interests as the group implementing the survey, but that doesn't mean that the donor or the implementers are not also acting in good faith and working to improve population health. Keeping that in mind will often get you a long way towards a productive collaboration with the different stakeholders.
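To put some illustrative numbers on the earlier point about cutting one expensive indicator: the survey's household count is driven by whichever indicator needs the most households, so dropping a single small-denominator indicator can shrink the whole survey. All figures below are made-up assumptions for the sketch, not numbers from the lecture:

```python
import math

# Hypothetical indicator list: (respondents needed, share of
# children eligible for that indicator). Illustrative values only.
indicators = {
    "ORS and zinc for diarrhea": (356, 0.12),
    "care-seeking for suspected pneumonia": (356, 0.025),
    "exclusive breastfeeding 0-5 months": (300, 0.10),
}

def households_needed(inds):
    # Assume roughly one child per household: the survey must visit
    # enough households to satisfy its most demanding indicator.
    return max(math.ceil(n / share) for n, share in inds.values())

full = households_needed(indicators)
trimmed = households_needed(
    {k: v for k, v in indicators.items()
     if k != "care-seeking for suspected pneumonia"})
# Dropping the one small-denominator indicator shrinks the survey
# from ~14,240 households to ~3,000 under these assumptions, while
# leaving the quality of the remaining indicators untouched.
```

This is the shape of the negotiation described above: the budget saving comes from removing the binding constraint, not from cutting corners on sampling, training, or supervision.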