• Overview
  • Sample Size: Precision
  • Sample Size: Reliability
  • Precision vs. Reliability
  • Bias: Overview
  • Selection Bias
  • Response Bias
  • Test Yourself: Question 1
  • Test Yourself: Question 2

Sampling Strategy Overview

This interactive tutorial will help you determine an appropriate sample size for your study.

A sampling strategy should answer two questions:

  1. How much data should be collected?
  2. How should the data be collected?

How much data is needed depends on how confident we want to be that the sample values correspond to the population values. Once we have decided how precise we want our estimate to be, we can use sampling theory to determine our desired sample size. We can also reverse the process to get a sense of how accurate our estimate is, based on the number of respondents we can reach.

This tutorial explains the key things to consider when determining a sample size, without focusing on the mathematics. The second question, how the data should be collected, will be explored in depth in the separate tutorial on tool selection. It will also be considered briefly here, in the context of how data collection might bias the data.

Sample Size: Precision

The sample size you need for your study depends primarily on two factors: how precise you want your estimate to be, and how confident you want to be that your estimate is accurate.

The first concept we need to understand in picking a sample size is precision, also referred to as margin of error.

Say that you want to know how safe people feel in a refugee camp housing 5,000 individuals. You are going to distribute a questionnaire that contains this question:

How safe do you feel?

5 - Not at all safe
4
3
2
1 - Completely safe

How many people do you need to ask to get a sense of what the full population is experiencing?

To calculate this, we first need to decide how precise we want our estimate to be. In this case, let us say we want no proportion in the sample to deviate by more than 5% from what we see in the population. This means that if 35% of our sample says they feel completely safe, we can be confident that the proportion in the full population is no less than 30% and no more than 40%.

This + or - 5% figure is a good standard of precision for most surveys. If you use supplementary tools to validate your data (for example, community group discussions or key informant interviews), or if you have multiple data sources that you can triangulate against one another, you can use a wider margin such as + or - 10%.
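
For readers who want the arithmetic behind these figures, here is a minimal sketch in Python of the standard calculation (Cochran's sample size formula with a finite population correction). The function and variable names are illustrative, not part of this tutorial:

    import math

    def sample_size(population, margin_of_error, z=1.96, p=0.5):
        # z = 1.96 corresponds to 95% reliability (introduced in the next
        # section); p = 0.5 is the worst-case proportion, which requires
        # the largest sample.
        n0 = z ** 2 * p * (1 - p) / margin_of_error ** 2
        # Finite population correction: smaller populations need slightly
        # fewer respondents.
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    print(sample_size(5000, 0.05))  # -> 357 for the 5,000-person camp

For the 5,000-person camp this works out to roughly 357 respondents; without the finite population correction the figure is about 385.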

Sample Size: Reliability

Reliability is also referred to as the confidence level.

Once you have decided what level of precision you want, you have to determine how reliable it should be. Reliability tells us how confident we can be that the population value falls inside our precision range (+ or - 5% in this example).

The easiest way to think about reliability is to imagine repeating the survey 100 times, picking the same number of random people from the population each time. A reliability of 95% would mean that in 95 of our 100 surveys, each response category would have proportions similar to the full population proportions (within our + or - 5% range). In the remaining five surveys, at least one category would fall outside this range.
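
This thought experiment is easy to run as a quick simulation. The sketch below uses assumed numbers (a camp of 5,000 in which 35% truly feel completely safe, sampled with the roughly 357 respondents from the earlier sketch); it is an illustration, not part of the tutorial:

    import random

    # 5,000 residents, of whom an assumed 35% truly feel completely safe.
    population = [1] * 1750 + [0] * 3250
    hits = 0
    for _ in range(100):                         # repeat the survey 100 times
        sample = random.sample(population, 357)  # 357 from the earlier sketch
        share = sum(sample) / len(sample)
        if abs(share - 0.35) <= 0.05:            # within +/- 5% of the truth
            hits += 1
    print(hits)  # usually about 95 or more of the 100 surveys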

Statistical convention uses 95% as the most common threshold for reliability, but that is just a convention. You could pick a reliability of 99% or 90% or any other value (but the lower the value, the more careful you have to be in interpreting your results).
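
To see what that choice costs in respondents, the self-contained sketch below (again with illustrative names) converts each confidence level into its z-value using Python's standard library and recomputes the sample for the 5,000-person camp at + or - 5% precision:

    import math
    from statistics import NormalDist

    N, e, p = 5000, 0.05, 0.5  # population, margin of error, worst-case proportion
    for confidence in (0.90, 0.95, 0.99):
        z = NormalDist().inv_cdf((1 + confidence) / 2)  # two-sided z-value
        n0 = z ** 2 * p * (1 - p) / e ** 2
        n = math.ceil(n0 / (1 + (n0 - 1) / N))          # finite population correction
        print(f"{confidence:.0%} reliability: z = {z:.2f}, sample = {n}")
        # 90% -> 257, 95% -> 357, 99% -> 586

Moving from 90% to 99% reliability more than doubles the required sample in this example, which is one reason the conventional 95% is usually a reasonable middle ground.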

The Relationship Between Precision and Reliability

Precision and reliability together determine the sample size for multiple-choice questions. Strike a balance between the two.

There is a tradeoff between reliability, precision, and the number of respondents, with the required number of respondents increasing ever more steeply as precision rises. The key is to make an informed decision that weighs the importance of getting accurate information against the cost of collecting the data.

In most settings, a sample of 400 is sufficient to draw good conclusions about an entire population. If you are not looking for a strictly representative sample, or if you have multiple data sources to triangulate against one another (which makes you less dependent on a representative sample), a sample of 100-200 is sufficient.
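
Both rules of thumb can be checked with the same worst-case formula, this time without the finite population correction so the numbers hold for any large population. The first loop below shows how steeply the required sample grows as the margin of error shrinks; the second reverses the calculation to show what margin a given sample size buys you at 95% reliability:

    import math

    z, p = 1.96, 0.5  # 95% reliability, worst-case proportion

    # Required sample size grows steeply as the margin of error shrinks.
    for e in (0.10, 0.05, 0.03, 0.01):
        print(f"+/- {e:.0%}: n = {math.ceil(z ** 2 * p * (1 - p) / e ** 2)}")
        # +/- 10%: 97, +/- 5%: 385, +/- 3%: 1068, +/- 1%: 9604

    # In reverse: the margin of error a given sample size delivers.
    for n in (100, 200, 400):
        print(f"n = {n}: +/- {z * math.sqrt(p * (1 - p) / n):.1%}")
        # 100 -> 9.8%, 200 -> 6.9%, 400 -> 4.9%

This is where the figure of 400 comes from: about 385 respondents give + or - 5% at 95% reliability, while 100-200 respondents give roughly + or - 7-10%, which is acceptable when you can triangulate with other sources.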

Bias: Overview

Your sampling strategy can bias your data in two ways: through selection bias or through response bias.

Your data is biased when the responses in your sample systematically differ from the attitudes of the full population. There are two ways sampling strategies can bias the data:

  • Not every person in the population is equally likely to be selected for the sample, or
  • the data collection method compels people to respond in a particular way.

When some people are more likely to be selected than others, we call it selection bias; when people feel compelled to respond differently from what they actually believe, we call it response bias.

Selection Bias

When some people are more likely to be selected than others.

Imagine that you want to conduct a poll to predict the outcome of a US presidential election. To get a good estimate, you plan to ask 10,000 people how they are going to vote, and you decide to advertise your poll on BuzzFeed. About 3,000 of your respondents are sure they are going to vote, and of those 3,000, 2,500 (83%) say they will vote Democrat.

This proportion seems unlikely to be right, so you decide to conduct a second survey. The sample size of 10,000 remains the same, but this time you pick random numbers from a phone book. Some 5,000 people report they are going to vote, and 3,000 of them (60%) say they will vote Republican.

In the first survey you have a voter turnout rate of 30%, suggesting the Democrats will get 83% of the votes. In the second you have a 50% voter turnout, suggesting the Republicans will garner 60% of the votes.

So which result is right? Think about it before reading on.


The best conclusion is that neither poll is likely to be accurate, because neither polling method could reach the entire electorate.

BuzzFeed is an entertainment website focused on young adults, so most of your sample was probably college students. While you might have gotten a good sense of voting intentions and political preferences among the American college community, you might not have for the American population at large.

The second method is better, but it still fails to reach anyone not listed in the phone book: many young people who never register a landline, people who are homeless, and people who don't have a phone for some other reason.

The point of this example is that how you collect your responses influences who is likely to respond. If you only get responses from a sub-population that differs systematically from the total population, chances are that your estimates will differ as well.

Response Bias

When people feel compelled to respond differently from what they actually believe.

Now imagine that you are conducting the same survey to find out who will be the next US president, but this time you have volunteers from the Democratic Party asking people on the street.

If you choose this sampling method, it is likely that a larger proportion of your sample will report supporting the Democrats than you will see in the actual election. This is because most people tend to respond in a way that pleases the interviewer, a tendency known as courtesy bias.

A related bias is conformity bias, where people tend to respond in the way favored by their social group, regardless of their own opinion. This type of bias is particularly strong when respondents feel that their responses are not anonymous.

Using independent third-party data collectors or anonymous survey tools is a good way to reduce response bias. Regularly discussing the data with communities also helps you get past both courtesy and conformity bias. See the dialogue and course correction tutorial for more information.

Test Yourself: Question 1

What type of bias, if any, could be skewing your results?

You want to investigate sexual health among young women in a refugee camp in Jordan, just across the border from Syria. You pick five random areas of the camp, hold a focus group with 30 young women in each area, and ask them about their experiences. No one in your sample claims to be sexually active.

Is this because of selection bias, response bias, or no bias at all?

Test Yourself: Question 2

What do reliability and precision tell you about your results?

You want to know how important an urban food delivery program is to the people receiving the aid in a suburb of Nairobi, Kenya. Responses range from 1 (not important) to 5 (very important). Your sample size gives you a 3% margin of error with 90% reliability, and 43% of your sample responds that the program is very important to them.

Assuming the data collection is unbiased and the question is well designed, what can you reasonably conclude?