Great question! If you don’t have a sample size calculator at the ready, we’ve got a handy-dandy table with the answers. To use the table, just ask yourself two questions:

How many people are in your population?

How representative do your survey results need to be?
Answering the first question is pretty simple. The second can be a bit trickier. Think of it this way: the closer your sample is in size to your population, the more representative your results are likely to be. That’s why you’ll notice that the recommended sample size in the table below gets smaller as your tolerance for inaccuracy, or error, gets larger.
Let’s work through an example
Perhaps you’re interested in finding out how many people in your region of 10,000 people favor a longer school day for children and you’re willing to accept an error of plus or minus 5%. You sample 385 people, as the table recommends, and find that 70% of those surveyed are in favor of a longer school day. Given your 5% acceptable error rate, you can assume that if you’d asked every person in your region to take your survey, the actual proportion in favor of the longer school day would range from 65% to 75%.
But what if that range is too big? What if you need to be more precise? Well, then you’re going to need to sample more people. Using the table above and assuming a population size of 10,000, you can see that you would need 1,000 survey respondents for a 3% margin of error.
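Since this interval arithmetic comes up in every example, here is a minimal Python sketch of it (the function name is ours, purely for illustration):

```python
def result_interval(observed_pct, margin_of_error_pct):
    """Range the true population value likely falls in,
    given a survey result and its margin of error (both in percent)."""
    low = max(0, observed_pct - margin_of_error_pct)     # can't go below 0%
    high = min(100, observed_pct + margin_of_error_pct)  # can't exceed 100%
    return low, high

# 70% in favor with a 5% margin of error:
print(result_interval(70, 5))  # (65, 75)
```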
How many people should you invite to take your survey?
The table recommends the sample size you’ll need, not how many people you should invite to take your survey. So if you need 100 respondents and you expect that 25% of the people invited to take your survey will actually respond, then you need to invite 400 people (100 respondents ÷ .25 response rate = 400 invitations).
If you don’t know how many people are likely to respond to your survey invitation, it’s best to start by assuming a fairly high response rate, like 25%, because it’s usually better to invite too few people than too many at first. You can always invite more people later.
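The invitation math above is simple enough to script. A quick sketch, with the helper name ours and rounding up so you never under-invite:

```python
import math

def invitations_needed(target_respondents, expected_response_rate):
    """Invitations to send to reach a target number of completed surveys."""
    return math.ceil(target_respondents / expected_response_rate)

# 100 respondents needed, expecting a 25% response rate:
print(invitations_needed(100, 0.25))  # 400
```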
More to Come
We just covered a lot of information very quickly. If you’re interested in a better understanding of the terms we just referenced, watch for upcoming posts.
As always, we welcome your feedback, so let us know what you think in the comment section below.
P.S. If you are in need of people to take your survey, please click here to learn more about SurveyMonkey Audience—a new service that enables you to send your survey to respondents who match specific criteria you have in mind.
How does this table change if there are more than two answers to the questions?
My first reaction to this post was LOL. That soon turned to OMG. People might actually believe this! I better post a comment ASAP.
On the first day of every statistics class I’ve ever taken (and I’ve taken quite a few) one very important principle was drilled into me over and over again: the principle of RANDOM SELECTION. Nowhere in this post is the word RANDOM found. There is, in fact, an implied reference to a self-selected sample. When 25% of invitees respond to a survey, that sample is not random; it’s created by the choices made by the invitees, which is most likely not random at all. And if you don’t even start with a random selection of the population, there is absolutely no foundation to make statistical inferences from such a sample.
Further, there is no reference in this post to the “confidence interval.” Based on the suggested sample sizes you provide, it appears that you are assuming the accepted social science confidence level of 95%. Stated simply, that means if you base your statistics on a RANDOM sample of the sizes in your chart, you can be confident that 95% of the time the results from the sample will be within plus or minus the stated range of error of the population value. In other words, on average, you will not achieve a result within that plus-or-minus range one time out of twenty.
But the most important point here is that in order to make any statistical inferences, your sample must be achieved through RANDOM SELECTION. Please, this is an important point to understand, and failure to acknowledge it is one of the most serious statistical errors one can make.
Hi Philip,
Thank you for your comment.
In response to your point about the principle of random selection, we are in complete agreement that we should have addressed the principle of random selection. Furthermore, we do see that we made an implied reference to a self-selected sample with our example in this post.
And we’ve definitely taken heed of your point about the confidence interval. Going forward, we’ll make it a point to add a note about the statistical assumptions used in tables like these.
Finally, we appreciate your feedback and like hearing from our customers. Given your expertise, we always like knowing how we can better communicate these complex statistical topics in our blog posts.
Best,
Kalpana
Dear Kalpana and Phillip,
Thanks so much to Phillip for his comments! I was also disturbed by this and I hope that readers see Phillip’s response first.
But… I still have the same question that brought me to this blog post. If I have a population of 2,500, and I send out a SurveyMonkey questionnaire to all of them, some number of them will self-select and respond to the survey. They were NOT randomly selected.
So… there is *no* way to talk about the responses in terms of generalizability at all. Right? Or is there any way to generalize with, say, a much lower confidence level… or a much larger sample size… or using the same numbers but saying, “if these were generalizable, and we don’t know that they are, so bear in mind that this is exploratory information and not conclusive data…”?
I fear the answer is a resounding “no,” but I have to ask because I’m hoping for a way out! And Kalpana, is there any possibility that you all will rewrite this post to at least insert [in brackets] that it assumes the responses are random and not selfselected, or pull it down, because it really is deeply flawed.
Thank you,
Ruth
The table is not dependent upon how many answers there are to a question(s) nor upon the population size, but upon how much error one will accept and how sure or confident one desires to be regarding the result.
For example, our result could state that we are 95% confident the percentage is between 55% and 65%; this statement uses a 5% error. Or we could say we are 90% confident the percentage is between 55% and 65%, still using a 5% error, but not as confident. Maybe our result is that we are 95% confident the percentage is between 57% and 63%, which uses a 3% error. (The error is half of the difference between percentages stated.) The confidence we desire and the error we will accept in each of these is what drives the sample size.
The table is derived by using a formula taught in statistics for determining sample size, if one wishes to research further.
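For readers who want to experiment with the relationship Kathy describes, the textbook large-population approximation is e = z * sqrt(p(1 - p) / n), where z is about 1.96 for 95% confidence and p = 0.5 is the worst case. A sketch under those assumptions, with the function name ours:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a large population,
    at 95% confidence (z = 1.96) and worst-case variability (p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# 385 respondents lands very close to the familiar 5% error:
print(round(margin_of_error(385) * 100, 1))  # 5.0
```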
Hi Kathy,
Thanks for your comment and for contributing your thoughts! We like your point about the formula!
For those of you who would like this formula, here it is: n = N / (1 + N·e²), where n = number of respondents, N = population size, and e = margin of error.
Note that this formula and the table above are based on a few assumptions. We’ve used a confidence level of 95%. This essentially means that you can be 95% confident that your data is accurate. For the example above, if you accept a 10% margin of error, you can be 95% confident that 60% to 80% of your population is in favor of an extended school day.
We’ve also assumed maximum response variability in the population (a response distribution of 50%), which assumes that people in your population are as divided in their opinions as possible.
We’ve also assumed that the samples are being drawn using a probability sampling strategy, where every person has a known chance of being selected from the population.
Best,
Kalpana
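The formula in the reply above is easy to sanity-check against the numbers in the post. In this sketch we round up to a whole number of respondents, which is our choice rather than anything stated in the table:

```python
import math

def sample_size(population, error):
    """n = N / (1 + N * e^2), rounded up to whole respondents."""
    return math.ceil(population / (1 + population * error ** 2))

print(sample_size(10_000, 0.05))  # 385  (matches the example in the post)
print(sample_size(10_000, 0.03))  # 1000
```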
What are the underlying formulae for your table data? (Excel file please!)
This article fails to mention that the margin of error calculations are based on a 95% probability that the error will be within a certain percentage. There is still a 5% chance that it could be greater. The margin of error is typically calculated at a 95% confidence level, but it can be figured for any level.
Note that when margin of error is applied to a question with more than two answers the error percentage decreases.
whinge whinge whinge – so many haters!
Thanks for the snapshot, SurveyMonkey. As a non-statistician, these are just the snapshots I need for my projects.
Thanks, M
Thanks, Marilyn. We are keenly aware that there are lots of moving parts still to be explored. More to come, and glad this was helpful!
You only need to survey 100 people, with a +/- 10% error rate, for anywhere between 3,000 and 10,000,000 population?? The other error rate columns show similar but less dramatic patterns too.
Hi Darren,
Great observations! For greater margins of error, there is more room for your estimate to be off, so, moving from left to right on the table, the greater the margin of error for a given population, the fewer respondents you require. As for your point on populations: eventually, each incremental person in a population (moving from top to bottom on the table) changes the overall picture by less and less, resulting in a steady number of respondents needed at a given margin of error (100 respondents at a 10% margin of error for populations ranging from 3,000 to 10,000,000).
Sundog – I believe these were probably based on the workings of AD Little.
The practical illustrations are highly rewarding.
A related area that intrigues me is deciding what the margin of error should be! E.g., here in Ireland we are in the midst of a presidential election campaign. Various opinion polls are being published, all of which are touting a margin of error of +/- 3%. That’s the conventional margin of error in Ireland for political opinion polls.
Hi Brendan,
We agree – margin of error is an interesting topic! Thanks for sharing information on the Irish Presidential Polls with us!
I would like to know how to determine what my audience should be. I will be focusing on Supply Chain, Service, and Production environments to determine knowledge on the topic of Total Cost of Ownership. I would like to send out my survey to different sector groups and different companies for feedback. The idea is to get assistance from the people I have targeted to also send the document out to peers in their organizations. Where do I draw the line in terms of the sample that I need to address? The population is therefore not bound to a specific area; it is rather a specific environment within an organization, and this will be a global survey.
Please assist
Thanks
Awie
Hi Awie,
For your survey, I’d think about your population as the total number of people who would be eligible to take your survey. From what you’ve described, your population might be the number of people who work in Supply Chain, Service, and/or Production environments.
I need tips on how to analyse open-ended questions or qualitative data.
Hi Tsimanyanye,
Check out this blog post to learn more about our new Text Analysis feature for open-ended responses. Also, here’s another blog post that discusses how to get a quick summary of your open-ended text responses.
Sample size also depends on the % with a particular attribute or response. If 95% of the population like cake, you need a smaller sample to have a good chance of your survey reflecting that %. If only 50% like cake (OK, unlikely but possible) you need more.
Hi Steve,
Good point. For this table, we think of population as the number of people who fit your criteria. So, the number of people who like cake would be your population.
While others fall all over themselves to feed their egos by suggesting that they know more about statistics than you do… let me take a moment as a non-statistician SurveyMonkey user to THANK YOU for this post!
As a non-statistician, I really appreciate the jargon-free, simple explanation that I can actually understand… and USE!
Please keep up these types of posts…and feel free to ignore the insecure “statisticians” challenging your expertise.
Thanks!
Hi Ted,
Thanks for the positive feedback :)
How would I describe the confidence of my results if I do not randomly select a sample from a population, but survey the entire population? If I were to survey the entire membership of an association (4000 members), and received 1200 completed surveys, what could I say about the representativeness of the 1200 responses?
Hi Darlene,
Great question! If you survey the entire population, there is no “estimation,” and you have no margin of error. We assume that non-response is randomly distributed if there aren’t any systematic invitation differences. So, in your example, systematic non-response would occur if the 1,200 respondents answered the survey because they were the only ones in the association who had access to the internet to take your internet-based survey.
So, are my 1200 respondents not representative of the larger population (4000 members)?
Hi Darlene – Any time you’re taking a sample of a larger population, you’re going to have some margin of error. A typical margin of error is 5% (so if the data tells you that 90% of the population agrees with you, for example, in reality the number could be anywhere from 85% to 95%), but you may want it to be anywhere from 1% to 10%. A margin of error greater than 10% is not recommended. You can view a chart to help you calculate how many people you need to get the margin of error you’re looking for, for your population size, at: http://www.researchadvisors.com/tools/SampleSize.htm
We think, though, that 1,200 should be more than enough people to give you a reasonably accurate sample for a population of 4,000.
Rather than a chart, I would like to see a sample size selector with the supporting mathematical formulas. I would like to see an explanation of sample size and confidence interval. If we submit our population, it would be grand if SurveyMonkey could give us this information in the Analyze Results section.
I don’t consider the comments as whining from haters. I think the observations are helpful and remind one that the statistics arena involves a good deal of gray and less black and white.
Most opinions can be worthy of consideration and helpful when put into the context of a total picture. It’s always good, however, to sort out the whining and finger-pointing that creates polarization.
Thank you for the information; it will help a lot with the implementation. Thought provoking!
Walter.
Hey! Morning! Have a nice day! I feel very well! Sunshine!
I have read it all. Thanks! Perfect!
Yours,
Shasir
Hi Derik,
Here is the sample size calculator that I use without fail. It’s my first point of reference when I am designing a survey methodology. It explains much of what has been discussed above.
http://www.surveysystem.com/sscalc.htm
Can you please give me a reference for the “Respondents Needed at Error of…” table you presented that gives sample size and random error? I would like to reference this in a paper I am writing on this topic. Thanks.
This was so helpful. We are utilizing surveys all the time, and having this information was valuable; we have saved it for future reference.
Patricia – Really glad you found the post helpful. Thank you for your feedback.
Hi all, could you please help?
I’m conducting a survey which I sent out to the entire population (all students at my university).
Does anyone know what type of sampling technique this is?
Hi Kaj –
You actually used no sampling technique since you sent out the survey to the entire population of your university.
In sampling techniques, generally you take a subset of the population and survey them. For example, let’s say you’re interested in studying attitudes towards animals. You have a hypothesis that females like dogs more than cats. To get a complete sense of females’ attitudes, you’d need to survey every single female. That is often impossible, which is where sampling techniques come in. When you use a sampling technique, you send a survey out to some females, usually selected randomly, and that subset “represents” the whole population of females. That way you can generalize the results from that subset to the whole population of females: if the females in your sample think dogs have more fur than cats, you can assume that females in general think dogs have more fur than cats.
In your case, however, the survey was sent out to the entire population, so this is not a sampling technique, as there is no subset or sample taken out from the entire population.
These kinds of tips are good for lay practice; however, I think the assumptions behind these formulas should be stated in the main text.
People might actually spend too much money on surveys like this; perhaps one could run a pilot first to test whether p is far from 0.5, and save a lot of money.
Different selection methods should also be mentioned: random selection (usually not feasible, but theoretically beautiful) or quota sampling.
PhD Student Mathematical statistics
Tobias –
You’re right. There are many things to keep in mind when thinking of survey design. This chart was made to simply help guide customers through just one step of the process. There’s much more to write about than one blog post will allow. Stay tuned for more.
Dear all,
I hope you can help me with my questions. Our organisation organises several events a month, and every participant of an event is sent a feedback questionnaire. To get a representative answer, how many completed questionnaires should we aim for: 50%, less, or more? Can we assume that the completed questionnaires are representative of the population (= event participants)? What margin of error and confidence level?
Thank you very much
Hi Astrid – What size sample you need depends on what population you’re trying to estimate, how accurate you need the estimate to be, and how many variables you’re estimating. For example, for our population of roughly 200,000 pro subscribers we would need at least 750 responses in our sample to estimate ONE question.
For a basic estimator tool to figure out your own sample needs, check out this tool: http://www.researchadvisors.com/tools/SampleSize.htm
Astrid – Your question is really quite complex. The first issue (as has been mentioned in previous posts here) is that the sample must be random – that means that the likelihood of being a respondent is the same for everyone in the population. Often true randomness is hard to achieve, and the lack of randomness can have different effects on the results depending on how the sample is biased. Here is an example based on your situation: you hold an event, and some participants are very satisfied, some are very unhappy, and some are neutral. Now, if the sample response is biased towards those with strong opinions, you can end up with quite an incorrect view of overall satisfaction regardless of the sample size. Let’s say the event drew 1,000 people (the population size), and that 100 people were unhappy, 300 people were very happy, and 600 people were neutral. If we scored this on a scale of 1 to 3 for satisfaction, then overall in the population the mean satisfaction level is 2.2 out of 3. But if you gave everyone a survey form and only those with strong opinions returned them, the mean satisfaction would appear to be 2.5 out of 3. This bias would show up in the results regardless of the sample size, and in fact a larger sample size would only give you the false impression that the results were more accurate.
A major source of non-randomness is self-selection of respondents – people who for a variety of reasons (strong opinions, time on their hands, ulterior motives) are more likely to respond to a survey request. In the world of statisticians there has been a great deal of discussion over the last decade about how to deal with the inherent non-randomness of web-based surveys rooted in the self-selection of respondents. Regardless of how narrowly targeted the survey invitations are, if the responses are not random then, properly speaking, you can throw all statistical analysis of the results out the window. The degree to which this problem affects the results of any specific survey is very difficult (if not impossible) to determine, but it should always be taken into account as best as possible when interpreting the responses.
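The self-selection effect described in these comments can be reproduced numerically. This sketch uses the event example (1,000 attendees on a 1-to-3 satisfaction scale, with only the strongly opinionated returning the form); the helper name is ours:

```python
def mean_satisfaction(counts):
    """Weighted mean on a 1-3 satisfaction scale.
    `counts` maps score -> number of people with that score."""
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

population = {1: 100, 2: 600, 3: 300}   # unhappy, neutral, very happy
respondents = {1: 100, 3: 300}          # only strong opinions reply

print(mean_satisfaction(population))    # 2.2
print(mean_satisfaction(respondents))   # 2.5
```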
Wow Raymon. What a great answer! Thanks for helping a surveymaker out. Have a great day!
This is excellent. Thank you very much for the details.
Many thanks.
Why have the respondents to my survey stopped increasing? I published the survey on Facebook, and some respondents said they did take the survey, but the number is still the same. Is there some sort of respondent limitation???
My survey is for my research, and it’s a small-scale research project.
Hi Inass, it depends what plan you are on! If you’re on the Basic (free) plan, you have access to 100 responses per survey. To analyze responses beyond 100, you’ll need to upgrade to a paid plan. Here’s some more info on our pricing structure: https://www.surveymonkey.com/pricing/upgrade/details/?
Hope that helps!
Thought provoking article and some comments actually provide some insight.
Hi Paul, thank you for the support! Let us know if you ever need any help with any of your survey projects, and thanks for reading. :)
New Question (or perhaps not): I invite 300 people to participate in an employee survey. The unit population therefore has in common that they all work for the same company. I invite everyone to participate, and therefore each person has a fully equal chance of participating.
If only 120 people respond (a 40% response rate and a statistically viable sample overall for a small organisation) but are all from the same department, what then can I infer about my results?
I have a real dilemma here, as non-response is an indicator of trust in the organisation.
How do I explain and justify what has occurred? Are my results valid/interesting internally, but invalid for external use?
Do I interpret this differently for a small organisation than I would for a large one?
Thanks for any help and insights here.
Hi Celica! Apologies for the delay in response, this is a great question. One of our Audience Specialists may be able to help you out. You can send them a note directly to audience@surveymonkey.com
Just want to say your article is astounding. The clarity in your post is simply great, and I can assume you’re knowledgeable on this subject. With your permission, allow me to grab your RSS feed to stay updated with upcoming posts. Thank you a million and please carry on the gratifying work.
Thank you for the support!
Hey there! Do you know if they make any plugins to assist with SEO? I’m trying to get my blog to rank for some targeted keywords but I’m not seeing very good results. If you know of any, please share. Thank you!
Hello there, it depends on what blog platform you’re using. If you use WordPress, you might want to check out their resource library and see what other bloggers recommend for SEO plugins. Thanks for reading and good luck!
You mentioned the error rate in your example (5%), but not the confidence level associated with the numbers in this table. I assume it was 95%, so for the example… if you sampled the entire population, you could be 95% sure that the true response would have been between 65 and 75 percent.
help me guys: https://www.surveymonkey.com/s/XQQ57FV
Hi, thanks for the information.
But my survey is to gauge awareness of a product in India, so the population for my survey is the population of India, that is, 1.2 billion people.
So how many people should I survey?
And it is not concentrated on a specific group, so how can I choose the people?
Could you help me with this?
It depends on what you’re willing to accept in terms of a margin of error and how many people out of the entire population of India that you’re expecting will respond.
Between this post and the literature I found on Google, there is much information on how to acquire the response rate one seeks, but almost nothing on what percentage of respondents constitutes a good response rate, an average response rate, an adequate response rate, or an acceptable response rate. Understandably, an acceptable response rate for a survey of the general public will differ from that of an organizational or institutional survey. What are the standards that determine a low versus high cooperation rate? I’ve seen response rates of anywhere from 10% to 75%, depending on topic and purpose, but I am not certain how to interpret these percentages without some idea as to what’s accepted, using various methods, for various purposes. A table or chart of numbers indicating acceptable percentages, using the various contact and collection methods, would be helpful.
Hi, great question. We don’t have a set chart other than the example in the post right now, but here’s a more detailed explanation from our methodology team that hopefully breaks it down even further. In terms of what could be considered a good versus bad response rate, that is much harder to assess, especially since it really differs by project. In general, we think it is more important that you have the right “sample” of people taking the survey than a high response rate.
A survey that has a low response rate, let’s say 10 percent, with a diverse mix of people may yield better data than a survey that has a response rate of 50 percent with a more homogenous group of respondents because the lower response rate survey includes the views of all different kinds of people.
The importance of response rates can also differ by sample size too: it’s more important for surveys with smaller sample sizes to have a high response rate so you don’t come to the wrong conclusion based on too little data. With larger sample sizes, you have a little bit more wiggle room.
Hope this helps!
If I have 22 experts who meet the same criteria, what sample size is acceptable?
Using SurveyMonkey as a tool, my wife’s employer takes surveys of the efficacy of teachers and the classes they present. Her chief nursing officer throws away the non-respondents in the class and takes as useful data only those clients who respond to the questionnaire. As a chemist, I have been submitting that without consideration of non-respondents in the population, the CNO’s method over-inflates both negative and positive data and may well skew the results. Furthermore, if employees’ jobs are on the line, it is illogical to consider the magnitude of negative responses when the population of non-respondents is not computed in the results. Since the population of non-respondents is not presented in the results as a whole, I question the validity of the surveys and the evaluation of the data…