
If you work in online surveying, you have probably seen terms like response rate and completion rate used frequently in articles and research reports, and you know that a higher percentage is better.

But what do they really mean? How are they different? Let’s look at the true definition of both completion and response rates and how they relate to your online survey’s sample group and statistical accuracy.

In online surveying, a completion rate refers to the number of surveys filled out and submitted divided by the number of surveys started by respondents. In other words, only respondents who actually entered your survey are included in this statistic, and only those who completed the full survey increase your completion rate. Below is an example of a calculated completion rate.

I have a survey with the following stats:

  • Emails sent: 1000
  • Number of respondents who entered the survey: 250
  • Number of completed surveys: 200

Let’s calculate the completion rate.

Completion rate = Number of completed surveys / Number of respondents who entered the survey

Completion rate = 200 / 250 = 80%

You’ll notice that the completion rate does not rely on the number of people contacted and is strictly based on people’s interaction with your survey. Because of this, a completion rate can, and should, be measured on any survey, including email, intercept, pop-up, embedded, and hybrids.
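As a quick sketch of the calculation above (the function and variable names here are my own, not part of any SurveyMonkey API), the completion rate only needs two numbers, neither of which is the size of your contact list:

```python
def completion_rate(completed, entered):
    """Share of respondents who finished the survey,
    out of those who actually started it."""
    if entered == 0:
        return 0.0
    return completed / entered

# 200 completed surveys out of 250 respondents who entered
print(completion_rate(200, 250))  # 0.8
```

Note that the 1,000 emails sent never appear in the formula; that figure only matters for the response rate.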

A low completion rate causes two main problems:

  1. Incomplete data: A low completion rate means your respondents are not giving you all the information you need, so particular questions will have a lower level of reliability than others. If half your participants drop out of your survey before the last question, that question has a smaller sample size and is therefore more prone to inaccuracy.
  2. A frustrating survey experience: A low completion rate is a message from your respondents that they do not like the survey. You already have them in the survey attempting to fill out the questionnaire, but for some reason they are dropping out before giving you all the information you require. If you are in this situation, your survey may be too long, have poorly organized questions, ask personal or sensitive questions, or seem misleading to respondents.

Prevent respondents from having a bad experience with your next survey by downloading our guide, Writing Survey Questions Like a Pro.

Though deceptively similar in description to completion rates, response rates provide valuable insight into the accuracy of your collected data. Put simply, a response rate is the number of people who completed your survey divided by the number of people in the total sample group. Here’s an example of a calculated response rate.

I have a survey with the following stats:

  • Emails sent: 1000
  • Number of respondents who entered the survey: 250
  • Number of completed surveys: 200

Let’s calculate the response rate.

Response rate = Number of completed surveys / Number of emails sent

Response rate = 200 / 1000 = 20%

The important thing to remember is that a response rate can only be calculated with a defined sample group: you need a contact list or a record of the number of people approached to take the survey. Unfortunately, deployment methods like pop-ups and website embeds make it difficult to determine how many people were presented with the survey, which renders any response rate measurement unreliable. Response rates are usually used only when the sample group is controlled by a fixed list of email addresses, telephone numbers, or home addresses.
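The contrast with the completion rate comes down to the denominator. A minimal sketch (again, my own names, not an official API):

```python
def response_rate(completed, contacted):
    """Share of the full sample group (everyone contacted)
    who completed the survey."""
    if contacted == 0:
        return 0.0
    return completed / contacted

# Same survey as above: the denominator is now everyone emailed,
# not just the 250 people who opened the survey.
print(response_rate(200, 1000))  # 0.2
```

The same 200 completed surveys yield an 80% completion rate but only a 20% response rate.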

A low response rate causes three main problems:

  1. Higher level of error: The lower your response rate, the smaller your effective sample becomes, which can wreak havoc on your margin of error and the reliability of your results. Suppose we had a list of 278 potential respondents for a target population of 1,000. Our survey sample size calculator says that, with a 100% response rate, we’d have the industry-standard margin of error of 5%. Lower the response rate to a relatively high 32%, and we are now looking at a 10% margin of error, effectively cutting the accuracy of the survey findings in half.
  2. Uninterested sample group: Beyond statistical inaccuracy, low response rates indicate that your potential respondents are simply not interested in taking part in your survey, and you’ll need to find a way to entice more participants. First, make sure your email messages and subject lines don’t look like spam and are properly branded to instill a sense of credibility. After that, it may be worth adding an extra incentive to drum up interest from your sample group.
  3. Nonresponse bias: Sometimes a low response rate indicates nonresponse bias, which occurs when a certain demographic in your sample is not participating in the survey. Nonresponse errors or bias can appear for various reasons. Here is a short list of examples:
    • You sent the invite email during a religious or regional holiday
    • An email provider is labeling your email invites as spam
    • Your topic is a sensitive issue for certain contacts
    • Your topic is boring for certain contacts
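The margin-of-error numbers in the first point above can be reproduced with the standard sample-proportion formula plus a finite-population correction, assuming a 95% confidence level and the worst-case proportion p = 0.5 (this is a sketch of the conventional formula, not SurveyMonkey's calculator itself):

```python
import math

def margin_of_error(completes, population, z=1.96, p=0.5):
    """Margin of error for a sample proportion, with a
    finite-population correction for small populations."""
    fpc = math.sqrt((population - completes) / (population - 1))
    return z * math.sqrt(p * (1 - p) / completes) * fpc

# 278 completes for a population of 1,000 -> ~5% margin of error
print(round(margin_of_error(278, 1000), 2))  # 0.05

# A 32% response rate leaves ~89 completes -> ~10% margin of error
print(round(margin_of_error(89, 1000), 2))   # 0.1
```

Halving-ish the sample several times over (278 down to 89) roughly doubles the margin of error, which is why the article describes the accuracy as being cut in half.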

The trick to eliminating nonresponse bias is to pretest your survey first, and then to recognize and resolve any issues as they arise. Remember, nothing is more detrimental to a survey’s findings than undetected bias. Not only does it hurt the credibility of your data, it produces misleading results that lead you to draw incorrect conclusions.

So now that you know the difference between completion and response rates, you can view the results of surveys with a fresh perspective.
