Learn how SurveyMonkey can help you analyze your survey data effectively, as well as create better surveys with ease.

The results are back from your online surveys. Now it's time to tap the power of survey data analysis to make sense of the results and present them in ways that are easy to understand and act on.

After you've collected statistical survey results and have a data analysis plan, it's time to begin calculating the survey results you got back. Here's how our survey research scientists make sense of quantitative data (versus qualitative data): they structure their reporting around survey responses that will answer research questions. Even for the experts, it can be hard to parse the insights in raw data.

In order to reach your survey goals, you’ll want to start with relying on the survey methodology suggested by our experts. Then once you have results, you can effectively analyze them using all the data analysis tools available to you including statistical analysis, data analytics, and charts and graphs that capture your survey metrics.

Sound survey data analysis is key to getting the information and insights you need to make better business decisions. Yet it’s important to be aware of potential challenges that can make analysis more difficult or even skew results. 

Asking too many open-ended questions can add time and complexity to your analysis because it produces qualitative results that aren't numerically based. Meanwhile, closed-ended questions generate results that are easier to analyze. Analysis can also be hampered by asking leading or biased questions, or by posing questions that are confusing or too complex. Being equipped with the right tools and know-how helps ensure that survey analysis is both easy and effective.

With its many data analysis techniques, SurveyMonkey makes it easy for you to turn your raw data into actionable insights presented in easy-to-grasp formats. Features such as automatic charts and graphs and word clouds help bring data to life. For instance, Sentiment Analysis allows you to get an instant summary of how people feel from thousands or even millions of open text responses. You can review positive, neutral, and negative sentiments at a glance, or filter by sentiment to identify areas that need attention. For even deeper insights, you can filter a question by sentiment. Imagine being able to turn all those text responses into a quantitative data set.

Word clouds let you quickly interpret open-ended responses through a visual display of the most frequently used words. You can customize the look of your word clouds in a range of ways from selecting colors or fonts for specific words to easily hiding non-relevant words.

Our wide range of features and tools can help you address analysis challenges, and quickly generate graphics and robust reports. Check out how a last-minute report request can be met in a snap through SurveyMonkey.

Ready to get started?

  1. Take a look at your top survey questions
  2. Determine sample size
  3. Use cross tabulation to filter your results
  4. Benchmark, trend, and compare your data
  5. Crunch the numbers
  6. Draw conclusions

First, let’s talk about how you’d go about calculating survey results from your top research questions. Did you feature empirical research questions? Did you consider probability sampling? Remember that you should have outlined your top research questions when you set a goal for your survey.

For example, if you held an education conference and gave attendees a post-event feedback survey, one of your top research questions may look like this: How did the attendees rate the conference overall? Now take a look at the answers you collected for a specific survey question that speaks to that top research question:

Do you plan to attend this conference next year?

Answer choices    %      Responses
Yes               71%    852
No                18%    216
Not sure          11%    132
Total                    1,200

Notice that in the responses, you’ve got some percentages (71%, 18%) and some raw numbers (852, 216). The percentages are just that—the percent of people who gave a particular answer. Put another way, the percentages represent the number of people who gave each answer as a proportion of the number of people who answered the question. So, 71% of your survey respondents (852 of the 1,200 surveyed) plan on coming back next year.

This table also shows you that 18% say they are not planning to return, and 11% say they are not sure.
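As a rough sketch, the percentages in the table above are just each answer's count divided by the total number of respondents (the counts here are the ones from the example table):

```python
# Compute response percentages from raw answer counts.
counts = {"Yes": 852, "No": 216, "Not sure": 132}
total = sum(counts.values())  # 1,200 respondents

percentages = {answer: round(100 * n / total) for answer, n in counts.items()}
print(percentages)  # {'Yes': 71, 'No': 18, 'Not sure': 11}
```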

Having a good understanding of sample size is also key to accurately and effectively analyzing your survey results. Sample size is the number of completed responses your survey needs to be statistically viable. Even if you're a statistician, determining survey sample size can be a challenge. But SurveyMonkey takes the guesswork and complexity out of the process with its easy-to-use margin of error calculator, which helps you determine how many people you need to survey to keep your margin of error within acceptable limits.
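A sample-size calculation like the one such a calculator performs can be sketched with the standard formula. This is a simplified version that assumes a large population, a 95% confidence level, and worst-case variability (p = 0.5):

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Respondents needed for a given margin of error at 95% confidence.

    Simplifying assumptions: large population, worst-case p = 0.5.
    """
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # 385 respondents for a +/-5% margin of error
print(sample_size(0.03))  # 1068 respondents for a +/-3% margin of error
```

Tightening the margin of error gets expensive quickly: halving it roughly quadruples the required sample.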

Recall that when you set a goal for your survey and developed your analysis plan, you thought about what subgroups you were going to analyze and compare. Now is when that planning pays off. For example, say you wanted to see how teachers, students, and administrators compared to one another in their responses about attending next year's conference. To figure this out, you want to dive into the responses by means of cross tabulation, or cross tab reports, where you show the results of the conference question by subgroup:

                   Yes          No           Not sure     Total
Teacher            80% (320)    7% (28)      13% (52)     400
Administrator      46% (184)    40% (160)    14% (56)     400
Student            86% (344)    8% (32)      6% (24)      400
Total respondents  852          216          132          1,200

From this table you see that a large majority of the students (86%) and teachers (80%) plan to come back next year. However, the administrators who attended your conference look different, with under half (46%) of them intending to come back! Hopefully, some of our other questions will help you figure out why this is the case and what you can do to improve the conference for administrators so more of them will return year after year.
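A crosstab like this can be sketched in plain Python, computing each subgroup's percentages from its own row total (the counts are the hypothetical ones from the table above):

```python
# Cross tabulation: answer counts grouped by attendee type.
crosstab = {
    "Teacher":       {"Yes": 320, "No": 28,  "Not sure": 52},
    "Administrator": {"Yes": 184, "No": 160, "Not sure": 56},
    "Student":       {"Yes": 344, "No": 32,  "Not sure": 24},
}

pct_rows = {}
for group, answers in crosstab.items():
    row_total = sum(answers.values())  # each subgroup has its own base
    pct_rows[group] = {a: round(100 * n / row_total) for a, n in answers.items()}
    print(group, pct_rows[group])
```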

A filter is another method of data analysis when you’re modeling data. Filtering means narrowing your focus to one particular subgroup, and filtering out the others. So, instead of comparing subgroups to one another, here we’re just looking at how one subgroup answered the question. Combining filters can give you pinpoint accuracy in your data.

For instance, you could limit your focus to just women, or just men, then re-run the crosstab by type of attendee to compare female administrators, female teachers, and female students. One thing to be wary of as you slice and dice your results: Every time you apply a filter or cross tab, your sample size decreases. To make sure your results are statistically significant, it may be helpful to use a sample size calculator.
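Filtering can be sketched as keeping only the records that match a condition and then re-running the analysis on what remains. The respondent records below are made up for illustration:

```python
# Hypothetical per-respondent records.
respondents = [
    {"role": "Teacher", "gender": "Female", "returning": "Yes"},
    {"role": "Teacher", "gender": "Male",   "returning": "No"},
    {"role": "Student", "gender": "Female", "returning": "Yes"},
    {"role": "Admin",   "gender": "Female", "returning": "Not sure"},
]

# Apply a filter: keep only female respondents.
women = [r for r in respondents if r["gender"] == "Female"]

# Note the shrinking sample size: each filter cuts it down further.
print(len(respondents), "->", len(women))  # 4 -> 3
```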

Graphs can be a regular go-to tool when you aim to quickly demonstrate the results of your data analysis in a way that is easy for anyone to understand. It's easy to create graphs with SurveyMonkey that provide clarity and context to your analysis, which, in turn, lets you use the data in more targeted and actionable ways.

Cross tabulations, otherwise known as crosstab reports, are useful tools for taking a deeper dive into your data. Crosstabs structure your data into a table that groups respondents based on shared background information or survey responses, allowing you to compare each group’s answers to one another. This helps you better understand each group of respondents and uncover how they differ from each other.

Let’s say on your conference feedback survey, one key question is, “Overall how satisfied were you with the conference?” 

Your results show that 75% of the attendees were satisfied with the conference. That sounds pretty good. But wouldn’t you like to have some context? Something to compare it against? Is that better or worse than last year? How does it compare to other conferences?

Benchmarking can provide answers to these questions and more by readily allowing you to make comparisons to past and current data to identify trends in your industry and marketplace, and see how you stack up against them.

Well, say you did ask this question in your conference feedback survey after last year’s conference. You’d be able to make a trend comparison. Professional pollsters make poor comedians, but one favorite line is “trend is your friend.” If last year’s satisfaction rate was 60%, you increased satisfaction by 15 percentage points! What caused this increase in satisfaction? Hopefully the responses to other questions in your survey will provide some answers.

If you don’t have data from prior years’ conferences, make this the year you start collecting feedback after every conference. This is called benchmarking. You establish a benchmark or baseline number and, moving forward, you can see whether and how this has changed. You can benchmark not just attendees’ satisfaction, but other questions as well. You’ll be able to track, year after year, what attendees think of the conference. This is called longitudinal data analysis.

You can even track data for different subgroups. Say for example that satisfaction rates are increasing year over year for students and teachers, but not for administrators. You might want to look at administrators’ responses to various questions to see if you can gain insight into why they are less satisfied than other attendees.

You know how many people said they were coming back, but how do you know if your survey has yielded answers that you can trust and answers that you can use with confidence to inform future decisions? It’s important to pay attention to the quality of your data and to understand the components of statistical significance.

In everyday conversation, the word "significant" means important or meaningful. In survey analysis and statistics, "significant" refers to an assessment of accuracy. This is where the inevitable "plus or minus" comes into survey work. In particular, it means that survey results are accurate within a certain confidence level and not due to random chance. Drawing an inference based on results that are inaccurate (i.e., not statistically significant) is risky. The first factor to consider in any assessment of statistical significance is the representativeness of your sample: that is, to what extent the group of people who were included in your survey "look like" the total population of people about whom you want to draw conclusions.

You have a problem if 90% of conference attendees who completed the survey were men, but only 15% of all your conference attendees were male. The more you know about the population you are interested in studying, the more confident you can be when your survey lines up with those numbers. At least when it comes to gender, you’re feeling pretty good if men make up 15% of survey respondents in this example.

If your survey sample is a random selection from a known population, statistical significance can be calculated in a straightforward manner. A primary factor here is sample size. Suppose 50 of the 1,000 people who attended your conference replied to the survey. Fifty (50) is a small sample size and results in a broad margin of error. In short, your results won’t carry much weight.
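The margin of error for a simple random sample can be sketched with the standard formula. This simplified version assumes a 95% confidence level, worst-case variability, and no finite-population correction:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case 95% margin of error for n responses.

    Simplifying assumptions: random sample, p = 0.5,
    no finite-population correction.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(50), 1))    # 13.9 points for 50 responses
print(round(100 * margin_of_error(1000), 1))  # 3.1 points for 1,000 responses
```

At 50 responses the margin of error is nearly plus or minus 14 points, which is why such results won't carry much weight.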

Say you asked your survey respondents how many of the 10 available sessions they attended over the course of the conference. And your results look like this:

# sessions attended    %      Responses
1                      10%    100
2                      0%     0
3                      0%     0
4                      5%     50
5                      10%    100
6                      26%    260
7                      24%    240
8                      19%    190
9                      5%     50
10                     1%     10
Total                         1,000
Average rating: 6.1

You might want to analyze the average. As you may recall, there are three different kinds of averages: mean, median and mode.

In the table above, the average number of sessions attended is 6.1. The average reported here is the mean, the kind of average that's probably most familiar to you. To determine the mean, you multiply each answer by the number of people who gave it, add up those products, and divide by the total number of respondents. In this example, you have 100 people saying they attended one session, 50 people for four sessions, 100 people for five sessions, and so on. Multiplying each pair, summing the products, and dividing by 1,000 respondents yields the mean of 6.1.

The median is another kind of average. The median is the middle value, the 50% mark. In the table above, we would locate the number of sessions where 500 people were to the left of the number and 500 to the right. The median is, in this case, six sessions. This can help you eliminate the influence of outliers, which may adversely affect your data.

The last kind of average is mode. The mode is the most frequent response. In this case the answer is six. 260 survey participants attended six sessions, more than attended any other number of sessions.
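All three averages can be sketched by expanding the frequency table above into one value per respondent and using Python's statistics module:

```python
import statistics

# Frequency table from above: sessions attended -> number of respondents.
freq = {1: 100, 4: 50, 5: 100, 6: 260, 7: 240, 8: 190, 9: 50, 10: 10}
values = [sessions for sessions, count in freq.items() for _ in range(count)]

print(round(statistics.mean(values), 2))  # 6.11, reported as 6.1
print(statistics.median(values))          # 6.0, the middle value
print(statistics.mode(values))            # 6, the most frequent answer
```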

Means and other types of averages can also be used if your results were based on Likert scales.

When it comes to reporting on survey results, think about the story the data tells.

Say your conference overall got mediocre ratings. You dig deeper to find out what’s going on. The data show that attendees gave very high ratings to almost all the aspects of your conference — the sessions and classes, the social events, and the hotel—but they really disliked the city chosen for the conference. (Maybe the conference was held in Chicago in January and it was too cold for anyone to go outside!) 

That is part of the story right there—great conference overall, lousy choice of locations. Miami or San Diego might be a better choice for a winter conference.

One aspect of data analysis and reporting you have to consider is causation vs. correlation.

People digest and understand information in a range of different ways. Fortunately, SurveyMonkey offers a ton of different ways for you to analyze survey data so you can assess and present the information in ways that will be most useful to meet your goals and create graphs, charts, and reports that make your results easy to understand.

Here are some of the common questions that we can help you navigate as you build up your survey analysis chops:

Longitudinal data analysis (often called “trend analysis”) is basically tracking how findings for specific questions change over time. Once a benchmark is established, you can determine whether and how numbers shift. Suppose the satisfaction rate for your conference was 50% three years ago, 55% two years ago, 65% last year, and 75% this year. Congratulations are in order! Your longitudinal data analysis shows a solid, upward trend in satisfaction.
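That trend can be sketched as year-over-year changes in percentage points, using the satisfaction rates quoted above:

```python
# Satisfaction rates: three years ago through this year.
satisfaction = [50, 55, 65, 75]

# Change in percentage points from each year to the next.
changes = [b - a for a, b in zip(satisfaction, satisfaction[1:])]
print(changes)  # [5, 10, 10] -> a steady upward trend
```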

Causation is when one factor causes another, while correlation is when two variables move together, but one does not influence or cause the other. For example, drinking hot chocolate and wearing mittens are two variables that are correlated — they tend to go up and down together. However, one does not cause the other. In fact, they are both caused by a third factor, cold weather. 

Cold weather influences both hot chocolate consumption and the likelihood of wearing mittens. Cold weather is the independent variable, and hot chocolate consumption and the likelihood of wearing mittens are the dependent variables. In the case of our conference feedback survey, cold weather likely influenced attendees' dissatisfaction with the conference city and the conference overall.

Finally, to further examine the relationship between variables in your survey you might need to perform a regression analysis.