
How reliable are SurveyMonkey panelists around the world?

Chicago is my kind of town…for this week anyway, as I’m here attending a conference on cross-cultural survey research. Why? To make sure SurveyMonkey is up to date and in the know on all things international research.

This is especially important as more and more SurveyMonkey customers are creating, sending, and taking surveys from outside of the US.

I presented research that my fellow survey scientist, Mingnan, and I did last winter.

Part of our job on the research team at SurveyMonkey is to make sure the responses you purchase through SurveyMonkey Audience are of high quality. Two of our priorities are reliability and validity.

To verify reliability and validity, from December 2015 through February 2016 we fielded the same survey for one week each month in Australia, Brazil, China, India, the UK, and the US. We received more than 1,000 responses in each country for each wave.

You can view a preview of the American version of the survey here. The questions are the same for the other countries but translated into the local languages.

Reliability

To check for reliability, we asked some questions that should not change much over time. For example, the question below asks respondents to indicate whether they have naturally red hair or were born in the month of February.


With a large enough sample size and a diverse and representative population, the aggregated responses to questions like the one above should not change from month to month even though the people who are responding change from one wave to the next.

Our findings indicate that was exactly the case. We used chi-square tests for each variable to check for the significance (at the p=.05 level) of the differences between results from one month to the next. We found that the majority of variables passed our test.
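The wave-over-wave check described above can be sketched with a small, self-contained example. The counts below are made up for illustration (the study's raw data is not public); the chi-square statistic is computed directly, and because a two-option question across three waves has 2 degrees of freedom, the p-value simplifies to exp(−χ²/2).

```python
import math

# Illustrative counts only, NOT the study's data.
# Rows: three monthly waves; columns: (yes, no) for one stable question.
counts = [
    [83, 917],   # wave 1
    [90, 910],   # wave 2
    [78, 922],   # wave 3
]

row_totals = [sum(r) for r in counts]
col_totals = [sum(c) for c in zip(*counts)]
grand = sum(row_totals)

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum(
    (obs - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i, row in enumerate(counts)
    for j, obs in enumerate(row)
)

# dof = (3 waves - 1) * (2 options - 1) = 2; for 2 dof the chi-square
# survival function is exactly exp(-x/2).
p = math.exp(-chi2 / 2)

# The variable "passes" reliability if waves do NOT differ significantly
# at the p = .05 level used in the post.
passes = p >= 0.05
print(f"chi2={chi2:.3f}, p={p:.3f}, passes={passes}")
```

With real data you would run this once per question per adjacent pair of waves (or across all waves, as here) and report the share of questions that pass.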

[Table: percent of variables passing the chi-square reliability test, by country]

The table above shows the percent of variables that passed our chi-square test in each country.

For example, responses in the US had 85% reliability from one wave to the next. In other words, 85% of the variables showed no significant variation across the three waves of US surveys. Weighting for demographics (gender, age, race/ethnicity, and education) in each country improved the reliability in most cases.
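Demographic weighting of the kind mentioned above can be sketched as simple cell weighting (post-stratification): each respondent gets a weight equal to their group's population share divided by its share of the sample. The shares below are invented for illustration, and a single demographic is shown, whereas the study weighted on gender, age, race/ethnicity, and education.

```python
# Illustrative benchmark and sample shares for one demographic (gender).
population_share = {"female": 0.51, "male": 0.49}  # e.g. a census benchmark
sample_share = {"female": 0.60, "male": 0.40}      # a skewed panel sample

# Weight = population share / sample share for each cell, so
# over-represented groups count less and under-represented groups more.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)
```

Weighted estimates are then computed by multiplying each respondent's answers by the weight for their cell before aggregating.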

Validity

To check the validity of our panel responses, we asked some questions that could be easily verified with other benchmarks.

Remember how we asked whether people were born in February? If we assume that birthdays are distributed evenly across the twelve months, then about 1 in 12 respondents (8.33%) should be born in February.

This was pretty much exactly what we got in all six countries.

[Table: unweighted percent of respondents born in February, by country]
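The birthday benchmark can be checked with a basic one-proportion z-test: compare the observed share of February birthdays against 1/12 using the standard error of a proportion. The count below is hypothetical; the study had 1,000+ responses per country per wave.

```python
import math

expected = 1 / 12        # ~8.33% if birthdays were uniform across months
n = 1000                 # hypothetical wave size
observed = 86 / n        # hypothetical count of February birthdays

# Standard error of a sample proportion under the benchmark value,
# and the z-score for the observed deviation from 1/12.
se = math.sqrt(expected * (1 - expected) / n)
z = (observed - expected) / se

# |z| < 1.96 means the sample is consistent with the benchmark at p = .05.
print(f"expected={expected:.4f}, observed={observed:.4f}, z={z:.2f}")
```

The same comparison works for any question with an external benchmark, such as the Pew and World Bank figures mentioned below.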

Some of the questions we included in the survey (like those on morality or credit card ownership) used the same wording as questions asked in nationally representative phone and in-person surveys conducted by the Pew Research Center and the World Bank. In general, with some variations by question or country, our results are very comparable to those benchmarks.

We’re happy with these results, and we’re glad to be able to demonstrate that SurveyMonkey’s Audience panel is both reliable and valid here at home and around the world!

Check back next week to hear findings and best practices from other survey research experts…stay tuned.

Inspired? Create your own survey.

