How Question Bank Was Built

The goal of the SurveyMonkey Question Bank is to provide methodologically sound questions on topics that SurveyMonkey customers are interested in, and that goal drove each step of how we built the bank.

Where to Start?

SurveyMonkey customers create 3+ million surveys and ask almost 33 million questions each year. We wanted to start the Question Bank building process by reviewing these questions, but we also knew the world could end before we made it to the 33 millionth one. So we did what any smart monkey would do and chose a subset of surveys from the total pool of SurveyMonkey surveys to work with.

What Topics are You Interested in?

Choosing the subset of surveys was a two-stage process. First, we scanned the 3+ million surveys (while maintaining customer privacy and anonymity) to identify the topics SurveyMonkey customers survey most often. We learned that, currently, the most popular topics are customer feedback, human resources, public education, health care, telecommunications, retail, politics, non-profits, events, travel, and demographics. Then we moved on to identifying the questions within each topic.

What Questions are You Asking?

The questions themselves were chosen using a stratified random sampling procedure. We searched the 3+ million SurveyMonkey surveys looking for questions about one popular topic (or stratum) at a time. Then we randomly chose 10% of the surveys on that topic, ensuring that each and every SurveyMonkey customer's survey on that topic had an equal and independent chance of being selected.
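As an illustration, here's a minimal sketch of that procedure in Python. The data layout, helper name, and sample-size handling are made up for illustration; they're not SurveyMonkey's internal tooling:

    import random

    def stratified_sample(surveys, topics, rate=0.10, seed=42):
        """Draw a simple random sample of `rate` from each topic stratum.

        Every survey within a stratum gets an equal, independent chance
        of selection, mirroring the procedure described above.
        """
        rng = random.Random(seed)
        samples = {}
        for topic in topics:
            stratum = [s for s in surveys if s["topic"] == topic]
            if not stratum:
                continue
            size = max(1, round(len(stratum) * rate))
            samples[topic] = rng.sample(stratum, size)
        return samples

    # Hypothetical usage with the 11 most popular topics:
    topics = ["customer feedback", "human resources", "public education",
              "health care", "telecommunications", "retail", "politics",
              "non-profits", "events", "travel", "demographics"]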

For example, we scanned the entire survey pool just for those surveys that asked about customer feedback and then randomly chose a subset of surveys from all those customer feedback surveys. We repeated that process for each remaining category: human resources, public education, health care, and so on.

The end result? Eleven random samples that, when combined, included roughly 20,000 questions: questions we can confidently say are representative of (or look a lot like) the entire pool of SurveyMonkey customer questions on the same 11 topics.

We began reviewing these 20,000 questions and found that many of them were asking essentially the same thing. For example, the questions "Does your boss make good decisions?" and "Are most of your supervisor's decisions good or bad?" both ask about the quality of the decisions made by a supervisor. Thus, we were able to winnow the 20,000 questions down to about 1,500 questions commonly asked by SurveyMonkey customers.
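If you wanted to automate a first pass at that winnowing, you could flag questions with nearly identical wording, as in the rough Python sketch below. The threshold and helper name are made up for illustration, and string similarity alone would miss truly semantic matches like the boss/supervisor pair above, which still take a human reviewer:

    from difflib import SequenceMatcher

    def group_similar_questions(questions, threshold=0.75):
        """Greedily group questions whose wording overlaps heavily."""
        groups = []
        for question in questions:
            for group in groups:
                similarity = SequenceMatcher(None, question.lower(),
                                             group[0].lower()).ratio()
                if similarity >= threshold:
                    group.append(question)
                    break
            else:
                # No close match found, so the question starts a new group.
                groups.append([question])
        return groups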

What Other Questions Might You Want to Ask?

Most of Question Bank was inspired by questions SurveyMonkey customers created in the past. But we also wanted to include questions on the 11 most popular topics that you might like to ask in the future. The inspiration for these questions came from a number of sources, including professional publications, academic journals, and industry blogs.

For example, human resource professionals have learned that employees who report being satisfied with the recruitment process are more likely to be satisfied three months into the job than are employees who report being dissatisfied with the recruitment process. This connection is something HR professionals appear to be interested in, so we’ve included questions on the topic, such as “How clearly did your recruiter explain the details of the job to you?” and “Overall, were you satisfied with the recruiting process at our company, neither satisfied nor dissatisfied with it, or dissatisfied with it?”

How Can Questions Be Asked in a Way That Gets You the Answers You're Looking For, Without Error or Bias?

Next, we needed to see how we could help rewrite the questions created by SurveyMonkey customers so that they ask only about the information you’re interested in and nothing else. Every survey question potentially measures four items: your construct of interest (the good stuff), as well as other constructs, random error, and bias (the bad stuff). Our goal was to ask only about the good stuff.
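In textbook measurement terms (a schematic gloss, not a formal model we fit to the data), you can write each recorded answer as the sum of those four parts:

    X = T + O + E + B

Here X is the recorded answer, T is the construct of interest (the good stuff), O is contamination from other constructs, E is random error, and B is bias. The rest of this post is about pushing O, E, and B as close to zero as possible.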

The Good Stuff: Your Construct of Interest

Your construct of interest is simply what you're hoping to get information about. For example, you might be interested in how professional your customer service representatives are. In that case, "professionalism" is your construct of interest, and you want to ask a question that measures "professionalism" without measuring anything else that could distract you from that construct. To understand this idea better, let's take a look at an actual SurveyMonkey customer question, which ran along these lines:

"My supervisor provides feedback and judicious assessments of my work." (Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree)

What is the construct of interest here? Well, the surveyor is actually asking about two things with this question: whether the supervisor provides feedback and how judicious the supervisor's assessments are. Each survey question should ask about only one construct at a time, so this question needs some tweaking. We need to split it into two separate questions, along these lines:

"My supervisor provides feedback on my work." (Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree)

"My supervisor makes judicious assessments of my work." (Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree)

But what else is going on with these seemingly innocent questions? Two troublesome potential distractions, actually: random error and bias. Let’s tackle each distraction separately.

The Bad Stuff: Random Error

What is it and how do you avoid it? Random error is the term scientists use when they’re talking about chance fluctuations, those things that happen haphazardly, without any direction. Imagine you’re playing darts and a gnat is buzzing around your head. It’s coming at you from a different direction each time you throw a dart, making each dart land just slightly off from the target, sometimes to the left, sometimes to the right, sometimes above, and sometimes below. You may normally be an amazing dart player, hitting the target nearly every time, but you wouldn’t know it from this round. Your darts are all over the board. That’s what random error does to your survey responses. It’s the gnat that distracts your respondents, moving their answers all over the place, making it hard for you to hear what they’re really trying to tell you.

You can avoid random error by asking questions that are very easily understood by the widest possible audience. For example, the phrase "judicious assessment" uses sophisticated language that will likely be interpreted differently by different people. That is not a good thing. You want all your respondents to interpret each word in your question the same way. Simpler language can help. For example, you could replace "judicious assessments" with "reasonable decisions" and ask the following question:

"My supervisor makes reasonable decisions." (Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree)

Now that's a question that will be more easily understood by a wider range of people. But, as improved as the question is, it will still likely generate random error. Why? Well, look at the steps respondents have to take to answer it. First, they've got to decide if their supervisors make reasonable decisions. And then they've got to decide which of the five answer choices lines up with their own answer.

So let's say a respondent says, yeah, sure, my supervisor makes reasonable decisions. Now what? I suppose I either agree or strongly agree that my supervisor makes reasonable decisions. But what does it mean to strongly agree that someone makes reasonable decisions? Does it mean the decisions are especially reasonable? Or does it mean that I feel strongly that the decisions are reasonable? It's not really clear, is it? And therein lies the problem. The question asks for an answer that is not provided by the available answer options. Instead, respondents are forced to map their answers onto the less-than-optimal answer options they've been given.

Luckily, there's an easy fix to this problem: use construct-specific response scales. You're interested in reasonable decisions, so ask about how reasonable they are, like we've done with the following question:

"How reasonable are the decisions made by your supervisor?" (Extremely reasonable, Very reasonable, Moderately reasonable, Slightly reasonable, Not at all reasonable)

Now the question and the answer options match up. Respondents no longer need to search for the answer option that best matches their own. That step has been completely eliminated. The answers generated will be less muddied by random error, and you’ll be able to know what respondents really think.

More Bad Stuff: Bias

What is it and how do you avoid it? Bias is the term scientists use when they’re talking about fluctuations that are slanted in one particular direction. So, returning to our dart example, imagine that the floor you’re standing on while playing darts is a bit uneven, causing your darts to always land slightly to the right of the target. Again, you may normally be an amazing dart player, hitting the target nearly every time, but, according to this round, you’re a dart player who hits slightly to the right of the target nearly every time. Your performance is consistent, but it’s consistently off-target. Bias is the uneven floor that makes your respondents’ answers lean in one direction or another, making it hard for you to hear what they’re really saying.
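To see the difference in miniature, here's a small Python simulation of the two dart games. The numbers are made up for illustration: random error (the gnat) scatters throws around the target, while bias (the uneven floor) shifts them all to one side:

    import random
    import statistics

    def throw_darts(n, jitter=0.0, offset=0.0, seed=1):
        """Simulate n throws at a target sitting at position 0.

        `jitter` is the spread of zero-mean random error (the gnat);
        `offset` is a constant bias (the uneven floor).
        """
        rng = random.Random(seed)
        return [rng.gauss(0, jitter) + offset for _ in range(n)]

    noisy = throw_darts(1000, jitter=2.0)   # random error only
    biased = throw_darts(1000, offset=2.0)  # bias only

    # Random error: the average lands near 0, but the spread is wide.
    # Bias: the spread is zero, but every throw misses the target by 2.
    print(statistics.mean(noisy), statistics.stdev(noisy))
    print(statistics.mean(biased), statistics.stdev(biased))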

You can avoid bias by asking questions that are balanced and free from pressure to answer in any one direction. For example, when we changed the answer options from agree/disagree to construct-specific answer options in our "reasonable decisions" question, we eliminated acquiescence bias, which is simply the tendency of people to go along with a request or opinion in order to be agreeable, polite, or respectful. When presented with the statement, "My supervisor makes reasonable decisions," respondents are more likely to agree than disagree that their supervisor's decisions are reasonable. The answers generated by this question, then, will be biased in the direction that the supervisor makes reasonable decisions. Your survey data may be telling you this supervisor makes more reasonable decisions than he or she actually does.

Luckily, our last two changes to the question (making it a question rather than a statement and using construct-specific answer options) greatly reduced this bias. Now respondents are asked a bias-free question and provided a balanced set of answer options. No more uneven floor.

And That Is How Question Bank Was Built!

We started with the topics our customers were interested in and the questions that were already being asked. We found the construct of interest in a question, asked about it and no other construct, reworded the question to avoid error and bias, and, voila, a methodologically sound question was added to the SurveyMonkey Question Bank. Then we just repeated that process 1,499 times. And we’ll be repeating it many more times as you give us your feedback and we continue to add more topics and questions!

Let us know what you like and don’t like about Question Bank in the comments below. We’re eager to learn how we can make your surveying life a bit easier.

Lori Gauthier is our resident question-writing monkey, contributing to resources that help our customers quickly create methodologically-sound surveys. Lori holds an MBA from The University of Texas at Arlington and a PhD in Communication and a PhD Minor in Psychology from Stanford University, where she taught the Communication Research Methods course and conducted survey methodology-focused research.

  • Sara White

    I like the idea that you have these "pre-made" questions available, and looked over a few of them. However, I was uncomfortable with the answer choices on some of the demographic questions, such as "are you male or female" (which is not always a dichotomous, exclusive answer, for intersex, transgendered, and genderqueer individuals), and also the marital status question and its answer choices (what about residents of states with a legal civil union but no legal marriage for same-sex couples)?

    I realize that people who have difficulty answering these gender and sexual identity and marital status questions are in the minority in most surveyed populations. However, by giving these respondents no true and appropriate way to answer the question, you will either get inaccurate answers, or you will get people dropping out of the survey before completing it, and either way you are introducing a small but systematic bias to your results.

  • gakuya

    I would like to participate in a SurveyMonkey survey.

  • Hi Lori,
    Thanks for this clarification on randomness and bias.
    One suggestion I have: in your last example, respondents need to qualify HOW reasonable their supervisor's decisions are. Now that is difficult, for a number of reasons. Far more accurate, I would say, is to assess how often respondents understand the decisions of their supervisor, or find them reasonable. Then you can choose from answers like:
    – Always
    – Often
    – Seldom
    – Never

    And would it not be even more accurate to ask respondents about the behaviour of their supervisor, for instance: "My supervisor explains the reasons for his decisions clearly to us" (always, often, seldom, never)?

    • MychalH

      Thanks for the suggestions!

  • Alan

    That's great. Thanks!

  • Colleen Moretz

    How would I word the response scale to rate the proficiency of skill sets for an entry-level job (technical designer)?
    *Extremely important (or proficient)
    *Very important (or proficient)
    *Moderately important (or proficient)
    *Slightly important (or proficient)
    *Not important (NA)
    How would I word the response scale to rate the knowledge needed for an entry-level job (technical designer)?
    *Extremely important (or knowledgeable)
    *Very important (or knowledgeable)
    *Moderately important (or knowledgeable)
    *Slightly important (or knowledgeable)
    *Not important (NA)

  • I think I will become a great follower. I just want to say your article is striking. The clarity in your post is simply striking, and I can take for granted you are an expert on this subject.

    • Hanna J

      Thanks Ariane. We’ll continue to post about relevant survey topics, so check back in for the latest!

  • Anna Parker

    I do not think anyone is an expert on any one category for a very long time. The world changes, people change, the playing field changes, life is constantly changing! Maybe a point person within a category, but an expert with x amount of years? Is not LIFE always a learning experience until we expire? Patience is all-encompassing in any life/category, and without it we as humans set ourselves up to be demi-gods in a certain field because of limited knowledge and life experience. All cultures (sans U.S.) look to the person with fruitful knowledge not because they have letters beside their name, or commendations on the walls, but because those who command true authority in a field generated it for all generations, maybe centuries, because of the ripple effect handed down through the centuries. CONFUCIUS SAY "DON'T BE A FOLLOWER"

    • Hanna J

      Hi Anna – You are absolutely right. The great thing about research and knowledge is that it’s always evolving! We have quite a few Monkeys with PhDs working on building methodologically-sound survey templates and questions in Question Bank, but even the smartest Monkeys have more to learn! Thanks for your feedback. Have a great day.

  • Tom

    I find your insight into clear thinking and avoiding confusion very interesting, and I hope to heed the advice in my proposed survey document.

    Best and thanks, Tom

  • I've been a SurveyMonkey user & fan for years. Over the last couple of years, I've been doing a lot of work on an NSF-funded program for improving the quality of course evaluation questions, called the Student Assessment of their Learning Gains (SALG), with an adaptation in the works that's suitable for assessing learning outside of school courses (e.g., workshops, reviewing learning materials, etc.).

    The biggest difference is a shift from asking students to judge their instructor to asking students to assess their own learning gains in terms of the stated course objectives, and then soliciting their feedback on what elements of the course contributed most to the gains.

    I would love to talk with someone in your team about the possibility of some version of our questions being included in your question bank. Could you email me? Thanks very much!

    • Kayte K

      Hi Mel, thank you very much for reaching out! Check your inbox when you get a chance. 🙂

  • Graeme Gibson

    Your survey is based on theory
