The goal of the SurveyMonkey Question Bank, to provide methodologically sound questions on topics that SurveyMonkey customers are interested in, drove each step of how we built the bank.
SurveyMonkey customers create 3+ million surveys and ask almost 33 million questions each year. We wanted to start the Question Bank building process by reviewing these questions, but we also knew the world could end before we made it to the 33 millionth one. So we did what any smart monkey would do and chose a subset of surveys from the total pool of SurveyMonkey surveys to work with.
Choosing the subset of surveys was a two-stage process. First, we scanned the 3+ million surveys (while maintaining customer privacy and anonymity) to identify the topics most commonly surveyed by SurveyMonkey customers. We learned that, currently, the most popular topics are customer feedback, human resources, public education, health care, telecommunications, retail, politics, non-profits, events, travel, and demographics. Then we moved on to identifying the questions within each topic.
The questions themselves were chosen using a stratified random sampling procedure. We searched the 3+ million SurveyMonkey surveys for questions about one popular topic (or stratum) at a time. Then we randomly chose 10% of the surveys on that topic, ensuring that every SurveyMonkey customer’s survey on that topic had an equal and independent chance of being selected.
For example, we scanned the entire survey pool just for those surveys that asked about customer feedback and then randomly chose a subset of surveys from all those customer feedback surveys. We repeated that process for each remaining category: human resources, public education, health care, and so on.
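The stratified sampling procedure described above can be sketched in a few lines of Python. This is a toy illustration, not SurveyMonkey’s actual pipeline; the survey records, topic labels, and counts below are made up:

```python
import random

def stratified_sample(surveys, topics, fraction=0.10, seed=42):
    """Draw a simple random sample of `fraction` of the surveys in each topic stratum."""
    rng = random.Random(seed)
    sample = []
    for topic in topics:
        # One stratum = all surveys on one popular topic
        stratum = [s for s in surveys if s["topic"] == topic]
        k = max(1, round(len(stratum) * fraction))
        # random.sample gives every survey in the stratum an equal,
        # independent chance of being selected
        sample.extend(rng.sample(stratum, k))
    return sample

# Made-up example: 50 customer-feedback surveys and 30 HR surveys
surveys = [{"id": i, "topic": t} for i, t in enumerate(
    ["customer feedback"] * 50 + ["human resources"] * 30)]
subset = stratified_sample(surveys, ["customer feedback", "human resources"])
print(len(subset))  # 5 + 3 = 8 surveys, i.e. 10% of each stratum
```

Sampling within each topic separately keeps smaller topics from being drowned out by larger ones, while the random draw inside each stratum keeps the sample representative of that topic.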
The end result? Eleven random samples which, when combined, included roughly 20,000 questions: questions we can confidently say are representative of (or look a lot like) the entire pool of SurveyMonkey customer questions on those same 11 topics.
We began reviewing these 20,000 questions and found that many of them were asking essentially the same thing. For example, the questions “Does your boss make good decisions?” and “Are most of your supervisor’s decisions good or bad?” both ask about the quality of the decisions made by a supervisor. Thus, we were able to winnow the 20,000 questions down to about 1,500 questions commonly asked by SurveyMonkey customers.
Most of Question Bank was inspired by questions SurveyMonkey customers created in the past. But we also wanted to include questions on the 11 most popular topics that you might like to ask in the future. The inspiration for these questions came from a number of sources, including professional publications, academic journals, and industry blogs.
For example, human resource professionals have learned that employees who report being satisfied with the recruitment process are more likely to be satisfied three months into the job than are employees who report being dissatisfied with the recruitment process. This connection is something HR professionals appear to be interested in, so we’ve included questions on the topic, such as “How clearly did your recruiter explain the details of the job to you?” and “Overall, were you satisfied with the recruiting process at our company, neither satisfied nor dissatisfied with it, or dissatisfied with it?”
Next, we needed to see how we could rewrite the questions created by SurveyMonkey customers so that they ask only about the information you’re interested in and nothing else. Every survey question potentially measures four things: your construct of interest (the good stuff), as well as other constructs, random error, and bias (the bad stuff). Our goal was to ask only about the good stuff.
Your construct of interest is simply what you’re hoping to get information about. For example, you might be interested in how professional your customer service representatives are. In that case, “professionalism” is your construct of interest, and you want to ask a question that measures “professionalism” without measuring anything else that could distract from it. To understand this idea better, let’s take a look at an actual SurveyMonkey customer question:

“My supervisor provides feedback and judicious assessments.”
What is the construct of interest here? Well, the surveyor is actually asking about two things with this question: whether the supervisor provides feedback and how judicious the supervisor’s assessments are. Each survey question should ask about only one construct at a time, so this question needs some tweaking. We need to split it into two separate questions:

“My supervisor provides feedback.”
“My supervisor makes judicious assessments.”
But what else is going on with these seemingly innocent questions? Two troublesome potential distractions, actually: random error and bias. Let’s tackle each distraction separately.
What is it and how do you avoid it? Random error is the term scientists use when they’re talking about chance fluctuations, those things that happen haphazardly, without any direction. Imagine you’re playing darts and a gnat is buzzing around your head. It’s coming at you from a different direction each time you throw a dart, making each dart land just slightly off from the target, sometimes to the left, sometimes to the right, sometimes above, and sometimes below. You may normally be an amazing dart player, hitting the target nearly every time, but you wouldn’t know it from this round. Your darts are all over the board. That’s what random error does to your survey responses. It’s the gnat that distracts your respondents, moving their answers all over the place, making it hard for you to hear what they’re really trying to tell you.
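The dart-and-gnat analogy can be made concrete with a quick simulation (a toy sketch, not anything from SurveyMonkey): each throw is the true aim plus an unpredictable nudge, so individual throws scatter widely even though they don’t lean in any particular direction.

```python
import random

random.seed(0)
target = 0.0  # the bullseye

# Each throw is the true aim plus a chance fluctuation (the gnat):
# sometimes left, sometimes right, with no consistent direction.
throws = [target + random.gauss(0, 1.0) for _ in range(10_000)]

mean = sum(throws) / len(throws)
spread = (sum((t - mean) ** 2 for t in throws) / len(throws)) ** 0.5
print(round(mean, 2))    # near 0: no systematic lean
print(round(spread, 2))  # near 1: plenty of scatter
```

Random error widens the spread without shifting the average, which is exactly why it makes individual answers hard to read.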
You can avoid random error by asking questions that are very easily understood by the widest possible audience. For example, the phrase “judicious assessments” uses sophisticated language that will likely be interpreted differently by different people. That is not a good thing. You want all your respondents to interpret each word in your question the same way. Simpler language can help. For example, you could replace “judicious assessments” with “reasonable decisions” and present the following:

“My supervisor makes reasonable decisions.”
(Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree)
Now that’s a question that will be more easily understood by a wider range of people. But, as improved as it is, it will still likely generate random error. Why? Well, look at the steps respondents have to take to answer it. First, they’ve got to decide whether their supervisors make reasonable decisions. And then they’ve got to decide which of the five answer choices lines up with their own answer.
So let’s say a respondent says, yeah, sure, my supervisor makes reasonable decisions. Now what? I suppose I either agree or strongly agree that my supervisor makes reasonable decisions. But what does it mean to strongly agree that someone makes reasonable decisions? Does it mean the decisions are especially reasonable? Or does it mean that I feel strongly that the decisions are reasonable? It’s not really clear, is it? And therein lies the problem. The question asks for an answer that is not provided by the available answer options. Instead, respondents are forced to map their answers onto the less-than-optimal options they’ve been given.
Luckily, there’s an easy fix for this problem: use construct-specific response scales. You’re interested in reasonable decisions, so ask about how reasonable they are, like we’ve done with the following question:

“How reasonable are your supervisor’s decisions?”
(Extremely reasonable / Very reasonable / Moderately reasonable / Slightly reasonable / Not at all reasonable)
Now the question and the answer options match up. Respondents no longer need to search for the answer option that best matches their own. That step has been completely eliminated. The answers generated will be less muddied by random error, and you’ll be able to know what respondents really think.
What is it and how do you avoid it? Bias is the term scientists use when they’re talking about fluctuations that are slanted in one particular direction. So, returning to our dart example, imagine that the floor you’re standing on while playing darts is a bit uneven, causing your darts to always land slightly to the right of the target. Again, you may normally be an amazing dart player, hitting the target nearly every time, but, according to this round, you’re a dart player who hits slightly to the right of the target nearly every time. Your performance is consistent, but it’s consistently off-target. Bias is the uneven floor that makes your respondents’ answers lean in one direction or another, making it hard for you to hear what they’re really saying.
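To see bias in the dart analogy as numbers (again, just an illustrative sketch, not SurveyMonkey code): the uneven floor adds the same tilt to every throw, so the throws can cluster tightly and still miss the target.

```python
import random

random.seed(0)
target = 0.0  # the bullseye
tilt = 0.5    # the uneven floor: a constant push to the right

# Every throw leans the same way; the chance jitter here is small.
throws = [target + tilt + random.gauss(0, 0.1) for _ in range(10_000)]

mean = sum(throws) / len(throws)
print(round(mean, 2))  # about 0.5: consistently off-target
```

Unlike random error, bias doesn’t average out as you collect more responses; the only fix is to remove the tilt from the question itself.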
You can avoid bias by asking questions that are balanced and free from pressure to answer in any one direction. For example, when we changed the answer options from agree/disagree to construct-specific answer options in our “reasonable decisions” question, we eliminated acquiescence bias, which is simply the tendency of people to go along with a request or opinion in order to be agreeable, polite, or respectful. When presented with the statement, “My supervisor makes reasonable decisions,” respondents are more likely to agree than disagree that their supervisor’s decisions are reasonable. The answers generated by this question, then, will be biased toward saying the supervisor makes reasonable decisions. Your survey data may be telling you this supervisor makes more reasonable decisions than he or she actually does.
Luckily, our last two changes to the question, making it a question rather than a statement and using construct-specific answer options, greatly reduced this bias. Now respondents are asked a bias-free question and given a balanced set of answer options. No more uneven floor.
We started with the topics our customers were interested in and the questions that were already being asked. We found the construct of interest in a question, asked about it and no other construct, reworded the question to avoid error and bias, and, voila, a methodologically sound question was added to the SurveyMonkey Question Bank. Then we just repeated that process 1,499 times. And we’ll be repeating it many more times as you give us your feedback and we continue to add more topics and questions!
Let us know what you like and don’t like about Question Bank in the comments below. We’re eager to learn how we can make your surveying life a bit easier.
Lori Gauthier is our resident question-writing monkey, contributing to resources that help our customers quickly create methodologically sound surveys. Lori holds an MBA from The University of Texas at Arlington and a PhD in Communication and a PhD Minor in Psychology from Stanford University, where she taught the Communication Research Methods course and conducted survey methodology-focused research.