Online Surveys = Ongoing Insights

Just Back from the ESOMAR 3D Conference

Two weeks ago, I attended the ESOMAR 3D conference. About half of the conference was dedicated to presentations that focused on an emerging method to get people more engaged as research study participants. Another set of presentations looked to social media as a source to investigate people’s attitudes and behaviors.

Here at SurveyMonkey, we have a slightly different perspective on the pressing issues facing the industry. I wrote the following article for the current issue of ESOMAR’s periodical Research World. It was handed out at the conference and I wanted to share it with our blog readers (it’s a bit long for a post, but I wanted to share it in its entirety). Take a read, and let us know what you think.


Shel Silverstein’s classic children’s book The Giving Tree is the story of a boy who grows old asking a loyal tree to make him happy. The tree initially gives apples for the boy to sell for money, then branches to build a house, then the trunk to build a boat, until finally, the tree is only a stump for the boy—now elderly—to sit on and rest. Of course, the boy would eventually die and the tree would still be a stump, undoubtedly of little value to the generations of boys to come. Market research appears to be a tree that will give until there is nothing left to give.

Only a boy myself, in research years anyway, I hope we never get to that point.

So far, the history of our relationship with the market research tree shows some evidence of selfish abuse. When people stopped opening their doors for face-to-face interviews, we decided to mail them. That worked for a while, until it didn’t. So then we found a more efficient way to get hold of people—telephones. That’s been working for some time, but it’s becoming more cumbersome by the day as the number of mobile-only and mobile-mostly households increases. Now we’re after people on their beloved social networks. By the time this tree is a stump, we won’t be able to open our doors or mail, answer our phones, or log in to our social networks.

So what part of the tree will we cut down next? “Do-It-Yourself” research (DIY) seems like the next target.

The low-hanging fruit in the attempts to dismantle DIY was claims like “garbage in, garbage out.” When that didn’t deter people, the charges softened to medium-hanging fruit: “well, ok, but be careful.” Indeed, such language is embodied in a variety of forums, including, for example, the Independent Consulting Group’s (ICG) guide to DIY research. The guide offers caution and warnings for three cornerstones of research—questionnaire design, sampling, and interpretation of results.

When it comes to surveying an internal audience, the guide warns that employees may be more comfortable answering a neutral party’s survey rather than their boss’s. This is largely true, except that people are generally comfortable with their human resources departments. Indeed, a great many of the surveys aimed at internal audiences are owned and operated by human resources personnel.

The approach to DIY sample is clearer, however. By now, SurveyMonkey and other DIY applications offer some sort of sample product (usually panel) for their customers. By and large, these products are either access to existing panels or entirely new panels built with essentially the same techniques as the major panels. So, what is good for the goose should also be good for the gander.

However, warnings about questionnaire design from the market research establishment are quite fair. To overcome the possible pitfalls of poor questionnaire design, SurveyMonkey decided to do what any sensible internet company of its size (300,000+ surveys monthly) would do—crowdsource the problem. Presumably, hundreds of thousands of survey creators are a good measure of what the world is trying to discover via surveys.

Choosing the subset of surveys was a two-stage process. First, we scanned the 3+ million surveys (while maintaining customer privacy and anonymity) to identify the most commonly surveyed topics by SurveyMonkey customers. Then we moved on to identifying the questions within each topic.

The questions themselves were chosen using a stratified random sampling procedure. We searched the 3+ million SurveyMonkey surveys looking for questions about one popular topic (or stratum) at a time. Then, we randomly chose 10% of surveys on that topic, ensuring that each and every SurveyMonkey customer’s survey on that topic had an equal and independent chance of being selected.

For example, we scanned the entire survey pool just for those surveys that asked about customer feedback and then randomly chose a subset of surveys from all those customer feedback surveys. We repeated that process for each remaining category: human resources, public education, health care, and so on.
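To make the procedure concrete, here is a minimal sketch in Python of a 10% stratified draw like the one described above. The data, function name, and the 10% rate applied per stratum are illustrative assumptions; the real pool and tooling live in SurveyMonkey’s systems.

```python
import random

def stratified_sample(surveys, rate=0.10, seed=42):
    """Draw an independent simple random sample from each topic stratum.

    `surveys` is a list of (survey_id, topic) pairs. Within each stratum,
    every survey has an equal, independent chance of being selected.
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = {}
    for survey_id, topic in surveys:
        strata.setdefault(topic, []).append(survey_id)
    sample = {}
    for topic, ids in strata.items():
        k = max(1, round(len(ids) * rate))  # sample size for this stratum
        sample[topic] = rng.sample(ids, k)  # simple random sample, no replacement
    return sample

# Hypothetical pool: 100 customer-feedback surveys and 50 HR surveys.
pool = [(i, "customer feedback") for i in range(100)] + \
       [(i, "human resources") for i in range(100, 150)]
picked = stratified_sample(pool)
print(len(picked["customer feedback"]))  # 10% of 100 -> 10
print(len(picked["human resources"]))    # 10% of 50  -> 5
```

Because each stratum is sampled independently, smaller topics are not drowned out by larger ones—the key property the article relies on when it combines the per-topic samples.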

The end result? Eleven random samples which, when combined, included roughly 20,000 questions—questions we can confidently say are representative of (or look a lot like) the entire pool of SurveyMonkey customer questions on the same 11 topics.

We began reviewing these 20,000 questions and found that many of them were asking essentially the same thing. For example, the questions, “Does your boss make good decisions?” and “Are most of your supervisor’s decisions good or bad?” both ask about the quality of the decisions made by a supervisor. Thus, we were able to winnow down the 20,000 questions to about 1,500 questions commonly asked by SurveyMonkey customers. As a last step, we applied the highest standards for scale construction and labeling so that everyone can administer reliable measures every time.
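The article does not say how near-duplicate questions were spotted, and the winnowing was clearly a human review. Still, a simple token-overlap measure such as Jaccard similarity illustrates why automated flagging only goes part of the way: rewordings that share vocabulary score high, while true paraphrases like the boss/supervisor pair above score low and need a human eye. This is a sketch of the general technique, not SurveyMonkey’s actual method.

```python
import re

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two question strings."""
    tok = lambda s: set(re.findall(r"[a-z']+", s.lower()))
    ta, tb = tok(a), tok(b)
    return len(ta & tb) / len(ta | tb)

# A light rewording shares most of its vocabulary with the original...
print(round(jaccard("Does your boss make good decisions?",
                    "Does your manager make good decisions?"), 2))   # 0.71
# ...but a true paraphrase does not, even though it asks the same thing.
print(round(jaccard("Does your boss make good decisions?",
                    "Are most of your supervisor's decisions good or bad?"), 2))  # 0.25
```

A cheap similarity score like this can cluster the obvious rewordings and leave only the semantic duplicates for expert review—consistent with the article’s point that the final 1,500 questions still required human judgment.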

This type of innovation is only the beginning of where these new branches can take us. It also allows us to focus on what researchers do best: answering the ultimate question—what does the data tell us? If the samples are good and the questions are right, then the area where market research can add tremendous value is research design and interpretation of results. In other words, market researchers’ unique abilities are a) understanding what the client wants to figure out and b) creating the best way to go about uncovering that wisdom. No amount of algorithms, machine learning, crowdsourcing, or lines of code can replace the tailor-made decisions of a market researcher. In other words, software can’t give a brand advice. The whole idea, then, is to think of market research as consultative rather than a mere markup on fieldwork.

The first step for a market researcher in a DIY question platform like ours would be to identify the questions—and the order of the questions—that validly address the research question. Nobody really loves writing questions that much. In fact, we reuse them quite often to avoid the aggravation. Instead, people love what the questions deliver—information. So DIY question platforms save some hassle in that regard.

The second step is sample selection. Again, DIY tools offer sample in a one-stop-shopping format that is merely an automated version of the manual process we’ve used for more than a decade. Ostensibly, the market researcher needs to figure out whom to ask questions of. Identifying valid sampling frames is very much a human task—an expert human, to be sure—and not software. This expertise is crucial at a time when companies are eager to research geographies with heavy dependence on mobile points of contact, whether via telephone or web—a trend that is only increasing.

The third step for the market researcher is a heavy dose of consultation. What did the questions and samples produce, how do we know, and what should we do about it? More importantly, how does this compare to what we’ve found in the past and what should we look for in the future? None of this critical thinking can be offered by DIY tools. So these are the shining moments for market researchers when a brand has a multi-million dollar decision hanging in the balance.

However market research changes, it will almost certainly move at a faster pace as tools and processes become automated and increasingly efficient. Gone are the days of six-month fieldwork. When the stock market jumps around day to day, the brands that comprise it need to make decisions quickly, with information collected almost immediately. It seems that market researchers will be able to anticipate some of the needs of brands and provide guidance and direction at a pace that matches the demands of the “new normal” in business life.

A $20-per-month piece of software is hardly a replacement for a well-trained and motivated person who cares about improving the products we buy and the society we live in. DIY software can, however, allow researchers to focus on the specific task of helping a client arrive at a better decision. In other words, we need to reach for the treetops together—build a tree house, even.



  • Sicco Jan

    Interesting topic.

    Perhaps beyond DIY is “do not ask” or “anticipative” research.

    The trend towards Big Data might allow (and in a lot of cases, apps already work this way) real-time tracking of location based on mobile phones.
    Can any conclusions be made from location only?
    Is the number of telephones an indicator of popularity of a square, shop or Starbucks? Of sales in automobile outlets?
    Can we connect the data telephone companies have with the data credit card companies/banks have in a commercial way?
    Where do people use their credit card? What do they buy? Are they a specialist grocer’s customer for $1 (nice to count the customer but no real business volume) but buy their bulk at Walmart?
    Do we need to conduct a survey or can we predict sales from location/spending trending?

    Of course, this does not capture the acceptance of new products or businesses, or a desire to change some rule/regulation, which is perhaps the majority of the surveys run through survey bureaus.

    Is surveying actually a good predictor of behaviour/desires?

    Where are the times when local entrepreneurs knew their market and could suit their customers/consumers and/or would explore possibilities as a scientific/opportunistic endeavor? Levi Strauss did not invent jeans because he surveyed the gold diggers, but by observing them. Darwin did not ask a large enough population of species how they thought they came to exist. Nobody asked Bell to create mobile phones, nor did he ask whether people would want to use his landlines. Etc. Examples are abundant.

    Perhaps another added value of research consultants is not only adding meaning at the end of the survey, but also identifying what the requester intends to find in his survey. Not just matching the right question to possibly the wrong problem (where the problem is the created problem the survey seeks to address), but investigating the intention behind the survey.
    I don’t have a specific example here, but in setting ISO certificates you might certify the production process for concrete life jackets. Certification would not be the answer to a marketable product. BTW, a lot of products carry CE or TÜV registration in the EU although it says nothing about the effectiveness of the product. It just won’t kill you. Nice to know for toys that bring no pleasure to the kid… 🙂

Inspired? Create your own survey.
