Just when you thought disengaged human respondents were the worst offenders when it comes to skewing survey data, here come the robots.
For years, AI-powered bots have infiltrated online conversations and passed themselves off as human, whether on discussion forums or social media (just ask Elon Musk). They’re spamming online channels, and now they’re coming for your survey data.
Why? The most common motives include making money directly from incentivized surveys, or using participation as a vector for fraud and scams. More alarmingly, bots can be used to intentionally distort data, rankings, and public opinion for both political and commercial gain.
But it’s not all bad news. AI and machine learning have also been beneficial to businesses, especially for marketers. From using chatbots for lead generation and customer service, to automating marketing campaigns, marketers have largely welcomed AI. According to a recent SurveyMonkey study, 73% of marketers feel AI helps them do their job better.
But for market researchers, AI-generated bots used to fill out survey forms aren’t generating anything but bad data. Understanding that survey data is only as good as the quality of its responses, we developed a ChatGPT-powered bot to put the SurveyMonkey platform to the test. Our mission: to understand how well AI-powered bots can outsmart common methods survey researchers use to spot false responses from bots.
How AI-powered bots are impacting your survey data
Below are some of our early findings about bot capabilities and the potential challenges market researchers may face in detecting them:
- They skip honeypot questions. Our test survey contained a honeypot question, rendered in white text on a white background—invisible to human respondents, who skip it, but typically answered by naive bots, which gives them away. However, it was simple for us to instruct our bot to detect and skip these questions, just as a human would.
- They pump the brakes. Since bots can complete surveys with remarkable speed, many market researchers use "speeding flags"—quality metrics that gauge time spent per page, or on the survey overall. Our bot was able to bypass speeding flags by adding delays throughout the survey: pausing between questions, pausing between answer options within multi-select questions, and typing out open-ended responses letter by letter—just as humans would.
- They provide human-like consistency. Our bot was able to use ChatGPT to understand context within a survey, answering questions by scanning overall survey information rather than selecting answers at random. For example, it was able to detect "market research" in the survey title, increasing the chances that it would then select "market research" in a screener question. We were also able to instruct the bot to answer open-ended questions like a human, using a variety of answers that bear little resemblance to typical "bot language" (e.g., long paragraphs of text, consistently correct punctuation, or usage of uncommon words or phrases). For example, asking "What are you most excited about when it comes to using AI in market research?" yielded responses such as:
- “Increase productivity.”
- “Gain deeper customer insights”
- “Improving data analysis and decision-making”
- “I am most excited about the speed to insights that AI can provide.”
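The behaviors above can be sketched in a few lines of code. The helper names below are illustrative—our bot's actual implementation isn't reproduced here—but they show how trivially a script can dodge honeypots and timing checks:

```python
import random
import time

def is_honeypot(question_style: dict) -> bool:
    """Flag a question whose text color matches its background
    (e.g. white-on-white), a common honeypot pattern a bot can
    detect in the page's CSS and simply skip."""
    return question_style.get("color") == question_style.get("background-color")

def human_delay(min_s: float = 2.0, max_s: float = 8.0) -> float:
    """Pick a randomized pause, as a bot might insert between
    questions to defeat per-page speeding flags."""
    return random.uniform(min_s, max_s)

def type_like_human(text: str, per_char_delay: float = 0.15) -> str:
    """'Type' an open-ended answer one character at a time,
    pausing between keystrokes the way a human typist would."""
    typed = ""
    for ch in text:
        time.sleep(per_char_delay)
        typed += ch
    return typed
```

A bot driving a headless browser only needs logic this simple to sail past the most common response-quality checks.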
Best practices for AI bot detection in surveys
While no solution is foolproof, adding friction and adaptive strategies like the examples below can help you detect bots and limit their impact on your survey data:
- Limit survey context before screener questions. As noted, our bot was surprisingly good at leveraging ChatGPT to gain context and provide logical, human-like responses. To make it more challenging for bots to predict and navigate screener questions, be sure to hide survey titles and introduce the survey topic after the initial screener questions.
- Make it difficult for bots to process survey elements. Our bot easily detected survey titles, questions, and response options that appeared in the underlying code within the survey’s webpage. To prevent bots from digesting and interpreting questions, use images of questions instead of text.
- Implement complex human behavior checks. Our bot was able to mimic human survey-taking behavior, scrolling up and down web pages, typing slower, and adding delays between questions. For better indications of bot activity within surveys, implement advanced bot detection measures such as CAPTCHAs, or analyze mouse movement and typing patterns.
- Prevent multiple submissions. What other damage is our bot capable of? It can submit results to the same survey multiple times. Although some bots are able to use different originating IP addresses, you can help prevent "ballot stuffing" by blocking respondents from completing the survey more than once, especially if they come from the same IP address.
- Add logic checks and trap questions. Another way to flag bots is to add questions aimed at making sure respondents are paying attention and answering consistently. (See examples of trap questions here).
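Several of these checks can be combined into a single post-collection screen. The sketch below is illustrative only—the field names, thresholds, and trap answer are assumptions, not part of any survey platform's API—but it shows how speeding, trap-question, and duplicate-IP flags might be applied to exported response data:

```python
from collections import Counter

def flag_suspicious(responses, min_seconds=30, trap_answer="Strongly agree"):
    """Flag responses that complete too fast, fail a trap question
    (e.g. 'Select "Strongly agree" for this item'), or share an IP
    with another submission. Each response is a dict with keys
    'id', 'ip', 'duration_s', and 'trap_response' (illustrative
    names; adapt to your platform's export format)."""
    ip_counts = Counter(r["ip"] for r in responses)
    flags = {}
    for r in responses:
        reasons = []
        if r["duration_s"] < min_seconds:
            reasons.append("speeding")
        if r["trap_response"] != trap_answer:
            reasons.append("failed_trap")
        if ip_counts[r["ip"]] > 1:
            reasons.append("duplicate_ip")
        if reasons:
            flags[r["id"]] = reasons
    return flags
```

No single flag is proof of a bot—as our experiment showed, each check can be evaded individually—so treating multiple flags on one response as grounds for review is usually more reliable than excluding on any single signal.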
As our experiment demonstrated, today’s AI-powered bots are sophisticated enough to mimic human behavior and evade many typical bot detection methods used by market researchers.
Survey fraud enabled by bots poses a serious threat to the market research industry. If left unchecked, it can undermine data integrity, erode confidence in your insights, and lead to costly mistakes. However, the more you can “know your enemy,” the more you can innovate and adapt, creating bot-free survey data that powers insights and action.