Our research has shown that survey design parameters directly impact respondent engagement. In other words, by adjusting those design parameters we can influence engagement and, in turn, data quality – turning a “bad survey” into a “good” one.
SurveyScore is an important component of the TrueSample data quality process that measures respondent engagement to help researchers optimize the design of surveys. SurveyScore Predictor is a tool that lets researchers adjust and improve design parameters to optimize a survey’s SurveyScore. When the SurveyScore is high, respondents are more likely to complete your survey and give your questions the considered responses you’re looking for, leading to higher-quality survey results.
A SurveyScore is scaled from 0 to 100: 100 is the highest possible score, representing the highest relative level of respondent engagement, and 0 is the lowest. The chart here gives a typical distribution of SurveyScore for the 1,600+ surveys that were used to build the score. (Note: this distribution may change over time as more surveys are added to the system.)
We’re often asked what types of surveys typically score high and what types score low. The problem is multidimensional – no single parameter determines the answer – but we can nonetheless examine the partial dependencies of the score on some of the main parameters. The following table provides some typical design parameter values in three ranges of SurveyScore results:
As you might expect, the more complicated the survey, the lower the score. A large number of survey questions, longer questions, and very cumbersome matrix questions can all lower the engagement level of the individual taking the survey. Interestingly, though, as the SurveyScore increases, the mean number of attributes per matrix question does not decrease correspondingly. The total number of matrix attributes, however, does decrease as the score rises – a consequence of higher-scoring surveys containing fewer matrix questions overall.
That means that in order to improve a survey so it scores in the mid-range, rather than at the bottom, it’s OK for the typical matrix question to have as many attributes as those in the low-scoring surveys, as long as there are not too many matrix questions overall. (Note: This analysis is by no means an indictment of matrix questions, which are essential to researchers – it is more a guideline for how to create a survey with limitations on these questions while keeping the respondents satisfied and engaged.)
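The mean-versus-total distinction above can be made concrete with a toy calculation. All of the numbers below are hypothetical, chosen only to illustrate the pattern: the mean attributes per matrix question stays flat across score tiers, yet the total attribute count still falls because higher-scoring surveys contain fewer matrix questions.

```python
# Hypothetical survey profiles by SurveyScore tier (numbers are illustrative,
# not taken from the actual SurveyScore data).
surveys = {
    "low score":  {"matrix_questions": 10, "mean_attributes": 8},
    "mid score":  {"matrix_questions": 5,  "mean_attributes": 8},
    "high score": {"matrix_questions": 2,  "mean_attributes": 8},
}

# Total attributes = number of matrix questions x mean attributes per question.
totals = {
    tier: s["matrix_questions"] * s["mean_attributes"]
    for tier, s in surveys.items()
}

for tier, total in totals.items():
    mean = surveys[tier]["mean_attributes"]
    print(f"{tier}: mean attributes per question = {mean}, total attributes = {total}")
```

Even with a constant mean of 8 attributes per matrix question, the total drops from 80 to 16 as the question count falls – which is exactly why limiting the overall number of matrix questions, rather than shrinking each one, moves a survey up the score range.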
For surveys that receive low SurveyScores due to length, consider conducting multiple shorter studies instead of one long survey, and weigh the added cost against the improved data quality that comes with higher SurveyScores. You can also run pilot projects to evaluate the differences in data between the two formats before making a final decision.
As always, the goal of a well-designed survey is to elicit the most accurate and representative possible responses from the respondents. With SurveyScore to indicate what’s “good” and “bad” about a survey, researchers are better able to improve the survey design to increase respondent engagement and enhance data quality.