As Twitter continues to gain popularity as a means of self-promotion, the need for “account verification” grows stronger and stronger. For any one legitimate account, particularly that of a celebrity or other famous person, there may be dozens of copycats. It’s difficult, if not impossible, for Twitter users to discern which account belongs to the real person they are trying to follow.
As Twitter describes it, “Verification is currently used to establish authenticity of identities on Twitter. The goal of this program is to limit user confusion by making it easier to identify authentic accounts on Twitter.” Once an account is verified, a little blue checkmark appears next to the user’s Twitter ID – serving as trusted proof that this person is who he or she claims to be.
This need for verification isn’t dissimilar to the need market researchers have to ensure that the respondents taking their surveys are who they say they are. Researchers need a stamp of proof of each respondent’s authenticity – evidence that he or she is “real” and is qualified to participate in market research studies.
Interestingly, Twitter is very ambiguous about the methods and technologies it employs to perform account verification. It seems reasonable that the company would keep its policies secret to prevent “gaming” of the system. It’s possible that it even uses different, subjective criteria for specific circumstances and users. After all, if users knew exactly what criteria Twitter required accounts to meet in order to be verified, they would surely come up with ways to meet those requirements.
In the market research world, such ambiguity is a hard pill to swallow. Researchers, by nature, don’t like not knowing exactly what criteria and data points are being used for decision-making. And they expect to see research-on-research indicating why those criteria were selected and how they impact the data. I think this is a completely reasonable expectation.
But there certainly is a valid business case for using a “black box” approach to validating online market research survey respondents. Just as Twitter doesn’t want its users to know what criteria it uses for verification, researchers shouldn’t want online survey respondents to know what criteria they use either. A market research data quality solution must be “opaque” to survey respondents so that they cannot identify ways to skirt the quality checks in order to be considered valid for a survey. Either the quality solution should be completely hidden from the respondent – meaning they don’t know that a solution is in place at all – or it should appear random, inconsistent, and maybe even subjective.
This train of thought leads me one step further… Is there a case for keeping research quality criteria opaque not only to online survey respondents but also to the researchers conducting the studies? Can you imagine a circumstance where a research supplier might be motivated to “game” a research quality system in order to finish a project for an end client? Perhaps a supplier would need to skirt the quality checks in order to fill the respondent quotas for a survey? Or perhaps they would work around the quality solution in order to close a project more quickly and cost-effectively? So, that leads me to another question:
If a reputable, third-party research quality solution were to validate survey respondents using a “black box” approach that did not reveal the methods or techniques being used for validation — would you accept this approach?
I’d love to hear others’ perspectives on this subject, so if you have any thoughts, please leave me a comment.