Agree to Disagree

People seem fond of agree/disagree scales. How many times have you taken a survey that was full of agree/disagree responses?

I suspect this fondness is partly because agree/disagree options take so little effort to come up with. Or maybe it’s because survey creators assume the words have universal meaning.

In any case, I see agree/disagree used most often in grid- or matrix-style questions. One of the most common ways to fit several questions into a grid is to convert the individual inquiries into statements and apply agree/disagree response options. For example, a question such as “How helpful was this blog post?” would need to be converted into a statement to agree with: “This blog post was helpful.”

And by converting the question into a matrix with agree/disagree, the surveyor might then list out all their previous blog post topics and ask you to agree/disagree on the helpfulness of each post. This approach is dangerous because it induces acquiescence response bias—the tendency of people to offer more agree answers than disagree answers solely by virtue of the question format. That is, the response options push people toward “yes” answers because we all tend to be polite, respectful, and agreeable—especially toward people in positions of authority and power, like researchers, employers, and teachers (all people who tend to create a lot of surveys).

Instead it’s best to give questions their own space and focus on the construct of interest, such as understanding how helpful a blog post is, how satisfied an employee is with the benefits plan, how challenging a student finds a particular course, how clean a customer finds a restaurant to be, and so on.

What do you think? Do you agree to disagree? What challenges have you had in pinpointing the key construct in your questions? Share your stories with us, and we’ll share our tips with you.


Inspired? Create your own survey.

  • Sarah

    I think it’s important to note that some agree/disagree scales, as you call them, are standard 5-point Likert scales and the basis for most self-reported social science data. Without this standard scale, which is as close as researchers can come to true interval data, social scientists cannot run parametric statistical analysis.

  • Alisa

I always prefer satisfaction-based scales to agreement. They can be standardized just like the agreement scales (5-point or 10-point) and, unlike the agreement scales, truly get to the core of a respondent’s experience. In addition to acquiescence bias, agreement scales push respondents to think more about the statement in front of them instead of their actual experience. Respondents can get caught up in the wording of the statement, which can lead to misleading results. Additionally, agree or disagree responses are not as insightful as satisfaction ratings. For example, if asked to agree or disagree with the statement “this blog post is helpful,” it is difficult to say what a 5 rating (i.e., Strongly Agree) means in terms of how helpful the blog post was. However, asking “how satisfied are you with this blog post?” and getting a 5 rating (i.e., Very Satisfied) shows that the post was VERY helpful. In my opinion, it is always better to employ rating scales and question structures that more directly speak to the respondent’s individual experience.

  • Sicco Jan

    I find that when responding to a survey, I often think the right question is not being asked.
    Related to agree/disagree: if I am generally satisfied but know of a specific exception, most surveys do not cover the exception. For one exception, I will not disagree (and because of the bias, even one positive exception to a disagree opinion will tend toward “not so disagreeable”), while for the issuer of the survey that specific feedback might have been the most helpful. Some surveys counter this with free-format text boxes at the end, but then you need two filters to keep the match with the agree/disagree scale, which is nigh on impossible if the text box must cover all questions: “Finally, please submit any ideas you may have…”. By then, respondents might prefer to end the survey instead of starting to type.
    As a countermeasure, I would suggest that the issuer consider the grain of the questions more deeply: the more general the question, the more bias is introduced.
    More careful selection of the audience might also help to target the questions, perhaps through skip logic?

  • Pingback: Hide and Go Seek (a Construct) | The SurveyMonkey Blog

  • Pingback: How to Get Started With Your Survey | The SurveyMonkey Blog

  • http://www.smilegree.com smilegree.com

    I think agree/disagree depends on the data you need. If you want to make a poll with detailed results, you should use a 5-point scale and/or add a text area so visitors can share their minds. They will love to share their thoughts.


  • http://bloggerboon.com Boon

    Since the acquiescence response bias impacts ALL questions on the survey, it does not affect the statistical significance when comparing questions to one another in a matrix.

    For example:

    1.) The most important part of a pasta meal is the quality of the pasta.
    2.) The most important part of a pasta meal is the quality of the sauce.

    The acquiescence response bias does not influence the comparison between these two factors. A person who thinks the sauce is more important will agree to a stronger degree on question 2 than they will on question 1, and thus the end analysis will STILL show which is more important to the population.
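Boon’s cancellation argument can be sketched with a toy calculation in Python (the ratings and the size of the bias are invented for illustration): adding the same constant shift to every item leaves the within-respondent comparison unchanged.

```python
# Hypothetical 1-5 agreement ratings from one respondent
# (values invented for illustration).
true_scores = {"pasta": 3, "sauce": 4}  # this respondent values sauce more
bias = 1  # a uniform acquiescence shift applied to every item

# Observed ratings are the true ratings plus the same constant shift.
observed = {item: score + bias for item, score in true_scores.items()}

# A uniform shift cancels in the within-respondent comparison:
diff_true = true_scores["sauce"] - true_scores["pasta"]
diff_observed = observed["sauce"] - observed["pasta"]
print(diff_true, diff_observed)  # prints: 1 1 -- "sauce > pasta" survives the bias
```

One caveat to the sketch: the cancellation only holds while the shifted scores stay within the scale. If a true rating is already at the top of the scale, the shift gets clipped there and the observed difference can be compressed.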

  • Pingback: 5 Tips for Creating a Great Survey | The SurveyMonkey Blog

  • Pingback: 「はい」と「いいえ」のあいだ | The SurveyMonkey Japan Blog

  • http://www.gyrfacymru.com RhysEW

    As a relatively inexperienced creator of surveys, thank you for this very interesting thread. However, have I been unknowingly trying to reduce ‘acquiescence response’ bias in my surveys?

    Rather than use a standard 5-point Likert scale, I often use a 4-point response scale, e.g. 1: Agree strongly, 2: Agree, 3: Disagree, 4: Disagree strongly. Such a compact scale is easy for respondents to grasp (especially in comparison to the 10-pointer) and ‘forces’ them to either side of the “Neither agree nor disagree” option of a 5-point scale.

  • http://www.facebook.com reyal

    I can publish your ad for “Agree to Disagree” on Facebook, and you know how famous a site Facebook is, so I wish you luck; you will definitely get profit.

    • http://www.surveymonkey.com Bennett P

      Glad you found the post helpful! Feel free to repost it.

  • Edward

    You do realize that the alternative questions you suggested are biased in their wording, correct? By asking a respondent “how” satisfied they are, you are assuming that there’s “some” level of satisfaction.

    I would also love to see some concrete evidence as to the actual effects of the “acquiescence bias” associated with agreement scales.

    • Hanna J

      Absolutely right! The best survey would have not just one question about satisfaction, but also have a question about dissatisfaction — and then these questions should be “counterbalanced” so that the order they’re presented varies. That way (on average) respondents are primed equally with satisfaction AND dissatisfaction thereby washing out any acquiescence bias. The acquiescence bias is a tried and true phenomenon in psychology…check out: Messick, S. & Jackson, D. N. (1961). Acquiescence and the factorial interpretation of the MMPI. Psychological Bulletin, 58(4), 299-304.
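The counterbalancing Hanna describes can be sketched in a few lines of Python (the question wordings and function name are hypothetical): within each satisfaction/dissatisfaction pair, the presentation order is randomized per respondent, so across many respondents each wording leads about equally often.

```python
import random

# Hypothetical paired items: each construct is asked once in a "satisfied"
# wording and once in a "dissatisfied" wording.
pairs = [
    ("How satisfied are you with this blog post?",
     "How dissatisfied are you with this blog post?"),
    ("How satisfied are you with the benefits plan?",
     "How dissatisfied are you with the benefits plan?"),
]

def counterbalanced_order(pairs, rng=random):
    """Randomize which wording of each pair appears first for one respondent."""
    ordered = []
    for positive, negative in pairs:
        if rng.random() < 0.5:
            ordered.extend([positive, negative])
        else:
            ordered.extend([negative, positive])
    return ordered

# Each respondent gets their own randomized ordering of the same questions.
survey_for_respondent = counterbalanced_order(pairs)
```

Randomizing per respondent only balances the orders on average; a stricter design would assign exactly half the respondents each order.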

  • Robyn

    I’m a relative newcomer to survey design.
    One of the things I want to learn from my survey results is the extent to which respondents hold certain misconceptions on a particular topic.
    Later there will be training provided on the same topic. I want to avoid condescending to them (if it turns out that my perceptions of their present understanding have been a bit pessimistic) and I want to identify those areas on which I’ll need to work on changing attitudes.
    In surveys I’ve taken as a respondent, I’ve often encountered agreement scale questions aimed at identifying attitudes around a subject. And this was the approach I was going to take for that portion of my survey.
    In light of your article, do you recommend taking a different approach?

    • Hanna J

      Hi Robyn – We would suggest you try converting the agree/disagree statements you want to ask into questions.

      For example:
      Take this: How much do you agree with this statement? “I think my boss is productive.”
      And change it to this: How productive is your boss?

      You need to be careful that there isn’t acquiescence bias in the changed question as well – in other words, questions should be balanced, rather than assuming something is true. Those kinds of questions are bad for two reasons: 1) They can bias answers in one direction, and 2) People who are satisficing are going to be more likely to agree with whatever is assumed in the question. (For more about satisficing, see the following blog post: https://www.surveymonkey.com/blog/en/blog/2010/10/04/satisficing_surveys/)

      For example:
      Take this: How much do you like cheese? (This assumes the respondent likes cheese, or it assumes that the survey maker thinks they should like cheese. And those who are satisficing are going to be more likely to say that they like cheese.)
      And change it to this: Do you like, neither like nor dislike, or dislike cheese?

      If you need help choosing response options, check out the following blog post: https://www.surveymonkey.com/blog/en/blog/2012/01/24/words-speak-louder-than-numbers/
      If you need help wording your questions, check out this post: https://www.surveymonkey.com/blog/en/blog/2011/01/10/hidden-constructs/

      Hope that helps! Let us know if you have any more questions.

  • http://www.wellquestconsulting.com Tammy Horne

    Interesting article from the journal Survey Research Methods (2010) that compares agree/disagree against item-specific response options (such as the examples blogger Phil G provides) and finds the latter to be superior. Public-domain access through https://ojs.ub.uni-konstanz.de/srm/article/download/2682/3971
    (Complete reference info is in the article.)

    I have always preferred item-specific response options for most questions when asking about behaviour, skills, knowledge, experiences, or satisfaction (though I have sometimes used A/D for gauging opinions about an issue, something beyond the respondent’s own experience, such as their opinion about a policy).