
Presidential Poll Tracker, Part 6: Election Day Wrap-up


Now that the campaigns have officially come to a close and the election results are in, we finally have our moment of truth here at SurveyMonkey: a chance to see how closely our polling data estimated the actual election results.

So, without further ado, how did our model projections do?

Model: #1, raw election poll data (10/30 – 11/5)

Score: 96% accuracy (49 out of 51)

Weights: 0 (that’s right, NO weights) 

Wrong calls: 2 (Florida, Ohio)

Cool factor: We think this is cool because it's our raw data. No weights. No smoke. No mirrors. No liberal or conservative bias. Just raw responses from every American voter kind enough to fill out our poll in the week leading up to the election.
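For the curious, here's a minimal sketch, in Python, of how an unweighted model like this one works; this is not our actual pipeline, and the function and data names are hypothetical. Tally the raw responses in each state, call the state for whichever candidate leads, and score the calls against the results:

```python
from collections import Counter

def call_states(responses):
    """responses: iterable of (state, candidate) pairs, one per respondent."""
    tallies = {}
    for state, candidate in responses:
        tallies.setdefault(state, Counter())[candidate] += 1
    # Call each state for the candidate with the most raw responses.
    return {state: counts.most_common(1)[0][0]
            for state, counts in tallies.items()}

def accuracy(calls, actual):
    """Share of the 50 states plus DC called correctly (e.g. 49/51, about 96%)."""
    correct = sum(calls[state] == actual[state] for state in actual)
    return correct / len(actual)
```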

Model: #2, “how” model of poll data (10/30 – 11/5)

Score: 96% accuracy (49 out of 51)

Weights: 2 (data volatility, undecided voters) 

Wrong calls: 2 (Florida, Ohio)

Cool factor: We think this is cool because it suggests that the prejudice against internet samples is unfounded. This model used weighting to correct for random error the online medium can introduce, such as respondents disengaging partway through a survey. At the end of the day, though, it didn't change our raw predictions.
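As an illustration only (again, not our production code, and the weights are placeholders), weighting at this stage simply means each response contributes a fractional vote to its state's tally rather than a full one:

```python
def call_states_weighted(responses):
    """responses: iterable of (state, candidate, weight) triples. A weight
    below 1.0 might, say, discount a response from a volatile stretch of
    polling, and an undecided leaner might enter at partial weight."""
    tallies = {}
    for state, candidate, weight in responses:
        state_tally = tallies.setdefault(state, {})
        state_tally[candidate] = state_tally.get(candidate, 0.0) + weight
    # Call each state for the candidate with the largest weighted tally.
    return {state: max(tally, key=tally.get)
            for state, tally in tallies.items()}
```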

Model: #3, “who” model of poll data (10/30 – 11/5)

Score: 96% accuracy (49 out of 51)

Weights: 2 (education level, party identification)

Wrong calls: 2 (Florida, North Carolina)

Cool factor: We think this is cool because it shows that the representation of voters in our sample, although not perfect, is good enough. The one exception is Ohio, where the demographic weighting really helped us get the call right. On the flip side, adjusting for the demographics of turnout led us to predict the incorrect outcome in North Carolina, a state our raw data had right!
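For readers wondering what weighting on education and party identification looks like mechanically, here's a simplified cell-weighting sketch. Our actual adjustment may differ, and the population shares here would come from some external benchmark of the electorate:

```python
def cell_weights(sample_counts, population_shares):
    """sample_counts: respondents per demographic cell, e.g.
    ("college grad", "Democrat") -> 120. population_shares: each cell's
    assumed share of the electorate, summing to 1.0. Every respondent
    in a cell gets weight = population share / sample share, so
    over-represented groups count for less and under-represented
    groups for more."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}
```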

Also, overall, this model had the closest electoral vote number of any of our three models. And although we called two states wrong, our electoral vote estimate was actually closer to the actual tally (332/206) than those of the two big-league poll aggregators whose websites we were furiously checking. Nate Silver's FiveThirtyEight blog estimated 313/225 (although his forced-choice state calls were all correct), and Real Clear Politics was even further off at 303/235 (they were wrong with us on Florida).
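Converting state calls into an electoral vote estimate is just a sum over the states each candidate was called to win. A sketch, with a deliberately truncated vote table:

```python
# Illustrative, truncated vote table; a real run needs all 50 states plus DC.
ELECTORAL_VOTES = {"FL": 29, "OH": 18, "NC": 15}

def electoral_tally(calls, electoral_votes=ELECTORAL_VOTES):
    """Sum each state's electoral votes for the candidate it was called for."""
    totals = {}
    for state, winner in calls.items():
        totals[winner] = totals.get(winner, 0) + electoral_votes[state]
    return totals  # with a full table: e.g. {"Obama": 332, "Romney": 206}
```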

So, well done all of you respondents out there! Thanks for helping us get an accurate read on what you were thinking about this whole election. If you'd like to read about our Presidential Poll Tracker from start to finish, click here for the six-part series. And take a look here for a more in-depth analysis of our project. See you at the midterm elections in 2014.

Have any more questions about specific results or the methodology of our election project? Sound off in the comments below…

