How Twitter DIDN’T Predict the Iowa Caucus Outcome

Oh man, do I love some skepticism, and Trilogy Interactive’s Will Bunnett and Steve Olson have a healthy dose of it for us today. Remember those stories about social media “predicting” the outcome of this year’s Iowa Caucus? Apparently, they very much needed some cold water poured on ’em (and you wonder why I added so many caveats to the numbers we published here). Take it away:

(Mis)reading the Twitter and Facebook Tea Leaves

By Will Bunnett and Steve Olson
Infographic by Maureen Noone
Originally published on TrilogyInteractive.com

With all the excitement around the Iowa caucuses in New Media Land, you could be forgiven for thinking the biggest contest of the night was seeing who could most convincingly predict the results on Twitter and Facebook. As Mashable asked, “Did Twitter Predict the Iowa Caucus Better Than Pundits?”

After looking at several models, we found that the answer is, unfortunately, no.

Several groups got in on the fun of trying, though:

  • Social media monitoring agency Ensomo looked at social media mentions, likes, and retweets of the GOP candidates, from December 23-30 (published on Epolitics.com).
  • SocialBakers looked at Facebook’s “people talking about” metric in the week leading up to January 2, and the total number of Facebook fans.
  • Sociagility used its proprietary PRINT score from mid-December.
  • Globalpoint looked at the total number of Twitter mentions about each candidate in the final week of December.

None of these metrics came even close to a significant correlation with the final caucus results, with one exception: Globalpoint, whose suspiciously strong correlation of 0.99 was almost perfect, well ahead of the traditional gold standard, the Des Moines Register's poll, which correlated with the final results at 0.86.

However, as Globalpoint themselves point out, their data appear to have been incomplete. Due to the high volume of Ron Paul mentions, Twitter withheld two full days of Paul tweets from Globalpoint's dataset. When we substituted higher estimates of Paul's tweet volume to compensate for the missing data, the Globalpoint model's correlation fell into the same range as the others.
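To see why a single missing-data gap can make or break a correlation like this, here is a minimal sketch. All of the vote shares and mention counts below are made up for illustration; they are not Globalpoint's actual figures or the real caucus results. With only a handful of candidates, changing one candidate's count can swing a Pearson correlation from near-perfect to middling:

```python
# Illustrative only: how fragile a Pearson correlation over a handful of
# data points is. Every number below is hypothetical, not real study data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical vote shares (%) for five candidates.
votes = [25, 25, 21, 13, 10]

# Hypothetical mention counts with one candidate's tweets undercounted
# (say, two days of data withheld). These happen to track the votes well.
mentions = [50000, 48000, 42000, 26000, 20000]
r_truncated = pearson(votes, mentions)

# "Fill in" the gap with a higher estimate for the third candidate, and
# the near-perfect correlation collapses.
corrected = list(mentions)
corrected[2] = 120000
r_corrected = pearson(votes, corrected)

print(round(r_truncated, 2), round(r_corrected, 2))
```

The point isn't the particular numbers; it's that with so few data points, one candidate's count dominates the result, which is why an incomplete dataset can produce a deceptively impressive 0.99.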

This isn’t that surprising. Social media are real-time, but social media analytics are time-intensive. Most of the data in the models we looked at were at least a week old, if not more. The Register poll on caucus eve may have been successful, but traditional polls from a couple weeks earlier showed a strong but eroding Gingrich lead, and would not have correlated closely with the final tallies, either.

Additionally, hardly any of the social media metrics we tested were filtered by state, and volume was low — as a TweetReach analyst told Dave Copeland of ReadWriteWeb.com, “We track more tweets in an hour about a single TV show than we have in five days about all nine candidates.” Granted, it can be very difficult to apply reliable and comprehensive location filters when collecting social media data, so we don’t fault anyone for trying. But there’s a reason people don’t generally use national polls with low sample sizes to predict state races.

We here at Trilogy Interactive would be the first to tell you that a strong social media campaign is critical to success for a modern campaign. But its power hinges on proper understanding of its capabilities. Twitter and Facebook “talking about” traffic do not appear to be capable of predicting caucus outcomes any more than gross Facebook followers can, as our research after the 2010 elections showed.

A strong social media campaign helps mobilize, persuade, shape the narrative, and raise money. But if we single out one dimension and attempt to relate it to actual votes cast, it looks deceptively meaningless. We hope that problems with the predictive models we’ve seen so far will not deter campaigns from fully engaging social media in proper contexts.

Written by
Colin Delany
1 comment
  • The purpose of the Sociagility study was to see how the candidates were performing as at 21 December 2011 using our proprietary measure of social media performance normally reserved for brands. In doing so, we did find a statistically significant positive correlation between performance and voting intention as measured by Public Policy Polling data around the same time.

    However, it would have been foolish to use this as a basis to predict the final result, so we didn’t.