In addition to the fall of several high-profile Progressive Conservative (PC) incumbents, one of the surprising victims of Alberta’s recent election has been the legitimacy of political polling data. Not only did these polls widely indicate a neck-and-neck race between the PCs and the Wildrose Party (WP); some even predicted a Wildrose majority. As it turned out, the PCs took more than three times as many seats as the WP (61 to 17), a result that no polls (and few pundits) predicted. Embarrassing for the WP, certainly. But embarrassing for the pollsters as well?
Perhaps. The National Post ran an article titled “‘We were wrong’: Alberta Election pollsters red-faced as Tories crush Wildrose” and quoted a pollster as stating “we were all wrong…we were all equally wrong”. Wrong, Ian Large of Leger Marketing argued, for two reasons: “strategic” voting on the part of traditionally non-PC voters, who voted PC to keep out the WP; and a large number of undecided voters who, on the day of the election, decided to vote PC or, at least, decided not to vote WP.
There’s a strange contradiction at work in this mea culpa and its explanation, however. On the one hand, Mr. Large acknowledges that he got it wrong; on the other, he provides a perfectly logical explanation for why events did not turn out as the polling numbers predicted. Fundamentally – assuming the data collection was technically sound – one cannot believe both ideas: if the pollsters were wrong, the outcome of the election cannot be explained by the large contingent of undecided voters. Conversely, if the election was decided by undecided voters, the pollsters were not wrong.
To better understand the contradiction at work here requires a bit more explanation of polls and of statistics more generally. Statistics – in this case, polling data – are only as stable (as “good”) as the social relations from which they draw their data. Certainly, most of us tend to understand the data collected by polls and then analyzed by statisticians as measuring behaviours and attitudes that are “out there” to be collected. In a memorable analogy, sociologist Bruce Curtis described this as the “mushroom picking theory”: information exists as mushrooms waiting to be picked by data collectors.
Of course, we know this is not true. The kinds of questions we ask, whom we ask, how we ask them, when we ask them, and how respondents feel at the time of the questioning all complexly shape the kinds of information we receive and thus the data that can be analyzed. In this way, statistics can be thought of as a sort of conceptual “fisherman’s net”. The kind of net you use powerfully shapes the kinds of fish you catch. Changing the size of the net, or where and when you fish, changes your catch. Statistics work in an analogous manner, but they may only do so when, to mix a metaphor, there are “fish to catch”.
If the predictive value of statistics works only as well as the “firmness” of the social relations they attempt to measure, then their inability to predict results in an unstable political climate is not “wrong”: “stable” data cannot be collected in an unstable social context. Polling data does not “produce” social relations (despite the growing body of scholars who have argued for its constitutive power); it is the other way around. As such, unstable social relations will produce unstable data, whether or not we position them as such.
There are many people who are likely embarrassed by the results of Alberta’s provincial election. The pollsters should not be among them.