Venture bravely into the comment section of any article covering United States politics these days and you will be witness to a phenomenon I like to call “Poll Wars”.
While it’s long been accepted (if not entirely understood…) that data of any kind can be manipulated or packaged to prove essentially any point, polls somehow seem exempt from that suspicion. Support Bernie Sanders? You’ll find multiple polls that show him ‘surging’. Support Hillary Clinton? Every primary and caucus result thus far has been more or less expected, if you’ve been following the right polls.
There are numerous reasons why advance polling tends to produce such diverse results.
For a start, the media have a vested interest in a competitive race. Nate Silver over at FiveThirtyEight, for example, has pointed out that the media hand-picked outlier polls featuring overly male, overly white samples in order to argue that Bernie Sanders was surging in the Democratic primary.
However, in reality, Sanders had picked up about as much support as would be anticipated for a previously unknown candidate gaining better name recognition.
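To see how much sample composition matters, here’s a minimal sketch in Python – with entirely made-up numbers – of the kind of post-stratification weighting pollsters use to correct a skewed sample. An overly male sample flatters a candidate who polls better with men:

```python
# A minimal sketch of post-stratification weighting (made-up numbers).
# A sample that skews 70% male is reweighted to a 50/50 population,
# shifting the headline support figure for our hypothetical candidate.

sample = {
    # group: (share_of_sample, candidate_support_within_group)
    "male":   (0.70, 0.55),
    "female": (0.30, 0.40),
}
population_shares = {"male": 0.50, "female": 0.50}

# Raw estimate: just average support across the sample as collected.
raw = sum(share * support for share, support in sample.values())

# Weighted estimate: weight each group by its true population share.
weighted = sum(
    population_shares[group] * support
    for group, (_, support) in sample.items()
)

print(f"raw estimate:      {raw:.1%}")       # 50.5%
print(f"weighted estimate: {weighted:.1%}")  # 47.5%
```

Same responses, two different headlines – which is exactly why an outlier poll with a lopsided sample makes for such irresistible copy.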
Similarly, half the media machine has been spinning its wheels in an attempt to explain away or otherwise minimize the Trump phenomenon, whilst the other half has reported his unexpected rise to the forefront of a scattershot Republican field with a kind of maniacal glee.
But media spin is only half the problem.
The other half, I’d argue, is due to the inherent limitations of the methodology. For this reason, if you can stomach it, a deep dive into the flaws of political polling offers a great deal of insight into the failings of survey-based data more generally.
Don’t let anyone fool you: a truly representative survey sample is a bloody difficult thing to achieve. Despite some marketing firms offering up proprietary ‘corrals’ of consumers, and increasingly granular data sets derived from demographics, social media, and the like, the fact remains that it is virtually impossible to design a survey that can be meaningfully disaggregated by the numerous factors that impact decision-making.
We see this issue arise constantly with first-time voters – random sampling rarely enables political pundits to project their behavior, despite relatively large sample populations, because there are simply too many factors to take into account. Clearly, this is also a problem for brands trying to capture new customers or communicate with them more effectively.
Moreover, survey samples tend to overemphasize demographic and geographic factors at the expense of temporal ones.
For instance, a person who is currently stuck in an extra 30 minutes of traffic on a Monday morning is likely to respond somewhat differently to a question about train services than that very same person would when asked later that evening as they relaxed at home. The same is true if you asked them at the exact same time of day, but on a Friday when the roads are relatively clear.
However, because the majority of survey work (and political polling) is timed to take advantage of access – for instance, hours when people are expected to be home – we capture respondents at a very specific moment of their day. This, I would argue, may not reflect the mood they will be in when they are about to cast a vote or purchase a service; it only tells us what they are thinking and feeling in that very moment.
Right up until the actual vote, UK pollsters were projecting that the 2015 general election would result in a hung parliament. The now notoriously poor projections were linked to numerous issues with survey work and polling.
First and foremost, it was suggested that ‘Shy Tories’ had evolved into downright ‘Lying Tories’; though many pollsters took into consideration the tendency of conservatives to downplay the strength of their political affiliation, in 2015 it appeared that Tories were willing to out and out lie about how they would vote.
This is not an uncommon phenomenon. People know when their opinions and behaviors are unpopular with certain populations; call center workers tend to be young and, very often, minorities, so it’s extraordinarily easy to understand why conservatives might hesitate to declare themselves. Consumers are similarly aware of their audience when contacted by marketing firms or company employees to respond to surveys.
Normally, this kind of social desirability bias can be controlled for, at least in part, by weighting certain responses (though this requires significant longitudinal data – decades of voter records, say) or by triangulation. However, surveys have to be quite long if you expect respondents to miss the slightly reworded question you’ve included to catch them out in a fib.
Given the cost of administering an even remotely representative survey, many firms compromise on the length of the questionnaire, meaning it’s nearly impossible to do anything but speculate as to who may be being economical with the truth and, more importantly, why.
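For what it’s worth, the mechanics of the reworded-question trick are simple enough; it’s the questionnaire length, not the analysis, that costs money. Here’s a minimal sketch, assuming a hypothetical 1–5 agreement scale and made-up field names:

```python
# A minimal sketch of a paired-question consistency check. The field
# names and 1-5 agreement scale are hypothetical; real instruments use
# validated item pairs, often with one item reverse-coded.

RESPONSES = [
    {"id": 1, "q4_support": 5, "q17_support_reworded": 5},
    {"id": 2, "q4_support": 5, "q17_support_reworded": 1},  # suspicious
    {"id": 3, "q4_support": 2, "q17_support_reworded": 3},
]

TOLERANCE = 2  # max gap on the 1-5 scale before we flag a respondent

def flag_inconsistent(responses, tolerance=TOLERANCE):
    """Return the ids of respondents whose paired answers diverge."""
    return [
        r["id"]
        for r in responses
        if abs(r["q4_support"] - r["q17_support_reworded"]) > tolerance
    ]

print(flag_inconsistent(RESPONSES))  # [2]
```

The catch, as noted above, is that burying enough of these pairs in a questionnaire makes it long, and long questionnaires are expensive.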
In the case of the 2015 election, however, considerable effort went into trying to control for the ‘lying’ (or at least, the ‘shy’) Tories, and still the projections were wrong. Another factor we might then consider is the ‘Observer Effect’, wherein the very act of publishing information about consumer behavior or voter opinion changes that behavior and opinion. In 2015, it’s very likely that the specter of a hung parliament – created, let us be clear, entirely by publishing polling data in advance of the election – led voters to double down in an attempt to avoid another coalition government.
Similarly, consumers are known to change their responses when they know, or think they know, how others have responded to the same questions. This can work in a number of ways: 1) consumers want to ensure all opinions are represented equally, even if they do not hold them (the Devil’s Advocate Effect); 2) they want to be on the ‘right’ team (the Bandwagon Effect); or 3) they want to make a statement (let’s call this the ‘Advocacy Effect’).
The latter, for instance, helps to explain why Bernie Sanders does particularly well in online votes (where individual respondents can refresh their browsers innumerable times), but significantly less well in more controlled, scientifically rigorous surveys, where this behavior can be accounted for.
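How do the rigorous surveys account for it? One crude approach is simply deduplicating submissions that appear to come from the same device. A minimal sketch follows – the IP-plus-user-agent fingerprint is a hypothetical stand-in; real polls layer on cookies, rate limits, and panel registration:

```python
# A crude sketch of deduplicating online poll votes. Hashing IP plus
# user agent is a stand-in for a real device fingerprint; determined
# refresh-voters can defeat it, which is why serious polls go further.

import hashlib

votes = [
    {"ip": "203.0.113.7",  "ua": "Firefox/44.0", "choice": "Sanders"},
    {"ip": "203.0.113.7",  "ua": "Firefox/44.0", "choice": "Sanders"},  # refresh
    {"ip": "198.51.100.2", "ua": "Safari/9.0",   "choice": "Clinton"},
]

seen, tally = set(), {}
for vote in votes:
    fingerprint = hashlib.sha256(
        f"{vote['ip']}|{vote['ua']}".encode()
    ).hexdigest()
    if fingerprint in seen:
        continue  # drop repeat submissions from the same fingerprint
    seen.add(fingerprint)
    tally[vote["choice"]] = tally.get(vote["choice"], 0) + 1

print(tally)  # {'Sanders': 1, 'Clinton': 1}
```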
As Margaret Mead said, “What people say, what people do, and what they say they do are entirely different things”.
Sometimes it’s a matter of confirmation bias, other times it’s a fleeting mood, but most of the time, discrepancies between poll or survey data and actual voter or consumer behavior can be chalked up to how little time the average person spends reflecting on their opinions, preferences, and life choices. Simply put, the average person doesn’t invest much energy into bringing what they say and what they do into alignment.
The importance of this cannot be overstated. Even when we look at something as emotionally charged as a political election, where the outcome has a very real impact, and the vote itself is symbolically potent to the voter’s identity – predictive power is still limited.
Sometimes, we can try to control for this. For instance, it’s well known that Labour voters often respond strongly in polls and surveys, but then fail to pitch up for the actual vote. This same phenomenon exists in the United States.
However, this problem is exacerbated when emotional and logical arguments are not aligned. The arguments for EU membership, for instance, tend to be logically framed – EU immigration has a net benefit for the workforce, favorable trade relations contribute positively to the UK economy, and so on. Very often, these statements lean on statistics and are calmly delivered. Contrast this with anti-EU narratives full of vim, vigor, and vitriol – arguments meant to appeal to the fears of UK voters, and quite effective at it.
Any polling done for ‘Brexit’ needs to take this into account, but it is brutally difficult to differentiate between the voters who really will vote with their heads and not their hearts when the pressure is on in the booth, and those who will ultimately ‘go with their gut’. This, in part, helps to make sense of the Trump phenomenon in the United States, and why it’s been so exceptionally difficult for pollsters to make heads or tails of it.
This phenomenon is also particularly acute when asking people hypothetical questions about new products or services. When asked, “Would you be willing to change brands if a) the price were better, b) the packaging were nicer, or c) it came with loyalty points?”, it’s very easy for respondents to answer positively. It requires no behavior change, no evaluative work, no change-risk to say you will hypothetically do something.
Indeed, this is often a reflection of aspiration – what kind of person do you want to be? Maybe you wish you were the kind of person who buys the healthier snack, even when you know (or maybe you don’t…) that at the point of purchase, you’re going to buy the same salty crisps you buy every week.
So what’s a brand to do?
Whilst I don’t have any suggestions for how to improve political polling – other than to say ‘don’t bother’ – I do have one for brands: given the numerous factors that shape what people actually do when it comes time to buy, it is far more interesting to understand what those factors are than to ask people to anticipate their own future decision-making and behavior.
Before sinking cash into yet another customer survey that asks potential or current customers to guess at what kind of person they will be tomorrow, or a week from tomorrow, consider how you might interrupt the decision-making process as it’s happening in order to understand it.
One option, of course, is sited ethnography.
Capture people while they are in the process of making a decision, and get them to reflect on what it is they are weighing up (or not) as they do it. You won’t have thousands of respondents, but the results will be far more powerful.
As the political pundits say – the only poll that matters is the one on Election Day.
Melyn McKay is a partner with Monticello LLP, a socio-cultural anthropologist and a contributor to The Library of Progress.