Burning Question: Can we trust the polls?
During this election year, the media have deluged us with polling results. As the general election approaches, the assault will continue. But how do the pollsters come up with their numbers? Why are they often so accurate, and why do they sometimes get it wrong? We asked Professor of Political Science Joseph Klesner.
We can trust political polls, but only if we treat them as approximations of what will happen or, in the case of exit polls, what did happen at the real polls where you and I cast our ballots. If we understand what can go wrong in political polling, we can guard ourselves against too blithely assuming that what the media report from polls will be precisely accurate.
Two steps in the process of conducting surveys pose special challenges to the election pollsters: drawing the sample and creating a likely-voter model.
Pollsters want their sample to accurately reflect the target population. For predicting electoral outcomes, the target population is likely voters. However, we have no census of likely voters against which to compare our samples, only the general census of the population from which we extract the eligible voters.
Even drawing a sample of eligible voters is challenging if we decide to contact people by telephone, by far the most economical approach. It is easy to hang up on pollsters, and many people now use only cell phones. To address these challenges, pollsters make callbacks to non-respondents or add respondents who otherwise resemble the non-respondents. As a last resort, they adjust the survey results by weighting the sample after the survey is completed so that it reflects the actual population.
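To make that weighting step concrete, here is a minimal sketch of how a pollster might re-weight a sample so that its demographic mix matches the population. The age groups, shares, and support figures are invented for illustration; real firms weight on several variables at once.

```python
# Illustrative post-stratification weighting (all numbers are hypothetical).
# Each group's weight = population share / sample share, so groups the phone
# survey under-represents count for more, and over-represented groups for less.

population_share = {"18-29": 0.20, "30-64": 0.55, "65+": 0.25}   # from the census
sample_share     = {"18-29": 0.10, "30-64": 0.55, "65+": 0.35}   # who actually answered

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical unweighted support for Candidate A within each group.
support = {"18-29": 0.60, "30-64": 0.50, "65+": 0.42}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted   = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"Unweighted estimate: {unweighted:.1%}")  # reflects who answered the phone
print(f"Weighted estimate:   {weighted:.1%}")    # reflects the actual population
```

In this toy example the under-sampled young respondents pull the weighted estimate up by about two points, which is exactly the kind of correction weighting is meant to supply.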
Nevertheless, increasing rates of non-response (those who hang up) and concerns about not reaching some population segments at all (those who use only cell phones) should cause us to be cautious in interpreting polling results, because the samples may be modestly biased. We know that non-response among some segments of the white population has caused pollsters to overestimate the vote for African-American candidates in past biracial elections. As of this writing, Barack Obama may well be the Democratic presidential candidate, so this phenomenon could come into play.
Predicting which respondents will actually vote is especially tricky, yet it is critical to forecasting electoral outcomes. We tend to draw on past experience, asking which population groups voted at what rates in the last election. We can also ask respondents whether they plan to vote.
Neither strategy is without flaws. Because we regard voting as the most fundamental civic responsibility, even cynical respondents are likely to give the socially desirable response--they do plan to vote. Likely-voter models seek to adjust for over-reporting of turnout by using filters--each polling firm has its own formula--that discount the probability that members of particular respondent groups will actually vote, based mostly on historical experience. Young people vote at lower rates than retirees, to use an obvious example.
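As an illustration of how such a filter might work, here is a small sketch of a likely-voter screen. The questions, point values, and cutoff are invented; each firm guards its own formula, but most combine self-reported intent with past behavior in roughly this fashion.

```python
# Illustrative likely-voter screen (questions, points, and cutoff are hypothetical).
# Respondents earn points for answers that historically correlate with turnout;
# only those above the cutoff are counted as "likely voters" in the forecast.

def likely_voter_score(respondent):
    score = 0
    if respondent["plans_to_vote"]:          # self-report, prone to over-statement
        score += 1
    if respondent["voted_last_election"]:    # past behavior predicts future turnout
        score += 2
    if respondent["knows_polling_place"]:    # a sign of genuine engagement
        score += 1
    if respondent["interest"] >= 4:          # interest in the race, on a 1-5 scale
        score += 1
    return score

CUTOFF = 3  # hypothetical threshold for counting someone as a likely voter

sample = [
    {"plans_to_vote": True, "voted_last_election": True,  "knows_polling_place": True,  "interest": 5},
    {"plans_to_vote": True, "voted_last_election": False, "knows_polling_place": False, "interest": 2},
]

likely = [r for r in sample if likely_voter_score(r) >= CUTOFF]
print(f"{len(likely)} of {len(sample)} respondents pass the likely-voter screen")
```

Note that both hypothetical respondents say they plan to vote; the screen's whole purpose is to discount that socially desirable answer by weighing it against harder evidence.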
But candidates and campaigns complicate our efforts. A campaign strives to encourage its candidate's likely supporters to turn out on Election Day, and to discourage turnout by the opponent's likely supporters. (This is one aim of negative campaigning.) In addition, some campaigns seek to mobilize those who have been bystanders in the past.
Analyzing Mexico's historic 2000 election, I learned how likely-voter models can fail. That year, pollsters expected the ruling PRI's die-hard supporters to vote at the same rates they had in the past, but the PRI failed to turn out its base. Thus, based on those likely-voter expectations, pre-election polls gave the PRI's candidate an edge over Vicente Fox, but Fox won by a comfortable margin, in no small part by running Mexico's first-ever negative campaign, designed to encourage PRI voters to stay home.
In this election, will John McCain's candidacy discourage religious conservatives from turning out, or will they vote despite their reservations because they so fear a Democratic victory? The pollster can probe religious conservatives' probability of voting with filter questions, but he can never know for certain that he has it right.
We should also keep in mind that polling is a business. Every polling firm works hard to convince the media and other prospective clients that its own predictions are the most accurate. Unfortunately, not all journalists and pundits are savvy about the uncertainties involved in creating likely-voter models.
It's important to temper our trust in the media's reporting on polls with an awareness of polling's limitations. Most of the time, the polls get it right. But there is an art, as well as a science, in predicting electoral outcomes. And that always opens the door to uncertainty.
Joseph Klesner, a member of the Kenyon faculty since 1985, is an expert on Mexican electoral politics. He teaches courses in comparative politics, international relations, and Latin American politics, as well as interdisciplinary courses in international studies.