The Conversation: What will pollsters do after 2016?

Elon's Jason Husser, director of the Elon University Poll and assistant professor of political science, writes about how the polling industry can improve its accuracy going forward.

By Jason Husser, Elon University

Clinton defeated Trump much like Dewey defeated Truman. Both election results were dramatic surprises because pre-election polls created expectations that didn’t match the final outcomes.

Many polls were very accurate. For example, the polling averages in Virginia, Colorado and Arizona were within 0.1 percent of the election outcome.

That said, many polls missed the mark in 2016. Polls of Wisconsin in particular performed very poorly, suggesting Clinton was ahead by 6.5 percent before her ultimate loss by 1 percent.

If polls are going to remain a major part of the democratic process both in the United States and globally, pollsters have a professional duty to be as accurate as possible.

How will the polling industry improve accuracy after the 2016 election? The first step is to identify sources of error in polling.

Potential sources of polling error

When poll results are reported, they come with a margin of error – saying the poll is accurate within plus-or-minus a few percentage points. Those margins are the best-case scenarios. They account for statistically expected error, but not entirely for several other sources of error inherent in every poll.
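As a rough illustration of where that statistically expected error comes from, here is a minimal sketch (assuming a simple random sample, which real polls only approximate) of how the familiar plus-or-minus figure relates to sample size:

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    Uses the worst-case proportion of 0.5 unless told otherwise; real polls
    adjust further for weighting and design effects, which widen this figure.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical statewide poll of about 1,000 respondents:
print(round(100 * margin_of_error(1000), 1))  # roughly 3.1 percentage points
```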

Chief among these sources are the questions we ask, how we collect the data, how we figure out whom to ask and how we interpret the results. Each of these deserves a look.

The first two categories – question wording and data collection – are likely not the source of the systemic problems we saw in 2016. For one thing, pollsters have good techniques for testing questions in advance and well-established standards for wording them. Interviewers may occasionally misread questions, but this is both rare and unlikely to be systematic enough to cause problems outside of a few surveys each election cycle.

Sampling errors

The nastiest of all errors for pollsters happen in sampling – determining which people should be asked the poll’s questions. These errors are both the hardest to detect and the most likely to cause major problems across many polls.

At the most basic level, sampling errors happen when the people being polled are not in fact representative of the wider population. For example, an election poll of Alabama should not include people who live in Mississippi.

It is essentially impossible to have a poll with perfect sample selection. Even with our best efforts at random sampling, not all individuals have an equal probability of selection because some are more likely to respond to pollsters than others.

Sampling errors could have crept into 2016 polls in several ways. First, far fewer people are willing to respond to surveys today than in previous years. That’s in large part because people are more likely to screen their phone calls than in the past.

Young people and those who aren’t interested in politics are particularly hard to reach. Those who did respond to pollsters may not have been representative of the wider group. Pollsters have ways to adjust their findings to account for this variation. One common technique is weighting. But these adjustments can still fall short: a single young black Trump supporter made a measurable difference in one poll because of the weight he carried.
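The core idea behind weighting is straightforward, even if doing it well is not. Here is a minimal sketch with invented numbers (not drawn from any actual poll) of how an underrepresented group gets counted up to its true share of the population:

```python
# Hypothetical numbers for illustration only.
population_share = {"young": 0.30, "older": 0.70}   # true shares, e.g. from census data
sample_share     = {"young": 0.10, "older": 0.90}   # shares among poll respondents

# Each respondent's weight is the ratio of population share to sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'young': 3.0, 'older': 0.777...}

# A single unusual respondent in a small, heavily upweighted group
# (like the young Trump supporter mentioned above) can move the topline noticeably.
```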

Who is a ‘likely voter,’ anyway?

General population surveys, such as those of all adult residents of a geographic area, are not particularly prone to sampling errors. This is because U.S. Census Bureau data tell us the characteristics of any given community. Therefore, we can choose samples and weight responses so that they reflect the specific population.

Election “horse-race” polls are more difficult, primarily because pollsters must first determine which people are actually going to vote. But voter turnout in the United States is voluntary and volatile. Pollsters do not know in advance how many members of each politically relevant demographic group will actually turn out to vote.

One way pollsters can seek to identify likely voters is to include several questions in the poll that help them decide whose responses to include in the final analysis. Though the big surprises on election night came from polls biased against Trump, polls were also biased against Clinton in states such as Nevada.

When looking back at 2016 polling problems, some pollsters may find that they were too restrictive in identifying likely voters, which often favors Republicans. Others may have been too lax, which generally favors Democrats. The challenge we will face, though, is that a likely voter screening technique that worked well in 2016 might not work well in 2020 because the electorate will change.
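To make the idea concrete, here is a toy sketch of a likely voter screen. The questions, scoring and cutoff are invented for illustration; no pollster's actual screen is this simple:

```python
def is_likely_voter(resp):
    """Toy likely-voter screen: score a respondent on a few self-reported items."""
    score = 0
    if resp.get("registered"):            # says they are registered to vote
        score += 1
    if resp.get("voted_last_election"):   # reports voting in the previous election
        score += 1
    if resp.get("interest", 0) >= 8:      # interest in the race, on a 0-10 scale
        score += 1
    return score >= 2                     # a stricter cutoff shrinks the assumed electorate

respondents = [
    {"registered": True,  "voted_last_election": False, "interest": 9},
    {"registered": True,  "voted_last_election": True,  "interest": 3},
    {"registered": False, "voted_last_election": False, "interest": 10},
]
print([is_likely_voter(r) for r in respondents])  # [True, True, False]
```

Raising or lowering that cutoff is exactly the "too restrictive" versus "too lax" choice described above, and the right answer shifts from one electorate to the next.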

Interpretation challenges

A major problem polls faced in 2016 was not in their data specifically, but in how those data were interpreted, either by pollsters themselves or by the media. At the end of the day, polls are but rough estimates of public opinion. They are the best estimates we have available, but they are still estimates – ballpark figures, not certainties.

Many people expect polls to be highly accurate, and they often are – but how the public often thinks of accuracy is different from how pollsters do. Imagine an election poll that showed a one-point lead for a Democrat, and had a margin of error of four percentage points. If the Republican actually wins the election by one point, many people would think the poll was wrong, off by two points. But that’s not the case: The pollster actually said the race was too close to call given typical margins of error and somewhat unpredictable undecided voters.
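To put numbers on that example (assuming, say, a 47 percent to 46 percent poll result, figures invented here for illustration): the uncertainty on the gap between two candidates is roughly double the margin of error quoted for a single candidate's share, so a one-point lead inside a four-point margin says very little about who is ahead.

```python
# Invented shares consistent with the example above: Democrat 47, Republican 46,
# with a reported margin of error of +/- 4 points on each candidate's share.
dem, rep, moe = 47, 46, 4.0

# A common rule of thumb: the uncertainty on the *gap* between the candidates
# is roughly twice the margin of error on a single share.
lead = dem - rep
lead_moe = 2 * moe

print(f"lead: {lead:+.0f}, plausible range: {lead - lead_moe:+.0f} to {lead + lead_moe:+.0f}")
# lead: +1, plausible range: -7 to +9  -> a 1-point Republican win fits comfortably inside it
```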

Organizations that aggregate polls – such as FiveThirtyEight, the Upshot and Huffington Post Pollster – have added to this tendency. They combine many polls into one complex statistic, which they then argue is more accurate than any one poll on its own.
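In its simplest form, aggregation is just an average that trusts some polls more than others; the real models are considerably more elaborate. A minimal sketch with invented polls for a single state:

```python
# Invented polls: (Clinton share, Trump share, sample size) for one state.
polls = [(46, 44, 800), (48, 43, 600), (45, 46, 1200)]

# Weight each poll by its sample size; real aggregators also weight by
# recency, pollster track record and methodology.
total_n = sum(n for _, _, n in polls)
clinton = sum(c * n for c, _, n in polls) / total_n
trump   = sum(t * n for _, t, n in polls) / total_n

print(f"aggregate: Clinton {clinton:.1f}, Trump {trump:.1f}")  # Clinton 46.0, Trump 44.7
```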

Those poll aggregators have been accurate in the past, which led the public to rely on them more heavily than it probably should have. Without an expectation of extremely accurate polling, the surprise of election night would have been far less dramatic.

Personally, I paid less attention to aggregators and more attention to a handful of high-quality polls in each swing state. As a result, I entered election night realizing that most swing states were really too close to call – despite some aggregators’ claims to the contrary.

Changes in the context of the race

Technically speaking, polls are designed to measure opinion at the particular point in time during which interviews were conducted. In practice, however, they are used to gauge opinion in the future – on Election Day, which is usually a week or two after most organizations stop conducting polls.

As a result, late shifts in public opinion won’t always be apparent in polls. For example, many polls were conducted before the announcements by FBI Director James Comey about Hillary Clinton’s emails.

A shift in public opinion after a poll is taken is not technically an error. But as happened this year, unpredictable events like the Comey announcements can cause polling averages to differ from the actual election outcome.

‘Secret’ Trump voters?

It will take time to assess the extent of supposed “secret” Trump voters, those people who did not appear in polls as Trump voters but did in fact vote for him. Pollsters will need several months to determine whether their apparent existence is due more to sampling errors, such as Trump voters being less likely to answer the phone, than to people being embarrassed about their vote intention. Still, pollsters need to do more to test this potential form of social desirability bias.

When the 2016 polling postmortem is done, I suspect we will find few “secret” Trump voters were lying to pollsters out of political correctness. Rather, we’ll discover a group of Trump voters who simply didn’t normally take surveys. For example, Christians who believe the Bible is the inerrant word of God are often underrepresented in surveys. It’s not because they are ashamed of their faith. It’s because they don’t like to talk to survey researchers, a form of sampling bias.

Polling after 2016

Pollsters were aware of the challenges facing them in the 2016 election season. Most notably, they identified declining response rates – fewer people willing to answer pollsters’ questions. They reported that concern, and others, in a poll of academic pollsters I conducted in 2015 with Kenneth Fernandez of the College of Southern Nevada and Maggie Macdonald of Emory University.

Many pollsters (73 percent) in our survey were using the internet in some capacity, a sign they were willing to try new survey methods. A majority (55 percent) of pollsters in our sample agreed that poll aggregators increased interest in survey research among the public and the media, an opinion suggesting a win-win for both aggregators and pollsters. However, some (34 percent) also agreed poll aggregators helped to give low-quality surveys legitimacy.

Many pollsters have embraced an industry-wide transparency initiative that will include revealing their methods for determining who counts as a likely voter and for weighting responses to reflect the population. The polling industry will figure out what happened in places like Wisconsin, but surveys are complex, and disentangling hundreds of them across 50 states will not happen overnight. The American Association for Public Opinion Research, the largest professional association of pollsters in the country, has already convened a group of survey methodologists to examine the 2016 results.

Polls remain a valuable resource for democracy. Without polls we would base our understanding of elections more on “hunches” and guesses drawn from rough trends. We would know little about why people support a given candidate or policy. And we might see more traumatic swings in the partisan composition of our leaders.

If political polls were weather forecasts, they would be good at saying whether the chance of rain is high or low, but they would not be good at declaring with confidence that the temperature will be 78 degrees instead of 75 degrees. In modern politics with narrow margins of victory, what causes someone to win an election is closer to a minor change in temperature than an unexpected deluge. If I’m planning a large outdoor event, I would still be better off with an imperfect forecast than a nonexistent perfect prediction.


This article was originally published on The Conversation. Read the original article.