You’re Fired! How Pollsters Failed in the 2016 Election and the Lessons for Survey Design

Posted by Meghan Sullivan on January 24, 2017

Less than a week ago, Donald Trump was sworn in as the 45th President of the United States. It was an unexpected outcome for those who followed the polls in 2016. Even on the eve of the election, the New York Times forecast gave Clinton an 84% chance of winning. Forecaster Nate Silver and the Princeton Election Consortium put her chances at 71% and 95-99%, respectively. So why did “serious predictors completely misjudge Trump’s chances of victory”? Experts point to several likely factors, including:

Polling Level

Many of the leading presidential polls were national. In other words, they polled voters from across the United States. But national popularity does not determine the presidency. Instead, what matters is the Electoral College outcome, determined on a state-by-state basis.

Of course, state-level polls also had their weaknesses. Due to limited budgets, the “vast majority of state polls [were] not done with scientific, random samples of residents”. Their sample sizes were “often much lower” than for national polls. (Moreover, some hypothesize that state-level polls in the Midwest failed to “properly control for the urban and rural difference in the electorate”.)

Polling Methodology

Years ago, pollsters would randomly call landlines listed in the phone book. Today, however, at least 47% of US households lack landlines, relying only on cellular phones. Omitting this group could skew survey results.

For this reason, some pollsters try to incorporate cell phone polling. Other companies, however, lack the resources to do so. After all, cell phone numbers cost money (they are seldom publicly listed). Calling cell phones also takes more time, as they are legally protected from autodialing.

Polling Assumptions

Pollsters often make assumptions about their sample. For example, they often estimate voter turnout by age, race, class, education level, and so on. But these estimates are not always accurate. In the 2016 election, “Trump may have succeeded in bringing out rural low education voters who usually don’t make it to the polls”. Furthermore, Clinton voters seem to have overstated their likelihood of voting.
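
As a simplified illustration of how a turnout assumption feeds straight into a poll’s headline number, consider the sketch below. The groups, support levels, and turnout shares are invented for the example, not taken from any 2016 poll:

```python
# Hypothetical two-group electorate: urban and rural voters.
# Suppose the raw responses show Clinton ahead among urban voters
# and Trump ahead among rural voters (invented numbers).
support = {"urban": {"clinton": 0.58, "trump": 0.42},
           "rural": {"clinton": 0.40, "trump": 0.60}}

def projected_margin(turnout_share):
    """Clinton-minus-Trump margin under an assumed turnout mix."""
    clinton = sum(turnout_share[g] * support[g]["clinton"] for g in support)
    trump = sum(turnout_share[g] * support[g]["trump"] for g in support)
    return clinton - trump

assumed = {"urban": 0.60, "rural": 0.40}  # pollster expects rural voters to stay home
actual = {"urban": 0.50, "rural": 0.50}   # rural turnout surges instead

print(f"Margin under assumed turnout: {projected_margin(assumed):+.3f}")  # Clinton ahead
print(f"Margin under actual turnout:  {projected_margin(actual):+.3f}")   # Trump ahead
# The same raw responses produce a Clinton lead under one turnout model
# and a Trump lead under the other.
```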

Pollsters also generally assume that voters are open about their voting preferences. But the top Trump pollster, Adam Geller, largely credits “undercover Trump votes” for Trump’s win. 

Exacerbating Factors

In recent years, poll aggregators have gained in popularity. By gathering and averaging polling results, they are supposed to “smooth out the kinks” of individual polls. However, “poll aggregators are only as good as the polls themselves”. When all the polls have a bias in the same direction, the aggregator will show—or amplify—the bias.
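
To see why averaging cannot wash out a shared bias, consider the rough simulation below. The margins, bias, and noise levels are hypothetical, chosen only to illustrate the point:

```python
import random

# Hypothetical state where the true margin is Trump +1, but every poll
# shares a 3-point pro-Clinton bias (e.g., from under-covering rural or
# cell-only voters). Each poll also has its own random sampling noise.
TRUE_MARGIN = 1.0    # Trump +1 (invented for the example)
SHARED_BIAS = -3.0   # every poll reads 3 points too favorable to Clinton
NUM_POLLS = 20

random.seed(42)
polls = [TRUE_MARGIN + SHARED_BIAS + random.gauss(0, 2.0) for _ in range(NUM_POLLS)]

aggregate = sum(polls) / len(polls)
print(f"True margin:    Trump {TRUE_MARGIN:+.1f}")
print(f"Poll aggregate: Trump {aggregate:+.1f}")
# Averaging cancels most of the random sampling noise, but the shared
# bias survives intact: the aggregate still shows Clinton ahead.
```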

Additionally, people have a tendency to interpret new evidence as confirmation of their existing beliefs or theories. This tendency, known as “confirmation bias”, may have led some analysts to ignore or downplay the (rare) polls that showed an advantage for Trump, the presumed “underdog”.

Lessons for Developing Op4G Surveys

The failure of the 2016 election polls offers lessons for survey development:

  • The more granularity, the better: If you seek the views of small groups within a large population, survey each subset. Op4G can help you target even niche demographic groups.
  • Strive for large, representative samples: Limiting sample size and scope to reduce costs can backfire; see the margin-of-error sketch after this list. Op4G's sizable panel can support your needs.
  • Supplement phone surveys with other methods: As of 2015, 85% of Americans used the internet, so internet surveys may help you extend your reach. The Op4G platform supports internet surveys, which members can complete on their desktop computers, laptops, or mobile phones. It also allows for follow-up questions via landline or mobile phone (for members who opt in).
  • Limit your assumptions to the extent possible: Ask more questions, if necessary.
  • Beware of confirmation bias: Avoid writing “leading questions” that prompt or encourage a desired answer.
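
On the sample-size point, the standard worst-case margin of error for a simple random sample gives a quick feel for how much precision is lost with small polls. A minimal sketch (ignoring design effects, weighting, and other non-sampling error):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (200, 500, 1000, 2000):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f} points")
# A 200-person state poll carries roughly a 7-point margin of error,
# while a 2,000-person sample narrows it to about 2 points.
```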

Need Help Getting Started?

Op4G is here to help. Our team uses proprietary technology to provide programming, hosting, and sample recruitment for quantitative and qualitative studies. Our unique approach to recruiting yields a highly engaged group of people who have a vested interest in their community.

Contact Op4G today to learn more about our capabilities. 

Photo: Geoff Livingston

Topics: Election, Polls, Market Research, Politics, Trump, Surveys

Written by Meghan Sullivan

Meghan works part-time for Op4G, assisting primarily with Op4G’s social media, blog, and monthly newsletter. Based in Ottawa, Canada, Meghan completed her Bachelor of Arts degree at Wellesley College and Oxford University. She then earned a Master of Public Policy degree at Harvard University’s Kennedy School of Government. In addition to her work at Op4G, Meghan is a Senior Policy Advisor. She is an avid athlete, playing both varsity basketball and softball in university. Meghan is also passionate about international and environmental issues and volunteers for several related non-profits.