This is a volatile election year, to say the least. The two major-party candidates are far less than perfect, routinely commit gaffes (or perceived gaffes), and have been hurt by a variety of negative disclosures and actions. Two other challengers have gained a degree of attention and apparent support not seen since Ross Perot’s presidential runs in the 1990s. Meanwhile, mistrust of the establishment press is at or near an all-time high, and several journalists have publicly decided that the idea of even trying (or pretending) to report in a fair and balanced manner is not appropriate this year.
In this environment, the nation’s pollsters, who have seen huge prediction failures during the past several years — virtually all understating support for conservative candidates and causes — still expect the public to believe that the tiny percentage of contacted people who actually complete their surveys and interviews reflects the opinions of everyone else.
The experience of the past two years, and the wide variations seen in the 2016 presidential election polls thus far, make a mockery of that expectation.
After the 2014 elections, I wrote (bolds are mine throughout this post):
Despite all of their supposed science, improved methodologies, and sophisticated turnout models, the nation’s pollsters have just suffered through their worst midterm elections drubbing in 20 years. The last time they were off this badly was when they woefully underestimated Republican gains in the Newt Gingrich “Contract with America” midterms of 1994.
In this year’s U.S. Senate races, pre-election “tossup” predictions really meant “comfortable Republican wins” in three instances — Georgia, Iowa and Kansas, where Republican victory margins were eight, nine, and 11 points, respectively. Four of the others — Alaska, Colorado, Louisiana, and North Carolina — went the GOP’s way, or appear destined to. The Democrats’ sole tossup triumph was in New Hampshire. Additionally, soon-to-be Senate Majority Leader Mitch McConnell’s race in Kentucky and the Arkansas Senate contest were both supposed to be fairly close. Instead, they were 16-point and 17-point blowouts, respectively.
… The polling fails in governors’ races were in some respects even worse, especially since two ordinarily solid blue states (Maryland and Illinois) went red.
In 2015, though there were far fewer significant contests, polls in several important ones also poorly predicted actual outcomes:
… (In the Kentucky Governor’s race) the pre-election polling average at RealClearPolitics.com had Republican Matt Bevin trailing Democrat Jack Conway by an average of three points. Bevin rolled up a nine-point victory margin accompanied by an unheard-of near-sweep of statewide offices by the GOP.
In Ohio, Issue 3, the “ResponsibleOhio” initiative (which) would have legalized recreational marijuana use, … pre-election polls indicated either that the election was too close to call or that the ballot measure had the upper hand. … The initiative went down in flames by 28 points.
… In Houston, the city’s horribly misnamed “Houston Equal Rights Ordinance” had a nine-point lead in the polls in mid-October.
Closer to the election, “HERO” was either “too close to call” or losing slightly …
HERO failed by over 20 points.
UK pollsters have ended up with egg on their faces twice in the past two years. This past June, in the referendum on whether or not the UK should stay in the European Union, “the vast majority of polls predicted the remain side would prevail, however the final results gave the leave side a victory margin of more than one million votes.” A year earlier, in May 2015, polls said that parliamentary elections were too close to call. Instead, the Conservatives won “a clear 15-seat working majority.”
A post-mortem study of the polling after the May 2015 debacle concluded that “the emerging upshot is that the (polling) companies are going to have to be more imaginative and proactive in making contact with – and giving additional weight to – those sorts of respondents that they failed to reach in adequate numbers in 2015.” Specifically, as summarized by the UK Guardian, the problem was “pollsters’ failure to reach enough Conservative voters.”
In the U.S., according to a June 2015 article in the New York Times, the poll completion rate, which fell from 36 percent in 1997 to 9 percent in 2012, declined to just 8 percent in 2014. In August 2014, pollster Nate Silver wrote that “Even polls that make every effort to contact a representative sample of voters now get no more than 10 percent to complete their surveys.” It’s hard to imagine that these dismal rates have rebounded at all in the past two years.
The premise that the 90 percent to 92 percent who don’t respond or fail to finish surveys hold the same general opinions and preferences as the 8 percent to 10 percent who complete them, already intuitively shaky and documented as a major reason for UK polling failures, is arguably less tenable because of the nature of this year’s U.S. presidential race.
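The arithmetic behind that shakiness is straightforward. A minimal sketch (all specific numbers below are hypothetical, chosen only to stay near the 8-to-10-percent completion rates cited above) shows how differential nonresponse can skew a poll even when the people contacted are a perfect random sample:

```python
# Illustrative sketch, hypothetical numbers: if one candidate's
# supporters complete surveys at a lower rate than the other's,
# the completed-survey sample no longer mirrors the electorate.

def observed_share(true_share_a, rate_a, rate_b):
    """Share of completed surveys favoring candidate A, given A's
    true support and each side's survey-completion rate."""
    responders_a = true_share_a * rate_a
    responders_b = (1 - true_share_a) * rate_b
    return responders_a / (responders_a + responders_b)

# A true 50/50 electorate, but A's supporters complete surveys at
# 6% while B's complete at 10% (an 8% blended rate overall).
skewed = observed_share(0.50, 0.06, 0.10)
print(round(skewed * 100, 1))  # A appears to have only 37.5% support
```

In other words, a completion-rate gap of a few percentage points — invisible to the pollster unless response rates are tracked by group — is enough to turn a dead heat into an apparent double-digit lead.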
Late-September and early-October national polls, when compared to polls during the same period in 2012, reflect this shakiness.
The wider variation in this year’s polls may have a lot to do with the growing distrust of the “mainstream media,” especially given that Donald Trump has propagated the “media bias” narrative as consistently as any presidential candidate ever has.
In 2012, one of nine polls listed at Real Clear Politics which had an end date of September 30 through October 4 showed President Obama in a tie with Mitt Romney, while the remaining eight gave Obama leads of between 1 and 7 points. This year, the analogous range goes from Hillary Clinton up by 7 to Donald Trump up by 4. The RCP list has just one positive poll for Trump, but inexplicably excludes Rasmussen this year after including it in 2012. Rasmussen has shown tiny leads for each major candidate in the past several days. UPI, which RCP has also chosen to exclude, currently shows Trump with a 2.5-point lead.
With the exception of Fox, most of the polls showing Mrs. Clinton with a 5-point or greater lead are clearly associated with left-leaning establishment press organizations, e.g., CBS, CNN, NBC, etc. (One of them, Reuters, has “reworked” its polls twice this year when they seemed to show supposedly disproportionate support for Trump.) Given that conservatives are far more disapproving of the press and feel that it’s biased against them, it’s quite reasonable to believe that conservatives contacted by those organizations are far more likely than moderates and leftists to refuse to participate. (Even Fox could be explained away by the fact that its viewers have learned not to trust any polling attempt.)
Rasmussen and United Press International, two of the three polls showing Trump ahead, don’t suffer from an obvious media-association problem (almost no one knows who UPI is). Whether the association problem exists in the LA Times/University of Southern California poll, which currently has Trump ahead by 3.6 points, would depend on how its interviewers present themselves. The “I don’t talk to anyone in the media” effect may not be present if they describe themselves as being from USC, or lead with their USC association.
Pollsters routinely fail to disclose the percentage of those they contact who complete their surveys. The reasons the UK post-mortem study cited to explain why conservatives were not sufficiently included as respondents would seem to have relevance here in the U.S.:
The oldest voters: the over-70s, who broke heavily for the Tories (i.e., Conservatives), were not reflected in YouGov’s online internet panels.
Young non-voters: the under-30s generally lean left, but very often fail to turn out on polling day. The pollsters, however, reached an atypical group of youngsters, who were unusually engaged with politics and committed to voting.
Busy voters: in the face-to-face British Social Attitudes survey, Labour was six points ahead among respondents who answered the door at the first visit, whereas the Tories enjoyed an 11-point advantage among interviewees that required between three and six home visits. Even after adjusting for social class and age, those easy-to-reach voters are less Conservative than the “busy” respondents the pollsters have to work hard to chase.
There also may be a time and cost constraint problem at work here. Pollsters can’t afford to try to contact interview subjects four or more times. At some point, many of them may say “the heck with it” — which might explain why so many polls are more heavily weighted toward liberals and leftists who are more reachable respondents.
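The “busy voter” figures above can be turned into a rough sketch of what skipping repeat visits costs. The two margins come from the British Social Attitudes numbers quoted earlier (Labour +6 among first-visit respondents, Conservatives +11 — i.e., Labour −11 — among those needing three to six visits); the 40 percent hard-to-reach share is purely an assumption for illustration:

```python
# Hypothetical sketch of the hard-to-reach effect: easy-to-reach
# respondents lean one way, hard-to-reach respondents the other, so
# a sample built only from first-contact successes is biased.

def blended_margin(easy_margin, hard_margin, hard_fraction):
    """Net margin (percentage points) when hard_fraction of the
    electorate can only be reached via repeat contact attempts."""
    return easy_margin * (1 - hard_fraction) + hard_margin * hard_fraction

# Labour +6 among first-visit respondents, Labour -11 among those
# needing 3-6 visits; assume 40% of voters are hard to reach.
print(round(blended_margin(6, -11, 0.40), 1))  # ~ -0.8: near tie
print(round(blended_margin(6, -11, 0.0), 1))   # 6.0: easy-only sample
```

Under those assumed numbers, a pollster who gives up after one contact attempt would report a comfortable Labour lead in what is actually a near-tied race — essentially the May 2015 miss.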
All of the serious polling failures in the U.S. in 2014 and 2015 cited earlier underestimated support for relatively more conservative candidates and issue positions. To my knowledge, no major poll discloses its overall response rate. They should. Let news consumers decide whether a poll is achieving enough success in finding respondents, whether those respondents really reflect the general population, whether the poll deserves the respect it seeks — and how much readers should mentally adjust the poll results in light of the response rate reported.
Until pollsters routinely disclose on a poll-by-poll basis how many people they had to contact to get the number of responses they finally obtained, my admonition after their 2014 debacle stands: “If they’re right from now on, it will only be by accident.” As long as they continue to be so wrong in underestimating conservative strength and don’t act on the problem, the default assumption has to be that their polls are being conducted to influence elections and not to genuinely measure voter sentiment.
Cross-post pending at NewsBusters.org.