In a strained attempt to explain the failure of pollsters to predict the election results yesterday in Great Britain, pollsters and pundits seem unable to see the elephant in the room that explains their problems.
And what is that elephant? Take a look at this list of bad polling predictions provided by Nate Silver, the mainstream media’s big polling guru because he correctly predicted both Obama victories:
- The final polls showed a close result in the Scottish independence referendum, with the “no” side projected to win by just 2 to 3 percentage points. In fact, “no” won by almost 11 percentage points.
- Although polls correctly implied that Republicans were favored to win the Senate in the 2014 U.S. midterms, they nevertheless significantly underestimated the GOP’s performance. Republicans’ margins over Democrats were about 4 points better than the polls in the average Senate race.
- Pre-election polls badly underestimated Likud’s performance in the Israeli legislative elections earlier this year, projecting the party to win about 22 seats in the Knesset when it in fact won 30. (Exit polls on election night weren’t very good either.)
Does anyone notice a trend? I could also reference other elections that pollsters badly predicted, such as the Sandinista defeat in Nicaragua in 1990, the Republican victory in 1994, Bush’s victory over Kerry in 2004, and practically every vote for or against the European Union. And there are others. For a bunch of so-called intellectuals who claim to be experts in predicting human behavior, they seem very oblivious to the obvious.
The obvious is that almost every time pollsters have gotten it wrong, they have gotten it wrong by favoring the liberal side in the election. Sometimes they came close. Sometimes, like in 2006 and 2008, it turned out that their predictions were right and the left won handily. But in almost every case in the past three decades, polling has consistently overestimated the liberal/leftwing vote and underestimated the conservative/rightwing vote.
The easy, cheap, and paranoid explanation is that most of the pollsters are liberal and that they are really doing push-polling for the left. By portraying the left as always in the lead, they hope to help the left win elections. Sometimes their push-polling is simply right. Sometimes this push-polling actually helps the left win. And sometimes, more often than not, the push-polling gets it spectacularly wrong.
While I do think this paranoid explanation actually applies to a large number of pollsters, who really are Democratic operatives in disguise, I also think it is not the main explanation. What I think is really happening is that most of these pollsters live in the modern cocoon-like leftwing intellectual community. No one they know is conservative. Nothing they read is conservative. They really have no idea that a large conservative majority exists that disagrees with them. As New Yorker film critic Pauline Kael supposedly said after Nixon won in 1972, “How could Nixon win? No one I know voted for him!” (The actual quote is different, but this inaccurate quote is considered so believable because it illustrates the point so well.)
The pollsters therefore are too easily prone to dismiss data that favors the right, and take seriously all data that favors the left. The result is that they are often caught with their pants down, surprised by election results that should not have surprised them.
Yet as obvious as this is, based on the analysis at the linked article, they still refuse to see it. Trapped in their liberal bubble, they seem to have a genetic imperative that prevents them from leaving that bubble to entertain different perspectives.
Expect them to get more elections wrong in the coming years.