Why did pollsters get the Israeli election so wrong?
In what seems to be a fight for the legitimacy of opinion polling itself, the pollsters claim there were mitigating circumstances behind the wide gap.
“You (the media) are judging it wrongly,” Tzemach claims. “The media looks at the gap in the number of seats between Zionist Union and Likud, but our forecasts were given for each party individually. By that measure, we weren’t far off – we said on Channel 2 that Likud would get 28 seats and it got 29 [now 30], and we said Zionist Union would get 27 seats and it got 24. Regarding all the other parties, we got the hierarchy and order right. Most of the important things were spot-on.”
According to her, the choice of voting stations for sampling was correct, as the actual results at those stations were very close to the national results. The only mistake, in her view, was closing the sampling booths at 8:30 P.M., even though the later the hour got, the stronger Likud became. “It may have been a mistake not to say that when we presented the sample results. At 11 P.M., when I got the results of our sample poll through closing time, we saw the gap had grown in favor of Likud. We sent it to the Channel 2 control room, but it didn’t get there,” Tzemach says.
Fuchs attributes the gap to three causes, none of which were under the polling companies’ control. One was a high proportion of people refusing to be sampled: Tzemach had a 15% refusal rate and his company had 30%, he said. Second was the sample stations closing before voting ended: his closed at 8:45 P.M. and Mina’s at 8:30 P.M., Fuchs says. And third: people lie.
However, Fuchs argues, “The deviation is entirely reasonable, statistically speaking. The whole deviation is eight seats across the 11 parties, meaning 0.7 seats per party. But since most of it was concentrated in the 5-seat gap between the two most important parties, it became a colossal political failure. If the deviation had been concentrated on the Arab list or Shas, nobody would have taken it so hard. The surveys we did (as opposed to the sample polls) that showed Zionist Union beating Likud were correct. They simply affected Netanyahu, who began running around and giving interviews – and consequently influenced voters. They were correct as of their publication.”
'Internet isn’t representative'
Prof. Avi Degani of the Geocartography Institute claims the mistake lies in integrating Internet surveys into the polling model. “All along, except in the last week, I was the only one saying that Likud was in the lead, not Zionist Union,” he says. “While the others surveyed by Internet, we did all our surveys by phone. The Internet does not represent the people of Israel, only the population of Internet users.”
It is true that his last survey, on Channel 1, showed 21 seats for Likud and 24 for Zionist Union, he concedes. “But 14% were undecided, and half of them said they would apparently vote Likud, which is another two or three seats, and I said that. Also, we forecast 11 seats for Naftali Bennett and he got eight, because right-wingers got nervous and moved to Likud. I was the only one predicting 27 seats for Likud throughout.”
Perhaps the whole system of sample polling needs revising?
“Nobody expected Netanyahu to get 30 seats and to take votes from Habayit Hayehudi and Yesh Atid,” says the owner of one surveying company. “I really don’t know how you go to bed with a sample showing a tie and wake up with a 6-seat difference. Maybe they don’t know how to sample.” Everybody has to recheck their systems, he says.
Some polling companies stay away from politics entirely and stick with commercial customers. One reason is that the usual assumption in polling, that people who don’t answer behave the same as people who do, doesn’t hold in politics, says Prof. Israel Oleinik, chairman and CEO of the Shiluv Millward Brown market research company. For instance, Haredim or Arabs may refuse to answer, or people may be ashamed to admit whom they’re really voting for.
“People will forget and move on,” says Oleinik. “The next time around, the polling companies will use exactly the same surveys and samples, and at most will say they improved their models. The bottom line is that mistakes can happen, and the thing is to try to prevent them in advance. But it isn’t always possible. That’s why I don’t do political polls.”
Apparently, for all its flaws, political polling is here to stay. Fuchs points out that in 1948, the Gallup poll predicted Dewey would beat Truman, but Truman won. “Until somebody finds a better method to measure public opinion, polls will continue, and I mean to carry them out,” he says. “You have to take into account that these things can happen.”
Polls also have an influence in their own right, points out Prof. Avraham Diskin of the Interdisciplinary Center in Herzliya. Earlier polls showed a chance that Herzog would be prime minister; then, right before the election, the polls showed a tie with Likud. The result was that more people felt impelled to vote, and supporters of the satellite parties began moving to the big ones, he says. The pollsters’ big mistake was to finish sampling early. “Traditionally, voters of lower socioeconomic status, who vote Likud, are less politically involved and come [to the polls] later. Left-wing voters, who are mainly of higher socioeconomic status, are more politically involved and vote first thing that day.”