The real "crisis" in polling
Issue polling is difficult and imprecise. But if anything, the crisis is in political analysis
Dear reader,
I joined this week’s 538 politics podcast to discuss the alleged “crisis” in issue polling — a case argued recently by New York Times chief political analyst Nate Cohn. In an article, Cohn writes that “there’s a case that ‘issue’ polling faces a far graver crisis than ‘horse race’ polling” because — as he sees it — pre-election polls in 2022 did not give him issue-based reasons to expect that Democrats were going to do as well as they did in the House or Senate. The issue polls, per this account, failed him. He writes:
But although [2022 horse race] polls performed well, they simply didn’t explain what happened. If anything, the polls were showing the conditions for a Republican win. They showed that voters wanted Republican control of the Senate. They showed that a majority of voters didn’t really care whether a candidate thought Joe Biden won the 2020 election, even though election deniers wound up being clearly punished at the ballot box. Voters said they cared more about the economy than issues like abortion or democracy, and so on.
The Times/Siena polling wasn’t alone in this regard. Virtually all of the major public pollsters told the same basic story, and it’s the opposite of the story that we told after the election. If we judge these poll questions about the issues by the same standard that we judge the main election results — a comparison between the pre-election polls and what we believe to be true after the election, with the benefit of the results — I think we’d have to say this was a complete misfire.
Let me say right off the bat that I do believe there are problems with issue polling. I even think the steps Cohn later proposes for improving issue polling at the Times will improve how the press covers public opinion. But on the matter of whether this evidence constitutes a “crisis” in polling, I disagree. In the podcast I detail four major objections to Cohn’s argument:
Issue polls are non-falsifiable. We do not experience “issue poll” opinions in reality — as we do for “horse race” opinions — and so have little quantitative basis on which to grade their accuracy. Cohn actually concedes this point in the text, but brushes off the caveat to engage in what is really an even more non-falsifiable argument about his own personal poll-informed worldview from 2022. This is not strong footing for the argument.
National polls do not meet voters where they are. If you do want to evaluate the merits of the evidence, consider this: State-level polls in places like Michigan, Pennsylvania, Arizona and Georgia in 2022 revealed a portrait of public opinion that was much closer to the revealed issue priorities of the electorate, according to Cohn. So if we use a different type of “issue” poll, we come to a different conclusion. Surely a “crisis” would be broader than just national issue polls.
Is this a crisis of issue polls, or of punditry and political analysis? Whatever indictment Cohn is issuing here is not really against issue polling as a tool, but rather against a particular form of analysis based on particular poll questions about particular opinions about one single recent election. Other polls — about abortion ballot referendums in Michigan, say, or about real-life election deniers on the ballot — gave us reason to believe that Democrats would be propelled by the issue where it was most salient.1 Indeed, many other analysts came to different conclusions about the election than Cohn did using the same issue polls.
“Issue polls” are not set up for this. Because of fundamental limitations on survey research, national toplines cannot give us the granular insights into voter psychology that the national political press asks of them. Other types of polling — such as experiments or “deliberative” polling — may do a better job, but they still face shortcomings. There is a long history of political science research into things like “non-attitudes,” response error and question-wording/priming effects, for example. Surely, too, most survey researchers would recoil at the way many journalists paint with a broad brush about politics using single toplines from so-called “issue” polls. At a certain point, we’re just abusing the tool — using a screwdriver to split wood.2
I want to be clear: None of this is to say that issue polling doesn’t face challenges. Indeed, the landscape of issue polling is particularly fraught with partisan advocacy organizations and biased surveys. But these problems are well understood by high-quality pollsters, even if the press gets things wrong. If we do have worse signals about public opinion now than we did in the past (and I am not convinced that we do!) then the people misusing the tool to craft their narratives bear some of that blame.
. . .
These points echo many themes from my book and the archives of this newsletter, so I thought I’d flag the pod for you. FYI, we talk about Nikki Haley’s presidential campaign first, so if you want to skip that, fast-forward to about the 36-37 minute mark.
Meanwhile, I have been very busy recently with a few projects related to pollster accuracy and transparency, and the fruits of my labor are nearly ready to share. More soon.
Elliott
1. Note that this is what Zaller’s RAS model (1992) would lead us to expect from polling different voters about the same issue across geographies.
2. Natalie Jackson, of GQR Research, made a related point in a recent column for the National Journal.
I really appreciated your comments on the 538 podcast about this topic — salience to the election at hand is key. I’m a volunteer grassroots activist in the NY19 congressional district and have been leading canvassing teams contacting ‘drop off’ Dems and ‘high scoring’ Dem friendly unaffiliated voters all spring and summer.
In that context — people know that I’m canvassing for Dems & giving them election info — when asked the open-ended question “what issues concern you,” I’m very consistently seeing these issues as the most frequently mentioned by all ages of rural/exurban/town voters: Medicare/Social Security (I think that this is a bit of a stand-in for all govt programs of this type, incl the ACA), Women’s Reproductive Freedom/Abortion, Climate, Housing, and to a lesser degree, Inflation/Economy. A good sign for Dems in my opinion is that these voters don’t trust Republicans on M/SS & ACA, Abortion and Climate. They proactively tell us this — and that they don’t like extremist Republicans. Many were willing to sign petitions to direct Congress not to cut M/SS and in support of Biden’s plan for M/SS solvency.
In contrast to 2022, Dems here won many local elections this year due to (still looking at this) what appears to be increased Dem turnout and somewhat decreased R turnout — in some cases by one or two votes at the local level! I do think — no data, just my gut feeling based on door knocking — that press coverage of how right-wing religious the new GOP House Speaker is, hitting right around our early voting period, DID impact Dem voters, encouraging them to turn out and make some sort of stand.
I do wonder a bit if the nationwide flip in Dem turnout dynamics is related to who’s now in the Dem coalition — folks more likely to vote — but I’m not sure that applies in my area, as college grads are a pretty low percentage of the population. Will be taking a closer look at town-by-town turnout in Dec.
I know that what I’m sharing is just a slice of life, but maybe it’s helpful as you analysts think about what questions to ask…
Compared with the attention given to sampling, survey research is blind to what it is doing as a social process built upon the social processes going on in society. Hence, when the underlying modes of discourse and signaling in society change, survey researchers are clueless about what is happening to their attempts to elicit meaningful responses. Nowhere is this truer than for issue polling.