NOTE: This was previously a gated post for subscribers only. I have unlocked it on request from several readers.
Let’s talk briefly today about pollster quality.
Whether or not a poll is conducted properly is, technically, a matter of analyzing a set of specific scientific questions about how the survey was conducted. (We could also get into analyzing track records quantitatively, as FiveThirtyEight does, but that is not a holistic metric and misses key details, particularly for new pollsters. More on that another time.)
However, while it is necessary to parse methodological details to tell a great poll from a good poll, we can use some simple heuristics to tell which ones are, frankly, bad.
For example, here is a screenshot of the methodology for a good-to-great poll, the latest New York Times/Siena national survey:
Here is a (less detailed) methodology for another good poll, the latest Monmouth University poll in Pennsylvania:
And here is the methodology of a bad poll:
In this case, the heuristic is transparency. Transparency matters for two reasons. The first deals with motivation: good (public) pollsters have little to hide about their methods. They survey the public to capture the general shape of opinion and are interested in developing and sharing accurate methods. When a pollster refuses to publish a detailed account of its methods, that suggests it may have different intentions, or may not be doing things correctly.
The two good pollsters above also share a plethora of crosstabs on their polls, showing breakdowns of how different groups feel about elections, parties, candidates and issues. And this brings us to the second dimension of transparency: it lets us validate that the poll is aggregating opinion correctly under the hood, i.e., that the so-called "data-generating process" (DGP) is working as it should.
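One simple check that published crosstabs make possible: the subgroup estimates, weighted by each subgroup's share of the sample, should recombine into the published topline. Here is a minimal sketch of that arithmetic; all the numbers (the age groups, shares, and support figures) are hypothetical, not taken from any real poll.

```python
# Hypothetical crosstab: age group -> (share of weighted sample,
# proportion supporting Candidate A). Purely illustrative numbers.
crosstab = {
    "18-34": (0.28, 0.61),
    "35-49": (0.24, 0.52),
    "50-64": (0.26, 0.47),
    "65+":   (0.22, 0.42),
}

def implied_topline(crosstab):
    """Weighted average of subgroup estimates, normalized by total share."""
    total_share = sum(share for share, _ in crosstab.values())
    return sum(share * support for share, support in crosstab.values()) / total_share

published_topline = 0.51  # the pollster's reported topline (hypothetical)

gap = abs(implied_topline(crosstab) - published_topline)
print(f"implied topline: {implied_topline(crosstab):.4f}, gap vs published: {gap:.4f}")
```

If the implied topline and the published one diverge by more than rounding error, either the weights or the subgroup numbers don't mean what they appear to mean, which is exactly the kind of thing you cannot check when a pollster publishes no crosstabs at all.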
In the case of the Times and Monmouth polls above, we can validate that we are getting a good portrait of public opinion because we see that subgroup estimates make sense. But this is not the case, for instance, with Trafalgar’s polls:
(Here is more validation for this.)
So, you could look at how a pollster's toplines have compared to election results historically. Or you could (try to) look under the hood, which is where the problems with this poll and other bad polls really become apparent. And if you can't look under the hood, that is itself a big red flag.