10 Comments
Sep 28, 2023 · Liked by G. Elliott Morris

Hi Elliott,

Great article on breaking down polls. I hope you can bring more data-based journalism to a broader audience at ABC/538.  

I hope you are doing well and everything is going well at 538!

-Elliot

author

Hi Elliot,

Thanks! Miss your comments. Hope you're doing well.

-Elliott


Elliott, thanks for this article. I am also interested in a candidate's "internal polling," which we often read shows different results from "public polling." Can you describe the main differences between the two? Or direct me to a good read that does?


A few things I am confused by here:

1. How much a poll should update a polling average depends solely on the poll's precision (the variance of its result), assuming the poll is sampled independently of the other polls in your data (i.e., no herding). It's true that you'd expect polling firms that are consistently far from the average (in both directions), or individual polls that are, to have lower precision than polls closer to the average. But I think you get a far better estimate of precision from pollster ratings and sample size than from each individual poll's distance from the average, and basing it on the latter isn't really a Bayesian way to do this. In addition, doing this makes me really scared of rewarding herders.

2. I don't totally get what finding weird crosstabs really gets you. Assuming the same ratio of precision for the topline versus each crosstab, the weighting of each subdivision should be the same as the weighting for the topline. Here the crosstab has a sampling error of something like 10 points, which is massive, so the outlier there isn't as surprising as it seems, and it shouldn't change the precision of the overall poll. I also don't think the normality of the distribution of Biden's support across subdivisions changes how much you should weight the poll, unless it changes your belief about its precision.
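The "sampling error of like 10 points" claim in point 2 is easy to check directly. A minimal sketch, with hypothetical sample sizes (a topline of 1,000 respondents and a crosstab covering about a tenth of the sample):

```python
import math

def moe(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p on n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Topline: n = 1,000 -> roughly +/- 3 points
topline = moe(0.5, 1000)
print(round(topline, 1))   # 3.1

# A crosstab with only n = 100 respondents -> roughly +/- 10 points
crosstab = moe(0.5, 100)
print(round(crosstab, 1))  # 9.8
```

So a subgroup estimate that looks like a wild outlier can sit comfortably inside its own (much wider) sampling error.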

I do think it's possible that polls showing weirder crosstabs and overall results have higher non-sampling error and are worse polls that should be weighted lower, but I'd guess that's driven by survey design and methodology, and the way to capture it is pollster ratings.

Sorry if I'm totally off-base here, happy to be told I'm wrong.

P.S. I included some thoughts on the new polling averages in an email to polls@fivethirtyeight.com when the average came out, and I was curious whether you saw it or had any thoughts on it.


That, and pollsters are usually wrong.


"Trust the average."

I have some problems with this assessment.

1. "It doesn't matter who votes, it matters who counts the votes." Attributed to Stalin, perhaps erroneously, but the observation, as we saw on January 6 et seq., is horrifically relevant. So thinking that any poll will be reflected in the Electoral College or in Congress is false thinking.

2. Polls do drive opinion. I wish there were a way around this.

3. At present, the Electoral College is composed of people chosen by governors, state legislators, state parties' conventions, and state parties' central committees. In 2020, the U.S. Supreme Court ruled that the Constitution does not require that people elected to serve in the Electoral College be free to vote as they choose. Instead, the Court held, states have the constitutional power to force electors to vote according to their state's popular vote. But while the ruling says states can prevent faithless electors, it does not require that they do so.

At the time of the Court’s decision, 32 states had passed laws that bind electors, while 18 states had laws on the books giving electors the freedom to vote independently—ensuring that in more ways than one, the Electoral College could continue to provide drama for the foreseeable future.


Nice write up!

What still puzzles me about this ABC/WP poll is that, if it is a nonresponse problem, it goes in the opposite direction from what we would expect based on the last couple of presidential cycles, especially thinking about the potential for non-ignorable nonresponse, where presumably nonrespondents were disproportionately more likely to be Trump voters than other Republicans.

author

Yeah, Raphael, I agree it's puzzling. FWIW, I looked at the crosstabs and the unexpected R support is coming primarily from a very R sample of 2020 non-voters. When I MRP'd the poll and accounted for both past vote (so, 2020 Biden/R) and party ID (both of which I put on the joint distribution of my post-strat frame via an earlier large-n MRP) things looked more sensible. So, I'm fairly confident that non-response among Dems, especially young people, is the issue. But why! That's the eternal question.
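The poststratification half of an MRP adjustment like the one described above can be shown in miniature. A minimal sketch: cell names and every number below are hypothetical, and a real MRP would model within-cell support with a multilevel regression rather than take it as given:

```python
# Each cell of the post-strat frame (here, 2020 past vote) carries:
#   (estimated support in the cell, cell's share of the poll sample,
#    cell's share of the population frame)
cells = {
    "2020 Biden":    (0.95, 0.35, 0.34),
    "2020 Trump":    (0.04, 0.30, 0.31),
    "2020 nonvoter": (0.30, 0.35, 0.35),
}

# Raw estimate: within-cell support weighted by the poll's own composition
raw = sum(s * poll_share for s, poll_share, _ in cells.values())

# Poststratified estimate: the same support, reweighted to the frame
adj = sum(s * frame_share for s, _, frame_share in cells.values())

print(f"raw={raw:.3f}  poststratified={adj:.3f}")
```

The mechanism is why a skewed cell, like an unusually Republican sample of 2020 non-voters, can distort the topline: its within-cell estimate gets counted at the sample's share of that cell rather than the population's.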


Would you all consider trying a multi-mode design to access more young folks if nonresponse remains high? Which mode do you think has the most promise for reaching younger folks?


2020 non-voters, huh? This is quite interesting, thanks for sharing!

For the past vote, you also included a category for non-voters then, right?

Nonresponse among young people is definitely more of a problem in RDD polls. But still, I agree with you, why particularly young Dems? We will definitely have to keep an eye on that for ANES 2024, even though we won't use RDD.
