I’m collecting responses to this question for a larger project I’m working on. What do you want to know about polling?
(Maybe you think you know everything. In that case, what do you find most important about polls?)
Ok, discuss!
EDIT 10:31 PM EST: I’m going offline now—thanks all for the engagement. I think we all learned a lot and I got some great ideas for my project. Do feel free to leave residual comments, however insightful!
Your responses will help me procrastinate....
What I want to see in polls is a question that asks the respondent where they primarily get their news. My hypothesis is that the best indicator of support for or opposition to an issue is the answer to that question. And yet, I rarely see that question asked.
I feel like I've seen aspects of this before somewhere, but I think it would be interesting to have graphs/discussion of the (theoretical) relationship between sample size and margin of error in polls, compared to the empirical relationship across the past few elections. Especially topical given the discussion over the small sample size of many Democratic primary polls.
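For reference, the theoretical half of that relationship is simple to state: under simple random sampling, the 95% margin of error for a proportion is roughly 1.96 * sqrt(p(1-p)/n). A minimal sketch of how it shrinks with sample size (the sample sizes are arbitrary, and real polls add design effects on top of this):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# The worst case is p = 0.5; note how slowly the MOE shrinks as n grows.
for n in (200, 400, 800, 1600, 3200):
    print(f"n = {n:>4}: +/- {100 * margin_of_error(0.5, n):.1f} points")
```

The empirical comparison against past election results would, of course, require actual poll-versus-outcome data rather than this formula.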
Although polling has been quite good in recent years at predicting vote share, isn't it still reasonable to expect at least one polling miss during the primaries, like we saw in Michigan in 2016? If so, which states would you watch for a potential poll miss?
As more and more polls come from MRP / model-based approaches, how does that change the way you quantify and, more importantly, think about uncertainty?
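For readers unfamiliar with the mechanics, a toy sketch of the poststratification step (the "P" in MRP) may help: cell-level support estimates, which MRP would get from a multilevel regression, are combined using each cell's share of the population. Every number below is invented for illustration, and the uncertainty question above is largely about how to propagate the model's uncertainty through these cell estimates rather than quoting a classical margin of error.

```python
# Toy poststratification: combine cell-level estimates using population shares.
# In real MRP the cell estimates come from a multilevel regression fit to the
# poll; here both the estimates and the population shares are made up.
cells = {
    # (age group, education): (estimated support, share of population)
    ("18-29", "non-college"): (0.62, 0.12),
    ("18-29", "college"):     (0.70, 0.08),
    ("30-64", "non-college"): (0.48, 0.35),
    ("30-64", "college"):     (0.55, 0.25),
    ("65+",   "non-college"): (0.40, 0.12),
    ("65+",   "college"):     (0.47, 0.08),
}

topline = sum(est * share for est, share in cells.values())
print(f"Poststratified support: {100 * topline:.1f}%")
```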
I'd like to see some applied theory on how opinion formation, salience, and language use relate to how questions are asked.
Response weighting prevents response bias from "skewing" the topline results, but wouldn't the variance for undersampled groups be higher than for more heavily sampled groups? Would it be correct to claim that projections of a candidate's standing with frequently undersampled groups (younger, Latino, less educated, etc.) based on polling ought to have wider error bars, so to speak?
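One way to make that intuition concrete: weighting a group up does not create new interviews, and unequal weights within the group shrink its effective sample size further (the Kish approximation below). The sample sizes and weights here are hypothetical:

```python
import math

def kish_effective_n(weights):
    """Kish approximation: how many equal-weight interviews a weighted sample is worth."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

def moe(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 interviews overall, but only 80 respondents under 30,
# and those 80 carry uneven weights because of the adjustment.
under_30_weights = [1.5] * 40 + [3.0] * 40
eff_n = kish_effective_n(under_30_weights)   # about 72 "effective" interviews

print(f"Topline (n = 1000):   +/- {100 * moe(0.5, 1000):.1f} points")
print(f"Under-30 subgroup:    +/- {100 * moe(0.5, eff_n):.1f} points")
```

So on this simple account, yes, the subgroup estimate carries a much wider error bar than the topline.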
Clear up confusion about measurement error, and discuss other types of errors in polling data.
Sorry this is a late comment. Whenever you might have time, I'd be very interested in some big-picture thoughts comparing the pros and cons of probability vs. non-probability polling, as well as the value of different modes: phone vs. online. I'd be grateful for any guidance or cautions on all of these considerations, including for mixed-mode designs.
good morning!
I wonder if the "who do you think is going to win" question has been looked at recently for validation
Do head-to-head polls take into account differing turnout rates between, say, different Democratic nominees vs. Trump, or do they assume the same group of voters would be voting in each scenario?
How do pollsters deal with most people not answering their phones unless they know who it is? Seems like there could be a bias there. Like, what subset of voters always answers their cell?
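A tiny simulation shows why that kind of differential nonresponse matters: if one group picks up the phone far more often, the raw sample tilts toward it and the unadjusted topline moves with it. The group sizes, support levels, and answer rates below are all made up:

```python
import random

random.seed(0)

# Invented population: half in group A (70% support), half in group B (40% support).
# Group A answers the phone 30% of the time, group B only 10% of the time.
def simulate_poll(attempts=20_000):
    responses = []
    for _ in range(attempts):
        group = "A" if random.random() < 0.5 else "B"
        answer_rate = 0.30 if group == "A" else 0.10
        if random.random() < answer_rate:
            support = random.random() < (0.70 if group == "A" else 0.40)
            responses.append(support)
    return responses

sample = simulate_poll()
raw = sum(sample) / len(sample)
print(f"True support: 55.0%   Unweighted sample: {100 * raw:.1f}%")
```

Weighting can repair a gap like this only to the extent that whatever drives picking up the phone is something pollsters actually measure and adjust on.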
Sorry, my mind is filled with polling questions now lol
How would practices differ if we elected officials with other electoral systems? Going from the Electoral College to a national popular vote would mean looking more at national polls than state polls. But what would pollsters do for an approval-based voting system? Or ranked choice? Or Condorcet systems?
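To illustrate just one of those cases: under ranked choice, a pollster would need each respondent's full ranking, and the horse race would be tallied in instant-runoff rounds rather than as a single topline number. A minimal sketch with made-up ballots:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff count over ranked ballots (lists of candidates, best first)."""
    ballots = [list(b) for b in ballots]
    while True:
        counts = Counter(b[0] for b in ballots if b)
        total = sum(counts.values())
        leader, top = counts.most_common(1)[0]
        if top * 2 > total:                    # majority of remaining ballots
            return leader
        loser = min(counts, key=counts.get)    # drop the last-place candidate
        ballots = [[c for c in b if c != loser] for b in ballots]

# Made-up poll of ranked preferences for three hypothetical candidates.
ballots = (
    [["A", "B", "C"]] * 40 +
    [["B", "C", "A"]] * 35 +
    [["C", "B", "A"]] * 25
)
print("IRV winner:", instant_runoff(ballots))
```

Condorcet or approval systems would need the same kind of richer ballot data from respondents, just tallied differently.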
Any chance we get to see how you implemented the dynamic Dirichlet regression model?
How do you handle questions where the poll respondent may not want to answer truthfully or may unconsciously give a false response?
Like: Do you prefer a male/white/straight candidate or not?
Basically, how do you reliably account for biases people don’t know they have?
Education aside, what are the most important demographics to weight by?
How much does it cost a news outlet to do a reliable, high-quality poll, and what is the median cost of news outlet polling?