Less than a week from Election Day, the polls show U.S. presidential candidates Kamala Harris and Donald Trump in a dead heat—an unsatisfying result for anyone looking for a way to reduce the suspense.
“The stakes are so high,” says David Karpf, who researches technology and elections at George Washington University. But, he says, polls can tell us only the same thing they’ve been predicting for a year and a half: “it looks like it’s going to be close.”
Polls are a staple of preelection coverage and postelection scrutiny in the U.S. The results of these political surveys drive news cycles and campaign strategy, and they can influence decisions of potential donors and voters. Yet they are also growing more and more precarious.
“These days, we are using this technique that’s very vulnerable” to making huge mistakes, says Michael Bailey, a professor of American government at Georgetown University and author of the recent book Polling at a Crossroads: Rethinking Modern Survey Research.
Those mistakes may be familiar to anyone who followed the past two presidential elections, when polls underestimated Trump’s support. Pollsters are hoping to learn from their mistakes, but their results still rest largely on judgment calls. Here’s why.
People don't respond to polls anymore
For decades, pollsters have been dealing with an “ongoing crisis” of falling response rates, Karpf says. Polls are only as good as their sample: the wider and more representative the swath of the public that responds, the better the data. The ubiquity of the landline telephone in the latter half of the 20th century was a unique gift to pollsters, who could rely on response rates of around 60 percent from randomly dialed phone numbers to hear from a representative slice of the population, Bailey explains.
Today technological changes (including caller ID, the rise of texting and the proliferation of spam messages) mean that very few people pick up the phone or answer unprompted text messages. Even the well-respected New York Times/Siena College poll gets a response rate of around 1 percent, Bailey points out. In many ways, people who respond to polls are the odd ones out, and this self-selection can bias the results in unknowable but profound ways.
“The game’s over. Once you have a 1 percent response rate, you don’t have a random sample,” Bailey says.
To turn these limited data into usable insights, pollsters are relying on more and more complex modeling, Karpf says. These techniques “weight” some participants’ responses to make their skewed sample match the general voting population on key variables, such as age, race, gender and political affiliation. This allows pollsters, in theory, to extrapolate information about the general voting population from a biased handful of responses.
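As a concrete illustration, here is a minimal sketch of that kind of weighting in Python, using invented numbers and a single variable (education). Real polls weight on many variables at once, often with more elaborate methods such as raking, but the core arithmetic is the same: each response counts in proportion to how underrepresented its group is in the sample.

```python
# A toy sketch of post-stratification weighting; all numbers are invented.
# Each respondent's answer is scaled so the sample's mix of groups
# matches the population's mix on a key variable, here education.

# Hypothetical shares of the voting population by education level
population_share = {"college": 0.40, "no_college": 0.60}

# Hypothetical raw sample: (education, supports candidate A?)
sample = [
    ("college", True), ("college", True), ("college", False),
    ("college", True), ("college", False), ("college", True),
    ("no_college", False), ("no_college", True),
    ("no_college", False), ("no_college", False),
]

n = len(sample)

# Share of each group actually present in the sample
sample_share = {
    group: sum(1 for g, _ in sample if g == group) / n
    for group in population_share
}

# Weight = population share / sample share: how much each respondent
# must count to correct the sample's skew toward college graduates
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(a for _, a in sample) / n
weighted = sum(weights[g] * a for g, a in sample) / sum(
    weights[g] for g, _ in sample
)
print(f"candidate A support, raw: {raw:.1%}, weighted: {weighted:.1%}")
```

In this invented sample, college graduates are overrepresented, so weighting pulls the estimate down from 50 percent to about 42 percent; whether that correction helps or hurts depends entirely on whether the assumed population shares are right.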
In the golden age of random sampling, polling “was based on a scientific method, with a defined procedure that would produce a defined probabilistic outcome,” Bailey says. Now, by contrast, “you just have to throw modeling decision after modeling decision” at raw polling data and hope your assumptions hold true.
The assumptions in these models could easily be wrong
Pollsters are generally making defensible, good-faith decisions about how to stretch and compress their data into the shape of the voting electorate. But these are still educated guesses, and reasonable minds may differ.
“Even though they are all reasonable assumptions, they are different ones. Which assumptions are right, we don’t know,” Karpf says.
Pollsters’ accuracy hinges on what the electorate will actually look like on November 5. That is fundamentally unknowable, and it is something pollsters got wrong in each of the past two presidential elections. In 2016, 88 percent of national polls overstated Hillary Clinton’s support. Analyses found that they missed significant pockets of Trump support among non-college-educated white voters because they largely did not weight their data by education.
So in 2020 pollsters weighted by education. Yet they ran into a similar problem, this time by neglecting factors other than demographic ones. The polls were correct that Joe Biden would win, but 93 percent of them overstated his lead. “It didn't feel like such a disaster, but just from an accuracy perspective..., it’s kind of chilling,” Bailey says. “You see the exact same problem happen again.”
The 2020 election showed that there were aspects of Trump’s support that could not be fully accounted for with the demographic variables that pollsters had come to rely on. So this year many are using a blunter technique to compensate: weighting respondents’ answers based on who they say they voted for last time around, a method called recall-vote weighting. This makes the 2024 polls conform to 2020’s turnout—and, in practice, inflates Trump’s support.
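A simplified sketch of the idea, with invented survey numbers (the 2020 targets below approximate the actual national popular vote): respondents’ answers are rescaled so that the recalled 2020 votes in the sample match the 2020 result.

```python
# A simplified sketch of recall-vote weighting; survey numbers invented.
# Respondents are rescaled so their recalled 2020 votes match the actual
# 2020 national popular vote (roughly Biden 51%, Trump 47%).

target_2020 = {"biden": 0.51, "trump": 0.47, "other": 0.02}

# Hypothetical sample: (recalled 2020 vote, supports Trump in 2024?)
sample = [
    ("biden", False), ("biden", False), ("biden", True),
    ("biden", False), ("biden", False), ("biden", False),
    ("trump", True), ("trump", True), ("trump", True),
    ("other", False),
]

n = len(sample)
recalled_share = {
    v: sum(1 for r, _ in sample if r == v) / n for v in target_2020
}
# Biden 2020 voters are overrepresented here (60% of the sample vs. a
# 51% target), so their answers get weighted down and Trump's support up.
weights = {v: target_2020[v] / recalled_share[v] for v in target_2020}

raw = sum(s for _, s in sample) / n
weighted = sum(weights[r] * s for r, s in sample) / sum(
    weights[r] for r, _ in sample
)
print(f"Trump 2024 support, raw: {raw:.1%}, recall-weighted: {weighted:.1%}")
```

In this toy sample, the adjustment lifts Trump’s measured support from 40 percent to more than 55 percent, which shows just how much leverage this one modeling decision can have.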
Pollsters are “leaning hard” into recall-vote weighting this time around, Bailey says. But this technique has a few key limitations. First, it’s not clear that the electorate in 2024 will look like 2020, especially given the high turnout among female voters in the 2022 midterms after the Supreme Court overturned Roe v. Wade. “It’s kind of an existential and fundamental-values issue for women voters in particular,” said pollster Anna Greenberg of the Democratic polling firm GQR in an interview with Ms. Magazine. Some pollsters are betting that this year’s election will look more like those midterms and are weighting their data accordingly.
The electorate can also change a lot in four years: voters die, new ones turn 18 and become eligible to vote, and many people move to different states. Additionally, people may not give reliable answers when asked whom they voted for four years ago.
But Bailey is most worried about a more fundamental problem with the technique. Not only must pollsters get the correct percentage of Trump voters in their samples, they also need to get the “right” Trump voters. If a poll doesn’t reach a representative set of former Trump voters, recall-vote weighting won’t fix that. “Imagine you voted for Donald Trump in 2020, but you’re sick of him. You might not answer polls right now,” Bailey says. The same dynamics could be at play on the Democratic side as well. All of this could lead to skewed poll results, even with recall-vote weighting.
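A toy simulation can make that worry concrete. Assume (with invented numbers) that some 2020 Trump voters have soured on him and that those defectors are half as likely to answer a poll. Even if recall-vote weighting restores this group to its correct share of the sample, the respondents within it remain skewed toward loyalists:

```python
# A toy version of the failure mode Bailey describes: recall-vote
# weighting can fix the *share* of 2020 Trump voters in a sample but
# not *which* ones respond. All numbers are invented.
import random

random.seed(1)

# Hypothetical group of 10,000 people who voted for Trump in 2020:
# 80% still support him and 20% have soured on him.
group = [True] * 8_000 + [False] * 2_000

# Suppose defectors are only half as likely to answer a poll.
def responds(still_supports: bool) -> bool:
    rate = 0.10 if still_supports else 0.05
    return random.random() < rate

respondents = [s for s in group if responds(s)]

print(f"true support among 2020 Trump voters: {sum(group) / len(group):.1%}")
print(f"support among those who answered:     {sum(respondents) / len(respondents):.1%}")
# Rescaling this group to its correct 2020 share leaves the
# within-group bias untouched: the estimate stays skewed.
```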
Recall-vote weighting would have made polls less accurate in every election since 2004, the New York Times has reported. But many pollsters are choosing to use the technique now to avoid repeating the mistakes of 2016 and 2020.
“Fool me twice, I guess, don’t fool me a third time,” Bailey says.
Election simulations won’t tell you much, either
If individual polls are unreliable, what about polling aggregators? These sites combine results from dozens, if not hundreds, of surveys, and many run a style of election simulation popularized by Nate Silver, who founded FiveThirtyEight (now 538). These aggregators take polling data and simulate the election some 10,000 times to predict its likely outcome.
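The core of such a simulation is simple, even though real forecasting models layer many refinements on top. Here is a toy version in Python; this is not 538’s actual model, and the polling average and error distribution are invented for illustration:

```python
# A toy Monte Carlo election forecast in the general spirit of these
# aggregators. This is NOT 538's actual model; the polling average
# and the assumed error distribution are invented for illustration.
import random

random.seed(0)

poll_margin = 0.01    # hypothetical polling average: candidate A +1 point
poll_error_sd = 0.03  # assumed standard deviation of total polling error

N_SIMS = 10_000
wins_a = sum(
    # Draw one plausible "true" margin per simulated election
    random.gauss(poll_margin, poll_error_sd) > 0
    for _ in range(N_SIMS)
)
print(f"candidate A wins in {wins_a / N_SIMS:.1%} of simulations")
```

Note how sensitive the headline probability is to the assumed error: the same one-point lead yields anywhere from a coin flip to a comfortable favorite depending on how much polling error the modeler decides to allow for.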
For the average person, these simulations aren’t very helpful. In 2016 FiveThirtyEight reported that Clinton won in 71.4 percent of runs. What should onlookers have made of the fact that we live in the 28.6 percent of realities where Trump won instead? These attempts to predict the results of the election, rather than just present a snapshot of the candidates’ support at a given moment in time, were criticized heavily in an American Association for Public Opinion Research report that followed the 2016 polling failures.
“There was a very large presence in the media and political discourse about these polling aggregators.... They were using inputs in their models with unknown errors, and they weren’t really being transparent,” said report co-author Kristen Olson of the University of Nebraska-Lincoln in a news release. Forecasting models, the report warned, “attempt to predict a future event. As the 2016 election proved, that can be a fraught exercise.”
Karpf cautions against dealing with election anxiety by refreshing 538 or other polling aggregators. “That type of modeling exercise tells us so very little,” he says.
Additionally, as Bailey points out, they could be aggregating unreliable sources. “I was just showing my class today: if you look at 538 ... and you start clicking on some of those [pollsters], it’s like, ‘Who are these guys?’ And it just shocks me that we have no idea who they are in some cases,” he says. Some are partisan, and others have no methods listed on their websites, which is “incredibly far from that scientific ideal, on balance.”
Elections are just really close now
Another problem is that elections nowadays are very close. “They’re close enough that if [the outcome] is 3 percent in the direction of Trump, then this is going to look like a Trump blowout,” and vice versa for Harris, Karpf says. That’s within the polls’ margin of error statistically but might be perceived by the public as a big miss.
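For a sense of scale, consider the textbook margin of error for an idealized simple random sample, a best-case assumption that modern polls, with their roughly 1 percent response rates and heavy weighting, do not satisfy. At 95 percent confidence it is about 1.96 × sqrt(p(1 − p)/n):

```python
# Back-of-envelope margin of error for an idealized random sample.
# Real polls' effective error is larger once weighting and
# nonresponse bias are accounted for.
import math

n = 1000  # hypothetical sample size
p = 0.50  # observed support near 50 percent
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: ±{moe:.1%}")  # about ±3.1 points
```

And the margin of error on one candidate’s lead over the other is roughly double the margin on either candidate’s share, so a three-point edge in a 1,000-person poll is statistically indistinguishable from a tie.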
And as FiveThirtyEight points out on its forecasting page, “A close race in the polls does not necessarily mean the outcome will be close.” While the odds for each candidate seem close to even right now, the winner could still take the presidency with a significant margin in the electoral college.
“Right now the difficulty for the public is that we are looking at these polls, wondering what the future will be,” and they simply can’t provide that answer, Karpf says.
It’s a high-stakes time for polling to be at a methodological crossroads. The threat of political violence looms over all these discussions. If the polls underestimate Trump’s support again and he loses, even if the final results are within the margin of error, they will likely be used to support claims of election fraud, Karpf says. That problem is not one for pollsters to solve, but it is part of the reality they face, and it may factor into their decisions about whether to use recall-vote weighting.
During this election, more than ever, looking to the polls has provided little comfort for anyone. For the next week Karpf recommends drinking water, getting good sleep and not checking social media. What will happen on Election Day won’t be knowable until it happens.
Even pollsters themselves agree. As Greenberg advised in Ms. Magazine, “Try not to look at the public polling.... It’s been stable, more than anything. It really hasn’t changed all that much since September. Everything’s close.”