“We could rule some things out, but it’s hard to prove beyond a certainty what happened,” said Josh Clinton, a professor at Vanderbilt University and chair of the association’s 2020 Election Task Force. “Based on what we know from the polls, from what we know about politics, we have good suspects as to what might be going on.”
These “prime suspects” will come as little comfort to pollsters and those who depend on them, from political campaigns to the media. The most likely – though far from certain – culprit for the skewed poll results is that key groups of people don’t respond to polls in the first place.
Falling response rates have been a major concern for pollsters for more than a decade. But the politicization of polling during the Trump era – including the former president falsely decrying the results of polls he didn’t like as “bogus” or as deliberately aimed at suppressing GOP enthusiasm – appears to be biasing the results, with a certain segment of Republicans refusing to participate in polls at all.
But pollsters say they can’t be sure that’s the main reason, because you never know exactly who you’re not talking to.
That makes the polls’ problems much harder to fix than the diagnosis four years ago, which mainly centered on adjusting polls to account for Trump’s strength with voters without a college degree and his corresponding weakness among college graduates.
“It seems plausible to the task force that, perhaps, the Republicans who participate in our polls are different from the Republicans who support Republican candidates but don’t participate in our polls,” Clinton said. “But how do you prove it?”
The task force’s first job was to assess the performance of the 2020 public election polls. On that measure, the polls received a failing grade: National polls were the worst in four decades, while state-level polls of the presidential, Senate and gubernatorial races were the worst in the 20 years for which records exist.
National presidential polls conducted in the final two weeks of the campaign were off by 4.5 percentage points on average, while state polls were off by slightly more – about 5 points – the report said. Most of the errors ran in one direction: Measured on the vote margin, national polls were too favorable to now-President Joe Biden by 3.9 points, and state polls were too favorable to Biden by 4.3 points.
Most of the error came from underestimating Trump’s support rather than overestimating Biden’s. Comparing the final election results to each candidate’s poll numbers, Trump’s support was understated by a whopping 3.3 points on average, while Biden’s was overstated by roughly a point – errors that add up on the margin, turning what looked like a comfortable Biden lead into a closer, though still decisive, race.
It wasn’t just a Trump effect, either. Polls of Senate and governor’s races were even further off: 6 points on average.
“Within the same state, the polling error was often larger in the Senate races than in the presidential contest,” the AAPOR report reads. “Whether the candidates were running for president, senator, or governor, the polling margins overall suggested that Democratic candidates would do better and Republican candidates would do worse relative to the final certified vote.”
No single methodology outperformed the others. According to the report, there were only “minor differences” whether polls were conducted by phone, over the internet or with a mixed methodology including texts and smartphone apps – or whether they contacted voters at random or from a list of registered voters. “Every mode of interviewing and every mode of sampling overstated the Democratic-Republican margin relative to the final certified vote margin,” the report said.
After the 2016 election, the AAPOR autopsy blamed that year’s polling errors on a number of different factors. First, the organization said, a larger-than-usual number of undecided voters measured in the polls broke disproportionately for Trump at the very end of the race, giving him a boost that would have been impossible to measure in advance.
But the 2020 error can’t be blamed on late deciders: Just 4 percent of voters weren’t behind one of the top two candidates in state polls over the final two weeks, and exit polls suggest that late-deciding voters split roughly evenly between Biden and Trump.
Another of 2016’s problems – many pollsters’ failure to weight by education – also wasn’t to blame last year, according to the report. Four years earlier, many pollsters adjusted their results to get the right mix of voters by race and gender but missed a key, emerging dynamic in the electorate: Increasingly, white college-educated voters were backing Democrats, while those without a degree were rapidly moving toward Republicans. Studies show that voters without a college degree are less likely to participate in polls.
In 2020, however, a majority of state polls adjusted their samples to include more non-college voters. They were still wrong.
Other 2016-style factors were ruled out, too: Voters weren’t lying to pollsters about whom they would support because of some “shy Trump” effect (otherwise the errors wouldn’t have been larger in down-ballot races). It wasn’t that one candidate’s supporters failed to show up to vote (witness the record turnout in last year’s race). And misjudging how many voters would vote early versus show up on Election Day wasn’t to blame either (the polls mostly nailed that split).
The report is clear about what did not cause the 2020 polling failure. But it says that “conclusively identifying why the polls overestimated the Democratic-Republican margin relative to the certified vote appears to be impossible with the data available.”
The most plausible – but still unproven – theory is that the voters the polls reach are fundamentally different from the voters they don’t. And Trump’s rants that the polls are “bogus” or rigged only exacerbate the problem.
“If the voters most supportive of Trump were the least likely to participate in polls, the polling error can be explained as follows: Self-identified Republicans who choose to respond to polls are more likely to support Democrats, and those who choose not to respond to polls are more likely to support Republicans,” the report reads. “Even if the correct percentage of self-identified Republicans were polled, differences between the Republicans who responded and those who did not could produce the observed polling error.”
AAPOR isn’t the only organization struggling to pin down what went wrong. A collaborative report from five of the largest Democratic campaign polling firms, released this spring, said “no consensus on a solution has emerged” to correct the 2020 errors.
While explanations remain elusive, pollsters and their clients are racing to change their methodologies. Recruiting respondents by text – or conducting polls entirely by text – is growing in popularity as fewer Americans are willing to sit through a 15-minute phone survey. Online surveys also continue to grow.
Public polls commissioned by the media are also evolving. NBC News and the Wall Street Journal ended their more than 30-year polling partnership late last year, a Wall Street Journal spokesperson confirmed to POLITICO. The two news outlets had long worked with two large bipartisan polling firms on regular telephone polls.
Without definitive answers about what caused the 2020 failure, however, pollsters aren’t sure they can get it right in 2022, 2024 or beyond.
“Even seven months after the fact, you would think you might know exactly what happened,” Clinton said.
“How sure are we that we can solve this problem in the future? Well, it’s not clear,” Clinton added. “We’ll have to wait and see what happens, which is not a particularly reassuring position. But I think that’s the honest answer.”