With the 2025 elections now behind us — and the staggering Democratic wins across the board — we’ve received our first new dataset of polling accuracy since last year’s presidential election. American politics moves at nearly the speed of light, so reflecting back on the past 12 months can be a formidable effort. But two key names are bound to ring bells for political junkies far and wide: Ann Selzer and Nate Silver. Both of these pundits saw sweeping success in predicting election outcomes in the past but chose to take divergent paths in last year’s presidential election. While the heydays of these two are more likely than not behind them, they still embody the good and bad of election forecasting and an alarming trend that plagues the profession: herding.
It’s no secret that predicting elections is an incredibly delicate job, one where reputation is everything. It’s a profession with clear winners and losers, and one where each contestant is placed in direct and public competition with all the others. Sites like 538 and RealClearPolitics update daily with new polls and create polling aggregates of state and national races. This makes any particularly bold poll that deviates from the average stick out like a sore thumb. Being right can turn you overnight into an election guru and cable news darling, while being wrong can cast you into the abyss of failed pollsters. This dichotomy makes any deviation from the average a risky business and, arguably, not a game worth playing. These facts leave pollsters with a single, unrelenting goal: not to be too wrong. But this mantra is more than just cautious — it’s dangerous and undermines the integrity of all political polls.
No one knows the epic highs and hopeless lows of being a pollster better than Ann Selzer. She spent years in partnership with The Des Moines Register as the undisputed guru of Iowa elections. Her track record was remarkable: her final polls came within two points of the actual result in four of the last eight presidential elections in Iowa, which only a decade ago was one of the nation’s fiercest swing states. Regrettably, this storied legacy was irreparably tarnished last fall when she released a poll placing Kamala Harris three points ahead of Donald Trump. Trump went on to carry the state by 13 points, leaving Selzer responsible for one of the most disastrous polling failures of the election cycle. She announced her retirement from polling less than two weeks later.
While Selzer’s story is somewhat laughable and turned her into a late-night punchline, it also shows a noble quality that so many pollsters presently lack: the guts to break away from the pack. Nate Silver takes the cake for becoming the absolute antithesis of this quality. Remember that name, Nate Silver? It used to be everywhere: the baseball statistician turned political sage, dubbed “The Kurt Cobain of Statistics,” who could ostensibly do no wrong in his election predictions. While Silver had some past triumphs, such as correctly predicting the outcome of the 2008 presidential election in 49 states, in recent years he has fallen into the inevitable rut of herding his results.
On his Substack, Silver Bulletin, his final forecast on the morning of election day gave Harris a 50.015% chance of winning to Trump’s 49.985%. However, he had already prefaced this prediction by declaring that “my gut says Trump.” Moreover, the day before, Silver tweeted that his forecast for the race might just be decided by “luck.” To have your model say one thing and your gut say the other, and then put it all down to luck, is the opposite of what any election predictor acting in good faith should do.
Silver was far more occupied with protecting his reputation for future political cycles than with actually making any substantive claims. I had a chance to air my grievances about the forecaster at a recent Dartmouth Political Union dinner. Sean Westwood, director of the Dartmouth Political Polarization Lab, who was sitting at my table, remarked, “Nate Silver thinks that because he’s good at predicting baseball, he’s also good at predicting politics.” He couldn’t be more spot on.
I labeled Ann Selzer’s as one of the most — not the most — disastrous polls of the 2024 election cycle because that title unfortunately belongs to Dartmouth College. The November 2024 pre-election survey conducted by the Rockefeller Center for Public Policy placed Harris ahead of Trump by 28 points in New Hampshire. Harris went on to win the Granite State by 2.8 points, resulting in a poll that missed the mark by a staggering 25 points. Though it doesn’t need to be said, the people at Rocky most definitely got it wrong. However, getting it wrong with ethical polling methods is preferable, any day of the week, to skewing your results toward the aggregate to save face.
Outlier results, such as those found in the Dartmouth poll, are expected — and even beneficial for aggregates. An honest polling average isn’t composed only of polls with results in proximity to one another, but also includes results that deviate far from the mean in both directions. When election forecasters become content to hide their results amongst a sea of nearly identical findings, it creates artificial convergence, leading to widespread inaccuracies. It takes courage to publish a defiant poll, and doing so is often a sacrificial act: it makes the average more precise at the expense of its creator’s credibility. To foster more ethical and accurate polling, let’s value disparate failure more than homogeneous success.
Opinion articles represent the views of their author(s), which are not necessarily those of The Dartmouth.
Correction Appended (Nov. 17, 1:11 p.m.): Professor Sean Westwood’s name was misspelled in a previous version of this article.