I only heard about Philip Tetlock’s “Expert Political Judgment: How Good Is It? How Can We Know?” (2005) in late 2024. Vinod Khosla, the legendary co-founder of Sun Microsystems and later venture investor, was speaking at the TechCrunch Disrupt conference in San Francisco. It was a casual onstage discussion about the state of venture investing. At one point Khosla called out “Expert Political Judgment” (EPJ) and encouraged everyone to read it. I purchased the book on my Amazon app before the talk was over.
EPJ is a landmark study on the accuracy of political forecasting, but it’s hardly what I expected coming from a longtime Silicon Valley investor. Tetlock’s macro-level insight is that cautious, thoughtful observers are much better forecasters than bold, confident ones. The British political philosopher Isaiah Berlin argued in 1953 that when it comes to classifying thinkers and writers, there are two basic types: foxes and hedgehogs. “The fox knows many things, but the hedgehog knows one big thing,” Berlin wrote. Hedgehogs see the world through a single, unifying idea or principle, and include such greats as Plato, Marx, and Nietzsche. Foxes, on the other hand, take a more scattered, flexible approach, drawing from many different ideas. Notable examples might include Shakespeare, Aristotle, and Tolstoy. Simply put, hedgehogs tend to be deductive thinkers; foxes are inductive thinkers. Tetlock stresses that hedgehogs come in all ideological shapes and sizes – how they think is what unifies them, not what they think. Tetlock uses this basic construct, along with the results of 27,451 forecasts collected over several years in a controlled environment, to categorize political prognosticators in “Expert Political Judgment.”
Tetlock calls his research program “unabashedly neopositivist” – his methods are scientific, his evidence is empirical, and his measurements are objective (even if his subject doesn’t naturally lend itself to scientific, empirical, or objective measurements). He evaluates forecasters based on two scoring models: calibration and discrimination. Calibration means how well a forecaster’s confidence matches reality (i.e. a well-calibrated forecaster doesn’t overestimate or underestimate how likely things are). Discrimination means how well a forecaster can sort likely versus unlikely events (i.e. a forecaster with good discrimination gives higher probabilities to things that actually happen, and lower probabilities to things that don’t).
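To make those two scores concrete, here’s a minimal sketch in Python – my own illustration, not Tetlock’s exact formulas – of how calibration and discrimination might be computed from a list of (stated probability, did-it-happen) pairs:

```python
from collections import defaultdict

def calibration_error(forecasts):
    """Mean gap between stated confidence and observed frequency,
    grouped by the probability the forecaster stated (lower is better)."""
    buckets = defaultdict(list)
    for prob, happened in forecasts:
        buckets[round(prob, 1)].append(happened)
    gaps = [abs(p, ) if False else abs(p - sum(v) / len(v)) for p, v in buckets.items()]
    return sum(gaps) / len(gaps)

def discrimination(forecasts):
    """Average probability assigned to events that happened, minus the
    average assigned to events that did not (higher is better)."""
    hits = [p for p, happened in forecasts if happened]
    misses = [p for p, happened in forecasts if not happened]
    return sum(hits) / len(hits) - sum(misses) / len(misses)

# A forecaster who says 90% and 10%, and is right nine times out of ten:
good = ([(0.9, True)] * 9 + [(0.9, False)] +
        [(0.1, False)] * 9 + [(0.1, True)])
```

A perfectly calibrated forecaster scores 0.0 on the first measure; a forecaster with good discrimination opens a wide gap on the second.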
The author hammers home his main idea that hedgehogs who know “one big thing” inevitably and aggressively extend the explanatory reach of that big thing into a variety of new domains, even if it doesn’t make much sense. Indeed, hedgehogs are a lot like Winston Churchill’s definition of a fanatic: Someone who cannot change his mind and will not change the subject. Foxes, meanwhile, tend to be centrists who are skeptical of grand schemes. Tetlock’s core argument is that the hedgehog versus fox paradigm is the only dimension that distinguishes superior contemporary forecasters across regions, topics, and time. In other words, knowing someone’s gender, educational background, political views, or natural outlook (i.e. optimist vs pessimist) will tell you almost nothing about their ability to forecast the future. However, their cognitive style – how they thought, not what they thought – will tell you a lot. When it comes to the basic measures of forecasting accuracy – calibration and discrimination – Tetlock says “foxes dominate hedgehogs.” Hedgehogs are far more likely to label proposed outcomes “impossible” or “sure things”, although according to Tetlock’s data, roughly 15 percent of impossible outcomes did happen and 25 percent of sure things didn’t. Foxes are far less likely to ascribe such certainty to future events. In short, he writes, “Good judges tend to be moderate foxes: eclectic thinkers who are tolerant of counterarguments, and prone to hedge their probabilistic bets and not stray too far from just-guessing and base-rate probabilities of events.”
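Those two base rates are enough to show, arithmetically, why extreme forecasts get punished. A quick back-of-the-envelope sketch – mine, using the standard Brier score rather than Tetlock’s exact scoring rules:

```python
def expected_brier(p_stated, p_true):
    """Expected Brier score (squared error; lower is better) for stating
    p_stated about an event that actually occurs with probability p_true."""
    # Event occurs with probability p_true (penalty (p_stated - 1)^2)
    # and fails with probability 1 - p_true (penalty p_stated^2).
    return p_true * (p_stated - 1) ** 2 + (1 - p_true) * p_stated ** 2

# Tetlock's numbers: "impossible" outcomes happened ~15% of the time,
# and "sure things" failed ~25% of the time.
hedgehog_impossible = expected_brier(0.0, 0.15)   # flat 0% call
fox_impossible = expected_brier(0.15, 0.15)       # hedged at 15%
hedgehog_sure = expected_brier(1.0, 0.75)         # flat 100% call
fox_sure = expected_brier(0.75, 0.75)             # hedged at 75%
```

Given those observed frequencies, the hedgehog’s 0% and 100% calls carry a measurably larger expected penalty than the fox’s hedged ones.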
Hedgehogs may be statistically poor forecasters, but their simple, decisive and often sweeping statements make them great guests on talk shows and news programs, thus ensuring far more visibility than their cautious fox peers. In the marketplace for media sound bites, Tetlock says “only the overconfident survive, and only the truly arrogant thrive.” In other words, foxes may outperform hedgehogs on forecast accuracy, but hedgehogs dominate foxes in creating attention-grabbing sound bites. Tetlock says that hedgehogs are also great excuse makers. He catalogs a full set of “belief system defenses” that hedgehogs typically use to explain away their forecasting errors: 1) exogenous-shock (the unexpected Black Swan event); 2) close-call counterfactual (“I almost got it right”); 3) off-on-timing arguments (“what I predicted will happen eventually”); 4) “politics is hopelessly cloud-like” defense (“your crystal ball worked better than mine”); and 5) the right mistake (“crying wolf is the price of vigilance”). Interestingly, Tetlock notes that the experts never lean on these excuses when their forecasts turn out to be right.
Tetlock’s ideas and arguments are pretty straightforward, but the same cannot be said about his prose. Unfortunately, much of EPJ is jargon-laden mumbo-jumbo. Any prospective reader should be warned ahead of time that most of this book reads like this: “We risk making false attributions of good judgment if we treat political reasoning as a passionless exercise of maximizing aggregate accuracy.” A big part of reading this book is mentally translating Tetlock’s basic arguments into comprehensible layman’s English.
The author acknowledges that not everyone agrees with his hypotheses and methodologies. For instance, he notes that a significant portion of his reading audience believes that good judgment and good luck are essentially one and the same – that the world is far too complex to allow anything remotely approaching forecast accuracy, no matter how well-educated or skilled the forecaster. He calls these non-believers radical skeptics. Meliorists, on the other hand, believe that man is capable of most things. “Our uncontrollable need to believe in a controllable world and our flawed understanding of the laws of chance,” he says, is what propels their faith.
So what to make of all of Tetlock’s research, literally decades of people making all kinds of bets about future events? The first and arguably most important insight of the book is that political and economic experts are often no better at predicting events than simple statistical models – or even chance. In fact, Tetlock writes, “It is impossible to find any domain in which humans clearly outperformed extrapolation algorithms.” Yet we still turn to experts for guidance on all sorts of topics. Why? Mainly because Americans are prone to falling for the “mystique of expertise,” he says – the belief that certain folks among us are special and far-sighted, the way earlier cultures turned to witchdoctors and oracles. But it’s just not true. “People who devoted years of arduous study to a topic were as hard-pressed as colleagues casually dropping in from other fields to affix realistic probabilities to possible futures,” he says. If anything, experts overpredict change, and this overconfidence significantly and negatively impacts their forecasting scores.
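For a sense of how low that bar is, one of the simplest extrapolation baselines can be sketched in a few lines – my own illustration; Tetlock doesn’t specify the algorithms he benchmarked against:

```python
def extrapolate(series):
    """Crude baseline forecast: assume the most recent change in a
    numeric series simply continues one more step."""
    return series[-1] + (series[-1] - series[-2])

# e.g. a quantity that has been rising by 3 per period:
print(extrapolate([100, 103, 106]))  # 109
```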
Second, hedgehogs are the ones with the most confidence. They tend to over-rely on grand theories, making them poor at adjusting to new evidence. Foxes, on the other hand, who integrate multiple perspectives and update their beliefs, consistently make more accurate predictions. Foxes are, in short, better Bayesians (i.e. they consistently update what they believe based on new evidence, using probabilities to reflect how confident they are).
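A single Bayesian update of the kind a fox performs can be sketched in a few lines – my illustration; the function name and the numbers are hypothetical:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing one piece
    of evidence, via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# A fox at 60% confidence sees evidence that is twice as likely if the
# hypothesis is false, and duly lowers the belief instead of defending it:
posterior = bayes_update(0.6, 0.2, 0.4)   # ~0.43
```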
Third, deep expertise has limited marginal utility. That is, knowing a lot more about a subject does not translate into enhanced predictive power. In fact, if anything, it erodes forecasting accuracy, because expertise often breeds overconfidence and a tendency toward confirmation bias. Tetlock found that subject matter experts were more likely to recall their incorrect predictions as “almost right” rather than outright failures, while many others ignored contradictory evidence and stuck to their original views despite new data.
In closing, as Oscar Wilde once famously said: “The truth is rarely pure and never simple.” People who value simplicity, consistency, and closure – that is, hedgehogs – are more likely to err in forecasting. Complex political and economic events are inherently unpredictable due to randomness and unknown factors. Tetlock encourages a probabilistic, self-critical, and adaptive approach to forecasting. He recommends using probabilistic reasoning rather than absolute certainty, and says that experts, in particular, should strive to stay humble and actively seek a diverse set of perspectives when making political or economic judgments. “Good judgment,” Tetlock concludes, “is a precarious balancing act.” He encourages us to “cultivate the art of self-overhearing” in order to rethink core assumptions while preserving our existing worldview.