The Fat Tails of Violence: Nassim Taleb's Critique


Here's a statistics problem: In the years 1988-2000, the annual death toll from commercial aviation disasters averaged about 1,000 people globally. In 2001, a single attack using commercial aircraft killed nearly 3,000 people. What's the "trend" in aviation deaths?

The answer depends on your statistical framework. If you're thinking in terms of averages and normal distributions, 2001 was an outlier—a fluke that distorts the underlying trend. But if you're thinking in terms of fat-tailed distributions, 2001 revealed something the previous years had concealed: the system contained catastrophic risks that weren't visible in ordinary data.

This is the core of Nassim Nicholas Taleb's critique of Steven Pinker's declining violence thesis. The claim isn't that Pinker got his numbers wrong. It's that his entire statistical framework is inappropriate for the phenomenon he's studying.


What Are Fat Tails?

In statistics, a "tail" refers to the extreme ends of a probability distribution—the rare events, the outliers, the catastrophes. Different types of distributions have very different tails.

In a thin-tailed distribution (like height or IQ), extreme values are bounded and rare. You'll never meet someone 20 feet tall. Outliers exist but they're close to the mean. The average is a good summary of the data.

In a fat-tailed distribution (like wealth or war deaths), extreme values are unbounded and dominate the total. Most years have few war deaths, but occasional catastrophes kill millions. The average is meaningless because a single observation can dwarf all previous data combined.

Here's the key insight: In fat-tailed domains, the past is a terrible guide to the future. You can observe decades of stability, calculate declining averages, plot reassuring trend lines—and then a single event renders all your analysis obsolete.
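
To see the difference concretely, here is a toy simulation (a minimal sketch in Python; the distributions and parameters are arbitrary illustrative choices, not real data). The running average of a thin-tailed sample settles down quickly; the running average of a fat-tailed sample keeps getting yanked around by its largest draws.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Thin-tailed: heights in cm, roughly normal.
heights = rng.normal(loc=170, scale=10, size=n)

# Fat-tailed: Pareto with tail index alpha = 1.1. The mean exists (about 11)
# but converges very slowly and is dominated by rare, huge draws.
alpha = 1.1
pareto = 1 + rng.pareto(alpha, size=n)  # support starts at 1

for name, x in [("thin (normal)", heights), ("fat (Pareto, alpha=1.1)", pareto)]:
    running_mean = np.cumsum(x) / np.arange(1, n + 1)
    share_of_max = x.max() / x.sum()
    print(f"{name:25s} mean after 1k: {running_mean[999]:10.2f}  "
          f"after 100k: {running_mean[-1]:10.2f}  "
          f"largest single draw / total: {share_of_max:.4f}")
```

Run it a few times: the thin-tailed mean barely moves between the 1,000th and 100,000th observation, while the fat-tailed mean can jump noticeably, and a single draw can account for a visible share of the entire total.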

Financial markets taught this lesson in 2008. For years, models said housing-backed securities were safe because housing prices had never fallen nationwide. The models were based on historical data. The data showed stability. Then something happened that the historical data hadn't included, and the global financial system nearly collapsed.

Taleb argues that war deaths have the same statistical structure as financial crises. And if he's right, Pinker's conclusions don't follow from his data.


The Distribution of War

With Pasquale Cirillo, Taleb analyzed warfare data going back 2,000 years. Their findings:

War deaths follow a power law distribution. In a power law, the frequency of events falls off as a power of their magnitude: there are many small events and few large ones—but the large ones are far larger than a normal distribution would predict.

The tail is extremely fat. Wars capable of killing 10 million people, 50 million people, even hundreds of millions are not impossible "black swans"—they're part of the distribution. World War II wasn't a freak occurrence; it was a tail event in a distribution that produces tail events.

To visualize the problem: if you calculated the "average" deaths from war in, say, 1937, World War II would have been completely invisible in your data. Nothing in the historical record predicted a conflict that would kill 70-85 million people. Then it happened, and it retrospectively dominated all war statistics for centuries.

The data doesn't support trend claims. Given the variance in the distribution, we can't statistically distinguish between "violence is declining" and "violence is fluctuating randomly." The recent peaceful decades could be signal or noise—the data literally cannot tell us which.

We're undersampling the tails. We've only had one World War II–scale event in the data set. To have statistical confidence about trends in events that rare, we'd need thousands of years of observations—or multiple civilizations' worth of data.
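
To make the flavour of these findings concrete, here is a toy simulation of a completely stationary process with a power-law tail (a sketch only; the war probability, scale, and exponent are invented, not fitted to Cirillo and Taleb's data). Nothing about the process changes over its 2,000 simulated years, yet the largest single event can account for a striking share of all deaths, and the two halves of the record can differ enough to look like a trend.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_deaths(years, alpha=1.4, war_prob=0.3, scale=10_000):
    """Stationary toy process: each year a war breaks out with fixed
    probability, and its death toll is drawn from a Pareto tail."""
    wars = rng.random(years) < war_prob
    tolls = np.where(wars, scale * (1 + rng.pareto(alpha, years)), 0.0)
    return tolls

deaths = simulate_deaths(2000)

print(f"largest single event / all deaths: {deaths.max() / deaths.sum():.2%}")

# Same stationary process, yet the two halves can look like a 'trend'.
first, second = deaths[:1000], deaths[1000:]
print(f"mean annual deaths, first half : {first.mean():>12,.0f}")
print(f"mean annual deaths, second half: {second.mean():>12,.0f}")
```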


The Turkey Problem

Taleb's favorite illustration is the turkey before Thanksgiving. From the turkey's perspective, every day brings food, warmth, and safety. Day after day, the evidence accumulates: the farmer is benevolent. The trend is clear. The turkey's confidence in continued prosperity grows with each passing day.

Then comes Thanksgiving.

The point: absence of evidence is not evidence of absence. The fact that no catastrophe has occurred recently doesn't mean catastrophes can't occur. It especially doesn't mean the probability of catastrophe is declining. The turkey had excellent historical data showing a clear positive trend—and that data told it nothing about what was coming.

We may be turkeys. The Long Peace is now 80 years old. From inside it, the trend looks clear. Violence is declining. But 80 years is one observation of great-power peace. We don't have enough observations to know if this is a trend or a fluctuation.
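
The turkey's inference can even be written down. A naive Bayesian turkey using Laplace's rule of succession grows more confident with every good day, right up to the end (a toy calculation, nothing more):

```python
# Laplace's rule of succession: after n good days and 0 bad ones, a naive
# Bayesian estimate of P(good day tomorrow) is (n + 1) / (n + 2).
for n in [10, 100, 1000]:
    p_good = (n + 1) / (n + 2)
    print(f"after {n:4d} good days, estimated P(fed tomorrow) = {p_good:.4f}")

# The estimate climbs toward certainty right up to the day it fails:
# the data contain no information about a regime they have never sampled.
```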

Consider: before 1914, optimists pointed to decades of relative European peace, unprecedented trade integration, and spreading liberal values. Norman Angell published The Great Illusion in 1909, arguing that war between industrial powers had become economically irrational and therefore unlikely. Five years later, the Great War began. Angell's argument wasn't wrong—war between industrial powers was economically irrational. It happened anyway.

Were the pre-1914 optimists wrong? Or did they just experience an unusually bad tail event? We can't actually tell from the data—and that's Taleb's point. The statistical structure of the problem prevents confident conclusions. The comforting trend can be real, or it can be the setup for disaster. The evidence looks identical until the moment it doesn't.


Silent Evidence

There's another problem: we only observe the histories where observers survived.

If nuclear war had started in 1962 or 1983 (both years of near-misses), there might be no one around to collect statistics on violence trends. This observation-selection effect biases our data—we see the timelines where peace happened because those are the timelines where we exist.

This "silent evidence" problem plagues all tail-risk analysis. Civilizations that experienced existential catastrophes aren't around to report their data. We have selection bias toward observing success, which makes catastrophe look rarer than it is.

The financial version: banks that took huge risks and got lucky appear in the data as geniuses. Banks that took huge risks and failed are dead—their absence makes risk-taking look smarter than it was.

Applied to violence: we observe that nuclear weapons haven't been used since 1945 and conclude something about the robustness of deterrence. But in the timelines where deterrence failed, no one is around to update the statistics.
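
A small simulation shows how strong this selection effect can be (the 1% annual hazard and 80-year window are invented numbers for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

timelines = 100_000
years = 80
p_catastrophe = 0.01   # assumed annual probability, purely illustrative

# Each timeline survives a given year with probability 1 - p_catastrophe.
survived = rng.random((timelines, years)) > p_catastrophe
survived_all = survived.all(axis=1)

print(f"true annual probability of catastrophe:  {p_catastrophe:.2%}")
print(f"fraction of timelines surviving 80 years: {survived_all.mean():.2%}")

# Among surviving timelines, the observed catastrophe frequency is zero by
# construction: the data available to survivors systematically understate
# the hazard they actually faced.
```

In this toy setup, only around half of the timelines reach year 80 at all, yet every surviving observer looks back on an unblemished record.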


The Policy Problem

This isn't just an academic debate. If Pinker is right and violence is declining due to identifiable historical forces, we can cautiously relax—keep maintaining the institutions that created peace, while recognizing that we've made real progress.

If Taleb is right and the decline is indistinguishable from noise, complacency becomes dangerous. We might be lulled into false confidence during what is actually just the quiet period before the next catastrophe.

Worse: if the probability of extreme events hasn't changed, but our capacity for destruction has increased, we're in more danger than ever. A 1% annual probability of great-power war was bad in 1900. It's catastrophic when great powers have nuclear weapons.

The policy conclusion from fat-tail analysis: prepare for extremes, not averages. Don't be reassured by decades of stability. Don't assume trends will continue. Don't mistake luck for safety.

Insurance companies understand this. They don't say "no major earthquakes in California in 30 years, so earthquake risk is declining." They price for the catastrophe that will eventually come. Our approach to civilizational risk should have the same structure: plan for the worst, hope for the best, and never confuse hoping with knowing.
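
The contrast between the two ways of pricing risk is easy to make explicit (the frequencies and loss amounts below are invented for illustration):

```python
# Suppose a catastrophe occurs on average once every 150 years and costs 100
# (in some unit) when it does; the last 30 years happened to be quiet.
annual_probability = 1 / 150
loss_if_it_happens = 100.0

# 'Recent history' pricing: no event in 30 years, so the estimated risk is zero.
recent_window_estimate = 0 / 30
premium_from_recent_history = recent_window_estimate * loss_if_it_happens

# Actuarial pricing: expected annual loss under the full (assumed) hazard.
expected_annual_loss = annual_probability * loss_if_it_happens

print(f"premium priced off the last 30 quiet years: {premium_from_recent_history:.2f}")
print(f"premium priced off the long-run hazard:     {expected_annual_loss:.2f}")
```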


Pinker's Responses

Pinker and his defenders have responded to the fat-tails critique:

Even in fat-tailed domains, trends can exist. A power law distribution can shift over time; its exponent can change. Saying "it's fat-tailed" doesn't mean trends are impossible—just that they're harder to detect with confidence (one way to check for such a shift is sketched after these responses).

Multiple independent measures show decline. Homicide rates, torture abolition, judicial cruelty, violence against minorities—these aren't war deaths. Different measures with different distributions all pointing in the same direction is stronger evidence than any single metric. Taleb's critique applies most forcefully to warfare; it's less clear that homicide rates suffer from the same fat-tail problem.

Mechanisms matter. We have theories about why violence declined: states, commerce, empathy, reason. These theories make predictions beyond the violence data itself. If the theories are correct, we'd expect violence to decline—and it has.

The alternative is nihilism. If we can never know whether violence is declining, we can never know anything about social progress. At some point, we have to act on the best available evidence, even if that evidence isn't statistically airtight.
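
The first of these responses can be made operational: estimate the tail exponent in different periods and ask whether it has shifted. Here is a rough sketch using the Hill estimator on simulated data (not real war data; the "early" and "late" exponents are invented, and the choice of k is a known practical difficulty that this sketch does not resolve):

```python
import numpy as np

rng = np.random.default_rng(1)

def hill_estimator(x, k):
    """Hill estimate of the tail index alpha from the top-k order statistics."""
    x = np.sort(x)
    top = x[-k:]             # the k largest observations
    threshold = x[-k - 1]    # the (k+1)-th largest, used as the threshold
    return 1.0 / np.mean(np.log(top) - np.log(threshold))

# Simulated 'event sizes' from two periods with different true exponents.
early = 1 + rng.pareto(1.3, size=5000)   # fatter tail (smaller alpha)
late = 1 + rng.pareto(1.8, size=5000)    # thinner tail (larger alpha)

k = 500  # number of tail observations used; results are sensitive to this choice
print(f"Hill estimate, early period: {hill_estimator(early, k):.2f}  (true 1.3)")
print(f"Hill estimate, late period:  {hill_estimator(late, k):.2f}  (true 1.8)")
```

Whether such a shift is detectable in the actual war record, given how few tail events it contains, is precisely what the two camps dispute.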


What Would Resolve This?

The honest answer: more time. If the Long Peace continues for another century without a great-power war, the trend becomes increasingly hard to attribute to luck. At some point, evidence accumulates even in fat-tailed domains.

But we might not get more time. Nuclear weapons, engineered pandemics, advanced AI—21st-century risks are potentially worse than anything in the historical data. We're making decisions about existential risk without the statistical luxury of waiting for more data.

Given this uncertainty, both camps might converge on practical recommendations:

- Maintain institutions that seem to preserve peace, even if we can't prove causation.
- Take tail risks seriously, even if they haven't manifested recently.
- Avoid overconfidence in either direction—neither complacent optimism nor paralyzing pessimism.
- Build resilience against catastrophes rather than assuming they won't happen.


The Deeper Point

Taleb's critique isn't really about violence—it's about how we reason under uncertainty. His target is the hubris of treating tail-risk domains like normal distributions, of assuming that historical averages predict future averages, of using the wrong statistical tools for the problem at hand.

This matters because humans are bad at thinking about rare catastrophes. We anchor on recent experience. We assume trends continue. We mistake quiet periods for permanent stability. These cognitive biases are dangerous in fat-tailed domains—precisely the domains where catastrophes emerge from apparent calm.

The violence debate is a case study in a broader epistemic problem: how do you make decisions when you can't trust your data to reveal underlying probabilities? When the thing you most need to know—how likely is catastrophe—is exactly what your data can't tell you?

Taleb's answer: epistemic humility about what you know, combined with practical preparation for what you don't. Don't claim to know that violence is declining. Don't claim to know it isn't. Recognize that in this domain, confident claims exceed what the evidence supports.

That's uncomfortable. We want to know whether the world is getting better or worse. But wanting to know doesn't mean we can know—and pretending otherwise is how turkeys end up on the table.


Further Reading

- Taleb, N. N. (2007). The Black Swan. Random House.
- Taleb, N. N., & Cirillo, P. (2019). "The Decline of Violent Conflicts: What Do the Data Really Say?" Significance.
- Clauset, A. (2018). "Trends and Fluctuations in the Severity of Interstate Wars." Science Advances.


This is Part 3 of the Violence and Its Decline series. Next: "Structural Violence: Johan Galtung's Challenge"