Every time we choose a route to work, decide whether to go on a second date, or set aside money for a rainy day, we are making a prediction about the future. Yet from the global financial crisis to 9/11 to the Fukushima disaster, we often fail to foresee hugely significant events. In The Signal and the Noise, the New York Times' political forecaster and statistics guru Nate Silver explores the art of prediction, revealing how we can all build a better crystal ball.
In his quest to distinguish the true signal from a universe of noisy data, Silver visits hundreds of expert forecasters, in fields ranging from the stock market to the poker table, from earthquakes to terrorism. What lies behind their success? And why do so many predictions still fail? By analysing the rare prescient forecasts, and applying a more quantitative lens to everyday life, Silver distils the essential lessons of prediction.
We live in an increasingly data-driven world, but it is harder than ever to detect the true patterns amid the noise of information. In this dazzling insider's tour of the world of forecasting, Silver reveals how we can all develop better foresight in our everyday lives.
1. THE FINANCIAL CRISIS:
A Catastrophic Failure of Prediction
It was October 23, 2008. The stock market was in free fall, having plummeted almost 30
percent over the previous five weeks. Once-esteemed companies like Lehman Brothers had
gone bankrupt. Credit markets had all but ceased to function. Houses in Las Vegas had lost
40 percent of their value. Unemployment was skyrocketing. Hundreds of billions of dollars
had been committed to failing financial firms. Confidence in government was the lowest
that pollsters had ever measured. The presidential election was less than two weeks away.
Congress, normally dormant so close to an election, was abuzz with activity. The
bailout bills it had passed were sure to be unpopular and it needed to create every
impression that the wrongdoers would be punished. The House Oversight Committee had called
the heads of the three major credit-rating agencies, Standard & Poor’s (S&P), Moody’s, and
Fitch Ratings, to testify before them. The ratings agencies were charged with assessing
the likelihood that trillions of dollars in mortgage-backed securities would go into
default. To put it mildly, it appeared they had blown the call.
The Worst Prediction of a Sorry Lot
The financial crisis of the late 2000s can be understood in a number of ways: as a
moral failure or as a regulatory failure or as a failure of institutions. Most obviously,
it was an economic failure of massive proportions. By late 2011, four years after the
Great Recession officially began, the average American was about $2,500 poorer than she
would have been otherwise.
I am convinced, however, that the best way to view the financial crisis is as a
catastrophic failure of prediction. The predictive failures were widespread, occurring at
virtually every stage before, during, and after the crisis, and involving everyone from the
mortgage brokers to the White House.
The most calamitous failures of prediction usually have a lot in common. We focus on
those signals that tell a story about the world as we would like it to be, not how it
really is. We ignore the risks that are hardest to measure, even when they pose the
greatest threats to our well-being. We make approximations and assumptions about the world
that are much cruder than we realize. We abhor uncertainty, even when it is an irreducible
part of the problem we are trying to solve. If we want to get at the heart of the
financial crisis, we should begin by identifying the greatest predictive failure of all, a
prediction that committed all these mistakes.
The ratings agencies had given their AAA rating, normally reserved for a handful of the
world’s most solvent governments and best-run businesses, to thousands of mortgage-backed
securities, financial instruments that allowed investors to bet on the likelihood of
someone else defaulting on their home. The ratings issued by these companies are quite
explicitly meant to be predictions: estimates of the likelihood that a piece of debt will
go into default. Standard & Poor’s told investors, for instance, that when it rated a
particularly complex type of security known as a collateralized debt obligation (CDO) at AAA, there
was only a 0.12 percent probability—about 1 chance in 850—that it would fail to pay out
over the next five years. This supposedly made it as safe as a AAA-rated corporate bond
and safer than S&P now assumes U.S. Treasury bonds to be. The ratings agencies do
not grade on a curve.
In fact, around 28 percent of the AAA-rated CDOs defaulted, according to S&P’s internal
figures. (Some independent estimates are even higher.) That means that the actual default
rates for CDOs were more than two hundred times higher than S&P had predicted.
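The scale of the miss is simple arithmetic on the figures quoted above; here is a quick check, using Python only as a calculator:

```python
# Figures quoted above: S&P's implied five-year default probability for a
# AAA-rated CDO, versus the default rate later observed per S&P's own figures.
predicted = 0.0012   # 0.12 percent
actual = 0.28        # roughly 28 percent

odds = 1 / predicted          # the prediction expressed as odds
ratio = actual / predicted    # how badly the prediction missed

print(round(odds))            # 833 -- the book rounds this to "about 1 in 850"
print(round(ratio))           # 233 -- "more than two hundred times higher"
```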
This is just about as complete a failure as it is possible to make in a prediction:
trillions of dollars in investments that were rated as being almost completely safe
instead turned out to be almost completely unsafe. It was as if the weather forecast had
been 86 degrees and sunny, and instead there was a blizzard.
When you make a prediction that goes that badly, you have a choice of how to explain
it. One path is to blame external circumstances—what we might think of as “bad luck.”
Sometimes this is a reasonable choice, or even the correct one. When the National Weather
Service says there is a 90 percent chance of clear skies, but it rains instead and spoils
your golf outing, you can’t really blame them. Decades of historical data show that when
the Weather Service says there is a 1 in 10 chance of rain, it really does rain about 10
percent of the time over the long run.
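That long-run agreement between stated probabilities and observed frequencies is called calibration, and it can be checked mechanically. A minimal sketch, using made-up forecast records rather than actual Weather Service data:

```python
from collections import defaultdict

# Each record: (stated probability of rain, whether it actually rained).
# Toy data: a calibrated forecaster's "10%" days see rain about 1 time in 10.
records = [(0.1, False)] * 9 + [(0.1, True)] + [(0.9, True)] * 4 + [(0.9, False)]

# Group outcomes by the probability the forecaster stated.
buckets = defaultdict(list)
for stated, rained in records:
    buckets[stated].append(rained)

# Compare each stated probability with the observed frequency of rain.
for stated in sorted(buckets):
    outcomes = buckets[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%}: rained {observed:.0%} of {len(outcomes)} days")
```

On real data, a forecaster is well calibrated when each bucket's observed frequency tracks its stated probability over many days.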
This explanation becomes less credible, however, when the forecaster does not have a
history of successful predictions and when the magnitude of his error is larger. In these
cases, it is much more likely that the fault lies with the forecaster’s model of the world
and not with the world itself.
In the instance of CDOs, the ratings agencies had no track record at all: these were
new and highly novel securities, and the default rates claimed by S&P were not derived
from historical data but instead were assumptions based on a faulty statistical model.
Meanwhile, the magnitude of their error was enormous: AAA-rated CDOs were two hundred
times more likely to default in practice than they were in theory.
The ratings agencies’ shot at redemption would be to admit that the models had been
flawed and the mistake had been theirs. But at the congressional hearing, they shirked
responsibility and claimed to have been unlucky. They blamed an external contingency: the housing bubble.
“S&P is not alone in having been taken by surprise by the extreme decline in the
housing and mortgage markets,” Deven Sharma, the head of Standard & Poor’s, told Congress
that October. “Virtually no one, be they homeowners, financial institutions, rating
agencies, regulators or investors, anticipated what was coming.”
Nobody saw it coming. When you can’t state your innocence, proclaim your
ignorance: this is often the first line of defense when there is a failed forecast. But
Sharma’s statement was a lie, in the grand congressional tradition of “I did not have
sexual relations with that woman” and “I have never used steroids.”
What is remarkable about the housing bubble is the number of people who did see it
coming—and who said so well in advance. Robert Shiller, the Yale economist, had noted its
beginnings as early as 2000 in his book Irrational Exuberance. Dean Baker, a
caustic economist at the Center for Economic and Policy Research, had written about the
bubble in August 2002. A correspondent at The Economist magazine, normally known
for its staid prose, had spoken of the “biggest bubble in history” in June 2005. Paul
Krugman, the Nobel Prize–winning economist, wrote of the bubble and its inevitable end in
August 2005. “This was baked into the system,” Krugman later told me. “The housing crash
was not a black swan. The housing crash was the elephant in the room.”
Ordinary Americans were also concerned. Google searches on the term “housing bubble”
increased roughly tenfold from January 2004 through summer 2005. Interest in the term was
heaviest in those states, like California, that had seen the largest run-up in housing
prices—and which were about to experience the largest decline. In fact, discussion of the
bubble was remarkably widespread. Instances of the two-word phrase “housing bubble” had
appeared in just eight news accounts in 2001 but jumped to 3,447 references by 2005. The
housing bubble was discussed about ten times per day in reputable newspapers and periodicals.
And yet, the ratings agencies—whose job it is to measure risk in financial markets—say
that they missed it. It should tell you something that they seem to think of this as their
best line of defense. The problems with their predictions ran very deep.
1. Can you explain the title of your book, THE SIGNAL AND THE NOISE?
It’s a metaphor that comes from electrical engineering. The signal is the sound that
you want to transmit: say, a recording of Beethoven’s Moonlight Sonata. The noise is
anything that interferes with it: say, the crackling from a nearby radio tower.
I found that this metaphor was coming up again and again in my research. Intelligence
officials trying to detect terrorist activity will speak of a terrorist’s signal. Or an
economist might speak of the noise in financial markets.
Their goal is to isolate the signal from the noise. But I call the book “The Signal
and The Noise” because I found that there is a sort of duality between them. We can
trick ourselves into thinking that random patterns are meaningful ones – that noise is
signal -- and sometimes vice versa.
2. Why do you think statistics books continue to capture the popular imagination,
from Freakonomics to Moneyball?
We encounter so much information today that people are naturally curious about what in
the heck we should do with all of it. And we’re becoming less trusting of institutions
that mediate information, like the news media. We have all this data, and we want to learn
for ourselves what it all means.
A little bit of math and statistics and probability and logic helps us with our
information-processing goals. But what’s great about books like Moneyball and
Freakonomics is that they make statistics approachable. Subjects like English and
history are taught in very hands-on ways – you read great books, discuss the ideas and
characters, and it’s easy to understand their relevance. Whereas math is taught in very
abstract and technical ways -- even though it’s just as relevant to our everyday lives,
and just as intuitive if it’s taught well.
Books like Freakonomics and Moneyball help to bridge that gap. They’re
sort of making up for the calculus teacher that had you memorize one too many derivatives
and turned you off to the subject as a result. Not that there’s anything wrong with calculus.
3. Even before you came on the scene as a political forecaster, you developed an
innovative system for predicting baseball performance – how did you go from sports to politics?
Partly because of Moneyball, the competition in baseball was getting very
fierce. Most of the inefficiencies that Michael Lewis described in the book were exploited
a long time ago. There’s hardly a team left that doesn’t employ a statistical analyst of
some kind, or which doesn’t know that on-base percentage is more important than batting average.
Whereas in politics – well, you don’t need to watch Fox News or MSNBC for very long to
know that there’s a lot of hot air when it comes to political coverage. Your average
political pundit is pretty detached from the things that normal voters care about.
Reporters are a lot better than pundits, but they still need to weave messy information
into neat narratives. So I thought there was room to apply a little bit more rigor to the subject.
4. What is the easiest professional sport to predict?
I’d argue that baseball is both the easiest and the hardest to predict,
depending on how you’re thinking about the question. The statistical methods are the most
complete in baseball. Pretty much everything that has happened on a baseball diamond
within the past 150 years has been dutifully and accurately recorded. And for the most
part, these statistics provide a very good description of what’s really going on in the game.
But we also know that there is a lot of luck in baseball. Even a world-class team will
lose one third of its ballgames. And a .300 hitter will go through plenty of slumps where he
can’t hit a lick. So we know a lot about how to isolate the signal from the noise in
baseball – but there is an awful lot of noise!
So maybe the best answer is something like tennis, which meets both definitions of
predictability. It’s quite simple structurally, so you can describe it pretty well with
statistics. And the best players – Roger Federer, Rafael Nadal – are absolutely dominant.
5. Politics and baseball, the two subjects you are best known for, are just part of
the book. Why was it important to include so many different fields - economics, earth and
life sciences, games, even terrorism?
One thing that baseball fans know is to be wary of small sample sizes. If you show up
at the ballpark, and the catcher gets three hits that day, that doesn’t really tell you
very much about how good he really is. It takes a long time – hundreds of at-bats -- for
the signal to emerge since there’s so much luck in the game.
But in the same way, I thought, perhaps baseball is an exceptional case. Are there
Moneyball-like success stories in other fields in which statistics and analysis and
prediction are pertinent?
In fact, I found that there are entire disciplines in which our analysis has failed to
produce much progress, at least as measured by our ability to make reliable predictions.
Finance and economics are obvious examples of this. Economists have
literally tens of thousands of data series to mine – more statistics than baseball geeks
do. But they still aren’t able to predict recessions more than a few months in advance.
The book needed to cover a diverse enough range of examples that I could come to some
systematic conclusions about why predictions succeed and why they fail in different
fields. And that meant going beyond the cases that were most familiar to me when I started
to write and research the book.
Often, the conclusions were surprising. I’d thought that weather forecasting was a
hopeless case, for instance, but it turned out to be a huge success story. Meteorologists
and professional gamblers basically emerge as the heroes of the book.
6. Rather controversially you say in the book “we can never make perfectly objective
predictions. They will always be tainted by our subjective point of view.” How so?
Well, I think human beings are pretty darned smart. Our brains can store about three
terabytes of information, which is just an enormous amount. And a three-year-old is able
to do a lot of things that a supercomputer can’t.
Still, three terabytes represents only about one one-millionth of the information that
IBM says is now being produced in the world each day. We have to be terribly selective
about the information that we choose to remember. That necessarily implies that we have a
point of view – the set of facts that I have at my command won’t be the same as yours. And
we have to make approximations, whether it’s in the form of our language, or the
mathematical models that we design.
Even the things we take most for granted, like our sensory inputs (vision, hearing,
etc.) rely heavily on making approximations about the objective world. There’s just way
more information out there than our brains can process.
So it’s absolutely delusional to think that any one of us has a monopoly on the truth –
that our beliefs about the world aren’t flawed in any number of large and small ways. The
book, in some ways, is about accepting our flaws, as well as recognizing the things that
we’re good at.
7. What distinguishes those who are good at forecasting from those who seem to get it wrong?
The whole book is an answer to that question. But here’s one big thing that weather
forecasters and gamblers -- two of our success stories -- have in common. They
recognize that their knowledge of the world is imperfect. They express their
predictions in terms of probabilities: there’s a 40 percent chance of rain tomorrow;
there’s a 30 percent chance that I’ll catch a card to make a flush and win a huge poker pot.
This type of thinking turns out to be extremely important when it comes to sorting
through the enormous amount of information that we encounter today.
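The flush example can be made concrete with the standard "outs" arithmetic. The counts below are ordinary hold'em numbers, not taken from the book, and the exact percentage depends on how many cards are left to come:

```python
from fractions import Fraction

# After the flop: 9 cards of your suit remain ("outs") among 47 unseen cards,
# and two more cards will be dealt. P(hit) = 1 - P(miss both draws).
outs, unseen = 9, 47
miss_both = Fraction(unseen - outs, unseen) * Fraction(unseen - outs - 1, unseen - 1)
hit = 1 - miss_both

print(f"{float(hit):.1%}")   # 35.0% with two cards to come
```

With only one card to come the chance is 9/46, about 20 percent; quoted figures like "30 percent" depend on where in the hand the player is.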
8. What do you feel most accounted for the grand failure of forecasting the 2008 financial crisis?
A lot of things had to go wrong to create such a gigantic mess. The credit-rating
agencies built models that assumed that the status of one person’s mortgage wasn’t much
related to another’s: if a carpenter in Cleveland defaults on his mortgage, that has no
effect on whether a dentist in Denver does. That assumption fails miserably when you have
a massive housing bubble, and mortgages go underwater all across the country.
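The consequence of that independence assumption can be seen in a toy simulation. This is a sketch of the general point with hypothetical numbers, not the agencies' actual models: give every mortgage the same 5 percent average default chance, but in one version let a shared housing bust push defaults up together.

```python
import random

random.seed(42)

N, TRIALS = 1000, 2000      # mortgages per pool, simulated five-year periods
THRESHOLD = 0.10            # the pool "fails" if over 10% of mortgages default

def pool_failure_rate(correlated: bool) -> float:
    failures = 0
    for _ in range(TRIALS):
        if correlated:
            # A shared shock (a national housing bust) hits 10% of the time and
            # raises every mortgage's default chance at once. The long-run
            # average is still 5%: 0.1 * 0.32 + 0.9 * 0.02 = 0.05.
            p = 0.32 if random.random() < 0.10 else 0.02
        else:
            p = 0.05        # the independence assumption: always 5%, no shocks
        defaults = sum(random.random() < p for _ in range(N))
        failures += defaults > N * THRESHOLD
    return failures / TRIALS

print(f"independent: {pool_failure_rate(False):.1%}")  # essentially never fails
print(f"correlated:  {pool_failure_rate(True):.1%}")   # fails in roughly 1 in 10 periods
```

Both versions have the same average default rate; only the correlation differs. Under independence the pool's failure probability is astronomically small, which is roughly how an estimate like 0.12 percent can coexist with a reality closer to 28 percent.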
What’s worse, people doubled down rather than hedged their bets. For every dollar
invested in purchasing a home, Wall Street was making about $50 in bets on the side. So a
collapse in housing prices brought down the entire financial system. But it began with
having a naïve trust in models that made exact-seeming predictions based on utterly flawed assumptions.
9. In 2008 you correctly predicted Barack Obama’s victory in 49 of 50 states as well
as the winner of all 35 U.S. Senate races. In the four years since, has your methodology
changed at all when it comes to politics? What’s toughest about political forecasting?
What’s tricky about presidential elections is simply that they don’t happen very often
– just one every four years. And there are dozens of factors that go into determining the
winner. It’s very much the opposite of something like baseball, where each team plays 162
games per season, but the structure of the game is pretty simple. In presidential
elections, we’re really just making educated guesses about which factors determine the
winners and losers.
With that said, my methodology is increasingly starting to incorporate some of those
structural factors in addition to polling. The evidence is pretty clear that a bad
economy, for instance, makes life challenging for the incumbent. If there’s a bad jobs
report, but a good poll for Obama in Ohio, the jobs report is often the more important signal.
Let me protest, incidentally, that some of those forecasts were a bit fortunate in
2008. We had Al Franken with a 50.1 percent chance of winning the Senate race in Minnesota
– basically a coin flip. The coin came up the right way for me after a prolonged recount
there. But that’s basically just luck, so it’s no reason to rest on one’s laurels.
10. In the era of Big Data and the advent of faster, better technology, are we
getting better at forecasting?
In certain circumstances, yes, the technology is helping -- like in baseball prediction
or weather forecasting. But it shouldn’t be assumed that this is the default case.
Think of all the disasters that we’ve had in the new millennium – from the financial
crisis to the September 11 attacks to the Japan earthquake (some seismologists thought we
couldn’t possibly have so large an earthquake there). All of these involved some
substantial failure of prediction, and all of these occurred in information-rich fields.
More information and better technology aren’t all that useful if we don’t know how to
use them. And all that shiny new technology can sometimes make us overconfident – we
think we have mastery of a subject when we just don’t.
11. How is forecasting a natural phenomenon (weather, earthquakes) different from
predicting human performance (financial markets, sports, politics)?
Many natural systems, including weather and earthquakes, are quite complex. But at
least they aren’t changing much on the scale of human lifespans. Slowly, if not
always steadily, we’ve made progress in predicting natural phenomena.
When it comes to systems that involve how humans interact with one another, however,
they’re not just complex -- but also growing more complex all the time. There’s
just no comparison between the globalized economy of today and the localized, agrarian
economy of a couple of centuries ago. Seven billion human beings, who can catch a flight
to anywhere in the world if they have the means, can spread the flu around the planet much
faster than they could a generation ago.
So we’re always running against a moving target. Prediction in these areas requires a
somewhat more defensive posture: we have to prepare more for “unknown unknowns” as human
beings keep finding new and ingenious ways of interacting with one another.
12. You write about hedgehogs and foxes. Can you explain the difference? Which one
are you? Obama? Romney?
These terms come from a quote attributed to the Greek poet Archilochus: "the fox knows
many things, but the hedgehog knows one big thing". Foxes tend to be multidisciplinary and
adaptable, always looking for little gains around the margin. They’re comfortable with
probability and uncertainty. They’re a little more pragmatic.
Hedgehogs like to swing for the fences and seek out the big kill. They want to find
some grand unifying principle that explains the world. But they can be stubborn and
ideological as a result.
I’m sure that we need both these types of people in the world. But in the book, I cite
research to suggest that the foxes are a lot better at making predictions. They’re more
likely to know their limits, and less likely to mistake noise for a signal because it
happens to fit some theory they’ve concocted.
Obama and Romney are both foxes, I think. I’m sure that some partisans will disagree,
but I think they’re both quite pragmatic – perhaps unusually so for presidential
candidates. Certainly as compared to George W. Bush. I’m a fox too.