- Hardcover: 544 pages
- Publisher: Penguin Press; 1st edition (September 27, 2012)
- Language: English
- ISBN-10: 159420411X
- ISBN-13: 978-1594204111
- Product Dimensions: 6.4 x 1 x 9.6 inches
- Shipping Weight: 1.8 pounds
- Average Customer Review: 4.4 out of 5 stars (1,110 customer reviews)
- Amazon Best Sellers Rank: #29,875 in Books
The Signal and the Noise: Why So Many Predictions Fail - But Some Don't 1st Edition
Amazon Best Books of the Month, September 2012: People love statistics. Statistics, however, do not always love them back. The Signal and the Noise, Nate Silver's brilliant and elegant tour of the modern science-slash-art of forecasting, shows what happens when Big Data meets human nature. Baseball, weather forecasting, earthquake prediction, economics, and polling: In all of these areas, Silver finds predictions gone bad thanks to biases, vested interests, and overconfidence. But he also shows where sophisticated forecasters have gotten it right (and occasionally been ignored to boot). In today's metrics-saturated world, Silver's book is a timely and readable reminder that statistics are only as good as the people who wield them. --Darryl Campbell
Silver doesn't offer one comprehensive theory for what makes a good prediction in his interdisciplinary tour of forecasting. But the book is a useful gloss on the tricky business of making predictions correctly. —Chris Wilson
Top Customer Reviews
Longer review: I'm an applied business researcher and that means my job is to deliver quality forecasts: to make them, persuade people of them, and live by the results they bring. Silver's new book offers a wealth of insight for many different audiences. It will help you to develop intuition for the kinds of predictions that are possible, that are not so possible, where they may go wrong, and how to avoid some common pitfalls.
The core concept is this: prediction is a vital part of science, of business, of politics, of pretty much everything we do. But we're not very good at it, and fall prey to cognitive biases and other systemic problems such as information overload that make things worse. However, we are simultaneously learning more about how such things occur and that knowledge can be used to make predictions better -- and to improve our models in science, politics, business, medicine, and so many other areas.
The book presents real-world experience and critical reflection on what happens to research in social contexts. Data-driven models with inadequate theory can lead to terrible inferences. For example, on p. 162: "What happens in systems with noisy data and underdeveloped theory - like earthquake prediction and parts of economic and political science - is a two-step process. First, people start to mistake the noise for a signal. Second, this noise pollutes journals, blogs, and news accounts with false alarms, undermining good science and setting back our ability to understand how the system really works." This is the kind of insight that every good practitioner acquires through hard-won battles, and continues to wrestle with every day, both in doing the work and in communicating it to others.
It is both readable and technically accurate: it presents just enough model detail while avoiding being formula-heavy. Statisticians will be able to reproduce models similar to the ones he discusses, but general readers will not be left out: the material is clear and applicable. Scholars of all stripes will appreciate the copious notes and citations - 56 pages of notes and another 20 pages of index - which detail the many sources. It is also worth noting that this is perhaps the best general-readership book written from a Bayesian perspective, a viewpoint long overdue for readable exposition.
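Since the review highlights the book's Bayesian perspective, here is a minimal sketch of the kind of update rule that perspective rests on. The numbers are purely illustrative, not taken from the book:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E,
    via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of the evidence under both hypotheses.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical forecast: start from a 30% prior, then observe evidence
# that is twice as likely if the hypothesis is true (0.80 vs. 0.40).
posterior = bayes_update(prior=0.30, p_e_given_h=0.80, p_e_given_not_h=0.40)
print(round(posterior, 3))  # the prior rises, but nowhere near certainty
```

The point Silver stresses is the discipline of the procedure: a stated prior, explicit likelihoods, and an updated belief, rather than a confident leap from a single observation.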
The models cover a diversity of areas from baseball to politics, from earthquakes to finance, from climate science to chess. Of course this makes the book fascinating to generalists, geeks, and breadth thinkers, but perhaps more importantly, I think it serves well to develop reusable intuition across domains. And, for those of us who practice such things professionally, to bring stories and examples that we can tell and use to illustrate concepts with the people we inform.
There are three audiences who might not appreciate the book as much. First are students looking for a how-to book. Silver provides a lot of pointers and examples, but does not get into nuts-and-bolts details or supply foundational technical instruction. That requires coursework in research methods and statistics. Second, his approach of building multiple models and interpreting them humbly will not satisfy those who promote a naive, gee-whiz, "look how great these new methods are" approach to research. But then, that's not a problem; it's a good thing. The third non-fitting audience will be experts who desire depth in one of the book's many topic areas; it's not a technical treatise for them, and I can confidently predict grumbling in some quarters. Overall, those three audiences are small, which happily leaves the rest of us to enjoy the book.
What would make it better? As a pro, I'd like a little more depth (of course). It emphasizes games a little too much for my taste. And a clearer prescriptive framework could be nice (but also could be a problem for reasons he illustrates). But those are minor points; it hits its target better than any other such book I know.
Conclusion: if you're interested in scientific or statistical forecasting, either as a professional or layperson, or if you simply enjoy general science books, get it. Cheers!
During election season, everyone with a newspaper column or TV show feels entitled to make (transparently partisan) predictions about the consequences of each candidate's election for unemployment/crime/abortion/etc. This kind of pundit chatter, as Silver notes, tends to be insanely inaccurate. But there are also some amazing success stories in the prediction business. I list some chapter-by-chapter takeaways below (though there's obviously a lot more depth to the book than I can fit into a list like this):
1. People have puzzled over prediction and uncertainty for centuries.
2. TV pundits make terrible predictions, no better than random guesses. They are rewarded for being entertaining, and not really penalized for being wrong.
3. Statistics has revolutionized baseball. But computer geeks have not replaced talent scouts altogether. They're working together in more interesting ways now.
4. Weather prediction has gotten much better over the last fifty years, thanks to highly sophisticated, large-scale supercomputer modeling.
5. We have almost no ability to predict earthquakes. But we know that some regions are more earthquake-prone, and that in a given region an earthquake of magnitude n happens about ten times as often as an earthquake of magnitude (n+1).
6. Economists are terrible at predicting quantities such as next year's GDP. Predictions are only very slightly correlated with reality. They also tend to be overconfident, drastically underestimating the margin of error in their guesses. Politically motivated predictions (such as those historically released by the White House) are even worse.
7. The spread of a disease like the flu is hard to predict. Sometimes we overreact because risk of under-reacting seems greater.
8. A few professional sports gamblers are able to make a living by spotting meaningful patterns before others do, and being right slightly more than half the time.
9. Kasparov thought he could beat Deep Blue. Couldn't. Interesting tale of humans/computers trying to outguess each other.
10. Nate Silver made a living playing online poker for a few years. When the government tightened the rules, the less savvy players ("fish") stopped playing, and he found he couldn't make money any more. So he started FiveThirtyEight.
11. Efficient market hypothesis: the market seems very efficient, but not perfectly so. Possible source of error: most investment is done by institutions, and individuals at these institutions are rewarded based on short-term profits. Rational employees may face less career risk when they "bet with the consensus" than when they buck a trend; this may increase herding effects and make bubbles worse. Note: Nate pointedly does not claim that one can make money on Intrade by betting based on FiveThirtyEight probabilities. But he stresses that Intrade prices are themselves probably heavily informed by poll-based models like the ones on FiveThirtyEight.
12. Climate prediction: the prima facie case for anthropogenic warming is very strong (greenhouse gases up, temperature up, good theoretical reason for the former causing the latter). But there is plenty of good reason to doubt the accuracy of specific elaborate computer models, and most scientists admit uncertainty about the details.
13. We failed to predict both Pearl Harbor and September 11. Unknown unknowns got us. Got to watch out for loose Pakistani nukes and other potential catastrophic surprises in the future.
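The magnitude-frequency relation in takeaway 5 - each whole-magnitude step is roughly ten times rarer - is the Gutenberg-Richter law. A quick sketch, using a hypothetical region calibrated to one magnitude-5 quake per year (the base rate is illustrative, not from the book):

```python
def expected_annual_count(magnitude, rate_at_mag5=1.0):
    """Expected quakes per year at or above the given magnitude, assuming
    frequency drops by a factor of ten per whole magnitude step
    (Gutenberg-Richter), calibrated to rate_at_mag5 at magnitude 5."""
    return rate_at_mag5 * 10 ** (5 - magnitude)

# Frequencies fall off fast: magnitude 8 is a once-in-a-millennium event
# in this hypothetical region, even though magnitude 5 is annual.
for m in range(5, 9):
    print(m, expected_annual_count(m))
```

This is exactly the sense in which seismologists can forecast (long-run frequencies for a region) without being able to predict (a specific quake on a specific date), a distinction Silver leans on in that chapter.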
Nonetheless it's a great book, and Silver bears the hallmark of someone who is intellectually curious and genuinely interested in making his analytical tools better, rather than attaching his ego to the outcome. As part of that, he's refreshingly candid in his opinions of others. The book is well researched and covers a lot of ground, including sports, weather, financial meltdowns, and chess. The best section, in my opinion, was on chess, where he displayed both his storytelling skills (the retelling of Garry Kasparov's loss to IBM's Deep Blue was compelling and insightful) and the more in-depth technical discussion that chess lends itself to. The book seemed to run out of steam toward the end, with some chapters running longer than necessary, particularly poker and efficient markets.
He shares some of my core beliefs: that statistics and data are not enough - if you really want to understand something and make good forecasts, you need to understand its underlying structure - and that the proper relationship between man and machine is symbiotic, rather than one taking over the other. Those, and the importance of thinking probabilistically, are the core takeaways.