


on September 27, 2012
This is the best general-readership book on applied statistics that I've read. Short review: if you're interested in science, economics, or prediction, read it. It's full of interesting cases, builds intuition, and is a readable example of Bayesian thinking.

Longer review: I'm an applied business researcher, which means my job is to deliver quality forecasts: to make them, persuade people of them, and live by the results they bring. Silver's new book offers a wealth of insight for many different audiences. It will help you develop intuition for the kinds of predictions that are and are not possible, where they may go wrong, and how to avoid some common pitfalls.

The core concept is this: prediction is a vital part of science, of business, of politics, of pretty much everything we do. But we're not very good at it, and fall prey to cognitive biases and other systemic problems such as information overload that make things worse. However, we are simultaneously learning more about how such things occur and that knowledge can be used to make predictions better -- and to improve our models in science, politics, business, medicine, and so many other areas.

The book presents real-world experience and critical reflection on what happens to research in social contexts. Data-driven models with inadequate theory can lead to terrible inferences. For example, on p. 162: "What happens in systems with noisy data and underdeveloped theory - like earthquake prediction and parts of economic and political science - is a two-step process. First, people start to mistake the noise for a signal. Second, this noise pollutes journals, blogs, and news accounts with false alarms, undermining good science and setting back our ability to understand how the system really works." This is the kind of insight that every good practitioner acquires through hard-won battles and continues to wrestle with every day, both in doing the work and in communicating it to others.

It is both readable and technically accurate: it presents just enough model detail while avoiding being formula-heavy. Statisticians will be able to reproduce models similar to the ones he discusses, but general readers will not be left out: the material is clear and applicable. Scholars of all stripes will appreciate the copious notes and citations, 56 pages of notes and another 20 pages of index, which detail the many sources. It is also worth noting that this is perhaps the best general-readership book written from a Bayesian perspective -- a viewpoint that is overdue for readable exposition.

The models cover a diversity of areas, from baseball to politics, from earthquakes to finance, from climate science to chess. Of course this makes the book fascinating to generalists, geeks, and breadth thinkers, but perhaps more importantly, I think it serves to develop reusable intuition across domains. And, for those of us who practice such things professionally, it supplies stories and examples we can use to illustrate concepts for the people we inform.

There are three audiences who might not appreciate the book as much. First are students looking for a how-to book. Silver provides a lot of pointers and examples, but does not get into nuts-and-bolts details or supply foundational technical instruction. That requires coursework in research methods and statistics. Second, his approach of building multiple models and interpreting them humbly will not satisfy those who promote a naive, gee-whiz, "look how great these new methods are" approach to research. But then, that's not a problem; it's a good thing. The third non-fitting audience will be experts who desire depth in one of the book's many topic areas; it's not a technical treatise for them, and I can confidently predict grumbling in some quarters. Overall, those three audiences are small, which happily leaves the rest of us to enjoy the book.

What would make it better? As a pro, I'd like a little more depth (of course). It emphasizes games a little too much for my taste. And a clearer prescriptive framework would be nice (but could also be a problem, for reasons he illustrates). But those are minor points; it hits its target better than any other such book I know.

Conclusion: if you're interested in scientific or statistical forecasting, either as a professional or layperson, or if you simply enjoy general science books, get it. Cheers!
798 people found this helpful.
on November 11, 2012
Excellent book!!! People looking for a "how to predict" silver bullet will (like some reviewers here) be disappointed, mainly because Silver is too honest to pretend that such a thing exists. The anecdotes and exposition are fantastic, and I wish we could make this book required reading for, say, everyone in the country.

During election season, everyone with a newspaper column or TV show feels entitled to make (transparently partisan) predictions about the consequences of each candidate's election for unemployment/crime/abortion/etc. This kind of pundit chatter, as Silver notes, tends to be insanely inaccurate. But there are also some amazing success stories in the prediction business. I list some chapter-by-chapter takeaways below (though there's obviously a lot more depth to the book than I can fit into a list like this):

1. People have puzzled over prediction and uncertainty for centuries.

2. TV pundits make terrible predictions, no better than random guesses. They are rewarded for being entertaining, and not really penalized for being wrong.

3. Statistics has revolutionized baseball. But computer geeks have not replaced talent scouts altogether. They're working together in more interesting ways now.

4. Weather prediction has gotten lots better over the last fifty years, due to highly sophisticated, large-scale supercomputer modeling.

5. We have almost no ability to predict earthquakes. But we know that some regions are more earthquake-prone, and that in a given region an earthquake of magnitude n happens about ten times as often as one of magnitude n+1 (see the sketch after this list).

6. Economists are terrible at predicting quantities such as next year's GDP. Their predictions are only very slightly correlated with reality. They also tend to be overconfident, drastically underestimating the margin of error in their guesses. Politically motivated predictions (such as those historically released by the White House) are even worse.

7. The spread of a disease like the flu is hard to predict. Sometimes we overreact because the risk of under-reacting seems greater.

8. A few professional sports gamblers are able to make a living by spotting meaningful patterns before others do and being right slightly more than half the time.

9. Kasparov thought he could beat Deep Blue. Couldn't. Interesting tale of humans/computers trying to outguess each other.

10. Nate Silver made a living playing online poker for a few years. When the government tightened the rules, the less savvy players ("fish") stopped playing, and he found he couldn't make money any more. So he started FiveThirtyEight.

11. Efficient-market hypothesis: the market seems very efficient, but not perfectly so. A possible source of error: most investment is done by institutions, and individuals at these institutions are rewarded based on short-term profits. Rational employees may face less career risk when they "bet with the consensus" than when they buck a trend; this may increase herding effects and make bubbles worse. Note: Nate pointedly does not claim that one can make money on Intrade by betting based on FiveThirtyEight probabilities. But he stresses that Intrade prices are themselves probably heavily informed by poll-based models like the ones on FiveThirtyEight.

12. Climate prediction: the prima facie case for anthropogenic warming is very strong (greenhouse gases up, temperature up, good theoretical reason for the former causing the latter). But there is plenty of good reason to doubt the accuracy of specific elaborate computer models, and most scientists admit uncertainty about the details.

13. We failed to predict both Pearl Harbor and September 11. Unknown unknowns got us. Got to watch out for loose Pakistani nukes and other potential catastrophic surprises in the future.
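Item 5's tenfold rule is the empirical Gutenberg-Richter law that Silver describes. As a minimal sketch of what it implies, here is a short Python snippet; the regional constants are invented purely for illustration and are not taken from the book:

```python
# Gutenberg-Richter relation: log10(N) = a - b * M, with b ~= 1 in many
# regions, so each one-unit step up in magnitude M is about ten times rarer.
a, b = 4.0, 1.0  # hypothetical regional constants, chosen for illustration

def quakes_per_year(magnitude):
    """Expected annual count of quakes at or above this magnitude."""
    return 10 ** (a - b * magnitude)

for m in range(4, 9):
    print(f"M >= {m}: ~{quakes_per_year(m):g} per year")
# Each printed count is ten times smaller than the one before it, which is
# why a region's frequent small quakes tell you something about its rare
# big ones, even though no individual quake can be predicted.
```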
314 people found this helpful.
This book was first published in 2012, at a time when Big Data (or, if you prefer, big data) was only beginning to receive the attention it deserves as a better way to use analytics within and beyond the business world. One key point is that big data should also be the right data, and in sufficient quantity. I recently re-read the book in its paperbound edition. The quality and value of its insights have held up remarkably well.

In the years that followed publication of the first edition, as Nate Silver notes in the new Preface, the perception that statisticians are soothsayers was proven to be an exaggeration, at best, and a dangerous assumption, at worst. This new edition "makes some recommendations but they are philosophical as much as technical. Once we're getting the big stuff right -- coming to a better [i.e. more accurate and more reliable] understanding of probability and uncertainty; learning to recognize our biases; appreciating the value of diversity, incentives, and experimentation -- we'll have the luxury of worrying about the finer points of technique."

In the Introduction to the First Edition, Silver observes, "If there is one thing that defines Americans -- one thing that makes us exceptional -- it is our belief in Cassius' idea that we are in control of our own fates." In this instance, Silver refers to a passage in Shakespeare's play Julius Caesar, in which Cassius observes:

"Men at some time are masters of their fates.
The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings."
(Act 1, Scene 2, Lines 146-148)

Cassius' assertion has serious implications and significant consequences. It is directly relevant to a theorem named after the Reverend Thomas Bayes (1701-1761), who first provided an equation that allows new evidence to update beliefs in his An Essay towards solving a Problem in the Doctrine of Chances (published posthumously in 1763). Silver: "Bayes's theorem is nominally a mathematical formula. But it is really much more than that. It implies that we must think differently about our ideas [predictions, for example] -- and how to test them. We must become more comfortable with probability and uncertainty. We must think more carefully about the assumptions and beliefs that we bring to a problem."
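To make the updating concrete, here is a minimal sketch of a single application of Bayes's theorem in Python; the probabilities are hypothetical numbers chosen for illustration, not an example from the book:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical: a forecaster starts 20% confident in a hypothesis, then sees
# evidence that is 75% likely if the hypothesis is true, 10% likely if false.
posterior = bayes_update(prior=0.20,
                         p_evidence_if_true=0.75,
                         p_evidence_if_false=0.10)
print(f"Belief rises from 20% to {posterior:.0%}")  # -> about 65%
```

The point Silver stresses survives the arithmetic: the evidence does not settle the question; it only moves a stated prior belief to a stated posterior belief.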

Silver cites another passage in Julius Caesar when Cicero warns Caesar: "Men may construe things, after their fashion / Clean from the purpose of things themselves." According to Silver, man perceives information selectively, subjectively, "and without much self-regard for the distortions this causes. We think we want information when we want knowledge." I take "want" to have a double meaning: lack and desire. Silver goes on to suggest, "the signal is the truth. The noise is what distracts us from the truth. This is a book about the signal and the noise...We may focus on those signals that advance our preferred theory about the world, or might imply a more optimistic outcome. Or we may simply focus on the ones that fit with bureaucratic protocol, like the doctrine that sabotage rather than an air attack was the more likely threat to Pearl Harbor."

In their review of the book for The New Yorker (January 25, 2013), Gary Marcus and Ernest Davis observe: "Switching to a Bayesian method of evaluating statistics will not fix the underlying problems; cleaning up science requires changes to the way in which scientific research is done and evaluated, not just a new formula." That is, we need to think about how we think so that we can make better decisions.

In Thinking, Fast and Slow, Daniel Kahneman explains how an easy question ("How coherent is the narrative of a given situation?") is often substituted for a more difficult one ("How probable is it?"). And this, according to Kahneman, is the source of many of the biases that infect our thinking: Kahneman and Tversky's System 1 jumps to an intuitive conclusion based on a "heuristic" -- an easy but imperfect way of answering hard questions -- and System 2 lazily endorses this heuristic answer without bothering to scrutinize whether it is logical.

When an unprecedented disaster occurs, some people may feel at least some doubt that they are in control of their fate. Nate Silver offers this reminder: "But our bias is to think we are better at prediction than we really are. The first twelve years of the new millennium have been rough, with one unpredicted disaster after another. May we arise from the ashes of these beaten but not bowed, a little more modest about our forecasting abilities, and a little less likely to repeat our mistakes."

A Jewish proverb suggests that man plans and then God laughs. The same could be said of man's predictions.
8 people found this helpful.
on May 26, 2017
The author, Nate Silver, does a great job of weaving the more technical statistical concepts into context early in the book, so as not to lose readers. However, I thought this would lead to a more detailed technical discussion later on, which the author said it would, but it never really transpired. Instead he keeps to analogies and keeps the science of prediction in context. There's really nothing wrong with that, if it's what you're looking for ... it just wasn't exactly what I wanted or expected.

Nonetheless it's a great book, and Silver bears the hallmark of someone who is intellectually curious and genuinely interested in making his analytical tools better, rather than attaching his ego to the outcome. As part of that, he's refreshingly candid in his opinions of others. The book is well researched and covers a lot of areas, including sports, weather, financial meltdowns, chess, and others. The best section, imo, was on chess, where he displays both his storytelling skills (the retelling of chess master Kasparov's loss to IBM was both compelling and insightful) and the more in-depth technical discussion that chess lends itself to. The book seemed to run out of steam toward the end, with some chapters going on longer than I thought necessary, particularly poker and efficient markets.

He shares some of my core beliefs: that statistics and data are not enough, and that if you really want to understand something and make good forecasts you need to understand its underlying structure; and that the proper relationship between man and machine is symbiotic, rather than one taking over the other. Those, plus the importance of thinking probabilistically, are the core takeaways.
6 people found this helpful.
on May 7, 2016
Early in "The Signal and the Noise," author Nate Silver reminds us how often we make predictions and forecasts in our day-to-day lives. Doing so is an unavoidable task of living, and this superb book takes the reader through the pitfalls inherent in prognostication and how better to avoid them by recognizing what is irrelevant (the noise) and what is germane (the signal).

There are many types of errors that lead to incorrect forecasts, and Silver discusses how people let their biases overly control their thoughts, how they build predictions from incomplete information, and how they ignore pertinent warning signs. He also outlines the personality traits that make for both good and bad forecasters and stresses the importance of objectivity.

The author shows that the concept behind Bayes' theorem -- thinking probabilistically -- is the key to revising our predictions as we get new information, making them better. Silver takes the reader on a breezy ride through the fields of politics, the housing bubble of the last decade, baseball, weather forecasting, earthquakes, economics, pandemics, gambling, chess, poker, stock markets, climate change, and terrorism to illustrate the concepts he puts forth.

Silver acknowledges that luck plays a role in some areas of our lives, but stresses that being more fundamentally sound in our prediction abilities can always help us ignore the noise, pay attention to the signal, and make better decisions. He closes with a short recapitulation of the main concepts he introduces.

The author is a well-known liberal, but he is an honest broker and teaches his concepts without making the case for any political ideology. "The Signal and the Noise" is great fun, especially for math lovers, but also for anyone with an interest in one of the areas he covers in the thirteen chapters he uses to illustrate his concepts. One prediction I can be sure of seeing come true is that the vast majority who invest the time to read this book will be glad that they did so.
3 people found this helpful.
on January 4, 2015
This is a long but generally interesting, not mathematically rigorous, look at the human side of making predictions. The book has four sections: the first three chapters give an in-depth look at the application of predictions (in finance, baseball, and politics); then there are four chapters on hard prediction problems (weather, earthquakes, economics, and infectious diseases); these are followed by an introduction to Bayesian thinking (how to adjust your beliefs); and finally there is a three-chapter discussion of how Bayesian thinking can be applied to hard problems.

The entire book is brilliantly written, and it really shines for the diversity of its examples. However, it is very long, and it drags badly in the chapters on Bayesian thinking. In particular, the author gets bogged down with very extended sections on chess, which does contain some brilliant discussion of man vs. machine matches, and poker, which could have been reduced to a couple of pages of insights on the economic impact of bad players.

The one major problem with the book is that it treats Bayesian thinking as a panacea. Nobody would argue with the general idea of trying to quantify your beliefs and then updating them when new data comes in. (This is the gist of Bayesian statistics.) However, the author does not address just how hard it is to quantify or specify one's starting beliefs. Coming up with a guesstimate of your prior beliefs can be extraordinarily difficult, and as the author does point out (but only in one example), tweaking your prior belief (probability) can *radically* change your conclusions. The fact that this is not stressed is a major disservice to the reader.
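The reviewer's complaint about prior sensitivity is easy to demonstrate numerically. Here is a minimal Python sketch, with invented likelihoods, in which analysts see identical evidence but start from different priors:

```python
def posterior(prior, p_data_if_true=0.9, p_data_if_false=0.3):
    """One Bayesian update: P(H|E) is proportional to P(E|H) * P(H)."""
    num = p_data_if_true * prior
    return num / (num + p_data_if_false * (1 - prior))

for prior in (0.01, 0.10, 0.50):
    print(f"prior {prior:.0%} -> posterior {posterior(prior):.0%}")
# prior 1%  -> posterior ~3%
# prior 10% -> posterior 25%
# prior 50% -> posterior 75%
# Identical evidence, radically different conclusions: the guesstimated
# prior does much of the work, which is exactly the reviewer's point.
```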

That major complaint aside, this book does a brilliant job of explaining complex ideas in layman's terms and of highlighting the importance of dismissing the idea of certainty and instead embracing, or at least paying attention to, the imprecision of our estimates and how wrong we, both laymen and experts, can be.
4 people found this helpful.
This is a book about forecasting; not a "how to" exactly, but a "how to make better". It is about why forecasts so often go wrong. They are hard to do well, and harder still to do precisely. Good forecasters know this and express results in terms of likelihoods, or margins of error. Bad forecasters often do not even care that their too-exact predictions are frequently, even almost always, wrong. Such people are "in the business" because the media attention such forecasts often receive has made them well off. Early in this very long book, Nate Silver gets into this. He calls such forecasters hedgehogs because they rely on a single strategy. By contrast, the foxes recognize that there is a lot to understand about the world, a lot that matters to what will happen in the future. This book is about the foxes, but even they are often wrong, because what they are doing is hard; much of the book is about why it is hard.

Silver rests his methodology on Thomas Bayes (and his subsequent champion Pierre-Simon Laplace) and an approach to statistical inference called Bayesian reasoning. Today this process is well known in the scientific and philosophical communities. Economists and sociologists are also fans, though its competitor, frequentism, developed by Ronald Fisher some 160 years after Bayes's essay appeared, is even better known. Frequentism is what most of the "measures of significance" in widespread use today are about. It has certainly given us insight into the probabilistic nature of the world. But as Silver argues, Bayes does better when we must start from somewhere and project the future. Bayes gives us a way to refine projections as new data comes in. Bayesianism is not only about the probabilistic nature of the world, but also about the incompleteness of our knowledge. Silver does not claim that Bayes is a magic bullet that will give you a correct forecast. Properly applied in areas where new data accumulates, it will refine the next forecast, and more so those that follow. That is the point of it all. We can rarely hit the bullseye, but we can approach it with each new try.
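As a concrete illustration of that iterative refinement, here is a minimal Python sketch of sequentially estimating an unknown event rate with a Beta-Bernoulli model; the data and prior are invented for illustration and are not an example from the book:

```python
# Estimate an unknown event rate, refining the forecast after each
# observation. 1 = event occurred, 0 = it did not (invented data).
observations = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
alpha, beta = 1.0, 1.0  # uniform Beta(1, 1) prior: no initial opinion

for i, hit in enumerate(observations, start=1):
    alpha += hit          # events seen so far (plus prior pseudo-count)
    beta += 1 - hit       # non-events seen so far (plus prior pseudo-count)
    estimate = alpha / (alpha + beta)  # posterior mean
    print(f"after {i:2d} observations: estimated rate {estimate:.2f}")
# Each forecast is a refinement of the last: we rarely hit the bullseye,
# but the estimate approaches it as evidence accumulates.
```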

Silver walks us through various kinds of real-world examples where forecasting is important for one reason or another. Games like baseball, basketball, poker, and chess make up his first class of examples. Every one of these presents the keen observer with signals and noise of different kinds, and the means by which we separate the two and properly understand where the signals point is crucial to improving our predictions about the future. From games he moves on to such things as weather, earthquakes, the economy, politics, military preparedness, and climate change. At the end he deals with the issue of terrorism.

His examples are chosen to illustrate how many different kinds of signals and noise there are. In some arenas there is so much signal, so many relations between factors that influence outcomes, that the signal itself becomes its own noise. For each arena explored he cites examples of successful and unsuccessful forecasting and, from a position of hindsight, explains how it was that the forecasts came out as they did. What part of the signal was properly interpreted or missed altogether? What part of the noise was mistaken for signal? Which models were too simple, grasping signal but not enough of it, and which rested mostly on noise mistaken for signal? In each of his examples he returns to Bayes.

Silver never tells us how to get rid of the noise. He cannot. A great part of his point here is that we usually do not know, exactly, what is signal and what is noise in the data. When there is a clear-cut causal connection, for example that increasing CO2 concentrations in the atmosphere must have a warming effect on the climate, we know we have some handle on real signal. But even a causal connection can be drowned out, at least in the short term, by other factors. He is careful to note, again and again, that telling signal from noise can be very hard to do and that often the best we can hope for is to understand that a wide latitude of likely possibility remains.

This is a long book. Its principles could be stated in a few pages, but its richness comes from Silver's careful explication of signal and noise in each of the arenas he explores, all of them very different. This explication requires a lot of pages, but that is the meat of the book. At the same time, Silver's explanations are all plain, and his writing about all of these subjects is easy to understand. Well done, and a book to which everyone with some forecasting to do should pay attention.
on January 12, 2017
Nate Silver is best known for using polling data to call political elections. He missed on the Trump win, but was pretty good up until then.

The Signal and the Noise is a well-written, well-researched, and well-reasoned book about forecasting and the various mistakes that prognosticators make. He addresses failures such as the inability of economists and others to foresee the bursting of the housing bubble and the chaos it created in 2008. Other themes include easier-to-predict subjects such as the future performance of major league baseball players and the success (or not) of poker players. In these latter two, he has real-world experience, as he developed software to predict baseball player performance and made a living as a professional poker player.

Other forecasting areas that he writes about include weather (a modern success); earthquakes (not so much, due to difficulties in differentiating the signal from the noise); the spread of infectious diseases (difficult to model due to human behaviour); and climate change (right on warming but uncertain about effects).

One of the overall themes involves Bayes' theorem. This requires an a priori hunch about the chances of an event, which is then refined by further observation and experimentation.

There were sections I liked more than others, but this may correlate more with my affinity for the subjects than with Silver's reporting. I particularly liked the section on climate change research. It was thoughtful and open-minded. As he does throughout the book, he looks at the facts and the stats and interviews the people involved in the research.
2 people found this helpful.
on April 16, 2018
This book was interesting, although I'm not sure I took away anything from it that I will use in my everyday life. Silver has a wide-ranging approach: the book is about forecasting in general, and so tackles a variety of areas where humans attempt to forecast, from weather (surprisingly good at this!) to earthquakes (lol terrible) to the stock market to baseball and on.

I have a friend who often talks about future events in his own life in probabilistic terms: a 90% chance he will go on this planned trip, or a 50% chance that he will retire this year, or whatever. Reading Silver's book gave me a new appreciation for this approach, because Silver encourages the reader to think of forecasts in terms of probability and especially to think about uncertainty. Not only "what don't you know?" but "what don't you know that you don't know?"

He skewers one particular target in the housing market crash: the rating agencies. The two major rating agencies emerged almost unscathed from the mortgage crisis, despite being in large part responsible for it. Yes, banks made sub-prime mortgages to people with terrible credit, and people with terrible credit dove into the market, and Fannie Mae and Freddie Mac underwrote those loans, and other banks came up with the bright idea of selling them by bundling them together and re-dividing them into tranches. But it was the rating agencies who'd given triple-A ratings to the "least risky" tranches of these high-risk mortgages. They're the ones who said not "the housing market won't crash" but "these investments are safe even if the housing market crashes."

(Narrator: they were not safe.)

Anyway, this was a scholarly book (so many footnotes!) written with a solid, engaging style. Easy to follow and interesting. If you are interested in forecasting or probabilities as applied to real life, it's an excellent read.
on February 27, 2018
A good book, from excellent author and uncannily successful political predictor Nate Silver, in great need of an editor. The text runs 500pp (without back matter) yet is rather factual and straightforward (unlike the guilty-pleasure purple prose of Taleb or Stephenson), and it's very repetitive: entire sections repeat, with examples drawn from other spheres of prediction (sports, stocks, or weather) as the only fundamental difference, in which we learn again that Silver made a living at online poker (and possibly online trading) before he made one as an online popular statistician. The result is a book 200pp or more over its Platonic length. No one will call Silver laconic, as the signal, rather ironically, gets lost in the noise.
One person found this helpful.