Customer Review

261 of 268 people found the following review helpful
4.0 out of 5 stars An enjoyable popular science book that needs more depth, May 29, 2011
Verified Purchase
This review is from: The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy (Kindle Edition)
"The Theory That Would Not Die" is an enjoyable account of the history of Bayesian statistics from Thomas Bayes's first idea to the ultimate (near-)triumph of Bayesian methods in modern statistics. As a statistically-oriented researcher and avowed Bayesian myself, I found that the book fills in details about the personalities, battles, and tempestuous history of the concepts.

If you are generally familiar with the concept of Bayes' rule and the fundamental technical debate with frequentist theory, then I can wholeheartedly recommend the book because it will deepen your understanding of the history. The main limitation occurs if you are *not* familiar with the statistical side of the debate but are a general popular science reader: the book refers obliquely to the fundamental problems but does not delve into enough technical depth to communicate the central elements of the debate.

I think McGrayne should have used a chapter very early in the book to illustrate the technical difference between the two theories -- not in terms of mathematics or detailed equations, but in terms of a practical question that would show how the Bayesian approach can answer questions that traditional statistics cannot. In many cases in McGrayne's book, we find assertions that the Bayesian methods yielded better answers in one situation or another, but the underlying intuition about *why* or *how* is missing. The Bayesian literature is full of such examples that could be easily explained.

A good example occurs on p. 1 of ET Jaynes's Probability Theory: I observe someone climbing out a window in the middle of the night, carrying a bag over the shoulder, and running away. Question: is it likely that this person is a burglar? A traditional statistical analysis can give no answer, because no hypothesis can be rejected with observation of only one case. A Bayesian analysis, however, can use prior information (e.g., the prior knowledge that people rarely climb out windows in the middle of the night) to yield both a technically correct answer and one that is obviously in better, common-sense alignment with the kinds of judgments we all make.
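
To make the contrast concrete, here is a rough back-of-the-envelope sketch of the Bayesian side in Python; every number below is invented purely for illustration and is not taken from Jaynes or from the book:

# Rough Bayesian sketch of the burglar example; all numbers are hypothetical.
p_burglar = 0.001            # prior: very few people are burglars at any given moment
p_obs_if_burglar = 0.5       # a fleeing burglar quite plausibly looks like this
p_obs_if_not = 0.0001        # almost nobody else climbs out a window at night with a bag

# Bayes' rule: P(burglar | observation)
numerator = p_obs_if_burglar * p_burglar
evidence = numerator + p_obs_if_not * (1 - p_burglar)
posterior = numerator / evidence
print(f"P(burglar | observation) = {posterior:.2f}")   # about 0.83 with these numbers

Even with a tiny prior probability of anyone being a burglar, the observation is so much more likely under the burglar hypothesis that the posterior lands where common sense does.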

If the present book included a bit more detail to show exactly how this occurs and why the difference arises, I think it would be substantially more powerful for a general audience.

In conclusion: a good and entertaining book, although if you know nothing about the underlying debate, it may leave you wishing for more detail and concrete examples. If you already understand the technical side in some depth and can fill in the missing detail, then it will be purely enjoyable and you will learn much about the back history of the competing approaches to statistics.

Comments

Initial post: Jun 6, 2011 11:56:53 AM PDT
Walter Horn says:
I enjoyed your review very much. Informative and helpful. But I have a question: you write, "I observe someone climbing out a window in the middle of the night carrying a bag over the shoulder and running away. Question: is it likely that this person is a burglar? A traditional statistical analysis can give no answer, because no hypothesis can be rejected with observation of only one case." Why couldn't an infinite number of hypotheses that are inconsistent with our single observation be ruled out? Did you rather mean to say that no hypothesis can be CONFIRMED by a single observation?

Thanks.

Posted on Jun 6, 2011 12:42:07 PM PDT
[Deleted by the author on Jun 6, 2011 12:42:18 PM PDT]

In reply to an earlier post on Jun 7, 2011 6:14:32 PM PDT
Last edited by the author on Jun 7, 2011 9:49:22 PM PDT
Good question, although it requires a long answer! Hypothesis testing is, of course, only one part of stats, although a very important part, and it is the crucial part of the example I suggested.

First, as to "confirming" a theory, that's not usually viewed as possible in either classical or Bayesian statistics. In classical stats, the common move is to talk about "rejecting the null hypothesis", i.e., showing that the data are unlikely to have occurred by "chance" and thus implying that the hypothesis of interest is perhaps more likely. In Bayesian models, one might say that the "evidence for Hypothesis 1 is [some amount] greater than that for Hypothesis 2 [given the prior probabilities and the observed data]".
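
To give a feel for what that comparison looks like in numbers, here is a tiny Python sketch; the priors and likelihoods are made up solely for illustration:

# Hypothetical sketch of comparing two hypotheses by posterior odds.
prior_h1, prior_h2 = 0.3, 0.7      # made-up prior probabilities of H1 and H2
like_h1, like_h2 = 0.08, 0.01      # made-up P(observed data | H1) and P(observed data | H2)

bayes_factor = like_h1 / like_h2                        # 8.0: the data favor H1 eight-fold
posterior_odds = (prior_h1 / prior_h2) * bayes_factor   # prior odds updated by the data
print(f"Bayes factor = {bayes_factor:.1f}, posterior odds H1:H2 = {posterior_odds:.1f}")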

Now, what is the "hypothesis"? In classical stats, that would be a statement about the likelihood of having observed some distribution of data under the condition of the (usually null) hypothesis. An important thing is that it is about a set of data -- a distribution -- and not about a single case. That data is used to reject the "null hypothesis", i.e., the hypothesis that there really is "no difference".

Here's a simplified example. Suppose we hypothesize that "obese people have more heart attacks." The classical null hypothesis would be the negation of that: "obese people do NOT have more heart attacks." Then we collect data from obese and non-obese people. We might observe, say, 20 cardiac incidents ("CIs") among non-obese people and, in an equivalent sample (however one defines that), 100 CIs among obese people. That would probably be enough data (depending on various other assumptions, of course) to REJECT the null hypothesis of no difference, which is the closest classical statistics comes to endorsing the real hypothesis that there IS a difference. In other words, we'd say that the hypothesis "obese people have more CIs" was NOT rejected.
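
For anyone who wants to see one plausible classical version of that test carried out, here is a quick Python sketch using scipy's chi-square test of independence; the 20 and 100 counts come from the example above, but the group size of 1,000 people per group is my own invention:

# Classical test for the heart-attack example; group sizes are hypothetical.
from scipy.stats import chi2_contingency

table = [[100, 900],   # obese:     100 CIs, 900 without
         [ 20, 980]]   # non-obese:  20 CIs, 980 without
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
# A tiny p-value lets us REJECT the null hypothesis of "no difference";
# classical logic stops there and never states how likely the real hypothesis is.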

That's the silliness and convoluted logic that the modern usage of Bayes's theorem goes after. No one is really interested in the roundabout procedure of "failing to reject" a hypothesis -- what we really want to know is "how likely is the hypothesis?"

Back to the burglar. The primary hypothesis there is: "A person carrying stuff out a window at night is probably a burglar." In classical stats, the most direct translation (perhaps) to test this would be:
H1 (real hypothesis): people carrying stuff out windows are burglars
H0 (null hypothesis): people carrying stuff out windows are not burglars <- this is what gets tested

One would then want to test H0 by collecting data: Person 1: I observe Person 1 carrying stuff & he is a burglar. Person 2: carrying stuff & burglar ... and so forth.

Eventually you'd end up with enough data to say that "the joint distribution of carrying stuff & burglar is such that most people carrying stuff are burglars [or technically, the distribution is *not* centered over not-burglars]". The problem is that the confidence in that distribution is very low under classical models with only a single observation. The logic is something like: "OK, so *this* person is indeed a burglar ... but what about all the others? We need more data to get more confidence!"

With Bayes, one can include the prior likelihoods (carrying stuff out the window is very unlikely *unless* one is a burglar) and thus arrive at more sensible conclusions rather more easily in many cases. In classical stats one can't do that; instead, you just have to observe cases over and over until you have enough to draw a conclusion about the distribution.
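
Here is one way to picture that, as a toy calculation; the Beta(18, 2) prior and everything else below are made-up numbers chosen only to illustrate the role of the prior:

# Hypothetical sketch: an informative prior plus a single observation.
# Beta(18, 2) encodes a prior belief that roughly 90% of people seen climbing
# out of windows at night with a bag are burglars.
prior_a, prior_b = 18, 2

# One observation: this particular person did turn out to be a burglar.
post_a, post_b = prior_a + 1, prior_b

posterior_mean = post_a / (post_a + post_b)
print(f"Posterior mean after one case: {posterior_mean:.2f}")   # about 0.90
# The classical estimate from one case alone would be 1/1 = 1.0 with essentially
# no confidence; the prior is what lets a single observation say something sensible.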

There are many other dimensions to the problem and many more ways in which Bayes is helpful -- and perhaps my and Jaynes's burglar example is not the best -- but that's the kind of example and explication that I thought *might* help clarify the underlying logic. I hope that helps!

In reply to an earlier post on Jun 8, 2011 3:37:33 AM PDT
W. Horn says:
Many thanks.

In reply to an earlier post on Jul 15, 2011 12:35:31 PM PDT
Damn good answer and damn good review. Thanks for that. I greatly appreciated reading your comments on this book and your expansion above.

In reply to an earlier post on Jul 15, 2011 5:11:14 PM PDT
Thank you for the kind note! I'm glad it helped, and that I could share some of the observations. Best wishes, -- C

In reply to an earlier post on Oct 30, 2012 6:28:21 AM PDT
Pure classical statisticians would say that you cannot answer the question "is it likely that this person is a burglar?" because you cannot put a probability on something that has already occurred. This particular person either is or is not a burglar. The probability is either 0 or 1. In the same way, classical statisticians would not put a probability on a coin that has already been flipped coming up heads. One of the appealing features of Bayesian statistics is that it DOES put probabilities on lots of interesting questions, like a person being a burglar or a person having cancer.

In reply to an earlier post on Oct 31, 2012 9:19:43 AM PDT
Right! Thanks for giving that translation. It's an important distinction with Bayesian methods. To some extent, Bayesian and frequentist methods can answer the same questions (such as, say, average voter preference) with similar results. But in other cases like this, the Bayesian model is able to say at least *something* (and often quite a lot) where frequentist methods can say nothing.
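
As a tiny illustration of that first point, with invented poll numbers: once there is plenty of data, a weak-prior Bayesian estimate and the frequentist estimate of voter preference are nearly identical:

# Illustrative numbers only: hypothetical poll of 1,000 voters.
yes, n = 520, 1000                    # 520 of 1,000 say they prefer candidate A

freq_estimate = yes / n               # frequentist point estimate: 0.520
bayes_estimate = (yes + 1) / (n + 2)  # posterior mean under a flat Beta(1, 1) prior: also ~0.520
print(freq_estimate, round(bayes_estimate, 3))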

Posted on Aug 23, 2013 8:36:48 AM PDT
Ex-Pat Brit says:
Excellent review! I am in the middle of reading the book and am having mixed feelings. You really captured the essence of my concerns and your suggestions would have turned a good book into an excellent one.
P.S. I have an M.S. in Statistics and Operations Research (although my Ph.D. is in Health Sciences) and have taught college level statistics.

In reply to an earlier post on Aug 23, 2013 9:12:51 AM PDT
Thank you! I appreciate that and am glad to hear that your take was similar. Enjoy the rest of it!