

Showing 1-10 of 15 reviews (Verified Purchases). See all 23 reviews
on September 15, 2014
This is a thoroughly written book. On one hand I was very glad that it didn't turn out to be a pop-economics type of book. On the other hand, I think the book is largely meant for a more academic audience, so I found it very dense to get through. It describes the problem very clearly, and gives a detailed account of the history behind the statistics, which was interesting.

What I wish the book had, however, was more help for people who want a way out! It spends 90% of its time talking about the problem of statistical significance and the history behind it, but I was already in agreement with them, so I didn't need any convincing.

I was hoping for more guidance on alternative approaches, or at least more detail on Gosset's thinking and ideas. They make vague references to loss functions, power analysis, etc. as much better approaches, but if you don't know very much about those things you're pretty much on your own to read something else.
6 people found this helpful.
on May 29, 2008
Tests of statistical significance are a particular tool which is appropriate in particular situations, basically to prevent you from jumping to conclusions based on too little data. Because this topic lends itself to definite rules which can be mechanically implemented, it has been prominently featured in introductory statistics courses and textbooks for 80 years. But according to the principle "if all you have is a hammer, then everything starts to look like a nail", it has become a ritual requirement for academic papers in fields such as economics, psychology and medicine to include tests of significance. As the book argues at length, this is a misplaced focus; instead of asking "can we be sure beyond reasonable doubt that the size of a certain effect is not zero" one should think about "how can we estimate the size of the effect and its real world significance". A nice touch is the authors' use of the word oomph for "size of effect".
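The distinction the reviewer draws can be illustrated with a small simulation (my sketch, not from the book; the sample size and effect size are arbitrary choices): with a large enough sample, even a practically negligible effect passes a significance test, which is exactly why "is it non-zero?" and "how big is it?" are different questions.

```python
# Illustration: a trivially small effect becomes "statistically
# significant" once the sample is huge -- significance without oomph.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
# Treatment shifts the mean by only 0.01 standard deviations.
control = rng.normal(loc=0.0, scale=1.0, size=n)
treated = rng.normal(loc=0.01, scale=1.0, size=n)

t, p = stats.ttest_ind(treated, control)
effect = treated.mean() - control.mean()
print(f"p-value: {p:.2e}")           # far below 0.05 -> "significant"
print(f"effect size: {effect:.4f}")  # ~0.01 sd -- negligible in practice
```

The p-value screams "reject the null", yet the estimated effect is a hundredth of a standard deviation; judging its real-world importance is a separate, substantive question the test cannot answer.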

Misplaced emphasis on tests of significance is indeed arguably one of the greatest "wrong turns" in twentieth century science. This point is widely accepted amongst academics who use statistics, but perversely the innate conservatism of authors and academic journals causes them to continue a bad tradition. All this makes a great topic for a book, which in the hands of an inspired author like Steven Jay Gould might have become highly influential. The book under review is perfectly correct in its central logical points, and I hope it does succeed in having influence, but to my taste it's handicapped by several stylistic features.

(1) The overall combative style rapidly becomes grating.

(2) A little history -- how did this state of affairs arise? -- is reasonable, but this book has too much, with a curious emphasis on the personalities of the individuals involved, which is just distracting in a book about errors in statistical logic.

(3) The authors don't seem to have thought carefully about their target audience. For a nonspecialist audience, a lighter How to Lie With Statistics style would surely work better. For an academic audience, a more focused [logical point/example of misuse/what authors should have done] format would surely be more effective.

(4) Their analysis of the number of papers making logical errors (e.g. confusing statistical significance with real-world importance) is wonderfully convincing that this problem hasn't yet gone away. But on the point "is this just an academic game being played badly, or does it have harmful real world consequences" they assert the latter but merely give scattered examples, which are not completely convincing. If people fudge data in the traditional paradigm then surely they would fudge data in any alternate paradigm; if one researcher concludes an important real effect is "statistically insignificant" just because they didn't collect enough data, then won't another researcher be able to collect more data and thereby get the credit for proving it important? Ironically, they demonstrate that the harmful real-world effect of the cult is non-zero, but not how large it is...
114 people found this helpful.
on August 2, 2014
I like the authors and generally feel McCloskey's criticisms of the economics profession are both accurate and humorous. I think this book is essentially right about the abuse of statistics in economics and the social sciences more generally, but the point is belabored and the delivery is, very often, unnecessarily pretentious. I do think the application of statistics in the social sciences has vastly improved since the publication of this book; whether this book had anything to do with it is a mystery.
3 people found this helpful.
on January 10, 2013
This is an important book with an important message: worry about the size of an effect, not (just) its statistical significance. Once explained, the idea comes across as very obvious but one that has been missed by whole fields. I wish more would read this book and consider its message before invoking statistics to make major decisions. Certainly something that would have saved a major drug company with which I am familiar. This book will only become more important as data mining and machine learning become more accessible and more interwoven in our lives. Be forewarned and forearmed!
5 people found this helpful.
on April 25, 2011
Every paragraph in this book is filled with simmering outrage, and every point is made at least twenty times. The main text is 250 pages long; 25 pages would have been much better.

The thesis is interesting (and I suppose it might even be important and valuable). But the writing style is so unbearable that I cannot give this book more than 2 stars.
19 people found this helpful.
on September 14, 2012
For me this was a matter of life and death.

My cholesterol numbers were bad, and though I told the doc I wouldn't take a statin, he looked at the chart and clucked, "with LDL this high you're at risk for pancreatitis. You'd better get those numbers down." And he wrote a prescription.

I filled it, and as usual I got the dozen pages of onionskin paper with the pharmacological details. I decided to read them, and I was flabbergasted. Translated into English, this is what it said:

We have done a Big Study, oh yes we have, and we have Numbers: look at them!
And we have analyzed those numbers and we have Conclusions. And we used
Statistics, so you know we must be right.

The first thing you should know is that people who took our drug died at a higher
rate than people who didn't. 11% higher approximately. And if this were a court,
we'd have to say that the drug is guilty on the preponderance of the evidence,
because we figure the odds are about 4 to 1 that the drug did it. But our Statistics
tells us to ignore anything that is not beyond reasonable doubt, and 4-to-1 doesn't
make it, so we think you should ignore the fact that the drug does more harm than good.

What you should focus on is that it lowers your LDL. We showed that beyond reasonable
doubt - the odds are at least 22 to 1. And low LDL is good. So our advice is, take it.

That is what it said.

The question for us all is, How did we come to this? How can the scientific hierarchy, from the FDA down to kindly Doctor Brown, think that it is anything but crazy to take this drug? Am I looking to have "his LDL was low" on my tombstone? Is there no judgement that would say that costing lives is very bad, and that lowering LDL is of no value in itself if it doesn't save lives?

The message of the book is that things are every bit as bad as you might fear.

The authors show how the pursuit of science has been shunted off into a search for "statistical significance" which has nothing whatever to do with scientific significance or importance. They give a pretty good explanation for how things got so messed up.

This book is of the highest possible importance for anyone who uses or teaches statistical inference. But anyone who knows a little statistics should definitely read the book, and anyone who knows a little math might enjoy it.

For it is engagingly written. Think of it, a book on statistical significance! Could anything be more dull? But the book has the pace of a potboiler. There are witty jokes, haikus, appeals to outrage and to laughter. There is a hero (William Gosset, aka Student) and a villain (the evil Sir Ronald Fisher). There are fables and parables.

Most importantly, there is the truth: that science based solely on rejecting the null hypothesis is sterile, unconvincing even to its practitioners, and extremely costly, both in money and in lost time and lives. You will weep when you see how thoroughly the sciences that use statistics (such as economics, psychology, sociology, medicine) have come under the grip of a Statistical-Academic Complex that persists in significance tests because significance tests get you promoted, never mind the real scientific value.

The problem is real. Mistakes are being made daily because of sloppy statistics. And remember that drug that the doc wanted me to take to ward off pancreatitis? In the fine print the study showed that the drug didn't help people with pancreatitis either. Ba-dum.
22 comments | 5 people found this helpful.
on May 14, 2008
The authors do an admirable job of exposing an important issue, but this work only identifies the problem for you and offers no solution. It goes on too long and eventually becomes a platform for the authors to gripe about the injustices that have been served on them in their careers. As we have all been indoctrinated into the "cult of significance" through the education system, it would have been nice for the authors to show us how we could do better. Many times we are asked to work statistics on numbers from disciplines of which we have very little knowledge and experience, so all we can offer is statistical significance, not material significance, and hope that the people we are working with understand the difference. Most do not and are not prepared to bridge the gap. If there are alternative techniques and methods, I am none the wiser; maybe that's my problem.
25 people found this helpful.
on May 15, 2013
The authors have a very valid point, but I feel they make it more complicated than it actually is. Very repetitive.
One person found this helpful.
on December 9, 2008
I know and admire Deirdre McCloskey's work and I am an empirical economist who has to work every day with t and F tests and p-values. So I was quite excited when I read that this particular author had co-authored a book on this particular subject.

Unfortunately, I was quite disappointed. I was expecting either a narrative of errors made in the name of statistical significance or an in-depth analysis of what tests really mean. The authors do neither.

In the first half of the book, they superficially narrate the problems with the Vioxx clinical trials, but tell few other stories of how the standard error "costs jobs, justice and lives." A narrative along the lines of "Normal Accidents", by Charles Perrow, which documents an extensive list of accidents to tell of the perils of complexity, would have made for much better reading. After reading the book, I am none the wiser as to why or how the jobs, justice and lives were lost to statistical significance.

Alternatively, the book could have explained, in terms clear to those who do not work every day with tests, what is meant by the significance and power of a test and what these terms really mean. I have never seen an explanation of these terms that is really clear and sticks in your mind. Unfortunately this was not the case either. The authors mention that statistical significance is more complex than just p-values, affirm that most economists do not understand why, and leave it at that. They confuse more than explain.
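For what it's worth, a rough illustration of the terms in question (my sketch, not the book's; the effect size, sample size, and trial count are arbitrary choices): the "power" of a test is the probability it rejects the null when a real effect of a given size exists, and it can be estimated by simple simulation.

```python
# Estimate the power of a two-sample t-test by simulation:
# the fraction of repeated experiments in which a true effect
# of 0.5 sd is detected at the 0.05 level with n=30 per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
effect, n, alpha, trials = 0.5, 30, 0.05, 2000

rejections = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)      # control group
    b = rng.normal(effect, 1.0, n)   # treated group, true effect present
    _, p = stats.ttest_ind(b, a)
    if p < alpha:
        rejections += 1

power = rejections / trials
print(f"estimated power: {power:.2f}")  # roughly 0.5 for these settings
```

In other words, even with a genuine, moderately sized effect, a study this small misses it about half the time, which is precisely the kind of quantitative thinking a p-value alone never conveys.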

As a final problem, the book takes a good versus evil attitude that is nowhere good science. Gosset is good and Fisher is bad. Please.

In conclusion, while I agree with the authors' main thesis, their book argues it very poorly, very lengthily, and very tediously.
76 people found this helpful.
on January 8, 2011
This book explains, in straightforward English, some of the major weaknesses of statistical analysis, as it is practiced by most scientists these days--at least, most who are being published. The same set of logical and statistical missteps--the "standard error"--are made over and over again in a wide variety of fields. Unfortunately, the standard error is the very heart of what most undergraduates, and more than a few graduate students, take away from their days in higher education. This leaves far too much of our scientific practice mediocre, at best.
5 people found this helpful.