- Paperback: 176 pages
- Publisher: No Starch Press; 1st edition (March 16, 2015)
- Language: English
- ISBN-10: 1593276206
- ISBN-13: 978-1593276201
- Product Dimensions: 5.9 x 0.5 x 8.1 inches
- Shipping Weight: 10.4 ounces
- Average Customer Review: 4.4 out of 5 stars (84 customer reviews)
- Amazon Best Sellers Rank: #85,049 in Books
Statistics Done Wrong: The Woefully Complete Guide 1st Edition
"If you analyze data with any regularity but aren't sure if you're doing it correctly, get this book." -- Nathan Yau, FlowingData
"Of all the books that tackle these issues, Reinhart's is the most succinct, accessible and accurate." -- Tom Siegfried, Science News
"A spotter's guide to arrant nonsense cloaked in mathematical respectability." -- Cory Doctorow, BoingBoing
From the Author
What goes wrong most often in scientific research and data science? Statistics.
Statistical analysis is tricky to get right, even for the best and brightest. You'd be surprised how many pitfalls there are, and how many published papers succumb to them. Here's a sample:
- Statistical power. Many researchers use sample sizes that are too small to detect any noteworthy effects and, failing to detect them, declare they must not exist. Even medical trials often don't have the sample size needed to detect a 50% difference in symptoms. And right turns at red lights are legal only because safety trials had inadequate sample sizes.
- Truth inflation. If your sample size is too small, the only way you'll get a statistically significant result is if you get lucky and overestimate the effect you're looking for. Ever wonder why exciting new wonder drugs never work as well as first promised? Truth inflation.
- The base rate fallacy. If you're screening for a rare event, there are many more opportunities for false positives than false negatives, and so most of your positive results will be false positives. That's important for cancer screening and medical tests, but it's also why surveys on the use of guns for self-defense produce exaggerated results.
- Stopping rules. Why not start with a smaller sample size and increase it as necessary? This is quite common but, unless you're careful, it vastly increases the chances of exaggeration and false positives. Medical trials that stop early exaggerate their results by 30% on average.
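The "truth inflation" and "statistical power" pitfalls above can be illustrated with a short simulation. This is a hypothetical sketch, not an example from the book: it assumes a one-sample z-test with known standard deviation, a true effect of 0.2, and a sample size of 20, then checks what the study concludes when it happens to reach significance.

```python
# Hypothetical illustration of truth inflation: an underpowered study
# (n = 20, true effect = 0.2 SD) rarely reaches significance, and when
# it does, the estimated effect is a large overestimate of the truth.
import random
import math

random.seed(42)

TRUE_EFFECT = 0.2          # true mean, in standard-deviation units
N = 20                     # sample size per simulated study
CRIT_Z = 1.96              # two-sided 5% critical value
SE = 1 / math.sqrt(N)      # standard error of the sample mean

significant_estimates = []
trials = 20_000
for _ in range(trials):
    sample_mean = sum(random.gauss(TRUE_EFFECT, 1) for _ in range(N)) / N
    if abs(sample_mean) / SE > CRIT_Z:   # "statistically significant"
        significant_estimates.append(sample_mean)

power = len(significant_estimates) / trials
avg_sig = sum(significant_estimates) / len(significant_estimates)
print(f"power ~ {power:.2f}")                 # well under 50%
print(f"average significant estimate ~ {avg_sig:.2f}")  # far above 0.2
```

With these numbers the study detects the effect only a small fraction of the time, and the significant runs report an effect more than twice the true size, because only lucky overestimates clear the significance threshold.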
Top Customer Reviews
A few "similar" books come to mind, including (a) the drier "Common errors in statistics" by Phillip Good, (b) the three terrific popular books by Ben Goldacre - "Bad science", "Bad pharma" and "I think you'll find it's a bit more complicated than that" - and (c) the elegant "Understanding the new statistics" by Geoff Cumming. (I have not seen "How to lie with statistics" by Huff and Geis). Reinhart's book is more "big-picture" than Good's, and broader than Goldacre's or Cumming's. (The latter is a perfect "single-issue" book; the former are not specifically about cataloging statistics errors).
Statistical semi-literacy of empirical researchers is a serious problem, and any effort to improve the situation is to be lauded. Alex Reinhart's book - engagingly written, and nicely produced (and fairly cheaply sold) by No Starch Press - is a force for good, and one which can have a material impact.
Reinhart's book helps fill that conceptual gap, and it does so extremely well, with a fresh and inviting writing style. The book discusses the pitfalls of relying on statistical significance, the nuances of power analysis and why everyone should do it, and the dangers of pseudoreplication (even though so many people do it). There is a very clear discussion of the "base rate fallacy" which, for events that are rare anyway, can yield apparently statistically significant results that are nowhere near significant. Furthermore, Reinhart calls out fundamental issues: for example, he notes that many researchers do not have a good grasp of what the p-value (which is used to definitively reject null hypotheses... or not) actually represents. The book also describes regression to the mean, a phenomenon that arises often in claims that certain firms or processes outperform others.
This is an amazing book that can help everyone become a more aware consumer of statistical claims, as well as a more rigorous researcher. It is a short, clear, and easy read, accessible to anyone with an algebra background. The last two chapters provide some actionable advice for improving your own practice as a researcher, and helping improve the practice of others through statistical education. Although the book is subtly disturbing (it now has me wondering how many claims in quality management aren't actually statistically significant), it contains material that every researcher in quality management should embrace.
Statistics, however, was a favorite class of mine.
Most Recent Customer Reviews
A couple of reviews suggested that the writing was subpar -- dry?
Real eye opener.