204 of 214 people found the following review helpful
on October 24, 2010
One of the benefits of retiring from my career as a statistician is that I no longer feel it's my personal responsibility to alert friends and colleagues to the myriad ways they are being misled or deceived by the kind of abominably poor summarization of data that's pretty much the norm these days. It's just as well - who wants to be *that guy*, the crank at the table who people start to inch away from surreptitiously, avoiding eye contact all the while?

Not that I endorse misleading or deceptive data presentation - far from it. Now more than ever, as we all struggle to make sense of the avalanche of information that constantly assails us, the capacity for critical, intelligent interpretation is vital. So it's important to be able to see through the most prevalent fallacies in data interpretation, not to mention data presentation strategies deliberately intended to mislead. This latest book by Charles Seife has the laudable goal of educating the reader about some of the most common types of statistical malpractice out there, continuing a tradition established by such authors as Darrell Huff ("How to Lie With Statistics"), John Paulos ("Innumeracy"), Edward Tufte, and the authors of last year's highly successful "The Numbers Game" (Michael Blastland and Andrew Dilnot).

Unfortunately, though "Proofiness" is a well-intentioned book, it suffers from a fundamental crisis of identity. There is a major gap between what "Proofiness" promises and what Seife actually delivers. The first hundred pages cover roughly what one might expect: graphical deception by use of misleading labels or scales, comparison of apples and oranges (e.g. dollar amounts unadjusted for inflation, absence of an appropriate control group, regression to the mean), cherry-picking of data, the tendency to interpret mere random variation as systematic, nonsensical conclusions obtained by extrapolating beyond the range of observed data, overstatement of the precision of measurements, the way in which humans are hard-wired to misinterpret risk and deal poorly with calculations involving risk. Seife's exposition of these topics is lively and clear (with the major caveat discussed below). About halfway through the chapter on risk, however, he makes a major detour. His discussion of the malfeasance of those involved in the Enron debacle, the Bernie Madoff Ponzi scheme, the failures at AIG, Citigroup and other institutions, and the subsequent bailout efforts has almost nothing to do with statistical trickery, focusing instead on the public policy and regulatory issues raised by the financial meltdown.

The next chapter, "Poll Cats", does return to the issues involved in conducting accurate sample surveys and presenting the data appropriately, with a reasonably clear discussion of systematic error versus random error. However, the following two chapters, "Electile Dysfunction" and "An Unfair Vote", taking up some 80 pages, really have little to do with data-related issues. Instead they provide a review of events surrounding the Florida vote count in the 2000 presidential election, the six-month circus that took place before Al Franken was eventually declared the winner in the 2008 Minnesota Senate race, and a review of historical and present-day gerrymandering efforts whenever congressional redistricting comes up for discussion. Not that Seife's review of the relevant events, and the issues they raise, is not interesting - but it is largely editorial comment on political events and, as such, it seems to belong in a different book, as does the appendix in which he discusses electronic voting. In making this criticism, I take the view that fraud, malfeasance and corruption stemming from poor public policy, faulty regulatory mechanisms, or inadequate enforcement of existing protections, really are subjects for a different kind of book than that initially described by Seife. Though the author does return to his initial remit in the final two chapters (discussing abuse of probability and statistical arguments within the judicial system, and for propaganda purposes), overall the book does not make a coherent whole.

Then there's the caveat mentioned above, regarding Seife's exposition methods, which turns out to be a serious one, enough to prevent me from giving this book my endorsement, despite its good intentions. It's the author's predilection for coining cutesy neologisms that not only add nothing to the discussion, but actually end up seriously muddying the exposition. It's evident right there in the book's faux-cute title, "Proofiness". I wish I could say that the author offers a rigorous definition of exactly what he means by this invented term, but he doesn't. It remains unhelpfully vague throughout the book. Sadly, it's not the only example of authorial neologism run amok. "Disestimation", "Potemkin numbers", "randumbness", "regression to the moon", and the horrendous coinage "causuistry" - each of these suffers from the same problem. Many of them lack a clear definition, or when a definition is offered, the term just seems to muddy the waters. For instance, Seife uses "disestimation" to mean "overstatement of the precision of a number or measurement", indicating an error based in randomness. But the 'dis'-prefix clearly suggests a systematic error, as does the parallelism with "misestimation", a term which statisticians routinely use to indicate a systematic error. And while one applauds the author's efforts to educate his readership about the error of mistaking correlation for causation, the term "causuistry" is simply an abomination. I'm not sure where this recent trend for authors to invent their own faux-cutesy terminology, where none is needed, originates (possibly Malcolm Gladwell bears some of the responsibility), but it needs to stop.

Though I am sympathetic to the author's stated aims, his execution was such that I cannot endorse this book. Anyone interested in this important topic would be far better served by reading The Numbers Game: The Commonsense Guide to Understanding Numbers in the News, in Politics, and in Life by Michael Blastland and Andrew Dilnot.
92 of 102 people found the following review helpful
VINE VOICE on September 24, 2010
In Proofiness, science journalist and NYU journalism professor Charles Seife decries the tactic of using numbers to lie. Not just using numbers to bolster one's argument. But, in his words, to use fake numbers to prove falsehoods. To use bogus mathematical arguments to prove something that we know in our heart is true - even when it's not.

Seife does not just condemn proofiness as a mistake in logic. He thinks that numbers have a mystical power. That phony numbers have the appearance of absolute truth, of pure objective fact. So we can, and do, wrongly use them to prejudice people.

Proofiness, Seife believes, is the raw material that arms partisans to fight off the assault of knowledge. To clothe irrationality in the garb of the rational and the scientific. So, he says, proofiness is a dark art of deception.

That makes, Seife believes, proofiness one of the biggest problems we face. He says our society is awash in proofiness. Using a few powerful techniques, thousands of people are crafting mathematical falsehoods to get us to swallow untruths. In fact, proofiness is destroying our democracy by deception.

Seife makes some good arguments. And Proofiness is well-written and provokes thought. But does he show that proofiness is a danger to democracy? That proofiness is at the root of many of the problems we face today? In my opinion, not hardly. On this point, Proofiness needs a little more proof.

Take the example that Seife uses to lead off the book. In February 1950, Senator Joseph McCarthy said in a speech in Wheeling, West Virginia that: "I have here in my hand a list of 205 . . . a list of names that were made known to the Secretary of State as being members of the Communist Party and who nevertheless are still working and shaping policy in the State Department."

Seife claims that McCarthy's use of the number 205 was a jolt of electricity that shocked Washington into action against communist infiltrators. He says that the very fact that McCarthy attached a number to his accusations imbued them with an aura of truth. The numbers gave McCarthy's accusations heft; they were too substantial, too specific, to ignore.

Really? So if McCarthy had left out the number 205 (which it appears from the quote that he almost did), and just said he had in his hand a list of names, then McCarthy's claims would not have had the attention they got?

I don't think so. Number or not, McCarthy's rhetorical device was powerful - "I have here in my hand a list of names." And that speech was just one part of the complex historical picture of McCarthyism. McCarthy did not need to use the dark art of proofiness to do what he did.

While Seife's focus on dark arts and deception seems overblown, Proofiness did make me think about how to weigh purported proof of complex issues. He gives some examples of how people use numbers to deceive:

-- Falsifying numbers (This is what Seife claims Joseph McCarthy did. McCarthy said he had 205 names. Then later it was 57 names. Then 81. Seife claims that McCarthy had no names. Not a single one.)

-- Comparing apples with oranges

-- Cherry-picking data

-- Apple polishing (Giving technically correct, but deliberately misleading, numbers.)

-- Potemkin numbers (These are phony statistics based on wrong or nonexistent calculations.)

-- Disestimation (Giving too much meaning to a measurement, and not qualifying it enough.)

Seife's analysis is clever, and his examples well chosen. Still, I'm not sure that he breaks much new ground here. After all, many have long warned us to watch out when someone cites numbers to prove a point. Even Homer Simpson knows that "people can come up with statistics to prove anything." And the proverb "lies, damn lies, and statistics" has been around for at least a century.

So while I enjoyed Proofiness, I would have liked Seife to plow more new ground on some issues that he only touches on. For example:

-- What do you do when things by their nature cannot really be proven? Do humans cause the earth's climate to change? Did the $787 billion stimulus help? How can you prove that you are correct on these critical issues, no matter which answer you choose? If you cannot prove you are correct, what do you do? Nothing? Or should you rely on what Stephen Colbert derided as "truthiness" (the inspiration for Seife's title Proofiness): "the truth that comes from the gut, not books"?

-- Seife focuses on how others deceive us with numbers. But human beings are notoriously susceptible to self-deception. How can we avoid the trap Paul Simon warns us about in song: "All lies and jest. Still, a man hears what he wants to hear and disregards the rest."?

-- We humans find it hard to connect cause with effect. We see relationships that are not there. There's nothing deceptive about this. It's just human nature. Even the smartest among us fall prey to this, as seen by two-time Nobel prizewinner Linus Pauling and his strong but apparently mistaken beliefs about vitamin C. To avoid this problem, should we abandon faith, ignore our guts and only believe things that have been proven?

In short, Proofiness is a book worth reading. Agree with him or disagree, Seife will make you think, and that makes the book an important one. But Proofiness could have been better.
54 of 63 people found the following review helpful
I think I was expecting something more in the spirit of "How to Lie with Statistics", the small 1954 book by Darrell Huff. In other words, I was hoping to find some techniques to improve critical thinking and show how mathematics can be used both to deceive or to uncover fraud. Generally the book focuses on the former. In the process, it glosses over the concept of margins of error in polls, without explaining standard deviations, confidence intervals, or how the margin of error might depend on the results of the poll.

The invention of cute new terms like Potemkin numbers, disestimation, and causuistry was rather awkward. The confusion of casuistry and causuistry was rather perplexing. It would have been more appropriate to discuss Granger Causality tests for example. Perhaps some discussion of techniques for improving polling results for sensitive questions like those presented in Daniel Corstange's article, "Sensitive Questions, Truthful Answers", which recently won the Warren Miller Prize awarded by one of the top Political Science journals, would have been useful.
I often felt as though I was being subjected to a passionate speech to the crowd, urging us to "nuke" all the numbers. Emotional appeals are at least as suspect as those adorned with numbers.
While I thoroughly enjoyed the author's book "Zero: The Biography of a Dangerous Idea", "Proofiness" left much to be desired.
54 of 64 people found the following review helpful
on September 29, 2010
This is just my 30,000-foot view having completed the book last night, and I don't want to get into a point-by-point review. Although the product description puts it in the same realm as Freakonomics and the Gladwell books, don't expect it to live up to that comparison. I wouldn't say I'm any better prepared to recognize/counter deceptions of the mathematical variety having read the book. It seems like a handful of situations or anecdotes formed the idea for the book, but it just didn't seem to cover the topic enough (granted, I bought the book wanting to learn more about the topic, but even not knowing what I don't know about the subject, there must be more to it). Additionally, if it were concisely written, it'd probably be about a 100-page book; I felt like every paragraph or topic was stretched to fill space. For example, nearly a third of the book covered the "election" topic, including chapters regarding Bush's 2000 win and the recent Franken/Coleman battle. Lots of space used, but little in the form of "mathematical deception" - more like a short history/commentary of the debacle(s). Got it, it was a farce, move on.
A couple of good definitions from the front portion (Potemkin numbers, disestimation) were sprinkled into later sections to attempt to give common threads, but again, I was expecting a few more deception tactics/techniques/examples to surface and be analyzed in later chapters.
42 of 51 people found the following review helpful
on November 14, 2010
This book is easy to read and engaging. As is the custom with journalist pop-sci efforts, the text is written in a breezy, humorous style. Seife has a gift for memorable neologisms: "proofiness", or "causuistry" (for the conflation of correlation and cause).

Unfortunately, the book is full of inaccuracies, some minor, some not so minor. There are far too many to list here, but a large chunk of what Seife writes is sloppy, misleading, or wrong. I'll give a brief sampling below.

(1) A good illustration of how sloppy Seife is with the facts can be found in his discussion of the O.J. Simpson trial (p. 201 ff.). Seife claims that Dershowitz helped acquit O.J. Simpson by making use of a logical error called the "prosecutor's fallacy" (which, by the way, Seife also misdefines). It's a great story, and enlivened by Seife's tabloid prose: "for a defense attorney, a specious argument might be just the thing to get a client off"; "the jury was apparently fooled"; "a bogus argument that helped put a likely murderer back on the streets."

You won't find phrases like this in a textbook on statistics, and it's this user-friendly style that makes books like this so popular to laymen.

According to Seife, Dershowitz misled the jury about the significance of O.J. Simpson's previous abuse of his ex-wife: "Dershowitz, however, turned that piece of evidence completely upside down thanks to some phony probabilities. He convinced the jury that the battery made it incredibly improbable that Simpson murdered his ex-wife; after all, only one in a thousand wife-beaters winds up murdering his spouse! ...The jury was apparently fooled."

It's a memorable story. It resonates with many people's dislike of the O.J. verdict. But it's false.

Dershowitz never made any kind of statistical argument to the jury about spouse abuse. (Dershowitz is an appellate lawyer; he doesn't normally make jury arguments.) The jury never heard from O.J.'s defense team about any kind of statistical argument like this either. And for that matter, Dershowitz apparently never made this argument at all! The whole story was concocted out of thin air by Seife.

Well, perhaps "out of thin air" is too strong. I tried to trace down where Seife could have come up with such a wild theory.

The simplest thing would be for Seife to cite the place where Dershowitz made the argument that Seife objects to.

Seife doesn't do that. Indeed, Seife's lack of careful citation and sourcing is a major symptom of the sloppiness of the book. Instead, he cites an article by John Allen Paulos claiming that "during the trial" Dershowitz made a statement that the abuse evidence was "irrelevant" for statistical reasons. Somehow this statement by Paulos was morphed and exaggerated into Seife's claim, and became the basis for the histrionics I quoted above. Paulos' language that Dershowitz claimed evidence was "irrelevant" morphed into a claim that Dershowitz was arguing the evidence was "exculpatory". And Seife just decided for some reason that if Dershowitz made the claim during the trial, he must have made it to the jury.

Now, abuse evidence not being RELEVANT, which is what Paulos claimed Dershowitz said, and the abuse evidence being EXCULPATORY, which is what Seife claimed Dershowitz argued, are completely different. And it's a very different thing to make an argument in a motion to a judge about relevance than to try to hoodwink a jury about guilt.

To fully understand how ridiculous Seife's theory is, one has to know a bit about evidence law.

In American criminal trials, before the jury hears evidence, the defense can try to keep the evidence from the jury by a "motion in limine", a request to withhold evidence. This is typically done if the evidence is highly inflammatory, irrelevant, unreliable, or violates some other rule of evidence.

Dershowitz's position to Judge Ito during the O.J. case was that if too much abuse evidence was presented, the jury might convict O.J. as punishment for the prior abuse, not for the crime he was charged with.

So, it would have been absurd for Dershowitz to try and argue that the evidence makes it "incredibly improbable" that O.J. was guilty, because that would be the opposite of what Dershowitz wanted to show - he wanted to show the evidence was not legally relevant. If the evidence had been exculpatory it would have been highly legally relevant.

Although it's not cited by Seife or by the source Seife cites, you can actually read Dershowitz's argument that the abuse evidence, or at least some of it, should have been excluded, here on Amazon: it's online on page 104 of "Reasonable Doubts: The O.J. Simpson Case". Whether Dershowitz's argument is right or wrong, the danger that Dershowitz and the law are trying to avoid (codified in the California analogues of Federal Rules of Evidence 401, 403 and 404) is presenting so much bad character evidence about a defendant that the jury comes to hate the defendant and convicts him because of those prior bad acts, rather than because of the defendant's guilt.

Seife then gets even more confused by trying to say that the Dershowitz argument, which Dershowitz never actually made, of course, was an example of the "prosecutor's fallacy" (p. 259). Seife completely misdefines the prosecutor's fallacy in general, but even Seife's imaginary version of Dershowitz's argument would not be an instance of a prosecutor's fallacy, which applies to certain invalid Bayesian analyses that misleadingly *inculpate* a defendant. Had Dershowitz made the argument Seife claimed he did, because it misleadingly tended to *exculpate*, it would have been an instance of the "defense attorney's fallacy" (see e.g., Thompson & Schumann, Interpretation of Statistical Evidence in Criminal Trials: The Prosecutor's Fallacy and the Defense Attorney's Fallacy, 11 Law & Hum. Behav. 167 (1987)).
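For readers who want the standard formulation rather than Seife's, here is my own paraphrase of the statistics-and-law literature (not a quotation from either book): the prosecutor's fallacy is reading a likelihood as a posterior, i.e. treating

    P(evidence | defendant innocent), which may be tiny,

as if it were

    P(defendant innocent | evidence), which by Bayes' rule also depends on the prior:
    P(innocent | evidence) = P(evidence | innocent) x P(innocent) / P(evidence).

The defense attorney's fallacy runs the other way: taking genuinely inculpatory evidence and diluting it by spreading it across everyone in the population who could conceivably match.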

Thus, Seife's story is incendiary but inaccurate. The ludicrous argument Dershowitz supposedly made - that the abuse evidence made guilt incredibly improbable - was never made. The jury never heard any similar statistical argument about abuse frequencies. So it could not have been misled by one. And to top it off, Seife calls the imaginary argument by the wrong name. Seife whips up all this indignation about Dershowitz and slimy defense attorneys based, as far as I can tell, either on nothing at all or on misreading an article *about* Dershowitz.

(2) The author claims "when a scientist says that a dinosaur skeleton is sixty-five million years old, it's a signal that the number is a fairly rough approximation; the measurement error is on the order of tens or hundreds of thousands of years." (p. 22). Well, no. It is not a signal about the "measurement error" at all. Seife is confusing precision and measurement error. The measurement error (by which Seife seems to mean error in the techniques used in establishing the age) might be a few years; or a few million years; or tens of millions of years. This is different from the meaning of "sixty-five million years", which *linguistically* indicates an approximation to the nearest million or to the nearest five million years. "65 million years" alone does not tell you the error is "on the order of tens of thousands of years", whatever the measurement error actually is. The scientist might in theory know the age is 65,310,011 years and 3 months, but when he says "65 million" he is indicating only that the phrase "65 million" does not carry such precision - a far coarser statement than "accurate to tens of thousands of years". Or the error might be much greater, and the scientist is just presenting the mean of the distribution of probable values. You just cannot infer anything about the "measurement error" from the single number presented, as Seife claims you can.
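To make the distinction concrete, here is a rough sketch in Python; every figure in it is invented for illustration and none comes from the book:

import datetime  # not needed; standard library only below

# Stated precision (what the wording "65 million years" implies) vs. measurement error
# (what the dating method can actually resolve). All numbers are made up.
stated_age = 65_000_000          # "sixty-five million years", as spoken
wording_half_width = 500_000     # the phrase only pins the value down to roughly +/- half a million

# The method's uncertainty is a separate quantity; it could be far smaller or far larger.
method_errors = {"precise radiometric date": 30_000,
                 "rough stratigraphic estimate": 2_000_000}

for label, err in method_errors.items():
    print(f"{label}: stated {stated_age:,} "
          f"(+/- {wording_half_width:,} from wording alone, +/- {err:,} from the method)")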

(3) Seife claims that the animations of the world's coastlines disappearing that were presented in the movie "An Inconvenient Truth" are an example of "cherry-picking", because those animations only apply if melting ice raises sea levels twenty feet (p. 27). The author claims this amount of sea-level increase is unlikely, or is unsupported by most studies, and thus the animations in the movie are a misleading instance of "cherry-picking."

"Cherry-picking" as the term is usually used, is the selection of unrepresentative data to confirm a particular hypothesis. Assuming Seife's comments about the true likely sea level rise are correct and the movie's animation is not, then the animation would not be an instance of selecting any particular "data". Choosing one study or a worst-case scenario is not "cherry-picking" as ordinarily understood.

(4) The author's attempted definition of margin of error (beginning "a number larger than the imprecision caused by randomness 95 percent of the time", p. 101) is not necessarily wrong; it's just so incomprehensibly presented as to be meaningless. The danger here is that lay readers will be fooled by the avalanche of verbiage into believing they understand margin of error when they do not.
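For what it's worth, the textbook version of the quantity Seife is groping at fits in a couple of lines. A minimal sketch (the 1.96 and the worst-case p = 0.5 are standard conventions, not anything taken from the book):

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate 95% margin of error for a proportion from a simple random sample.
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1000, 2500):
    print(f"sample size {n}: roughly +/- {margin_of_error(n):.1%}")
# prints roughly +/- 4.9%, +/- 3.1%, and +/- 2.0% respectively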

(5) Seife has a series of criticisms of Scalia where he suggests Scalia is being proofy, but where the text actually suggests simply that Scalia's legal interpretation - not statistical interpretation - differs from Seife's. For instance, Seife criticizes Scalia's concurring opinion in Dept. of Commerce v. United States House of Representatives (1999) in which Scalia opined that the "actual enumeration" clause in the constitution makes the constitutionality of statistical sampling for the census doubtful.

But Scalia's argument, as excerpted in the book, is not that statistical sampling is wise, or unwise, or accurate, or inaccurate, which would be relevant to a proofiness claim. His argument is only that the phrase in the Constitution, "actual enumeration", requires that people be (1) actually (2) enumerated, and not statistically sampled. Whether actual enumeration is itself error-prone (as Seife argues) or whether statistical sampling is more accurate is not the issue Scalia's argument addresses. (A more detailed analysis of the meaning of "actual enumeration", which Seife ignores, is in Utah v. Evans (2002) (Thomas, J.)).

Seife criticizes Scalia's lexical analysis of "actual enumeration" partly because in an earlier draft of the Constitution those words were not present (p. 193). Seife believes this shows the words were merely chosen for style, and actually mean "census". However, Scalia's reliance on the wording of the final drafts of the Constitution or of statutes is a longstanding feature of his interpretive philosophy (see, e.g., Rossum, Antonin Scalia's Jurisprudence: Text and Tradition), one whose merits or weaknesses seem rather far outside the scope of a book on statistics. Scalia, for instance, believes that the final draft of the Constitution takes precedence over former drafts because it's that draft which was actually voted on and ratified. Whether Scalia or Seife is correct about the weight given to a prior draft of the constitution is simply not related to the purported benefits of a statistical sampling census. It makes no sense to accuse Scalia of proofiness for this.

There are a number of other legal discussions in the book, but throughout Seife's treatment tends to conflate what he believes is the most efficient or right policy with what the law provides. As in the O.J. example, Seife tends to oversimplify the legal issues and to use hyperbolic, almost Manichean language that is unsupported by the actual arguments.

Some much better popular books on proofiness are:

- A very good recent book on misuse of statistics is Sam Savage's "The Flaw of Averages". This book focuses on one "proofiness" fallacy that Seife touches on, using a number for a distribution, but it also discusses other important fallacies.

- Ziliak's "The Cult of Statistical Significance" is a reasonable book discussing a common "proofiness" argument that Seife does not address: treating statistical significance as real significance.

- De Groot and Fienberg, "Statistics and the Law" is a careful, clear introduction to some of the legal concepts that Seife mangles.

Also, for a more leisurely introduction to evidence law and the O.J. trial than I tried to pack into a paragraph above, see Gerald Uelmen's "The O.J. Files: Evidentiary Issues in a Tactical Context", which was written by the defense team's evidence expert.
11 of 12 people found the following review helpful
on October 12, 2010
Charles Seife's book is one of many on the general subject of bogus statistics. While it is easy to read, and entertains the reader with lots of hot-button topics (the death penalty, presidential election recounts), I do have some reservations about it:

(1) It is very, very lightweight on the math side; that's a plus for readers who aren't mathematically inclined, but on the minus side, almost all of the examples in the book are seriously under-explained.

(2) The book's language is intemperate throughout, with politicians and lawyers singled out for particular abuse. While some readers may enjoy the bashing of these two unpopular occupations, I found it distracting.

(3) The book's focus on political examples naturally leads to conclusions that some readers will disagree with in a political sense. This is another distraction that makes it harder for the author to just put across the math.

I'm not criticizing Seife for his political passion; he's the author, and he's entitled. But I do think, having read the book through, that the indignation sometimes gets in the way of the explanation.
6 of 6 people found the following review helpful
on October 22, 2010
I picked this up because I was looking for something I could use in the classroom that would help undergraduates move past the habit of uncritically accepting any proffered number as "proof" and to potentially increase their motivation to take statistics classes, if not other math classes. I have not yet put the book to the test, but I look forward to using it.

Things this book does well:
1) it takes the aphorism "lies, damn lies and statistics" and enriches it with contemporary examples.
2) it offers explanations of things like the mortgage crisis and census practices that are great launching pads for further discussion and more reading.
3) it presents a relevance angle on the study of mathematics that will be particularly appealing to students in non-math centered fields. Too often, students' math phobia is encouraged by teachers and professors in humanistic fields who seem to hold math-ignorance as a virtue. Seife's book brings home how such math-ignorance leaves people susceptible to manipulation and disenfranchisement in ways that might catch the attention of someone who thinks math is only for the finance and pre-med majors.

Things this book does less well but can still be used to prompt more inquiry:
1) demystify mathematics - while Seife's examples and arguments are easy enough to follow (he's a clear writer), I'm less sold that someone having read his book could work through the same logic when confronted with a number. The appendices help, but how many readers realistically will look at the appendices?
2) encourage more math study - I think Seife's emphasis on the deceptive possibilities of math might leave some readers throwing up their hands and deciding all math is evil. Even though Seife ends with the message that the solution to proofiness is more math education, I'm not sure every reader will get to that message.

One thing this book doesn't do nearly as well as I hoped is sort out which kinds of questions lend themselves to quantitative analysis and which ones can be approached with qualitative analysis. Seife discusses the problem of subjective questions (like rating pain on a scale of 1-10 or converting happiness to a number scale), but there's a hint of "if it can't be mathematically determined, we should give up researching it" that I take issue with.

The nice thing about using this as a teaching text is that I have a chance to supplement it. I can use the examples provided to introduce scholarly articles on the same topics, and since students will have a big picture idea of what they are looking at, they'll be more able to make sense of what they're reading, even with the issue of technical language and jargon. I can also reinforce the value and need for math education, and I can present them with qualitative methods and make it clear to them when a qualitative approach is preferable to a quantitative approach. Hence, I think that "Proofiness" will be a very useful book and I'm glad to have read it.
5 of 5 people found the following review helpful
on October 30, 2010
Charles Seife's Proofiness is an accessible and entertaining look at the many ways numbers can be used (more to the point, abused) in order to win an argument. Seife spends the early part of the book outlining his typology for numerical abuses. For instance, "disestimation" is the act of taking a number too literally, understating or ignoring the uncertainty that surrounds it. This is often done when some kind of data is presented without taking into account that its calculation contains a great deal of measurement error (think of polling or the US Census). Seife also shows how visualization can be used to manipulate the meaning of data--what he terms "apple-polishing". A classic example is portraying longitudinal data in a graph where the y-axis is truncated instead of starting at zero. Even a small change over time will be magnified by such a presentation.
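A quick way to see the apple-polishing effect for yourself is to plot the same (invented) numbers twice, once with a truncated y-axis and once starting from zero. A rough Python sketch, with made-up values:

import matplotlib.pyplot as plt

weeks = [0, 1, 2, 3, 4]
values = [100.0, 99.8, 99.5, 99.3, 99.1]   # invented longitudinal data, ~1% total change

fig, (ax_trunc, ax_full) = plt.subplots(1, 2, figsize=(8, 3))
for ax, ylim, title in ((ax_trunc, (98, 100.5), "truncated axis"),
                        (ax_full, (0, 105), "axis starting at zero")):
    ax.bar(weeks, values)
    ax.set_ylim(*ylim)
    ax.set_title(title)
    ax.set_xlabel("week")
plt.tight_layout()
plt.show()   # the same ~1% drop looks dramatic on the left, negligible on the right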

The book is packed with great examples. However, Seife spends a bit too much time on some cases. More variety would have made the book better. Seife also tends to focus on the intentional manipulation of data while ignoring the unintentional instances. There is no doubt that people use many of the tricks he describes to bend data to their advantage, but often times misleading data is the result of people simply making bad calculations rather than purposeful manipulation. Additionally, Seife's suggestion as to how to combat proofiness, mathematical sophistication, doesn't seem capable of solving the problem on its own. While I agree that the public could benefit from a more robust understanding of numbers and their manipulation, Seife basically ignores the issue of perceptual bias. Even the most sophisticated consumers of data are subject to fundamental perceptual biases. Given that we are "predictably irrational", to quote Dan Ariely, any solution must also take into account that we are hardwired in many ways to be manipulated by proofiness.

The book is not for deep subject matter experts in mathematics or statistics, but it is a fantastic primer for the lay person.
26 of 34 people found the following review helpful
on September 24, 2010
It's funny - I read the book over a couple of visits to the bookstores this week. I picked it up because the idea intrigued me: how using numbers can bolster an argument, even if the argument has no basis in reality (or perhaps "is not supported by reality" is a better way of putting it). It starts with Sen. Joe McCarthy and his infamous list of communists at the State Department, and how his saying he had 'a list with 57 names of people in the State Department...' sounded more 'truthful' than if he had said something like 'here is a list of names' or 'I have on good authority a number of employees...', by the simple fact that he used a specific-sounding number. This illustrates a tendency for people to accept data if it sounds exact over data that is rounded off. After all, the list didn't have 50 or even 60 names on it, it had 57, and for him to use that number there must be a reason, and the reason is that it is true. Not quite: you can lie with numbers, and it may be more effective than doing it with words. Many consider mathematics a foreign language, and if one doesn't understand the language one must trust that the translation - or in this case the interpretation - is correct.

Wednesday I read the first half of the book, and I liked it. How numbers lie - cherry picking, apple polishing, Potemkin numbers - and how graphs lie were all covered. I actually laughed out loud when I read the example of how runners at the Olympics have improved their times: if one simply looked at the numbers and the graph they made, by 2200 female 100 yard dash times would be supersonic, and eventually runners would be faster than light and actually finish the race before they started. This was the result of seeing women's times improve over the last 20-40 years and logically projecting the raw data out into the future, when one should recognize that the times started from a smaller base of female athletes and that, as more women participate in the sport, the line flattens out.
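The joke is easy to reproduce. A back-of-the-envelope Python sketch with invented times (not the book's data):

import numpy as np

# Invented winning times (seconds) for a women's sprint, for illustration only.
years = np.array([1970, 1980, 1990, 2000, 2010])
times = np.array([11.1, 10.9, 10.8, 10.8, 10.7])

slope, intercept = np.polyfit(years, times, 1)   # naive straight-line fit
for future in (2100, 3000, 3500):
    print(future, round(slope * future + intercept, 2))
# With these made-up numbers the "predicted" time crosses zero a little after the year 3000
# and eventually goes negative - the runner finishes before starting, which is exactly the
# absurdity that comes from extrapolating far beyond the observed data.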

Another was the Quaker Oats ad that said oatmeal would lower your cholesterol over 4 weeks, with statistical proof shown by a bar graph. The only problem was that the base of the graph was at 98% and not at 0%, and the very minor changes apparent on the Quaker Oats graph basically disappeared when looked at on a scale that went from 0 to 100 rather than 98 to 100. The ad was pulled, but the thinking behind it is still with us: Cheerios was told to stop making essentially the same claim in an ad earlier this year. What made this stick with me even more was a misuse of graphs on Fox News earlier this week (Sept 2010). Someone had written in and wanted to know how much federal employment had increased over the last 5 years compared to job gains/losses in the private sector. The graphs showed an apparent huge spike in federal employment over the last two years and a huge drop in private sector jobs at the same time. But even though the graphs were drawn to look the same, they were total bull.... The horizontal scale of years was the same, but the vertical scales were nowhere near the same, though drawn identically. Each gradient on the federal graph looked to be maybe 20,000 workers, with the huge increase numbering 120,000 workers. Okay. Now let's take a look at the private sector one. There each gradient was maybe a million, so the job losses in 2008 and 2009 appeared as large as the federal employment gains on the other graph; glance at the two without thinking and it looks like the federal payroll grew by about as much as the private payroll shrank. But think about it: a gradient of 20k is only 2% of the one on the private-sector graph, and drawn to the same scale the federal increase would be imperceptible. Then again, drawn the misleading way it made the political point someone was trying to make. The talking head even noted that the data came from the Bureau of Labor Statistics to give the graphs an air of legitimacy - 'proofiness' at its worst.

There are also examples of the risk taking that went on in the financial markets in the 2000's, but considering I still don't understand exactly how derivatives and some of the other tricks people played on the financial markets worked - other than that they didn't pass the smell test with me - I am going to spare you any attempt of my own to explain them, other than mentioning that Seife does cover them.

The one explanation that really got me was a closer look at the 'Laffer Curve'. This is the economic theory that as tax rates increase past a certain point, the revenue those taxes bring in eventually drops toward nothing. Now, being on the right side of the political spectrum - slightly - I have heard of and been familiar with this theory and the curve for many years. However, I had never seen the data plotted onto the curve that he used to make it. When I did, I was dumbfounded. In statistics I remember being taught that if you had data that fell way outside the norm, you ignored that data as an anomaly. Here it was data from Norway, one of the smallest countries in his graph and one that standard statistical tools would flag as an outlier, sitting well outside any ability to explain why it was where it was. So normally I would have ignored it when drawing my graph in Economics at school, but here he includes it, and the graph goes from a fairly flat, upward-sloping line to a bizarre bell curve, or the ballistics of a mortar shell. If I had turned in that Laffer Curve in any of my economics or statistics classes I think I would have been failed on that assignment.
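The effect a single Norway-style point can have is easy to reproduce in miniature. A small Python sketch with invented data (these are not Seife's numbers):

import numpy as np

# Invented (tax rate %, revenue) points that rise gently, plus one wild outlier.
rates   = np.array([10, 20, 30, 40, 50, 60])
revenue = np.array([ 5,  9, 13, 16, 18, 20])
outlier_rate, outlier_rev = 70, 2            # one extreme point, a la the Norway example

fit_without = np.polyfit(rates, revenue, 2)
fit_with    = np.polyfit(np.append(rates, outlier_rate),
                         np.append(revenue, outlier_rev), 2)

print("quadratic coefficients without the outlier:", np.round(fit_without, 4))
print("quadratic coefficients with the outlier:   ", np.round(fit_with, 4))
# A single point is enough to bend a gently rising trend into a rise-and-crash hump,
# which is the standard statistics-class argument for examining outliers before fitting.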

Looking up some info, though - such as a couple of wiki-type articles on the subject - suggests that perhaps Mr. Seife used some apple polishing of his own in choosing the representation of the Laffer Curve that the book uses as its example. Perhaps that is a good place to transition to the second half of the book, where politics intrude on the beauty of impure mathematics.

Now if I had stopped reading there I probably would have been impressed enough to buy the book - but then I read the second half, and I got less impressed. The first half covers mankind's ability to see patterns even when none exist and our attempts to make sense of what we see, and how we can be misled by those who know how to exploit that weakness. The second half of the book almost felt like a separate book; it dealt with politics, polling, gerrymandering and recounts.

The second half starts with a discussion about polls and how nearly every one can be manipulated and thus rendered useless - except on a slow news day when editors and reporters have nothing better to fill dead air and blank space with. Sampling errors are discussed - where the people who respond are those most enthusiastic about the subject, which can skew the sample - such as a 1936 presidential poll that, during the Depression, was mailed out based upon phone numbers and addresses when many people did not have a private phone. This skewed the sample towards those least affected by the Depression, and thus towards the Republican candidate versus FDR. The poll came back showing a Republican landslide for Alf Landon, when in reality FDR won over 60% of the vote and all but two states. The sampling method was flawed. Other ways polls can be manipulated lie in how you word the questions; polls on the Terri Schiavo case from the early 2000's and on post-9/11 wiretap policy are used as examples of how nearly contradictory results can be arrived at by simply asking the question a different way. The last one, which I found most humorous, was where public opinion polls offer nonsense results. The example that stuck out to me was a study - based on polling - that stated American men had a history of 7 sexual partners... but American women only had 4. Logically the two averages should be roughly equal, shouldn't they - even taking into account the homosexual wild card - but they are not. So someone is lying. And if people obviously lie on something like this, men boasting of their virility and women playing up their chasteness, is ANYTHING that comes out of a public opinion poll worth placing any emphasis on?
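That 1936-style sampling-frame problem can be mocked up in a few lines of Python; the population proportions below are invented purely to show the mechanism:

import random
random.seed(0)

# Invented population of 1,000 voters: 60% support candidate A overall, but the
# better-off 30% who own phones support A only 35% of the time.
population = ([("phone", "A")] * 105 + [("phone", "B")] * 195 +
              [("no phone", "A")] * 495 + [("no phone", "B")] * 205)

def share_for_A(sample):
    return sum(1 for _, vote in sample if vote == "A") / len(sample)

true_random_sample = random.sample(population, 400)
phone_book_sample  = random.sample([v for v in population if v[0] == "phone"], 200)

print("sample drawn from everyone:   ", round(share_for_A(true_random_sample), 2))  # near 0.60
print("sample drawn from phone book: ", round(share_for_A(phone_book_sample), 2))   # near 0.35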

Discussing the validity (or non-validity) of polling is still number-related, but then the book goes into the nonsense that surrounded the Florida 2000 presidential and Minnesota 2008 senatorial elections. Here all of the controversy surrounding counting and recounting (and recounting and recounting) votes is described. Here are hanging chads and Lizard People votes explained in all their, ummm... glory. And here he goes into detail on how manually recounting votes almost never adds up to the same number twice, and how, in what amounts to a statistical dead heat, there is no way to achieve a definitive result. I think Coleman/Franken had a difference of around 0.02% - two hundredths of a percent - a couple of hundred votes out of two and a half million cast. Recounting that total number of votes manually - or by statistical sampling - would almost never give the same number twice. The Florida election is looked at with the same result. The most interesting thing to come out of the last half of the book is that there was a solution on the books in both states: deciding the winner by drawing lots, or flipping a coin. It was a good story, but to me it didn't have any real 'Aha!' moments. Anyone from either side of the political spectrum who paid attention to those stories basically knew what he has written, just maybe not in this depth of detail.

The problem I had with the second half is that it seemed almost to be a different book than the one I read in the first half - almost as if two lengthy magazine articles were brought together. The first half is about how one can use mathematics to lie and advance a position when, according to popular opinion, mathematics doesn't lie. The second half seems to be a discussion of the limitations of mathematics through statistics, and how sometimes mathematics cannot give you an answer no matter how closely fallible human beings look at the problem.

Not a bad book; if I'd had the money to buy a $25 book when I was reading the first half, I probably would have bought it. But after reading the second half I am unsure whether I would have really felt the push to finish it. When I get to skimming and eventually skipping sections, a book has lost me, and Proofiness had lost me by the end.
4 of 4 people found the following review helpful
I have my pet peeves, of course, like the guy who translates 3/16" into metric and gets 0.0047625 meters. That's a precision of a half micron, a hundredth of a hair-width, when the input was good only to within 1/32" (or worse, depending on your ruler and how you read it). My annoyance is nothing compared to subversion of the rule of law and corruption of the principles of democracy.

Seife starts with a startling (but obvious) piece of psychology: when someone quotes an exact number - and Sen. Joe McCarthy's bogus "205 communists" comes to mind - we assume they must have exact knowledge. That actually comes from a positive part of the human mind, the part that wants to trust in others and to have the most knowledgeable people making important decisions. It banishes all the uncertainty and subjectivity that make life so threatening, contentious, and combative. It gives the comfort and strength of knowing The Truth. But, of course, it doesn't. Instead, it gives "proofiness," the numerical species of "truthiness."

So, what does it mean when a mascara gives eyelashes "twelve times more impact"? Or when a "million man march" had 400K attendees, by best Park Service estimates? (BTW, Farrakhan demanded his million even if he had to get it in court, so he sued and stopped the Park Service from estimating crowd sizes for a decade.) If you've read this far, you already know how some product can "cost 200% less than the competitor," even if they don't pay you to take it away from them. So, let's get to the most destructive kinds of cases.

The Bush/Gore fiasco figures heavily, as does the Franken/Coleman inanity of nearly a decade later. The basic problem in both cases was simple: when you count up a few million of anything, your answer has inherent uncertainty. You miss a few, you double-count a few, a few show up so mangled that you're not sure what you're looking at, and next time you count a few more have shown up or vanished. Even when you put all the best available resources into a good-faith effort, your numbers will jitter, and will jitter differently when you count again. When counting millions of votes, a difference of a few hundred either way is exactly a tie, to within the resolving power of flawed human mechanisms. Any interested reader can pursue the debacle that ensued, a veritable festival of political bullying, good-buddy and family favor-gathering, party politics, and base ignorance proclaimed from the highest levels.
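A toy simulation makes the point; the vote totals and the error rate below are invented, and the error model (ballots simply lost at random) is far cruder than any real recount:

import numpy as np
rng = np.random.default_rng(1)

true_a, true_b = 1_250_150, 1_249_850    # invented: a 300-vote "true" margin out of 2.5 million
miscount_rate = 0.01                     # assume 1 ballot in 100 is lost or misread on any given count

def count_once(true_votes):
    # Each count randomly loses a binomial number of ballots.
    return true_votes - rng.binomial(true_votes, miscount_rate)

for trial in range(4):
    margin = count_once(true_a) - count_once(true_b)
    print(f"count {trial + 1}: apparent margin = {margin:+d}")
# With these made-up numbers the apparent margin wanders by a couple of hundred votes
# from count to count, which is the sense in which a few-hundred-vote race is a tie.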

Bad math has entered the courts, too. Take the OJ entertainment, for example. The defense lawyer said that although OJ was a convicted abuser, only 1 in 1000 abusers ever kill anyone. That makes him 99.9% innocent, right, or something like that? Dershowitz's proofiness skipped over the fact that only one in (I forget exact numbers) 50,000 are murderers, so abusers have a 50:1 higher chance than anyone else of being murderers. But, when an abused woman is murdered, it's about 1:1 odds that her abuser killed her. That's a 50% chance of guilt even before the forensic evidence shows up, not 0.1%.
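The arithmetic compressed into that paragraph looks roughly like this in Python; the input rates are the ballpark recollections above, not verified figures:

# Rough bookkeeping behind the "1 in 1000" argument; all rates are illustrative.
p_kill_given_abuser = 1 / 1000       # share of abusers who go on to kill their partner
p_kill_baseline     = 1 / 50_000     # share of the general population who kill

relative_risk = p_kill_given_abuser / p_kill_baseline
print(f"abusers are about {relative_risk:.0f}x more likely than average to be killers")

# The relevant question conditions on the fact that the woman WAS murdered:
# among murdered women who had an abusive partner, the abuser is the killer in
# roughly half the cases or more (the "about 1:1 odds" above) - not 0.1%.
p_abuser_did_it_given_murder = 0.5   # illustrative figure matching the review's claim
print(f"given the murder, P(abuser is the killer) ~ {p_abuser_did_it_given_murder:.0%}")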

Proofiness goes beyond celebrity trials where bags of money argue guilt and innocence against other bags of money. It infiltrates the Supreme Court of the U.S., the agency that daily defines what is legal or not under our Constitution. Seife recounts the number-based reasoning that underlies major precedents, and exposes the plain and simple defects of reason that justify them. He even starts in on the reasoning that condemns Americans to death within their own legal system, and finds the same defective reasoning not only entrenched, but defended against any potential challenge, a triumph of righteousness over rationality that I find terrifying.

Although this book involves arithmetic, most fifth-graders will understand the numerical principles involved. Seife appeals to the reasoning adult, however, and leaves me wondering just how few those people might be. Argument by isolated example can only appeal to feelings, not to real understanding of the larger landscape, and Seife argues by dismal and gut-wrenching example. He does not argue against himself, however. Instead, he mobilizes the thinking reader to truly think, to understand our complex world in some valid way. He documents a bizarre and very public society in which numbers are enslaved to the most degrading and brutal of tasks. Then he calls on us all to make numbers our own again, to give them their rightful place among our honest and trusted advisors.

- wiredweird
Customers who viewed this also viewed


Zero: The Biography of a Dangerous Idea by Charles Seife (Paperback - September 1, 2000)
$13.77
 
     
