How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life Paperback – March 5, 1993
| Format | Price | New from | Used from | Collectible from |
|---|---|---|---|---|
| Kindle | $0.00 with Kindle Unlimited; $14.99 to buy | — | — | — |
| Audiobook (Audible, Unabridged) | $0.00 with Audible trial | — | — | — |
| Hardcover | $20.50 | $65.01 (1) | $5.93 (8) | $39.99 (2) |
| Paperback | $15.91 | $10.32 (18) | $2.15 (93) | $11.87 (3) |
When can we trust what we believe—that "teams and players have winning streaks," that "flattery works," or that "the more people who agree, the more likely they are to be right"—and when are such beliefs suspect? Thomas Gilovich offers a guide to the fallacy of the obvious in everyday life. Illustrating his points with examples, and supporting them with the latest research findings, he documents the cognitive, social, and motivational processes that distort our thoughts, beliefs, judgments and decisions. In a rapidly changing world, the biases and stereotypes that help us process an overload of complex information inevitably distort what we would like to believe is reality. Awareness of our propensity to make these systematic errors, Gilovich argues, is the first step to more effective analysis and action.
- Print length: 224 pages
- Language: English
- Publication date: March 5, 1993
- Dimensions: 6.13 x 0.56 x 9.25 inches
- ISBN-10: 0029117062
- ISBN-13: 978-0029117064
Product details
- Publisher : Free Press; Reprint edition (March 5, 1993)
- Language : English
- Paperback : 224 pages
- ISBN-10 : 0029117062
- ISBN-13 : 978-0029117064
- Item Weight : 9.2 ounces
- Dimensions : 6.13 x 0.56 x 9.25 inches
- Best Sellers Rank: #66,238 in Books
- #148 in Medical Cognitive Psychology
- #231 in Popular Social Psychology & Interactions
- #275 in Cognitive Psychology (Books)
Customer reviews
Reviewed in the United States on January 21, 2018
Top reviews from the United States
medicine, etc. He tends to imply that "holistic" is the same as "quackery". Of course, quackery is quackery, no matter what a quack calls his "medicine", be it "holistic", be it "orthodox", be it whatever you choose. At least, what I know about "holistic" medicine has nothing to do with quackery and nothing to do with Gilovich's description. The same can be said about his condemnation of those who oppose the "germ theory" that has been dominant since Pasteur. As far as I know, real doctors who oppose it do not say germs are not an important factor. What they say is that the germ is only ONE factor, and very often not the most important one. That explains why you can expose hundreds of people to the same germ and, most of the time, only a few will get sick.
On page 6 he refers to Rhea Sullins. Her father may have killed her, as the author implies. Nevertheless, this is far from certain based on what he says. Would she have been cured by conventional medicine? Did her father use a proper natural medicine? Statistically, is her case important? He mentions a single occurrence of a victim and a perpetrator and expects us to believe this is enough to prove "Natural Hygiene" is bad, or that hygienists are all dumb and irresponsible.
When he gets to homeopathy, he applies every single technique he denounces in others. He implies homeopathy is quackery and has no scientific soundness. Far from it. Homeopathy does not pretend to understand why it works; nobody knows. But it follows the same scientific methodology other sciences follow, like the much-beloved "double blind", for instance. Also, homeopathic practitioners tend to make much better prognoses than orthodox medicine, such as predicting the course of a remission, using very objective measures like changes in body temperature, weight modification, skin alterations, etc. Homeopathy is also used in animals with great success, and in emergency rooms too. Also -- in Brazil at least -- in order to practice homeopathy a person must first go through regular medical school for about 6 years. After that, 3 more years are needed to become a homeopath. It is hard to believe a homeopath is less educated and trained than a "regular" doctor. The reverse seems to be true.
By the way, I have no affiliation whatsoever with homeopathy. I am among those (also more or less ridiculed by Gilovich) who believe our health should be in our own hands, doctors being only helpers. So I try to understand what we should expect from our mind and body (not two separate entities), from our food, from the doctors, and from the drugs (in this order of precedence).
The final chapters are a little boring. It seems the author wanted to pack as many things into as few pages as possible in order to support his views. It is quite miscellaneous and clearly shows the author has an axe to grind.
Again, this is a good book that deserves to be read. The fact that the author is himself a victim of the failures he sees so clearly in our reasoning does not belittle his work.
Part One of the book looks at the reasoning behind why we are susceptible to ideas and conclusions that are not supported by fact. An example is given about pattern recognition in the realm of sports that deftly demonstrates how the human brain is programmed to seek out patterns, sometimes even when there are none to be found. This tendency can erroneously convince a person and lead them to believe false information. The example revolves around a basketball player's belief that scoring comes in streaks, and that one can develop a "hot hand". Research, however, shows that statistically, a prior make or miss has no bearing on the success of a future shot attempt. This belief leads players to think that if they have made one or two shots in a row, they will continue to make them at a greater percentage, causing them to change the way they play, such as not passing to open teammates. The idea is further undermined by the concept of regression to the mean, which shows that exceptional performance tends to be followed by a decline toward the average.
Part Two of the book delves into the motivations behind our beliefs, such as how social standards, biases, and overstated conversations can convince us of false realities. Again, the author uses several practical examples: one returns to the sports world and describes the biasing effects of referees who unfairly penalize certain jersey colors; another recounts the conditioning story of Little Albert, a young boy who was subjected to conditioning experiments with animals and loud sounds.
The final section takes an unexpected turn and goes after a handful of unconventional beliefs such as alternative medicine and extrasensory perception (ESP). Gilovich reveals himself to be quite the skeptic as he skillfully pokes holes in the non-scientific nature of these activities.
Using his extensive background in social and behavioral psychology, Gilovich has created an insightful book that is essentially a "how-to" guide to avoiding irrational thinking. By giving the reader a set of tools to critically think about data, long-held beliefs, and newer fringe philosophies, Gilovich has empowered his audience to challenge the status quo by analyzing and evaluating the information that goes into making decisions or choosing what to believe as fact. The biggest criticisms of the book are related to how some topics seem to be discussed longer than necessary, and that several of the references are outdated. That being said, for a book that is 25+ years old, the content is written in a way that keeps the reader engaged, and explains the core concepts in a way that the layperson can sufficiently understand.
Top reviews from other countries
The brain is hard-wired to detect order in the nature of things. We can learn from experience by accumulated observations and this has obvious survival advantages in evolutionary terms.
But where do things start to go wrong? First of all, we see ordered patterns of outcomes that are in fact the blind product of chance. Chance produces less alternation than our intuition leads us to expect. If we toss a coin 20 times, we're unlikely to see a neat alternation of 10 heads and 10 tails; in fact, a series of 20 tosses has roughly a 50-50 chance of producing 4 heads in a row. When we see patterns such as the lucky streak in baseball, we think we are spotting an order that isn't in fact there.
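The reviewer's 50-50 figure is easy to check with a quick simulation (this sketch and its function names are mine, not from the book or the review):

```python
import random

def has_run(flips, length=4, value="H"):
    """True if `flips` contains a run of `length` consecutive `value`s."""
    run = 0
    for f in flips:
        run = run + 1 if f == value else 0
        if run >= length:
            return True
    return False

def estimate(n_flips=20, run_len=4, trials=100_000, seed=1):
    """Estimate the probability of a heads run of `run_len` in `n_flips` tosses."""
    rng = random.Random(seed)
    hits = sum(
        has_run([rng.choice("HT") for _ in range(n_flips)], run_len)
        for _ in range(trials)
    )
    return hits / trials

print(round(estimate(), 2))  # about 0.48 - close to the 50-50 the reviewer cites
```

The exact probability (via a tetranacci-style recurrence over strings with no 4-heads run) works out to about 0.478, so the "50-50" claim holds up.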
The regression effect also fools us into misattributing a cause to an effect. Suppose you perform exceptionally badly, or exceptionally well, on an exam, much worse or better than your average. Your next result is likely to be better or worse, respectively, as you move back toward your average. That's the regression effect. But we assume the exceptional and atypical is representative when the regression effect would tell us otherwise: investors may assume that a company's bumper profits in one year will be repeated in future years, when in all likelihood they will fall.
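Regression to the mean can be illustrated with a minimal simulation (my sketch, not the reviewer's): each score is a fixed "ability" plus independent noise, so the top scorers on one test were partly lucky and, as a group, score closer to the average on a retest.

```python
import random

def retest_gap(n_students=10_000, top_k=500, seed=0):
    """Select the top scorers on test 1 and compare their mean on test 2.
    Each test score = fixed ability + independent noise."""
    rng = random.Random(seed)
    ability = [rng.gauss(0, 1) for _ in range(n_students)]
    test1 = [a + rng.gauss(0, 1) for a in ability]
    test2 = [a + rng.gauss(0, 1) for a in ability]
    # Indices of the top_k performers on the first test
    top = sorted(range(n_students), key=lambda i: test1[i], reverse=True)[:top_k]
    mean1 = sum(test1[i] for i in top) / top_k
    mean2 = sum(test2[i] for i in top) / top_k
    return mean1, mean2

m1, m2 = retest_gap()
print(round(m1, 2), round(m2, 2))  # the same students score markedly lower on the retest
```

No ability changed between the two tests; the drop is purely the luck component washing out, which is exactly the pattern investors misread in one bumper year of profits.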
We also form beliefs from insufficient data, treating weakly tested hypotheses as facts. We look for confirmatory examples while overlooking or discounting facts that contradict a belief. We fail, in other words, to understand the distinction between necessary and sufficient evidence. We seize on isolated, salient pieces of data that prematurely confirm a hypothesis. Take the homoeopathist's claim that a cancer patient was miraculously cured after taking an alternative remedy. The recovery is treated as conclusive evidence of the remedy's efficacy. But such evidence is in itself insufficient to prove anything - isolated facts do not in themselves provide sufficient confirmation. They are too vulnerable to the discovery of counter-examples that contradict the hypothesis.
We leap to such conclusions because when we test a hypothesis, we fail to define what success or failure is. Too often beliefs are formed with vague definitions of what counts as successful confirmation. Studies of identical twins separated at birth may well track an identity of life outcomes that points strongly to genetic influences. But there are many outcomes or results in any given life, and some of these may overlap and give the impression of congruence. So the twins may both choose the same occupation, and this is indeed a striking identity of outcome, but it is only one such outcome, and others may vary. The danger, once again, is treating an overlap of outcomes between two sets of data as similarity while overlooking the variances. Likewise, many predictions are couched so vaguely as to guarantee against disconfirmation, akin to Woody Allen's spoof Nostradamus character who portentously avers that "two nations will go to war but only one will win".
Does our social nature compensate for this? Not necessarily. We tend to associate with like-minded people and to fight shy of conflict and controversy. So members of presidential advisory groups hold their own counsel, and we keep our mouths shut during a meeting at work; we do not want to be seen to rock the boat. The result is that people believe their opinions are more broadly shared than they actually are (one reason why the bore and the name-dropper carry on with a self-defeating strategy is precisely the reluctance of others to point it out).
Good heavens, having said all this, how on earth can we tell if our beliefs are well founded? There is no easy way out of these cognitive illusions. But it's not all bad. We do have good reasons, for example, to accept the theory of gravity, which has weight (so to speak) and is well attested by centuries of sense and statistical data. So we can rightly disregard claims of levitation on this basis.
We can also tighten up our definitions of what counts as confirmation, as noted earlier. If we were testing whether a training course that claims to raise sales staff performance really works, we would define successful confirmation as increased sales figures. The scientific process of peer review also helps: we can make sure that a researcher does not know which members of a trial group are receiving the new drug being tested, so that preconceptions of success or failure do not contaminate the researcher's observations. And we can test whether a claim for an extraordinary effect like Extra Sensory Perception can be replicated (it can't).
These are palliatives, however. We can only strive, imperfectly, to recognise when our reasoning faculties are leading us up blind alleys. This book will at least help you be a little more vigilant when it comes to forming conclusions about why you think you are right to believe the way you do.