The Knowledge Illusion: Why We Never Think Alone Kindle Edition
We all think we know more than we actually do.
Humans have built hugely complex societies and technologies, but most of us don’t even know how a pen or a toilet works. How have we achieved so much despite understanding so little? Cognitive scientists Steven Sloman and Philip Fernbach argue that we survive and thrive despite our mental shortcomings because we live in a rich community of knowledge. The key to our intelligence lies in the people and things around us. We’re constantly drawing on information and expertise stored outside our heads: in our bodies, our environment, our possessions, and the community with which we interact—and usually we don’t even realize we’re doing it.
The human mind is both brilliant and pathetic. We have mastered fire, created democratic institutions, stood on the moon, and sequenced our genome. And yet each of us is error prone, sometimes irrational, and often ignorant. The fundamentally communal nature of intelligence and knowledge explains why we often assume we know more than we really do, why political opinions and false beliefs are so hard to change, and why individual-oriented approaches to education and management frequently fail. But our collaborative minds also enable us to do amazing things. The Knowledge Illusion contends that true genius can be found in the ways we create intelligence using the community around us.
Editorial Reviews
Review
“Sloman and Fernbach offer clever demonstrations of how much we take for granted, and how little we actually understand... The book is stimulating, and any explanation of our current malaise that attributes it to cognitive failures—rather than putting it down to the moral wickedness of one group or another—is most welcome. Sloman and Fernbach are working to uproot a very important problem... [The Knowledge Illusion is] written with vigour and humanity.” —Financial Times
“The Knowledge Illusion is at once both obvious and profound: the limitations of the mind are no surprise, but the problem is that people so rarely think about them... In the context of partisan bubbles and fake news, the authors bring a necessary shot of humility: be sceptical of your own knowledge, and the wisdom of your crowd.” —The Economist
“A breezy guide to the mechanisms of human intelligence.” —Psychology Today
“In an increasingly polarized culture where certainty reigns supreme, a book advocating intellectual humility and recognition of the limits of understanding feels both revolutionary and necessary. The fact that it’s a fun and engaging page-turner is a bonus benefit for the reader.” —Publishers Weekly
“An utterly fascinating and unsettling book, The Knowledge Illusion shows us how everything we know is bound together with knowledge of others. Sloman and Fernbach break down many of our assumptions about science, how we think and how we know anything at all about the world in which we live. Despite the wide-scale deconstruction, the authors are upbeat... Anyone engaged in the work of nurturing healthy and flourishing communities will ultimately have to wrestle with the questions posed in this book. Sloman and Fernbach help us to do so gracefully, acknowledging the truth of how little we know, and finding hope in this precarious situation.” —Relevant Magazine
“We all know less than we think we do, including how much we know about how much we know. There’s no cure for this condition, but there is a treatment: this fascinating book. The Knowledge Illusion is filled with insights on how we should deal with our individual ignorance and collective wisdom.” —Steven Pinker, Johnstone Family Professor of Psychology, Harvard University, and author of How the Mind Works and The Stuff of Thought
“I love this book. A brilliant, eye-opening treatment of how little each of us knows, and how much all of us know. It's magnificent, and it's also a lot of fun. Read it!” —Cass R. Sunstein, coauthor of Nudge and founder and director, Program on Behavioral Economics and Public Policy, Harvard Law School
About the Author
Philip Fernbach is a cognitive scientist and professor of marketing at the University of Colorado’s Leeds School of Business. He lives in Boulder, Colorado, with his wife and two children. --This text refers to the paperback edition.
Excerpt. © Reprinted by permission. All rights reserved.
What We Know
Nuclear warfare lends itself to illusion. Alvin Graves was the scientific director of the U.S. military’s bomb testing program in the early fifties. He was the person who gave the order to go ahead with the disastrous Castle Bravo detonation discussed in the last chapter. No one in the world should have understood the dangers of radioactivity better than Graves. Eight years before Castle Bravo, in 1946, Graves was one of eight men in a room in Los Alamos, the nuclear laboratory in New Mexico, while another researcher, Louis Slotin, performed a tricky maneuver the great physicist Richard Feynman nicknamed “tickling the dragon’s tail.” Slotin was experimenting with plutonium, one of the radioactive ingredients used in nuclear bombs, to see how it behaved. The experiment involved closing the gap between two hemispheres of beryllium surrounding a core of plutonium. As the hemispheres got closer together, neutrons released from the plutonium reflected back off the beryllium, causing more neutrons to be released. The experiment was dangerous. If the hemispheres got too close, a chain reaction could release a burst of radiation. Remarkably, Slotin, an experienced and talented physicist, was using a flathead screwdriver to keep the hemispheres separated. When the screwdriver slipped and the hemispheres crashed together, the eight physicists in the room were bombarded with dangerous doses of radiation. Slotin took the worst of it and died in the infirmary nine days later. The rest of the team eventually recovered from the initial radiation sickness, though several died young of cancers and other diseases that may have been related to the accident.
How could such smart people be so dumb?
It’s true that accidents happen all the time. We’re all guilty of slicing our fingers with a knife or closing the car door on someone’s hand by mistake. But you’d hope a group of eminent physicists would know to depend on more than a handheld flathead screwdriver to separate themselves from fatal radiation poisoning. According to one of Slotin’s colleagues, there were much safer ways to do the plutonium experiment, and Slotin knew it. For instance, he could have fixed one hemisphere in position and raised the other from below. Then, if anything slipped out of position, gravity would separate the hemispheres harmlessly.
Why was Slotin so reckless? We suspect it’s because he experienced the same illusion that we have all experienced: that we understand how things work even when we don’t. The physicists’ surprise was like the surprise you feel when you try to fix a leaky faucet and end up flooding the bathroom, or when you try to help your daughter with her math homework and end up stumped by quadratic equations. Too often, our confidence that we know what’s going on is greater at the beginning of an episode than it is at the end.
Are such cases just random examples, or is there something more systematic going on? Do people have a habit of overestimating their understanding of how things work? Is knowledge more superficial than it seems? These are the questions that obsessed Frank Keil, a cognitive scientist who worked at Cornell for many years and moved to Yale in 1998. At Cornell, Keil had been busy studying the theories people have about how things work. He soon came to realize how shallow and incomplete those theories are, but he ran into a roadblock. He could not find a good method to demonstrate scientifically how much people know relative to how much they think they know. The methods he tried took too long or were too hard to score or led participants to just make stuff up. And then he had an epiphany, coming up with a method to show what he called the illusion of explanatory depth (IoED, for short) that did not suffer from these problems: “I distinctly remember one morning standing in the shower in our home in Guilford, Connecticut, and almost the entire IoED paradigm spilled out in that one long shower. I rushed into work and grabbed Leon Rozenblit, who had been working with me on the division of cognitive labor, and we started to map out all the details.”
Thus a method for studying ignorance was born, a method that involved simply asking people to generate an explanation and showing how that explanation affected their rating of their own understanding. If you were one of the many people that Rozenblit and Keil subsequently tested, you would be asked a series of questions like the following:
1. On a scale from 1 to 7, how well do you understand how zippers work?
2. How does a zipper work? Describe in as much detail as you can all the steps involved in a zipper's operation.
If you’re like most of Rozenblit and Keil’s participants, you don’t work in a zipper factory and you have little to say in answer to the second question. You just don’t really know how zippers work. So, when asked this question:
3. Now, on the same 1 to 7 scale, rate your knowledge of how a zipper works again.
This time, you show a little more humility by lowering your rating. After trying to explain how a zipper works, most people realize they have little idea and thus lower their knowledge rating by a point or two.
This sort of demonstration shows that people live in an illusion. By their own admission, respondents thought they understood how zippers work better than they did. When people rated their knowledge the second time as lower, they were essentially saying, “I know less than I thought.” It’s remarkable how easy it is to disabuse people of their illusion; you merely have to ask them for an explanation. And this is true of more than zippers. Rozenblit and Keil obtained the same result with speedometers, piano keys, flush toilets, cylinder locks, helicopters, quartz watches, and sewing machines. And everyone they tested showed the illusion: graduate students at Yale as well as undergraduates at both an elite university and a regional public one. We have found the illusion countless times with undergraduates at a different Ivy League university, at a large public school, and testing random samples of Americans over the Internet. We have also found that people experience the illusion not only with everyday objects but with just about everything: People overestimate their understanding of political issues like tax policy and foreign relations, of hot-button scientific topics like GMOs and climate change, and even of their own finances. We have been studying psychological phenomena for a long time and it is rare to come across one as robust as the illusion of understanding.
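For readers who want to see the shape of the measurement, here is a minimal sketch of how the pre/post ratings in this kind of study could be scored. The items, the ratings, and the `ioed_drop` helper are purely illustrative assumptions; this is not Rozenblit and Keil's data or code.

```python
# Illustrative sketch of scoring the illusion-of-explanatory-depth (IoED) paradigm.
# Each participant rates their understanding of an item (1-7), writes an explanation,
# then rates again; the pre-minus-post drop is the measure of the illusion.
# The responses below are made-up examples, not the original study's data.

from statistics import mean

# Hypothetical responses: (item, rating before explaining, rating after explaining)
responses = [
    ("zipper",        5, 3),
    ("flush toilet",  6, 4),
    ("cylinder lock", 4, 3),
    ("speedometer",   5, 4),
]

def ioed_drop(before: int, after: int) -> int:
    """Size of the illusion for one item: how far the self-rating fell."""
    return before - after

drops = {item: ioed_drop(pre, post) for item, pre, post in responses}
print(drops)                  # per-item drop in self-rated understanding
print(mean(drops.values()))   # average drop across items (a point or two, as described above)
```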
One interpretation of what occurs in these experiments is that the effort people make to explain something changes how they interpret what “knowledge” means. Maybe when asked to rate their knowledge, they are answering a different question the first time they are asked than they are the second time. They may interpret the first question as “How effective am I at thinking about zippers?” After attempting to explain how the object works, they instead assess how much knowledge they are actually able to articulate. If so, their second answer might have been to a question that they understood more as “How much knowledge about zippers am I able to put into words?” This seems unlikely, because Rozenblit and Keil used such careful and explicit instructions when they asked the knowledge questions. They told participants precisely what they meant by each scale value (1 to 7). But even if respondents were answering different questions before and after they tried to explain how the object worked, it remains true that their attempts to generate an explanation taught them about themselves: They realized that they have less knowledge that they can articulate than they thought. This is the essence of the illusion of explanatory depth. Before trying to explain something, people feel they have a reasonable level of understanding; after explaining, they don’t. Even if they lower their score because they’re defining the term “knowledge” differently, it remains a revelation to them that they know relatively little. According to Rozenblit and Keil, “many participants reported genuine surprise and new humility at how much less they knew than they originally thought.”
A telling example of the illusion of explanatory depth can be found in what people know about bicycles. Rebecca Lawson, a psychologist at the University of Liverpool, showed a group of psychology undergraduates a schematic drawing of a bicycle that was missing several parts of the frame as well as the chain and the pedals.
She asked the students to fill in the missing parts. Try it. What parts of the frame are missing? Where do the chain and pedals go?
It’s surprisingly difficult to answer these questions. In Lawson’s study, about half the students were unable to complete the drawings correctly (you can see some examples on the next page). They didn’t do any better when they were shown the correct drawings as well as three incorrect ones and were asked to pick out the correct one. Many chose pictures showing the chain around the front wheel as well as the back wheel, a configuration that would make it impossible to turn. Even expert cyclists were far less than perfect on this apparently easy task. It is striking how sketchy and shallow our understanding of familiar objects is, even objects that we encounter all the time that operate via mechanisms that are easily perceived.
How Much Do We Know?
So we overestimate how much we know, suggesting that we’re more ignorant than we think we are. But how ignorant are we? Is it possible to estimate how much we know? Thomas Landauer tried to answer this question.
Landauer was a pioneer of cognitive science, holding academic appointments at Harvard, Dartmouth, Stanford, and Princeton and also spending twenty-five years trying to apply his insights at Bell Labs. He started his career in the 1960s, a time when cognitive scientists took seriously the idea that the mind is a kind of computer. Cognitive science emerged as a field in sync with the modern computer. As great mathematical minds like John von Neumann and Alan Turing developed the foundations of computing as we know it, the question arose whether the human mind works in the same way. Computers have an operating system that is run by a central processor that reads and writes to a digital memory using a small set of rules. Early cognitive scientists ran with the idea that the mind does too. The computer served as a metaphor that governed how the business of cognitive science was done. Thinking was assumed to be a kind of computer program that runs in people’s brains. One of Alan Turing’s claims to fame is that he took this idea to its logical extreme. If people work like computers, then it should be possible to program a computer to do what a human being can. Motivated by this idea, his classic paper “Computing Machinery and Intelligence” in 1950 addressed the question Can machines think?
In the 1980s, Landauer decided to estimate the size of human memory on the same scale that is used to measure the size of computer memories. As we write this book, a laptop computer comes with around 250 or 500 gigabytes of memory as long-term storage. Landauer used several clever techniques to measure how much knowledge people have. For instance, he estimated the size of an average adult’s vocabulary and calculated how many bytes would be required to store that much information. He then used the result of that to estimate the size of the average adult’s entire knowledge base. The answer he got was half of a gigabyte.
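The structure of that vocabulary-based estimate can be sketched with a few lines of arithmetic. All of the numbers below (vocabulary size, bytes per word, vocabulary's share of total knowledge) are illustrative placeholders, not Landauer's actual figures; the point is only the shape of the calculation.

```python
# Back-of-the-envelope sketch of a vocabulary-based estimate of human memory.
# Every number here is an illustrative assumption, not Landauer's actual figure.

vocabulary_size = 50_000     # assumed words known by an average adult
bytes_per_word  = 1_000      # assumed storage cost per word (meaning, sound, spelling, usage)
vocab_bytes     = vocabulary_size * bytes_per_word   # about 50 MB for vocabulary alone

# Assume vocabulary makes up some fraction of everything a person knows
# (a pure guess, used only to show how the extrapolation works).
vocab_share_of_knowledge = 0.1
total_bytes = vocab_bytes / vocab_share_of_knowledge

print(f"Estimated knowledge base: {total_bytes / 1e9:.1f} GB")  # ~0.5 GB, the order of magnitude in the text
```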
He also made the estimate in a completely different way. Many experiments have been run by psychologists that ask people to read text, look at pictures, or hear words (real or nonsensical), sentences, or short passages of music. After a delay of between a few minutes and a few weeks, the psychologists test the memory of their subjects. One way to do this is to ask people to reproduce the material originally presented to them. This is a test of recall and can be quite punishing. Do you think you could recall a passage right now that you had heard only once before, a few weeks ago? Landauer analyzed a number of experiments that weren’t so hard on people. The experiments tended to test recognition—whether participants could identify a newly presented item (often a picture, word, or passage of music) as one that had been presented before or not. In some of these experiments, people were shown several items and had to pick the one they had seen before. This is a very sensitive way of testing memory; people would be able to do well even if their memories were weak. To estimate how much people remembered, Landauer relied on the difference in recognition performance between a group that had been exposed to the items and a group that had not. This difference is as pure a measure of memory as one can get.
Landauer’s brilliant move was to divide the measure of memory (the difference in recognition performance between the two groups) by the amount of time people spent learning the material in the first place. This told him the rate at which people are able to acquire information that they later remember. He also found a way to take into account the fact that people forget. The remarkable result of his analysis is that people acquire information at roughly the same rate regardless of the details of the procedure used in the experiment or the type of material being learned. They learned at approximately the same rate whether the items were visual, verbal, or musical.
Landauer next calculated how much information people have on hand—what the size of their knowledge base is—by assuming they learn at this same rate over the course of a seventy-year lifetime. Every technique he tried led to roughly the same answer: 1 gigabyte. He didn't claim that this answer is precisely correct. But even if it's off by a factor of 10, even if people store 10 times more or 10 times less than 1 gigabyte, it remains a puny amount. It's just a tiny fraction of what a modern laptop can retain. Human beings are not warehouses of knowledge.
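The lifetime-accumulation estimate has the same back-of-the-envelope structure: a learning rate multiplied by a lifetime of waking seconds, discounted for forgetting. The rate, waking hours, and retention fraction below are placeholders chosen only to land near the figure quoted in the text; they are not Landauer's measured values.

```python
# Sketch of the lifetime-accumulation estimate. The acquisition rate, waking
# hours, and retention fraction are illustrative placeholders, not Landauer's
# measured values; only the structure of the calculation is the point.

acquisition_rate_bytes_per_sec = 1.5   # assumed net rate of taking in new information
waking_hours_per_day           = 16
years                          = 70
retention_after_forgetting     = 0.5   # assumed fraction still retrievable later

seconds_awake  = years * 365 * waking_hours_per_day * 3600
lifetime_bytes = acquisition_rate_bytes_per_sec * seconds_awake * retention_after_forgetting

print(f"Lifetime knowledge base: {lifetime_bytes / 1e9:.1f} GB")  # roughly 1 GB, tiny by laptop standards
```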
From one perspective, this is shocking. There is so much to know and, as functioning adults, we know a lot. We watch the news and don’t get hopelessly confused. We engage in conversations about a wide range of topics. We get at least a few answers right when we watch Jeopardy! We all speak at least one language. Surely we know much more than a fraction of what can be retained by a small machine that can be carried around in a backpack.
But this is only shocking if you believe the human mind works like a computer. The model of the mind as a machine designed to encode and retain memories breaks down when you consider the complexity of the world we interact with. It would be futile for memory to be designed to hold tons of information because there’s just too much out there. --This text refers to the paperback edition.
Product details
- ASIN : B01HNJIJY4
- Publisher : Riverhead Books (March 14, 2017)
- Publication date : March 14, 2017
- Language : English
- File size : 1737 KB
- Text-to-Speech : Enabled
- Screen Reader : Supported
- Enhanced typesetting : Enabled
- X-Ray : Enabled
- Word Wise : Enabled
- Print length : 301 pages
- Lending : Not Enabled
- Best Sellers Rank: #252,548 in Kindle Store
- #243 in Applied Psychology
- #352 in Cognitive Psychology (Kindle Store)
- #411 in Business Decision-Making
Customer reviews
Top reviews from the United States
1) Poor scientific foundations
These authors claim to offer "a better way to think about how we think" (back cover) but seem unclear about basic biological principles. This first struck me early on, when they referred to cancer cells as "microscopic organisms" (pg. 29), but I gave it a pass and kept reading. Such errors grew less cute as they multiplied. True, the authors refer to themselves as cognitive scientists, not as biologists or neuroscientists, but an unfailing grasp of modern biology should be required for any modern scientist (of any kind) who claims unique insight into the thinking of biological entities as complex as human beings.
2) Regular self-contradictions
This can be summarized with an example. On page 40, the authors write: "No plant evolved cells that can organize into networks to process information." On page 41, they write: "You can think of the Venus flytrap as a kind of information-processing system." Self-contradictions like this are so frequent and stark that I was cringing with embarrassment after the first few.
3) Weak claims made with weasel language
Again, I'll let an example do the talking. From page 61: "Even though we're not great at diagnostic reasoning, our ability to do it may be what makes us human. There's hardly any evidence that any other animal can do it." No book on any modern scientific topic should be allowed to get away with these kinds of empty sentences. This book is packed with them.
4) Self-unaware writing
On page 71, the authors (both based in the U.S., according to the book jacket) write "whether or how our governments should engage in the Middle East" seemingly without considering that some readers of this book might, in fact, be citizens of governments in the Middle East. On page 49, they write that when one of the authors "sees dinnertime in his future, he hangs around his wife because she is responsible for preparing dinner in the family." And so on. This kind of writing is deeply tone-deaf and implies a sort of clueless privilege that should give pause to readers of a book that, again, purports to offer "a better way to think about how we think" (back cover).
5 and 6) Cliched content and significant repetition of simple ideas
The latter would have been served by slashing the former. Every good idea in this book could have fit into 100 pages.
I wish I could say that any of these issues were isolated or rare. They were not. The book is full of them, and as they piled up, the compounded effect was that I eventually stopped reading the book, somewhere around the halfway point.
The only reason this book gets two stars is that it did, in fact, provide me with a few moments of genuine thought. But not enough to make me finish the book.
The only thing that tempted me to give it 4 stars was what I thought was a lack of advice on how to deal with these problems. Many pages were spent explaining how human thinking is fragile and how this results in bad effects. But there was very little discussion about how to proceed with this understanding. What are some practical ways to improve the knowledge system so individual ignorance has less of an impact? How does this apply to bosses/managers? Community leaders? Parents? Social Media?
I would welcome a second book about that topic specifically - now that you know all the problems with how we think and where knowledge is actually stored, here are some practical strategies for making the most of it!
The authors start out by giving examples of incredibly smart people doing foolish things, and by asking the reader to work through several exercises that test their understanding of their own knowledge. These are good prompts for making readers aware of ignorance they were perhaps unaware of. The authors discuss some neuroscience and argue for the benefits of a mind that can construct a causal framework in which events can be predicted. They then discuss how our views of causal relationships can be very wrong, and how the intuition we default to fails us when a problem rests on an unfamiliar set of assumptions. The authors also touch on AI and how it has evolved over the last 70 years, where it has not developed as people expected, and where it is going now; I wouldn't give this part of the book much credit, as there is much better material out there. They discuss how collaborative thinking lets differentiated expertise improve research, and how great human achievements are often, if not always, the work of groups rather than individuals. They remind us that deifying individual achievement stems from our belief that one mind can hold so much, whereas we are all constrained and dependent on collaboration and on copying ideas. The authors discuss how technology is affecting our ability to be honest about our own knowledge and changing how we experience the world. They also bring up our weak understanding of science, the failures of initiatives to improve scientific reasoning, and how even people with good scientific knowledge rest their understanding on the expertise of others rather than on deep intrinsic knowledge of the subject matter. Turning to politics, they discuss how poorly people work through the causal implications of their voting preferences; as a consequence, political opinion depends on what the surrounding group thinks rather than on individual thoughtfulness. This, the authors note, is part of the reason for today's voter divisiveness: it is not that value systems are so far apart, but that people argue in big principles rather than policy repercussions, so debates are heated by framing and group thinking. Finally, the authors consider how we should view intelligence and human capacity; since our biases are innate, policies with some paternalism can be beneficial, and they bring up ideas from behavioral economics such as nudging people's preferences toward better outcomes for society.
The Knowledge Illusion discusses a lot of concepts around how the mind functions, what its limitations are, and how we can be blind to our ignorance. I think most thoughtful people recognize that their knowledge is limited and that the more complex the world gets, the harder it is to build a comprehensive view of causality. Nonetheless, the book communicates its ideas well and gives some good examples highlighting how we usually ignore the limits of our own understanding, potentially to our peril. A worthwhile read: no really new ideas, but a good mix of psychology, technology, politics, and social policy.
