How We Learn: Why Brains Learn Better Than Any Machine . . . for Now Paperback – February 2, 2021
Audible Audiobook, Unabridged: $0.00 (free with your Audible trial)
An illuminating dive into the latest science on our brain's remarkable learning abilities and the potential of the machines we program to imitate them
The human brain is an extraordinary learning machine. Its ability to reprogram itself is unparalleled, and it remains the best source of inspiration for recent developments in artificial intelligence. But how do we learn? What innate biological foundations underlie our ability to acquire new information, and what principles modulate their efficiency?
In How We Learn, Stanislas Dehaene works at the boundary of computer science, neurobiology, and cognitive psychology to explain how learning really works and how to make the best use of the brain’s learning algorithms in our schools and universities, as well as in everyday life and at any age.
- Print length: 352 pages
- Language: English
- Publisher: Penguin Books
- Publication date: February 2, 2021
- Dimensions: 5.5 x 0.8 x 8.42 inches
- ISBN-10: 0525559906
- ISBN-13: 978-0525559900
Editorial Reviews
Review
“[An] expert overview of learning . . . Never mind our opposable thumb, upright posture, fire, tools, or language; it is education that enabled humans to conquer the world . . . Dehaene's fourth insightful exploration of neuroscience will pay dividends for attentive readers.”--Kirkus Reviews
“[Dehaene] rigorously examines our remarkable capacity for learning. The baby brain is especially awesome and not a ‘blank slate’ . . . Dehaene’s portrait of the human brain is fascinating.”--Booklist
“A richly instructive [book] for educators, parents, and others interested in how to most effectively foster the pursuit of knowledge.” --Publishers Weekly
Praise for Reading in the Brain:
"Splendid...Dehaene reveals how decades of low-tech experiments and high-tech brain-imaging studies have unwrapped the mystery of reading and revealed its component parts...A pleasure to read. [Dehaene] never oversimplifies; he takes the time to tell the whole story, and he tells it in a literate way."—The Wall Street Journal
"Masterful...a delight to read and scientifically precise."—Nature
Praise for Consciousness and the Brain:
"Ambitious . . . Dehaene offers nothing less than a blueprint for brainsplaining one of the world's deepest mysteries. . . . [A] fantastic book."—The Washington Post
"Dehaene is a maestro of the unconscious."—Scientific American Mind
"Brilliant... Essential reading for those who want to experience the excitement of the search for the mind in the brain."—Nature
Excerpt. © Reprinted by permission. All rights reserved.
Seven Definitions of Learning
What does "learning" mean? My first and most general definition is the following: to learn is to form an internal model of the external world.
You may not be aware of it, but your brain has acquired thousands of internal models of the outside world. Metaphorically speaking, they are like miniature mock-ups more or less faithful to the reality they represent. We all have in our brains, for example, a mental map of our neighborhood and our home-all we have to do is close our eyes and envision them with our thoughts. Obviously, none of us were born with this mental map-we had to acquire it through learning.
The richness of these mental models, which are, for the most part, unconscious, exceeds our imagination. For example, you possess a vast mental model of the English language, which allows you to understand the words you are reading right now and guess that plastovski is not an English word, whereas swoon and wistful are, and dragostan could be. Your brain also includes several models of your body: it constantly uses them to map the position of your limbs and to direct them while maintaining your balance. Other mental models encode your knowledge of objects and your interactions with them: knowing how to hold a pen, write, or ride a bike. Others even represent the minds of others: you possess a vast mental catalog of people who are close to you, their appearances, their voices, their tastes, and their quirks.
These mental models can generate hyper-realistic simulations of the universe around us. Did you ever notice that your brain sometimes projects the most authentic virtual reality shows, in which you can walk, move, dance, visit new places, have brilliant conversations, or feel strong emotions? These are your dreams! It is fascinating to realize that all the thoughts that come to us in our dreams, however complex, are simply the product of our free-running internal models of the world.
But we also dream up reality when awake: our brain constantly projects hypotheses and interpretative frameworks on the outside world. This is because, unbeknownst to us, every image that appears on our retina is ambiguous-whenever we see a plate, for instance, the image is compatible with an infinite number of ellipses. If we see the plate as round, even though the raw sense data picture it as an oval, it is because our brain supplies additional data: it has learned that the round shape is the most likely interpretation. Behind the scenes, our sensory areas ceaselessly compute with probabilities, and only the most likely model makes it into our consciousness. It is the brain's projections that ultimately give meaning to the flow of data that reaches us from our senses. In the absence of an internal model, raw sensory inputs would remain meaningless.
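This probabilistic arbitration is easy to illustrate. Here is a toy sketch (mine, not the book's, with made-up numbers): two hypotheses explain the same oval retinal image equally well, so the prior tips the balance and only the more probable interpretation "wins."

```python
# Toy illustration of Bayesian perception (not from the book): a tilted round
# plate and a genuinely oval plate both project the same oval image, so the
# likelihoods are equal and the learned prior decides. Numbers are invented.

priors = {"round plate seen at an angle": 0.9, "genuinely oval plate": 0.1}
likelihood = {"round plate seen at an angle": 0.8, "genuinely oval plate": 0.8}

# Posterior is proportional to prior times likelihood (Bayes' rule).
posterior = {h: priors[h] * likelihood[h] for h in priors}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 2))  # the round-plate reading dominates
```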
Learning allows our brain to grasp a fragment of reality that it had previously missed and to use it to build a new model of the world. It can be a part of external reality, as when we learn history, botany, or the map of a city, but our brain also learns to map the reality internal to our bodies, as when we learn to coordinate our actions and concentrate our thoughts in order to play the violin. In both cases, our brain internalizes a new aspect of reality: it adjusts its circuits to appropriate a domain that it had not mastered before.
Such adjustments, of course, have to be pretty clever. The power of learning lies in its ability to adjust to the external world and to correct for errors-but how does the brain of the learner "know" how to update its internal model when, say, it gets lost in its neighborhood, falls from its bike, loses a game of chess, or misspells the word ecstasy? We will now review seven key ideas that lie at the heart of present-day machine-learning algorithms and that may apply equally well to our brains-seven different definitions of what "learning" means.
Learning Is Adjusting the Parameters of a Mental Model
Adjusting a mental model is sometimes very simple. How, for example, do we reach out to an object that we see? In the seventeenth century, René Descartes (1596-1650) had already guessed that our nervous system must contain processing loops that transform visual inputs into muscular commands (see the figure on the next page). You can experience this for yourself: try grabbing an object while wearing somebody else's glasses, preferably someone who is very nearsighted. Even better, if you can, get a hold of prisms that shift your vision a dozen degrees to the right and try to catch the object. You will see that your first attempt is completely off: because of the prisms, your hand reaches to the right of the object that you are aiming for. Gradually, you adjust your movements to the left. Through successive trial and error, your gestures become more and more precise, as your brain learns to correct the offset of your eyes. Now take off the glasses and grab the object: you'll be surprised to see that your hand goes to the wrong location, now way too far to the left!
So, what happened? During this brief learning period, your brain adjusted its internal model of vision. A parameter of this model, one that corresponds to the offset between the visual scene and the orientation of your body, was set to a new value. During this recalibration process, which works by trial and error, what your brain did can be likened to what a hunter does in order to adjust his rifle's viewfinder: he takes a test shot, then uses it to adjust his scope, thus progressively shooting more and more accurately. This type of learning can be very fast: a few trials are enough to correct the gap between vision and action. However, the new parameter setting is not compatible with the old one-hence the systematic error we all make when we remove the prisms and return to normal vision.
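Dehaene gives no code, but the recalibration he describes fits in a few lines of Python: a single offset parameter is nudged by a fraction of each reaching error, and the leftover correction reproduces the after-effect he mentions. All values below are illustrative (positive numbers mean "to the right").

```python
# Sketch of the prism experiment as single-parameter, error-driven learning.
target = 0.0           # true position of the object (degrees)
prism_shift = 12.0     # prisms displace the image 12 degrees to the right
offset_estimate = 0.0  # the brain's current correction parameter
learning_rate = 0.5    # fraction of each error used for the update

for trial in range(8):
    perceived = target + prism_shift          # where the object appears
    reach = perceived - offset_estimate       # hand goes to corrected spot
    error = reach - target                    # signed miss distance
    offset_estimate += learning_rate * error  # adjust the internal model
    print(f"trial {trial}: reached {reach:+.2f}, error {error:+.2f}")

# Remove the prisms: the leftover correction now makes the hand miss in the
# opposite direction, the systematic after-effect described in the text.
print("after removal, reach =", target - offset_estimate)
```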
Undeniably, this type of learning is a little particular, because it requires the adjustment of only a single parameter (viewing angle). Most of our learning is much more elaborate and requires adjusting tens, hundreds, or even thousands of millions of parameters (every synapse in the relevant brain circuit). The principle, however, is always the same: it boils down to searching, among myriad possible settings of the internal model, for those that best correspond to the state of the external world.
An infant is born in Tokyo. Over the next two or three years, its internal model of language will have to adjust to the characteristics of the Japanese language. This baby's brain is like a machine with millions of settings at each level. Some of these settings, at the auditory level, determine which inventory of consonants and vowels is used in Japanese and the rules that allow them to be combined. A baby born into a Japanese family must discover which phonemes make up Japanese words and where to place the boundaries between those sounds. One of the parameters, for example, concerns the distinction between the sounds /R/ and /L/: this is a crucial contrast in English, but not in Japanese, which makes no distinction between Bill Clinton's election and his erection. . . . Each baby must thus fix a set of parameters that collectively specify which categories of speech sounds are relevant for his or her native language.
A similar learning procedure is duplicated at each level, from sound patterns to vocabulary, grammar, and meaning. The brain is organized as a hierarchy of models of reality, each nested inside the next like Russian dolls-and learning means using the incoming data to set the parameters at every level of this hierarchy. Let's consider a high-level example: the acquisition of grammatical rules. Another key difference which the baby must learn, between Japanese and English, concerns the order of words. In a canonical sentence with a subject, a verb, and a direct object, the English language first states the subject, then the verb, and finally its object: "John + eats + an apple." In Japanese, on the other hand, the most common order is subject, then object, then verb: "John + an apple + eats." What is remarkable is that the order is also reversed for prepositions (which logically become post-positions), possessives, and many other parts of speech. The sentence "My uncle wants to work in Boston," thus becomes mumbo jumbo worthy of Yoda from Star Wars: "Uncle my, Boston in, work wants"-which makes perfect sense to a Japanese speaker.
Fascinatingly, these reversals are not independent of one another. Linguists think that they arise from the setting of a single parameter called the "head position": the defining word of a phrase, its head, is always placed first in English (in Paris, my uncle, wants to live), but last in Japanese (Paris in, uncle my, live wants). This binary parameter distinguishes many languages, even some that are not historically linked (the Navajo language, for example, follows the same rules as Japanese). In order to learn English or Japanese, one of the things that a child must figure out is how to set the head position parameter in his internal language model.
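A toy sketch of this idea (my illustration, not the linguists'): a single boolean flag flips every phrase between English-like head-first order and Japanese-like head-last order.

```python
# One binary "head position" parameter controls word order in every phrase.
def phrase(head, complement, head_first=True):
    return f"{head} {complement}" if head_first else f"{complement} {head}"

# English: head-first                        # Japanese: head-last
print(phrase("in", "Boston"))                # -> "in Boston"
print(phrase("in", "Boston", False))         # -> "Boston in" (postposition)
print(phrase("wants", "to work"))            # -> "wants to work"
print(phrase("wants", "to work", False))     # -> "to work wants"
```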
Learning Is Exploiting a Combinatorial Explosion
Can language learning really be reduced to the setting of some parameters? If this seems hard to believe, it is because we are unable to fathom the extraordinary number of possibilities that open up as soon as we increase the number of adjustable parameters. This is called the "combinatorial explosion"-the exponential increase that occurs when you combine even a small number of possibilities. Suppose that the grammar of the world's languages can be described by about fifty binary parameters, as some linguists postulate. This yields 2⁵⁰ combinations, which is over one million billion possible languages, or 1 followed by fifteen zeros! The syntactic rules of the world's three thousand languages easily fit into this gigantic space. However, in our brain, there aren't just fifty adjustable parameters, but an astoundingly larger number: eighty-six billion neurons, each with about ten thousand synaptic contacts whose strength can vary. The space of mental representations that opens up is practically infinite.
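The arithmetic is quick to verify in Python (using the fifty-binary-parameter figure the text assumes):

```python
# 2 raised to the 50 binary grammar parameters: about a million billion.
print(2 ** 50)           # 1125899906842624
print(f"{2 ** 50:.2e}")  # 1.13e+15, i.e. roughly 1 followed by fifteen zeros
```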
Human languages heavily exploit these combinations at all levels. Consider, for instance, the mental lexicon: the set of words that we know and whose model we carry around with us. Each of us has learned about fifty thousand words with the most diverse meanings. This seems like a huge lexicon, but we manage to acquire it in about a decade because we can decompose the learning problem. Indeed, considering that these fifty thousand words are on average two syllables, each consisting of about three phonemes, taken from the forty-four phonemes in English, the binary coding of all these words requires less than two million elementary binary choices ("bits," whose value is 0 or 1). In other words, all our knowledge of the dictionary would fit in a small 250-kilobyte computer file (each byte comprising eight bits).
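The dictionary estimate can be reproduced the same way; here is the paragraph's back-of-the-envelope calculation as a short sketch (the 50,000-word, two-syllable, three-phoneme, and 44-phoneme figures are the text's own):

```python
import math

# 50,000 words x 2 syllables x 3 phonemes, each phoneme drawn from 44.
words, phonemes_per_word = 50_000, 2 * 3
bits_per_phoneme = math.log2(44)  # about 5.46 bits to pick one of 44
total_bits = words * phonemes_per_word * bits_per_phoneme

print(f"{total_bits:,.0f} bits")           # ~1.6 million, under 2 million
print(f"{total_bits / 8 / 1000:,.0f} kB")  # ~205 kB, under the 250 kB quoted
```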
This mental lexicon could be compressed to an even smaller size if we took into account the many redundancies that govern words. Drawing six letters at random, like "xfdrga," does not generate an English word. Real words are composed of a pyramid of syllables that are assembled according to strict rules. And this is true at all levels: sentences are regular collections of words, which are regular collections of syllables, which are regular collections of phonemes. The combinations are both vast (because one chooses among several tens or hundreds of elements) and bounded (because only certain combinations are allowed). To learn a language is to discover the parameters that govern these combinations at all levels.
In summary, the human brain breaks down the problem of learning by creating a hierarchical, multilevel model. This is particularly obvious in the case of language, from elementary sounds to the whole sentence or even discourse-but the same principle of hierarchical decomposition is reproduced in all sensory systems. Some brain areas capture low-level patterns: they see the world through a very small temporal and spatial window, thus analyzing the smallest patterns. For example, in the primary visual area, the first region of the cortex to receive visual inputs, each neuron analyzes only a very small portion of the retina. It sees the world through a pinhole and, as a result, discovers very low-level regularities, such as the presence of a moving oblique line. Millions of neurons do the same work at different points in the retina, and their outputs become the inputs of the next level, which thus detects "regularities of regularities," and so on and so forth. At each level, the scale broadens: the brain seeks regularities on increasingly vast scales, in both time and space. From this hierarchy emerges the ability to detect increasingly complex objects or concepts: a line, a finger, a hand, an arm, a human body . . . no, wait, two, there are two people facing each other, a handshake. . . . It is the first Trump-Macron encounter!
Learning Is Minimizing Errors
The computer algorithms that we call "artificial neural networks" are directly inspired by the hierarchical organization of the cortex. Like the cortex, they contain a pyramid of successive layers, each of which attempts to discover deeper regularities than the previous one. Because these consecutive layers organize the incoming data in deeper and deeper ways, they are also called "deep networks." Each layer, by itself, is capable of discovering only an extremely simple part of the external reality (mathematicians speak of a linearly separable problem, i.e., each neuron can separate the data into only two categories, A and B, by drawing a straight line through them). Assemble many of these layers, however, and you get an extremely powerful learning device, capable of discovering complex structures and adjusting to very diverse problems. Today's artificial neural networks, which take advantage of the advances in computer chips, are also deep, in the sense that they contain dozens of successive layers. These layers become increasingly insightful and capable of identifying abstract properties the further away they are from the sensory input.
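The point about linear separability can be made concrete with the classic XOR example: no single threshold neuron can compute it, yet two stacked layers can. This sketch (my illustration, with hand-set rather than learned weights) builds XOR from an OR unit and a NAND unit:

```python
import numpy as np

def step(x):
    """Threshold unit: fires (1) when its weighted input exceeds zero."""
    return (x > 0).astype(int)

def two_layer_xor(a, b):
    x = np.array([a, b])
    # Hidden layer: hidden[0] computes a OR b, hidden[1] computes NOT (a AND b).
    hidden = step(np.array([[1, 1], [-1, -1]]) @ x + np.array([-0.5, 1.5]))
    # Output layer: AND of the two hidden units gives XOR.
    return step(np.array([1, 1]) @ hidden - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, two_layer_xor(a, b))  # 0, 1, 1, 0: the XOR truth table
```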
Let's take the example of the LeNet algorithm, created by the French pioneer of neural networks, Yann LeCun (see figure 2 in the color insert). As early as the 1990s, this neural network achieved remarkable performance in the recognition of handwritten characters. For years, Canada Post used it to automatically process handwritten postal codes. How does it work? The algorithm receives the image of a written character as an input, in the form of pixels, and it proposes, as an output, a tentative interpretation: one out of the ten possible digits or twenty-six letters. The artificial network contains a hierarchy of processing units that look a bit like neurons and form successive layers. The first layers are connected directly with the image: they apply simple filters that recognize lines and curve fragments. The layers higher up in the hierarchy, however, contain wider and more complex filters. Higher-level units can therefore learn to recognize larger and larger portions of the image: the curve of a 2, the loop of an O, or the parallel lines of a Z . . . until we reach, at the output level, artificial neurons that respond to a character regardless of its position, font, or case. All these properties are not imposed by a programmer: they result entirely from the millions of connections that link the units. These connections, once adjusted by an automated algorithm, define the filter that each neuron applies to its inputs: their settings explain why one neuron responds to the number 2 and another to the number 3.
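For readers who want to see the shape of such a network, here is a minimal LeNet-style model sketched in PyTorch. It is a reconstruction of the architecture the paragraph describes, not LeCun's original code; the layer sizes and the 28x28 input are my assumptions.

```python
import torch
import torch.nn as nn

class TinyLeNet(nn.Module):
    """Minimal LeNet-style sketch: stacked convolutional filters with growing
    receptive fields, ending in one output unit per character class."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # low-level filters: edges, curves
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5),  # wider filters: loops, strokes
            nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
            nn.Linear(120, n_classes),        # one unit per digit
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A batch of one fake 28x28 "handwritten digit" produces ten class scores.
logits = TinyLeNet()(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```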
Product details
- Publisher : Penguin Books (February 2, 2021)
- Language : English
- Paperback : 352 pages
- ISBN-10 : 0525559906
- ISBN-13 : 978-0525559900
- Item Weight : 12 ounces
- Dimensions : 5.5 x 0.8 x 8.42 inches
- Best Sellers Rank: #117,914 in Books
- #248 in Medical Cognitive Psychology
- #515 in Biology (Books)
- #532 in Cognitive Psychology (Books)
About the author

Professor Stanislas Dehaene holds the Chair of Experimental Cognitive Psychology at the Collège de France, Paris. He directs the INSERM-CEA Cognitive Neuroimaging Unit at NeuroSpin in Saclay, south of Paris, France's advanced brain-imaging research center. He is also the president of the Scientific Council for Education of the French Ministry of Education.
Stanislas Dehaene is recognized as one of Europe’s most prominent brain scientists. He is well known for his pioneering studies of “the number sense,” the innate brain circuits that we share with other primates and that allow us to understand numbers and mathematics. He is also a specialist in reading and uncovered the function of the “visual word form area,” a left-hemisphere region that specializes in letters when we learn to read. Those discoveries have fostered his strong interest in learning and education. With his wife, Ghislaine Dehaene-Lambertz, he has made fundamental discoveries about infants’ brain organization for language, and about how education in mathematics, reading, and bilingualism shapes the human brain. He has also observed some of the earliest “signatures of consciousness,” i.e., patterns of brain responses that are unique to conscious processing and can be used to diagnose coma and vegetative-state patients.
Prof. Dehaene has accumulated numerous awards and prizes. In 2014, he was awarded the Grete Lundbeck Brain Prize, a €1 million award considered the Nobel Prize of the field (shared with G. Rizzolatti and T. Robbins). He is also a member of eight academies: the US National Academy of Sciences, the American Philosophical Society, the Pontifical Academy of Sciences, the French Académie des Sciences, the British Academy, Academia Europaea, the Royal Academies for Science and the Arts of Belgium, and the European Molecular Biology Organization (EMBO).
With an h-index of 173, Prof. Dehaene is a Thomson Reuters highly cited researcher. His research has been featured in numerous publications, including a full-length portrait in the New Yorker (“The Numbers Guy,” by Jim Holt, 2008). He is the author of five books, three television documentaries, and over 400 scientific publications in journals such as Science, Nature, Nature Neuroscience, and PNAS. Seventy of his articles have been cited more than 500 times.
His books have been hugely successful, have been translated into fifteen languages, and several have received awards for best science writing:
• The Number Sense (1999): Jean Rostand Award
• Reading in the Brain (2009): a Washington Post science book of the year
• Consciousness and the Brain (2013): Grand Prix RTL-Lire for best science book of the year
• How We Learn: Why Brains Learn Better Than Any Machine . . . for Now (2020), Penguin Viking: Book of the Year, French Society for Neurology
• Seeing the Mind (2023), forthcoming from MIT Press
Customer reviews
Top reviews from the United States
Reviewed in the United States on January 29, 2020
How We Learn is Stanislas Dehaene’s fourth book that I have read, and it does not disappoint. Dehaene effortlessly and compassionately moves between the abstract and the useful, carefully and methodically guiding the reader through a veritable mountain range of information from fields as different as neuroscience and education. And The Wall Street Journal got it right for this book as well when it declared (of Reading In The Brain) that Dehaene “never oversimplifies; he takes the time to tell the whole story; and he tells it in a literate way.”
All in all this is an incredible book, whether you’re interested in neuroscience, education, how brain plasticity and literacy are related, AI or even the brains of babies. There’s really something in it for everyone, whether you’re looking to apply your knowledge to study (or help someone else study) more effectively, or improve your own understanding of how the brain works. Dehaene is on the cutting edge, and he’s incredibly compassionate without ever being tendentious or moralistic. Below is a more detailed breakdown.
How We Learn is divided into three parts. Part One answers the question “What is Learning?” In the first chapter he discusses seven definitions of learning. One of the most interesting definitions (which isn’t even included among the first seven) is “Learning is inferring the grammar of a domain” in which he submits: “Characteristic of the human species is a relentless search for abstract rules, high-level conclusions that are extracted from a specific situation and subsequently tested on new observations” (35).
In Chapter 2 Dehaene wrestles for 20 pages with “Why our brain learns better than current machines,” continuing the discussion of learning all the while. Dehaene emphatically disagrees with the belief that “machines are about to overtake us” (27). A handful of the things he argues humans still do much better include: learning abstract concepts, data-efficient learning, social learning, one-trial learning, and systematicity and the language of thought.
In Part 2 Dehaene delves into “How Our Brain Learns.” This is the most scientifically granular section and, for many more technical readers, may be the most interesting. The neuroscience underpinning the four chapters in Part 2 is where Dehaene really shows off how dynamic a mind he has. Essentially, human thought is itself a kind of symbolic language. Furthermore, the literacy of thought starts almost as soon as a baby starts to develop as a fetus. By the time a baby is born, it is an incredibly well-developed instrument ready for its second (rather than first) phase of life, for which it has been preparing for three trimesters. Dehaene’s thoughts and work on infants in this book alone are well worth ten times its price.
Part Three, more of the applied education section, starts with the “Four Pillars of Learning”: Attention (Ch 7, about 30 pages), Active Engagement (Ch 8, about 20 pages), Error Feedback (Ch 9, about 20 pages), and Consolidation (Ch 10, about 15 pages). Each of these chapters is a rich mine of research, experimental data, and studies, as well as practical advice for learners and teachers, reminiscent of Brown, Roediger, and McDaniel’s excellent book Make It Stick.
The following are some kernels of very useful information from Chapters 7-10 (a short code sketch of the spacing rule follows these excerpts):
“The intellectual quotient [IQ] is just a behavioral ability, and as such, it is far from being unchangeable by education. Like any of our abilities, IQ rests on specific brain circuits whose synaptic weights can be changed by training” (167).
“A passive organism does not learn” (178).
“To learn, our brain must first form a hypothetical mental model [algorithm] of the outside world, which it then projects onto its environment and puts to a test by comparing its predictions to what it receives from the senses. This algorithm implies an active, engaged, and attentive posture. Motivation is essential: we learn well only if we have a clear goal and we fully commit to reaching it” (178).
“While it is crucial for students to be motivated, active, and engaged, this does not mean they should be left to their own devices” (184).
“Pure discovery learning, the idea that children can teach themselves, is one of the many educational myths that have been debunked but still remain curiously popular. […] Two other major misconceptions are linked to it: the myth of the digital native [and] the myth of learning styles” (185).
“Zero error, zero learning,” but… “We do not need an actual error in order to learn—all we need is an internal sign that travels in the brain” (204).
“It would be wrong, therefore, to believe that what matters most for learning is to make a lot of mistakes […] What matters is receiving explicit feedback that reduces the learner’s uncertainty. […] The theory of error backpropagation predicts: every unexpected event leads to corresponding adjustment of the internal model of the world” (205).
“This is the golden rule: it is always better to spread out the training periods rather than cram them into a single run. […] Decades of psychological research show that if you have a fixed amount of time to learn something, spacing out the lessons is a much more effective strategy than grouping them” (218).
“Sleep and learning are strongly linked” (228).
“Computer scientists have already designed several learning algorithms that mimic the sleep/wake cycle” (231).
“From an educational perspective there is little doubt that improving the length and quality of sleep can be an effective intervention for all children, especially those with learning difficulties” (235).
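The spacing rule quoted from page 218 maps directly onto the classic Leitner-box scheduler. Below is a minimal sketch (my illustration, not the book's): each correct recall moves an item to a box reviewed half as often, and each error sends it back to daily review.

```python
from collections import defaultdict

# Minimal Leitner-box scheduler: box i is reviewed every 2**i days, so
# correct answers space reviews out and errors restore frequent practice.
boxes = defaultdict(int)  # item -> box index (0 = review daily)

def review(item, recalled_correctly):
    boxes[item] = boxes[item] + 1 if recalled_correctly else 0
    return 2 ** boxes[item]  # days until the next review

print(review("ecstasy spelling", True))   # 2: seen again in two days
print(review("ecstasy spelling", True))   # 4: the spacing grows
print(review("ecstasy spelling", False))  # 1: an error means daily review again
```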
Part Three ends with Dehaene’s “Conclusion: Reconciling Education with Neuroscience.” He conveniently provides a bullet-point summary as well as “Thirteen Take-Home Messages to Optimize Children’s Potential.” Here they are, without their supporting paragraphs.
Do not underestimate children.
Take advantage of the brain’s sensitive periods.
Enrich the environment.
Rescind the idea that all children are different.
Pay attention to attention.
Keep children active, curious, engaged, and autonomous.
Make every school day enjoyable.
Encourage efforts.
Help students deepen their thinking.
Set clear learning objectives.
Accept and correct mistakes.
Practice regularly.
Let students sleep.
Dehaene ends with his insistence that “schools should devote more time to parents training,” and that “scientists must engage with teachers and schools in order to consolidate the growing field of educational science” (244).
To escape to Yallingup to write a book just says a lot about it.
In terms of depth and writing style it's approachable for the average science reader, maybe a little dry. I would say it's somewhere between a pop-science level of discourse and a serious college text or book written for scientists and doctors. Books by Steven Pinker and others give more in-depth treatment to specific kinds of neural processes, like how the brain stores and makes use of its own symbology (for example), but there's a price to be paid in those kinds of books. Namely, you need to re-read stuff sometimes to really understand it, which generally is not required here.
Bottom line: I liked this book enough, and learned enough, that I will be buying more of the author's books on cognition and the brain as a problem-solving machine. I think any will be a safe bet if you're into reading about the human brain and how it works.
Top reviews from other countries
Here are just a couple of random examples:
"Training in executive control can even change one's IQ. This may come as a surprise, because IQ is often viewed as a given - a fundamental determinant of children's mental potential. However, the intellectual quotient is just a behavioural ability, and as such, it is far from being unchangeable by education. Like any of our abilities, IQ rests on specific brain circuits whose synaptic weights can be changed by training..."
"Brain imaging is beginning to clarify the origins of this processing depth effect. Deeper processing leaves a stronger mark in memory because it activates areas of the prefrontal cortex that tare associated with conscious word processing and because these areas form powerful loops with the hippocampus, which stores information in the form of explicit episodic memories"
Don't be put off by the fact that the four pillars of learning proposed in the book doesn't sound like revolutionary ideas. I had the same thought in mind but was amazed to find how much is there to learn. To know the apple will fall to the ground is not quite the same as knowing why it falls to the ground!
If I am to list all the things I love about this book I can go on for many pages. But to keep it short, if you prefer to read science, not anecdote, based discussion, then this is THE book on learning.
There appear to be groups of neurons that all brains have that respond to concepts such as relative size of numbers and the number line. This refutes the common perception that some people happen to be born with a good sense of mathematics.
There is a lot in this book to challenge and inspire anyone in education - indeed anyone who is interested in how children learn.
The idea of ‘surprise’ in learning is not a new one, but it's explained well, and Dehaene makes the case for how research studies support its importance.
There also appear to be limits to what the brain can reprogram its neurons to do. For example, it's not possible to get any group of neurons to assume the function of a damaged part of the brain.
But it appears possible for neurons that are close in original function to the damaged ones to become reprogrammed.
It’s fascinating and I thoroughly recommend it.
Dehaene defines learning as the formation of a predictive, self-adjusting cerebral model of the external world (this definition does not appear in these exact terms, but I believe it is faithful to the author's idea), a mechanism that is highly advantageous from an evolutionary standpoint and that presupposes that some parameters have always been genetically built into our brain, while others develop from the influx of the environment. Learning (in a far more elaborate way than any animal or machine can manage) is the trait that defines us as a species, to the point that we have turned it into a collective experience in classrooms, which makes metacognition (learning to learn) the most important capacity for our existence as human beings.
Dehaene dismisses the empiricist idea that the brain is a blank sheet of paper on which the data of reality are inscribed. In fact, our brain evolved to establish, from birth, certain patterns prior to any contact with the external world that allow us to organize and handle the data of experience (such as the concept of an object, the number sense, an intuition of probabilities, the perception of faces as a specific kind of object, the tendency to develop a language, and a kind of GPS that lets us locate ourselves in space).
To learn, the brain keeps creating randomized experiments that lead to a probabilistic generalization of the data of consciousness, more or less as Artificial Intelligence does. As it formulates models that better correspond to reality, the brain produces a reward process through the dopamine system, which makes discovering how reality works highly pleasurable. However, the brain manages to do this far more efficiently, with far less data, than Artificial Intelligence. It is not merely a matter of recognizing a pattern in the data (which is what Artificial Intelligence currently does), but of formulating an explanatory mental model of reality by means of the data. As this model evolves, new (convergent) data increase the model's range and applicability, while divergent data correct it (which makes error fundamental to the learning process).
During childhood (especially in the first two years of life, but up to the end of adolescence, which reminded me a lot of Frances E. Jensen's book The Teenage Brain), an incredible plasticity makes the synapses more efficient so that the brain can build such cognitive models, under the simultaneous influence of genetics and the environment (which is why a rich environment is fundamental for developing its potential). There are "sensitive periods" in which particular abilities develop, determined by the biological maturation of the brain, universally identical in human beings, but it is possible to "recycle" brain regions (during the sensitive periods) so that one part of the brain specializes in a function (or takes over the function of another part), increasing its efficiency. This happens not only in an injured brain but in all normal maturation of the human brain (for example, in learning to read or to calculate, as we automate the process, regions distinct from the initial ones take over the task). Dehaene dismisses the idea that each person has a different way of learning. Put better: magnetic resonance imaging shows that everyone learns in the same way, although the speed at which this happens is highly variable, above all because of the environment. The bad news is that the idea of a sensitive period implies there is an optimal window for this to happen (for example, people who learn to read as adults, or who must relearn to read after a stroke, will never be readers as competent as children who learned to read at the right age, because the transfer of the process to the brain regions where it runs automatically no longer occurs).
All of Dehaene's research shows that there are mistakes in the behaviorist theories of learning (such as Pavlov's and Skinner's, who erred in thinking that the stimulus was the causally determining element in learning), but also in the cognitivist ones (such as Piaget's, who erred in claiming that notions such as number and the permanence of physical objects are developed through experience).
Since we know that the environment affects the process, there are four points (called the "pillars of learning") where one can intervene to increase the efficiency of learning: (a) attention (the teacher, or whoever is teaching, needs to direct the student's attention so that the student knows when to pay attention and to what, and can select the relevant information from the multiplicity of data and avoid information overload, which is exactly what Artificial Intelligence still cannot do); (b) leading the student to engage actively (by performing the action, even if only mentally, the brain not only memorizes it but tests inefficient alternative models); (c) giving error feedback (the closer in time to the error and the more precise the feedback, the greater the child's ability to discard the cause of the error); (d) providing consolidation (the brain keeps testing hypotheses and models during sleep, unconsciously and now exclusively, without attending to other stimuli, which is why sleep helps fix knowledge and solve problems).
That is why teaching is paying attention to another person's knowledge. This explains why the lecture format is so successful when done well (if it manages to elicit attention and engagement and to produce error feedback, whose best form is non-punitive testing, that is, without assigning grades), more so than a radically constructivist approach. That is also why spacing out learning (as advocated by Ebbinghaus and Leitner, albeit with some criticism of them) proves the best strategy for prolonging the memory of what was learned (owing, among other things, to sleep's role in the learning process).
This is an important book for educators and parents, to which I would give 4 out of 5 stars.
The recommendations at the end are not new or surprising. It is the rigour of how the book takes you there that makes it so interesting.