On Intelligence, Hardcover – September 9, 2004
Jeff Hawkins, the high-tech success story behind PalmPilots and the Redwood Neuroscience Institute, does a lot of thinking about thinking. In On Intelligence Hawkins juxtaposes his two loves--computers and brains--to examine the real future of artificial intelligence. In doing so, he unites two fields of study that have been moving uneasily toward one another for at least two decades. Most people think that computers are getting smarter, and that maybe someday, they'll be as smart as we humans are. But Hawkins explains why the way we build computers today won't take us down that path. He shows, using nicely accessible examples, that our brains are memory-driven systems that use our five senses and our perception of time, space, and consciousness in a way that's totally unlike the relatively simple structures of even the most complex computer chip. Readers who gobbled up Ray Kurzweil's The Age of Spiritual Machines and Steven Johnson's Mind Wide Open will find more intriguing food for thought here. Hawkins does a good job of outlining current brain research for a general audience, and his enthusiasm for brains is surprisingly contagious. --Therese Littleton
From Publishers Weekly
Hawkins designed the technical innovations that make handheld computers like the Palm Pilot ubiquitous. But he also has a lifelong passion for the mysteries of the brain, and he's convinced that artificial intelligence theorists are misguided in focusing on the limits of computational power rather than on the nature of human thought. He "pops the hood" of the neocortex and carefully articulates a theory of consciousness and intelligence that offers radical options for future researchers. "[T]he ability to make predictions about the future... is the crux of intelligence," he argues. The predictions are based on accumulated memories, and Hawkins suggests that humanoid robotics, the attempt to build robots with humanlike bodies, will create machines that are more expensive and impractical than machines reproducing genuinely human-level processes such as complex-pattern analysis, which can be applied to speech recognition, weather analysis and smart cars. Hawkins presents his ideas, with help from New York Times science writer Blakeslee, in chatty, easy-to-grasp language that still respects the brain's technical complexity. He fully anticipates—even welcomes—the controversy he may provoke within the scientific community and admits that he might be wrong, even as he offers a checklist of potential discoveries that could prove him right. His engaging speculations are sure to win fans of authors like Steven Johnson and Daniel Dennett.
Copyright © Reed Business Information, a division of Reed Elsevier Inc. All rights reserved.
Top customer reviews
As a testament to its relevance today (I'm writing this in September 2012, seven years after the book was published), he predicts three technological applications that may become available in the short term (5-10 years) due to breakthroughs in the kind of trainable AI this book discusses:
Computer vision and teaching a computer to tell the difference between a cat and a dog (this was successfully demonstrated in a study published in June 2012 - the paper is called "Building High-level Features Using Large Scale Unsupervised Learning" and is available online, or just search for "computer learns to recognize cats" for articles)
PDAs (as they were called back then) will understand naturally spoken instructions like "Move my daughter's basketball game on Sunday to 10 in the morning" (this kind of sentence, copied from the book verbatim, is exactly where Apple's AI assistant Siri shines)
Smart/autonomous cars - in August 2012, Google announced that their self-driving cars had logged 300,000 accident-free miles in live traffic on public roads, exceeding the average distance a human drives without an accident.
The thing to note here is that when he wrote the book, these three things had hurdles that we did not know how to solve, and at the time there was no clear linear progression of existing solutions that would guarantee they would be solved. His prediction is that we'll be able to train computers to recognize patterns by themselves, which will allow us to eventually solve the problems (and this is exactly how the computer learned to recognize cat faces from YouTube videos).
Furthermore, he predicts that AI will become one of the hottest fields within the next 10 years - and with the current explosion of interest in Big Data, Machine Learning, and applications like Siri, it is hard to deny that it looks like we're right in the midst of seeing just this happen.
The grander implications of the model in this book won't be known for another 10-20 years or more, but seven years in, his general predictions about the field of AI have been very accurate.
Hawkins defines intelligence as the ability to make predictions. I think this is an excellent definition of intelligence.
He says the cortex makes predictions via memory. The rat in the maze has a memory which includes both the motor activity of turning right and the experience of food. This activates turning right again, which is equivalent to the prediction that if he turns right, food will occur.
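The rat-in-the-maze idea can be sketched in a few lines of code: prediction via memory is, at its simplest, a lookup table from experienced (situation, action) pairs to outcomes, where recalling a stored entry is the prediction. This is my own minimal illustration, not code from the book.

```python
# Minimal sketch of "prediction via memory": a lookup table of
# previously experienced (situation, action) -> outcome pairs.
# All names here are illustrative, not from the book.

memory = {}

def experience(situation, action, outcome):
    """Store what followed this situation/action pair."""
    memory[(situation, action)] = outcome

def predict(situation, action):
    """Reactivating the stored memory IS the prediction; None if novel."""
    return memory.get((situation, action))

# The rat's maze run: turning right at the junction was followed by food.
experience("junction", "turn right", "food")

# Recalling the memory later predicts food before it arrives.
print(predict("junction", "turn right"))  # food
print(predict("junction", "turn left"))   # None (no memory, no prediction)
```

Real cortical memory is of course associative and sequence-based rather than an exact-match table, but the core move is the same: recall of a past pairing serves as a forecast of the future.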
The primate visual system, which is the sense best understood, has four cortical areas that are in a hierarchy. In the lowest area, at the back of the head, cells respond to edges in particular locations, sometimes to edges moving in specific directions. In the highest area you can find cells that respond to faces, sometimes particular faces, such as the face of Bill Clinton.
But the microscopic appearance of the cortex is basically the same everywhere. There is not even much difference between motor cortex and sensory cortex. The book makes sense of the connections found in all areas of the cortex.
The cortex is a sheet covering the brain, composed of small adjacent columns of cells, each with six layers. Information from a lower cortical area excites layer 4 of a column. Layer 4 cells excite cells in layers 2 and 3 of the same column, which in turn excite cells in layers 5 and 6. Layers 2 and 3 connect to the higher cortical area. Layer 5 has motor connections (the visual area affects eye movements), and layer 6 connects to the lower cortical area. Layer 6 projects to the long fibers in layer 1 of the area below, which can excite layers 2 and/or 3 in many columns.
So there are two ways of exciting a column. Either by the area below stimulating layer 4, or by the area above stimulating layers 2 and 3. The synapses from the area above are far from the cell bodies of the neurons, but Hawkins suggests that synapses far from the cell body may fire a cell if several synapses are activated simultaneously.
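The wiring described in the two paragraphs above can be summarized as a plain adjacency map. This is just my own notation for the book's description of the connections, not a simulation of cortex.

```python
# The layer-to-layer connections described above, as an adjacency map.
# Keys and labels are my own shorthand for the book's description.
column_connections = {
    "area below":          ["layer 4"],                      # feedforward input
    "layer 4":             ["layers 2/3"],
    "layers 2/3":          ["layers 5/6", "area above"],     # feedforward output
    "layer 5":             ["motor targets", "thalamus"],
    "layer 6":             ["layer 1 of area below"],        # feedback output
    "area above (layer 1)": ["layers 2/3 (distal synapses)"], # feedback input
}

# The two routes into a column's layers 2/3:
print(column_connections["layer 4"])             # via the area below
print(column_connections["area above (layer 1)"])  # via the area above
```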
The lowest area, at the back of the head, is not actually the beginning of processing. It receives input from the thalamus, in the middle of the brain (which receives input from the eyes). Cells in the thalamus respond to small circles of light, and the first stage of processing is to convert this response to spots into a response to moving edges.
And the highest visual area is not the end of the story. It connects to multisensory areas of the cortex, where vision is combined with hearing and touch, etc.
The very highest area is not cortex at all, but the hippocampus.
Perception always involves prediction. When we look at a face, our fixation point is constantly shifting, and we predict what the result of the next fixation will be.
According to Hawkins, when an area of the cortex knows what it is perceiving, it sends to the area below information on the name of the sequence, and where we are in the sequence. If the next item in the sequence agrees with what the higher area thought it should be, the lower area sends no information back up. But if something unexpected occurs, it transmits information up. If the higher area can interpret the event, it revises its output to the lower area, and sends nothing to the area above it.
But truly unexpected events will percolate all the way up to the hippocampus. It is the hippocampus that processes the truly novel, eventually storing the once novel sequence in the cortex. If the hippocampus on both sides is destroyed, the person may still be intelligent, but can learn nothing new (at least, no new declarative memory).
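The scheme in the last two paragraphs — matching input is absorbed silently, surprises are passed up, and only the truly novel reaches the hippocampus — can be sketched as a chain of regions. The class and method names are my own illustration of the idea, not anything from the book.

```python
# Hedged sketch of hierarchical prediction: each region checks its input
# against its prediction; only unexplained input travels up the hierarchy.
# Names and structure are illustrative, not Hawkins's.

class Region:
    def __init__(self, name, known_patterns, above=None):
        self.name = name
        self.known = set(known_patterns)  # patterns this region can interpret
        self.above = above                # next region up, or None at the top

    def receive(self, item, predicted):
        if item == predicted:
            # Expected input: nothing is sent upward.
            return f"{self.name}: matched, nothing sent up"
        if item in self.known:
            # Unexpected but interpretable: revise the prediction sent down,
            # and send nothing further up.
            return f"{self.name}: reinterpreted, new prediction sent down"
        if self.above is not None:
            # Unexplained: percolate the surprise upward.
            return self.above.receive(item, predicted)
        # Top of the chain: the hippocampus handles the truly novel.
        return f"{self.name}: truly novel, stored as a new sequence"

hippocampus = Region("hippocampus", [])
high = Region("high area", ["face"], above=hippocampus)
low = Region("low area", ["edge"], above=high)

print(low.receive("edge", "edge"))  # handled silently at the lowest level
print(low.receive("face", "edge"))  # surprise interpreted one level up
print(low.receive("???", "edge"))   # percolates all the way to the hippocampus
```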
When building an artificial auto-associative memory that can learn sequences, it is necessary to build in a delay so that the next item is predicted at the time it will actually occur. Hawkins suggests that the necessary delay is embodied in the feedback loop between layer 5 and the nonspecific areas of the thalamus. A cell in a nonspecific thalamic area may stimulate many cortical cells.
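The idea of an auto-associative sequence memory can be sketched very simply: store each item-to-next-item transition, then replay a sequence by feeding each recalled item back in as the next cue. The one-step feedback loop here stands in for the layer-5/thalamus delay described above; the implementation is my own sketch, not the book's.

```python
# Sketch of an auto-associative sequence memory: the recalled item is
# fed back as the next cue, stepping the sequence forward.
# Names and structure are illustrative, not from the book.

transitions = {}

def learn(sequence):
    """Store each item -> next-item transition."""
    for cur, nxt in zip(sequence, sequence[1:]):
        transitions[cur] = nxt

def recall(start, steps):
    """Replay the sequence by feeding each prediction back in as a cue."""
    out = [start]
    cur = start
    for _ in range(steps):
        cur = transitions.get(cur)
        if cur is None:
            break  # end of the known sequence
        out.append(cur)
    return out

learn(["do", "re", "mi", "fa"])
print(recall("do", 3))  # ['do', 're', 'mi', 'fa']
print(recall("mi", 5))  # ['mi', 'fa'] -- recall stops at the sequence's end
```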
I think this theory of how the cortex works makes a lot of sense, and I am grateful to Hawkins and Blakeslee for writing it up in a book that is accessible to people with limited background in AI and neuroscience.
But I am not convinced that the mammalian cortex is the only way to achieve intelligence. Hawkins suggests that the rat walks and sniffs with its "reptilian brain", but needs the cortex to learn the correct turn in the maze. But alligators can learn mazes using only their reptilian brains. I would have been quite surprised if they could not.
Even bees can predict, using a brain of one cubic millimeter. Not only can they learn to locate a bowl of sugar water; if you move the bowl a little farther away each day, the bee will fly to the correct predicted location rather than to the last experienced location.
And large-brained birds achieve primate levels of intelligence without a cortex. The part of the forebrain that is enlarged in highly intelligent birds has a nuclear rather than a laminar (layered) structure. The parrot Alex had language and intelligence equivalent to those of a two-year-old human, and Aesop's fable of the crow that got what it wanted from the surface of the water by dropping in stones to raise the water level has been replicated in crows presented with the same problem.