On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines Hardcover – October 3, 2004
Jeff Hawkins (Author)
Sandra Blakeslee (Author)
From the inventor of the PalmPilot comes a new and compelling theory of intelligence, brain function, and the future of intelligent machines
Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one stroke, with a new understanding of intelligence itself.
Hawkins develops a powerful theory of how the human brain works, explaining why computers are not intelligent and how, based on this new theory, we can finally build intelligent machines.
The brain is not a computer, but a memory system that stores experiences in a way that reflects the true structure of the world, remembering sequences of events and their nested relationships and making predictions based on those memories. It is this memory-prediction system that forms the basis of intelligence, perception, creativity, and even consciousness.
In an engaging style that will captivate audiences from the merely curious to the professional scientist, Hawkins shows how a clear understanding of how the brain works will make it possible for us to build intelligent machines, in silicon, that will exceed our human ability in surprising ways.
Written with acclaimed science writer Sandra Blakeslee, On Intelligence promises to completely transfigure the possibilities of the technology age. It is a landmark book in its scope and clarity.
- Print length: 272 pages
- Language: English
- Publisher: Times Books
- Publication date: October 3, 2004
- Dimensions: 6.14 x 0.62 x 9.21 inches
- ISBN-10: 0805074562
- ISBN-13: 978-0805074567
Editorial Reviews
Review
“On Intelligence will have a big impact; everyone should read it. In the same way that Erwin Schrödinger's 1943 classic What is Life? made how molecules store genetic information then the big problem for biology, On Intelligence lays out the framework for understanding the brain.” ―James D. Watson, president, Cold Spring Harbor Laboratory, and Nobel laureate in Physiology or Medicine
“Brilliant and imbued with startling clarity. On Intelligence is the most important book in neuroscience, psychology, and artificial intelligence in a generation.” ―Malcolm Young, neurobiologist and provost, University of Newcastle
“Read this book. Burn all the others. It is original, inventive, and thoughtful, from one of the world's foremost thinkers. Jeff Hawkins will change the way the world thinks about intelligence and the prospect of intelligent machines.” ―John Doerr, partner, Kleiner Perkins Caufield & Byers
About the Author
Jeff Hawkins, co-author of On Intelligence, is one of the most successful and highly regarded computer architects and entrepreneurs in Silicon Valley. He founded Palm Computing and Handspring, and created the Redwood Neuroscience Institute to promote research on memory and cognition. Also a member of the scientific board of Cold Spring Harbor Laboratory, he lives in northern California.
Sandra Blakeslee has been writing about science and medicine for The New York Times for more than thirty years and is the co-author of Phantoms in the Brain by V. S. Ramachandran and of Judith Wallerstein's bestselling books on psychology and marriage. She lives in Santa Fe, New Mexico.
Excerpt. © Reprinted by permission. All rights reserved.
Let me show why computing is not intelligence. Consider the task of catching a ball. Someone throws a ball to you, you see it traveling towards you, and in less than a second you snatch it out of the air. This doesn't seem too difficult, until you try to program a robot arm to do the same. As many a graduate student has found out the hard way, it seems nearly impossible. When engineers or computer scientists try to solve this problem, they first try to calculate the flight of the ball to determine where it will be when it reaches the arm. This calculation requires solving a set of equations of the type you learn in high school physics. Next, all the joints of a robotic arm have to be adjusted in concert to move the hand into the proper position. This whole operation has to be repeated multiple times, for as the ball approaches, the robot gets better information about its location and trajectory. If the robot waits to start moving until it knows exactly where the ball will land, it will be too late to catch it. A computer requires millions of steps to solve the numerous mathematical equations to catch the ball. And although it's imaginable that a computer might be programmed to successfully solve this problem, the brain solves it in a different, faster, more intelligent way.
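As a rough illustration of the "engineer's approach" the excerpt describes, here is a sketch in Python (my own construction, with illustrative numbers) of solving the high-school projectile equations to predict where the ball will land, re-estimating the velocity from fresh position samples as the ball approaches:

```python
# Hypothetical sketch of the computational approach from the excerpt: solve
# the projectile equations for the landing point, and re-estimate velocity
# from pairs of position samples. All numbers are illustrative.

G = 9.81  # gravitational acceleration in m/s^2

def predict_landing(x0, y0, vx, vy):
    """Solve y0 + vy*t - 0.5*G*t^2 = 0 for the positive root t_land,
    then return the horizontal position x0 + vx*t_land."""
    disc = vy * vy + 2.0 * G * y0
    t_land = (vy + disc ** 0.5) / G
    return x0 + vx * t_land

def estimate_velocity(p1, p2, dt):
    """Recover the velocity at the first of two position samples taken
    dt seconds apart, correcting the vertical average for gravity."""
    (x1, y1), (x2, y2) = p1, p2
    vx = (x2 - x1) / dt
    vy = (y2 - y1) / dt + 0.5 * G * dt
    return vx, vy

# As the excerpt says, "this whole operation has to be repeated multiple
# times": each new pair of samples refines the estimate.
vx, vy = estimate_velocity((0.0, 2.0), (1.0, 1.95095), 0.1)
print(predict_landing(0.0, 2.0, vx, vy))  # predicted landing x, in metres
```

A real robot controller would repeat this loop with noisy samples and then solve the inverse kinematics for the arm, which is exactly the mounting cost the excerpt is pointing at.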
Product details
- Publisher : Times Books; Adapted edition (October 3, 2004)
- Language : English
- Hardcover : 272 pages
- ISBN-10 : 0805074562
- ISBN-13 : 978-0805074567
- Item Weight : 1.2 pounds
- Dimensions : 6.14 x 0.62 x 9.21 inches
- Best Sellers Rank: #839,312 in Books (See Top 100 in Books)
- #2,045 in Medical Cognitive Psychology
- #2,074 in Cognitive Psychology (Books)
- #3,286 in History & Philosophy of Science (Books)
About the authors

Sandra (aka Sandy) Blakeslee. I am a science writer with endless curiosity and interests but have spent the past 35 years or so writing about the brain, mostly for the New York Times where I started my career back in the dark ages (late '60s). I've been writing books for the past few years (The Body Has a Mind of Its Own, On Intelligence, Sleights of Mind, Dirt Is Good and more.) As for back story -- I graduated from Berkeley in 1965 (Free Speech Movement major), went to the Peace Corps in Borneo, joined the NYT in 1968 as a staff writer, then took off on my own, raised a family, lived in many parts of the world, now live in Santa Fe NM and even have grandchildren. To quote Churchill, so much to do....

Jeff Hawkins is a well-known scientist and entrepreneur, considered one of the most successful and highly regarded computer architects in Silicon Valley. He is widely known for founding Palm Computing and Handspring Inc. and for being the architect of many successful handheld computers. He is often credited with starting the entire handheld computing industry.
Despite his successes as a technology entrepreneur, Hawkins’ primary passion and occupation has been neuroscience. From 2002 to 2005, Hawkins directed the Redwood Neuroscience Institute, now located at U.C. Berkeley. He is currently co-founder and chief scientist at Numenta, a research company focused on neocortical theory.
Hawkins has written two books, "On Intelligence" (2004 with Sandra Blakeslee) and "A Thousand Brains: A new theory of intelligence" (2021). Many of his scientific papers have become some of the most downloaded and cited papers in their journals.
Hawkins has given over one hundred invited talks at research universities, scientific institutions, and corporate research laboratories. He has been recognized with numerous personal and industry awards. He is considered a true visionary by many and has a loyal following – spanning scientists, technologists, and business leaders. Jeff was elected to the National Academy of Engineering in 2003.
Customer reviews
Top reviews from the United States
Taking his lead from Johns Hopkins neuroscience researcher Vernon Mountcastle back in the seventies, Hawkins presumes that the remarkably uniform appearance of the cortex (it basically consists, he tells us, of six layers of neuronal cells throughout) suggests that the various areas of the cortex, demonstrated by researchers to be responsible for different functions (vision, touch, hearing, conceptualizing, etc.), really do everything they do by performing the same process. He is careful, of course, to emphasize that he is not talking about other things brains presumably do, including emotions, instinctual drives, and somatic sensations, which he assigns to the lizard brain. It's just the intelligence part that he is interested in, though he's certainly aware that for intelligence to work as it does in us it must be integrated with the broad range of other features found in consciousness, including those produced in the lizard brain. So his argument is not that the cortex, in its special capacity, is a stand-alone, but that it is a significant and inextricable add-on to the rest of our brain and works only with and in support of the other features.
For Hawkins, the key to understanding how the cortex does intelligence comes down to understanding the pertinent algorithm. He argues that neuronal groups work in two hierarchical ways. Vertically, they operate up and down linked columns spanning the six layers of neurons found more or less uniformly throughout the cortex. Horizontally, different cortical areas (responsible for different functions, e.g., shapes, colors, sound, touch, taste, smell, language, motor control) combine into other, non-physically-determined (because non-physically-contiguous) hierarchies, via links established between cortical layers through myriad cellular axons traveling transversely across the cortical areas and to other parts of the lizard brain. Each axon produces multiple connections through the tree-like dendrites at its endpoints, resulting in a number of connections that is difficult to estimate but likely in the hundreds of millions or more.
The basic cortical algorithm, performed by all these interconnecting neurons in the cortex, on Hawkins' view, is one of patterning and of the capture and retention of so-called "invariant representations". He argues that human memory is not precise, the way computational memory is (a case made, as well, by Gerald Edelman in his own work). But, where Edelman (Bright Air, Brilliant Fire: On the Matter of the Mind) emphasizes the dynamic and incomplete quality of human recollections, Hawkins emphasizes their general nature. We don't remember things precisely, in detail, he says, but, rather, only in general patterns (adumbrations rather than precise images).
This, he suggests, is because of the basic patterning algorithm of the neuronal group operations in the cortex.
When information flows in, he says, various neurons in the affected groups fire, in very fine detail, much as our taste buds operate in the tongue with different nerves for the different tastes which then pass the captured information up the line to combine further upstream via the brain's more comprehensive processes. In the vision parts of the cortex for instance, Hawkins notes that some cortical cells at the input end of the relevant cellular columns will fire in response to vertical lines, others to horizontals or diagonals, while others, nearby, presumably pick up color information, etc. The various firings pass up the line in increasingly broad (and more generalized) combinations, eventually losing much of the detail but generating patterns driven by the lower level details received.
At the highest level of the cortex, Hawkins reasons we have only the broadest, most general pictures, combining the increasingly broad and more general patterns passed up from below with related general patterns from other areas (say visual patterns with touch patterns and sound patterns, etc.) to give us still larger patterns via associative linkage. When new inputs come in (as they are constantly doing) the passage of the information up the line encounters the stored general patterns higher up which respond by sending signals down the same routes (and also down our motor routes if and when actions are called for).
The ability of the incoming inputs to match stored generic patterns higher up (when the information coming down the line matches the information heading up) is successful prediction. When there is no match, prediction fails and new general patterns form at the higher end of the cortical columns to replace the previous patterns. Thus memory in us is seen as an ongoing adjusting process with repetitive matches producing stronger and stronger traces of previously stored patterns.
Because patterning happens at every level, a kind of pyramid of patterns from the lowest level in the cortex to the highest is seen. At all levels, associative mechanisms are utilized and, at the highest levels, these connect and combine multiple specialized patterns into still larger overarching representational patterns. The capacity to retain invariant representations at all levels, until adjustments are made, gives us the invariant representational capability that forms the basis of human memory and underlies prediction which, he thinks, is what we mean by "intelligence" (i.e., the dynamic process of matching old patterns to new inputs where the more successful the matching, the more "intelligent" we deem the operations performed).
So the cortex, on this view, is a "memory machine" (as Hawkins puts it), using a patterning and matching mechanism to constantly fit the stored representations held in the cortex to the world. And intelligence is seen as the outcome of this massive process that is constantly going on in our brains, i.e., the ability to quickly adjust to incoming information and make successful predictions about it. It's this increasingly complex and generalizing capacity of cortexes, he argues, that gives us the ability to construct and use massively complex pictures of the world around us (the source of our sensory inputs)*.
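The patterning-and-matching loop described above can be caricatured in a few lines. The sketch below is my own toy construction, not Hawkins' actual algorithm: a "higher level" stores position-free invariant forms of detailed inputs, and a new input counts as a successful prediction when it collapses to an already-stored form.

```python
# Toy caricature (not Hawkins' algorithm) of invariant representations:
# detailed inputs are (feature, position) pairs; the stored "invariant"
# form discards position, so the same face is recognised wherever it falls.

def invariant(detailed):
    """Collapse a detailed input into a position-free general pattern."""
    return frozenset(feature for feature, _position in detailed)

class Level:
    def __init__(self):
        self.memory = set()  # stored invariant representations

    def observe(self, detailed):
        form = invariant(detailed)
        if form in self.memory:
            return "predicted"   # match: nothing need pass further up
        self.memory.add(form)    # mismatch: form a new general pattern
        return "novel"

top = Level()
face_left  = [("eye", 1), ("eye", 2), ("mouth", 3)]
face_right = [("eye", 7), ("eye", 8), ("mouth", 9)]
print(top.observe(face_left))   # "novel" on first exposure
print(top.observe(face_right))  # "predicted": same invariant form, new place
```

The real claim is of course far richer (sequences, hierarchy, feedback), but the match-or-store step is the kernel the reviewer is summarizing.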
Hawkins thinks that this is a whole different way of conceiving of intelligent machines, replacing the notion prevalent in mainstream AI that the way to build machine intelligence is to construct massive systems of complex algorithms to perform intelligent functions typical of human capability. Instead of that, he proposes, we need to concentrate on building chips that will be hardwired to work like cortical neurons in picking up, storing, and matching/adjusting a constant inflow of sensory information, and which can then be linked in a cortex-like architecture matching the cortical arrangements found in human brains.
Such machines, he proposes, will learn about their world in a way that is analogous to how we do it, build pictures based on sensory information received, recognize patterns and connections and think out of the more confining algorithm-intensive computational box.
Hawkins notes that we don't have to give such machines the kinds of sensory information available to humans and suggests that there is a whole range of different kinds of sensory inputs that might make more sense for such machines, depending on what complex operations they are built to perform (which may include security monitoring, weather prediction, automobile control or work in areas outside ordinary human safety zones, say in outer space, in high radiation areas or at great depths on the ocean floor). Nor does he think we have to worry about such machine intelligences supplanting us (a la The Matrix) since there is no reason, he argues, that we would have to give such machines drives or feelings, or even a sense of self such as we have, any of which might make them competitors to humans in our own environment. (Of course, it bears noting that we don't really have any idea of how brains produce drives and selves, per se, so it's at least a moot question whether we can simply, as Hawkins suggests, resolve not to provide these to such machines. After all, what if the synthetic cortical array he envisions turns out to have some or all of the capabilities Hawkins now thinks are seated beyond the cortex in human brains? In such a case, mere resolve not to give such capabilities to the proposed cortical array machines might not be enough!)
One of the main reasons Hawkins argues for a simple hardwired algorithm configured in a cortex-like architecture, versus a massively computational AI application (as envisioned in many AI circles), is that he believes even the most powerful computers today, with far faster processing capacities than any human brain, cannot hope to keep up with this kind of cortical architecture. He comes to this conclusion because he believes too many steps are involved in order to program intelligence comparable to what humans have, thus requiring a computational platform of vast, likely unwieldy, size, and detailed programming that must prove too monumental to undertake and maintain error-free. Nature, he argues, chose a simpler, more elegant and, in the end, superior way: a simple patterning/predicting algorithm.
In many ways Hawkins is much better than Gerald Edelman in dealing with the brain since Edelman gets lost in complexities, vagueness and what look like linguistic confusions in trying to describe brain process or argue against the AI thesis. Hawkins, though he limits his scope to intelligence rather than the full range of consciousness features, gives us a much more detailed and structured picture of how the mechanism under consideration might actually work.
In the end he gives us a picture best understood as arrays of firing cells (think flashing lights) that constantly do what they do in response to incoming and outgoing signal flows, with the incoming reflecting the array of sensory inputs we get from the world outside and the outgoing the stored general patterns that serve as our world "pictures" (not unlike Plato's forms, as he suggests, albeit without the platonistic mysticism) which are built up by the constant inflow.
Thus, he envisions a constant upward and downward flow of signals in the cortical system which is not only dynamic based on the interplay of the dual directional flow of the signals but is reflective of the facts beyond the brain in the world through the compound construction of invariant representations (occurring at every level of cortical activity). To the extent the invariant representations he describes successfully match incoming signals, they are predicting effectively and the organism depending on them is more likely to succeed in its environment. To the extent they are unable to generate effective prediction, the organism depending on them suffers.
A key weakness of Hawkins' explanation lies in his failure to show exactly how the pattern matching and adjusting of the neuronal group hierarchies become the world of which we are consciously aware, in all its rich detail (how mere physical inputs become mind -- the components of our mental lives), or how the cortex integrates the many inputs of the rest of the brain. As John Searle (Minds, Brains and Science (1984 Reith Lectures) and Mind, Language, and Society: Philosophy in the Real World) has noted, our idea of intelligence is very much intertwined with our idea of being aware, being a subject, having experience of the inputs we receive, etc. If we understand something, it's not just that we can produce effective responses to the stimuli received but that we are aware of the meanings of what we're doing, what is going on, etc.
Hawkins' "intelligence" looks to be a very much truncated form of this, albeit deliberately so, because he wants to argue for intelligent machines that will be "smarter" than computers but not quite smart enough to be a threat to us. Still, despite the fact that he has offered an intriguing possibility, which may well be an important step forward in the process of understanding minds and brains and of building real artificial intelligence, one can't escape the feeling he has still missed something along the way by distancing himself from the question of what it is to be aware -- to understand what one is doing when one is doing it.
SWM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* One of the critical differences between us and mammals lower down the development scale, he suggests, is the relative size of our cortexes. Many mammals with smaller brains just have smaller cortexes and, thus, fewer cells there, while some mammals, e.g., dolphins, actually have larger brains but less dense cortexes -- three layers vs. our six. Thus, says Hawkins, the intelligence we have reflects a greater capacity to form representations (covering more inputs, including past and present and a greater capacity for abstraction).
Hawkins defines intelligence as the ability to make predictions. I think this is an excellent definition of intelligence.
He says the cortex makes predictions via memory. The rat in the maze has a memory which includes both the motor activity of turning right and the experience of food. This activates turning right again, which is equivalent to the prediction that if he turns right, food will occur.
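The rat example can be reduced to a toy sketch (hypothetical, my own construction): a stored (situation, action) -> outcome memory doubles as a prediction, and the prediction of food is what selects the action.

```python
# Minimal illustration of "prediction via memory" from the rat-in-the-maze
# example. The situation/action names are made up for the sketch.

memory = {}  # (situation, action) -> remembered outcome

def experience(situation, action, outcome):
    memory[(situation, action)] = outcome

def predict(situation, action):
    return memory.get((situation, action))  # recall IS the prediction

def choose(situation, actions, goal):
    """Pick the action whose remembered outcome matches the goal."""
    for action in actions:
        if predict(situation, action) == goal:
            return action
    return actions[0]  # nothing remembered yet: arbitrary default

experience("junction", "turn right", "food")
print(choose("junction", ["turn left", "turn right"], "food"))  # turn right
```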
The primate visual system, which is the sense best understood, has four cortical areas that are in a hierarchy. In the lowest area, at the back of the head, cells respond to edges in particular locations, sometimes to edges moving in specific directions. In the highest area you can find cells that respond to faces, sometimes particular faces, such as the face of Bill Clinton.
But the microscopic appearance of the cortex is basically the same everywhere. There is not even much difference between motor cortex and sensory cortex. The book makes sense of the connections found in all areas of the cortex.
The cortex is a sheet covering the brain composed of small adjacent columns of cells, each with six layers. Information from a lower cortical area excites layer 4 of a column. Layer 4 cells excite cells in layers 2 and 3 of the same column, which in turn excite cells in layers 5 and 6. Layers 2 and 3 have connections to the higher cortical area. Layer 5 has motor connections (the visual area affects eye movements) and layer 6 connects to the lower cortical area. Layer 6 goes to the long fibers in layer 1 of the area below, which can excite layers 2 and/or 3 in many columns.
So there are two ways of exciting a column. Either by the area below stimulating layer 4, or by the area above stimulating layers 2 and 3. The synapses from the area above are far from the cell bodies of the neurons, but Hawkins suggests that synapses far from the cell body may fire a cell if several synapses are activated simultaneously.
The lowest area, at the back of the head, is not actually the beginning of processing. It receives input from the thalamus, in the middle of the brain (which receives input from the eyes). Cells in the thalamus respond to small circles of light, and the first stage of processing is to convert this response to spots into a response to moving edges.
And the highest visual area is not the end of the story. It connects to multisensory areas of the cortex, where vision is combined with hearing and touch, etc.
The very highest area is not cortex at all, but the hippocampus.
Perception always involves prediction. When we look at a face, our fixation point is constantly shifting, and we predict what the result of the next fixation will be.
According to Hawkins, when an area of the cortex knows what it is perceiving, it sends to the area below information on the name of the sequence, and where we are in the sequence. If the next item in the sequence agrees with what the higher area thought it should be, the lower area sends no information back up. But if something unexpected occurs, it transmits information up. If the higher area can interpret the event, it revises its output to the lower area, and sends nothing to the area above it.
But truly unexpected events will percolate all the way up to the hippocampus. It is the hippocampus that processes the truly novel, eventually storing the once novel sequence in the cortex. If the hippocampus on both sides is destroyed, the person may still be intelligent, but can learn nothing new (at least, no new declarative memory).
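The flow described in the last two paragraphs (a match stays local, a surprise travels upward, and the hippocampus catches what nothing below it can interpret) can be sketched as a chain of toy "areas"; the names and structure here are my own simplification, not the book's circuit diagram.

```python
# Toy sketch of novelty percolating up the hierarchy: each area answers
# for inputs it recognises and forwards the rest; the topmost
# "hippocampus" stores whatever is truly novel.

class Area:
    def __init__(self, name, known, above=None):
        self.name = name
        self.known = set(known)
        self.above = above  # next area up the hierarchy, if any

    def process(self, item):
        if item in self.known:
            return f"{self.name} predicted {item!r}"  # nothing sent upward
        if self.above is not None:
            return self.above.process(item)           # surprise: send it up
        self.known.add(item)                          # top of chain learns it
        return f"{self.name} stored novel {item!r}"

hippocampus = Area("hippocampus", [])
v2 = Area("V2", ["face"], above=hippocampus)
v1 = Area("V1", ["edge"], above=v2)

print(v1.process("edge"))   # handled at the bottom of the hierarchy
print(v1.process("face"))   # percolates one level before being predicted
print(v1.process("zebra"))  # truly novel: reaches the hippocampus
print(v1.process("zebra"))  # second time, the hippocampus predicts it
```

The review's point about hippocampal damage falls out of the sketch: remove the top of the chain and recognised items are still handled, but nothing new is ever stored.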
When building an artificial auto-associative memory, which can learn sequences, it is necessary to build in a delay so that the next item will be predicted when it will occur. Hawkins suggests that the necessary delay is embodied in the feedback loop between layer 5 and the nonspecific areas of the thalamus. A cell in a nonspecific thalamic area may stimulate many cortical cells.
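A toy version of such a sequence memory shows why the delay matters: the current item must be held for one time step so that the transition it predicts is checked against the item that actually arrives next. This is my own illustration, standing in for the layer 5/thalamus feedback loop, not a model of it.

```python
# Illustrative auto-associative sequence memory with a one-step delay line.

class SequenceMemory:
    def __init__(self):
        self.transitions = {}  # item -> the item that followed it
        self.delayed = None    # the previous item, held for one time step

    def step(self, item):
        """Feed the current item; return what was predicted for this step."""
        predicted = self.transitions.get(self.delayed)
        if self.delayed is not None:
            self.transitions[self.delayed] = item  # learn the transition
        self.delayed = item                        # hold for the next step
        return predicted

m = SequenceMemory()
for item in "ABCABC":      # two passes over the sequence A, B, C
    m.step(item)
assert m.step("A") == "A"  # the delayed "C" predicted "A" at the right time
assert m.step("B") == "B"
```

Without the `delayed` slot, the memory would be comparing each item against a prediction made from that same item, i.e., predicting the present instead of the future.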
I think this theory of how the cortex works makes a lot of sense, and I am grateful to Hawkins and Blakeslee for writing it up in a book that is accessible to people with limited background in AI and neuroscience.
But I am not convinced that the mammalian cortex is the only way to achieve intelligence. Hawkins suggests that the rat walks and sniffs with its "reptilian brain", but needs the cortex to learn the correct turn in the maze. But alligators can learn mazes using only their reptilian brains. I would have been quite surprised if they could not.
Even bees can predict, using a brain of one cubic millimeter. Not only can they learn to locate a bowl of sugar water, if you move the bowl a little further away each day, the bee will go to the correct predicted location rather than to the last experienced location.
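The moving-feeder result amounts to extrapolating a trend rather than returning to the last observed location; as a one-function sketch with made-up distances:

```python
# Tiny sketch of the bee's feat: predict the next feeder position by
# extrapolating the last observed step, rather than revisiting the last
# observed position. Distances are invented for illustration.

def predict_next(positions):
    return positions[-1] + (positions[-1] - positions[-2])

feeder_by_day = [10.0, 15.0, 20.0, 25.0]  # metres from the hive, moved daily
print(predict_next(feeder_by_day))  # the bee flies past the old spot
```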
And large-brained birds achieve primate levels of intelligence without a cortex. The part of the forebrain that is enlarged in highly intelligent birds has a nuclear rather than a laminar (layered) structure. The parrot Alex had language and intelligence equivalent to a two-year-old human, and Aesop's fable of the crow that figured out how to get what it wanted from the surface of the water by dropping in stones and raising the water level has been replicated in crows presented with the problem.
Top reviews from other countries
The broad essence of his argument is based on the observation of Vernon Mountcastle that the mammalian cortex has a uniform global and microscopic structure. The cortex is the crinkly sheet that we see when looking at the brain from above and the sides, and that is wrapped around the more evolutionarily primitive inner components. A possible corollary of this observation is that 'cortex is cortex', and that it is all implementing the same highly generalised processing algorithm. This is a rather counter-intuitive proposition as it would seem reasonable that the brain is doing a diversity of things and is therefore using a diversity of mechanisms to accomplish them. A vertical section through any part of the cortex reveals it to be comprised of six layers, each with a distinct composition of types and densities of neurones, and synaptic interconnections. Closer examination shows that these neurones are organised into a semi-astronomical number of transversely arranged microcolumns, with many interconnecting vertical synapses between the constituent neurones working to make each microcolumn into a tiny processing unit. Microcolumns operate together to make the functional areas that neuroscientists have been mapping in ever greater detail over the last century or so. These areas or regions are interconnected in a complex, but highly organised way, to establish a hierarchy in which areas connected to sensory inputs are at the bottom, and areas of increasingly abstract association are towards the top. The puzzling fact that there are more backward connections flowing down this hierarchy of areas than there are forward/upward connections has been known for a long while, but has arguably been largely ignored. This connectivity can be understood, however, in the light of Hawkins' proposed 'memory-prediction framework'.
According to this model the brain's operation, and the essence of intelligence, consists of higher cortical areas constantly seeking to predict what patterns will be encountered next in the lower areas to which they are connected. These predictions are based on comparisons between memory, that is the cumulative analysis of previous patterns, as extracted by blind and simple algorithms, and the patterns of current input. Hawkins thus argues that each area of the brain is constantly trying to anticipate its future inputs from its lower areas. Where such prediction fails, we have the experience of surprise or novelty, and attention on behalf of areas further up the hierarchy is required in order to subsume that input under existing patterns, or to derive new patterns. Such new patterns will cause changes to flow up and down the hierarchy, this process being learning. He even argues that movement, as a result of activations in the motor cortex, is implemented in the same terms. Thus we actually move by anticipating the sensory inputs from our bodies, including the vestibular (balance), proprioceptive (disposition of the body in space), etc. that will arise as a result of issuing motor signals, and it is these predictions themselves that drive the motor areas. He goes on to propose a reasonably detailed description of how this pattern-predicting model might be implemented down at the level of microcolumns and the synaptic connections between the neurones in the six layers.
For such an easy to read little book this is quite an extraordinary hypothesis that, at a stroke, makes a great deal of sense out of a mountain of baffling detail. If Hawkins has achieved nothing else, he has demonstrated ways of thinking and writing about neural architecture that are more transparent and intuitive than has arguably been accomplished thus far. I am going to have to spend a while thinking about his theory, and considering whether his model really does capture everything that the cortex, and the generalised intelligence that gives us knowledge, skills, reasoning, language and so on, does for us. I have returned now to the Cotterill book and already I am finding myself thinking about what I am reading in a rather new and different way. Time will tell whether Hawkins' theory will turn out to be a master key that will bring some overarching sense to the mass of messy detail that my current knowledge of the brain presents me with. Time will also tell how his predictions about intelligent machines and the social revolution they could engender will transpire. That such machines are possible, and will be built, I have no doubt. How long it will take is rather trickier. However, when they finally arrive it may be that we come to look back on this little book, as much a pamphlet or manifesto as a book, as a milestone in intellectual history.
It is, without a doubt, suitable for anyone who has an interest in artificial intelligence, from complete newcomers with no science background and no interest in maths or algorithms right up to established professors who feel stuck in a rut!
I have an MSc in Robotics and am undertaking my PhD in an AI-related field. I have been very disillusioned with studies into AI that revolve around optimising algorithms for some specific task, and have entered into many arguments with academics who assert with absolute certainty that intelligence is, ultimately, just a very complex algorithm.
This book argues about what intelligence is in a way that leaves the open-minded reader staggered and excited about the possibilities. Ultimately it is all just guesswork and hypothesis, but I for one shall be very disappointed if the author isn't uncomfortably close to the mark!
Jeff Hawkins and Sandra Blakeslee appear to be doing for computer science and intelligent machines what Edward Witten did for string theory. Remember the madness that string physicists went through until M-theory was pronounced at the University of Southern California sometime in 1995!
If we allow JS to stand for the initials of the two authors, one may conclude that intelligent machines can be defined as follows:
JS = IM(neocortex). In other words, intelligent machines are a function of our ability to understand and then imitate how the neocortex works.
The two authors succeeded in simplifying a complex subject: the organ that made us the dominant animals on planet Earth, though we have yet to prove our mastery of the space beyond our atmosphere. They truly shed light on why the AI world and neural network proponents are still struggling to deliver what many of us thought was achievable by the end of the 20th century.
Even if you do not understand differential equations or even basic algebra, this book will give you insight into how your brain works, in a language that is simple and absorbing. Even if you do not code and have no nerdy or otherwise obsessive tendencies, you will still appreciate understanding how the grey stuff between your ears makes you who you are. You may even start training your brain to master fields you have not thought about before. The authors' intention may not have been to help you retrain your brain, but that would be a by-product of reading this.
For those of us who are striving to understand, decode and then emulate how our brains are so good at doing certain things, I think this book will help us sit back and rethink how we architect the software we develop - even if that software operates within the murky dark pools that prompted Michael Lewis to write a book depicting a programmer who, in a single night and with no unit, integration or acceptance testing, writes software that works in the morning and beats the rest!
The strange thing about this book is that as you keep reading it, you will simply and subtly learn that how you behave, see this world, value your relationships and respect others will always depend on the quality of the information fed into your cortex from the day you were born until today. Hence, if we had one liberal school that every child in this world attended, perhaps we would live in a fairer world, without abuse, unfairness, killings and so forth! While the authors do not mention it, you will come to understand why, at the end of the Second World War, PM Winston Churchill and his European counterparts believed in the art of spheres of influence, while their North American counterparts abhorred this strange foreign policy.
If you ever happened to watch "Gifted Hands", then after you read this book you will appreciate how an illiterate mother succeeded in getting her son, Ben Carson, to become a renowned neurosurgeon. Remember when she asked her sons to go to the library and read and read. They did, and young Ben became the best in his class. It was all about feeding his brain with information that made him better informed than his classmates. His neocortex got the memory it needed to predict what his teachers expected from him. Everything you look at will make sense to you once you have gone through this book. You may even predict what would have happened to young Ben if his mother had not gone to work for the professor whose house was full of books!
The authors also appear to have an unchallengeable knowledge of how computers and programming languages work. They understand how SSDs have transformed the way we use data - though they never mention the letters SSD in their book - and explain how we could build memories that the applications we design can tap into on demand, without latency. They talk about the beauty of allowing machines to learn and then passing that knowledge from one machine to another, just the way we use fast USB drives to copy data from one place to another.
They even go deep in explaining why it is plausible that we should not build one humongous piece of software that mimics the entire cortex, but rather modules that specialise in different functions and that, if the need arises, can all be brought together one day. Here it looks as though they not only tell you how the magic stuff works, but also how we can use the art of SOA to bring together different sensors, brain-like software, and even machines that can react to, or be commanded by, this software.
The authors' view on the separation between the software and the mechanical parts is another architectural choice that could allow, for example, our intelligent devices to share the same intelligent software hosted somewhere, which is again where the art of SOA comes into play.
Although the authors were hesitant to predict precisely when this intelligent thing will happen - they mentioned that within ten years it may start - I think we are unknowingly already in the era of intelligent software (here I am avoiding the word machines, as I do not want the fainthearted among us to think we are sleepwalking into a SKYNET situation). Just think about the software that gives you a quick and accurate answer about historical exchange rates through a simple RESTful web API hosted somewhere in the world. The application does not retrieve any data from a hard disk; it uses a collection of objects that live in memory. Although this is a tiny example, it is a microcosm of what is to come. Think about the current claims around Big Data and how they would aid the creation of a cortex-like memory that would one day do more than crunch numbers. Think about the art of correlation instead of causation - the era of big data.
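That in-memory idea is easy to sketch. This is my own toy illustration, not anything from the book, and the rate values below are made-up placeholders rather than real market data:

```python
# Minimal sketch of an exchange-rate lookup served entirely from objects
# held in memory - no disk read per request. Rates are fictional.

RATES = {                      # loaded once at startup, then lives in memory
    ("USD", "EUR"): 0.9,
    ("USD", "GBP"): 0.8,
}

def convert(amount, src, dst):
    """Answer a conversion request straight from the in-memory table."""
    if src == dst:
        return amount
    return amount * RATES[(src, dst)]

print(convert(100, "USD", "EUR"))  # prints 90.0, with no storage access
```

In a real service the dictionary would be populated from a feed and exposed behind a RESTful endpoint, but the point stands: every request is answered from memory.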
I would urge every software architect who has an interest in designing better applications to read this book. It will help you think about the behaviour of your software from the moment the machine is turned on until it is switched off. It may also lead you to think about how much more you could achieve using servers that never get switched off, augmented with RESTful web APIs as a conduit for receiving requests and returning what the client software wants, where that client software could be hosted on any lightweight device.
I would also recommend that every ordinary (non-nerdy, non-crazy) one of us read this book, as it will help you understand how the art of prediction works.
However, I hope this book will not provide an excuse for those who murder and abuse - from statesmen and stateswomen to ordinary individuals - to claim that the horrendous acts they committed were due to the corrupt memory held in their cortex!
On Intelligence is such a good read for anyone interested in computer intelligence. Hawkins is a computer man talking about biology, analysing how the brain makes sense of the world. To what extent his theory about the neocortex's role in intelligence is accurate I cannot judge, and only time will tell. But for this part-time programmer the operation of the brain now seems considerably less mysterious, and the path to artificial intelligence seems a lot clearer.
One of the best books I have read for a long time.