Beyond AI: Creating the Conscience of the Machine Hardcover – May 1, 2007
"Taking us on an eloquent journey through an astonishingly diverse intellectual terrain, J. Storrs Hall’s Beyond AI articulates an optimistic view – in both capability and impact – of the future of AI. This is a must read for anyone interested in the future of the human-machine civilization."
RAY KURZWEIL, AI scientist, inventor
Author of The Singularity Is Near
"An entertaining and very thought-provoking ramble through the wilds of AI."
ERIC S. RAYMOND
"Hall argues that our future superintelligent friends in the mechanical kingdom may develop superior moral instincts. I'm almost convinced. I learned a lot from reading this book. You will too."
ROBERT A. FREITAS JR.
Author of "The Legal Rights of Robots"
and Kinematic Self-Replicating Machines
About the Author
J. Storrs Hall, PhD (Laporte, PA), the founding chief scientist of Nanorex, Inc., is a research fellow for the Institute for Molecular Manufacturing and the author of Nanofuture, the "Nanotechnologies" section for The Macmillan Encyclopedia of Energy, and numerous scientific articles. He has designed technology for NASA and was a computer systems architect at the Laboratory for Computer Science Research at Rutgers University from 1985 to 1997.
Top customer reviews
This book gives a realistic appraisal of progress in artificial intelligence and sheds considerable light on what machines have and have not achieved. It is careful to distinguish fact from fiction, what has been accomplished from what has not, and it does so without falling into the trap of extreme skepticism that seems to ensnare so many who are deeply involved in AI research. Indeed, the historical pattern is familiar: a research result is first greeted with extreme confidence and labeled "intelligent"; the confidence then wanes until the result is dismissed as a "trivial" discovery or merely a "program." The historical accounts of AI research suggest this pattern has repeated often.
The author takes the general reader through this history and also looks ahead to future developments in artificial intelligence, discussing at various points the possibility of a technological "singularity" sometime in the next fifty years. Readers curious about the status of machine intelligence will find an understandable overview here, but the book can still interest those, such as this reviewer, who work "in the trenches" of applied artificial intelligence and want the opinions of researchers affiliated with the academy. The author does not delve deeply into the technologies, algorithms, and mathematics such a reader might want, but there are new ideas within the covers that definitely make the book worth reading.
Machine intelligence has advanced, the author argues, and he gives many examples. Robots, for example, can currently navigate with the same adeptness as a three-year-old child, which is astounding considering what was possible just ten years ago. Readers who own and develop AIBO robot dogs will understand this claim, as their navigation abilities are impressive. Thinking qualitatively, one can project with a fair degree of confidence that robots will interact with the environment with the adeptness of an adult human within the next two decades. This prediction would, however, be difficult to put on a quantitative foundation: one must first arrive at a measurable definition of robot-environment interaction. The lack of quantitative measures of progress has plagued the AI community since its inception in the early 1950s, especially the lack of a general, measurable definition of intelligence. In the opinion of this reviewer, cybernetics, the generalization of control theory, has formulated the best quantitative notion of intelligence to date. Cybernetics is discussed in some detail in this book (along with its "death"). The author seems to believe, though, that it is information theory that promises the best measurable definition of intelligence. He discusses some reasons for this view but does not elaborate in any detail, beyond brief commentary on how the concept of entropy can be used to measure the predictive power of theories.
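The entropy-based notion of predictive power mentioned above can be made concrete with a small sketch (the "theories" and numbers here are invented for illustration): a theory that assigns higher probability to what actually happens leaves the observer with fewer bits of surprise, and that average surprise is a measurable score.

```python
import math

def surprisal_bits(prob: float) -> float:
    """Shannon surprisal of one observation: -log2 p."""
    return -math.log2(prob)

def avg_surprisal(theory: dict, observations: list) -> float:
    """Average bits of surprise a theory assigns to what was observed.
    Lower is better: a more predictive theory leaves less uncertainty."""
    return sum(surprisal_bits(theory[o]) for o in observations) / len(observations)

# Two toy "theories" predicting tomorrow's weather.
vague_theory = {"sun": 0.5, "rain": 0.5}   # hedges everything
sharp_theory = {"sun": 0.9, "rain": 0.1}   # commits to sun

observed = ["sun", "sun", "sun", "rain", "sun"]  # what actually happened

print(avg_surprisal(vague_theory, observed))  # 1.0 bit per observation
print(avg_surprisal(sharp_theory, observed))  # lower: the sharper theory predicts better
```

The sharper theory scores better even though it was wrong once, which is the sense in which entropy can rank theories by predictive power.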
Particularly interesting is the discussion of the "ELIZA effect," named for the program written by Joseph Weizenbaum in the mid-1960s that was designed to converse with a human subject, with the intent of fooling the subject into believing that the program understood what she was saying. The author scoffs at any imputation of understanding to ELIZA, and uses the "ELIZA effect" to describe any claim by AI researchers that their work is a significant advance when it is really just a bag of tricks that can easily mislead. But there is a serious problem with the author's use of the "ELIZA effect": significant advances may indeed have been made, but once they are studied and understood they come to be viewed as insignificant, and their discoverers are labeled as having fallen prey to the ELIZA effect. As an example of how this scenario might play out, consider an English professor who retired ten years before the advent of sophisticated spelling and grammar checkers. She comes out of retirement to write a novel, discovers such a checker, marvels at its abilities, and is convinced that it displays intelligence. But sometime thereafter an AI expert reveals to her how it actually works, and she then begins to regard it as merely a software program, no different really from some of the crude writing software she used years earlier.
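ELIZA's "bag of tricks" really was little more than pattern matching with canned reassembly of the user's own words. A toy sketch in its spirit (these particular rules are invented here, not Weizenbaum's) shows how thin the machinery behind the illusion is:

```python
import random
import re

# A few (pattern, responses) rules in ELIZA's style; "{0}" reflects the
# captured fragment of the user's sentence back at them.
RULES = [
    (r".*\bI need (.*)", ["Why do you need {0}?",
                          "Would it really help you to get {0}?"]),
    (r".*\bI am (.*)",   ["Why do you say you are {0}?",
                          "How long have you been {0}?"]),
    (r".*\bmy (.*)",     ["Tell me more about your {0}."]),
]
# Content-free stalls used when no rule matches.
FALLBACK = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(responses).format(match.group(1).rstrip("."))
    return random.choice(FALLBACK)

print(respond("I need a vacation"))
```

A dozen such rules can sustain a surprisingly convincing exchange, which is exactly the effect the author warns about: the appearance of understanding without any.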
The author's belief in the ELIZA effect does not mean he denies that intelligent machines (or "software") have been achieved. This intelligence, however, has only been able to operate in specific domains. As an example, he discusses the SHRDLU system invented by Terry Winograd, which could converse about a tabletop on which a set of children's blocks was placed. The author believes that SHRDLU achieved genuine understanding, albeit in a very specific domain: the blocks world. This domain specificity has been the hallmark of all the commercial successes of artificial intelligence, since businesses are primarily concerned with automating tasks in very specific domains, such as managing and analyzing networks, collecting and interpreting information from competitors, or finding profitable financial opportunities by sifting through mountains of data. Machines that can think across many domains may not be useful in this regard. A machine that can troubleshoot a network would be useful to a network manager, but if it also had expertise in chess and decided to play chess instead, it would raise the ire of the network manager.
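The flavor of domain-specific "understanding" can be suggested with a toy blocks-world interpreter (a hypothetical miniature, vastly simpler than Winograd's SHRDLU): within its one command form it behaves sensibly, even refusing physically impossible moves, and outside it it is helpless.

```python
# World state: each block maps to whatever it is resting on.
world = {"red block": "table", "green block": "table", "blue block": "red block"}

def on_top_of(obj: str) -> list:
    """Everything currently resting on obj."""
    return [o for o, support in world.items() if support == obj]

def command(text: str) -> str:
    words = text.lower().rstrip(".").split()
    # The entire "language": "put the X block on the Y block".
    if words[:2] == ["put", "the"] and "on" in words:
        i = words.index("on")
        obj = " ".join(words[2:i])
        dest = " ".join(words[i + 2:])
        if on_top_of(obj):
            return f"I need to clear the {obj} first."
        world[obj] = dest
        return "OK."
    return "I don't understand."

print(command("Put the red block on the green block"))   # blocked: blue sits on red
print(command("Put the blue block on the green block"))  # OK.
```

The point of the sketch is the boundary: inside the blocks world the program appears to understand; one sentence outside it, the illusion evaporates.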
For this reviewer, the most interesting part of the book was the discussion of "autogenous systems," because of its novelty and because it relates to efforts to build machines with general intelligence. The author defines such a system as one that can extend itself arbitrarily, and thus go beyond preconceived limits. An autogenous machine will therefore be able to confront new and innovative situations or problems without excessive fiddling by the designer. Its cognitive structure, as one might call it, can build concepts and engage in learning on the fly without external intervention. At present such systems are the holy grail of AI, and there are concentrated efforts to build them. If they are built, and this reviewer is confident they will be, will they fall into the usual pattern of first being viewed as major breakthroughs and later as merely "programs"? If history is a guide this will happen, but such machines will be the tour de force of the twenty-first century, possibly bringing about a "singularity" as the author discusses, but also serving as an example of what can be accomplished with that low-voltage mass of biological matter called the human brain, the most impressive machine, and maybe indeed a universal one, that has yet arisen.
I've read, or leafed through, a number of popular books on artificial intelligence (AI). They are all pretty bad in the same way. 'Beyond AI' is a more extreme example of the genre. On the jacket, a huge robot with the stupefied expression science-fiction robots seem to have, one fingertip wet with blood, holds a dead human figure in one hand. So the robot killed the human. The author does not appear interested in how this might happen.
AI theorists have a still-lively memory of an "AI winter" in the 1980s, when projects couldn't get funded and many were shut down. The problem was the limited expandability of the artificial "brains" that were created: the programs operated within a fixed world of concepts that was difficult or impossible to expand. This limitation was especially severe given the limited memory and processing speeds of computers of the time, as well as the undeveloped state of evolutionary theory. With the coming of Wilson's 'Sociobiology', Noam Chomsky's theory of deep structure in language, and Jerry Fodor's Modularity of Mind, the brain came to be conceived as a large number of modules for functions such as language, vision, number, and so on, to a rough total of 25. (The author cites a tentative list from Steven Pinker.) He comments: "Some of these are innate, some are learned. Some are instinctive biases, some are full-fledged perceptual machinery. Some appear localized in the brain and surely others are not." (p. 113) This is something a computer programmer would love: to build his AI he can work on one module at a time. Massive modularity seems the way to go. But he must realize how tentative his list of modules is. And there is a basic problem with real, live brains: often the wrong modules get used. The author illustrates this with the fox-and-crow fable: "... the fox has started the interaction in the realm of a 'social intercourse' module," where all the action is verbal. But the fox jumps to a "possession and properties of physical objects" module. (p. 114) What module would enable the crow to escape trickery by overruling other modules? The author proposes a "general-purpose portion" (he doesn't call it a module) which can go on learning when other modules are stuck. But he does not explain how to design a general agent to govern the modules.
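The dispatch problem the fable exposes can be sketched in code (a hypothetical design, not the author's): each specialist module reports how confident it is that the current situation is its business, and a general-purpose portion catches whatever the specialists miss. The crow's failure is then a mis-scored dispatch: the social module keeps winning after the game has moved to physical objects.

```python
# Each module returns (confidence, proposed action) for a situation.
def social_module(situation):
    return (0.9 if "flattery" in situation else 0.0, "respond verbally")

def physical_objects_module(situation):
    return (0.9 if "dropped object" in situation else 0.0, "grab the object")

def general_purpose(situation):
    # Weak but always applicable: the fallback learner.
    return (0.2, f"reason from first principles about {situation!r}")

MODULES = [social_module, physical_objects_module]

def dispatch(situation: str) -> str:
    """Pick the action of the most confident module; fall back to the
    general-purpose portion when no specialist is confident."""
    scored = [m(situation) for m in MODULES] + [general_purpose(situation)]
    confidence, action = max(scored)
    return action

print(dispatch("flattery"))        # the social module wins
print(dispatch("dropped object"))  # the physical-objects module wins
print(dispatch("novel puzzle"))    # falls through to general-purpose reasoning
```

The hard part, which the sketch hides in those hand-set confidence numbers, is exactly the author's unanswered question: where do trustworthy relevance scores come from, and what governs the governor?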
Instead he goes on to speculate on the possibility of humans creating an AI capable of unlimited self-improvement or at least trans-human self-improvement. Again he does not elaborate. No conclusions emerge.
In a way, he returns to the question of control in "Philosophical Extrapolations". Traditional philosophical questions are considered a waste of time: "... the only refutation worth doing is simply to build the AI, and then we will see who is right." (p. 265) He wants an AI as near to being human as possible. But then, after a brief exposition of a mechanistic theory of mind, he discusses free will, based on a theory of the philosopher Drew McDermott, who holds that the universe is deterministic but that humans are convinced of their free will. This is as true for an AI as for humans, but rather than being "convinced" of its free will, the AI has a "utility function" that constantly evaluates anticipated situations of the world, computes a numerical "utility" for each situation, and acts to achieve the situation with the highest anticipated utility. To avoid an infinite regress, the AI does not consider its own state as part of the situation. To me "utility" sounds like what humans would call "happiness". The question of how the AI sees the world is thus pretty well answered -- if we can believe in the utility function. The author makes stabs at questions like symbols and meaning, consciousness, sentience, self-awareness, and qualia, but there just isn't much to say about what goes on in the head of a hypothetical AI. He then embarks on a whirlwind tour of ethics, which he considers rooted in human evolution, from classical theory to Kant and Rawls, then on to Asimov's Three Laws of Robotics, which he considers unsatisfactory but roughly parallel to Freud's division of the mind into id, ego, and superego. (I will keep my mouth shut on this.) His non-conclusion is that "We want -- or at least I want -- to think of our AIs as our children ... they must be autogenous [self-improving] creatures in an autogenous community."
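The utility-function account of choice described here fits in a few lines (the utility function, weights, and situations below are invented for illustration). Note that the agent scores only anticipated states of the world, never its own internal state, which is how the regress is cut off.

```python
def utility(situation: dict) -> float:
    """Numerical 'utility' of an anticipated world-state.
    The weights are invented for this illustration."""
    return 2.0 * situation["network_uptime"] - 1.0 * situation["cost"]

def choose_action(anticipated: dict) -> str:
    """anticipated maps each available action to the world-state it is
    expected to produce; act to reach the highest-utility state."""
    return max(anticipated, key=lambda action: utility(anticipated[action]))

# A network-manager agent weighing its options.
options = {
    "reboot router":  {"network_uptime": 0.95, "cost": 0.3},  # utility 1.60
    "do nothing":     {"network_uptime": 0.60, "cost": 0.0},  # utility 1.20
    "replace router": {"network_uptime": 0.99, "cost": 1.5},  # utility 0.48
}

print(choose_action(options))  # reboot router
```

Whether such a fixed scoring rule deserves to be called "happiness", as the reviewer muses, is exactly the question the sketch cannot settle.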
We can't begin to educate and uplift our AIs without some understanding of what will be going on in their heads. The author needs to resume his discussion of what it is in the massively modular human brain that switches control from one module to another, and how it switches, as in the fox and crow fable. How do we even know if there is just one control module? If there is more than one, how do THEY switch? Philosopher Jerry Fodor can state the modular theory and leave such issues in the air. But to build the AI, the author must know exactly. Without this knowledge no AI can be constructed that is more intelligent than my laptop computer.
Yet the author can continue for some 250 pages, chattering about classical ethics, golden rules, eudaimonia, utilitarianism, Rawls' Veil of Ignorance, and so on. This book is the author's testament of faith in transhumanism. If I had such faith, I suppose I would rate it more highly.