Beyond AI: Creating the Conscience of the Machine Hardcover – May 30, 2007
RAY KURZWEIL, AI scientist, inventor
Author of The Singularity Is Near
"An entertaining and very thought-provoking ramble through the wilds of AI."
ERIC S. RAYMOND
"Hall argues that our future superintelligent friends in the mechanical kingdom may develop superior moral instincts. I'm almost convinced. I learned a lot from reading this book. You will too."
ROBERT A. FREITAS JR.
Author of "The Legal Rights of Robots"
and Kinematic Self-Replicating Machines
Product Details
- Publisher : Prometheus; Illustrated edition (May 30, 2007)
- Language : English
- Hardcover : 408 pages
- ISBN-10 : 1591025117
- ISBN-13 : 978-1591025115
- Item Weight : 1.48 pounds
- Dimensions : 6.22 x 1.12 x 9.27 inches
- Best Sellers Rank: #3,111,415 in Books
Top reviews from the United States
I don't know the author personally, but I can tell you this about him: he is truly educated, in the classical tradition. By that I mean he has not only been a student of things technical; he has been a student of great writing, poetry, social science, economics, politics, and more. It's not that he attempts to parade his knowledge in these areas; rather, it's that his strong liberal arts education, very naturally, simply permeates his expository style. More than that, he has the rare ability to present complex topics in a way that any curious reader can comprehend. Isaac Asimov, R. Buckminster Fuller, Richard Feynman, Freeman Dyson, and Carl Sagan are the writers of whom the author reminds me. And, like the erudite writers on that list, he is quite obviously interested (dare I say fascinated?) in the subject about which he is writing. His enthusiasm is contagious. Above all, he wants you to "get it."
I don't think I've read a book that was written this well and inspired me intellectually this much since I read R. Buckminster Fuller's "Utopia or Oblivion" back in 1968. That book changed my life. Now, forty years later, I find another book that is so well written and intellectually provocative that it may just change my life again. This is a fascinating book. You must read it. Seriously. J. Storrs Hall is the Robert Ludlum of non-fiction. The only time I put this book down is when I'm driving, because I'm pretty sure reading and driving at the same time is illegal in my state. I'm even reading it while I write this (OK, that's not true - but you get my point).
This book is a ripping good read. It'll tickle your neurons until they cry out for mercy.
This book gives a realistic appraisal of progress in artificial intelligence and sheds considerable light on its central questions. It is careful to distinguish between fact and fiction, between what has been accomplished and what has not, and it does so without falling into the trap of extreme skepticism that seems to catch so many who are deeply involved in AI research. Indeed, the historical accounts of AI research show a pattern repeated often: after an initial period of extreme confidence in a result, and its designation as "intelligent," confidence wanes until the result is eventually dismissed as a "trivial" discovery or merely a "program."
The author, though, takes the general reader through this history and also looks ahead to future developments in artificial intelligence, discussing at various places in the book the possibility of a technological "singularity" sometime in the next fifty years. Readers who are curious about the status of machine intelligence will find an understandable overview in this book, but it can still be of interest to those, such as this reviewer, who are working "in the trenches" of applied artificial intelligence and are interested in the opinions of researchers affiliated with the academy. The author does not delve deeply into the technologies, algorithms, and mathematics for this type of reader, but there are some new ideas within the covers that definitely make the book worth reading.
Machine intelligence has advanced, the author argues, and he gives many examples. Robots, for example, can currently navigate with the same adeptness as a three-year-old child, which is astounding considering what was possible just ten years ago. Readers who own and develop AIBO robot dogs will understand this claim, as their navigation abilities are impressive. Thinking qualitatively, one can project with a fair degree of confidence that robots will be able to interact with the environment as adeptly as an adult human within the next two decades. This prediction would, however, be difficult to put on a quantitative foundation; one would first have to arrive at a measurable definition of robot-environment interaction. The lack of quantitative measures of progress has plagued the AI community since its inception in the early 1950s, especially the lack of a general, measurable definition of intelligence. In the opinion of this reviewer, the field of cybernetics, the generalization of control theory, has formulated the best quantitative notion of intelligence to date. Cybernetics is discussed in some detail in this book (along with its "death"). The author seems to believe, though, that it is information theory that promises the best measurable definition of intelligence. He discusses some reasons for this view but does not elaborate in any detail, beyond brief commentary on how the concept of entropy can be used to measure the predictive power of theories.
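The entropy idea mentioned above can be made concrete with a small sketch (my illustration, not the book's): a theory that assigns probabilities to outcomes can be scored by the average surprisal, in bits, of what actually happens, and a theory with more predictive power incurs lower surprisal.

```python
import math

# Illustrative sketch (not from the book): comparing the predictive power of
# two "theories" by the average surprisal (cross-entropy, in bits) they
# assign to observed outcomes. Lower average surprisal = sharper predictions.
def avg_surprisal(predicted_probs, outcomes):
    """Mean -log2 p(outcome) under a theory's per-trial predictions."""
    total = sum(-math.log2(p[o]) for p, o in zip(predicted_probs, outcomes))
    return total / len(outcomes)

# Two theories predicting a coin that in fact lands heads 80% of the time.
observed = ["H", "H", "H", "H", "T"]
confident = [{"H": 0.8, "T": 0.2}] * 5   # matches the true bias
ignorant = [{"H": 0.5, "T": 0.5}] * 5    # maximum-entropy "no theory"
```

Here `avg_surprisal(ignorant, observed)` is exactly 1 bit per trial, while the better-calibrated theory scores about 0.72 bits, quantifying its superior predictive power.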
Particularly interesting in the book is the discussion of the "ELIZA effect," named for the program written by Joseph Weizenbaum in the mid-1960s that was designed to converse with a human subject, with the intent of fooling the subject into believing that the program understood what she was saying. The author scoffs at any imputation of understanding to ELIZA, and uses the "ELIZA effect" to describe any claim by AI researchers that their work is a significant advance when it is really just a bag of tricks that can easily mislead. But there is a serious problem with the author's use of the "ELIZA effect": significant advances may indeed have been made, but once they are studied and understood they come to be viewed as insignificant, and their discoverers are then labeled as having fallen prey to the ELIZA effect. As an example of how this scenario might play out, consider an English professor who retired ten years before the advent of sophisticated spell checkers and real-time grammar checkers. She comes out of retirement, decides to write a novel, and discovers a spelling/grammar checker, marveling at its abilities and definitely convinced that it displays intelligence. But sometime thereafter an AI expert reveals to her how it actually works, and she then begins to regard it as merely a software program, no different really from some of the crude writing software she used years earlier.
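To see why such a program can be dismissed as a "bag of tricks," here is a minimal ELIZA-style sketch (illustrative only; these patterns and responses are mine, not Weizenbaum's original DOCTOR script): keyword spotting, pronoun reflection, and canned fallbacks produce a surprisingly convincing illusion of understanding.

```python
import re
import random

# First/second-person swaps so echoed fragments sound responsive.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Keyword rules: a pattern to spot and a template to echo it back with.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Canned non-committal fallbacks when no keyword matches.
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    """Swap pronouns word by word ('my code' -> 'your code')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's response, else a canned fallback."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(DEFAULTS)
```

For instance, `respond("I need a vacation")` yields "Why do you need a vacation?" - no model of vacations, needs, or the speaker is involved, only string substitution.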
But the author's use of the ELIZA effect does not mean he denies that intelligent machines (or "software") have been achieved. However, this intelligence has only been able to operate in specific domains. As an example, he discusses the SHRDLU system built by Terry Winograd, which was able to converse about a tabletop on which were placed a set of children's blocks. The author believes that SHRDLU achieved genuine understanding, albeit in a very specific domain: the blocks world. This domain-specificity has been the hallmark of all of the commercial successes of artificial intelligence, since businesses are primarily concerned with automating tasks in very specific domains, such as managing and analyzing networks, collecting and interpreting information from competitors, or finding profitable financial opportunities by sifting through mountains of data. Machines that are able to think across many different domains may not even be useful in this regard. A machine that can troubleshoot a network would be useful to a network manager, but if it also had expertise in chess and decided to play chess instead, that would raise the ire of the network manager.
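The flavor of such domain-specific understanding can be suggested with a toy sketch in the spirit of SHRDLU's blocks world (my illustration, not Winograd's program): within its tiny domain the system tracks state and even refuses impossible commands, but it knows nothing outside it.

```python
# Toy blocks world: competence is genuine but confined to stacking blocks.
class BlocksWorld:
    def __init__(self, blocks):
        # Every block starts on the table; self.on maps block -> support.
        self.on = {b: "table" for b in blocks}

    def put(self, block, target):
        """Handle a command like 'put A on B', checking feasibility."""
        if any(support == block for support in self.on.values()):
            return f"I can't move {block}; something is on top of it."
        self.on[block] = target
        return f"OK, {block} is now on {target}."

    def where(self, block):
        """Answer a question like 'where is A?'."""
        return f"{block} is on {self.on[block]}."

world = BlocksWorld(["A", "B", "C"])
world.put("A", "B")
```

After `world.put("A", "B")`, asking `world.where("A")` returns "A is on B.", and trying to move B is correctly refused because A rests on it - a sliver of real-world reasoning, valid only inside the blocks domain.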
For this reviewer, the most interesting part of the book was the discussion of "autogenous systems," because of its novelty and because it is related to the effort to build machines that possess general intelligence. The author defines such a system as one that is able to extend itself arbitrarily, and thus go beyond preconceived limits. An autogenous machine will therefore be able to confront new and unforeseen situations or problems without excessive fiddling by the designer. Its cognitive structure, as one might call it, can build concepts and engage in learning on the fly without external intervention. At the present time, such systems are the holy grail of AI, and there are concentrated efforts to build them. If they are built, and this reviewer is confident that they will be, will they fall into the usual pattern of first being viewed as major breakthroughs and then later as merely "programs"? If history is a guide, this will happen, but such machines will be the tour de force of the twenty-first century, possibly bringing about a "singularity" as the author discusses, but also serving as an example of what can be accomplished with that low-voltage mass of biological matter called the human brain, which is the most impressive machine, and maybe indeed a universal one, that has yet arisen.
Top reviews from other countries
Storrs Hall, in this excellent book, shows how A.I. researchers lost the thread in the decades that followed the field's founding, with a fixation on hand-coding everything and on building systems that worked fine in closed environments with fixed rules (e.g. chess games) but failed hopelessly in unpredictable real-life situations.
He concludes that robots need to learn and adapt to their environments (be autogenous), although they may have some hard-wired basic abilities upon which they can develop a "self" against which to make environmental tests (i.e. increase the capability/adaptation of their "self"). Another interesting aspect of the book is his discussion, from chapter 18 onwards, of robotic/A.I. ethics as applicable to this new "self". He opts for the Boy Scout Law: "One should be trustworthy, loyal, helpful, friendly, kind, obedient, cheerful, thrifty, brave, clean and reverent", and he sees the task as "... building a machine that understands what these qualities mean and what can we do to ensure that the machines that are built will have them."
Perhaps the author could have explored at greater length the concept of a robotic/A.I. "self" to answer this question.
For example, he says that A.I. would be rid of many human pressures like sexual jealousy, but if robotic A.I.s adapt to different environments they will likely have differing abilities and "selves" that vary in their capacity to protect themselves (i.e. they may well be jealous and competitive if they are required to survive and adapt). Equally, differing robotic "selves" may cooperate to gain a group advantage (e.g. a robotic Apollo, Aphrodite and Hephaestus, or maybe the whole lot of them contributing their differing abilities).
His argument from comparative advantage in human/robotic A.I. trade is not very convincing. We don't do a lot of trade with the great apes, and we in turn may be even more distant from future autogenous A.I.s.
He also says that humans will have an "open source guarantee" with regard to robot/A.I. code (i.e. we will have access to it and will be able to delete undesirable variants), but this assumes 1) that we understand it and 2) that a robotic/A.I. "self" will allow access (human or otherwise) to its code. It has invested a good deal in the evolution of a viable "self", which could be put at risk by such a procedure.
Nevertheless it's a really good book, with Storrs Hall favouring good environments for autonomous learning machines and quoting the Christian Golden Rule, "Do unto others as you would have them do unto you" which seems like a good place to start with early autogenous evolving A.I.s.