- Hardcover: 408 pages
- Publisher: Prometheus Books (May 30, 2007)
- Language: English
- ISBN-10: 1591025117
- ISBN-13: 978-1591025115
- Product Dimensions: 6.2 x 1.1 x 9.3 inches
- Shipping Weight: 1.4 pounds
- Customer Reviews: 15
- Amazon Best Sellers Rank: #356,081 in Books
Beyond AI: Creating the Conscience of the Machine Hardcover – May 1, 2007
"Taking us on an eloquent journey through an astonishingly diverse intellectual terrain, J. Storrs Hall’s Beyond AI articulates an optimistic view – in both capability and impact – of the future of AI. This is a must read for anyone interested in the future of the human-machine civilization."
RAY KURZWEIL, AI scientist, inventor
Author of The Singularity Is Near
"An entertaining and very thought-provoking ramble through the wilds of AI."
ERIC S. RAYMOND
"Hall argues that our future superintelligent friends in the mechanical kingdom may develop superior moral instincts. I'm almost convinced. I learned a lot from reading this book. You will too."
ROBERT A. FREITAS JR.
Author of "The Legal Rights of Robots"
and Kinematic Self-Replicating Machines
About the Author
J. Storrs Hall, PhD (Laporte, PA), the founding chief scientist of Nanorex, Inc., is a research fellow for the Institute for Molecular Manufacturing and the author of Nanofuture, the "Nanotechnologies" section for The Macmillan Encyclopedia of Energy, and numerous scientific articles. He has designed technology for NASA and was a computer systems architect at the Laboratory for Computer Science Research at Rutgers University from 1985 to 1997.
I don't know the author personally, but I can tell you this about him: he is truly educated, in the classical tradition. By that I mean he has not only been a student of things technical; he has been a student of great writing, poetry, social science, economics, politics, and more. It's not that he attempts to parade his knowledge in these areas; rather, his strong liberal arts education, very naturally, simply permeates his expository style. More than that, he has the rare ability to present complex topics in a way that any curious reader can comprehend. Isaac Asimov, R. Buckminster Fuller, Richard Feynman, Freeman Dyson, and Carl Sagan are the writers of whom the author reminds me. And, like the erudite writers on that list, it is quite obvious that the author is truly interested (dare I say fascinated?) in the subject about which he is writing. His enthusiasm is contagious. Above all, he wants you to "get it."
I don't think I've read a book that was written this well and inspired me intellectually this much since I read R. Buckminster Fuller's "Utopia or Oblivion" back in 1968. That book changed my life. Now, forty years later, I find another book that is so well written and intellectually provocative that it may just change my life again. This is a fascinating book. You must read it. Seriously. J. Storrs Hall is the Robert Ludlum of non-fiction. The only time I put this book down is when I'm driving, because I'm pretty sure reading and driving at the same time is illegal in my state. I'm even reading it while I write this (OK, that's not true - but you get my point).
This book is a ripping good read. It'll tickle your neurons until they cry out for mercy.
I've read, or leafed through, a number of popular books on artificial intelligence (AI). They are all pretty bad in the same way, and 'Beyond AI' is a more extreme example of the genre. On the jacket, a huge robot, with the stupefied expression science fiction robots seem to have, one fingertip wet with blood, holds a dead human figure in one hand. So the robot killed the human. The author does not appear interested in how this might happen.
AI theorists have a still-lively memory of an "AI winter" in the 1980s, when projects couldn't get funded and many were shut down. The problem was the limited expandability of the artificial "brains" that were created: the programs operated within a fixed world of concepts that was difficult or impossible to expand. This limitation was especially difficult in view of the limited memory and processing speeds of computers of the time, as well as the undeveloped state of evolutionary theory. With the coming of Wilson's 'Sociobiology', Noam Chomsky's theory of deep structure in language, and Jerry Fodor's 'Modularity of Mind', the brain came to be conceived as a large number of modules for functions such as language, vision, number, and so on, to a rough total of 25. (The author cites a tentative list from Steven Pinker.) He comments: "Some of these are innate, some are learned. Some are instinctive biases, some are full-fledged perceptual machinery. Some appear localized in the brain and surely others are not." (p. 113) This is something a computer programmer would love; to build his AI he can work on one module at a time. Massive modularity seems the way to go. But he must realize how tentative his list of modules is. This is a basic problem with real, live brains: often the wrong modules get used. The author illustrates this with the fox-and-crow fable: the fox starts the interaction in the realm of a "social intercourse" module, where all the action is verbal, but then jumps to a "possession and properties of physical objects" module (p. 114). What module would enable the crow to escape trickery by overruling other modules? The author proposes a "general-purpose portion" (he doesn't call it a module) which can go on learning when other modules are stuck. But he does not explain how one would design a general agent to govern the modules.
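To make the arbitration problem concrete, here is a toy sketch of my own (not code from the book; the module names and scoring are entirely hypothetical): each "module" scores how strongly a stimulus engages it, and a naive arbiter simply hands control to the highest scorer. The fox's flattery keeps the crow's social module winning, with no general-purpose agent to veto it.

```python
# Toy illustration (not from the book): naive winner-take-all module
# arbitration, showing the fox-and-crow failure mode. Module names and
# activation values are hypothetical.

def social_module(stimulus):
    # Engages strongly on flattery and talk.
    return 0.9 if "flattery" in stimulus else 0.1

def physical_module(stimulus):
    # Engages on objects and possession.
    return 0.8 if "object" in stimulus else 0.2

MODULES = {"social": social_module, "physical": physical_module}

def arbitrate(stimulus):
    # Hand control to the most activated module. Nothing here can
    # overrule the winner -- that is exactly the crow's problem.
    return max(MODULES, key=lambda name: MODULES[name](stimulus))

print(arbitrate({"flattery"}))  # "social": the crow opens its beak to sing
print(arbitrate({"object"}))    # "physical": the fox's actual frame
```

The sketch shows why the author's "general-purpose portion" is doing all the real work: without a principled way to overrule a confidently wrong module, the arbiter is only as smart as its scores.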
Instead he goes on to speculate on the possibility of humans creating an AI capable of unlimited self-improvement or at least trans-human self-improvement. Again he does not elaborate. No conclusions emerge.
In a way, he returns to the question of control in "Philosophical Extrapolations". Traditional philosophical questions are considered a waste of time: "... the only refutation worth doing is simply to build the AI, and then we will see who is right." (p. 265) He wants an AI as near to being human as possible. But then, after a brief exposition of a mechanistic theory of mind, he discusses free will, based on a theory of the philosopher Drew McDermott, who holds that the universe is deterministic but that humans are nonetheless convinced of their free will. This is as true for an AI as it is for humans, except that rather than being "convinced" of its free will, the AI has a "utility function" that constantly evaluates anticipated situations of the world, computes a numerical "utility" for each situation, and acts to achieve the situation with the highest anticipated utility. To avoid an infinite regress, the AI does not consider its own state as part of the situation. To me, "utility" sounds like what humans would call "happiness". The question of how the AI sees the world is thus pretty well answered -- if we can believe in the utility function. The author makes stabs at questions like symbols and meaning, consciousness, sentience, self-awareness, and qualia, but there just isn't much to say about what goes on in the head of a hypothetical AI. He then embarks on a whirlwind tour of ethics, which he considers grounded in human evolution, from classical theory to Kant and Rawls, then on to Asimov's Three Laws of Robotics, which he considers unsatisfactory but roughly parallel to Freud's division of the mind into id, ego, and superego. (I will keep my mouth shut on this.) His non-conclusion is that "We want -- or at least I want -- to think of our AIs as our children ... they must be autogenous [self-improving] creatures in an autogenous community."
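The decision loop described above is simple enough to sketch. This is my own toy illustration of the scheme the book attributes to McDermott, not code from the book; the utility weights and outcomes are hypothetical. Note the regress-avoidance: the agent scores only anticipated external situations, never its own internal state.

```python
# Toy sketch (my illustration, not the book's): a utility-maximizing
# decision loop. The agent predicts the external situation each action
# leads to, scores it numerically, and picks the best action. Its own
# state is deliberately excluded from the situations it evaluates.

def utility(situation):
    # Hypothetical utility: food counts double safety.
    return 2 * situation["food"] + situation["safety"]

def choose(actions, predict):
    # predict(action) -> anticipated world-state (no self-model inside).
    return max(actions, key=lambda a: utility(predict(a)))

outcomes = {
    "forage": {"food": 3, "safety": 1},  # utility 7
    "hide":   {"food": 0, "safety": 5},  # utility 5
}
best = choose(outcomes.keys(), lambda a: outcomes[a])
print(best)  # "forage"
```

Whether "utility" so computed deserves to be called happiness -- or conviction of free will -- is of course exactly the question the book leaves open.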
We can't begin to educate and uplift our AIs without some understanding of what will be going on in their heads. The author needs to resume his discussion of what it is in the massively modular human brain that switches control from one module to another, and how it switches, as in the fox-and-crow fable. How do we even know there is just one control module? If there is more than one, how do THEY switch? Philosopher Jerry Fodor can state the modular theory and leave such issues in the air. But to build the AI, the author must know exactly how this works. Without this knowledge, no AI can be constructed that is more intelligent than my laptop computer.
Yet the author can continue for some 250 pages, chattering about classical ethics, golden rules, eudaimonia, utilitarianism, Rawls' Veil of Ignorance, and so on. This book is the author's testament of faith in transhumanism. If I had such faith, I suppose I would rate it more highly.