372 of 418 people found the following review helpful
Important extrapolations, but not as careful or concise as I wanted.
This review is from: The Singularity Is Near: When Humans Transcend Biology (Hardcover)
Kurzweil does a good job of arguing that extrapolating trends such as Moore's Law is better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He neither convinces me that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.
Given the title, I was a little surprised that he predicts AIs will become powerful somewhat more gradually than I recall him suggesting previously (which is still a good deal more gradual than most Singulitarians). He offsets this by predicting more dramatic changes in the 22nd century than I imagined could be extrapolated from existing trends.
His discussion of the practical importance of reversible computing is clearer than anything else I've read on this subject.
When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.
For instance (talking about the world circa 2030): "The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies." Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.
His reference to "the devastating effect that illegal file sharing has had on the music-recording industry" seems to say something undesirable about his perspective.
His comments on economists' thoughts about deflation are confused and irrelevant.
On page 92 he says "Is the problem that we are not running the evolutionary algorithms long enough? ... This won't work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won't help." If "conventional" excludes genetic programming, then maybe his claim is plausible. But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
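To make the distinction concrete, here is a minimal sketch of the kind of "conventional" genetic algorithm Kurzweil seems to mean: fixed-length bit strings with a bounded fitness function (the classic ONE-MAX toy problem), where progress necessarily flattens once the fixed representation's optimum is reached. This is my own illustrative toy, not anything from the book, and it is deliberately unlike Koza-style genetic programming, which evolves variable-size program trees and so is not capped in the same way.

```python
import random

def evolve(pop_size=40, genome_len=32, generations=60, seed=0):
    """Toy conventional GA on ONE-MAX: maximize the number of 1 bits.

    Fixed-length genomes bound the attainable fitness at genome_len,
    so the best-fitness curve plateaus -- the 'asymptote' Kurzweil
    describes. Genetic programming avoids this particular cap by
    evolving structures of unbounded size.
    """
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # count of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    best_per_gen = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best_per_gen.append(fitness(pop[0]))
        parents = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)       # point mutation
            child[i] ^= 1
            children.append(child)
        pop = children
    return best_per_gen

curve = evolve()
```

Running this shows fitness climbing and then flattening near 32. Whether real-world GA performance curves behave like this toy, and whether "more computing power keeps helping" for genetic programming as Koza claims, are exactly the empirical questions in dispute.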
His description of nanotech progress seems naive. (page 228) "Drexler's dissertation ... laid out the foundation and provided the road map still being followed today." (page 234): "each aspect of Drexler's conceptual designs has been validated". I've been following this area pretty carefully, and I'm aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler's road map, it's a well kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard about documenting his accurate claim that "no serious flaw in Drexler's nanoassembler concept has been described".
Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.
I'm bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don't indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky's Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky's guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
Initial post: Jan 5, 2007 12:09:33 PM PST
Kurt B says:
I agree particularly with the piffling-off of friendly AI as just another consideration. This is paramount. I'd be the first to suggest not allowing machines to go off and think by themselves until humans had a chance to catch up a bit. That would mean we max out with expert systems as helpers: no AI, no AI seeds, no self-modifying programs of any kind except for humans. And humans would take the long road to figuring everything out, especially in regards to modifying ourselves. Then and only then, when we'd leveled the playing field with biotechnology, would it be appropriate to dabble with AI. In 80k years, the blink of an eye, humans have taken complete control of every mammalian lifeform before us. They are enslaved. How much faster could an AI or two do this to us? A day... a week... (okay, I sound paranoid now).
In reply to an earlier post on Jan 12, 2009 10:47:34 PM PST
21st Century Sanity Man says:
It's possible that AI will just be radically better expert programs, and never "wake up".
I know my scientific calculator can do multi-root calculations in an instant, far more accurately than I can on paper, and a million (or more?) times as fast.
The calculator isn't self-willed, and today's evolutionary programs are just much more sophisticated versions of my calculator; they aren't conscious either.
Posted on Feb 7, 2013 5:04:57 AM PST
Nicholas Heston says:
Mainly negative feedback - yet you gave it four stars...
Posted on Feb 15, 2014 1:26:24 AM PST
Bo Jonson says:
"363 of 407 people found the following review helpfull"
I don't know why those folks found your review of the book 'help full'. I found your posting full of techno-babble & arcane references. That makes it seem like you are just advancing your own agenda, rather than providing a concise review of Kurzweil's work.