Customer Reviews: The Singularity Is Near: When Humans Transcend Biology

HALL OF FAME reviewer, on September 22, 2005
The author is definitely one of the most inspiring of all researchers in the field of applied artificial intelligence. For those, such as this reviewer, who are working "in the trenches" of applied AI, his website is better than morning coffee. One does not have to agree with all the conclusions reached by the author in order to enjoy this book, but he does make a good case, albeit somewhat qualitative, for the occurrence, in this century, of what he and other futurists have called a "technological singularity." He defines this as a period in the future when the rate of technological change will be so high that human life will be "irreversibly transformed." There is much debate about this notion in the popular literature on AI, but in scientific and academic circles it has been greeted with mixed reviews. Such skepticism in the latter is expected and justified, for scientists and academic researchers need more quantitative justification than is usually provided by the enthusiasts of the singularity, whom in this book the author calls "singularitarians." Even more interesting though is that the notion of rapid technological change seems to be ignored by the business community, who actually stand to gain (or lose) the most from it.

Since this book is aimed primarily at a wide audience, and not professional researchers, the author does not include detailed arguments or definitions for the notion of machine intelligence or a list of the hundreds of examples of intelligent machines that are now working in the field. Indeed, if one were to include a discussion of each of these examples, this book would swell to thousands of pages. There are machines right now used in business and industry that can manage, troubleshoot, and analyze networks, diagnose illnesses, compose music definitely worth listening to, choreograph dances, simulate human behavior in computer games, recommend and engage in financial transactions and bargaining, and many, many other tasks, a detailed list of which would, again, entail many thousands of pages.

There are various psychological issues that arise when discussing machine intelligence, which if believed might prohibit the acceptance of any kind of notion of a technological singularity. For example, it is one of the historical peculiarities of research in AI that advances in the field are later trivialized, i.e. when a problem in AI becomes solved it no longer holds any mystery and is then considered to be just another part of information processing. It is then no longer regarded as 'intelligent' in any sense of the term. This phenomenon in AI research might be called the "Michie-McCorduck-Hofstadter effect," named after the three individuals, Donald Michie, Pamela McCorduck, and Douglas Hofstadter, who discussed it in some detail in their writings. If one examines the history of AI, one finds many examples of this effect, such as in knowledge discovery from databases, the use of business rules in database technologies, and the use of ontologies for information systems development. One of the best examples of this effect though is the backgammon player TD-Gammon, a highly sophisticated example of machine intelligence but one which is now considered to be merely part of the "programmer's toolbox." The Michie-McCorduck-Hofstadter effect is important in discussing the notion of a technological singularity, since if one does occur this effect would diminish one's ability to recognize it as being real. The author does not name this phenomenon as such in the book, but a reading of it definitely reveals that he is aware of the skepticism expressed by many towards any "advances" in machine intelligence.

Another one of these psychological issues regards the attitude of many philosophers on the notion of machine intelligence. In most cases they are extremely skeptical, and many AI researchers seem to feel the need to "refute" their opinions on the "impossibility" of intelligent machines. Unfortunately the author is one of these, and devotes space in the book to countering various philosophical arguments against AI. His arguments, although valid, are really a waste of time. Such time would be better spent, both for the author and for AI researchers, on the actual development of intelligent machines. A moratorium should be declared among AI researchers on all philosophical speculation. Such musings are best left to professional philosophers, who have the time and the inclination to indulge themselves in them.

There are other issues that should have been given more attention in the book, such as more details on the energy requirements needed to bring about such a singularity. In addition, the author needs to sharpen just what he means by intelligence and move away from the Turing test/human brain benchmark that he uses in the book. There are many examples of intelligence in the natural world, and these can be and have been emulated in many different types of machines. Interestingly, the fixation on human intelligence and the reverse engineering of the human brain (that is exemplified in this book) has inspired a few research teams to attempt to build a machine of "general intelligence", i.e. one that can think in many different domains, as clearly humans can. But it is still an open question whether this intelligence is "entangled" over these domains, i.e. whether or not a decrease in ability in one domain will affect the ability in another. From an evolutionary or efficiency standpoint it would seem that domain-specific intelligence is the more efficient arrangement.

The notion of a technological singularity can be met with both exhilaration and a sense of foreboding, since (radical) change can be embraced with enthusiasm but also with some feelings of anxiety. Even the author expresses this when he writes in the book that he is not "entirely comfortable" with all the consequences of a technological singularity. He has, though, made a fairly strong case for rapidly accelerating change. If the book concentrated more on the actual examples of intelligent machines and included the enormous amount of data from activities now going on in applied AI, an even stronger case could be made.
5 comments | 194 people found this helpful.
on September 22, 2005
Kurzweil does a good job of arguing that extrapolating trends such as Moore's Law is better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He neither convinces me that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.

Given the title, I was slightly surprised that he predicts that AIs will become powerful slightly more gradually than I recall him suggesting previously (which is a good deal more gradual than most Singularitarians expect). He offsets this by predicting more dramatic changes in the 22nd century than I imagined could be extrapolated from existing trends.

His discussion of the practical importance of reversible computing is clearer than anything else I've read on this subject.

When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.

For instance (talking about the world circa 2030): "The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies." Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.

His reference to "the devastating effect that illegal file sharing has had on the music-recording industry" seems to say something undesirable about his perspective.

His comments on economists' thoughts about deflation are confused and irrelevant.

On page 92 he says "Is the problem that we are not running the evolutionary algorithms long enough? ... This won't work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won't help." If "conventional" excludes genetic programming, then maybe his claim is plausible. But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
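
To make the "asymptote" claim concrete, here is a minimal, self-contained sketch of a conventional genetic algorithm (an editorial illustration, not code from the book or from Koza); the bit-counting objective, population size, mutation rate, and function names are all arbitrary assumptions. In a typical run the best fitness climbs quickly in early generations and then flattens, which is the plateau Kurzweil appears to have in mind.

    import random

    GENOME_LEN = 60                  # bits per individual (arbitrary choice)
    POP_SIZE = 40                    # arbitrary population size
    MUTATION_RATE = 1.0 / GENOME_LEN
    GENERATIONS = 200

    def fitness(genome):
        # Toy "OneMax" objective: count the 1-bits.
        return sum(genome)

    def mutate(genome):
        # Flip each bit with a small probability.
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    def crossover(a, b):
        # Single-point crossover of two parents.
        point = random.randrange(1, GENOME_LEN)
        return a[:point] + b[point:]

    def tournament(pop):
        # Return the fitter of two randomly chosen individuals.
        x, y = random.sample(pop, 2)
        return x if fitness(x) >= fitness(y) else y

    random.seed(0)
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        population = [mutate(crossover(tournament(population), tournament(population)))
                      for _ in range(POP_SIZE)]
        if gen % 20 == 0:
            best = max(fitness(g) for g in population)
            print(f"generation {gen:3d}  best fitness {best}/{GENOME_LEN}")

Whether Koza-style genetic programming escapes this kind of plateau when given more computing power, as the reviewer notes Koza claims, is exactly the point in dispute; the sketch only shows what the plateau looks like for a simple fixed-representation GA.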

His description of nanotech progress seems naive. (page 228): "Drexler's dissertation ... laid out the foundation and provided the road map still being followed today." (page 234): "each aspect of Drexler's conceptual designs has been validated". I've been following this area pretty carefully, and I'm aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler's road map, it's a well-kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard in documenting his accurate claim that "no serious flaw in Drexler's nanoassembler concept has been described".

Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.

I'm bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don't indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky's Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky's guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
5 comments | 408 people found this helpful.
on October 13, 2005
To say that Mr. Kurzweil is a bit of an optimist is like saying Shaq is a bit on the tall side. Mr K is positively bubbling with enthusiasm. Had it not been taken by Joe Namath, a suitable title might have been "The Future's So Bright I Just Gotta Wear Shades". But therein lies the problem. Mr K comes across more like a passionate evangelical than a reasoned scientist. Whenever someone is absolutely convinced about the rightness of his assumptions I become skeptical.

If you're reading this you know the premise of the book. Mr. K maintains that the pace of technological change (and by technology he means the really cool technologies, like infotech, biotech, and nanotech) is not simply increasing, but increasing exponentially, so fast that we will soon reach a point where man and machine have become one, and our brains are a million (or maybe a billion) times more powerful. When this happens everything we know will have changed forever.

Moreover, this is not something that will happen at some vague time in the far future. It's just around the corner. Mr. K even gives us a date: 2045.

While reading the book I kept thinking, What if Mr. K had written this in the mid 1950's? Certainly he'd have backup for his basic premise--the changes that occurred in the first half of the 20th century were indeed tremendous. Take aviation, a hot technology in those days. Mr. K would no doubt have observed that we went from Kitty Hawk to the Boeing 707 in just 50 years. Projecting ahead, Mr. K would have concluded that the second half of the century would see an even greater rate of advancement, so that by now we'd all have our own personal flying devices, zipping off to Europe in just minutes.

But that hasn't happened. Certainly there has been significant progress in aviation in the last 50 years, but not like the 50 years before that. In some ways it's worse. I suspect that since 9/11 the time it takes to fly from Los Angeles to San Francisco (from the time you get to one airport to the time you leave the other) may be longer now than it was in the 1950's.

Why has this happened? A lot of this has to do with social conditions, not technological ones. Supersonic transport never got off the ground (so to speak) in part because people didn't want the sonic booms near populated areas. These same social factors may well put the brakes on a lot of what Mr. K predicts.

It's not that Mr. K's book isn't based on hard science. It's positively larded with science, so much so that my eyes tended to glaze over many times. It's just that he doesn't seem very critical. While he does acknowledge the existence of contrary opinion, he quickly (albeit politely) dismisses any cautionary thoughts. Those who disagree with his beliefs are clearly stuck-in-the-mud, nay-saying Luddites.

Mr K is obviously a brilliant, well-informed scientist. I don't have enough knowledge to judge the accuracy of his facts, except in a few situations. When that does occur, though, I come away unimpressed. For example, he spends a few pages talking about the increases that have occurred in life expectancy, and uses this to project further increases to 150 years and then to 500 years. But he fails to distinguish between life expectancy and life span. The former has indeed increased, but the latter has not. I am certain Mr. K knows the difference. His failure to make the distinction is misleading and disingenuous. It makes me wonder about the veracity of the rest of the book.

As to the book itself, it's far too long. He repeats his points so much it seems as though he thinks that by mere repetition the reader will become more convinced that he's right. And some parts of the book are simply annoying, like the smug pseudo-conversations among past, present, and future personages that appear throughout the work.

To his credit, though, his optimism about the future is refreshing, and certainly an antidote to the dystopian views typical in literature and Hollywood (Brave New World, 1984, Blade Runner, Mad Max, The Terminator, Waterworld, etc.).

The bottom line here is that Mr. K. doesn't seem to remember that virtually all predictions about the future are wrong, since the predictions are simply extrapolations of current trends. The future is never what we think it will be, and Mr. K is no exception.

Then again, he could be right. If so, I just hope I can live long enough to enjoy the singularity, so I can have my body filled with nanobots and my brain uploaded to (as he would say) a suitable substrate. Maybe being a cyborg won't be so bad.
14 comments | 281 people found this helpful.
Ray Kurzweil is unquestionably the most brilliant guru for the future of information technology, but Joel Garreau's book Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human covers the same ground, with the same lack of soul, but more interesting and varied detail.

This is really four booklets in one: a booklet on the imminence of exponential growth within information technologies including genetics, nano-technology, and robotics; a booklet on the general directions and possibilities within each of these three areas; a booklet responding to critics of his past works; and lengthy notes. All four are exceptional in their detail, but somewhat dry.

I was disappointed to see no mention of Kevin Kelly's Out of Control: The Rise of Neo-Biological Civilization and just one tiny reference to Stewart Brand (co-evolution) in a note. Howard Rheingold (virtual reality) and Tom Atlee (collective intelligence) go unmentioned. It is almost as if Kurzweil, who is surely familiar with these "populist" works, has a disdain for those who evaluate the socio-cultural implications of technology, rather than only its technical merits.

This is an important book, but it is by a nerd for nerds. [Sorry, but anyone who takes 250 vitamin supplements and has a schedule of both direct intravenous supplements and almost daily blood testing, is an obsessive nerd however worthy the cause.] It assumes that information technologies, growing exponentially, will solve world hunger, eliminate disease, replenish water, create renewable energy, and allow all of us to have the bodies we want, and to see and feel in our mates the bodies they want. All of this is said somewhat blandly, without the socio-cultural exploration or global evaluation that is characteristic of other works by reporters on the technology, rather than the technologists themselves.

The book is, in short, divorced from the humanities and the human condition, and devoid of any understanding of the pathos and pathology of immoral governments and corporations that will do anything they can to derail progress that is not profitable. It addresses, but with cursory concern, most of the fears voiced by various critics about run-away machines and lethal technologies that self-replicate in toxic manners to the detriment of their human creators.

The book is strongest in its detailed discussion of both computing power and the dramatic drops in the energy needed for computing and for manufacturing using new forms of computing. The charts are fun and helpful. The index is quite good.

I put the book down, after a pleasant afternoon of study, with several feelings.

First, that I should give Joel Garreau higher marks for making this interesting, and recommend that his book be bought at the same time as this one.

Second, that there is an interesting schism between the Kurzweil-Gates gang that believes they can rule the world with machines; and the Atlee-Wheatley gang that believes that collective **human** intelligence, with machines playing a facilitating but not a dominant role, is the desired outcome.

Third, that there really are very promising technologies with considerable potential down the road, but that government is not being serious about stressing peaceful applications--the author is one of five advisors to the U.S. military on advanced technologies, and it distresses me that he supports a Defense Advanced Research Projects Agency (DARPA) that focuses on making war rather than peace--imagine if we applied the same resources to preventing war and creating wealth?

Fourth, information technologies are indeed going to change the balance of power among nations, states, and neighborhoods--on balance, based on his explicit cautions, I predict a real estate collapse in the over-priced major cities of the US, and a phenomenal rise of high-technology villages in Costa Rica and elsewhere.

The singularity may be near, as the author suggests, but between now and then tens of millions more will die. Technology in isolation is not enough--absent broad ethical context, it remains primarily a vehicle for nerds to develop and corporations to exploit. As I told an internal think session at Interval in the 1990's ("GOD, MAN, & INFORMATION: COMMENTS TO INTERVAL IN-HOUSE," Tuesday, 9 March 1993, findable via a Yahoo search), until our technologies can change the lives of every man, woman, and child in the Third World, they are not truly transformative. This book hints at a future that may not be achieved, not for lack of technology, but for lack of good will.

EDIT of 24 Oct 05: Tonight I will review James Howard Kunstler's The Long Emergency: Surviving the End of Oil, Climate Change, and Other Converging Catastrophes of the Twenty-First Century. His bottom line is that cheap oil underlies all of our suburban, high-rise, mega-agriculture, and car-based mobility, and that the end of cheap oil is going to have catastrophic effects on how we live, driving much of the country into poverty and dislocation, with the best lives being in those communities that learn to live with local agriculture and local power options. Definitely the opposite of what Kurzweil sees, and therefore recommended as a competing viewpoint.

EDIT of 12 Dec 07: Ethics is something I have thought about a lot, and my first public article outside the intelligence community was entitled "E3i: Ethics, Ecology, Evolution, & Intelligence: An Alternative Paradigm for *National* Intelligence." It must be something about engineers. Neither the author of this book, nor the Google Triumvirate, seems to grasp the moral implications of technology run amok without respect for ethics, privacy, copyright, humanity, etc. This is one reason I admire E. O. Wilson so much--the first of his works that I read, Consilience: The Unity of Knowledge, answered the question: "Why do the sciences need the humanities?" The second, The Future of Life, answered the question, "What is the cost and how do we save the planet?" Science had little to do with the latter. The two authors are poles apart.
34 comments | 692 people found this helpful.
on October 27, 2010
This book is very well known, and the question how many of the rather rapidly advancing technological trends will continue and how they will influence humanity's future is a very interesting one. So I bought the book and read it. I found it much, much weaker than I had anticipated it to be.

Ray Kurzweil wrote a thick volume combining 50's-style naive technology optimism, uncritical extrapolation of current trends (especially, but not only, Moore's law) and somewhat-more-than-half knowledge of biology. He assembles all of that into his own personal pseudo-religion, and even uses terminology that sounds very religious (he calls himself a "singularitarian"). According to Kurzweil, all will be well: hunger, disease, aging and even death will be eradicated once we fuse with computers and have nano-robots populate our bloodstreams. Even wars will be less bloody - he includes a graph of declining US war deaths over time, conveniently ignoring the numbers of foreign human beings killed by the US in these wars.

In most cases, his arguments are not very sound, in my opinion. One problem is that he strongly believes that all the current technological trends will continue to accelerate, disregarding physical boundaries and resource constraints. Often his argument goes like this: X has been achieved. Therefore XX is maybe, theoretically, possible, said some expert. Once we have XX, we will be able to achieve YY. Hence, YY is about to become reality within a decade.

In my own field, neurobiology, he mistakes models (intellectual tools to explain certain aspects of a phenomenon) for complete, reverse-engineered, functional reproductions of neural systems. There are certainly good models out there, but no neural structure has so far been reverse engineered, not even close.

Always suspicious: the use of quotations from old or dead wise men to cover up the lack of content in a book. Just because someone managed to look up what Ein- or Wittgenstein once said, that does not make his arguments stronger, does it? But it leaves the reader in this aura of just having been confronted with the words of these intellectual giants, and some of that must rub off on what the author has to say, no? Kurzweil wins Olympic gold in name-dropping with "The Singularity Is Near", where there are rarely fewer than three quotations in front of a chapter, and whole chapters are made up of quotations and nothing else!

This is in fact an unintentionally interesting book. Why does a member of the US upper class come up with a technology-based salvation story? I think what we have here is an extremely interesting fusion of the American belief in the power of technology to solve problems with the strong US religious tradition.
19 comments | 140 people found this helpful.
on October 2, 2005
I'm going to rate this book five stars, because at nearly 500 pages packed with important ideas (plus another hundred pages of notes) there is no question that this weighty book was well worth my $20.

As you might expect, Ray is at the top of his game when examining trends in computer science. He has many examples of "narrow" A.I. to share. More importantly, he believes that computer modeling of brain functioning will yield the algorithms we need in order to eventually achieve an artificial general intelligence. Indeed, cognitive science is exploding thanks to increases in computing and scanning power, and the brain will likely yield up many of its secrets in coming years. I find his predictions in this area quite believable.

I found some of his arguments regarding nanotechnology to be less convincing. In particular, his predictions for nanorobotics seem optimistic beyond all reason given the currently nascent state of this technology. Examples drawn from the current state of the art seem almost hopelessly far removed from the robust and exceedingly powerful technology he imagines within 25 years. On the other hand, if these surprising predictions are borne out it will be a powerful confirmation of his "law of accelerating returns". I guess I'll be reserving judgement until then.

There's a lot more I could say (good and bad) about this important book, but the bottom line is that if you frequently find yourself wondering about the role of technology in the future of our species, "The Singularity is Near" will give you far more than your money's worth in food for thought.
0 comments | 41 people found this helpful.
on December 18, 2006
I think some of the reviewers are missing the point of this book. Kurzweil is not an optimist - and I don't even think he would consider himself a 'proponent' of GNR, specific IT advances, or the changes he is predicting. The whole point is that these advances are part of our evolution as a species - any resistance by governments, ethicists, or individuals is automatically calculated into his predictions. He's looking at the net effect of progress (spurred primarily by economics and economic Darwinism), not progress driven by renegade or revolutionary scientists or technologists.

The advances he is predicting are based on the worldview that these advances are inevitable - just as our biological evolution was inevitable (especially with hindsight) - and, all the technological advances (especially in the past 100 years) are the proof that the speed of developing and adopting technologies into society is ever increasing, to a point where it is unstoppable and ubiquitous.

Take the cell phone example - some may resist the adoption of cell phones - saying that they invade their privacy, and overcomplicate their lives to a point that is unacceptable to them. This is a valid view, and individuals have the option to choose not to adopt this technology. But the fact of the matter is that this technology has changed and is changing the world - the overwhelming majority of the world population does not object to cell phone use, and in fact many are being empowered by them (look at subscription rates in China and India over the past 6 months - something in the millions of new subscribers every month).

This technology changes society - it changes human interrelationships - and it changes human-technology relationships. Having a cellphone brings us one step closer to being 'always-on' - always connected. It comes closer to being integrated into our biology (you can sleep with a cellphone - carry it wherever you go - this level of connectivity previously would have required being physically tethered to a land-line).

There is little (if any) judgement in Kurzweil's conclusions. They are logically grounded (which is why he provides so many counterarguments and so much supporting data). They are based specifically on the worldview that our evolution is now in our hands, and much of what we do with it can be predicted by how we've developed and adopted technologies in the past - or how biological evolution occurred. He admits to a large unknown - the fact that we don't know what the resulting convergence of technology and biology will look or feel like. The fact that this will happen does not allow us to see or even comprehend what this will mean for us.

My personal feeling is that this is the most worrisome part - the fact that the change may be so radical that some people (or even a class of people) may not even survive the transition - or it could in fact create multiple classes of humans (humans & proto-humans). But, again, there is no judgement in this - if that is our fate, it will be. Just as wars in the past have determined the current global power-structure, there will likely be conflict involved in the process. I hope that some of these advances and their inherent connected nature will preclude or somehow prevent the conflict from being a violent one - but you have to imagine it is a possibility.

There is a lot of evidence to support the likelihood of Kurzweil's near-term worldview. If his predictions about the speed of change are correct, and if you are one of the few capable of internalizing and understanding the implications, I believe you will be at an advantage in life and business. If you understand and believe the potential of this, but close your eyes to it because you don't like the implications, you will be among the worst off when it does happen. And the best situation is if you understand the implications and are in a position to direct them when they start to occur; then you can help to make sure that they do so in the most equitable and positive fashion possible.
4 comments | 28 people found this helpful.
on October 31, 2014
Kurzweil uses the words "growing exponentially" about so many phenomena, and so terribly often in this book, that it becomes both grating and incredible - incredible in the literal sense of the word. Kurzweil claims that technology has been "growing exponentially" for 2 centuries (funny that we still have - with some refinements - the internal combustion engines, steam boilers and refrigeration designs we had in 1895). He claims that computer technology has been "growing exponentially" (in capability) since its inception in the forties. He claims "evolution has been growing exponentially" - a completely ludicrous statement. He insists knowledge is "growing exponentially". Everything is exponential.
The problem with this, aside from the repetitively annoying frequency of saying so, is that few of the things Kurzweil claims are "growing exponentially" actually are, or can do so. Certainly "evolution" is not increasing exponentially... evolution proceeds at the same pace - varying according to natural mutation rates - as it did 150 million years ago. Despite his claims, technology has NOT been "growing exponentially", unless you turn a blind eye to the many plateaus in both the development and dispersion of each and every technological development (it's exponential... just ignore this bit and this bit and this bit and this bit).
It comes down to this: "<subjectively identified item> has been growing exponentially" COULD be true if one were more careful about what <subjectively identified item> one chooses.
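A small numerical illustration of this point (an editorial sketch; the function names, growth rate, and ceiling are arbitrary assumptions, not the reviewer's or Kurzweil's numbers): a quantity following a logistic, S-shaped curve is nearly indistinguishable from a true exponential early on, so a selectively chosen window of measurement can make almost anything look as if it is "growing exponentially" right up until it plateaus.

    import math

    RATE = 0.5        # arbitrary growth rate
    CEILING = 100.0   # arbitrary saturation level for the S-curve

    def exponential(t):
        # Unbounded exponential growth, normalized to 1 at t = 0.
        return math.exp(RATE * t)

    def logistic(t):
        # Logistic (S-shaped) growth, also starting at 1 but saturating at CEILING.
        return CEILING / (1.0 + (CEILING - 1.0) * math.exp(-RATE * t))

    for t in range(0, 31, 5):
        print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")

For small t the two columns stay close; after that they diverge sharply as the S-curve saturates near its ceiling while the true exponential keeps doubling.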
Kurzweil also says, repeatedly and in a variety of phrasings, that evolution "has a purpose". Then evolution should be fired for taking so long to achieve its "goal". Evolution is a somewhat random process, whose result is the procreation of genes - its result, not its 'purpose'.
Aside from these many Scientific misdemeanors and Technological Truth-stretches (i.e. outright misrepresentations), Kurzweil has many lengthy explanations of how his view of what it means to be a Human Being is "simply different" from other people's views. For him, being able to upload yourself to a cyber-world and discard your biological body doesn't change your 'humanity'... he insists that, as long as we retain our intellectual/emotional/psychological outlook, it doesn't matter what our bodies are composed of, nor the nature of our physical milieu. For him, as long as the robot claims it is Ray Kurzweil, it will BE Ray Kurzweil.
A tour-de-force of intellectual garbage.
6 comments | 21 people found this helpful.
on June 7, 2012
After all, he credits me on p. 498 with coining "singularitarian" nearly 20 years ago, and he has turned my neologism into nearly a mainstream word due to his nonstop lecturing and self-promotion. I haven't checked to see if my word has made it into the Oxford English Dictionary yet, but if it does, I credit Kurzweil for pushing its use and acceptance.

But more and more people who bother to look at the facts can see now that the transhumanist movement which considers Kurzweil a celebrity and a guru assumes a false premise, namely, that we live in an era of "accelerating technological progress" or whatever gee-whiz slogan transhumanists use these days. Instead, some of the transhumanist subculture's other celebrities, namely Tyler Cowen, Peter Thiel and Neal Stephenson, have recently complained that apart from computing, most technologies have stagnated since about 1970. Thiel even argues in a recent debate with George Gilder (findable on YouTube) that most forms of engineering, for example in nuclear power, face so many restrictions now that they have become effectively illegal.

I have observed the transhumanist scene for over 20 years now, and I think the "Great Stagnation" phenomenon, as Cowen calls it, explains why this vision of a radical social movement to create "the future" has failed to turn into something practical and useful. Circa 1990 I read someone's review of FM-2030's book Are You a Transhuman?: Monitoring and Stimulating Your Personal Rate of Growth in a Rapidly Changing World where he compared FM, and by extension transhumanists in general, to mimes who imitate the motions of productive people, but without doing any real work. Transhumanism's grounding in science fiction and speculative futurology, instead of real science and technology, means that young, technologically hip adults cycle into transhumanism, and then out again, because the middle-aged reality principle (aging and mortality) asserts itself around the time you turn 40. As a result transhumanism lacks staying power, as we can see from the rapid turnover in current transhumanist celebrities and enthusiasms. We already live in transhumanism's "FM-who? Extropi-what?" era, and a few years from now a younger crop of transhumanists will ask, "Ray who? Eliezer who?" etc., as they latch onto other transhumanist celebrities currently unknown.

Unfortunately, some people who get caught up in the singularity cult apparently can't, or don't want to, extricate themselves. For example, Kurzweil himself suffers from the shameful delusion that he can build "bridges to immortality" through quackery and homeopathic woo about water; while another well known transhumanist figure I could name who has reached his late 40's still looks and talks like a college-age stoner. (Updating Dean Wormer's excellent advice from "Animal House" for the early 21st Century: Unkempt, stoned and transhumanist is no way to go through life, son.)

On top of that, neuroscientists like P.Z. Myers have criticized Kurzweil for promoting misconceptions about the human brain which would give Kurzweil failing grades in real university neuroscience courses. The man clearly hasn't done his homework in a lot of areas, or else he just doesn't understand what he has read, so as a result his book will eventually fall into deserved obscurity like, say, Eric Drexler's writings in the 1980's and early 1990's about physically impossible "nanotechnology." Read Kurzweil's book if you must, but perform a benefit/cost analysis first given what other smart people have written about his scientific ignorance and track record of bad predictions.
4 comments | 30 people found this helpful.
on October 19, 2005
"Repent! The End is Near!"

If I saw a person holding up this sign on a street corner, I might think, "Poor fellow. Where has his mind gone? Too bad there are crazy people like this in the world."

Yet, in Ray Kurzweil's book "Singularity," his message is even more far out, but more like,

"Get Ready! The Beginning is Near!"

And yet, with Kurzweil, my response is, "Okay, I understand so far. Tell me more." Then I see the data. Then I see his inexorable logic. I would bet a lot of money on his predictions. "Singularity" is the most startling book I have ever read in my life (and I have read a lot of great books).

Well before the year 2030 (within 25 years), if you are still alive, you will have the choice about whether or not you want to "live forever" (in THIS reality; not some "afterlife").

Well before 2030, there will be a computer that, by all measures, will be smarter than the smartest "regular human" (i.e. non-computer-enhanced human) on this earth. This computer will then be able to invent an even smarter computer, which will then be able to invent an even smarter computer, which....

The changes in the next 14 years will be as much or more than the changes since 1955 (the last 50 years). And double again. And double again. And double again....we are fast approaching the asymptotic infinity of change and "progress"!

And there is basically nothing we can do about it. It will happen whether we like it or not (and most of us will end up liking it). We can "manage" it to some extent in order to provide a measure of protection against the end-of-the-world scenarios that could arise, either accidentally or intentionally, out of this run-away progress.

In his close-to-700-page manifesto, this is the essence of our future that Ray Kurzweil paints for us.

As an author myself ("Courage: the Choice that Makes the Difference-Your Key to a Thousand Doors"), I have a deep respect for what it takes to write a great book. The only other author that comes to mind whose breadth of knowledge and wisdom would compare with Kurzweil is philosopher Ken Wilber (although their writings are quite dissimilar). In reading Kurzweil I am continually amazed by the breadth and depth of his insights and conclusions.

There is one issue that he addresses from many perspectives (will computers become conscious? - his answer is "yes") that I cannot get my mind around. Even though his logic makes "sense" to me, I still can't quite accept it. However, that is not a big issue for me (as it might be for others), since, for all intents and purposes, I can totally accept that computers will be able to APPEAR as fully human (should they "choose" to do so).

I noticed that some of the other reviewers of "Singularity" have faulted Kurzweil for his optimism. Although I can see their point, I think that neither optimism nor pessimism is most appropriate here. Obviously we are facing an eventuality that holds the possibility of both the greatest promise as well as the greatest peril. Creativity, intelligence, and courage are our best tools at this unprecedented time in the history of our solar system.

I give "Singularity" five stars. It ranks that based solely upon the "wake up call" it is for humanity.

0 comments | 53 people found this helpful.