372 of 418 people found the following review helpful
4.0 out of 5 stars Important extrapolations, but not as careful or concise as I wanted
Published on September 22, 2005 by Peter McCluskey
229 of 257 people found the following review helpful
3.0 out of 5 stars Brave New World
Published on October 13, 2005 by John St John
372 of 418 people found the following review helpful
4.0 out of 5 stars Important extrapolations, but not as careful or concise as I wanted
Kurzweil does a good job of arguing that extrapolating trends such as Moore's Law is better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He neither convinces me that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.
Given the title, I was slightly surprised that he predicts that AIs will become powerful slightly more gradually than I recall him suggesting previously (which is a good deal more gradual than most Singularitarians). He offsets this by predicting more dramatic changes in the 22nd century than I imagined could be extrapolated from existing trends.
His discussion of the practical importance of reversible computing is clearer than anything else I've read on this subject.
When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.
For instance (talking about the world circa 2030): "The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies." Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.
His reference to "the devastating effect that illegal file sharing has had on the music-recording industry" seems to say something undesirable about his perspective.
His comments on economists' thoughts about deflation are confused and irrelevant.
On page 92 he says "Is the problem that we are not running the evolutionary algorithms long enough? ... This won't work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won't help." If "conventional" excludes genetic programming, then maybe his claim is plausible. But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
His description of nanotech progress seems naive. (page 228) "Drexler's dissertation ... laid out the foundation and provided the road map still being followed today." (page 234): "each aspect of Drexler's conceptual designs has been validated". I've been following this area pretty carefully, and I'm aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler's road map, it's a well kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard about documenting his accurate claim that "no serious flaw in Drexler's nanoassembler concept has been described".
Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.
I'm bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don't indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky's Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky's guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
158 of 175 people found the following review helpful
5.0 out of 5 stars Technophilic ecstasy
The author is definitely one of the most inspiring of all researchers in the field of applied artificial intelligence. For those, such as this reviewer, who are working "in the trenches" of applied AI, his website is better than morning coffee. One does not have to agree with all the conclusions reached by the author in order to enjoy this book, but he does make a good case, albeit somewhat qualitative, for the occurrence, in this century, of what he and other futurists have called a "technological singularity." He defines this as a period in the future when the rate of technological change will be so high that human life will be "irreversibly transformed." There is much debate about this notion in the popular literature on AI, but in scientific and academic circles it has been greeted with mixed reviews. Such skepticism in the latter is expected and justified, for scientists and academic researchers need more quantitative justification than is usually provided by the enthusiasts of the singularity, whom in this book the author calls "singularitarians." Even more interesting, though, is that the notion of rapid technological change seems to be ignored by the business community, who actually stand to gain (or lose) the most by it.
Since this book is aimed primarily at a wide audience, and not professional researchers, the author does not include detailed arguments or definitions for the notion of machine intelligence or a list of the hundreds of examples of intelligent machines that are now working in the field. Indeed, if one were to include a discussion of each of these examples, this book would swell to thousands of pages. There are machines right now used in business and industry that can manage, troubleshoot, and analyze networks, diagnose illnesses, compose music definitely worth listening to, choreograph dances, simulate human behavior in computer games, recommend and engage in financial transactions and bargaining, and many, many other tasks, a detailed list of which would, again, entail many thousands of pages.
There are various psychological issues that arise when discussing machine intelligence, which if believed might prohibit the acceptance of any kind of notion of a technological singularity. For example, it is one of the historical peculiarities of research in AI that advances in the field are later trivialized, i.e. when a problem in AI becomes solved it no longer holds any mystery and is then considered to be just another part of information processing. It is then no longer regarded as "intelligent" in any sense of the term. This phenomenon in AI research might be called the "Michie-McCorduck-Hofstadter effect," named after the three individuals, Donald Michie, Pamela McCorduck, and Douglas Hofstadter, who discussed it in some detail in their writings. If one examines the history of AI, one finds many examples of this effect, such as in knowledge discovery from databases, the use of business rules in database technologies, and the use of ontologies for information systems development. One of the best examples of this effect, though, is the backgammon player TD-Gammon, a highly sophisticated example of machine intelligence which is now considered to be merely part of the "programmer's toolbox." The Michie-McCorduck-Hofstadter effect is important in discussing the notion of a technological singularity, since if one does occur this effect would diminish our ability to recognize it as being real. The author does not name this phenomenon as such in the book, but a reading of it definitely reveals that he is aware of the skepticism expressed by many towards any "advances" in machine intelligence.
Another one of these psychological issues regards the attitude of many philosophers toward the notion of machine intelligence. In most cases they are extremely skeptical, and many AI researchers seem to feel the need to "refute" their opinions on the "impossibility" of intelligent machines. Unfortunately the author is one of these, and devotes space in the book to countering various philosophical arguments against AI. His arguments, although valid, are really a waste of time. Such time would be better spent, both for the author and for AI researchers, in the actual development of intelligent machines. A moratorium should be declared among AI researchers on all philosophical speculation. Such musings are best left to professional philosophers, who have the time and the inclination to indulge themselves in them.
There are other issues that should have been given more attention in the book, such as more details on the energy requirements needed to bring about such a singularity. In addition, the author needs to sharpen just what he means by intelligence and move away from the Turing test/human brain benchmark that he uses in the book. There are many examples of intelligence in the natural world, and these can and have been emulated in many different types of machines. Interestingly, the fixation on human intelligence and the reverse engineering of the human brain (that is exemplified in this book) has inspired a few research teams to attempt to build a machine of "general intelligence," i.e. one that can think in many different domains, as clearly humans can. But it is still an open question whether this intelligence is "entangled" over these domains, i.e. whether or not a decrease in ability in one domain will affect the ability in another. From an evolutionary or efficiency standpoint it would seem that domain-specific intelligence is more optimal.
The notion of a technological singularity can be met with both exhilaration and a sense of foreboding, since (radical) change can be embraced with enthusiasm and with some feelings of anxiety. Even the author expresses this when he writes in the book that he is not "entirely comfortable" with all the consequences of a technological singularity. He has though made a fairly strong case for rapidly accelerating change. If the book concentrated more on the actual examples of intelligent machines and included the enormous amount of data from activities in applied AI that are now going on, an even stronger case could be made.
229 of 257 people found the following review helpful
3.0 out of 5 stars Brave New World
To say that Mr. Kurzweil is a bit of an optimist is like saying Shaq is a bit on the tall side. Mr. K is positively bubbling with enthusiasm. Had it not been taken by Joe Namath, a suitable title might have been "The Future's So Bright I Just Gotta Wear Shades". But therein lies the problem. Mr. K comes across more like a passionate evangelical than a reasoned scientist. Whenever someone is absolutely convinced about the rightness of his assumptions I become skeptical.
If you're reading this you know the premise of the book. Mr. K maintains that the pace of technological change (and by technology he means the really cool technologies, like infotech, biotech, and nanotech) is not simply increasing, but increasing exponentially, so fast that we will soon reach a point where man and machine have become one, and our brains are a million (or maybe a billion) times more powerful. When this happens everything we know will have changed forever.
Moreover, this is not something that will happen at some vague time in the far future. It's just around the corner. Mr. K even gives us a date: 2045.
While reading the book I kept thinking, what if Mr. K had written this in the mid-1950's? Certainly he'd have backup for his basic premise--the changes that occurred in the first half of the 20th century were indeed tremendous. Take aviation, a hot technology in those days. Mr. K would no doubt have observed that we went from Kitty Hawk to the Boeing 707 in just 50 years. Projecting ahead, Mr. K would have concluded that the second half of the century would see an even greater rate of advancement, so that by now we'd all have our own personal flying devices, zipping off to Europe in just minutes.
But that hasn't happened. Certainly there has been significant progress in aviation in the last 50 years, but not like the 50 years before that. In some ways it's worse. I suspect that since 9/11 the time it takes to fly from Los Angeles to San Francisco (from the time you get to one airport to the time you leave the other) may be longer now than it was in the 1950's.
Why has this happened? A lot of it has to do with social conditions, not technological ones. Supersonic transport never got off the ground (so to speak) in part because people didn't want the sonic booms near populated areas. These same social factors may well put the brakes on a lot of what Mr. K predicts.
It's not that Mr. K's book isn't based on hard science. It's positively larded with science, so much so that my eyes tended to glaze over many times. It's just that he doesn't seem very critical. While he does acknowledge the existence of contrary opinion, he quickly (albeit politely) dismisses any cautionary thoughts. Those who disagree with his beliefs are clearly stuck-in-the-mud, nay-saying Luddites.
Mr. K is obviously a brilliant, well-informed scientist. I don't have enough knowledge to judge the accuracy of his facts, except in a few situations. When that does occur, though, I become unimpressed. For example, he spends a few pages talking about the increases that have occurred in life expectancy, and uses this to project further increases to 150 years and then to 500 years. But he fails to distinguish between life expectancy and life span. The former has indeed increased, but the latter has not. I am certain Mr. K knows the difference. His failure to make the distinction is misleading and disingenuous. It makes me wonder about the veracity of the rest of the book.
As to the book itself, it's far too long. He repeats his points so much it seems as though he thinks that by mere repetition the reader will become more convinced that he's right. And some parts of the book are simply annoying, like the smug pseudo-conversations among past, present, and future personages that appear throughout the work.
To his credit, though, his optimism about the future is refreshing, and certainly an antidote to the dystopian views typical in literature and Hollywood (Brave New World, 1984, Blade Runner, Mad Max, The Terminator, Waterworld, etc.).
The bottom line here is that Mr. K. doesn't seem to remember that virtually all predictions about the future are wrong, since the predictions are simply extrapolations of current trends. The future is never what we think it will be, and Mr. K is no exception.
Then again, he could be right. If so, I just hope I can live long enough to enjoy the singularity, so I can have my body filled with nanobots and my brain uploaded to (as he would say) a suitable substrate. Maybe being a cyborg won't be so bad.
693 of 804 people found the following review helpful
4.0 out of 5 stars Technically brilliant, culturally constrained
Ray Kurzweil is unquestionably the most brilliant guru for the future of information technology, but Joel Garreau's book Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human covers the same ground, with the same lack of soul, and with more interesting and varied detail.
This is really four booklets in one: a booklet on the imminence of exponential growth within information technologies including genetics, nano-technology, and robotics; a booklet on the general directions and possibilities within each of these three areas; a booklet responding to critics of his past works; and lengthy notes. All four are exceptional in their detail, but somewhat dry.
I was disappointed to see no mention of Kevin Kelly's Out of Control: The Rise of Neo-Biological Civilization and just one tiny reference to Stewart Brand (co-evolution) in a note. Howard Rheingold (virtual reality) and Tom Atlee (collective intelligence) go unmentioned. It is almost as if Kurzweil, who is surely familiar with these "populist" works, has a disdain for those who evaluate the socio-cultural implications of technology, rather than only its technical merits.
This is an important book, but it is by a nerd for nerds. [Sorry, but anyone who takes 250 vitamin supplements and has a schedule of both direct intravenous supplements and almost daily blood testing, is an obsessive nerd however worthy the cause.] It assumes that information technologies, growing exponentially, will solve world hunger, eliminate disease, replenish water, create renewable energy, and allow all of us to have the bodies we want, and to see and feel in our mates the bodies they want. All of this is said somewhat blandly, without the socio-cultural exploration or global evaluation that is characteristic of other works by reporters on the technology, rather than the technologists themselves.
The book is, in short, divorced from the humanities and the human condition, and devoid of any understanding of the pathos and pathology of immoral governments and corporations that will do anything they can to derail progress that is not profitable. It addresses, but with cursory concern, most of the fears voiced by various critics about run-away machines and lethal technologies that self-replicate in toxic manners to the detriment of their human creators.
The book is strongest in its detailed discussion of both computing power and draconian drops in needed energy for both computing and for manufacturing using new forms of computing. The charts are fun and helpful. The index is quite good.
I put the book down, after a pleasant afternoon of study, with several feelings.
First, that I should give Joel Garreau higher marks for making this interesting, and recommend that his book be bought at the same time as this one.
Second, that there is an interesting schism between the Kurzweil-Gates gang that believes they can rule the world with machines; and the Atlee-Wheatley gang that believes that collective **human** intelligence, with machines playing a facilitating but not a dominant role, is the desired outcome.
Third, that there really are very promising technologies with considerable potential down the road, but that government is not being serious about stressing peaceful applications--the author is one of five advisors to the U.S. military on advanced technologies, and it distresses me that he supports a Defense Advanced Research Projects Agency (DARPA) that focuses on making war rather than peace--imagine if we applied the same resources to preventing war and creating wealth?
Fourth, information technologies are indeed going to change the balance of power among nations, states, and neighborhoods--on balance, based on his explicit cautions, I predict a real estate collapse in the over-priced major cities of the US, and a phenomenal rise of high-technology villages in Costa Rica and elsewhere.
The singularity may be near, as the author suggests, but between now and then tens of millions more will die. Technology in isolation is not enough--absent broad ethical context, it remains primarily a vehicle for nerds to develop and corporations to exploit. As I told an internal think session at Interval in the 1990's ("GOD, MAN, & INFORMATION: COMMENTS TO INTERVAL IN-HOUSE," Tuesday, 9 March 1993; findable via a Yahoo search), until our technologies can change the lives of every man, woman, and child in the Third World, they are not truly transformative. This book hints at a future that may not be achieved, not for lack of technology, but for lack of good will.
EDIT of 24 Oct 05: Tonight I will review James Howard Kunstler's The Long Emergency: Surviving the End of Oil, Climate Change, and Other Converging Catastrophes of the Twenty-First Century. His bottom line is that cheap oil underlies all of our suburban, high-rise, mega-agriculture, and car-based mobility, and that the end of cheap oil is going to have catastrophic effects on how we live, driving much of the country into poverty and dislocation, with the best lives being in those communities that learn to live with local agriculture and local power options. Definitely the opposite of what Kurzweil sees, and therefore recommended as a competing viewpoint.
EDIT of 12 Dec 07: ethics is something I have thought about a lot, and my first public article outside the intelligence community was entitled "E3i: Ethics, Ecology, Evolution, & Intelligence: An Alternative Paradigm for *National* Intelligence." It must be something about engineers. Neither the author of this book, nor the Google Triumvirate, seem to grasp the moral implications of technology run amok without respect for ethics, privacy, copyright, humanity, etc. This is one reason I admire E. O. Wilson so much--the first of his works that I read, Consilience: The Unity of Knowledge, answered the question: "Why do the sciences need the humanities?" The second, The Future of Life, answered the question, "What is the cost and how do we save the planet?" Science had little to do with the latter. The two authors are poles apart.
39 of 42 people found the following review helpful
5.0 out of 5 stars Agree or disagree, it's well worth a read
I'm going to rate this book five stars, because at nearly 500 pages packed with important ideas (plus another hundred pages of notes) there is no question that this weighty book was well worth my $20.
As you might expect, Ray is at the top of his game when examining trends in computer science. He has many examples of "narrow" A.I. to share. More importantly, he believes that computer modeling of brain functioning will yield the algorithms we need in order to eventually achieve an artificial general intelligence. Indeed, cognitive science is exploding thanks to increases in computing and scanning power, and the brain will likely yield up many of its secrets in coming years. I find his predictions in this area quite believable.
I found some of his arguments regarding nanotechnology to be less convincing. In particular, his predictions for nanorobotics seem optimistic beyond all reason given the currently nascent state of this technology. Examples drawn from the current state of the art seem almost hopelessly far removed from the robust and exceedingly powerful technology he imagines within 25 years. On the other hand, if these surprising predictions are borne out it will be a powerful confirmation of his "law of accelerating returns". I guess I'll be reserving judgement until then.
There's a lot more I could say (good and bad) about this important book, but the bottom line is that if you frequently find yourself wondering about the role of technology in the future of our species, "The Singularity is Near" will give you far more than your money's worth in food for thought.
102 of 121 people found the following review helpful
1.0 out of 5 stars The singularity is far
This review is from: The Singularity Is Near: When Humans Transcend Biology (Paperback)
This book is very well known, and the question of how many of the rather rapidly advancing technological trends will continue, and how they will influence humanity's future, is a very interesting one. So I bought the book and read it. I found it much, much weaker than I had anticipated.
Ray Kurzweil wrote a thick volume combining 50's style naive technology-optimism, uncritical extrapolation of current trends (especially, but not only, Moore's law) and somewhat-more-than-half knowledge of biology. He assembles all of that into his own personal pseudo-religion, and even uses a terminology that sounds very religious (he calls himself a "singularitarian"). According to Kurzweil, all will be well: hunger, disease, aging and even death will be eradicated once we fuse with computers and have nano-robots populate our bloodstreams. Even wars will be less bloody - he includes a graph of declining US war deaths over time, conveniently ignoring the numbers of foreign human beings killed by the US in these wars.
In most cases, his arguments are not very sound, in my opinion. One problem is that he strongly believes that all the current technological trends will continue to accelerate, disregarding physical boundaries and resource constraints. Often his argument goes like this: X has been achieved. Therefore XX is maybe, theoretically possible, said some expert. Once we have XX, we will be able to achieve YY. Hence, YY is about to become reality within a decade.
In my own field, neurobiology, he mistakes models (intellectual tools to explain certain aspects of a phenomenon) for complete, reverse-engineered, functional reproductions of neural systems. There are certainly good models out there, but no neural structure has so far been reverse engineered, not even close.
Always suspicious: the use of quotations from old or dead wise men to cover up the lack of content in a book. Just because someone managed to look up what Einstein or Wittgenstein once said, that does not make his arguments stronger, does it? But it leaves the reader in this aura of just having been confronted with the words of these intellectual giants, and some of that must rub off on what the author had to say, no? Kurzweil wins Olympic gold in name-dropping with "The Singularity is Near", where there are rarely fewer than three quotations in front of a chapter, and whole chapters are made up of quotations, nothing else!
This is in fact a rather unintentionally interesting book. Why does a member of the US upper class come up with a technology-based salvation story? I think what we have here is an extremely interesting fusion of the American belief in the power of technology to solve problems with the strong US religious tradition.
26 of 29 people found the following review helpful
5.0 out of 5 stars Insightful
This review is from: The Singularity Is Near: When Humans Transcend Biology (Paperback)
I think some of the reviewers are missing the point of this book. Kurzweil is not an optimist - and I don't even think he would consider himself a 'proponent' of GNR, specific IT advances, or the changes he is predicting. The whole point is that these advances are part of our evolution as a species - any resistance by governments, ethicists, or individuals is automatically calculated into his predictions. He's looking at the net effect of progress (spurred primarily by economics and economic darwinism, not by renegade or revolutionary scientists or technologists).
The advances he is predicting are based on the worldview that these advances are inevitable - just as our biological evolution was inevitable (especially with hindsight) - and all the technological advances (especially in the past 100 years) are proof that the speed of developing and adopting technologies into society is ever increasing, to a point where it is unstoppable and ubiquitous.
Take the cell phone example - some may resist the adoption of cell phones - saying that they invade their privacy, and overcomplicate their lives to a point that is unacceptable to them. This is a valid view, and individuals have the option to choose not to adopt this technology. But, the fact of the matter is that this technology has and is changing the world - the overwhelming majority of the world population does not object to cell phone use, and in fact many are being empowered by them (look at subscription rates in China and India over the past 6 months - something in the millions of new subscribers every month).
This technology changes society - it changes human interrelationships - and it changes human-technology relationships. Having a cellphone brings us one step closer to being 'always-on' - always connected. It comes closer to being integrated into our biology (you can sleep with a cellphone - carry it wherever you go - this level of connectivity previously would have required being physically tethered to a land-line).
There is little (if any) judgement in Kurzweil's conclusions. They are logically grounded (which is why he provides so many counterarguments and supporting data). They are based specifically on the worldview that our evolution is now in our hands, and that much of what we do with it can be predicted by how we've developed and adopted technologies in the past - or how biological evolution occurred. He admits to a large unknown - the fact that we don't know what the resulting convergence of technology and biology will look, or feel, like. The fact that this will happen does not allow us to see or even comprehend what this will mean for us.
My personal feeling is that this is the most worrisome part - the fact that the change may be so radical that some people (or even a class of people) may not survive the transition - or it could in fact create multiple classes of humans (humans & proto-humans). But, again, there is no judgement in this - if that is our fate, it will be. Just as wars in the past have determined the current global power-structure, there will likely be conflict involved in the process. I hope that some of these advances and their inherent connected nature will preclude or somehow prevent the conflict from being a violent one - but you have to imagine it is a possibility.
There is a lot of evidence to support the likelihood of Kurzweil's near-term worldview. If his predictions about the speed of change are correct, and you are one of the few capable of internalizing and understanding the implications, I believe you will be at an advantage in life and business. If you understand and believe the potential of this, but close your eyes to it because you don't like the implications, you will be one of the worst off when it does happen. Best of all, if you understand the implications and are in a position to direct them when they start to occur, you can help to make sure that they unfold in the most equitable and positive fashion possible.
24 of 27 people found the following review helpful
5.0 out of 5 stars Impressive
Ray Kurzweil is a well-known inventor and entrepreneur; he founded and managed a string of successful companies, most of them related to the application of artificial intelligence.
One of Kurzweil's interests is predicting future technological trends. He analyses technological progress and builds mathematical models that can, with a reasonable degree of accuracy, anticipate the progress of different technologies. His track record of predicting things is better than you would expect.
The whole book revolves around the concept of "The Law of Accelerating Returns". This is an extrapolation of Moore's law. Moore's law states that the number of transistors on an integrated circuit is doubling every 18 months. The law of accelerating returns states that the rate of technological progress in general is increasing exponentially.
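The arithmetic behind that doubling claim is easy to make concrete. Here is a minimal, purely illustrative Python sketch (the function names are mine, and the 18-month period is the figure quoted in this review, not a precise constant):

```python
# Illustrative sketch: how a fixed doubling period compounds over time.
# The 18-month (1.5-year) period is the figure quoted in the review.

def doublings(years, period_years=1.5):
    """How many doubling periods fit into a span of years."""
    return years / period_years

def growth_factor(years, period_years=1.5):
    """Total multiplicative growth after `years` of steady doubling."""
    return 2 ** doublings(years, period_years)

# 15 years = 10 doublings -> about a 1000-fold increase.
print(growth_factor(15))  # 1024.0
```

Ten doublings in fifteen years already means a thousand-fold increase, which is why the law of accelerating returns produces such startling long-range numbers from such a modest-sounding rule.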
Another point is that these trends are VERY stable, they exhibit smooth acceleration, and thus, they can be used to accurately predict the future. He makes an analogy with a gas - while the trajectory of each individual particle inside a gas appears as essentially random, the behavior of the WHOLE SYSTEM is predictable. The same is true for technological progress, while individual events are apparently random, the whole system moves according to a stable pattern, which makes its future states predictable.
Most of the book centers on analyzing what the future has in store for us. According to Kurzweil, we are approaching "the knee of the curve" of technological progress: a point where progress will be so fast that unenhanced human intelligence will no longer be able to track it. This point is called "the singularity," meaning explosive technological growth, and according to the book it will happen around 2045. He predicts complete understanding of biology by 2020 (which will enable us to modify our bodies to live forever), self-replicating nanotech by 2025, strong AI by 2029, and eventually a fusion between human and machine intelligence, followed by a positive-feedback loop in which we continue to (exponentially) increase our intelligence until all matter in the universe becomes optimized for computation.
Do not dismiss the book simply because of its stranger-than-fiction conclusions. I found the arguments behind his statements VERY solid and had great difficulty finding any fault with them. First READ the book, then judge for yourself.
Many people do not agree with him, but their main "reason" for disagreeing is basically that "this doesn't feel right." Ray explains that the main reason it "doesn't feel right" is that people generally use linear thinking.
Suppose somebody asked you what the world will look like 10 years from now. How do you go about answering such a question? You'll probably try to remember what things were like 10 years ago and project a similar change into the future. That makes sense, right? WRONG! The assumption underlying this reasoning is that progress is linear, that things will change AT THE SAME RATE in the next 10 years as they did in the last 10. Intuition is incapable of grasping exponential growth and thus fails miserably at predicting the future of technology.
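The gap between the two styles of projection is easy to show with made-up numbers. Suppose some capability grew from 50 to 100 units over the past decade; the figures below are hypothetical, chosen only to contrast the two assumptions.

```python
# Hypothetical capability that grew from 50 to 100 units in a decade.
past, present = 50, 100

for decades_ahead in (1, 2, 3):
    # Linear thinking: the same ABSOLUTE gain repeats each decade.
    linear = present + decades_ahead * (present - past)
    # Exponential thinking: the same growth RATIO repeats each decade.
    exponential = present * (present / past) ** decades_ahead
    print(decades_ahead, linear, exponential)
# 1 150 200.0
# 2 200 400.0
# 3 250 800.0
```

The linear forecast adds 50 per decade while the exponential one doubles, so the two diverge more the further out you look; this is the intuition failure the reviewer is describing.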
The book gets quite technical, and you find yourself reading the same paragraph 10 times over trying to understand some complex concept or to figure out how some piece of exotic technology works, but overall I'd say 90% of it can be read and understood by people with basic technical skill. Ray's knowledge and understanding of numerous scientific fields, as well as his view of "the big picture," is impressive. He also has a very refreshing, clear and logical style of writing.
The law of accelerating returns is the main theme of the book, and after having read the arguments in favor of it I think it's indisputable. Some people place too much emphasis on Kurzweil's timing of events. It doesn't matter whether strong AI is achieved by 2025 or 2045; the idea of exponential progress is VERY important, and the book is worth its money for that alone.
If you're still there, thanks :)
This book is definitely worth it. The perspective it offers, if properly understood, can and WILL change your outlook on life. Read it!
50 of 60 people found the following review helpful
5.0 out of 5 stars "Get Ready! The Beginning is Near!",
"Repent! The End is Near!"
If I saw a person holding up this sign on a street corner, I might think, "Poor fellow. Where has his mind gone? Too bad there are crazy people like this in the world."
Yet in Ray Kurzweil's book "Singularity," the message is even more far out, though it reads more like,
"Get Ready! The Beginning is Near!"
And yet, with Kurzweil, my response is, "Okay, I understand so far. Tell me more." Then I see the data. Then I see his inexorable logic. I would bet a lot of money on his predictions. "Singularity" is the most startling book I have ever read in my life (and I have read a lot of great books).
Well before the year 2030 (within 25 years), if you are still alive, you will have the choice about whether or not you want to "live forever" (in THIS reality; not some "afterlife").
Well before 2030, there will be a computer that, by all measures, will be smarter than the smartest "regular human" (i.e. non-computer-enhanced human) on this earth. This computer will then be able to invent an even smarter computer, which will then be able to invent an even smarter computer, which....
The changes in the next 14 years will equal or exceed the changes since 1955 (the last 50 years). And double again. And double again. And double again... We are fast approaching the asymptotic infinity of change and "progress"!
And there is basically nothing we can do about it. It will happen whether we like it or not (and most of us will end up liking it). We can "manage" it to some extent in order to provide a measure of protection against the end-of-the-world scenarios that could arise, either accidentally or intentionally, out of this run-away progress.
In his close-to-700-page manifesto, this is the essence of our future that Ray Kurzweil paints for us.
As an author myself ("Courage: the Choice that Makes the Difference-Your Key to a Thousand Doors"), I have a deep respect for what it takes to write a great book. The only other author that comes to mind whose breadth of knowledge and wisdom would compare with Kurzweil is philosopher Ken Wilber (although their writings are quite dissimilar). In reading Kurzweil I am continually amazed by the breadth and depth of his insights and conclusions.
There is one issue that he addresses from many perspectives (will computers become conscious? - his answer is "yes") that I cannot get my mind around. Even though his logic makes "sense" to me, I still can't quite accept it. However, that is not a big issue for me (as it might be for others), since, for all intents and purposes, I can totally accept that computers will be able to APPEAR as fully human (should they "choose" to do so).
I noticed that some of the other reviewers of "Singularity" have faulted Kurzweil for his optimism. Although I can see their point, I think that neither optimism nor pessimism is most appropriate here. Obviously we are facing an eventuality that holds the possibility of both the greatest promise as well as the greatest peril. Creativity, intelligence, and courage are our best tools at this unprecedented time in the history of our solar system.
I give "Singularity" five stars. It earns that rating based solely on the "wake up call" it is for humanity.
23 of 26 people found the following review helpful
4.0 out of 5 stars Reversible computer projections may be over-optimistic,
Kurzweil's projections are all too frighteningly plausible in many respects, but I retain some doubts. This is because, on the one topic Kurzweil mentions that I am a certified expert in (having worked in the field in depth for 10 years now) - namely, reversible computing - I can attest that he is being much more optimistic than is warranted by a comprehensive examination of scientific and technical progress on the subject. To build a *practical* reversible computer has turned out to be an extremely difficult engineering problem, and might not even be possible. Although it is not technically a perpetual motion machine, the goal of reversible computing is really to get as close to perpetual motion as possible, and accomplish this in a complex machine with many interacting parts that goes through an intricate, non-cyclic trajectory. Achieving this requires a near-exact correspondence between the natural built-in physical dynamics of the manufactured system and the logical structure of the desired computation; we must really track where *all* energy and information goes in the mechanism, and ensure that it all is continually redirected in a controlled way into new useful processes. One finds that this is much easier said than done when one gets down into the nitty-gritty engineering details having to do with eliminating unwanted reflections of resonator energy into undesired modes, precise load-balancing in the logic, and so forth. In fact, as of this writing, we still don't even have a truly *complete* and physically realistic *theoretical* model of reversible computing that fully accounts for all of the important physical constraints (such as momentum conservation), let alone a working demonstration of any physical system more complex than a simple cyclical oscillator that does anything computationally meaningful (i.e., beyond just "computing its own evolution") with a high system-level energy recovery efficiency.
This doesn't mean that it can't eventually be accomplished, but the progress to date has been glacially slow, and I see little indication that the necessary heavy investments in basic research will be made any time soon. Further, even if the physical problems are solved, reversible computing in general imposes substantial computational complexity overheads (much more than Kurzweil suggests). If these difficulties are not soluble (and they might not be), it seems that computer performance per unit of power consumption may be forced to level off for all practical purposes (either temporarily or permanently) within the next few decades by Landauer's limit. Whether that will happen early enough to prevent singularity-like effects from occurring, I don't know. But the fact that Kurzweil seems over-optimistic (and makes several misstatements) about the one field that I know the most about makes me suspicious that he may be over-optimistic in other technical areas as well.
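The Landauer's-limit ceiling the reviewer invokes is easy to quantify with a back-of-the-envelope calculation. The bit-erasure rate below is a made-up illustrative figure, not a claim about any real processor; the physical constants are standard.

```python
import math

# Landauer's limit: irreversibly erasing one bit dissipates at least
# k_B * T * ln(2) of energy as heat.
k_B = 1.380649e-23        # Boltzmann constant, J/K (exact SI value)
T = 300.0                 # room temperature, K

e_per_bit = k_B * T * math.log(2)   # ~2.87e-21 J per erased bit

# A hypothetical irreversible machine erasing 10**21 bits per second
# would therefore dissipate at least this many watts, no matter how
# clever its circuits are:
bits_per_second = 1e21
power_floor = e_per_bit * bits_per_second

print(f"{e_per_bit:.2e} J/bit -> floor of {power_floor:.2f} W")
```

Since the per-bit cost is fixed by physics, performance per watt for irreversible computing cannot improve indefinitely; escaping the floor requires the reversible techniques whose engineering difficulty the reviewer describes.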
The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil