Superintelligence: Paths, Dangers, Strategies Reprint Edition
"I highly recommend this book" --Bill Gates
"Terribly important. Groundbreaking, extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole. If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics
"Nick Bostrom's excellent book "Superintelligence" is the best thing I've seen on this topic. It is well worth a read." --Sam Altman, President of Y Combinator and Co-Chairman of OpenAI
"Worth reading. We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkley
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT
"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist
"There is no doubting the force of [Bostrom's] arguments. The problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society
"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
About the Author
Nick Bostrom is the recipient of a Eugene R. Gannon Award and has been listed on Foreign Policy's Top 100 Global Thinkers list twice. He was included on Prospect's World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.
- ASIN : 0198739834
- Publisher : Oxford University Press; Reprint edition (May 1, 2016)
- Language : English
- Paperback : 390 pages
- ISBN-10 : 0198739834
- ISBN-13 : 978-0198739838
- Item Weight : 15.8 ounces
- Dimensions : 7.7 x 1 x 5.1 inches
- Best Sellers Rank: #7,609 in Books
Top reviews from the United States
Artificial intelligence is a fraudulent hoax — or in the best cases it’s a hyped-up buzzword that confuses and deceives. The much better, more precise term would usually be machine learning, which is genuinely powerful and which everyone ought to be excited about.
It's time for the term AI to be “terminated”!
Eric Siegel, Ph.D.
Author, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or
First, the level of abstraction really is taken to an extreme. Forget about any relation between the arguments in this book and anything we've actually been able to do in AI research today. You won't find a discussion of a single algorithm, or even an exploration of the higher-level mathematical properties of existing algorithms, in this book. As a result, this book could have been written 30 years ago, and its arguments wouldn't be any different. Fine, I guess (the author, after all, is a philosophy professor, not a computer scientist); but I found this lacking at times. It gets particularly boring when the author does spend page after page introducing a framework for how our AI algorithms could improve (through speed improvement, or quality improvement, etc.) - but still doesn't tie it to anything concrete. If you want to take the abstraction high road, just dispense with super-generalized frameworks like this altogether and get to the point. The same goes for the discussion of where the recalcitrance of a future AI will come from, whether from software, content, or hardware: purely abstract and speculative, even though there are real-world examples of hardware evolution outpacing software design speed and the other way around (e.g., the trouble electronic design automation has keeping up with Moore's Law).
Second, even if you operate fully in the realm of speculation, at least make that speculation tangible and interesting. A list of things an AI could be good at includes items like "social persuasion" (= convince governments to do something, and hack the internet). It struck me many times as the kind of ideas you'd come up with if you thought about a particular scenario for a few minutes over a beer with friends. Very few counterintuitive ideas in there. One chapter grandly announces the presentation of an elaborate "takeover scenario", i.e., how a superintelligence would actually take over the world - and again it remains completely abstract, neither original nor practical. ("AI becomes smart, starts improving itself, takes over the world" - couldn't have guessed that myself.)
Third, a lot of the inferences in the book struck me as nothing more than one-step inferences, making it a relatively shallow brainstorming-type book. ("This could happen, and also this other thing could happen, and this third thing as well.") Systematic exploration of a large decision tree gets interesting when you start combining lots of different scenarios in counter-intuitive ways. Again the "friends over a beer" problem. At times the philosophizing in some chapters reads like a mildly interesting Star Trek episode (such as the one about how to best set goals for an AI so that it acts morally and doesn't kill us). In the best and worst ways.
But every now and then, there's a clever historical analogy, and an interesting idea. Ronald Reagan wasn't willing to share the technology on how to efficiently milk cows, but he offered to share SDI with the USSR - how would AI be shared? Or, the insight that the difference between the dumbest and smartest human alive is tiny on a total intelligence scale (from IQ 75 to IQ 180) - and that this means an AI would likely look to humans as if it very suddenly leapt from being really dumb to unbelievably smart, bridging this tiny human intelligence gap extremely quickly. But what struck me about the best ideas in the book is that it almost always quotes just one guy, Eliezer Yudkowsky... which made me think that if I wanted to read a thought-provoking, counterintuitive book on AI superintelligence (as opposed to a treatise that at times appears to gloss over the shallowness of its ideas by making up for it with long text), I should just go and read Yudkowsky.
All in all though, the topic itself is so interesting that it's worth giving the book a try.
The author considers three methods to produce a superintelligence.
1. Artificial intelligence programming (AI): In a brief history of AI, the author narrates its development from past failures to current accomplishments like self-driving cars. AI has had an uneven career but is an important technique today, and this seems to be the author's preferred method. From his narrative, it does not appear that superintelligence would require any new AI techniques.
2. Emulation of the human brain itself, by creating a simulated version derived from a vitrified brain whose structure has been scanned from multiple angles to produce a three-dimensional image. This "brain" could then be emulated on a computer. (He really uses the word "vitrified" and supplies notes on the procedure and a list of capabilities required. But I don't see how a vitrified brain can tell us very much about the paths of nerve impulses, nor about when they are triggered.)
3. A large "team brain" obtained through a drug- and genetic analysis-aided eugenics program on humans. Development would be speeded up by using stem cells converted to gametes. Offspring would be selected for IQ, although this might not work too well; everyone admits that environment plays a large part in IQ. Apparently all those intially selected to participate in the program would be eugenically superior. I don't think this would gain political acceptance.
Assuming a single SI (the author calls it a superintelligent singleton since initially there would be no other SI's to compete with it) is established in the world, what would it do? The author begins his analysis by stating that although humanity is dominant over other species, the human superiority in intelligence is not substantial. This implies humanity is vulnerable, so we must be on the alert. But even if he thinks little of humanity's intelligence, he should see that other factors are involved, notably the highly developed language of humans and their larger units of social organization. If the SI is antagonistic, as the author generally assumes, humanity is in danger from a scheming adversary with an unfriendly attitude to humanity. In dealing with this potential enemy, the fundamental question is "How does the SI see the world?" A naïve question perhaps, but in some way it needs to be answered. It's hard to ask this question about a machine, since it is supposedly constructed by human beings with human purposes in mind.
The author has dismissed the question of the SI's inner mental life early in the book: "The definition is noncommittal about how the superintelligence is implemented. It is also noncommittal regarding qualia: whether a superintelligence would have subjective conscious experience might matter greatly for some questions (in particular for some moral questions), but our primary focus here is on the causal antecedents and consequences ..." (p. 26). In the human brain, emotion, motivation, and long-term memory are centered in the limbic system, which is below the cortex (the intellectual part) and next to the brainstem (the reflex part). But the SI is supposed to have no such thing. Because there is no discussion of emotion, will, motivation, or values, most of the following chapters are incomplete.
The author seems unaware of this issue. He envisions (in accordance with Turing's famous 1950 essay) the SI starting as a "seed AI" and being "brought up" by a programmer-teacher. (One question that occurred to me was that the intelligence programmed into the seed AI will be limited by the intelligence of its teacher.) It's hard for me to believe that he thinks of intelligence as merely a matter of knowledge, but this is what he appears to believe. In Chapter 6 on "Cognitive Superpowers", he cautions, "It is important not to anthropomorphize superintelligence when thinking about its potential impacts. ... The most essential characteristic of a seed AI, aside from being easy to improve (having low recalcitrance), is being good at exerting optimization power to amplify a system's intelligence: a skill which is presumably closely related to doing well in mathematics, programming, engineering, computer science research, and other such "nerdy" pursuits. [The word "presumably" leaps over an abyss.] ... With sufficient skill at intelligence amplification, all other intellectual abilities are within a system's indirect reach: the system can develop new cognitive modules and skills as needed -- including empathy, political acumen, and any other powers stereotypically wanting in computer-like personalities." (p. 111)
"Intelligence amplification" is an old term that has recently been updated. As far as I can see from magazines like Wired and Gizmodo, it seems to be another form of AI but with electrodes implanted in the brain. But what's really surprising is the statement that empathy, political acumen, and other powers are developed by "sufficient skill at intelligence amplification." They are powers, not dispositions, i.e., their controller can turn them on and off. With this vast intellect, the SI will certainly have sufficient "political acumen" to avoid wasting empathy for those whom its intelligence can see are generally despised and lacking in political power. In short, the SI has nothing like democratic values at all. (Perhaps an instructional module can be developed for democratic values. But that too could be turned on and off.)
The context gets more unreal as we read on. In the following chapters the author speculates about what the long-term goals of an SI would be, using an "orthogonality thesis" (that there is little if any relation between the intelligence of a system and its final goals) and an "instrumental convergence thesis" (that certain instrumental values are likely to be pursued by a wide range of intelligent agents). The author can point to the cleverness of concealing goals harmful to humanity behind harmless ones (in accordance with orthogonality), and to goals like cognitive enhancement, resource acquisition, and so on as goals that every sensible SI will want to adopt (instrumental convergence). I can make nothing of this; these goals are too abstract for me to draw any conclusions. Presumably the goals should be determined when the programmer-teacher is loading the SI's intelligence. Why can't goals be set at the earliest stage?
The basic problem with this book is that it makes no attempt to achieve an exact picture of anything. Consequently the reader is left floating in a world of unconvincing abstractions.
Top reviews from other countries
All in all, throughout the book I had an uneasy feeling that the author is trying to trick me with a philosophical sleight of hand. I don't doubt Bostrom's skills with probability calculations or formalizations, but the principle "garbage in - garbage out" applies to such tools also. If one starts with implausible premises and assumptions, one will likely end up with implausible conclusions, no matter how rigorously the math is applied. Bostrom himself is very aware that his work isn't taken seriously in many quarters, and at the end of the book, he spends some time trying to justify it. He makes some self-congratulatory remarks to assure sympathetic readers that they are really smart, smarter than their critics (e.g. "[a]necdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution" [p. 376]), suggests that his own pet project is the best way forward in philosophy and should be favored over other approaches ("We could postpone work on some of the eternal questions for a little while [...] in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors" [p. 315]), and ultimately claims that "reduction of existential risk" is humanity's principal moral priority (p. 320).

Whereas most people would probably think that concern for the competence of our successors would push us towards making sure that the education we provide is both of high quality and widely available, and that our currently existing and future children are well fed and taken care of, and that concern for existential risk would push us to fund action against poverty, disease, and environmental degradation, Bostrom and his buddies at their "extreme end of the intelligence distribution" think this money would be better spent funding fellowships for philosophers and AI researchers working on the "control problem".
Because, if you really think about it, what of a millions of actual human lives cut short by hunger or disease or social disarray, when in some possible future the lives of 10^58 human emulations could be at stake? That the very idea of these emulations currently only exists in Bostrom's publications is no reason to ignore the enormous moral weight they should have in our moral reasoning!
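The sarcasm here rests on plain expected-value arithmetic: once the stakes are allowed to be astronomically large, even a vanishingly small probability dominates the product. A toy sketch of that reasoning (all probabilities and the one-million figure are made up for illustration; only the 10^58 emulations figure comes from the argument the reviewer is mocking):

```python
def expected_lives(probability: float, lives_at_stake: float) -> float:
    """Expected number of lives affected = probability x stakes."""
    return probability * lives_at_stake

# A certain, present-day intervention: save one million actual lives.
present = expected_lives(1.0, 1e6)

# A speculative future: even a one-in-a-trillion chance of affecting
# 10^58 emulated lives swamps the present-day figure.
speculative = expected_lives(1e-12, 1e58)

print(f"present-day intervention: {present:.3e}")
print(f"speculative scenario:     {speculative:.3e}")
print(f"ratio:                    {speculative / present:.1e}")
```

On these invented numbers the speculative scenario "outweighs" the certain one by forty orders of magnitude, which is exactly the style of reasoning the reviewer is objecting to.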
Despite the criticism I've given above, the book isn't necessarily an uninteresting read. As a work of speculative futurology (is there any other kind?) or informed armchair philosophy of technology, it's not bad. But if you're looking for an evaluation of the possibilities and risks of AI that starts from our current state of knowledge - no magic allowed! - then this is definitely not the book for you.
Nick Bostrom spells out the dangers we potentially face from a rogue, or uncontrolled, superintelligence unequivocally: we’re doomed, probably.
This is a detailed and interesting book, though 35% of it is footnotes, bibliography, and index. This should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable in unpacking the most utility from this book than knowledge of computer programming or science. But then you are not going to get a book on the existential threat of Thomas the Tank Engine from a Professor in the Faculty of Philosophy at Oxford University.
A good understanding of economic theory would also help any reader.
Bostrom lays out in detail the two main paths to machine superintelligence: whole brain emulation and seed AI and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence.
At times the book is repetitive, making the same point in slightly different scenarios. It was almost as if he were cutting and shunting set phrases and terminology into slightly different ideas.
Overall it is an interesting and thought-provoking book at whatever level the reader interacts with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories.
“Everything is vague to a degree you do not realise till you have tried to make it precise”, the book quotes (from Bertrand Russell).
The one area in which I feel Nick Bostrom's sense of balance wavers is in extrapolating humanity's galactic endowment into an unlimited and eternal capture of the universe's bounty. As Robert Zubrin lays out in his book Entering Space: Creating a Space-Faring Civilization, it is highly unlikely that there are no interstellar species in the Milky Way: if/when we (or our AI offspring!) develop that far, we will most likely join a club.
The Abolition of Sadness, a recent novella by Walter Balerno, is a tightly drawn, focused sci-fi/whodunit showcasing exactly Nick Bostrom's point. Once you start, it pulls you in and down, as characters develop and certainties melt: when the end comes, the end has already happened...