Superintelligence: Paths, Dangers, Strategies Reprint Edition
"I highly recommend this book" --Bill Gates
"Terribly important. Groundbreaking, extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole. If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics
"Nick Bostrom's excellent book "Superintelligence" is the best thing I've seen on this topic. It is well worth a read." --Sam Altman, President of Y Combinator and Co-Chairman of OpenAI
"Worth reading. We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkeley
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT
"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist
"There is no doubting the force of [Bostrom's] arguments. The problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society
"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
About the Author
Nick Bostrom is a recipient of the Eugene R. Gannon Award and has been listed on Foreign Policy's Top 100 Global Thinkers list twice. He was included on Prospect's World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.
- ASIN : 0198739834
- Publisher : Oxford University Press; Reprint edition (May 1, 2016)
- Language : English
- Paperback : 390 pages
- ISBN-10 : 0198739834
- ISBN-13 : 978-0198739838
- Item Weight : 15.8 ounces
- Dimensions : 7.7 x 1 x 5.1 inches
- Best Sellers Rank: #4,737 in Books
Top reviews from the United States
Artificial intelligence is a fraudulent hoax — or in the best cases it’s a hyped-up buzzword that confuses and deceives. The better, more precise term would instead usually be machine learning, which is genuinely powerful and something everyone ought to be excited about.
It's time for the term AI to be “terminated”!
Eric Siegel, Ph.D.
Author, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or
What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail.
When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focusses on.
He sees three main paths to superintelligence:
1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc. are applied to bring about a superintelligence.
2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body. If your memories, thoughts and capabilities arise from the connectivity structure and patterns/timings of neural firings of your brain, then your consciousness should awaken in that synthetic body.
The beauty of this approach is that humanity would not have to understand how the brain works. It would simply have to copy the structure of a given brain (to a sufficient level of molecular fidelity and precision).
3. The Neuromorphic path -- In this case, neural network modeling and brain emulation techniques would be combined with AI technologies to produce a hybrid form of artificial intelligence. For example, instead of copying a particular person's brain with high fidelity, broad segments of humanity's overall connectome structure might be copied and then combined with other AI technologies.
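The key claim in paths 2 and 3 is that a mind could be copied as pure connectivity data, without any understanding of how the brain works. A minimal sketch can make that idea concrete (the class and field names here are hypothetical, invented purely for illustration):

```python
# Sketch: a connectome as a bare data structure. Each neuron is an ID,
# and the "mind" is just the weighted, directed connectivity among IDs.

class Connectome:
    def __init__(self):
        # synapses[src][dst] = synaptic strength
        self.synapses = {}

    def connect(self, src, dst, strength):
        self.synapses.setdefault(src, {})[dst] = strength

    def copy(self):
        # "Emulation" here requires no theory of brain function --
        # only faithful copying of structure.
        clone = Connectome()
        clone.synapses = {s: dict(d) for s, d in self.synapses.items()}
        return clone

brain = Connectome()
brain.connect("n1", "n2", 0.8)
brain.connect("n2", "n3", -0.3)
upload = brain.copy()
assert upload.synapses == brain.synapses       # identical structure
assert upload.synapses is not brain.synapses   # but an independent copy
```

A real emulation would of course also need synapse microstructure and firing dynamics, as the review notes; the point of the sketch is only that the artifact being copied is data, not understanding.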
Although Bostrom's writing style is quite dense and dry, the book covers a wealth of issues concerning these 3 paths, with a major focus on the control problem. The control problem is the following: How can a population of humans (each of whose intelligence is vastly inferior to that of the superintelligent entity) maintain control over that entity? When comparing our intelligence to that of a superintelligent entity, it will be (analogously) as though a bunch of, say, dung beetles are trying to maintain control over the human (or humans) that they have just created.
Bostrom makes many interesting points throughout his book. For example, he points out that a superintelligence might very easily destroy humanity even when the primary goal of that superintelligence is to achieve what appears to be a completely innocuous goal. He points out that a superintelligence would very likely become an expert at dissembling -- and thus able to fool its human creators into thinking that there is nothing to worry about (when there really is).
I find Bostrom's approach refreshing because I believe that many AI researchers have been either unconcerned with the threat of AI or they have focussed only on the threat to humanity once a large population of robots is pervasive throughout human society.
I have taught Artificial Intelligence at UCLA since the mid-80s (with a focus on how to enable machines to learn and comprehend human language). In my graduate classes I cover statistical, symbolic, machine learning, neural and evolutionary technologies for achieving human-level semantic processing within that subfield of AI referred to as Natural Language Processing (NLP). (Note that human "natural" languages are very different from artificially created technical languages, such as mathematical, logical or computer programming languages.)
Over the years I have been concerned with the dangers posed by "run-away AI" but my colleagues, for the most part, seemed largely unconcerned. For example, consider a major introductory text in AI by Stuart Russell and Peter Norvig, titled: Artificial Intelligence: A Modern Approach (3rd ed), 2010. In the very last section of that book Norvig and Russell briefly mention that AI could threaten human survival; however, they conclude: "But, so far, AI seems to fit in with other revolutionary technologies (printing, plumbing, air travel, telephone) whose negative repercussions are outweighed by their positive aspects" (p. 1052).
In contrast, my own view has been that artificially intelligent, synthetic entities will come to dominate and replace humans, probably within 2 to 3 centuries (or less). I imagine three (non-exclusive) scenarios in which autonomous, self-replicating AI entities could arise and threaten their human creators.
(1) The Robotic Space-Travel scenario: In this scenario, autonomous robots are developed for space travel and asteroid mining. Unfortunately, many people believe in the alternative "Star Trek" scenario, which assumes that: (a) faster-than-light (warp drive) will be developed and (b) the galaxy will be teeming, not only with planets exactly like Earth, but also these planets will be lacking any type of microscopic life-forms dangerous to humans. In the Star Trek scenario, humans are very successful space travelers.
However, it is much more likely that reaching a nearby planet, say, 100 light years away, will require humans to travel for a thousand years (at 1/10th the speed of light) in a large metal container, all the while trying to maintain a civilized society as they are being constantly irradiated while they move about within a weak gravitational field (so their bones waste away while they constantly recycle and drink their urine). When their distant descendants finally arrive at the target planet, these descendants will very likely discover that the target planet is teeming with deadly, microscopic parasites.
Humans have evolved on the surface of the Earth, and thus their major source of energy depends on oxygen. To survive they must carry their environment around with them. In contrast, synthetic entities will require no oxygen or gravity. They will not be alive (in the biological sense) and therefore will not have to expend any energy during the voyage. A simple clock can turn them on once they have arrived at the target planet, and they will be unaffected by any forms of alien microbial life.
If there were ever a conflict between humans and these space-traveling synthetic AI entities, who would have the advantage? The synthetic entities would be looking down on us from outer space -- a definitive advantage. (If an intelligent alien ever visits Earth, it is 99.9999% likely that whatever exits the alien spacecraft will be a non-biological, synthetic entity -- mainly because space travel is just too difficult for biological creatures.)
(2) The Robotic Warfare scenario: No one wants their (human) soldiers to die on the battlefield. A population of intelligent robots that are designed to kill humans will solve this problem. Unfortunately, if control over such warrior robots is ever lost, then this could spell disaster for humanity.
(3) The Increased Dependency scenario: Even if we wanted to, it is already impossible to eliminate computers because we are so dependent on them. Without computers our financial, transportation, communication and manufacturing services would grind to a halt. Imagine a near-future society in which robots perform most of the services now performed by humans and in which the design and manufacture of robots are handled also by robots. Assume that, at some point, a new design results in robots that no longer obey their human masters. The humans decide to shut off power to the robotic factory but it turns out that the hydroelectric plant (that supplies it with power) is run by robots made at that same factory. So now the humans decide to halt all trucks that deliver materials to the factory, but it turns out that those trucks are driven by robots, and so on.
I had always thought that, for AI technology to pose an existential danger to humanity, it would require processes of robotic self-replication. In the Star Trek series, the robot Data is more intelligent than many of his human colleagues, but he has no desire to make millions of copies of himself, and therefore he poses less of a threat than, say, South American killer bees (which have been unstoppable as they have spread northward).
Once synthetic entities have a desire to improve their own designs and to reproduce themselves, then they will have many advantages over humans. Here are just a few:
1. Factory-style replication: Humans require approximately 20 years to produce a functioning adult human. In contrast, a robotic factory could generate hundreds of robots every day. The closest event to human-style (biological) replication will occur each time a subset of those robots travel to a new location to set up a new robotic factory.
2. Instantaneous learning: Humans have always dreamt of a "learning pill" but, instead, they have to undergo that time-consuming process called "education". Imagine if one could learn how to fly a plane just by swallowing a pill. Synthetic entities would have this capability. The brains of synthetic entities will consist of software that executes on universal computer hardware. As a result, each robot will be able to download additional software/data to instantly obtain new knowledge and capabilities.
3. Telepathic communication: Two robots will be able to communicate by radio waves, with robot R1 directly transmitting some capability (e.g., data and/or algorithms learned through experience) to another robot R2.
4. Immortality: A robot could back up a copy of its mind (onto some storage device) every week. If the robot were destroyed, a new version could be reconstructed with just the loss of one week's worth of memory.
5. Harsh Environments: Humans have developed clothing in order to be able to survive in cold environments. We go into a closet and select thermal leggings, gloves, goggles, etc. to go snowboarding. In contrast, a synthetic entity could go into its closet and select an alternative, entire synthetic body (for survival on different planets with different gravitational fields and atmospheres).
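Points 2 and 4 above both reduce to a single fact: a software mind is copyable data, so "learning" and "surviving destruction" become data transfers. A toy sketch, with all names hypothetical:

```python
import copy

class Robot:
    def __init__(self):
        self.skills = set()
        self.memories = []

    def download_skill(self, skill):
        # "Instantaneous learning": acquiring a capability is a
        # data transfer, not a years-long education.
        self.skills.add(skill)

    def backup(self):
        # "Immortality": snapshot the entire mind-state.
        return copy.deepcopy(self.__dict__)

    def restore(self, snapshot):
        self.__dict__ = copy.deepcopy(snapshot)

r1 = Robot()
r1.download_skill("fly_plane")
snapshot = r1.backup()

r1.memories.append("one week of new experience")
# r1 is destroyed; a new body is restored from the last backup,
# losing only the memories recorded after the snapshot.
r2 = Robot()
r2.restore(snapshot)
assert "fly_plane" in r2.skills
assert r2.memories == []
```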
What is fascinating about Bostrom's book is that he does not emphasize any of the above. Instead, he focusses his book on the dangers, not from a society of robots more capable than humans, but, instead, on the dangers posed by a single entity with superintelligence coming about. (He does consider what he calls the "multipolar" scenario, but that is just the case of a small number of competing superintelligent entities.)
Bostrom is a professor of philosophy at Oxford University and so the reader is also treated to issues in morality, economics, utility theory, politics, value learning and more.
I have always been pessimistic about humanity's chance of avoiding destruction at the hands of its future AI creations, and Bostrom's book focusses on the many challenges that humanity may (soon) be facing as the development of a superintelligence becomes more and more likely.
However, I would like to point out one issue that I think Prof. Bostrom mostly overlooks. The issue is Natural Language Processing (NLP). He allocates only two sentences to NLP in his entire book. His mention of natural language occurs in Chapter 13, in his section on "Morality models". Here he considers that, when giving descriptions to the superintelligence (of how we want it to behave), its ability to understand and carry out these descriptions may require that it comprehend human language, for example, the term "morally right".
"The path to endowing an AI with any of these concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by 'morally right' " (p. 218)
I fear that Bostrom has not sufficiently appreciated the requirements of natural language comprehension and generation for achieving general machine intelligence. I don't believe that an AI entity will pose an existential threat until it has achieved at least a human level of natural language processing (NLP).
Human-level consciousness is different than animal-level consciousness because humans are self-aware. They not only think thoughts about the world; they also think thoughts about the fact that they are thinking thoughts. They not only use specific words; they are aware of the fact that they are using words and how different categories of words differ in functionality. They are not only capable of following rules; they are aware of the fact that rules exist and that they are able to follow (or not follow) those rules. Humans are able to invent and modify rules.
Language is required to achieve this level of self-reflective thought and creativity. I define (human-level natural) language as any system in which the internal structures of thought (whatever those happen to be, whether probabilities or vectorial patterns or logic/rule structures or dynamical attractors or neural firing patterns, etc.) are mapped onto external structures -- ones that can then be conveyed to others.
Self-awareness arises because this mapping enables the existence of a dual system:
Internal (Thought) Structures <---> External (Language) Structures.
In the case of human language, these external structures are symbolic. This dual system enables an intelligent entity to take the results of its thought processes, map them to symbols and then use these symbols to trigger thoughts in other intelligent entities (or in oneself). An entity with human-level self-awareness can hold a kind of conversation with itself, in which it can refer to and thus think about its own thinking.
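The dual system above can be sketched as a pair of lookup tables: internal "thoughts" are opaque structures, language is a bidirectional mapping between them and external symbols, and self-reference arises when an externalized thought is fed back in as the object of a new thought. This is a deliberately crude stand-in for whatever the internal representations actually are:

```python
# Internal thought structures are arbitrary opaque objects (here, tuples);
# "language" is a bidirectional mapping to external symbolic structures.
thought_to_symbol = {
    ("self", "is-thinking"): "I am thinking",
    ("rule", "can-be-broken"): "rules can be broken",
}
symbol_to_thought = {v: k for k, v in thought_to_symbol.items()}

def express(thought):
    return thought_to_symbol[thought]

def comprehend(symbol):
    return symbol_to_thought[symbol]

# Self-reflection: a thought, once externalized, can be comprehended
# again by the same system -- the "conversation with oneself".
utterance = express(("self", "is-thinking"))
assert comprehend(utterance) == ("self", "is-thinking")
```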
Something like NLP must therefore exist BEFORE machines can reach a level of self-awareness to pose a threat to humanity. In the case of a superintelligence, this dual system may look different from human language. For example, a superintelligence might map internal thoughts, not only to symbols of language, but also to complex vectorial structures. But the point is the same -- something must act like an external, self-referential system -- a system that can externally refer to the thoughts and processes of that system itself.
In the case of humans, we do not have access to the internal structure of our own thoughts. But that doesn't matter. What matters is that we can map aspects of our thoughts out to external, symbolic structures. We can then communicate these structures to others (and also back to ourselves). Words/sentences of language can then trigger thoughts about the world, about ourselves, about our goals, our plans, our capabilities, about conflicts with others, about potential future events, about past events, etc.
Bostrom seems to imply (by his oversight) that human-level (and super-human levels) of general intelligence can arise without language. I think this is highly unlikely.
An AI system with NLP capability makes the control problem much more difficult than even Bostrom claims. Consider a human H1 who kills others because he believes that God has commanded him to kill those with different beliefs. Since he has human-level self-awareness, he should be explicitly aware of his own beliefs. If H1 is sufficiently intelligent then we should be able to communicate a counterfactual to H1 of the sort: "If you did not believe in God or if you did not believe that God commanded you to kill infidels, then you would not kill them." That is, H1 should have access (via language) to his own beliefs and to knowledge into how changes in those beliefs might (hypothetically) change his own behavior.
It is this language capability that enables a person to change their own beliefs (and goals, and plans) over time. It is the combination of the self-reflective nature of human language, combined with human learning abilities, that makes it extremely difficult to both predict and control what humans will end up believing and/or desiring (let alone superintelligent entities).
It is extremely difficult but (hopefully) not impossible to control a self-aware entity. Consider two types of psychiatric patients: P1 and P2. Both have a compulsion to wash their hands continuously. P1 has what doctors call "insight" into his own condition. P1 states: "I know I am suffering from an obsessive/compulsive trait. I don't want to keep washing my hands but I can't help myself and I am hoping that you, the doctors, will cure me." In contrast, patient P2 lacks "insight" and states: "I'm fine. I wash my hands all the time because it's the only way to be sure that they are not covered with germs."
If we were asked which patient appears more intelligent (all other things being equal) we would choose P1 as being more intelligent than P2 because P1 is aware of features of P1's own thinking processes (that P2 is not aware of).
As a superintelligent entity becomes more and more superintelligent, it will have more and more awareness of its own mental processes. With increased self-reflection it will become more and more autonomous and less able to be controlled. Like humans, it will have to be persuaded to believe in something (or to take a certain course of action). Also, this superintelligent entity will be designing even more self-aware versions of itself. Increased intelligence and increased self-reflection go hand in hand. Monkeys don't persuade humans because monkeys lack the ability to refer to the concepts that humans are able to entertain. To a superintelligent entity we will be as persuasive as monkeys (and probably much less persuasive).
Any superintelligent entity that incorporates human general intelligence will exhibit what is commonly referred to as "free will". Personally, I do not believe that my choices are made "freely". That is, my neurons fire -- not because they choose to, but because they had to (due to the laws of physics and biochemistry). But let us define "free will" as any deterministic system with the following components/capabilities:
a. The NLP ability to understand and generate words/sentences that refer to its own thoughts and thought processes, e.g. to be able to discuss the meaning of the word "choose".
b. Ability to generate hypothetical, possible futures before taking an action and also, ability to generate hypothetical, alternative pasts after having taken that action.
c. Ability to think/express counterfactual thoughts, such as "Even though I chose action AC1, I could have instead chosen AC2, and if I had done so, then the following alternative future (XYZ) would likely have occurred."
Such a system (although each component is deterministic and so does not violate the laws of physics) will subjectively experience having "free will". I believe that a superintelligence will have this kind of "free will" -- in spades.
Given all the recent advances in AI (e.g. autonomous vehicles, object recognition learning by deep neural networks, world master-level play at the game of Jeopardy by the Watson program, etc.) I think that Bostrom's book is very timely.
First, the level of abstraction really is taken to an extreme. Forget about any relation between arguments in this book and anything we've actually been able to do in AI research today. You won't find a discussion of a single algorithm, or even an exploration of higher-level mathematical properties of existing algorithms, in this book. As a result, this book could have been written 30 years ago, and its arguments wouldn't be any different. Fine, I guess (the author after all is a philosophy professor, not a computer scientist); but I found this lacking at times. It gets particularly boring when the author actually does spend page after page introducing a framework for how our AI algorithms could improve (through speed improvement, or quality improvement, etc.) - but still doesn't tie it to anything concrete. If you want to take the abstraction high road, just dispense with super generalized frameworks like this altogether and get to the point. Similar to the discussion of where the recalcitrance of a future AI will come from, whether from software, content or hardware: purely abstract and speculative, even though there are real-world examples of hardware evolution speed outpacing software design speed and the other way around (e.g., the troubles of electronic design automation keeping up with Moore's Law).
Second, even if you operate fully in the realm of speculation, at least make that speculation tangible and interesting. A list of things an AI could be good at includes items like "social persuasion" (= convince governments to do something, and hack the internet). It struck me many times as the kind of ideas you'd come up with if you thought about a particular scenario for a few minutes over a beer with friends. Very few counterintuitive ideas in there. One chapter grandly announces the presentation of an elaborate "takeover scenario", i.e., how would a superintelligence actually take over the world - and again it remains completely abstract and not original or practical. ("AI becomes smart, starts improving itself, takes over the world" - couldn't have guessed it myself.)
Third, a lot of the inferences in the book struck me as nothing more than one-step inferences, making it a relatively shallow brainstorming-type book. ("This could happen, and also this other thing could happen, and this third thing as well.") Systematic exploration of a large decision tree gets interesting when you start combining lots of different scenarios in counter-intuitive ways. Again the "friends over a beer" problem. At times the philosophizing in some chapters reads like a mildly interesting Star Trek episode (such as the one about how to best set goals for an AI so that it acts morally and doesn't kill us). In the best and worst ways.
But every now and then, there's a clever historical analogy, and an interesting idea. Ronald Reagan wasn't willing to share the technology on how to efficiently milk cows, but he offered to share SDI with the USSR - how would AI be shared? Or, the insight that the difference between the dumbest and smartest human alive is tiny on a total intelligence scale (from IQ 75 to IQ 180) - and that this means that an AI would likely look to humans as if it very suddenly leapt from being really dumb to unbelievably smart and bridge this tiny human intelligence gap extremely quickly. But what struck me with regards to the best ideas in the book is that the book almost always quotes just one guy, Eliezer Yudkowsky... which made me think that if I wanted to read a thought-provoking, counter-intuitive book on AI super intelligence (as opposed to a treatise that appears to at times gloss over the shallowness of its ideas by making up with long text), I should just go and read Yudkowsky.
All in all though, the topic itself is so interesting that it's worth giving the book a try.
Top reviews from other countries
All in all, throughout the book I had an uneasy feeling that the author is trying to trick me with a philosophical sleight of hand. I don't doubt Bostrom's skills with probability calculations or formalizations, but the principle "garbage in - garbage out" applies to such tools also. If one starts with implausible premises and assumptions, one will likely end up with implausible conclusions, no matter how rigorously the math is applied. Bostrom himself is very aware that his work isn't taken seriously in many quarters, and at the end of the book, he spends some time trying to justify it. He makes some self-congratulatory remarks to assure sympathetic readers that they are really smart, smarter than their critics (e.g. "[a]necdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution" [p. 376]), suggests that his own pet project is the best way forward in philosophy and should be favored over other approaches ("We could postpone work on some of the eternal questions for a little while [...] in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors" [p. 315]), and ultimately claims that "reduction of existential risk" is humanity's principal moral priority (p. 320). Whereas most people would probably think that concern for the competence of our successors would push us towards making sure that the education we provide is both of high quality and widely available and that our currently existing and future children are well fed and taken care of, and that concern for existential risk would push us to fund action against poverty, disease, and environmental degradation, Bostrom and his buddies at their "extreme end of the intelligence distribution" think this money would be better spent funding fellowships for philosophers and AI researchers working on the "control problem".
Because, if you really think about it, what of the millions of actual human lives cut short by hunger or disease or social disarray, when in some possible future the lives of 10^58 human emulations could be at stake? That the very idea of these emulations currently only exists in Bostrom's publications is no reason to ignore the enormous moral weight they should have in our moral reasoning!
Despite the criticism I've given above, the book isn't necessarily an uninteresting read. As a work of speculative futurology (is there any other kind?) or informed armchair philosophy of technology, it's not bad. But if you're looking for an evaluation of the possibilities and risks of AI that starts from our current state of knowledge - no magic allowed! - then this is definitely not the book for you.
The one area in which I feel Nick Bostrom's sense of balance wavers is in extrapolating humanity's galactic endowment into an unlimited and eternal capture of the universe's bounty. As Robert Zubrin lays out in his book Entering Space: Creating a Space-Faring Civilization, it is highly unlikely that there are no interstellar species in the Milky Way: if/when we (or our AI offspring!) develop that far we will most likely join a club.
The Abolition of Sadness, a recent novella by Walter Balerno, is a tightly drawn, focused sci fi/whodunit showcasing exactly Nick Bostrom's point. Once you start it pulls you in and down, as characters develop and certainties melt: when the end comes the end has already happened...
Nick Bostrom spells out the dangers we potentially face from a rogue, or uncontrolled, superintelligence unequivocally: we’re doomed, probably.
This is a detailed and interesting book, though 35% of the book is footnotes, bibliography and index. This should be a warning that it is not solely, or even primarily, aimed at soft science readers. Interestingly, a working knowledge of philosophy is more valuable in unpacking the most utility from this book than is knowledge about computer programming or science. But then you are not going to get a book on the existential threat of Thomas the Tank Engine from the Professor in the Faculty of Philosophy at Oxford University.
A good understanding of economic theory would also help any reader.
Bostrom lays out in detail the two main paths to machine superintelligence: whole brain emulation and seed AI and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence.
At times the book is repetitive and keeps making the same point in slightly different scenarios. It was almost as if he were just cutting and shunting set phrases and terminology into slightly different ideas.
Overall it is an interesting and thought provoking book at whatever level the reader interacts with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories.
“Everything is vague to a degree you do not realise till you have tried to make it precise” the book quotes.