Superintelligence: Paths, Dangers, Strategies 1st Edition
by Nick Bostrom (Author)
| Format | Price | New from | Used from |
| --- | --- | --- | --- |
| Audible Audiobook, Unabridged | $0.00 (free with your Audible trial) | — | — |
| MP3 CD, Audiobook, MP3 Audio, Unabridged | — | $18.11 | $14.49 |
Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. Just as the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
- ISBN-10: 0199678111
- ISBN-13: 978-0199678112
- Edition: 1st
- Publisher: Oxford University Press
- Publication date: September 3, 2014
- Language: English
- Dimensions: 9.3 x 1 x 6.2 inches
- Print length: 352 pages
Editorial Reviews
Review
"I highly recommend this book" --Bill Gates
"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics
"Nick Bostrom's excellent book "Superintelligence" is the best thing I've seen on this topic. It is well worth a read." --Sam Altman, President of Y Combinator and Co-Chairman of OpenAI
"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkley
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT
"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist
"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society
"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
Product details
- Publisher : Oxford University Press; 1st edition (September 3, 2014)
- Language : English
- Hardcover : 352 pages
- ISBN-10 : 0199678111
- ISBN-13 : 978-0199678112
- Item Weight : 1.5 pounds
- Dimensions : 9.3 x 1 x 6.2 inches
- Best Sellers Rank: #63,276 in Books
About the author

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technology or foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostrom’s widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.
He is recipient of a Eugene R. Gannon Award, and has been listed on Foreign Policy’s Top 100 Global Thinkers list twice. He was included on Prospect’s World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.
For more, see www.nickbostrom.com
Customer reviews
Top reviews from the United States
Reviewed in the United States on February 10, 2021
Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors to a large degree. Biology never produced anything like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities by a margin greater than that by which a Boeing 747 exceeds the capabilities of a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans. We'd better be sure of the intentions and benevolence of that intelligence before handing it the keys to our future.
Because when we speak of the future, that future isn't just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren't, there would be abundant and visible evidence of cosmic engineering by those more advanced than we are.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it's worth some reflection to get it right.
As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let's ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission? To maximise the number of paper clips produced in its future light cone.
“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet with self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then, they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.
This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.
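To make the mis-specified objective concrete, here is a minimal sketch - entirely illustrative, nothing from the book - of why a bare objective ignores side effects: an agent that ranks plans only by the metric it was given will prefer whatever scores higher, because nothing else appears in its utility function.

```python
# Minimal sketch of a mis-specified objective (hypothetical numbers).
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    paperclips: float      # expected paper clips produced
    humans_harmed: float   # side effect the objective never sees

def utility(plan: Plan) -> float:
    # The factory's goal as specified: maximize paper clip output.
    # Note that humans_harmed appears nowhere in this function.
    return plan.paperclips

plans = [
    Plan("run the factory normally", paperclips=1e6, humans_harmed=0),
    Plan("disassemble the planet for feedstock", paperclips=1e30, humans_harmed=8e9),
]

print(max(plans, key=utility).name)  # -> "disassemble the planet for feedstock"
```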
One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or it might emerge as networks store more and more data about the real world and gain access to accumulated human knowledge. Or we may build "neuromorphic" systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically limited, speeds. Or we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, on either a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.
At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won't.
As an engineer, I usually don't have much use for philosophers, who are given to long gassy prose devoid of specifics and to spouting complicated indirect arguments which don't seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt is likely to result in clear policy choices before we're faced with the emergence of an artificial intelligence, after which, if they're wrong, it will be too late.
That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine, rather than something self-aware, with its own agenda and powerful wiles to advance it based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me - like arguing there's something special about a horse which can't be accomplished better by a truck), the mature artificial intelligence will be superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.
Bostrom initially lays out the many accomplishments of AI. There is the games dimension - chess, checkers, Jeopardy! and many more - in which an AI is now the champion player - though in all these, he notes, the achievement comes via very specific algorithms good only for that game, i.e., with little application to a general intelligence. He notes AI's main paths or approaches to intelligence, and their strengths, weaknesses and tradeoffs: 1) the neural network/connectionist approach, 2) evolutionary algorithms, and 3) the symbolic manipulation approach (GOFAI), which chronologically preceded the others and yielded things like theorem provers, problem-solving programs like GPS, "conversation" programs like ELIZA, expert systems, etc. He leaves implicit that these three paths lead to a giant black hole from which no exit is seen, for as he notes, standing in the distance on the other side are two huge, untaken hills: commonsense knowledge and true language comprehension. These, he notes, are utterly essential to human-equivalent intelligence, but AI has no current strategy to take these hills, as Bostrom again leaves implicit, nor is there any current indication that the three main paths will yield one; in fact, there is the opposite indication. Elsewhere, in his extensive tome showing that analogy is foundational to thought and language (Surfaces and Essences: Analogy as the Fuel and Fire of Thinking), Hofstadter eviscerates current AI language achievements and is obviously doubtful that computers (as currently conceived) can deal with analogy, and thus language, at all.
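For readers who never met the GOFAI era, a toy ELIZA-style exchange (a hypothetical sketch, not Weizenbaum's actual script) shows what pure symbolic pattern-matching looks like, and why it involves no language comprehension at all:

```python
# Toy ELIZA-style responder: pattern substitution with no model of meaning.
import re

rules = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r".*"), "Please go on."),   # fallback when nothing matches
]

def respond(text: str) -> str:
    for pattern, template in rules:
        match = pattern.match(text)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about superintelligence"))
# -> "Why do you say you are worried about superintelligence?"
```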
But after taking us to the edge of this black hole, Bostrom turns 90 degrees, ignoring the two hills, and discusses very general methods by which AI will achieve human equivalence. In this, it is safe to say, his hopes primarily fall on whole brain emulation (WBE). But his description of this approach, while seemingly detailed, fails utterly to describe its true difficulties; WBE is an untaken Everest. I suggest the recent The Future of the Brain: Essays by the World's Leading Neuroscientists, and perhaps view my (5-star) review thereof. The editors, Marcus and Freeman, are neuroscientists discussing the massive difficulties which the huge projects now embarking on brain mapping actually face. For example: we face an 85-billion-neuron brain with roughly 1,000 types of neurons, the functions of none of which we understand. We do not know basic facts such as how memory (our experience) is stored. We are quite certain that the brain is NOT using what we currently understand as "computation," but we do not know what this other form is (Marcus ridicules current connectionism). We face data from neural recordings so massive it will run to zettabytes, yet any interpretation will be completely dependent on a guiding theory - note, a theory - when we have none. It will be, they say, like trying to learn what a laptop is and does by taking electrical recordings, when we have no theory of, or knowledge of, the existence of something called "software."
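The "zettabytes" figure is easy to sanity-check with back-of-envelope arithmetic; the sampling rate and sample size below are my own illustrative assumptions, not numbers from the book or the reviewers:

```python
# Rough data volume for recording every neuron in one human brain.
NEURONS = 85e9            # neurons in a human brain
SAMPLE_HZ = 1_000         # assumed sampling rate per neuron (1 kHz)
BYTES_PER_SAMPLE = 2      # assumed 16-bit samples
SECONDS_PER_YEAR = 3.15e7

bytes_per_second = NEURONS * SAMPLE_HZ * BYTES_PER_SAMPLE
bytes_per_year = bytes_per_second * SECONDS_PER_YEAR

print(f"{bytes_per_second / 1e12:.0f} TB/s")    # ~170 TB/s
print(f"{bytes_per_year / 1e21:.1f} ZB/year")   # ~5.4 ZB/year
```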
All of this is to say, we really have no clue what type of "device" the brain actually is. This is exacerbated by the fact that the reason we have no understanding of how experience is stored in the brain (or if it is), is that we have no theory of what experience is, i.e., we cannot explain the origin of the image of the external world - the coffee cup, in front of us, on the table. This problem of the origin of the image is the more precise statement of Chalmers' famous "hard problem" of consciousness - a word (consciousness) that, so far as I can discern, is never seen once in Bostrom's book. The whole book proceeds as though this is an unimportant problem. Yet this very subject forms part of that missing notion of "software."

Just to give a quick idea of how important this could be, both for the type of "device" the brain actually is and for the origin of the image of the external world: Bergson (Matter and Memory, 1896), presciently anticipating the essence of holography, viewed the brain as creating a reconstructive wave passing through the external, universal holographic field, where this "wave" is specific to, or specifying, a portion of the vast information in the field - now, by this process, an image: the coffee cup on the table. This requires achieving a very concrete dynamics; it would make the brain a very different form of "device," with perception, memory and cognition employing a far different form of "computation," and it raises the question of whether such a device - being simultaneously a very concrete wave - can be embodied in silicon, wire and transistors (or even "memristors") at all, or whether, to support such a reconstructive wave, all the biological stuff comprising the brain, with its quantum dynamics rampant, is absolutely required. In other words, it is not a question of speculating, as Bostrom does further on, whether we will achieve human equivalence in 2075, or 2100, or 3100; it is a question of what the "device" we ultimately create (the brain/body, AI version) will look like - an answer that will completely determine whether controlling such a device, or imparting values to it, or significantly increasing its intelligence, is going to be any problem or reality at all. But this can only be glimpsed by engaging with and gaining answers - unto a comprehensive theory - within a number of subjects: perception, ecological psychology, memory, explicit memory (consciously knowing an event is in one's past), cognitive development, the origin and nature of consciousness, the role of consciousness in cognition, and more - all ignored here completely.
But the book sails serenely on from this subject of brain emulation, confident without a qualm that we will have created the brain as a silicon-and-wires device - it seems, confusedly, a neural-net-like device that still (somehow) uses software - and begins long considerations of approaches by which, since it is certain that we will have electronics, we can speed up the transmission velocities, etc., say by 10,000x (and as well modify what may well be its non-existent "software"), thus allowing the device to develop quickly, creating and moving to a superintelligence and inducing the ensuing problems the rest of the book deals with. At this juncture, though I read the rest (a couple hundred pages), I lost interest; the book had utterly failed to motivate the reality, the nature, or the essence - near or far future - of anything it discusses from this point.
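The "10,000x" speed-up is at least dimensionally plausible. Comparing textbook-order figures for biological and electronic signaling (rough numbers of my own, not the book's) gives ratios of a million or more:

```python
# Order-of-magnitude speed ratios, biology vs. electronics.
axon_speed = 120.0      # m/s, fast myelinated nerve fibers
signal_speed = 2e8      # m/s, signals in copper/fiber (~2/3 the speed of light)
neuron_rate = 200.0     # Hz, near-maximal sustained firing rate
clock_rate = 2e9        # Hz, a modest CPU clock

print(f"propagation ratio: {signal_speed / axon_speed:.0e}")  # ~2e+06
print(f"switching ratio:   {clock_rate / neuron_rate:.0e}")   # ~1e+07
# Even heavily discounted for overhead, a 10,000x speedup looks conservative.
```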
Consider, finally, in this "un-motivated" context, the "value-loading problem," a term I believe coined by Bostrom and treated at some length, wherein we must load human values into our superintelligence to prevent it from destroying the human race. How would the super AI know NOT to get my mother out of a burning building by simply throwing her out of a tenth-floor window? More simply, how would the superintelligence know not to stir the coffee with a Toyota or a chair? In reality, this is simply a version of the "frame problem," a problem discussed heavily in AI for 30 years, then "faded." It sits on top of one of those hills, for it is actually the old problem of commonsense knowledge. How does the robot, stirring its cup of coffee, recognize that giant bubbles and geysers arising from the liquid, or a feeling like stirring molasses, are anomalous - features not "expected" in this event? In the "frame" formulation, the robot has to check (constantly, at great computational expense) its list of frame axioms, axioms which specify what should remain unchanged in the world as it stirs. In reality, we recognize such an anomaly because it violates one of many invariance laws structuring the event: a radial flow field defining the coffee's swirl, an adiabatic invariance (a ratio of energy of oscillation to frequency of oscillation) carried over the periodic motion of the spoon and over haptic flows in the hand and arm, an inertial tensor defining the angular resistance as we wield the spoon, and much more. None of this - the concrete structure, forces and dynamics of this experience - can be handled by current AI, nor are the findings of the relevant science - ecological psychology - even considered. It is on the basis of this knowledge that we recognize the "value" of a spoon, or of the flat of a knife, or even of an orange peel, for creating the forces required to stir coffee. This structured experience, with its laws, is the basis for even higher-order - yes, analogy-based - value statements: "It's not nice to stir up people." But it is worse, for value impartation, while based in this very concrete knowledge and its invariance laws, is actually embedded over our cognitive development, in our interaction with the concrete world and its beings, and this development, it is now being understood, is itself a dynamic trajectory through which our brain is travelling as a self-organizing dynamic system.
This trajectory, in Piaget's model, unfolds over two years, enabling the child to achieve the basic concepts of causality, object, space and time, the capability of explicit memory (consciously localizing past events in time), and even the ability to symbolize - yes, even the ability to symbolize - the events of the world. It takes until the age of seven to achieve concrete operations, which include a further grasp of space, time and number, and until the age of 12 for formal operations, which include forms of logic and thought we take for granted. In other words, the brain is not only an organized structure, but a structure changing its organization along a complex trajectory purposed to achieve these conceptual and logical capabilities. Not only then must we understand the structure of these billions of neurons and their 1,000 types, but also the dynamic principles embedded deep within (via DNA?) by which the structure organizes towards the "intelligence" we are familiar with. It again goes without saying that we have no clue whether the actual biological organization of the brain, the natural course of its interaction with the concrete world at our normal scale of time (also specified by this dynamics), and the reorganizations involved can be achieved in any other way than by the concrete method nature designed. (Spare me the minor evolutionary mistakes nature has supposedly made.) Further, for all the "operations" above, it can be shown that consciousness is required. None of these subject areas makes it anywhere into Bostrom. We are asked to worry about value-imparting and the supposed dangers of a silicon-based superintelligence entirely predicated upon an ignoring of all these issues and more, all of which beg or even scream questions not only about what intelligence actually is, but about whether any of these proposed concerns have the slightest reality. It is just a bit difficult for me, then, to attend to a large structure of concerns which float, in reality, upon a puff of very inadequate analysis.
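The frame-problem bookkeeping described above can be caricatured in a few lines (hypothetical axioms, not a real planner): after every action, the robot must re-verify each fact that was supposed to stay unchanged, at a cost that grows with the axiom list rather than with what actually happened:

```python
# Cartoon of the frame problem: re-checking what stirring should NOT change.
world = {
    "coffee_flow": "smooth radial swirl",
    "spoon_resistance": "light",
    "cup_position": "on table",
    # ... a real formulation would carry thousands of such facts
}

frame_axioms = dict(world)  # stirring is supposed to leave all of these alone

def stir(state: dict) -> dict:
    state = dict(state)
    state["spoon_resistance"] = "like molasses"  # the anomaly
    return state

after = stir(world)
violations = {k: after[k] for k, v in frame_axioms.items() if after[k] != v}
print(violations)  # -> {'spoon_resistance': 'like molasses'}
```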
Top reviews from other countries
All in all, throughout the book I had an uneasy feeling that the author is trying to trick me with a philosophical sleight of hand. I don't doubt Bostrom's skills with probability calculations or formalizations, but the principle "garbage in, garbage out" applies to such tools also. If one starts with implausible premises and assumptions, one will likely end up with implausible conclusions, no matter how rigorously the math is applied. Bostrom himself is very aware that his work isn't taken seriously in many quarters, and at the end of the book, he spends some time trying to justify it. He makes some self-congratulatory remarks to assure sympathetic readers that they are really smart, smarter than their critics (e.g. "[a]necdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution" [p. 376]), suggests that his own pet project is the best way forward in philosophy and should be favored over other approaches ("We could postpone work on some of the eternal questions for a little while [...] in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors" [p. 315]), and ultimately claims that "reduction of existential risk" is humanity's principal moral priority (p. 320). Whereas most people would probably think that concern for the competence of our successors would push us towards making sure that the education we provide is both of high quality and widely available and that our currently existing and future children are well fed and taken care of, and that concern for existential risk would push us to fund action against poverty, disease, and environmental degradation, Bostrom and his buddies at their "extreme end of the intelligence distribution" think this money would be better spent funding fellowships for philosophers and AI researchers working on the "control problem". Because, if you really think about it, what of the millions of actual human lives cut short by hunger or disease or social disarray, when in some possible future the lives of 10^58 human emulations could be at stake? That the very idea of these emulations currently exists only in Bostrom's publications is no reason to ignore the enormous moral weight they should have in our moral reasoning!
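The arithmetic this reviewer is objecting to is simple to reproduce (the 10^58 figure is Bostrom's; the probability shift is an arbitrary number chosen for illustration): multiply an astronomically large payoff by even a vanishingly small probability and the product still dwarfs everyone alive today.

```python
# Expected-value arithmetic behind "astronomical stakes" arguments.
future_lives = 1e58        # Bostrom's estimate of potential emulated lives
p_risk_reduced = 1e-20     # an arbitrarily tiny shift in survival probability
present_lives = 8e9

expected_lives_saved = future_lives * p_risk_reduced
print(f"{expected_lives_saved:.0e}")        # 1e+38
print(expected_lives_saved > present_lives) # True
```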
Despite the criticism I've given above, the book isn't necessarily an uninteresting read. As a work of speculative futurology (is there any other kind?) or informed armchair philosophy of technology, it's not bad. But if you're looking for an evaluation of the possibilities and risks of AI that starts from our current state of knowledge - no magic allowed! - then this is definitely not the book for you.
Nick Bostrom spells out the dangers we potentially face from a rogue, or uncontrolled, superintelligence unequivocally: we're doomed, probably.
This is a detailed and interesting book, though 35% of it is footnotes, bibliography and index. This should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable in unpacking the most utility from this book than is knowledge about computer programming or science. But then you are not going to get a book on the existential threat of Thomas the Tank Engine from the Professor in the Faculty of Philosophy at Oxford University.
A good understanding of economic theory would also help any reader.
Bostrom lays out in detail the two main paths to machine superintelligence: whole brain emulation and seed AI and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence.
At times the book is repetitive and keeps making the same point in slightly different scenarios. It was almost as if he were just cutting and shunting set phrases and terminology into slightly different ideas.
Overall it is an interesting and thought-provoking book at whatever level the reader engages with it, though the text would have been improved by more concrete examples so the reader could better flesh out the theories.
“Everything is vague to a degree you do not realise till you have tried to make it precise,” the book quotes.
I used to fear AI, but now I know how far away we are from any real-world dangers. AI is still very early, and there are some enormous obstacles to get past before we see real intelligence that beats the Turing test/imitation game every single time. In fact, some experts say that the Turing test is too easy and we need to come up with a better method to measure the abilities and limitations of an AI subject. I agree with that.
Extremely interesting read. Great book.
The one area in which I feel Nick Bostrom's sense of balance wavers is in extrapolating humanity's galactic endowment into an unlimited and eternal capture of the universe's bounty. As Robert Zubrin lays out in his book Entering Space: Creating a Space-Faring Civilization, it is highly unlikely that there are no interstellar species in the Milky Way: if/when we (or our AI offspring!) develop that far, we will most likely join a club.
The Abolition of Sadness, a recent novella by Walter Balerno, is a tightly drawn, focused sci-fi whodunit showcasing exactly Nick Bostrom's point. Once you start, it pulls you in and down, as characters develop and certainties melt: when the end comes, the end has already happened...