Superintelligence: Paths, Dangers, Strategies [Kindle Edition]

Nick Bostrom
4.0 out of 5 stars (121 customer reviews)

Digital List Price: $19.99
Print List Price: $29.95
Kindle Price: $9.99
You Save: $19.96 (67%)



Formats and prices:

  • Kindle Edition: $9.99
  • Hardcover: $22.19 (new)
  • Audible Audio Edition, Unabridged: $21.95, or free with a 30-day Audible trial
  • MP3 CD, Audiobook, MP3 Audio, Unabridged: $11.01

Book Description

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation, and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

Editorial Reviews


"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." -- Stuart Russell, Professor of Computer Science, University of California, Berkeley

"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." -- Martin Rees, Past President, Royal Society

"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" -- Professor Max Tegmark, MIT

"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." -- Olle Häggström, Professor of Mathematical Statistics

"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" -- The Economist

"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." -- Clive Cookson, Financial Times

"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" -- Elon Musk, Founder of SpaceX and Tesla

"a magnificent conception ... it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs and by physicists who think there is no point to philosophy." -- Brian Clegg, Popular Science

About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

Product Details

  • File Size: 2565 KB
  • Print Length: 352 pages
  • Publisher: OUP Oxford; 1 edition (July 3, 2014)
  • Sold by: Amazon Digital Services, Inc.
  • Language: English
  • Text-to-Speech: Enabled
  • Word Wise: Not Enabled
  • Lending: Enabled
  • Amazon Best Sellers Rank: #11,734 Paid in Kindle Store

Customer Reviews

Most Helpful Customer Reviews
301 of 311 people found the following review helpful
By migedy
Prof. Bostrom has written a book that I believe will become a classic within that subarea of Artificial Intelligence (AI) concerned with the existential dangers that could threaten humanity as the result of the development of artificial forms of intelligence.

What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail.

When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focusses on.

He sees three main paths to superintelligence:

1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc. are applied to bring about a superintelligence.

2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body.
91 of 95 people found the following review helpful
5.0 out of 5 stars It's... complicated July 15, 2014
Not surprisingly, 200+ pages later, the author can't answer the 'what is to be done' question concerning the likely emergence of non-human (machine-based) super-intelligence, sometime, possibly soon. This is expected: as a species, we've always been the smartest ones around and have never had to think about coexisting with something or someone impossibly smart, smart in ways well beyond our comprehension, possibly driven by goals we can't understand, and acting in ways that may cause our extinction.

Building his arguments on available data and extrapolating from there, Bostrom is confident that:

- some form of self-aware, machine super-intelligence is likely to emerge
- we may be unable to stop it, even if we wanted to, no matter how hard we tried
- while we may be unable to stop the emergence of super-intelligence, we could prepare ourselves to manage it and possibly survive it
- our not taking this seriously and being unprepared may result in our extinction, while serious pre-emergence debate and preparation may result in some form of co-existence

It's radical and perhaps frightening but our failure to comprehend the magnitude of the risks we are about to confront would be a grave error given that, once super-intelligence begins to manifest itself and act, the change may be extremely quick and we may not be afforded a second chance.

Most of the book concerns itself with the several types of super-intelligence that may develop, the ways in which we may be able to control or at least co-exist with such entities or entity, what the world and literally the Universe may turn into depending on how we plant the initial super-intelligent seed.
18 of 18 people found the following review helpful
5.0 out of 5 stars We are doomed November 5, 2014
There are three pieces to this book:
1. Superhuman machine intelligence is coming.
2. It's potentially catastrophic for humanity.
3. We might be able to tame it.

"Superintelligence" is not human-level artificial intelligence as in meeting the Turing Test. It's what comes after that. Once we build a machine as smart as a human, that machine writes software to improve itself, which enables it to further improve itself -- but faster, then faster and faster. The equal-to-human stage proves brief as the technology charges ahead into superhuman territory.

Before long, the thing is way smarter than any human, and it parlays its superhuman skill at programming into superhuman skill at everything else, including strategic thinking, practical psychology and propaganda. It can start innocuously enough as a device that only thinks, but using its superhuman strategic skills it persuades humans to give it control over physical resources like manipulators and nanofactories. At that point it becomes a deadly threat even without "wanting" to. For example, its goal could be perfecting battery technology, but as it pursues this goal it could decide it needs more computing resources (to really figure things out...) -- a lot more resources, and so it proceeds to turn half the earth's crust into computing machinery, while tiling the rest of the planet with solar cells to power that machinery! We lose.

If you're scared now, then you'll want to read the second 50% or so of the book, which is about how to tame superintelligence, or for now, how to prepare to tame it before it's already upon us and out of control.
Most Recent Customer Reviews
5.0 out of 5 stars Five Stars
Wow, a bit repetitive in parts, but definitely an eye opener.
Published 2 days ago by KAC123
3.0 out of 5 stars Paths, Dangers, and Strategies Leading to Further Discussions...In a...
This book is interesting, but quite obviously written by and for an academic audience. It covers a lot of theory, has a lot of charts that can't be read on a Kindle, and sums it...
Published 9 days ago by Laurence A. Huston
3.0 out of 5 stars Not Easy To Read
Extremely informative and useful for creating visions for the future but also extremely difficult to read for casual readers.
Published 11 days ago by Honesty for Bezos
4.0 out of 5 stars A good discussion on this topic
A good discussion of the AI topic. I haven't finished the book as I am writing this, but I can tell the author is intelligent, has thought deeply about the subject, and writes well.
Published 11 days ago by Brad Millman
4.0 out of 5 stars Four Stars
Very thought provoking.
Published 12 days ago by Peter Pavey
5.0 out of 5 stars An Outstanding Analysis of the Problem of a Potentially Evil...
Nick Bostrom has achieved a tour de force with the publication of this book. He clearly has in-depth knowledge of philosophy, physics and computer science; a rare combination...
Published 12 days ago by Ravi Morey
2.0 out of 5 stars Worth reading but a lot of it is rather silly
The first part of the book surveys some of the ways superintelligence might happen, of which AI is only one.
Published 17 days ago by Amazon Customer
5.0 out of 5 stars and the merely curious will find this the book to go to to better...
Nick Bostrom does a remarkable job of bringing together and organizing the wealth of reflections regarding the advent of a technology explosion, superintelligence, and a...
Published 17 days ago by wwallach
2.0 out of 5 stars I guess you could expect a book like this to be speculative
I found this book surprisingly unconvincing. Given that we don't really understand intelligence, I guess you could expect a book like this to be speculative, but still I had...
Published 23 days ago by Torbjørn Ness
3.0 out of 5 stars Good Ideas But Poorly Written
In Superintelligence, Dr. Bostrom offers unique insights and perspectives on an emerging area of concern that was previously the domain of science fiction.
Published 25 days ago by Mike Collins

More About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding Director of the Future of Humanity Institute, a multidisciplinary research center which enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity.

Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and the groundbreaking Superintelligence: Paths, Dangers, Strategies (OUP, 2014). He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology; and (v) implications of consequentialism for global strategy.

He is the recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). Earlier this year he was included on Prospect magazine's World Thinkers list, as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 22 languages. There have been more than 100 translations and reprints of his works.

For more, see
