
Superintelligence: Paths, Dangers, Strategies Hardcover – September 3, 2014

ISBN-13: 978-0199678112 ISBN-10: 0199678111 Edition: 1st
Buy new: $22.41 (List Price: $29.95; Save: $7.54, 25%)
Temporarily out of stock. Order now and we'll deliver when available. We'll e-mail you an estimated delivery date as soon as we have more information, and your account will only be charged when we ship the item.
Ships from and sold by Amazon.com. Gift-wrap available. FREE Shipping on orders over $35.

Used & new from other sellers (delivery options vary per offer): 62 used & new from $18.98
  • Kindle edition available
  • Hardcover, September 3, 2014 -- Amazon price: $22.41; new from $18.98; used from $19.04

Frequently Bought Together

Superintelligence: Paths, Dangers, Strategies + Structures: Or Why Things Don't Fall Down
Price for both: $36.94


Editorial Reviews

Review


"I highly recommend this book" --Bill Gates


"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." -- Stuart Russell, Professor of Computer Science, University of California, Berkley


"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." -- Martin Rees, Past President, Royal Society


"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" -- Professor Max Tegmark, MIT


"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." -- Olle Haggstrom, Professor of Mathematical Statistics


"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" -- The Economist


"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." -- Clive Cookson, Financial Times


"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" -- Elon Musk, Founder of SpaceX and Tesla


"a magnificent conception ... it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs and by physicists who think there is no point to philosophy." -- Brian Clegg, Popular Science


About the Author


Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

Product Details

  • Hardcover: 352 pages
  • Publisher: Oxford University Press; 1st edition (September 3, 2014)
  • Language: English
  • ISBN-10: 0199678111
  • ISBN-13: 978-0199678112
  • Product Dimensions: 9.3 x 0.6 x 6.4 inches
  • Shipping Weight: 1.5 pounds
  • Average Customer Review: 3.9 out of 5 stars (153 customer reviews)
  • Amazon Best Sellers Rank: #3,166 in Books

More About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding Director of the Future of Humanity Institute, a multidisciplinary research center which enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity.

Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and the groundbreaking Superintelligence: Paths, Dangers, Strategies (OUP, 2014). He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology; and (v) implications of consequentialism for global strategy.

He is recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). Earlier this year he was included on Prospect magazine's World Thinkers list, the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 22 languages. There have been more than 100 translations and reprints of his works.

For more, see www.nickbostrom.com

Customer Reviews

3.9 out of 5 stars

Most Helpful Customer Reviews

358 of 370 people found the following review helpful. By migedy on August 10, 2014
Format: Hardcover
Prof. Bostrom has written a book that I believe will become a classic within that subarea of Artificial Intelligence (AI) concerned with the existential dangers that could threaten humanity as the result of the development of artificial forms of intelligence.

What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail.

When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focuses on.
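
The "chain-reaction explosion" can be made concrete with a toy simulation. This sketch is purely illustrative and not from the book: it assumes each design generation multiplies intelligence by a factor that itself grows with the current level (the gain constant `k` and both thresholds are made-up numbers), which is what produces the faster-than-exponential takeoff the reviewer describes.

```python
# Toy model of recursive self-improvement ("chain-reaction explosion").
# All parameters are illustrative assumptions, not estimates from the book.

def simulate_takeoff(start=1.0, superhuman=1000.0, k=0.1, max_gens=100):
    """Intelligence level after each design generation, until `superhuman`.

    Each generation multiplies the current level by (1 + k * level):
    a smarter designer finds proportionally larger improvements, so
    growth accelerates instead of staying merely exponential.
    """
    levels = [start]
    while levels[-1] < superhuman and len(levels) < max_gens:
        levels.append(levels[-1] * (1 + k * levels[-1]))
    return levels

for gen, level in enumerate(simulate_takeoff()):
    print(f"generation {gen:2d}: intelligence = {level:10.2f}")
```

Under these made-up parameters the model crosses the "superhuman" threshold in fewer than twenty generations, with most of the growth packed into the last few steps -- the qualitative shape of the explosion Minsky and McCarthy mused about.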

He sees three main paths to superintelligence:

1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc., are applied to bring about a superintelligence.

2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body.
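
As a rough illustration of what such a "data structure of neural connectivity" might look like, here is a minimal sketch. The classes, fields, and numbers are invented for illustration; real whole-brain-emulation proposals involve vastly larger graphs plus biophysical detail for each synapse.

```python
# Minimal sketch of a connectome as a directed graph: each neuron maps to
# its outgoing synapses. All names and values are illustrative assumptions,
# not an actual whole-brain-emulation format.
from dataclasses import dataclass, field

@dataclass
class Synapse:
    target: int     # id of the postsynaptic neuron
    weight: float   # connection strength recovered from the scan

@dataclass
class Neuron:
    neuron_id: int
    synapses: list = field(default_factory=list)  # outgoing Synapse objects

# A three-neuron fragment: neuron 0 drives 1 strongly and 2 weakly.
connectome = {
    0: Neuron(0, [Synapse(1, 0.8), Synapse(2, 0.1)]),
    1: Neuron(1, []),
    2: Neuron(2, []),
}

def downstream(neuron_id):
    """Ids of neurons directly driven by `neuron_id`."""
    return [s.target for s in connectome[neuron_id].synapses]

print(downstream(0))  # [1, 2]
```

An emulation would then step a graph like this forward in time, which is why the reviewer's scenario ends with the structure being downloaded onto a computer that controls a synthetic body.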
104 of 108 people found the following review helpful. By A. Dent (#1 Hall of Fame, Top 50 Reviewer, Vine Voice) on July 15, 2014
Format: Hardcover (Vine Customer Review of Free Product)
Not surprisingly, 200+ pages later, the author can't answer the 'what is to be done' question concerning the likely emergence of non-human (machine-based) super-intelligence, sometime, possibly soon. This is to be expected: as a species, we have always been the smartest ones around and have never had to think about coexisting with something or someone impossibly smart, smart in ways well beyond our comprehension, possibly driven by goals we can't understand, and acting in ways that may cause our extinction.

Building his arguments on available data and extrapolating from there, Bostrom is confident that:

- some form of self-aware, machine super-intelligence is likely to emerge
- we may be unable to stop it, even if we wanted to, no matter how hard we tried
- while we may be unable to stop the emergence of super-intelligence, we could prepare ourselves to manage it and possibly survive it
- our not taking this seriously and being unprepared may result in our extinction, while serious pre-emergence debate and preparation may result in some form of co-existence

It's radical and perhaps frightening, but failing to comprehend the magnitude of the risks we are about to confront would be a grave error: once super-intelligence begins to manifest itself and act, the change may be extremely quick, and we may not be afforded a second chance.

Most of the book concerns itself with the several types of super-intelligence that may develop, the ways in which we may be able to control or at least co-exist with such entities, and what the world and literally the Universe may turn into depending on how we plant the initial super-intelligent seed.
30 of 31 people found the following review helpful. By Theodore D. Sternberg on November 5, 2014
Format: Hardcover
There are three pieces to this book:
1. Superhuman machine intelligence is coming.
2. It's potentially catastrophic for humanity.
3. We might be able to tame it.

"Superintelligence" is not human-level artificial intelligence as in meeting the Turing Test. It's what comes after that. Once we build a machine as smart as a human, that machine writes software to improve itself, which enables it to further improve itself -- but faster, then faster and faster. The equal-to-human stage proves brief as the technology charges ahead into superhuman territory.

Before long, the thing is way smarter than any human, and it parlays its superhuman skill at programming into superhuman skill at everything else, including strategic thinking, practical psychology and propaganda. It can start innocuously enough as a device that only thinks, but using its superhuman strategic skills it persuades humans to give it control over physical resources like manipulators and nanofactories. At that point it becomes a deadly threat even without "wanting" to. For example, its goal could be perfecting battery technology, but as it pursues this goal it could decide it needs more computing resources (to really figure things out...) -- a lot more resources, and so it proceeds to turn half the earth's crust into computing machinery, while tiling the rest of the planet with solar cells to power that machinery! We lose.

If you're scared now, then you'll want to read the second half or so of the book, which is about how to tame superintelligence -- or, for now, how to prepare to tame it before it is upon us and out of control.
