
Superintelligence: Paths, Dangers, Strategies Hardcover – September 3, 2014

ISBN-13: 978-0199678112 ISBN-10: 0199678111 Edition: 1st

Buy New
Price: $22.41
50 New from $17.11 · 15 Used from $19.57


Frequently Bought Together

Superintelligence: Paths, Dangers, Strategies + Zero to One: Notes on Startups, or How to Build the Future
Price for both: $38.61


Product Details

  • Hardcover: 352 pages
  • Publisher: Oxford University Press; 1 edition (September 3, 2014)
  • Language: English
  • ISBN-10: 0199678111
  • ISBN-13: 978-0199678112
  • Product Dimensions: 9.3 x 0.6 x 6.4 inches
  • Shipping Weight: 1.5 pounds
  • Average Customer Review: 4.1 out of 5 stars (83 customer reviews)
  • Amazon Best Sellers Rank: #4,204 in Books

Editorial Reviews

Review


"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." -- Stuart Russell, Professor of Computer Science, University of California, Berkeley


"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." -- Martin Rees, Past President, Royal Society


"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" -- Professor Max Tegmark, MIT


"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." -- Olle Haggstrom, Professor of Mathematical Statistics


"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" -- The Economist


"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." -- Clive Cookson, Financial Times


"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" -- Elon Musk, Founder of SpaceX and Tesla


"a magnificent conception ... it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs and by physicists who think there is no point to philosophy." -- Brian Clegg, Popular Science


About the Author


Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

More About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding Director of the Future of Humanity Institute, a multidisciplinary research center which enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity.

Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and the groundbreaking Superintelligence: Paths, Dangers, Strategies (OUP, 2014). He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology; and (v) implications of consequentialism for global strategy.

He is a recipient of the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). Earlier this year he was included on Prospect magazine's World Thinkers list, the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 22 languages. There have been more than 100 translations and reprints of his works.

For more, see www.nickbostrom.com

Customer Reviews

Bostrom makes many interesting points throughout his book.
migedy
This book discusses issues related to what is likely a major, even possibly the most crucial turning point in human history.
Richard L. Rankin
It is now a week after reading this book, and I'm still thinking about it.
Michelle J. Stanton

Most Helpful Customer Reviews

198 of 207 people found the following review helpful By migedy on August 10, 2014
Format: Hardcover
Prof. Bostrom has written a book that I believe will become a classic within that subarea of Artificial Intelligence (AI) concerned with the existential dangers that could threaten humanity as the result of the development of artificial forms of intelligence.

What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail.

When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focusses on.

He sees three main paths to superintelligence:

1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc. are applied to bring about a superintelligence.

2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body.
75 of 79 people found the following review helpful By A. Dent (#1 Hall of Fame, Top 50 Reviewer, Vine Voice) on July 15, 2014
Format: Hardcover | Vine Customer Review of Free Product
Not surprisingly, 200+ pages later, the author can't answer the 'what is to be done' question concerning the likely emergence of non-human (machine-based) super-intelligence, sometime, possibly soon. This is expected because, as a species, we've always been the smartest ones around and never had to even think about the possibility of coexistence alongside something or someone impossibly smart and smart in ways well beyond our comprehension, possibly driven by goals we can't understand and acting in ways that may cause our extinction.

Building his arguments on available data and extrapolating from there, Bostrom is confident that:

- some form of self-aware, machine super-intelligence is likely to emerge
- we may be unable to stop it, even if we wanted to, no matter how hard we tried
- while we may be unable to stop the emergence of super-intelligence, we could prepare ourselves to manage it and possibly survive it
- us not taking this seriously and not being prepared may result in our extinction while serious pre-emergence debate and preparation may result in some form of co-existence

It's radical and perhaps frightening but our failure to comprehend the magnitude of the risks we are about to confront would be a grave error given that, once super-intelligence begins to manifest itself and act, the change may be extremely quick and we may not be afforded a second chance.

Most of the book concerns itself with the several types of super-intelligence that may develop, the ways in which we may be able to control or at least co-exist with such entities or entity, what the world and literally the Universe may turn into depending on how we plant the initial super-intelligent seed.
Format: Hardcover | Vine Customer Review of Free Product
An excellent review and analysis of the key questions and challenges surrounding the development of machine "super-intelligence", and the potential existential threat such a development might pose for humankind.

Bostrom achieves a remarkable feat here, managing to present a thorough, detailed analysis of a complex technological and philosophical issue, while keeping the discussion accessible to the layman. The book has a conversational tone, yet isn't dumbed down in the slightest. On the contrary, the key issues are examined from the standpoint of multiple disciplines and in multiple dimensions. The analysis that emerges is rich, thought-provoking, and yet accessible.

Bostrom also manages to avoid sounding shrill or overly dramatic when laying out the possible scenarios for disaster. On the one hand, it's clear that Bostrom sees the possibility for disastrous unintended consequences -- including the complete extinction of humankind -- as a very real potential outcome. Yet at the same time, the book manages to present these scenarios in a clearheaded, calm manner, devoid of any trace of hysteria.

A good complement to Bostrom's book would be Steven Levy's "Artificial Life" [http://amzn.com/0679743898], which looks at the principles and beginnings of the Artificial Life field, and how and where it both intersects with, and differs from, the more traditional field of Artificial Intelligence.
23 of 27 people found the following review helpful By James David Morris on October 9, 2014
Format: Kindle Edition | Verified Purchase
I'm a long-time fan of all things AI, and for example, I'd give 4-5 stars to "How to Create a Mind" by Ray Kurzweil.

This book needed an editor who could understand the difference between useful insight and mindless spouting off of sentences. There are thousands of paragraphs that read like this:

A super intelligence might have a deep and rich personality, possessing more humor, more love, and more loyalty than any human, or it might have none of these. If it had these rich personalities then they might not even be recognizable to humans. If they were recognizable, humans may appreciate them. If they are not easily recognized, humans may not appreciate them. If it turns out that they do not have any of these qualities, it may still however appear to humans that they do have them, because of their complexity. But complexity does not necessarily equate to richness. An emotion could be complex, but not deep, or rich. Or, an emotion could be rich, but not complex. In any case, it is not known whether they will indeed have personalities, or simply seem to have them. Nor is it certain how humans may react to their possessing, or lack of, credible emotions.

This type of completely useless information is 80% of the book. It has very little in the realm of real insight, but rather lists every possible direction that could be taken, and then goes nowhere. In fact most of the book could have been written thousands of years ago because all it amounts to is a collection of "if this then that, or the other. But if not this, then maybe not that or maybe not the other".

I finished the whole thing just because I love the topic, but I cannot recommend it to anyone. The whole book should have been edited down to 10% of its size. Then, it would seem like an interesting consideration of the many possible futures. But as it is, it's a nearly unbearable waste of 90% of the time it will take you to get through it.
