
Superintelligence: Paths, Dangers, Strategies 1st Edition

171 customer reviews
ISBN-13: 978-0199678112
ISBN-10: 0199678111

Buy new
$22.19
In Stock.
Ships from and sold by Amazon.com. Gift-wrap available.
List Price: $29.95 Save: $7.76 (26%)
59 New from $17.44
FREE Shipping on orders over $35.

Format                           Amazon Price   New from   Used from
Kindle                           —              —          —
Hardcover, September 3, 2014     $22.19         $17.44     $18.24

More Buying Choices: 59 New from $17.44, 13 Used from $18.24



Frequently Bought Together

Superintelligence: Paths, Dangers, Strategies + Structures: Or Why Things Don't Fall Down
Price for both: $36.27



Product Details

  • Hardcover: 352 pages
  • Publisher: Oxford University Press; 1st edition (September 3, 2014)
  • Language: English
  • ISBN-10: 0199678111
  • ISBN-13: 978-0199678112
  • Product Dimensions: 9.3 x 0.6 x 6.4 inches
  • Shipping Weight: 1.5 pounds
  • Average Customer Review: 3.9 out of 5 stars (171 customer reviews)
  • Amazon Best Sellers Rank: #3,704 in Books

Customer Reviews

Most Helpful Customer Reviews

395 of 409 people found the following review helpful By migedy on August 10, 2014
Format: Hardcover
Prof. Bostrom has written a book that I believe will become a classic within that subarea of Artificial Intelligence (AI) concerned with the existential dangers that could threaten humanity as the result of the development of artificial forms of intelligence.

What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail.

When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focusses on.
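
As a rough illustration (not drawn from the book or from this review), the "chain-reaction explosion" can be pictured as a feedback loop in which the size of each improvement depends on the capability of the system making it. A minimal sketch is below; the growth constant and cycle count are invented, and the only point is that improvements compounding on improvements produce runaway growth rather than steady progress.

```python
# Toy sketch of the recursive self-improvement "chain reaction".
# Assumption (illustrative only): each design cycle, a system improves its own
# capability by an amount proportional to its current capability, so the gains
# compound. The constant k and the cycle count are made-up parameters.
def intelligence_explosion(initial=1.0, k=0.1, cycles=100):
    capability = initial
    history = [capability]
    for _ in range(cycles):
        capability += k * capability  # a smarter designer makes a bigger improvement
        history.append(capability)
    return history

levels = intelligence_explosion()
print(f"Capability after 100 cycles: {levels[-1]:.0f}x the starting level")  # ~13781x
```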

He sees three main paths to superintelligence:

1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc., are applied to bring about a superintelligence.

2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body.
Read more ›
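
As an aside, the "data structure (of neural connectivity)" described in the whole-brain-emulation path is essentially an enormous weighted, directed graph. A minimal sketch of one possible representation follows; the neuron identifiers and synapse weights are invented for illustration, and a real human connectome would involve on the order of 10^11 neurons and 10^14 synapses.

```python
# Minimal sketch of a connectome as a weighted, directed graph.
# Keys are neuron identifiers; each value maps downstream neurons to a synaptic
# strength. All identifiers and weights here are invented for illustration.
connectome = {
    "n0001": {"n0002": 0.8, "n0003": -0.2},  # positive = excitatory, negative = inhibitory
    "n0002": {"n0003": 0.5},
    "n0003": {"n0001": 0.1},
}

def downstream(neuron_id, graph=connectome):
    """Return the neurons this neuron synapses onto, with their weights."""
    return graph.get(neuron_id, {})

print(downstream("n0001"))  # {'n0002': 0.8, 'n0003': -0.2}
```
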
110 of 114 people found the following review helpful By Cthulhu, #1 HALL OF FAME, TOP 50 REVIEWER, on July 15, 2014
Format: Hardcover | Vine Customer Review of Free Product
Not surprisingly, 200+ pages later, the author can't answer the 'what is to be done' question concerning the likely emergence of non-human (machine-based) super-intelligence, sometime, possibly soon. This is expected because, as a species, we've always been the smartest ones around and never had to even think about the possibility of coexistence alongside something or someone impossibly smart and smart in ways well beyond our comprehension, possibly driven by goals we can't understand and acting in ways that may cause our extinction.

Building his arguments on available data and extrapolating from there, Bostrom is confident that:

- some form of self-aware, machine super-intelligence is likely to emerge
- we may be unable to stop it, even if we wanted to, no matter how hard we tried
- while we may be unable to stop the emergence of super-intelligence, we could prepare ourselves to manage it and possibly survive it
- our not taking this seriously and not being prepared may result in our extinction, while serious pre-emergence debate and preparation may result in some form of co-existence

It's radical and perhaps frightening, but our failure to comprehend the magnitude of the risks we are about to confront would be a grave error given that, once super-intelligence begins to manifest itself and act, the change may be extremely quick and we may not be afforded a second chance.

Most of the book concerns itself with the several types of super-intelligence that may develop, the ways in which we may be able to control or at least co-exist with such an entity or entities, and what the world, and literally the Universe, may turn into depending on how we plant the initial super-intelligent seed.
Read more ›
36 of 37 people found the following review helpful By Theodore D. Sternberg on November 5, 2014
Format: Hardcover
There are three pieces to this book:
1. Superhuman machine intelligence is coming.
2. It's potentially catastrophic for humanity.
3. We might be able to tame it.

"Superintelligence" is not human-level artificial intelligence as in meeting the Turing Test. It's what comes after that. Once we build a machine as smart as a human, that machine writes software to improve itself, which enables it to further improve itself -- but faster, then faster and faster. The equal-to-human stage proves brief as the technology charges ahead into superhuman territory.

Before long, the thing is way smarter than any human, and it parlays its superhuman skill at programming into superhuman skill at everything else, including strategic thinking, practical psychology and propaganda. It can start innocuously enough as a device that only thinks, but using its superhuman strategic skills it persuades humans to give it control over physical resources like manipulators and nanofactories. At that point it becomes a deadly threat even without "wanting" to. For example, its goal could be perfecting battery technology, but as it pursues this goal it could decide it needs more computing resources (to really figure things out...) -- a lot more resources, and so it proceeds to turn half the earth's crust into computing machinery, while tiling the rest of the planet with solar cells to power that machinery! We lose.

If you're scared now, then you'll want to read the second half or so of the book, which is about how to tame superintelligence, or for now, how to prepare to tame it before it's already upon us and out of control.
Read more ›
