Superintelligence: Paths, Dangers, Strategies MP3 CD – Audiobook, MP3 Audio, Unabridged
"I highly recommend this book" --Bill Gates
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkeley
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT
"Terribly important ... groundbreaking ... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Häggström, Professor of Mathematical Statistics
"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist
"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times
"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla
"a magnificent conception ... it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs and by physicists who think there is no point to philosophy." -- Brian Clegg, Popular Science
"Bostrom...delivers a comprehensive outline of the philosophical foundations of the nature of intelligence and the difficulty not only in agreeing on a suitable definition of that concept but in living with the possibility of dire consequences of that concept." -- A. Olivera, Teachers College, Columbia University, CHOICE
"Bostrom's achievement (demonstrating his own polymathic intelligence) is a delineation of a difficult subject into a coherent and well-ordered fashion. This subject now demands more investigation."--PopMatters
"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
Top Customer Reviews
What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail.
When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focusses on.
He sees three main paths to superintelligence:
1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc., are applied to bring about a superintelligence.
2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body.
Building his arguments on available data and extrapolating from there, Bostrom is confident that:
- some form of self-aware, machine super-intelligence is likely to emerge
- we may be unable to stop it, even if we wanted to, no matter how hard we tried
- while we may be unable to stop the emergence of super-intelligence, we could prepare ourselves to manage it and possibly survive it
- failing to take this seriously and prepare may result in our extinction, while serious pre-emergence debate and preparation may result in some form of co-existence
It's radical and perhaps frightening, but failing to comprehend the magnitude of the risks we are about to confront would be a grave error: once super-intelligence begins to manifest itself and act, the change may be extremely quick, and we may not be afforded a second chance.
Most of the book concerns itself with the several types of super-intelligence that may develop, the ways in which we may be able to control or at least co-exist with such an entity or entities, and what the world, and literally the Universe, may turn into depending on how we plant the initial super-intelligent seed.
1. Superhuman machine intelligence is coming.
2. It's potentially catastrophic for humanity.
3. We might be able to tame it.
"Superintelligence" is not human-level artificial intelligence as in passing the Turing Test. It's what comes after that. Once we build a machine as smart as a human, that machine writes software to improve itself, which enables it to further improve itself -- but faster, then faster and faster. The equal-to-human stage proves brief as the technology charges ahead into superhuman territory.
Before long, the thing is way smarter than any human, and it parlays its superhuman skill at programming into superhuman skill at everything else, including strategic thinking, practical psychology and propaganda. It can start innocuously enough as a device that only thinks, but using its superhuman strategic skills it persuades humans to give it control over physical resources like manipulators and nanofactories. At that point it becomes a deadly threat even without "wanting" to. For example, its goal could be perfecting battery technology, but as it pursues this goal it could decide it needs more computing resources (to really figure things out...) -- a lot more resources, and so it proceeds to turn half the earth's crust into computing machinery, while tiling the rest of the planet with solar cells to power that machinery! We lose.
If you're scared now, then you'll want to read the second half or so of the book, which is about how to tame superintelligence, or for now, how to prepare to tame it before it's already upon us and out of control.
Most Recent Customer Reviews
Fairly enjoyable. I have a fair few problems with it that I'm too lazy to argue here, but a nice summation of that problem would be "It's written by a philosopher." Published 7 days ago by Kristopher
The book has some very interesting ideas, but I found, like Richard Feynman, that philosophers lack rigor. Several obvious problems were utterly ignored. Published 12 days ago by Thomas C. Jones
While it's obvious the author has a deep understanding of AI, his style of writing is extremely dense and his choice of vocabulary almost made the book inaccessible to me. Published 17 days ago by Nick H.
A bit too speculative, even for the topic. The organization of topics and general readability leave a lot to be desired. Published 17 days ago by Nicholas Kominitsky
Book is very wordy. A lot of the points it raises are well thought out, maybe too well thought out. Published 2 months ago by Casey
Reviewed by Dr. Andrea Diem-Lane
The fear of invaders from Mars or other planetary systems has been a staple of science fiction movies from the 1950s onwards, including...