Superintelligence: Paths, Dangers, Strategies (Reprint Edition)
Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
- ISBN-10: 9780198739838
- ISBN-13: 978-0198739838
- Edition: Reprint
- Publisher: Oxford University Press
- Publication date: May 1, 2016
- Language: English
- Dimensions: 7.6 x 1 x 5 inches
- Print length: 390 pages
Editorial Reviews
Review
"I highly recommend this book" --Bill Gates
"Terribly important. Groundbreaking, extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole. If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics
"Nick Bostrom's excellent book "Superintelligence" is the best thing I've seen on this topic. It is well worth a read." --Sam Altman, President of Y Combinator and Co-Chairman of OpenAI
"Worth reading. We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkeley
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT
"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist
"There is no doubting the force of [Bostrom's] arguments. The problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society
"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
Product details
- ASIN: 0198739834
- Publisher: Oxford University Press; Reprint edition (May 1, 2016)
- Language: English
- Paperback: 390 pages
- ISBN-10: 9780198739838
- ISBN-13: 978-0198739838
- Item Weight: 15.8 ounces
- Dimensions: 7.6 x 1 x 5 inches
- Best Sellers Rank: #7,910 in Books
About the author

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technology or foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostrom’s widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.
He is recipient of a Eugene R. Gannon Award, and has been listed on Foreign Policy’s Top 100 Global Thinkers list twice. He was included on Prospect’s World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.
For more, see www.nickbostrom.com
Customer reviews
Top reviews
Top reviews from the United States
Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors to a large degree. Biology never produced anything like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities by a margin greater than that by which a Boeing 747 exceeds the capabilities of a hawk. The gap between the cognitive power of a human, or of all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans. We'd better be sure of the intentions and benevolence of that intelligence before handing it the keys to our future.
Because when we speak of the future, that future isn't just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren't, there would be abundant and visible evidence of cosmic engineering by those more advanced than we.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it's worth some reflection to get it right.
As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let's ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission?—to maximise the number of paper clips produced in its future light cone.
“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet with self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.
This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.
One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or it might emerge as networks store more and more data about the real world and gain access to accumulated human knowledge. Or we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically limited, speeds. Or we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, on either a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.
At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won't.
As an engineer, I usually don't have much use for philosophers, who are given to long gassy prose devoid of specifics and to spouting complicated indirect arguments which don't seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt is likely to result in clear policy choices before we're faced with the emergence of an artificial intelligence, after which, if they're wrong, it will be too late.
That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine, and will not be self-aware, with its own agenda and powerful wiles to advance it, based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me—like arguing there's something special about a horse which can't be accomplished better by a truck), the mature artificial intelligence will be superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.
Bostrom initially lays out the many accomplishments of AI. There is the games dimension - chess, checkers, Jeopardy! and many more - for which an AI is now the champion player - though in all these, he notes, the achievement is via very specific algorithms good only for that game, i.e., with little application to a general intelligence. He notes AI's main paths or approaches to intelligence, their strengths, weaknesses and tradeoffs: 1) the neural network/connectionist approach, 2) evolutionary algorithms, and 3) the symbolic-manipulation approach (GOFAI), which chronologically preceded the others and yielded things like theorem provers, problem-solving programs like GPS, "conversation" programs like ELIZA, expert systems, etc. He leaves implicit that these three paths lead to a giant black hole from which no exit is seen, for as he notes, standing in the distance on the other side are two huge, untaken hills: common-sense knowledge and true language comprehension. These, he notes, are utterly essential to human-equivalent intelligence, but AI has no current strategy to take these hills (as Bostrom again leaves implicit), nor is there any current indication that the three main paths will yield one; in fact, there is the opposite indication. Elsewhere, Hofstadter (Surfaces and Essences: Analogy as the Fuel and Fire of Thinking), in his extensive tome showing that analogy is foundational to thought and language, eviscerates current AI language achievements and is obviously doubtful that computers (as currently conceived) can deal with analogy, and thus language, at all.
But after taking us to the edge of this black hole, Bostrom turns 90 degrees, ignoring the two hills, and discusses very general methods by which AI will achieve human equivalence. In this, it is safe to say, his hopes fall primarily on whole brain emulation (WBE). But his description of this approach, while seemingly detailed, fails utterly to convey its true difficulties; WBE is an untaken Everest. I suggest the recent The Future of the Brain: Essays by the World's Leading Neuroscientists, and perhaps my (5-star) review thereof. The authors, Marcus and Freeman, are neuroscience guys discussing the massive difficulties which the huge brain-mapping projects actually face. For example: we face an 85-billion-neuron brain with roughly 1,000 types of neurons, the functions of none of which we understand. We do not know basic facts such as how memory (our experience) is stored. We are quite certain that the brain is NOT using what we currently understand as "computation," but we do not know what this other form is (Marcus ridicules current connectionism). We face data from neural recordings so massive it will run to zettabytes, yet any interpretation will be completely dependent on a guiding theory - note, a theory - when we have none. It will be, they say, like trying to learn what a laptop is and does by taking electrical recordings, when we have no theory of, or knowledge of, the existence of something called "software."
This is to say, we really have no clue what type of "device" the brain actually is. This is exacerbated by the fact that the reason we have no understanding of how experience is stored in the brain (or whether it is) is that we have no theory of what experience is; i.e., we cannot explain the origin of the image of the external world - the coffee cup, in front of us, on the table. This problem of the origin of the image is the more precise statement of Chalmers' famous "hard problem" of consciousness - a word (consciousness) that, so far as I can discern, never appears once in Bostrom's book. The whole book proceeds as though this is an unimportant problem. Yet this very subject forms part of that missing notion of "software." To give a quick idea of how important this could be for the type of "device" the brain actually is and for the origin of the image of the external world: Bergson (Matter and Memory, 1896), presciently anticipating the essence of holography, viewed the brain as creating a reconstructive wave passing through the external, universal holographic field, where this "wave" is specific to, or specifies, a portion of the vast information in the field - now, by this process, an image: the coffee cup on the table. This requires achieving a very concrete dynamics. It would make the brain a very different form of "device," with perception, memory and cognition employing a far different form of "computation," and it raises the question of whether such a device - being simultaneously a very concrete wave - can be embodied in silicon, wire and transistors (or even "memristors") at all; rather, to support such a reconstructive wave, all this biological stuff comprising the brain, with its quantum dynamics rampant, may be absolutely required.
In other words, it is not a question of speculating, as Bostrom does further on, whether we will achieve human equivalence in 2075, or 2100, or 3100; it is a question of what the "device" we ultimately create (the brain/body, AI version) will look like - an answer that will completely determine whether controlling such a device, or imparting values to it, or significantly increasing its intelligence, is going to be any problem or reality at all. But this can only be glimpsed by engaging with and gaining answers - unto a comprehensive theory - within a number of subjects: perception, ecological psychology, memory, explicit memory (consciously knowing an event is in one's past), cognitive development, the origin and nature of consciousness, the role of consciousness in cognition, and more - all here ignored completely.
But the book sails serenely on from this subject of brain emulation, confident without a qualm that we will have created the brain as a silicon-and-wires device - it seems, confusedly, a neural-net-like device that still (somehow) uses software - and begins long considerations of approaches by which, since it is certain we will have electronics, we can speed up the transmission velocities, say by 10,000x (and as well modify what may well be its non-existent "software"), thus allowing the device to develop quickly into a superintelligence and inducing the ensuing problems the rest of the book deals with. At this juncture, though I read the rest (a couple hundred pages), I lost interest; the book has utterly failed to motivate the reality, the nature, or the essence - near or far future - of anything it discusses from this point.
Consider, finally, in this "un-motivated" context, the "value-loading problem," a term I believe coined by Bostrom and treated at some length, wherein we must load human values into our superintelligence to prevent it from destroying the human race. How would the super AI know NOT to get my mother out of a burning building by simply throwing her out of the tenth floor? More simply, how would the superintelligence know not to stir the coffee with a Toyota or a chair? In reality, this is simply a version of the "frame problem," a problem discussed heavily in AI for 30 years, then "faded." It sits on top of one of those hills, for it is actually the old problem of common-sense knowledge. How does the robot, stirring its cup of coffee, recognize that giant bubbles and geysers arising from the liquid, or a feeling like stirring molasses, are anomalous - features not "expected" in this event? In the "frame" formulation, the robot has to check (constantly, at great computational expense) its list of frame axioms - axioms which specify what should remain unchanged in the world as it stirs. In reality, we recognize such an anomaly because it violates one of the many invariance laws structuring the event: a radial flow field defining the coffee's swirl, an adiabatic invariance (a ratio of energy of oscillation to frequency of oscillation) carried over the periodic motion of the spoon and over haptic flows in the hand and arm, an inertial tensor defining the angular resistance as we wield the spoon, and much more. None of this - the concrete structure, forces and dynamics of this experience - can be handled by current AI, nor are the findings of the relevant science - ecological psychology - even considered. It is on the basis of this knowledge that we recognize the "value" of a spoon, or of the flat of a knife, or even of an orange peel, for creating the forces required to stir coffee.
This structured experience with its laws is the basis for even higher-order, yes, analogy-based value statements: "It's not nice to stir up people." But it is worse, for value impartation, while based in this very concrete knowledge and its invariance laws, is actually embedded in our cognitive development, in our interaction with the concrete world and its beings; and this development, it is now being understood, is itself a dynamic trajectory through which our brain travels as a self-organizing dynamic system.
This trajectory, in Piaget's model, unfolds over two years, enabling the child to achieve the basic concepts of causality, object, space and time, the capability of explicit memory (conscious localization of past events in time), and even the ability to symbolize - yes, even the ability to symbolize - the events of the world. It takes until the age of seven to achieve concrete operations, which include a further grasp of space, time and number, and until the age of 12 for formal operations, which include forms of logic and thought we take for granted. In other words, the brain is not only an organized structure, but a structure changing its organization along a complex trajectory purposed to achieve these conceptual and logical capabilities. Not only, then, must we understand the structure of these billions of neurons and their 1,000 types, but also the dynamic principles embedded deep within (via DNA?) by which the structure organizes toward the "intelligence" we are familiar with. It again goes without saying that we have no clue whether the actual biological organization of the brain, the natural course of its interaction with the concrete world at our normal scale of time (also specified by this dynamics), and the reorganizations involved can be achieved in any way other than by the concrete method nature designed. (Spare me the minor evolutionary mistakes nature has supposedly made.) Further, for all the "operations" above, it can be shown that consciousness is required. None of these subject areas make it anywhere into Bostrom. We are asked to worry about value-imparting and the supposed dangers of a silicon-based superintelligence entirely predicated upon an ignoring of all these issues and more, all of which beg or even scream questions not only about what intelligence actually is, but about whether any of these proposed concerns have the slightest reality.
It is just a bit difficult for me, then, to attend to a large structure of concerns which floats in reality upon a puff of very inadequate analysis.
Top reviews from other countries
Artificial intelligence, to me, is either a point on the same evolutionary pathway we are on, or a point forming alongside and adjacent to our path, with its own evolutionary trajectory. What this suggests to me is that we either suffuse and fuse with our technology (whatever form that may take - and I mean in a good way, as I would argue we're not doing too badly living with technology thus far, since we are still here and thriving in a sense), or we co-exist side by side, treating each other like kin. I must admit this is the utopian view for me, and as Bostrom elaborates, there are many things to get right first before we even reach such a level of (co-)existence and growth.
There are things in this book that I had already thought about before I peered and pierced into the realm of the most concerning issues and strategies in this field and technology. But there were so many other things I had no idea existed, which allowed my mind to wander and think critically - like, really think. Furthermore, as I read, it allowed me to seriously consider what is at stake, and what could be discovered, invented and resolved to concoct a world beyond our wildest dreams and imagination.
Although I thought the language used in this text is not very accessible to the average reader, this just further illustrates the absolute demand for all the words and brain power we can muster to make sure, as best we can, that we create something wonderful for all humanity. In spite of the doom-and-gloom narratives that the media portrays around A.I., it is clear that much misunderstanding and many misconceptions circulate through these news cycles; reading this gives you the ammunition to dispel such things. So my advice to those who find this difficult to read: think of the challenge of reading such a sophisticated book as the simplest challenge to first overcome in bettering your understanding and expanding your learning of what could perhaps propound and compound possibilities of the infinite. Isn't that what we're aiming for anyway? To hone our abilities to contemplate the infinite. One of the ways - perhaps one of the most significant and fundamental ways - to do that is through A.I.
I stipulate that at the end of this book, I am left in awe of the research that went into it, of the writing and the effort of putting all the pieces together, and of the hope that if we just decide to communicate and collaborate with each other, the future is nothing but a glowing North Star.
In conclusion, this is worth the read, concerning one of the most uncertain yet paradoxically almost certain technological developments in all our 200,000 years of human history - one that could elevate our civilisation by orders of magnitude. But since we are also the creators (with our proclivity toward strife, destitution and desolation at times), it could in contrast be fatally catastrophic if we are not self-aware about everything infused in the processes of this development. But I have hope for us. Because the alternative is not so good.
Anyway, have fun reading. :)