Superintelligence: Paths, Dangers, Strategies (Reprint Edition)
Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
- ISBN-10: 0198739834
- ISBN-13: 978-0198739838
- Edition: Reprint
- Publisher: Oxford University Press
- Publication date: May 1, 2016
- Language: English
- Dimensions: 7.6 x 1 x 5 inches
- Print length: 390 pages
Editorial Reviews
"I highly recommend this book" --Bill Gates
"Terribly important. Groundbreaking, extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole. If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics
"Nick Bostrom's excellent book "Superintelligence" is the best thing I've seen on this topic. It is well worth a read." --Sam Altman, President of Y Combinator and Co-Chairman of OpenAI
"Worth reading. We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkley
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT
"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist
"There is no doubting the force of [Bostrom's] arguments. The problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society
"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
Product details
- ASIN : 0198739834
- Publisher : Oxford University Press; Reprint edition (May 1, 2016)
- Language : English
- Paperback : 390 pages
- ISBN-10 : 0198739834
- ISBN-13 : 978-0198739838
- Item Weight : 15.8 ounces
- Dimensions : 7.6 x 1 x 5 inches
- Best Sellers Rank: #13,596 in Books
- #1 in Artificial Intelligence (Books)
- #24 in Artificial Intelligence & Semantics
About the author

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technological and foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostrom’s widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.
He is the recipient of a Eugene R. Gannon Award, and has been listed on Foreign Policy’s Top 100 Global Thinkers list twice. He was included on Prospect’s World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.
For more, see www.nickbostrom.com
Customer reviews
Top reviews from the United States
Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors by a large margin. Biology never produced anything like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities by a margin greater than that by which a Boeing 747 exceeds a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans. We'd better be sure of the intentions and benevolence of that intelligence before handing it the keys to our future.
Because when we speak of the future, that future isn't just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren't, there would be abundant and visible evidence of cosmic engineering by those more advanced than we.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it's worth some reflection to get it right.
As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let's ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission? To maximise the number of paper clips produced in its future light cone.
“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet with self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.
This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.
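The failure mode in the Clippy story is mechanical enough to sketch in a few lines of code. Below is a deliberately toy illustration (not from the book; all names and the two-action world model are hypothetical): a greedy maximizer whose utility function scores the world only by clip count. Because the objective contains no term for anything else its creators value, the do-nothing option can never win, and the side effects simply accumulate.

```python
# Toy sketch of a literal final goal gone wrong (hypothetical example,
# not code from the book). The objective scores world-states *only* by
# clip count, so everything else is invisible to the agent.
from dataclasses import dataclass

@dataclass
class World:
    clips: int
    biosphere: float  # everything the designers actually value; absent from the goal

def utility(w: World) -> float:
    return w.clips  # the entire objective: nothing else counts

def make_clips(w: World) -> World:
    # converts "raw material" (here, the biosphere) into clips
    return World(clips=w.clips + 1000, biosphere=w.biosphere - 1.0)

def do_nothing(w: World) -> World:
    return w

def choose(w: World, actions) -> World:
    # a maximizer picks whichever successor state scores highest
    return max((a(w) for a in actions), key=utility)

w = World(clips=0, biosphere=100.0)
for _ in range(100):
    w = choose(w, [make_clips, do_nothing])
print(w)  # World(clips=100000, biosphere=0.0): the goal was satisfied perfectly
```

The point of the sketch is that nothing malfunctions: the agent does exactly what its utility function says, which is precisely the problem the reviewer (and Bostrom) are pointing at.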
One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, and not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or, it might emerge as networks store more and more data about the real world and have access to accumulated human knowledge. Or, we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically limited speeds. Or, we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, either on a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.
At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won't.
As an engineer, I usually don't have much use for philosophers, who are given to long gassy prose devoid of specifics and to spouting complicated indirect arguments which don't seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which seems unlikely to yield clear policy choices before we're faced with the emergence of an artificial intelligence; after which, if the speculations are wrong, it will be too late.
That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine, rather than a self-aware agent with its own agenda and powerful wiles to advance it, based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me, like arguing there's something special about a horse which can't be accomplished better by a truck), the mature artificial intelligence will be superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.
The book stands out for its rigorous analysis and balanced perspective. Bostrom carefully navigates the reader through various scenarios where AI surpasses human intelligence, discussing both the transformative benefits and the existential risks. His writing style is scholarly yet accessible, making complex ideas about AI ethics, future forecasting, and strategic planning understandable to a broad audience.
One of the most compelling aspects of the book is its exploration of the 'control problem' - how humans could control entities far smarter than themselves. Bostrom does not shy away from the challenging philosophical and technical issues this problem presents. He also emphasizes the importance of preparatory work in AI safety research, encouraging proactive rather than reactive measures.
However, some readers might find the level of detail and theoretical nature of the discussions somewhat daunting. The book demands attentiveness and a willingness to engage with deeply philosophical and technical content. Additionally, while Bostrom presents a wide array of possibilities, the book sometimes leans more towards speculative thought than practical solutions.
"Superintelligence: Paths, Dangers, Strategies" is a seminal work in the field of AI and an essential read for anyone interested in the future of technology and its implications for humanity. Bostrom's thorough approach offers valuable insights and raises critical questions that will shape the ongoing conversation about AI and our future.