Superintelligence: Paths, Dangers, Strategies (Reprint Edition)
by Nick Bostrom (Author)
| Format | Price | New from | Used from |
| --- | --- | --- | --- |
| Audible Audiobook, Unabridged | $3.04 | — | — |
| MP3 CD, Audiobook, MP3 Audio, Unabridged | $19.93 | $4.36 | — |
{"desktop_buybox_group_1":[{"displayPrice":"$13.09","priceAmount":13.09,"currencySymbol":"$","integerValue":"13","decimalSeparator":".","fractionalValue":"09","symbolPosition":"left","hasSpace":false,"showFractionalPartIfEmpty":true,"offerListingId":"2owZnp8Mz8u5WTHz6HLxO82alk2x%2Ba6zmZ5CjEMgBga62Mg4JVxJLCjfFQOsmghFvrpR%2FQrLZ1nSbpWgGuYZ3uRFfCAtSSKx%2BBzgM%2FK0PKDtbqsusyqKVOjJarQ3RB%2Bs49TduyPuprBY1%2BW3Yj4cMg%3D%3D","locale":"en-US","buyingOptionType":"NEW","aapiBuyingOptionIndex":0}]}
Purchase options and add-ons
A New York Times bestseller
Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
- ISBN-10: 0198739834
- ISBN-13: 978-0198739838
- Edition: Reprint
- Publisher: Oxford University Press
- Publication date: May 1, 2016
- Language: English
- Dimensions: 7.6 x 1 x 5 inches
- Print length: 390 pages
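The ISBN-10 and ISBN-13 above encode the same book number: the ISBN-10 is derivable from the ISBN-13 by dropping the 978 prefix and recomputing the check digit. A minimal Python sketch of that standard conversion (the function name is ours, for illustration only):

```python
def isbn13_to_isbn10(isbn13: str) -> str:
    """Convert a 978-prefixed ISBN-13 to the equivalent ISBN-10."""
    digits = isbn13.replace("-", "")
    assert digits.startswith("978") and len(digits) == 13
    body = digits[3:12]  # drop the 978 prefix and the ISBN-13 check digit
    # ISBN-10 check digit: weighted sum with weights 10..2, mod-11 complement
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), body))
    check = (11 - total % 11) % 11
    return body + ("X" if check == 10 else str(check))

print(isbn13_to_isbn10("978-0198739838"))  # -> 0198739834
```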
Frequently bought together
- This item: Superintelligence: Paths, Dangers, Strategies, $13.09 (In Stock)
- [Two additional bundle items, $15.49 and $13.62, both In Stock; titles not captured]
More items to explore
- Human Compatible: Artificial Intelligence and the Problem of Control (Paperback), $15.72
- Artificial Intelligence: A Modern Approach (Paperback), $21.84 (Only 1 left in stock - order soon)
- The Big Picture: On the Origins of Life, Meaning, and the Universe Itself, Sean Carroll (Paperback), $16.09
Customer reviews
4.3 out of 5 stars, 4,224 global ratings
How customer reviews and ratings work
Customer Reviews, including Product Star Ratings, help customers to learn more about the product and decide whether it is the right product for them.
To calculate the overall star rating and percentage breakdown by star, we don’t use a simple average. Instead, our system considers things like how recent a review is and whether the reviewer bought the item on Amazon. It also analyzes reviews to verify trustworthiness.
Learn more about how customer reviews work on Amazon.
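Amazon does not publish the actual model behind that weighting, so the following is a purely hypothetical sketch of what a non-simple average could look like: recency and verified purchase increase a review's weight. All weights and dates here are invented for illustration.

```python
from datetime import date

def weighted_rating(reviews, today=date(2023, 12, 15)):
    """Hypothetical aggregate: newer and verified-purchase reviews
    count for more. The weighting scheme is invented, not Amazon's."""
    num = den = 0.0
    for stars, posted, verified in reviews:
        age_years = (today - posted).days / 365.25
        weight = 1.0 / (1.0 + age_years)    # newer reviews weigh more
        weight *= 1.5 if verified else 1.0  # verified purchases weigh more
        num += weight * stars
        den += weight
    return num / den if den else 0.0

sample = [(5, date(2023, 10, 9), True),
          (4, date(2020, 2, 1), True),
          (2, date(2014, 9, 17), False)]
print(round(weighted_rating(sample), 2))  # skews toward the recent 5-star review
```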
Top reviews from the United States
Reviewed in the United States on October 9, 2023
This was written 10 years ago, and formed the basis of a lot of thought about approaches to preparing for and dealing with an intelligence explosion. It’s extremely thorough, and goes through a series of philosophical and practical possibilities, and strategies for dealing with them (as the title says). It kind of bogs down at the 3/4 mark for modern readers, because realistically some of the questions he tackles are now moot (should we pursue whole brain emulation before synthetic AI? will the world band together to collaborate on a global AI?) and reading through all the paths and strategies has become less relevant. But he had a lot of interesting things to say on the side of collaboration that are definitely worth a re-read, and the general substance of the book is fundamental to understanding more current approaches to an intelligence explosion.
Reviewed in the United States on February 1, 2020
Absent the emergence of some physical constraint which causes the exponential growth of computing power at constant cost to cease, some form of economic or societal collapse which brings an end to research and development of advanced computing hardware and software, or a decision, whether bottom-up or top-down, to deliberately relinquish such technologies, it is probable that within the 21st century there will emerge artificially-constructed systems which are more intelligent (measured in a variety of ways) than any human being who has ever lived and, given the superior ability of such systems to improve themselves, may rapidly advance to superiority over all human society taken as a whole. This “intelligence explosion” may occur in so short a time (seconds to hours) that human society will have no time to adapt to its presence or interfere with its emergence. This challenging and occasionally difficult book, written by a philosopher who has explored these issues in depth, argues that the emergence of superintelligence will pose the greatest human-caused existential threat to our species so far in its existence, and perhaps in all time.
Let us consider what superintelligence may mean. The history of machines designed by humans is that they rapidly surpass their biological predecessors to a large degree. Biology never produced something like a steam engine, a locomotive, or an airliner. It is similarly likely that once the intellectual and technological leap to constructing artificially intelligent systems is made, these systems will surpass human capabilities to an extent greater than that by which the capabilities of a Boeing 747 exceed those of a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans. We'd better be sure of the intentions and benevolence of that intelligence before handing over the keys to our future to it.
Because when we speak of the future, that future isn't just what we can envision over a few centuries on this planet, but the entire “cosmic endowment” of humanity. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. (If we weren't, there would be abundant and visible evidence of cosmic engineering by those more advanced than we.) Thus our cosmic endowment may be the entire galaxy, or the universe, until the end of time. What we do in the next century may determine the destiny of the universe, so it's worth some reflection to get it right.
As an example of how easy it is to choose unwisely, let me expand upon an example given by the author. There are extremely difficult and subtle questions about what the motivations of a superintelligence might be, how the possession of such power might change it, and the prospects for us, its creators, to constrain it to behave in a way we consider consistent with our own values. But for the moment, let's ignore all of those problems and assume we can specify the motivation of an artificially intelligent agent we create and that it will remain faithful to that motivation for all time. Now suppose a paper clip factory has installed a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission. The mission?—to maximise the number of paper clips produced in its future light cone.
“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet by self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but then they were just consuming energy and material resources which could better be deployed for making paper clips. Soon other planets in the solar system would be similarly disassembled, and self-reproducing probes dispatched on missions to other stars, there to make paper clips and spawn other probes to more stars and eventually other galaxies. Eventually, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.
This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.
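The paper-clip scenario is, at bottom, an objective-function bug: the goal counts clips and nothing else, so every side effect is invisible to the optimizer. A toy Python sketch of that failure mode follows; the world model, names, and conversion rate are all invented for illustration.

```python
# Toy misspecified objective: the agent maximizes clip count only, so
# converting *any* resource pool, including the one humans depend on,
# strictly improves its score. Purely illustrative.
world = {"iron_ore": 100, "factories": 20, "human_habitat": 50}
CLIPS_PER_UNIT = 10  # invented conversion rate

def objective(clips: int) -> int:
    return clips  # no term penalizes what happened to the world

clips = 0
for pool in list(world):
    clips += CLIPS_PER_UNIT * world[pool]
    world[pool] = 0  # side effect the objective never sees

print(objective(clips), world)
# 1700 {'iron_ore': 0, 'factories': 0, 'human_habitat': 0}
```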
One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial general intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, and not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or, it might emerge as networks store more and more data about the real world and have access to accumulated human knowledge. Or, we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically-limited speeds. Or, we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, either on neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.
At some point, the emergence of superintelligence calls into question the economic rationale for a large human population. In 1915, there were about 26 million horses in the U.S. By the early 1950s, only 2 million remained. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won't.
As an engineer, I usually don't have much use for philosophers, who are given to long gassy prose devoid of specifics and for spouting complicated indirect arguments which don't seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt is likely to result in clear policy choices before we're faced with the emergence of an artificial intelligence, after which, if they're wrong, it will be too late.
That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence which blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine, and not be self-aware and have its own agenda and powerful wiles to advance it, based upon a knowledge of humans far beyond what any single human brain can encompass. Unless you believe there is some kind of intellectual élan vital inherent in biological substrates which is absent in their equivalents based on other hardware (which just seems silly to me—like arguing there's something special about a horse which can't be accomplished better by a truck), the mature artificial intelligence will be the superior in every way to its human creators, so in-depth ratiocination about how it will regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.
Reviewed in the United States on May 24, 2023
An important book to read and reread, especially in this AI epoch. The book details several issues that may arise during the deployment of AI, AGI, and Superintelligence. We are clearly not ready to coexist with an agent that is orders of magnitude more intelligent than we are.
Some parts of the book are slightly hard to read, as it is clearly written like a summary of research papers rather than a typical nonfiction book. As a researcher and curious person, I highly recommend it.
Reviewed in the United States on September 17, 2014
The author has obviously put a huge amount of thought into this topic. The number of angles he considers in terms of implementation timelines, methodologies, pros and cons for each, and likelihood of the success of different methodologies over various timeframes is impressive.
For example, in discussing the various ways in which AI might be implemented, he concludes that AI (and subsequently, super-intelligent AI) via whole brain emulation is essentially guaranteed to happen due to ever-improving scanning techniques such as MRI or electron microscopy, ever-increasing computing power, and the fact that understanding the brain is not necessary to emulate the brain. Rather, once you can scan it in enough detail, and you have enough hardware to simulate it, it can be done even if the overarching design is a black box to you (individual neurons or clusters of neurons can already be simulated, but we lack the computing power to simulate 10 billion neurons, and we lack the knowledge of how they are all connected in a human brain -- something which various scanning projects are already tackling).
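To put rough numbers on the reviewer's hardware point, here is a back-of-envelope estimate of the compute needed for real-time emulation, using the review's neuron count and other commonly cited order-of-magnitude assumptions; none of these constants are settled facts.

```python
# Back-of-envelope: compute required for real-time whole-brain emulation.
# All constants are rough order-of-magnitude assumptions, not measurements.
neurons = 1e10                 # reviewer's figure; other estimates run ~8.6e10
synapses_per_neuron = 1e4
updates_per_second = 1e2       # rough neural signaling rate
flops_per_synapse_update = 10

required = neurons * synapses_per_neuron * updates_per_second * flops_per_synapse_update
print(f"~{required:.0e} FLOP/s needed")  # ~1e17 FLOP/s
# For scale: today's fastest supercomputers peak around 1e18 FLOP/s, which
# is why scanning/connectomics, not raw compute, may be the real bottleneck.
```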
However, he also concludes that due to the time it will take to achieve the necessary advances in scanning and hardware, whole brain emulation is unlikely to be how advanced AI is actually, or initially, achieved. Rather, more conventional AI programming techniques, while perhaps posing a greater need for understanding the nature of intelligence, have a much-reduced hardware requirement (and no scanning requirement) and are likely to reach fruition first.
This is just one example. He slices and dices these issues more ways than you can imagine, coming to what is, in the end, a fairly simple conclusion (if I may inelegantly paraphrase): Super-intelligent AI is coming. It might be in 10 years, maybe 20, maybe 50, but it is coming. And, it is potentially quite dangerous because, by definition, it is smarter than you. So, if it wants to do you harm, it will and there will be very little you can do about it. Therefore, by the time super-intelligent AI is possible, we had better know not just how to make a super-intelligent AI, but a super-intelligent AI which shares human values and morals (or perhaps embodies human values and morals as we wish they were, since as he points out, we certainly would not want to use some people's values and morals as a template for an AI, and it may be hard to even agree on some such philosophical issues across widely-divergent cultures and beliefs).
This is a thought-provoking book. It raises issues that I never even would have thought of had the author not pointed them out. For example, "infrastructure proliferation" is a bizarre, yet presumably possible, way in which a super-intelligent (but in some ways, lacking common sense) AI could end life as we know it without even being malicious -- just indifferent to us while pursuing pedestrian goals in what is, to it, a perfectly logical manner.
I share the author's concerns. Human-level (much less super-intelligent) AI seems far away. So, why worry about the consequences right now? There will be plenty of time to deal with such issues as the ability to program strong AI gets closer. Right?
Maybe, maybe not. As the author also describes in detail, there are many scenarios (perhaps the most likely ones) where one day you don't have AI, and the next you do (e.g., only a single algorithm tweak was keeping the system from being intelligent and with that solved, all of a sudden your program is smarter than you -- and able to recursively improve itself so that days, or maybe hours or minutes later, it is WAY smarter than you). I hope AI researchers take heed of this book. If the ability to program goals, values, morals and common sense into a computer is not developed in parallel with the ability to create programs that dispassionately "think" at a very high level, we could have a very big problem on our hands.
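The "one day you don't have AI, the next you do" dynamic is a feedback loop: capability feeds the rate of capability gain. A toy numerical sketch of why such a curve looks flat for a long time and then explodes; the growth law and constants are invented purely to show the shape.

```python
# Toy intelligence-explosion curve: improvement rate proportional to the
# square of current capability (dI/dt = k * I^2), so a system that gets
# smarter also gets faster at getting smarter. Purely illustrative.
capability, k = 1.0, 0.01
for step in range(1, 121):
    capability += k * capability ** 2
    if step % 20 == 0:
        print(f"step {step:3d}: capability {capability:10.2f}")
    if capability > 1e6:
        print(f"runaway at step {step}: capability {capability:.2e}")
        break
# Nearly flat for ~90 steps, then the curve goes vertical within a few steps.
```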
Top reviews from other countries
Clyve Westerlund
5.0 out of 5 stars
Thought Provoking, Carefully Well-Crafted, and Immeasurably Important to Consider.
Reviewed in Australia on July 22, 2022
Bostrom is by far one of my all-time favorite philosophers and storytellers. This of course introduces my own bias towards a positive review and positive remarks on this text. Not to mention the countless icons in the high-functioning fields of our Human Society who remark positively on what is written here, and on why this book is important for everyone to ruminate upon and have mature conversations about.
Artificial Intelligence to me is a point whereby it is on the same evolutionary pathway we are on, or alternatively a point that is forming in conjunction and adjacent to our path and has its own evolutionary path. What this suggests to me is that we either suffuse and fuse with our technology (whatever form that may be and I am talking in a good way as I would argue we’re not doing too bad with our living with technology thus far as we are still here and thriving in a sense) or we co-exist side by side and treating each other like kin. I must admit, this is the utopian view for me and as Bostrom elaborates, there are many things to get right first before we even get to such a level of (co-)existence and growth.
There are things in this book that I have already thought about before I peered and pierced into the realm that illustrates the most concerning issues and strategies regarding this field and technology. But so many other things I had no idea existed that allowed my mind to wander and critically think, like really think. Furthermore, as I read, it allowed me to seriously consider what is at stake and what could be discovered and invented and resolved that concocts a world beyond our wildest dreams and imagination.
Although I thought the language used in this text is not so accessible to the average reader, this just further illustrates the absolute demand that we need all the words and brain power we can muster to make sure, as best we can, to create something wonderful for All Humanity. In spite of the doom and gloom narratives that the media portrays with A.I., it is clear that there is much misunderstanding and many misconceptions that exist through these news cycles. However, reading this gives you the ammunition to dispel such things. So my advice to those who find this difficult to read: think of the challenge of reading such a sophisticated book as this as the simplest challenge to first overcome for yourself to better your understanding and expand your learnings of what could perhaps be something that could propound and compound possibilities of the infinite. Isn’t that what we’re aiming for anyway? To hone our abilities to contemplate the infinite. One of the ways, but perhaps one of the most significant and fundamental ways to do that, is through A.I.
I stipulate that at the end of this book, I am left in awe of the research that went into this, the writing and effort of putting all the pieces together, and the hope that if we just decide to communicate and collaborate with each other, the future is nothing but a glowing North Star.
In conclusion, this is worth the read for one of the most uncertain yet paradoxically almost certain technological developments in all our 200,000 years of Human History, one that could elevate our civilisation by orders of magnitude. But since we are also the Creators (and thus prone at times to strife, destitution, and desolation), it could in contrast be fatally catastrophic if we are not self-aware about everything infused in the processes of this development. But I have hope for us. Because the alternative is not so good.
Anyway, have fun reading. :)
One person found this helpful
Oktay
5.0 out of 5 stars
one of my favorite books
Reviewed in Sweden on January 26, 2023
It is one of my favorites; I think it is a must-read book for everyone in the 21st century.
Akki
4.0 out of 5 stars
This just changes the way one thinks around AI
Reviewed in India on April 19, 2019
There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying ‘philosophy is dead – all we need now is physics’ or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.
It’s worth pointing out immediately that this isn’t really a popular science book. I’d say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail – but if you are interested in philosophy and/or artificial intelligence, don’t let that put you off.
What Nick Bostrom does is to look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. (Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic, or playing chess.) In the first couple of chapters he examines how this might be possible – and points out that the timescale is very vague. (Ever since electronic computers were invented, pundits have been putting the development of effective AI around 20 years in the future, and it’s still the case.) Even so, it seems entirely feasible that we will have a more than human AI – a superintelligent AI – by the end of the century. But the ‘how’ aspect is only a minor part of this book.
The real subject here is how we would deal with such a ‘cleverer than us’ AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super-AIs may well happen – and if we don’t think through the implications and how we would deal with it, we could well be stuffed as a species.
I think it’s a shame that Bostrom doesn’t make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they’d go wrong), but that’s about it. Yet more thought (and, dare I say it, far more readability than you typically get in a textbook) has been put into these issues in science fiction than is being allowed for here, and it would have been worthy of a chapter in its own right.
I also think a couple of the fundamentals aren’t covered well enough, but are instead pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I’m not sure there is enough thought put into the basics of ways you can pull the plug manually – if necessary by shutting down the power station that provides the AI with electricity.
The other dubious assertion was originally made by I. J. Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans, so once we build one it will rapidly improve on itself, producing an ‘intelligence explosion’. I think the trouble with this argument is that my suspicion is that if you got hold of the million most intelligent people on earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn’t mean it can do this specific task well – this is an assumption.
However this doesn’t set aside what a magnificent conception the book is. I don’t think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs… and by physicists who think there is no point to philosophy.
23 people found this helpful
Du
5.0 out of 5 stars
Go for it....
Reviewed in India on September 17, 2023
I haven't finished it yet, but truly it's a great book.
Bernhard
4.0 out of 5 stars
A very interesting book, and an important one - but it could have been written a bit better.
Reviewed in Germany on May 22, 2016
It might not be the most pleasurable read, but sticking with it till the end is worth it. The book has a great logical structure, and most chapters end with a summary - that's exactly how such a book should be written. Other than this, it is hard to give a general critique, since many chapters are extremely well-written:
The first chapter treats past developments and the state of the art in AI research. Well written, but lacking important aspects of current AI-related research: Scientific progress in all areas of machine learning was not mentioned in sufficient detail; clustering, policy development via dynamic programming, classification, natural language processing, etc. Which of these special-purpose skills are important ingredients for an AGI, and which are problems that are AI-complete? I would have appreciated a bit more on that topic.
The second chapter, "Paths to superintelligence", outlines several possible ways to get there, but is not very convincing about which of these is likely. The reader is well-informed, but left in a state of "OK, but Bostrom himself seems not to believe that any of them is likely to achieve superintelligence in the next 50 years." Slightly connected, but much better written, chapter 3 deals with different forms of superintelligence.
The fourth chapter deals with the "kinetics of an intelligence explosion", and is again very vague: Both accelerating and decelerating effects (nicely matched with all possible paths to superintelligence) are discussed at length, and again the reader is left with the feeling that absolutely no prognosis is possible. Bostrom himself ends the chapter with "although a fast or medium [speed] takeoff looks more likely, the possibility of a slow takeoff cannot be excluded".
Chapter 5 marks a transition in the writing style: Bostrom changes from a very neutral, unconfident tone to a highly convincing one. If I had to guess, I would say that this and the following chapters are at the core of his own research interest. Bostrom makes a very convincing point that as soon as a superintelligence is created, it will very likely take control over the world. Chapter 6 briefly deals with possible (super-)capabilities such a superintelligence will develop and how it can use them to take over control.
Chapter 7 contains the important orthogonality hypothesis, i.e., that there is no reason to believe that a superintelligence has high moral standards. It then discusses important instrumental goals (i.e., goals necessary to achieve the intelligence's ultimate goal, whatever that may be). Count in self-preservation and resource acquisition, for example. The following chapter then shows that even a non-malevolent superintelligence may destroy everything dear to us or perform otherwise morally terrible actions (e.g., simulating what people wish requires simulating people - terminating a simulation could then easily turn out as a genocide). Both chapters are extremely well-written and captivating, an easy and convincing read. In that line of thinking, chapter 11 discusses scenarios in which not one, but multiple superintelligences come to power, a world in which humankind is a mere slave race. In contrast to the previously mentioned chapters, this one again lacks confidence and seems to paint a quite unrealistic picture.
Chapter 9 tries to illustrate possible ways to control a superintelligence, and, more importantly, illustrates how and why they will probably fail. Chapter 10 merely categorizes superintelligences via their controlled environment. Chapter 12 connects with chapter 9, assuming that the control problem is solved: How should we design and control our superintelligence? What kind of morals should we instill? This chapter very nicely explains the important problem that morality is not a (mathematically) well-defined object, and that we currently lack an operational definition ourselves. Still, the chapter presents a few interesting ideas about how to "load" our values into the superintelligence. Chapter 13 augments this chapter by suggesting more indirect methods for the value loading problem.
Chapter 14 finally deals with "what we should do now": Should we continue researching or should we do our best to stall progress? Although several scenarios are presented, a definite conclusion escaped my attention. This then nicely summarizes my impression of chapters 2-4 and 13-14: Bostrom is jumping between pros and cons, eager to give a complete picture (which is always better than a one-sided one). Jumping around destroyed a lot of the book's effect in these topics. By mentioning something good and something bad in consecutive sentences, the book invoked a feeling of neutrality (probably more than what would have been invoked by first listing ALL pros and then ALL cons). Maybe not the best strategy in a topic where the credo should be: "Better be safe than sorry."
38 people found this helpful