- Paperback: 576 pages
- Publisher: Oxford University Press; 1st edition (August 1, 2011)
- Language: English
- ISBN-10: 0199606501
- ISBN-13: 978-0199606504
- Product Dimensions: 9 x 1.1 x 6.1 inches
- Shipping Weight: 2.1 pounds
- Average Customer Review: 18 customer reviews
- Amazon Best Sellers Rank: #576,923 in Books
Global Catastrophic Risks 1st Edition
Review from previous edition: "This volume is remarkably entertaining and readable... It's risk assessment meets science fiction."
Natural Hazards Observer

"The book works well, providing a mine of peer-reviewed information on the great risks that threaten our own and future generations."

"We should welcome this fascinating and provocative book."
Martin J. Rees (from the foreword)
About the Author
Nick Bostrom, PhD, is Director of the Future of Humanity Institute, in the James Martin 21st Century School, at Oxford University. He previously taught at Yale University in the Department of Philosophy and in the Yale Institute for Social and Policy Studies. Bostrom has served as an expert consultant for the European Commission in Brussels and for the Central Intelligence Agency in Washington DC. He has advised the British Parliament, the European Parliament, and many other public bodies on issues relating to emerging technologies.
Milan M. Cirkovic, PhD, is a senior research associate of the Astronomical Observatory of Belgrade (Serbia) and a professor of cosmology in the Department of Physics at the University of Novi Sad (Serbia). He received his PhD in Physics and his MSc in Earth and Space Sciences from the State University of New York at Stony Brook (USA), and his BSc in Theoretical Physics from the University of Belgrade.
Top customer reviews
Among the core chapters discussing particular risks, the three that are most "hard science" -- on supervolcanoes, asteroid or comet impacts, and risks from outside the solar system -- are just great. One learns, for instance, that (contrary to much science fiction) comets pose more of a risk than asteroids, and that the major risk in the last category is not nearby supernovas but cosmic rays created by gamma-ray bursts. These three chapters are perhaps the only contexts where it is reasonable to attempt to estimate actual probabilities of the catastrophes.
The balanced article on global warming is unlikely to please extremists, concluding that mainstream science predicts a linear increase in temperature that may be unpleasant but not catastrophic, while the various speculative non-linear possibilities leading to catastrophe have plausibilities that are impossible to assess. The article on pandemics is surprisingly upbeat ("are influenza pandemics likely? Possibly, except for the preposterous mortality rate that has been proposed"), as is the article on exotic physics ("Might our vacuum be only metastable? If so, we can envisage a terminal catastrophe, when the field configuration of empty space changes, and with it the effective laws of physics..."). The articles on nuclear war, on nuclear terrorism, and on risks from biotechnology and from nanotechnology are perfectly sensible and well argued. These articles are somewhat technical, so it is a curious relief to arrive at "totalitarian government", which discusses in an easy-to-read way why 20th-century totalitarian governments did not last forever, and the circumstances under which a stable worldwide totalitarian government might emerge.
The article on AIs emphasizes that we wrongly imagine intelligent machines as like humans -- "how likely is it that AI will cross the vast gap from amoeba to village idiot, and then stop at the level of human genius?" -- and that we should attempt to envisage something quite different. But the subsequent discussion of Friendly or Unfriendly AIs rests on the assumptions that AIs may be created which have intelligence and motivation ("optimization targets", in the author's effort to avoid anthropomorphizing) to do things on their own initiative, and that their motivations will be comprehensible to humans. Well, I find it hard enough to imagine what "motivation/optimization targets" mean to an amoeba or a village idiot, let alone an AI.
The only article I found positively unsatisfactory was the one on social collapse. A catastrophe eliminating global food production for one year would likely cause "collapse of civilization" amid fighting over the two months' food supply in storage, whereas elimination for just one month would not. A serious discussion of the sizes of the different catastrophes needed to reach this tipping point would be fascinating, but the article merely assumes power-law distributions for the size of an unspecified disaster -- this is the sort of thing that brings mathematical modeling into disrepute.
Overall, a valuable and eclectic selection of thought-provoking articles.
Probably the most dangerous future risk is the advent of real Artificial Intelligence, within our lifetime or the very near future. Eliezer Yudkowsky is the leading figurehead and spokesman for this risk, and he contributed the chapter on it in this book. If our fears become reality, it will not matter much what else we get right. For many of the other risks we worry about, we already have a wealth of information on their past occurrences, how they work, how likely they are to affect us, and how they will affect us when they come. The risks surrounding the arrival of AI are far more dangerous in that this is not an experiment we can run in practice so that reality can beat us over the head with the correct answer. If we are to achieve true FAI (Friendly Artificial Intelligence, as Yudkowsky calls it), a massive amount of dedication, money, and effort is needed for the research required to avoid a real disaster. If our aims are realized, however, many of our other risks and concerns can be handed off to an intelligence much greater than ourselves, with a higher probability of being overcome.
We are passing through a stage where we are beginning to create problems that are beyond our current capacity to solve. This book is probably the best general, and somewhat technical, primer for becoming acquainted with the serious problems we currently face and those we will inevitably arrive at in the future. If you are truly keen on getting involved with the kinds of problems we will have to confront, this book is indispensable.