- Paperback: 576 pages
- Publisher: Oxford University Press; 1st edition (August 1, 2011)
- Language: English
- ISBN-10: 0199606501
- ISBN-13: 978-0199606504
- Product Dimensions: 9 x 1.1 x 6.1 inches
- Shipping Weight: 2.1 pounds
- Customer Reviews: 17
- Amazon Best Sellers Rank: #577,706 in Books
Global Catastrophic Risks 1st Edition
Review from previous edition: "This volume is remarkably entertaining and readable... It's risk assessment meets science fiction."
Natural Hazards Observer
"The book works well, providing a mine of peer-reviewed information on the great risks that threaten our own and future generations."
"We should welcome this fascinating and provocative book."
Martin J. Rees (from the foreword)
About the Author
Nick Bostrom, PhD, is Director of the Future of Humanity Institute in the James Martin 21st Century School at Oxford University. He previously taught at Yale University in the Department of Philosophy and in the Yale Institute for Social and Policy Studies. Bostrom has served as an expert consultant for the European Commission in Brussels and for the Central Intelligence Agency in Washington, DC. He has advised the British Parliament, the European Parliament, and many other public bodies on issues relating to emerging technologies.
Milan M. Cirkovic, PhD, is a senior research associate at the Astronomical Observatory of Belgrade (Serbia) and a professor of cosmology in the Department of Physics at the University of Novi Sad (Serbia). He received his PhD in Physics and his MSc in Earth and Space Sciences from the State University of New York at Stony Brook (USA), and his BSc in Theoretical Physics from the University of Belgrade.
Customer Reviews
I'm an expert in artificial intelligence. Bostrom makes the point that AI can become uncontrolled: while there's a way to test an existing program, there's no way to test a program that rewrites itself, because you don't know what it will turn into. There. I just gave a synopsis of one of the main points of an entire chapter.
Bostrom is wasteful with words on multiple levels. An illustrative example he gives of how training AI neural nets can fail can be stated in a couple of sentences, yet it takes long, turgid prose for him to get to the point. It's a well-known example in the AI community. Here it is: an AI program taught to recognize tanks achieved a very good success rate, but when the program was applied to a different country's tanks it failed completely, because it had been trained not to identify tanks but to recognize the particular lighting conditions in one country.
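The tank anecdote is easy to demonstrate in a few lines. The sketch below is hypothetical (it is not from the book; the synthetic data, features, and numbers are invented for illustration): a linear classifier is trained on data where the label is confounded with overall image brightness, scores near-perfectly in training, then collapses when the lighting correlation is reversed at test time.

```python
# Hypothetical illustration of the tank/lighting confound (not from the book).
# A classifier "learns" lighting instead of tanks: brightness is tied to the
# label in training, and the correlation is reversed in the test set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, tank_bright):
    # Each "image" is 64 noisy pixels; tanks add a weak true shape signal.
    labels = rng.integers(0, 2, n)                # 1 = tank, 0 = no tank
    x = rng.normal(0.0, 1.0, (n, 64))
    x[labels == 1, :8] += 0.3                     # weak genuine tank signal
    # Confound: global brightness offset correlated with the label.
    sign = np.where(labels == 1, 1.0, -1.0) * (1.0 if tank_bright else -1.0)
    x += sign[:, None] * 0.5                      # strong spurious lighting cue
    return x, labels

x_train, y_train = make_images(2000, tank_bright=True)   # tanks photographed in sun
x_test, y_test = make_images(2000, tank_bright=False)    # new country, new lighting

clf = LogisticRegression(max_iter=1000).fit(x_train, y_train)
print("train accuracy:", clf.score(x_train, y_train))    # near 1.0
print("test accuracy:", clf.score(x_test, y_test))       # near 0.0: it learned lighting
```

Nothing about tanks was learned; the model latched onto the cheapest predictive feature, which is the reviewer's point in miniature.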
Probably the most dangerous future risk is the advent of real artificial intelligence within our lifetime or soon after. Eliezer Yudkowsky is the leading spokesman on this risk and wrote the chapter covering it in this book. If those fears become reality, it won't matter much what else we get right. For many of the other risks, we already have a wealth of information on how they occur, how they work, how likely they are to affect us, and what they will do when they arrive. The risks surrounding the arrival of AI are far more dangerous in that this isn't an experiment we get to run first so that reality can beat us over the head with the correct answer. Achieving true FAI (Friendly Artificial Intelligence, as Yudkowsky calls it) will require a massive amount of dedication, money, and research effort to avoid a real disaster. If that aim is realized, however, many of our other risks and concerns can be handed off to an intelligence far greater than ourselves, with a much better chance of being overcome.
We are passing through a stage where we are beginning to create problems that are beyond our current capacity to solve. This book is probably the best general, and somewhat technical, primer for getting acquainted with the serious problems we currently face and will inevitably face in the future. If you are truly keen to get involved with the kinds of problems we will have to confront, this book is indispensable.
I may be wrong, but as the volume proceeded I gained the growing impression that the authors were increasingly unsure what meaningful things they could actually say about the topics they'd been assigned. Perhaps I'm being harsh, but I did not enjoy this read, nor did I learn much from it. A final pedantic note: for OUP, there are too many glitches and typos, evidently beneath the dignity of the young high-flying editors to bother with.