The Precipice: Existential Risk and the Future of Humanity Kindle Edition
Editorial Reviews
"A powerfully-argued book that alerts us to what is perhaps the most important-and yet also most neglected-problem we will ever face."―Peter Singer, author of Animal Liberation and The Life You Can Save
"The Precipice may be the Silent Spring that the futurists have been waiting for."―Los Angeles Review of Books
"The Precipice separates science from hype and will remain the definitive work on existential risk for a long time to come."―Max Tegmark, author of Life 3.0 and Our Mathematical Universe
"The Precipice is a fascinating book, one that showcases both the knowledge of its author and his humanity."―Bryan Walsh, Axios
"This book is a wake-up call to the existential threats of nuclear and biological weapons and the urgent need for action. A must-read that galvanizes us to play a role in addressing these risks."
―Angela Kane, former UN High Representative for Disarmament Affairs
"A fascinating and persuasive guide to the most important topic of all: how our species will survive the risks we pose to our continued existence."―Stuart Russell, author of Human Compatible and Artificial Intelligence: A Modern Approach
"Toby Ord is today's Carl Sagan. Clear and inspiring, this book leaves us hopeful for a flourishing human future."―Christine Peterson, co-founder of the Foresight Institute
"Splendid....The Precipice is a powerful book, written with a philosopher's eye for counterarguments so that he can meet them in advance. And Ord's love for humanity and hope for its future is infectious, as is his horrified wonder at how close we have come to destroying it."―The Spectator
"Many people have recently found that they want to read books offering the grandest perspectives possible on human existence, such as Sapiens . . . Toby Ord's new book is a startling and rigorous contribution to this genre that deserves to be just as widely read."―Evening Standard
"Ord's map of the existential risk landscape is an engaging read for anyone who wants to learn more about this important and interdisciplinary research."―Science
"Ord's analysis of the science is exemplary . . . Thrillingly written"―Sunday Times --This text refers to the hardcover edition.
- ASIN : B07V9GHKYP
- Publisher : Hachette Books; Illustrated edition (March 24, 2020)
- Publication date : March 24, 2020
- Language : English
- File size : 6515 KB
- Text-to-Speech : Enabled
- Enhanced typesetting : Enabled
- X-Ray : Enabled
- Word Wise : Enabled
- Print length : 480 pages
- Lending : Not Enabled
- Best Sellers Rank: #97,240 in Kindle Store
Top reviews from the United States
Somehow the book is simultaneously beautifully written, intellectually rigorous, and deeply researched, and is both "the one book to read" on its topic (existential risk) and filled with lots of novel content, even for someone like me who has been following this research area closely for many years. It also manages to be hopeful and inspiring despite its grim subject matter.
The book makes compelling arguments for several important conclusions, including:
- Mitigating existential risk should be a global priority according to many different value systems.
- Fortunately, natural risks such as asteroids are extremely unlikely to be existential in the course of a century.
- Anthropogenic risks such as climate change and nuclear weapons are much more worrying, but their odds of being a truly existential catastrophe in the next century probably vary by orders of magnitude.
- Some of the most valuable work for reducing existential risk is probably not directed at a specific existential risk, but at various "risk factors" such as great-power war, which may not be existential risks themselves but which increase the danger coming from other sources of existential risk.
I've been waiting for this book for years, and it does not disappoint. In fact, I've read it front to back 3 times already. Highly recommended.
(Full disclosure: I work for a funder of Toby's work but I work on other areas and haven't been involved in those grants.)
The book is food for thought but it's not useful in terms of short-term quality of life, which seems a much more pressing question.
Ord is a philosopher specializing in ethics at Oxford. His goal here is to systematically extend ethics from individual rights and responsibilities to encompass all of humanity and our civilization. Because of our technology and the impact of our industries, Ord argues, humanity stands at a crossroads that is different than any we have faced. In addition to natural threats such as asteroids or supernovae, science has enabled us to annihilate ourselves - by nuclear weapons, artificial intelligence, or bioweapons - and, of course, we have global warming and environmental degradation. We need, he argues, to decide how we are going to deal with these threats and set in motion the processes to do so.
In an effort to quantify these civilizational risks, Ord creates a kind of hierarchy of them based on scientific estimations of their likelihood. You can argue with his numbers – they appear to be based on Bayesian statistical methods, which incorporate subjective intuitions – but I found this a useful exercise to gain an initial take on the whole. By far the greatest risk in his estimation (as a former computer programmer, no less) is the creation of an AI that would act against a humanity it judged inferior, not as terminator robots but as a network manipulating humans and the environment from a wholly different, perhaps unimaginable, perspective. Such an AI, he believes, has a 1 in 10 chance of destroying our civilization or even humanity itself in the next 100 years. Farther back in his hierarchy are engineered pandemics (1 in 30), climate change (1 in 1,000) and comet impact (1 in a million), among other things. Taken together, he sees a 1 in 6 annihilation risk, which he judges as unacceptably high.
Aside from the threat of man's extinction, Ord never clearly defines what he means by civilization, or indeed its value, instead assuming that his reader will see his point. (The closest he comes is to argue that our "potential" would be "permanently lost.") This was problematic for me. There have been so many collapses of civilizations in history that I am skeptical we are at THE crucial and irreversible turning point. Today's threats might be on a larger scale than the fall of Rome, but are they qualitatively different?
Unfortunately, Ord's remedies sound rather academic, including research funding for his institute at Oxford and for other think tanks. He also suggests that intergovernmental institutions such as the WHO should be strengthened, and that governments should consider the formation of a world government. At this moment of populist nationalism, this kind of altruism appears unrealistic to me at best. I could be wrong, but if any new global survival movement is in the offing, it is inchoate.
The last part of the book is about what potential we might unleash, as if humanity is about to graduate from its infancy and go on to do even greater things. Here, he suggests we might explore the galaxy and even populate the entire universe over billions of years, as well as transform ourselves into superior beings with biotech, etc. This is all very nice – I would opt for establishing a United Federation of Planets myself – but it's just science fiction. It's worth a skim, I suppose.
This book is an ambitious inquiry into our long-term prospects. Ord is optimistic, yet approaches his subject with a hardened skepticism, and the details of his investigations are solid and unfailingly intelligent. The framework he sets up is a useful starting point for looking at these issues, if somewhat elementary at times. I am sure Ord was picturing the book as a staple of the classroom, perhaps as a standard summer-reading text for high school graduates headed for university, like Guns, Germs, and Steel. Of course, the book would interest many general readers as well, if not quite the academic specialists. Recommended.
If you are interested in learning more, but not ready to buy, you can search for the book review by Slate Star Codex or the podcast episode by 80,000 Hours.
Top reviews from other countries
What strikes me the most about the book is the incredibly smooth text, with earnest calls for people with 'keen intellect' to go forth and tackle these challenges, because the future could be great. It's clear that Toby Ord has poured so much into this book over the past few years, and the text remains calm, honest, persistent, and inspirational. I ordered five copies and will be sharing them with my friends and family.
Loads of academics could spend decades in conferences, throwing around impressive mathematics and concepts and philosophical and political ideas, only to be as subject to the pointlessness of fate as the builder who built the conference hall, or the cleaner who cleaned the seats before the conference. Of course we have to care about humanity and about the future, just as we have to care about the welfare of our own children. Of course we have to live as if things are going to go on and on, and try to make them better all the time. But that is just common sense and decency. It doesn't make it any better if we complicate any of it with equations identifying risk probabilities, or get all worked up about the world of our great-great-grandchildren, about which we know almost nothing. (Those equations are surely only given value if they relate to the real world, e.g. if they increase the safety of road or air travel vehicles.)
Perhaps I found the vision of the book simply too huge for normal human beings. The best thing to do with our expertise, intelligence, and expert opinions is to be pragmatic about solving current issues – climate change, political and economic conflicts, and the potential threats from artificial intelligence – with a focus on the next fifty years, and let the century after that take care of itself. All we can hope to leave behind is a good example and the fruits of our experience and knowledge.
Ethical philosophy about any future society is potentially rather silly. Imagine getting someone in a time machine from a mere 150 years ago and bringing them to today – when all of their ideas about women and foreigners and music and sexuality and conduct in society would be screamingly out of place and wildly intolerant to our eyes. How do you think that someone in 150 years might feel about, say, paedophilia? It's only a (really unfortunate) sexual orientation, isn't it, which you can't "help" any more than being gay, so criminalising it and being thoroughly disgusted by it and condemnatory of it – which is our current society's response, overwhelmed by the emotional reaction to the vulnerability of child victims – is surely going to be looked back upon as primitive. Hating and criminalising it is no more capable of solving resulting social/psychological/emotional problems than believing that it is in the nature of black slaves to be slaves, once a widely accepted attitude. So how disgusted might we be, from our time and our perspective, by the activities and fashions of the society of 3,000 years in the future? We might not even think they are worth saving the world for, with their genetically modified inhabitants of Mars and Titan with heaven-knows-what in the way of social and sexual practices, with their sex robots and their utterly incomprehensible music and their stupid religions and mad political set-ups. (I'm not saying they won't think their existences are just fine, thank you!)
So yes, he’s right that we’re on a Precipice, and that only we can do something about it. But that’s virtually nothing to do with what we might get up to when we become capable of settling other planets, for example, or how long the stars will last or how fast the universe is expanding (if it is – something else our great-great-grandchildren might have vastly different explanations for, looking back on our quaint ideas about string theory and parallel universes and dark energy, and taxation and social policies, and even perhaps on the threats from AI). There is much of interest in this book and you might have a completely different perspective on it from mine, but I think it could have been a damn sight shorter and more near-future focused, in order to make a greater impact. Not that I have an ounce of the influence that the author has, in academic and political circles.
It is, incidentally, one of the most infuriating books I have read – it was absolutely necessary to use two bookmarks, because half of the information, and half of the book, is in a gigantic footnotes section. There is therefore, if you want to appreciate what is being discussed, endless going back-and-forth, to take in long notes – these footnotes are rarely just a reference to something you can find in the bibliography. It would have made much better reading if many of the footnotes were simply incorporated into the main text, where, in my opinion, they really belong.
To return to what I said at the beginning, Toby Ord is clearly brilliant, and a great guy, but I was left with very mixed feelings about this book. If you want the big picture, and don’t mind also putting up with some waffle (rather speculative) about the probabilities, or if you want to support the future-humanity charities that the book’s royalties are all going to go to, buy the book. You could have done worse things in your life or with your money. But I don't think it will be one of the best books you ever read.
Ord makes his case in a systematic, persuasive and captivating way. Aside from that, the book is sprinkled with interesting facts. For example, did you know that, if all bees and all other pollinators were to disappear, this would only reduce global crop yields by 3–8%? Or that maximum-security biological research labs have accidentally released foot-and-mouth disease (twice, once leading to an outbreak), polio (45 kg of it), SARS, and the Black Death?
The book finishes with recommendations for how you can help reduce existential risks. My recommendation: read this book.
Toby Ord convincingly argues that the chances that humanity may go extinct in the near future are not that small, and that the topic deserves serious attention. In fact, I think the likelihood of dying in some global catastrophe is larger than, e.g., the risk of dying in a traffic accident. Taking this seriously could be quite a perspective-changing event.
Readable and thoroughly researched.