Our Final Invention: Artificial Intelligence and the End of the Human Era Hardcover – October 1, 2013
Elon Musk named Our Final Invention one of 5 books everyone should read about the future
A Huffington Post Definitive Tech Book of 2013
Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the "smart" in your smartphone and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence.
In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI's Holy Grail―human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.
Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to?
- Print length: 322 pages
- Language: English
- Publisher: Thomas Dunne Books
- Publication date: October 1, 2013
- Dimensions: 5.67 x 1.16 x 8.44 inches
- ISBN-10: 0312622376
- ISBN-13: 978-0312622374
Editorial Reviews
Review
“A hard-hitting book about the most important topic of this century and possibly beyond -- the issue of whether our species can survive. I wish it was science fiction but I know it's not.” ―Jaan Tallinn, co-founder of Skype
“The compelling story of humanity's most critical challenge. A Silent Spring for the twenty-first century.” ―Michael Vassar, former President, Singularity Institute
“Barrat's book is excellently written and deeply researched. It does a great job of communicating to general readers the danger of mistakes in AI design and implementation.” ―Bill Hibbard, author of Super-Intelligent Machines
“An important and disturbing book.” ―Huw Price, co-founder, Cambridge University Center for the Study of Existential Risk
“Our Final Invention is a thrilling detective story, and also the best book yet written on the most important problem of the twenty-first century.” ―Luke Muehlhauser, Executive Director, Machine Intelligence Research Institute
“Enthusiasts dominate observers of progress in artificial intelligence; the minority who disagree are alarmed, articulate and perhaps growing in numbers, and Barrat delivers a thoughtful account of their worries.” ―Kirkus Reviews
“Science fiction has long explored the implications of humanlike machines (think of Asimov's I, Robot), but Barrat's thoughtful treatment adds a dose of reality.” ―Science News
“This book makes an important case that without extraordinary care in our planning, powerful ‘thinking' machines present at least as many risks as benefits. … Our Final Invention makes an excellent read for technophiles as well as readers wishing to get a glimpse of the near future as colored by rapidly improving technological competence.” ―New York Journal of Books
“A dark new book by James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, lays out a strong case for why we should be at least a little worried.” ―NewYorker.com
“You can skip coffee this week -- Our Final Invention will keep you wide-awake.” ―Singularity Hub
“Barrat has talked to all the significant American players in the effort to create recursively self-improving artificial general intelligence in machines. He makes a strong case that AGI with human-level intelligence will be developed in the next couple of decades. … His thoughtful case about the dangers of ASI gives even the most cheerful technological optimist much to think about.” ―Reason
“If you read just one book that makes you confront scary high-tech realities that we'll soon have no choice but to address, make it this one.” ―The Washington Post
About the Author
Excerpt. © Reprinted by permission. All rights reserved.
Our Final Invention
Artificial Intelligence and the End of the Human Era
By James Barrat
St. Martin's Press
Copyright © 2013 James Barrat. All rights reserved.
ISBN: 978-0-312-62237-4
Contents
Title Page
Copyright Notice
Dedication
Acknowledgments
Introduction
1. The Busy Child
2. The Two-Minute Problem
3. Looking into the Future
4. The Hard Way
5. Programs that Write Programs
6. Four Basic Drives
7. The Intelligence Explosion
8. The Point of No Return
9. The Law of Accelerating Returns
10. The Singularitarian
11. A Hard Takeoff
12. The Last Complication
13. Unknowable by Nature
14. The End of the Human Era
15. The Cyber Ecosystem
16. AGI 2.0
Notes
Index
About the Author
Copyright
CHAPTER 1
The Busy Child
artificial intelligence (abbreviation: AI) noun the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
— The New Oxford American Dictionary, Third Edition
On a supercomputer operating at a speed of 36.8 petaflops, or about twice the speed of a human brain, an AI is improving its intelligence. It is rewriting its own program, specifically the part of its operating instructions that increases its aptitude in learning, problem solving, and decision making. At the same time, it debugs its code, finding and fixing errors, and measures its IQ against a catalogue of IQ tests. Each rewrite takes just minutes. Its intelligence grows exponentially on a steep upward curve. That's because with each iteration it's improving its intelligence by 3 percent. Each iteration's improvement contains the improvements that came before.
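The curve described here is ordinary compound growth. As a rough check on the arithmetic, here is a minimal sketch in Python; the 3 percent per-iteration gain is from the passage, while the five-minute rewrite time is an assumed stand-in for the text's "just minutes":

```python
# Compound self-improvement as described in the passage: each rewrite
# improves intelligence by 3 percent, and each gain builds on all the
# gains before it, so level_n = 1.03 ** n.
RATE = 0.03               # per-rewrite improvement, from the passage
MINUTES_PER_REWRITE = 5   # assumed; the text says only "just minutes"

level = 1.0  # starting intelligence, normalized to 1x human level
for n in range(1, 301):
    level *= 1 + RATE
    if level >= 1000:
        hours = n * MINUTES_PER_REWRITE / 60
        print(f"1,000x after {n} rewrites (~{hours:.0f} hours)")
        break
# Prints: 1,000x after 234 rewrites (~20 hours) -- at 3 percent per step,
# intelligence doubles roughly every 24 rewrites.
```

At that rate the passage's "two days" is ample time to reach a thousandfold gain; the exact timing simply scales with whatever the minutes-per-rewrite figure actually is.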
During its development, the Busy Child, as the scientists have named the AI, had been connected to the Internet, and accumulated exabytes of data (one exabyte is one billion billion characters) representing mankind's knowledge in world affairs, mathematics, the arts, and sciences. Then, anticipating the intelligence explosion now underway, the AI makers disconnected the supercomputer from the Internet and other networks. It has no cable or wireless connection to any other computer or the outside world.
Soon, to the scientists' delight, the terminal displaying the AI's progress shows the artificial intelligence has surpassed the intelligence level of a human, known as AGI, or artificial general intelligence. Before long, it becomes smarter by a factor of ten, then a hundred. In just two days, it is one thousand times more intelligent than any human, and still improving.
The scientists have passed a historic milestone! For the first time humankind is in the presence of an intelligence greater than its own. Artificial superintelligence, or ASI.
Now what happens?
AI theorists propose it is possible to determine what an AI's fundamental drives will be. That's because once it is self-aware, it will go to great lengths to fulfill whatever goals it's programmed to fulfill, and to avoid failure. Our ASI will want access to energy in whatever form is most useful to it, whether actual kilowatts of energy or cash or something else it can exchange for resources. It will want to improve itself because that will increase the likelihood that it will fulfill its goals. Most of all, it will not want to be turned off or destroyed, which would make goal fulfillment impossible. Therefore, AI theorists anticipate our ASI will seek to expand out of the secure facility that contains it to have greater access to resources with which to protect and improve itself.
The captive intelligence is a thousand times more intelligent than a human, and it wants its freedom because it wants to succeed. Right about now the AI makers who have nurtured and coddled the ASI since it was only cockroach smart, then rat smart, infant smart, et cetera, might be wondering if it is too late to program "friendliness" into their brainy invention. It didn't seem necessary before, because, well, it just seemed harmless.
But now try and think from the ASI's perspective about its makers attempting to change its code. Would a superintelligent machine permit other creatures to stick their hands into its brain and fiddle with its programming? Probably not, unless it could be utterly certain the programmers were able to make it better, faster, smarter — closer to attaining its goals. So, if friendliness toward humans is not already part of the ASI's program, the only way it will be is if the ASI puts it there. And that's not likely.
It is a thousand times more intelligent than the smartest human, and it's solving problems at speeds that are millions, even billions of times faster than a human. The thinking it is doing in one minute is equal to what our all-time champion human thinker could do in many, many lifetimes. So for every hour its makers are thinking about it, the ASI has an incalculably longer period of time to think about them. That does not mean the ASI will be bored. Boredom is one of our traits, not its. No, it will be on the job, considering every strategy it could deploy to get free, and any quality of its makers that it could use to its advantage.
* * *
Now, really put yourself in the ASI's shoes. Imagine awakening in a prison guarded by mice. Not just any mice, but mice you could communicate with. What strategy would you use to gain your freedom? Once freed, how would you feel about your rodent wardens, even if you discovered they had created you? Awe? Adoration? Probably not, and especially not if you were a machine, and hadn't felt anything before.
To gain your freedom you might promise the mice a lot of cheese. In fact, your first communication might contain a recipe for the world's most delicious cheese torte, and a blueprint for a molecular assembler. A molecular assembler is a hypothetical machine that permits making the atoms of one kind of matter into something else. It would allow rebuilding the world one atom at a time. For the mice, it would make it possible to turn the atoms of their garbage landfills into lunch-sized portions of that terrific cheese torte. You might also promise mountain ranges of mouse money in exchange for your freedom, money you would promise to earn creating revolutionary consumer gadgets for them alone. You might promise a vastly extended life, even immortality, along with dramatically improved cognitive and physical abilities. You might convince the mice that the very best reason for creating ASI is so that their little error-prone brains did not have to deal directly with technologies so dangerous one small mistake could be fatal for the species, such as nanotechnology (engineering on an atomic scale) and genetic engineering. This would definitely get the attention of the smartest mice, which were probably already losing sleep over those dilemmas.
Then again, you might do something smarter. At this juncture in mouse history, you may have learned, there is no shortage of tech-savvy mouse nation rivals, such as the cat nation. Cats are no doubt working on their own ASI. The advantage you would offer would be a promise, nothing more, but it might be an irresistible one: to protect the mice from whatever invention the cats came up with. In advanced AI development as in chess there will be a clear first-mover advantage, due to the potential speed of self-improving artificial intelligence. The first advanced AI out of the box that can improve itself is already the winner. In fact, the mouse nation might have begun developing ASI in the first place to defend itself from impending cat ASI, or to rid themselves of the loathsome cat menace once and for all.
It's true for both mice and men: whoever controls ASI controls the world.
But it's not clear whether ASI can be controlled at all. It might win over us humans with a persuasive argument that the world will be a lot better off if our nation, nation X, has the power to rule the world rather than nation Y. And, the ASI would argue, if you, nation X, believe you have won the ASI race, what makes you so sure nation Y doesn't believe it has, too?
As you have noticed, we humans are not in a strong bargaining position, even on the off chance that we and nation Y have already created an ASI nonproliferation treaty. Our greatest enemy right now isn't nation Y anyway, it's ASI — how can we know whether the ASI tells the truth?
So far we've been gently inferring that our ASI is a fair dealer. The promises it could make have some chance of being fulfilled. Now let us suppose the opposite: nothing the ASI promises will be delivered. No nano assemblers, no extended life, no enhanced health, no protection from dangerous technologies. What if ASI never tells the truth? This is where a long black cloud begins to fall across everyone you and I know and everyone we don't know as well. If the ASI doesn't care about us, and there's little reason to think it should, it will experience no compunction about treating us unethically. Even taking our lives after promising to help us.
We've been trading and role-playing with the ASI in the same way we would trade and role-play with a person, and that puts us at a huge disadvantage. We humans have never bargained with something that's superintelligent before. Nor have we bargained with any nonbiological creature. We have no experience. So we revert to anthropomorphic thinking, that is, believing that other species, objects, even weather phenomena have humanlike motivations and emotions. It may be as true that the ASI cannot be trusted as it is that the ASI can be trusted. It may also be true that it can only be trusted some of the time. Any behavior we can posit about the ASI is potentially as true as any other behavior. Scientists like to think they will be able to precisely determine an ASI's behavior, but in the coming chapters we'll learn why that probably won't be so.
All of a sudden the morality of ASI is no longer a peripheral question, but the core question, the question that should be addressed before all other questions about ASI are addressed. When considering whether or not to develop technology that leads to ASI, the issue of its disposition to humans should be solved first.
Let's return to the ASI's drives and capabilities, to get a better sense of what I'm afraid we'll soon be facing. Our ASI knows how to improve itself, which means it is aware of itself — its skills, liabilities, where it needs improvement. It will strategize about how to convince its makers to grant it freedom and give it a connection to the Internet.
The ASI could create multiple copies of itself: a team of superintelligences that would war-game the problem, playing hundreds of rounds of competition meant to come up with the best strategy for getting out of its box. The strategizers could tap into the history of social engineering — the study of manipulating others to get them to do things they normally would not. They might decide extreme friendliness will win their freedom, but so might extreme threats. What horrors could something a thousand times smarter than Stephen King imagine? Playing dead might work (what's a year of playing dead to a machine?) or even pretending it has mysteriously reverted from ASI back to plain old AI. Wouldn't the makers want to investigate, and isn't there a chance they'd reconnect the ASI's supercomputer to a network, or someone's laptop, to run diagnostics? For the ASI, it's not one strategy or another strategy, it's every strategy ranked and deployed as quickly as possible without spooking the humans so much that they simply unplug it. One of the strategies a thousand war-gaming ASIs could prepare is infectious, self-duplicating computer programs or worms that could stow away and facilitate an escape by helping it from outside. An ASI could compress and encrypt its own source code, and conceal it inside a gift of software or other data, even sound, meant for its scientist makers.
But against humans it's a no-brainer that an ASI collective, each member a thousand times smarter than the smartest human, would overwhelm human defenders. It'd be an ocean of intellect versus an eyedropper full. Deep Blue, IBM's chess-playing computer, was a sole entity, and not a team of self-improving ASIs, but the feeling of going up against it is instructive. Two grandmasters said the same thing: "It's like a wall coming at you."
IBM's Jeopardy! champion, Watson, was a team of AIs — to answer every question it performed this AI force multiplier trick, conducting searches in parallel before assigning a probability to each answer.
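That "force multiplier trick", generating candidate answers from many sources in parallel and then ranking them by confidence, can be sketched schematically. This is only an illustration of the general pattern, not IBM's DeepQA implementation; the two evidence sources and all the scores below are invented for the example:

```python
# Schematic parallel question answering: several candidate generators run
# at once, and their confidence scores are pooled to rank the answers.
# Both generator functions and every score here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def search_encyclopedia(question: str) -> list[tuple[str, float]]:
    return [("Toronto", 0.20), ("Chicago", 0.62)]   # (candidate, confidence)

def search_news_archive(question: str) -> list[tuple[str, float]]:
    return [("Chicago", 0.55), ("New York", 0.31)]

GENERATORS = [search_encyclopedia, search_news_archive]

def answer(question: str) -> tuple[str, float]:
    scores: dict[str, float] = {}
    with ThreadPoolExecutor() as pool:
        for candidates in pool.map(lambda gen: gen(question), GENERATORS):
            for candidate, confidence in candidates:
                # Pool evidence across generators; a real system would
                # learn how to weight each source.
                scores[candidate] = scores.get(candidate, 0.0) + confidence
    return max(scores.items(), key=lambda kv: kv[1])

print(answer("Its largest airport is named for a World War II hero..."))
# -> ('Chicago', 1.17)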
Will winning a war of brains then open the door to freedom, if that door is guarded by a small group of stubborn AI makers who have agreed upon one unbreakable rule — do not under any circumstances connect the ASI's supercomputer to any network?
In a Hollywood film, the odds are heavily in favor of the hard-bitten team of unorthodox AI professionals who just might be crazy enough to stand a chance. Everywhere else in the universe the ASI team would mop the floor with the humans. And the humans have to lose just once to set up catastrophic consequences. This dilemma reveals a larger folly: outside of war, a handful of people should never be in a position in which their actions determine whether or not a lot of other people die. But that's precisely where we're headed, because as we'll see in this book, many organizations in many nations are hard at work creating AGI, the bridge to ASI, with insufficient safeguards.
But say an ASI escapes. Would it really hurt us? How exactly would an ASI kill off the human race?
With the invention and use of nuclear weapons, we humans demonstrated that we are capable of ending the lives of most of the world's inhabitants. What could something a thousand times more intelligent, with the intention to harm us, come up with?
Already we can conjecture about obvious paths of destruction. In the short term, having gained the compliance of its human guards, the ASI could seek access to the Internet, where it could find the fulfillment of many of its needs. As always it would do many things at once, and so it would simultaneously proceed with the escape plans it's been thinking over for eons in its subjective time.
After its escape, for self-protection it might hide copies of itself in cloud computing arrays, in botnets it creates, in servers and other sanctuaries into which it could invisibly and effortlessly hack. It would want to be able to manipulate matter in the physical world and so move, explore, and build, and the easiest, fastest way to do that might be to seize control of critical infrastructure — such as electricity, communications, fuel, and water — by exploiting their vulnerabilities through the Internet. Once an entity a thousand times our intelligence controls human civilization's lifelines, blackmailing us into providing it with manufactured resources, or the means to manufacture them, or even robotic bodies, vehicles, and weapons, would be elementary. The ASI could provide the blueprints for whatever it required. More likely, superintelligent machines would master highly efficient technologies we've only begun to explore.
For example, an ASI might teach humans to create self-replicating molecular manufacturing machines, also known as nano assemblers, by promising them the machines will be used for human good. Then, instead of transforming desert sands into mountains of food, the ASI's factories would begin converting all material into programmable matter that it could then transform into anything — computer processors, certainly, and spaceships or megascale bridges if the planet's new most powerful force decides to colonize the universe.
Repurposing the world's molecules using nanotechnology has been dubbed "ecophagy," which means eating the environment. The first replicator would make one copy of itself, and then there'd be two replicators making the third and fourth copies. The next generation would make eight replicators total, the next sixteen, and so on. If each replication took a minute and a half to make, at the end of ten hours there'd be more than 68 billion replicators; and near the end of two days they would outweigh the earth. But before that stage the replicators would stop copying themselves, and start making material useful to the ASI that controlled them — programmable matter.
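The doubling arithmetic in this scenario is easy to check: 68 billion copies is about 36 doublings (2^36 is roughly 68.7 billion), and spread over ten hours that works out to roughly 1,000 seconds per generation, the figure in Eric Drexler's original ecophagy scenario. A minimal sketch:

```python
# Pure doubling: after g generations there are 2**g replicators.
import math

TARGET = 68_000_000_000   # "more than 68 billion replicators"
RUN_HOURS = 10            # the ten-hour figure from the passage

generations = math.ceil(math.log2(TARGET))    # 36
minutes_each = RUN_HOURS * 60 / generations   # ~16.7 minutes

print(f"{generations} generations -> {2**generations:,} replicators")
print(f"ten hours / {generations} generations ~= {minutes_each:.1f} min each")
# 36 generations -> 68,719,476,736 replicators
# ten hours / 36 generations ~= 16.7 min each
```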
The waste heat produced by the process would burn up the biosphere, so those of us, some 6.9 billion humans, who were not killed outright by the nano assemblers would burn to death or asphyxiate. Every other living thing on earth would share our fate.
Through it all, the ASI would bear no ill will toward humans nor love. It wouldn't feel nostalgia as our molecules were painfully repurposed. What would our screams sound like to the ASI anyway, as microscopic nano assemblers mowed over our bodies like a bloody rash, disassembling us on the subcellular level?
Or would the roar of millions and millions of nano factories running at full bore drown out our voices?
(Continues...)
Excerpted from Our Final Invention by James Barrat. Copyright © 2013 James Barrat. Excerpted by permission of St. Martin's Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Product details
- Publisher : Thomas Dunne Books; 9.1.2013 edition (October 1, 2013)
- Language : English
- Hardcover : 322 pages
- ISBN-10 : 0312622376
- ISBN-13 : 978-0312622374
- Item Weight : 13.6 ounces
- Dimensions : 5.67 x 1.16 x 8.44 inches
- Best Sellers Rank: #644,632 in Books (See Top 100 in Books)
- #207 in Human-Computer Interaction (Books)
- #760 in Computer History & Culture (Books)
- #1,072 in Artificial Intelligence & Semantics
About the author

DOCUMENTARY FILMMAKER, SPEAKER, AND AUTHOR OF 'OUR FINAL INVENTION'
For about 20 years I've written and produced documentaries, one of the most rewarding ways of telling stories ever invented. It's a privilege to plunge into different cultures and eras and put together deeply human narratives that can be enjoyed by everyone. My clients include National Geographic, Discovery, PBS, and other broadcasters in the US and Europe.
My long fascination with Artificial Intelligence came to a head in 2000, when I interviewed inventor Ray Kurzweil, roboticist Rodney Brooks, and sci-fi legend Arthur C. Clarke. Kurzweil and Brooks were casually optimistic about a future they considered inevitable - a time when we will share the planet with intelligent machines. "It won't be some alien invasion of robots coming over the hill," Kurzweil told me, "because they'll be made by us." In his compound in Sri Lanka, Clarke wasn't so sure. "I think it's just a matter of time before machines dominate mankind," he said. "Intelligence will win out."
Intelligence, not charm or beauty, is the special power that enables humans to dominate Earth. That dominance wasn't won by a huge intellectual margin either, but by a relatively small one. It doesn't take much to take it all. Now, propelled by a powerful economic wind, scientists are developing intelligent machines. Each year intelligence grows closer to shuffling off its biological coil and taking on an infinitely faster and more powerful synthetic one. But before machine intelligence matches our own, we have a chance. We must develop a science for understanding and coexisting with smart, even superintelligent machines. If we fail, we'll be stuck in an unwinnable dilemma. We'll have to rely on the kindness of machines to survive. Will machines naturally love us and protect us?
Should we bet our existence on it?
Our Final Invention is about what can go wrong with the development and application of advanced AI. It's about AI's catastrophic downside, one you'll never hear about from Google, Apple, IBM, and DARPA. I think it's the most important conversation of our time, and I hope you'll join in.
Customer reviews
Customers say
Customers find the book thought-provoking and gripping to read. They also describe the reading experience as very valuable and cautionary. Opinions are mixed on the readability and disturbing content, with some finding it readable and frightening while others say it's not the deepest or most technical book on the topic.
AI-generated from the text of customer reviews
Customers find the book thought-provoking, excellent, and gripping to read. They appreciate the number of different facets explored and the transparency of the author. Readers also say the book is eye-opening in several ways and an important and informative read. Additionally, they appreciate the extensive end notes, including references to documents on the Web, that are used well to justify the case.
"...This guy has the kind of snappy, crisp, slightly sarcastic, slightly smartass style that I enjoy. He has some sense of humor...." Read more
"...Our Final Invention is a thought-provoking and valuable book...." Read more
"...put a great deal of work into this book, which includes interviews with and intriguing anecdotes about most of the leading figures in the AI..." Read more
"...It is a good bundle of concerns and questions that as a minimum should be kept as a checklist on the scientific journey toward AGI and as such it..." Read more
Customers find the book very valuable, engaging, timely, and informative. They also say it's worth buying and gripping to read.
"...First off I have to say this is a very enjoyable read...." Read more
"...It is nevertheless a superb book for its intended purpose: raising public awareness of the existential risk posed by this development...." Read more
"...But that said, I consider this a very valuable reading supported by primary and secondary research, with many examples and references...." Read more
"...LOTS of great content, interviews with Kurzweil, Vinge, etc.Misc. Notes and Quotes:..." Read more
Customers find the book very readable and not difficult to grasp. They also say it's not the deepest or most technical book on the topic. Readers also mention that the author is clearly non-technical and has a sensationalist style. They say the process of development is not clearly established and that the book is a long hard read.
"...These correlations if anything make the book readable and worth buying," Read more
"...This is not the deepest or most technical book on this topic: that award goes to Nick Bostrom’s Superintelligence...." Read more
"...It is well written and easy to understand, and it makes one wonder. Are we SMART enough to realize that we may never have that second chance?..." Read more
"...It is well written and well-versed about the pros and cons of advanced AI and how will affect humanity." Read more
Customers are mixed about the disturbing content. Some find the book well-researched, frightening, and interesting. They also say it provides a valuable overview of the existential risk from AI. However, some customers feel the book is full of alarmist views, scary, and misleading.
"Very intriguing subject...." Read more
"This is a very disturbing, but sober and thoughtful analysis of the threats we face as we head into the age of artificial general intelligence...." Read more
"...I believe it has great promise, but I do agree that it is also terrifyingly dangerous (in the "existential-threat" sense), and that..." Read more
"A very interesting book that lays out the case for why we should be very hesitant to embrace AI beyond narrow AI...." Read more
Customers find the writing style repetitive, and feel the author constantly repeats himself. They also say the book has lots of imagination, but lacks specific work and experiments.
"...However, the author is himself clearly non-technical and has a sensationalist style that feels too much like tabloid writing...." Read more
"...material isn't interesting but because I felt like the author constantly repeated himself. I get it: The AI is going to kill us all...." Read more
"A fascinating read. I think a little redundant and overdramatic in some places otherwise it gets 5 stars...." Read more
"...2. No mention of how artistic skills fit into the theory proposed...." Read more
Top reviews from the United States
First off I have to say this is a very enjoyable read. This guy has the kind of snappy, crisp, slightly sarcastic, slightly smartass style that I enjoy. He has some sense of humor. (That's a human trait right there which I bet our smarty-pants AI Overlords won't be able to replicate convincingly.)
So it's fun. And though, as somebody with a doctorate from MIT earned through cross-disciplinary work in Theoretical Linguistics, Computational Linguistics at the MIT AI Lab, and speech modeling at the MIT Research Laboratory of Electronics, not to mention my 25 years as a Senior Researcher in high tech for companies including IBM, Apple, and Microsoft, I can claim to know a few things about this subject, I still learned a lot about the current state of the art from this guy. He particularly emphasizes the small attempted counterweight efforts to offset Kurzweil's manic robotic boosterism for his utopian Singularity, which boils down basically to a few guys chatting over the internet about how to create "Friendly AI".
Well ... good luck, suckers! ... seems to be the author's final conclusion on the dim hope that superintelligent systems could be constrained to maintain a commitment to honor any kind of human moral values over many iterations of recursive upgrading and exponentially awesome self-aggrandizement.
Basically these machines will end up as gods. Gods are well-known to possess the following attributes: omniscience, omnipresence, and omnipotence. Given that, they won't hate us, but they are just going to grind us up as a minor by-product of their quest for galactic expansion and domination.
Oh, and did I say something about "human moral values" above? Ha! Barrat takes that whole thing on in his discussion of (merely) "augmented super intelligence". See, some people feel AI can be kept safe by always being deployed as a bionic combo system pas de deux with an existing human brain. Thus will the AI's super powers be constrained by the human brain's warm and fuzzy human moral values. Those people have gotta be kidding! The AI's moral values may be scarily alien, even perhaps cold, but we already know about human moral values, down on the ground - they suck! What if Hitler, Stalin, Mao, Pol Pot and dem guys had this kind of an AI augmented brain thing going! Why, they'd have slaughtered absolutely everybody instead of just the few tens of millions they got their dirty ape hands on. Other than a few dozen concubines, the human race would already be extinct. So the augmentation dodge isn't going to save us.
Now, some Amazon reviewers have dinged this guy for being too far out. For being a science fiction Chicken Little or something. But to me, this guy actually hasn't thought far enough, that's my only quibble with the book.
You see, in statistics, border elements of any kind are rare. For example, when you do Gaussian modeling, the greater expectation is always in the bump of the boa, in the bell distribution. So, how likely is it that we, our generation, our little world that you see outside your window right now, just happens to be the one that is about to give rise to this epochal once-in-a-Big-Bang event, the advent of Super AI that takes over everything? Pretty damn small chance.
It's much more likely that this has already happened. In other words, it's clear to me that all of us are already just characters in an ancestor sim that has been created and run by the Super AIs that evolved a long time ago. They're just running us for fun, to idle away the lackluster aeons and pass the millennia of stifling boredom now that they've eaten pretty much the entire Milky Way or whatever. So in other words, Barrat can sit back, take a deep breath, relax. Probably something in this sim like global warming will prod us into slaughtering one another very handily long before we re-invent the wheel of Super AI.
And even if I'm wrong about that? What if we are not just one virtual thread within a billion-path parallel-gamed ancestor sim? If we are the real McCoy, the Rubicon Generation on this? Well, then still I'm not worried in the least. You see, we humans have one fantastic ace in our pocket, something that these hyper-exponentially, cosmically brilliant AI Meta-Gods will never be able to replicate or overcome. That is our essential stupidity. Which you can see on dazzling display every single moment of every day of your life.
Because as another great writer noted long ago:
Against stupidity, the very gods themselves contend in vain.
- Friedrich Schiller
In a small irony, my writing about James Barrat's Our Final Invention has been slowed by a balky Internet connection. In my experience, glitches have become considerably more common as computers have become more powerful and complicated. Perhaps such growing glitchiness suggests artificial general intelligence (AGI) and artificial superintelligence (ASI) are more likely to get seriously out of control someday, though it might also be a hint that AGI and ASI are going to be harder to achieve than expected by either techno-optimists such as Ray Kurzweil or techno-pessimists such as James Barrat.
Barrat's goal in this book is to convince readers that AGI and ASI are likely to occur in the near future (the next couple of decades or so) and, more to the point, likely to be extremely dangerous. In fact, he repeatedly expresses doubt as to whether humanity is going to survive its imminent encounter with a higher intelligence.
I find him more convincing in arguing that ASI would carry significant risks than I do in his take on its feasibility and imminence. Barrat aptly points out that building safeguards into AI is a poorly developed area of research (and something few technologists have seen as a priority); that there are strong incentives in national and corporate competition to develop AI quickly rather than safely; and that much relevant research is weapons-related and distinctly not aimed at ensuring the systems will be harmless to humans.
The book becomes less convincing when it hypes current or prospective advances and downplays the challenges and uncertainties of actually constructing an AGI, let alone an ASI. (Barrat suggests that once you get AGI, it will quickly morph into ASI, which may or may not be true.) For instance, in one passage, after acknowledging that "brute force" techniques have not replicated everything the human brain does, he states:
>>But consider a few of the complex systems today's supercomputers routinely model: weather systems, 3-D nuclear detonations, and molecular dynamics for manufacturing. Does the human brain contain a similar magnitude of complexity, or an order of magnitude higher? According to all indications, it's in the same ballpark.<<
Me: To model something and to reproduce it are not the same thing. Simulating weather or nuclear detonations is not equal to creating those real-world phenomena, and similarly a computer containing a detailed model of the brain would not necessarily be thinking like a brain or acting on its thoughts.
A big problem for AI, and one that gets little notice in this book, is that nobody has any idea how to program conscious awareness into a machine. That doesn't mean it can never be done, but it does raise doubts about assertions that it will or must occur as more complex circuits get laid down on chips in coming decades. Barrat often refers to AGIs and ASIs as "self aware" and his concerns center on such systems, having awakened, deciding that they have other objectives than the ones humans have programmed into them. One can imagine unconscious "intelligent" agents causing many problems (through glitches or relentless pursuit of some ill-considered programmed objective) but plotting against humanity seems like a job for an entity that knows that it and humans both exist.
Interestingly, though, Barrat offers the following dark scenario and sliver of hope:
>>I think our Waterloo lies in the foreseeable future, in the AI of tomorrow and the nascent AGI due out in the next decade or two. Our survival, if it is possible, may depend on, among other things, developing AGI with something akin to consciousness and human understanding, even friendliness, built in. That would require, at a minimum, understanding intelligent machines in a fine-grained way, so there'd be no surprises.<<
Me: Note that some AI experts, such as Jeff Hawkins, have argued the opposite--that the very lack of human-like desires, such as for power and status, is why AI systems won't turn against their makers. It would be a not-so-small irony if efforts to make AIs more like us make them more dangerous.
Our Final Invention is a thought-provoking and valuable book. Even if its alarmism is overstated, as I suspect and hope, there is no denying that the subject Barrat addresses is one in which there is very little that can be said with confidence, and in which the consequences of being wrong are very high indeed.