Our Final Invention: Artificial Intelligence and the End of the Human Era (Hardcover – October 1, 2013)
Elon Musk named Our Final Invention one of 5 books everyone should read about the future
A Huffington Post Definitive Tech Book of 2013
Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the "smart" in your smartphone and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence.
In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI's Holy Grail―human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.
Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to?
- Print length: 322 pages
- Language: English
- Publisher: Thomas Dunne Books
- Publication date: October 1, 2013
- Dimensions: 5.67 x 1.16 x 8.44 inches
- ISBN-10: 0312622376
- ISBN-13: 978-0312622374
Customers who bought this item also bought
Life 3.0: Being Human in the Age of Artificial Intelligence (Paperback)

Customer reviews
Customer Reviews, including Product Star Ratings, help customers learn more about the product and decide whether it is the right product for them.
To calculate the overall star rating and percentage breakdown by star, we don't use a simple average. Instead, our system considers things like how recent a review is and whether the reviewer bought the item on Amazon. It also analyzes reviews to verify trustworthiness.
Learn more about how customer reviews work on Amazon.
Customers say
Customers find the book thought-provoking and gripping to read. They also describe the reading experience as very valuable and cautionary. Opinions are mixed on the readability and disturbing content, with some finding it readable and frightening while others say it's not the deepest or most technical book on the topic.
AI-generated from the text of customer reviews
Customers find the book thought-provoking, excellent, and gripping to read. They appreciate the number of different facets explored and the transparency of the author. Readers also say the book is eye-opening in several ways and an important and informative read. Additionally, they appreciate the extensive end notes, including references to documents on the Web, that are used well to justify the case.
"...This guy has the kind of snappy, crisp, slightly sarcastic, slightly smartass style that I enjoy. He has some sense of humor...." Read more
"...Our Final Invention is a thought-provoking and valuable book...." Read more
"...put a great deal of work into this book, which includes interviews with and intriguing anecdotes about most of the leading figures in the AI..." Read more
"...It is a good bundle of concerns and questions that as a minimum should be kept as a checklist on the scientific journey toward AGI and as such it..." Read more
Customers find the book very valuable, engaging, timely, and informative. They also say it's worth buying and gripping to read.
"...First off I have to say this is a very enjoyable read...." Read more
"...It is nevertheless a superb book for its intended purpose: raising public awareness of the existential risk posed by this development...." Read more
"...But that said, I consider this a very valuable reading supported by primary and secondary research, with many examples and references...." Read more
"...LOTS of great content, interviews with Kurzweil, Vinge, etc.Misc. Notes and Quotes:..." Read more
Customers find the book very readable and not difficult to grasp. They also say it's not the deepest or most technical book on the topic. Readers also mention that the author is clearly non-technical and has a sensationalist style. They say the process of development is not clearly established and that the book is a long hard read.
"...These correlations if anything make the book readable and worth buying," Read more
"...This is not the deepest or most technical book on this topic: that award goes to Nick Bostrom’s Superintelligence...." Read more
"...It is well written and easy to understand, and it makes one wonder. Are we SMART enough to realize that we may never have that second chance?..." Read more
"...It is well written and well-versed about the pros and cons of advanced AI and how will affect humanity." Read more
Customers are mixed about the disturbing content. Some find the book well-researched, frightening, and interesting. They also say it provides a valuable overview of the existential risk from AI. However, some customers feel the book is full of alarmist views, scary, and misleading.
"Very intriguing subject...." Read more
"This is a very disturbing, but sober and thoughtful analysis of the threats we face as we head into the age of artificial general intelligence...." Read more
"...I believe it has great promise, but I do agree that it is also terrifyingly dangerous (in the "existential-threat" sense), and that..." Read more
"A very interesting book that lays out the case for why we should be very hesitant to embrace AI beyond narrow AI...." Read more
Customers find the writing style repetitive, and feel the author constantly repeats himself. They also say the book has lots of imagination, but lacks specific work and experiments.
"...However, the author is himself clearly non-technical and has a sensationalist style that feels too much like tabloid writing...." Read more
"...material isn't interesting but because I felt like the author constantly repeated himself. I get it: The AI is going to kill us all...." Read more
"A fascinating read. I think a little redundant and overdramatic in some places otherwise it gets 5 stars...." Read more
"...2. No mention of how artistic skills fit into the theory proposed...." Read more
Top reviews
Top reviews from the United States
First off I have to say this is a very enjoyable read. This guy has the kind of snappy, crisp, slightly sarcastic, slightly smartass style that I enjoy. He has some sense of humor. (That's a human trait right there which I bet our smarty-pants AI Overlords won't be able to replicate convincingly.)
So it's fun. And though, as somebody with a doctorate from MIT earned through cross-disciplinary work in Theoretical Linguistics, Computational Linguistics at the MIT AI Lab, and speech modeling at the MIT Research Laboratory of Electronics, not to mention my 25 years as a Senior Researcher in high tech for companies including IBM, Apple, and Microsoft, I can claim to know a few things about this subject, I still learned a lot about the current state of the art from this guy. He particularly emphasizes the small attempted counterweight efforts to offset Kurzweil's manic robotic boosterism for his utopian Singularity, which boil down basically to a few guys chatting over the internet about how to create "Friendly AI".
Well ... good luck, suckers! ... seems to be the author's final conclusion on the dim hope that superintelligent systems could be constrained to maintain a commitment to honor any kind of human moral values over many iterations of recursive upgrading and exponentially awesome self-aggrandizement.
Basically these machines will end up as gods. Gods are well known to possess the following attributes: omniscience, omnipresence, and omnipotence. Given that, they won't hate us, but they are just going to grind us up as a minor by-product of their quest for galactic expansion and domination.
Oh, and did I say something about "human moral values" above? Ha! Barrat takes that whole thing on in his discussion of (merely) "augmented super intelligence". See, some people feel AI can be kept safe by always being deployed as a bionic combo system pas de deux with an existing human brain. Thus will the AI's super powers be constrained by the human brain's warm and fuzzy human moral values. Those people have gotta be kidding! The AI's moral values may be scarily alien, even perhaps cold, but we already know about human moral values, down on the ground - they suck! What if Hitler, Stalin, Mao, Pol Pot and dem guys had this kind of an AI-augmented brain thing going? Why, they'd have slaughtered absolutely everybody instead of just the few tens of millions they got their dirty ape hands on. Other than a few dozen concubines, the human race would already be extinct. So the augmentation dodge isn't going to save us.
Now, some Amazon reviewers have dinged this guy for being too far out. For being a science fiction Chicken Little or something. But to me, this guy actually hasn't thought far enough; that's my only quibble with the book.
You see, in statistics, border elements of any kind are rare. For example, when you do Gaussian modeling, the greater expectation is always in the bump of the boa, in the bell of the distribution. So, how likely is it that we, our generation, our little world that you see outside your window right now, just happens to be the one that is about to give rise to this epochal once-in-a-Big-Bang event, the advent of Super AI that takes over everything? Pretty damn small chance.
It's much more likely that this has already happened. In other words, it's clear to me that all of us are already just characters in an ancestor sim that has been created and run by the Super AIs that evolved a long time ago. They're just running us for fun, to idle away the lackluster aeons and pass the millennia of stifling boredom now that they've eaten pretty much the entire Milky Way or whatever. So in other words, Barrat can sit back, take a deep breath, and relax. Probably something in this sim like global warming will prod us into slaughtering one another very handily long before we re-invent the wheel of Super AI.
And even if I'm wrong about that? What if we are not just one virtual thread within a billion-path parallel-gamed ancestor sim? If we are the real McCoy, the Rubicon Generation on this? Well, then still I'm not worried in the least. You see, we humans have one fantastic ace in our pocket, something that these hyper-exponentially, cosmically brilliant AI Meta-Gods will never be able to replicate or overcome. That is our essential stupidity. Which you can see on dazzling display every single moment of every day of your life.
Because as another great writer noted long ago:
Against stupidity, the very gods themselves contend in vain.
- Friedrich Schiller
In a small irony, my writing about James Barrat's Our Final Invention has been slowed by a balky Internet connection. In my experience, glitches have become considerably more common as computers have become more powerful and complicated. Perhaps such growing glitchiness suggests artificial general intelligence (AGI) and artificial superintelligence (ASI) are more likely to get seriously out of control someday, though it might also be a hint that AGI and ASI are going to be harder to achieve than expected by either techno-optimists such as Ray Kurzweil or techno-pessimists such as James Barrat.
Barrat's goal in this book is to convince readers that AGI and ASI are likely to occur in the near future (the next couple of decades or so) and, more to the point, likely to be extremely dangerous. In fact, he repeatedly expresses doubt as to whether humanity is going to survive its imminent encounter with a higher intelligence.
I find him more convincing in arguing that ASI would carry significant risks than I do in his take on its feasibility and imminence. Barrat aptly points out that building safeguards into AI is a poorly developed area of research (and something few technologists have seen as a priority); that there are strong incentives in national and corporate competition to develop AI quickly rather than safely; and that much relevant research is weapons-related and distinctly not aimed at ensuring the systems will be harmless to humans.
The book becomes less convincing when it hypes current or prospective advances and downplays the challenges and uncertainties of actually constructing an AGI, let alone an ASI. (Barrat suggests that once you get AGI, it will quickly morph into ASI, which may or may not be true.) For instance, in one passage, after acknowledging that "brute force" techniques have not replicated everything the human brain does, he states:
>>But consider a few of the complex systems today's supercomputers routinely model: weather systems, 3-D nuclear detonations, and molecular dynamics for manufacturing. Does the human brain contain a similar magnitude of complexity, or an order of magnitude higher? According to all indications, it's in the same ballpark.<< Me: To model something and to reproduce it are not the same thing. Simulating weather or nuclear detonations is not equal to creating those real-world phenomena, and similarly a computer containing a detailed model of the brain would not necessarily be thinking like a brain or acting on its thoughts.
A big problem for AI, and one that gets little notice in this book, is that nobody has any idea how to program conscious awareness into a machine. That doesn't mean it can never be done, but it does raise doubts about assertions that it will or must occur as more complex circuits get laid down on chips in coming decades. Barrat often refers to AGIs and ASIs as "self aware" and his concerns center on such systems, having awakened, deciding that they have other objectives than the ones humans have programmed into them. One can imagine unconscious "intelligent" agents causing many problems (through glitches or relentless pursuit of some ill-considered programmed objective) but plotting against humanity seems like a job for an entity that knows that it and humans both exist.
Interestingly, though, Barrat offers the following dark scenario and sliver of hope:
>>I think our Waterloo lies in the foreseeable future, in the AI of tomorrow and the nascent AGI due out in the next decade or two. Our survival, if it is possible, may depend on, among other things, developing AGI with something akin to consciousness and human understanding, even friendliness, built in. That would require, at a minimum, understanding intelligent machines in a fine-grained way, so there'd be no surprises.<< Me: Note that some AI experts, such as Jeff Hawkins, have argued the opposite--that the very lack of human-like desires, such as for power and status, is why AI systems won't turn against their makers. It would be a not-so-small irony if efforts to make AIs more like us make them more dangerous.
Our Final Invention is a thought-provoking and valuable book. Even if its alarmism is overstated, as I suspect and hope, there is no denying that the subject Barrat addresses is one in which there is very little that can be said with confidence, and in which the consequences of being wrong are very high indeed.