- Hardcover: 384 pages
- Publisher: Knopf (August 29, 2017)
- Language: English
- ISBN-10: 1101946598
- ISBN-13: 978-1101946596
- Product Dimensions: 6.7 x 1.2 x 9.6 inches
- Shipping Weight: 1.7 pounds
- Average Customer Review: 70 customer reviews
- Amazon Best Sellers Rank: #1,279 in Books
Life 3.0: Being Human in the Age of Artificial Intelligence
“Original, accessible, and provocative….Tegmark successfully gives clarity to the many faces of AI, creating a highly readable book that complements The Second Machine Age’s economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in Superintelligence…. At one point, Tegmark quotes Emerson: ‘Life is a journey, not a destination.’ The same may be said of the book itself. Enjoy the ride, and you will come out the other end with a greater appreciation of where people might take technology and themselves in the years ahead.” —Science
“This is a compelling guide to the challenges and choices in our quest for a great future of life, intelligence and consciousness—on Earth and beyond.” —Elon Musk, Founder, CEO and CTO of SpaceX and co-founder and CEO of Tesla Motors
“All of us—not only scientists, industrialists and generals—should ask ourselves what can we do now to improve the chances of reaping the benefits of future AI and avoiding the risks. This is the most important conversation of our time, and Tegmark’s thought-provoking book will help you join it.” —Professor Stephen Hawking, Director of Research, Cambridge Centre for Theoretical Cosmology
“Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” —Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity is Near and How to Create a Mind
“Being an eminent physicist and the leader of the Future of Life Institute has given Max Tegmark a unique vantage point from which to give the reader an inside scoop on the most important issue of our time, in a way that is approachable without being dumbed down.” —Jaan Tallinn, co-founder of Skype
“This is an exhilarating book that will change the way we think about AI, intelligence, and the future of humanity.” —Bart Selman, Professor of Computer Science, Cornell University
“The unprecedented power unleashed by artificial intelligence means the next decade could be humanity’s best—or worst. Tegmark has written the most insightful and just plain fun exploration of AI’s implications that I’ve ever read. If you haven’t been exposed to Tegmark’s joyful mind yet, you’re in for a huge treat.” —Professor Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy and co-author of The Second Machine Age
“Tegmark seeks to facilitate a much wider conversation about what kind of future we, as a species, would want to create. Though the topics he covers—AI, cosmology, values, even the nature of conscious experience—can be fairly challenging, he presents them in an unintimidating manner that invites the reader to form her own opinions.” —Nick Bostrom, Founder of Oxford’s Future of Humanity Institute, author of Superintelligence
"I was riveted by this book. The transformational consequences of AI may soon be upon us—but will they be utopian or catastrophic? The jury is out, but this enlightening, lively and accessible book by a distinguished scientist helps us to assess the odds." —Professor Martin Rees, Astronomer Royal, cosmology pioneer, author of Our Final Hour
"In [Tegmark's] magnificent brain, each fact or idea appears to slip neatly into its appointed place like another little silver globe in an orrery the size of the universe. There are spaces for Kant, Cold War history and Dostoyevsky, for the behaviour of subatomic particles and the neuroscience of consciousness....Tegmark describes the present, near-future and distant possibilities of AI through a series of highly original thought experiments....Tegmark is not personally wedded to any of these ideas. He asks only that his readers make up their own minds. In the meantime, he has forged a remarkable consensus on the need for AI researchers to work on the mind-bogglingly complex task of building digital chains that are strong and durable enough to hold a superintelligent machine to our bidding....This is a rich and visionary book and everyone should read it." —The Times (UK)
"Life 3.0 is far from the last word on AI and the future, but it provides a fascinating glimpse of the hard thinking required." —Stuart Russell, Nature
"Lucid and engaging, it has much to offer the general reader. Mr. Tegmark's explanation of how electronic circuitry–or a human brain–could produce something as evanescent and immaterial as thought is both elegant and enlightening. But the idea that machine-based superintelligence could somehow run amok is fiercely resisted by many computer scientists....Yet the notion enjoys more credence today than a few years ago, partly thanks to Mr. Tegmark.” —Wall Street Journal
"Tegmark’s book, along with Nick Bostrom’s Superintelligence, stands out among the current books about our possible AI futures....Tegmark explains brilliantly many concepts in fields from computing to cosmology, writes with intellectual modesty and subtlety, does the reader the important service of defining his terms clearly, and rightly pays homage to the creative minds of science-fiction writers who were, of course, addressing these kinds of questions more than half a century ago. It’s often very funny, too." —The Telegraph (UK)
“Exhilarating….MIT physicist Tegmark surveys advances in artificial intelligence such as self-driving cars and Jeopardy-winning software, but focuses on the looming prospect of “recursive self-improvement”—AI systems that build smarter versions of themselves at an accelerating pace until their intellects surpass ours. Tegmark’s smart, freewheeling discussion leads to fascinating speculations on AI-based civilizations spanning galaxies and eons….Engrossing.” —Publishers Weekly
About the Author
MAX TEGMARK is an MIT professor who has authored more than 200 technical papers on topics from cosmology to artificial intelligence. As president of the Future of Life Institute, he worked with Elon Musk to launch the first-ever grants program for AI safety research. He has been featured in dozens of science documentaries. His passion for ideas, adventure, and entrepreneurship is infectious.
Top customer reviews
Have you noticed how you don’t “solve” CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) anymore? That’s because computers now can. Artificial intelligence, a fairly niche area of mostly academic study a decade ago, has exploded in the last five years. Much more quickly than many anticipated, machine learning (a subset of AI) systems have defeated the best human Go players, are piloting self-driving cars, and are usefully if imperfectly translating documents, labeling your photos, understanding your speech, and so on. This has led to huge investment in AI by companies and governments, with every sign that progress will continue. This book is about what happens if and when it does.
But why hear about it from Tegmark, an accomplished MIT physicist and cosmologist, rather than (say) an AI researcher? First, Tegmark has over the past few years *become* an AI researcher, with five technical papers published in the past two years. But he’s also got a lifetime of experience thinking carefully, rigorously, generally (and entertainingly to boot) about the “big picture” of what is possible, and what is not, over long timescales and cosmic distances (see his last book!) – which most AI researchers do not. Finally, he’s played an active and key role (as you can read about in the book’s epilogue) in actually creating conversation and research about the long-term impacts and safety of AI. I don’t think anyone is more comprehensively aware of the full spectrum of important aspects of the issue.
So now the book. Chapter 1 lays out why AI is suddenly on everyone’s radar, and very likely to be extremely important over the coming decades, situating the present day as a crucial point within the wider sweep of human and evolutionary history on Earth. Chapter 2 takes the question of “what is intelligence?” and abstracts it from its customary human application to “what is intelligence *in general*?” How can we define it in a way useful enough to cover both biological and artificial forms, and how do these tie to a basic understanding of the physical world? This lays the groundwork for the question of what happens as artificial intelligences grow ever more powerful. Chapter 3 addresses this question in the near future: what happens as more and more human jobs can be done by AIs? What about AI weapons replacing human-directed ones? How will we cope when more and more decisions are made by AIs that may be flawed or biased? This is about a lot of important changes occurring *right now* to which society is, for the most part, asleep at the wheel. Chapter 4 gets into what is exciting – and terrifying – about AI: as a designed intelligence, it can in principle *re*design itself to get better and better, potentially on a relatively short timescale. This raises a lot of rich, important, and extremely difficult questions that not many people have thought through carefully (another in-print example is the excellent book by Bostrom). Chapter 5 discusses what happens to humans as a species after an “intelligence explosion” takes place. Here Tegmark is making a call to start thinking about where we want to be, as we may end up somewhere sooner than we think, and some of the possibilities are pretty awful. Chapter 6 exhibits Tegmark’s unique talent for tackling the big questions, looking at the *ultimate* limits and promise of intelligent life in the universe, and how stupefyingly high the stakes might be for getting the next few decades right.
It’s both a sobering and an exhilarating prospect. Chapters 7 and 8 then dig into some of the deep and interesting questions about AI: what does it mean for a machine to have “goals”? What are our goals as individuals and a society, and how can we best aim toward them in the long term? Can a machine we design have consciousness? What is the long-term future of consciousness? Is there a danger of relapsing into a universe *without* consciousness if we aren’t careful? Finally, an epilogue describes Tegmark’s own experience – which I’ve had the privilege to personally witness – as a key player in an effort to focus thought and effort on AI and its long-term implications, of which writing this book is a part. (And I should also mention the prologue, which gives a fictional but less *science*-fictional depiction of an artificial superintelligence being used by a small group to seize control of human society.)
The book is written in a lively and engaging style. The explanations are clear, and Tegmark develops a lot of material at a level that is understandable to a general audience, yet rigorous enough to give readers a real understanding of the issues relevant to thinking about the future impact of AI. There are a lot of new ideas in the book, and although the style is sometimes breezy, that belies a lot of careful thinking about the issues.
It’s possible that real artificial general intelligence (AGI) is 100 or more years away, a problem for the next generation, with large but manageable effects of “narrow” AI to deal with over a span of decades. But it’s also quite possible that it’s going to happen 10, 15, 20, or 30 years from now, in which case society is going to have to make a lot of very wise and very important (literally of cosmic import) decisions very quickly. It’s important to start the conversation now, and there’s no better way.