Life 3.0: Being Human in the Age of Artificial Intelligence Hardcover – Deckle Edge, August 29, 2017
| Format | Price | New from | Used from |
| --- | --- | --- | --- |
| Audible Audiobook, Unabridged | $0.00 (free with your Audible trial) | — | — |
| Spiral-bound | — | — | $31.05 |
| Audio CD, Audiobook, Unabridged | — | — | — |
How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology—and there’s nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who’s helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today’s kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn’t shy away from the full range of viewpoints or from the most controversial issues—from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.
- Print length: 384 pages
- Language: English
- Publisher: Knopf
- Publication date: August 29, 2017
- Dimensions: 6.5 x 1 x 9.5 inches
- ISBN-10: 1101946598
- ISBN-13: 978-1101946596
“In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter.” —Highlighted by 1,124 Kindle readers
“Does it require interacting with people and using social intelligence? Does it involve creativity and coming up with clever solutions? Does it require working in an unpredictable environment?” —Highlighted by 928 Kindle readers
“All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.” —Highlighted by 849 Kindle readers
Editorial Reviews
Review
“This is a compelling guide to the challenges and choices in our quest for a great future of life, intelligence and consciousness—on Earth and beyond.”—Elon Musk, Founder, CEO and CTO of SpaceX and co-founder and CEO of Tesla Motors
“All of us—not only scientists, industrialists and generals—should ask ourselves what can we do now to improve the chances of reaping the benefits of future AI and avoiding the risks. This is the most important conversation of our time, and Tegmark’s thought-provoking book will help you join it.” —Professor Stephen Hawking, Director of Research, Cambridge Centre for Theoretical Cosmology
“Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” —Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity is Near and How to Create a Mind
“Being an eminent physicist and the leader of the Future of Life Institute has given Max Tegmark a unique vantage point from which to give the reader an inside scoop on the most important issue of our time, in a way that is approachable without being dumbed down.” —Jaan Tallinn, co-founder of Skype
“This is an exhilarating book that will change the way we think about AI, intelligence, and the future of humanity.” —Bart Selman, Professor of Computer Science, Cornell University
“The unprecedented power unleashed by artificial intelligence means the next decade could be humanity’s best—or worst. Tegmark has written the most insightful and just plain fun exploration of AI’s implications that I’ve ever read. If you haven’t been exposed to Tegmark’s joyful mind yet, you’re in for a huge treat.”—Professor Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy and co-author of The Second Machine Age
“Tegmark seeks to facilitate a much wider conversation about what kind of future we, as a species, would want to create. Though the topics he covers—AI, cosmology, values, even the nature of conscious experience—can be fairly challenging, he presents them in an unintimidating manner that invites the reader to form her own opinions.” —Nick Bostrom, Founder of Oxford’s Future of Humanity Institute, author of Superintelligence
"I was riveted by this book. The transformational consequences of AI may soon be upon us—but will they be utopian or catastrophic? The jury is out, but this enlightening, lively and accessible book by a distinguished scientist helps us to assess the odds." —Professor Martin Rees, Astronomer Royal, cosmology pioneer, author of Our Final Hour
"In [Tegmark's] magnificent brain, each fact or idea appears to slip neatly into its appointed place like another little silver globe in an orrery the size of the universe. There are spaces for Kant, Cold War history and Dostoyevsky, for the behaviour of subatomic particles and the neuroscience of consciousness....Tegmark describes the present, near-future and distant possibilities of AI through a series of highly original thought experiments....Tegmark is not personally wedded to any of these ideas. He asks only that his readers make up their own minds. In the meantime, he has forged a remarkable consensus on the need for AI researchers to work on the mind-bogglingly complex task of building digital chains that are strong and durable enough to hold a superintelligent machine to our bidding....This is a rich and visionary book and everyone should read it." —The Times (UK)
"Life 3.0 is far from the last word on AI and the future, but it provides a fascinating glimpse of the hard thinking required." —Stuart Russell, Nature
"Lucid and engaging, it has much to offer the general reader. Mr. Tegmark's explanation of how electronic circuitry–or a human brain–could produce something as evanescent and immaterial as thought is both elegant and enlightening. But the idea that machine-based superintelligence could somehow run amok is fiercely resisted by many computer scientists....Yet the notion enjoys more credence today than a few years ago, partly thanks to Mr. Tegmark.” —Wall Street Journal
"Tegmark’s book, along with Nick Bostrom’s Superintelligence, stands out among the current books about our possible AI futures....Tegmark explains brilliantly many concepts in fields from computing to cosmology, writes with intellectual modesty and subtlety, does the reader the important service of defining his terms clearly, and rightly pays homage to the creative minds of science-fiction writers who were, of course, addressing these kinds of questions more than half a century ago. It’s often very funny, too." —The Telegraph (UK)
“Exhilarating….MIT physicist Tegmark surveys advances in artificial intelligence such as self-driving cars and Jeopardy-winning software, but focuses on the looming prospect of “recursive self-improvement”—AI systems that build smarter versions of themselves at an accelerating pace until their intellects surpass ours. Tegmark’s smart, freewheeling discussion leads to fascinating speculations on AI-based civilizations spanning galaxies and eons….Engrossing.” —Publishers Weekly
Excerpt. © Reprinted by permission. All rights reserved.
The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species we’ve encountered so far, let’s instead define life very broadly, simply as a process that can retain its complexity and replicate. What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged. When a bacterium makes a copy of its DNA, no new atoms are created, but a new set of atoms are arranged in the same pattern as the original, thereby copying the information. In other words, we can think of life as a self-replicating information processing system whose information (software) determines both its behavior and the blueprints for its hardware.
Like our universe itself, life gradually grew more complex and interesting, and as I’ll now explain, I find it helpful to classify life forms into three levels of sophistication: Life 1.0, 2.0 and 3.0.
It’s still an open question how, when and where life first appeared in our universe, but there is strong evidence that, here on Earth, life first appeared about 4 billion years ago. Before long, our planet was teeming with a diverse panoply of life forms. The most successful ones, which soon outcompeted the rest, were able to react to their environment in some way. Specifically, they were what computer scientists call “intelligent agents”: entities that collect information about their environment from sensors and then process this information to decide how to act back on their environment. This can include highly complex information-processing, such as when you use information from your eyes and ears to decide what to say in a conversation. But it can also involve hardware and software that’s quite simple.
For example, many bacteria have a sensor measuring the sugar concentration in the liquid around them and can swim using propeller-shaped structures called flagella. The hardware linking the sensor to the flagella might implement the following simple but useful algorithm: “If my sugar concentration sensor reports a lower value than a couple of seconds ago, then reverse the rotation of my flagella so that I change direction.”
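The bacterium’s hard-coded rule can be sketched in a few lines of code. This is an illustrative toy, not anything from the book: the one-dimensional sugar gradient, the step size and all the function names are invented for the example; only the if-sugar-dropped-then-reverse rule comes from the text.

```python
# Toy sketch of the bacterium's sugar-seeking rule quoted in the excerpt.
# The 1-D environment and step sizes are invented for illustration;
# only the "if sugar dropped, reverse the flagella" rule is from the text.

def sugar_concentration(position):
    """Hypothetical environment: sugar is densest at position 10."""
    return -abs(position - 10)

def run_bacterium(start, steps=40):
    position = start
    direction = 1  # which way the flagella currently push: +1 or -1
    previous_reading = sugar_concentration(position)
    for _ in range(steps):
        position += direction
        reading = sugar_concentration(position)
        # The hard-coded algorithm: a lower reading than a moment ago
        # means we are swimming away from the sugar, so reverse direction.
        if reading < previous_reading:
            direction = -direction
        previous_reading = reading
    return position

print(run_bacterium(start=0))  # ends up hovering near the sugar peak at 10
```

Note that nothing here is learned: the rule is fixed in advance, which is exactly the point the excerpt goes on to make about Life 1.0.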
Whereas you’ve learned how to speak and countless other skills, bacteria aren’t great learners. Their DNA specifies not only the design of their hardware, such as sugar sensors and flagella, but also the design of their software. They never learn to swim toward sugar; instead, that algorithm was hard-coded into their DNA from the start. There was of course a learning process of sorts, but it didn’t take place during the lifetime of that particular bacterium. Rather, it occurred during the preceding evolution of that species of bacteria, through a slow trial-and-error process spanning many generations, where natural selection favored those random DNA mutations that improved sugar consumption. Some of these mutations helped by improving the design of flagella and other hardware, while other mutations improved the bacterial information processing system that implements the sugar-finding algorithm and other software.
Such bacteria are an example of what I’ll call “Life 1.0”: life where both the hardware and software is evolved rather than designed. You and I, on the other hand, are examples of “Life 2.0”: life whose hardware is evolved, but whose software is largely designed. By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.
You weren’t able to perform any of those tasks when you were born, so all this software got programmed into your brain later through the process we call learning. Whereas your childhood curriculum is largely designed by your family and teachers, who decide what you should learn, you gradually gain more power to design your own software. Perhaps your school allows you to select a foreign language: do you want to install a software module into your brain that enables you to speak French, or one that enables you to speak Spanish? Do you want to learn to play tennis or chess? Do you want to study to become a chef, a lawyer or a pharmacist? Do you want to learn more about artificial intelligence (AI) and the future of life by reading a book about it?
This ability of Life 2.0 to design its software enables it to be much smarter than Life 1.0. High intelligence requires both lots of hardware (made of atoms) and lots of software (made of bits). The fact that most of our human hardware is added after birth (through growth) is useful, since our ultimate size isn’t limited by the width of our mom’s birth canal. In the same way, the fact that most of our human software is added after birth (through learning) is useful, since our ultimate intelligence isn’t limited by how much information can be transmitted to us at conception via our DNA, 1.0-style. I weigh about 25 times more than when I was born, and the synaptic connections that link the neurons in my brain can store about a hundred thousand times more information than the DNA that I was born with. Your synapses store all your knowledge and skills as roughly 100 terabytes worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. So it’s physically impossible for an infant to be born speaking perfect English and ready to ace her college entrance exams: there’s no way the information could have been pre-loaded into her brain, since the main information module she got from her parents (her DNA) lacks sufficient information-storage capacity.
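The “hundred thousand times” figure follows directly from the book’s two rough estimates, as a quick back-of-the-envelope check shows (the gigabyte and terabyte figures are the excerpt’s approximations, not precise measurements):

```python
# Back-of-the-envelope check of the excerpt's storage comparison.
# The ~1 gigabyte (DNA) and ~100 terabyte (synapses) figures are the
# book's rough estimates, not precise measurements.

GIGABYTE = 10**9   # bytes
TERABYTE = 10**12  # bytes

dna_capacity = 1 * GIGABYTE        # information passed on via DNA
synapse_capacity = 100 * TERABYTE  # knowledge and skills in synapses

ratio = synapse_capacity / dna_capacity
print(ratio)  # 100000.0: the "hundred thousand times" in the text
```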
The ability to design its software enables Life 2.0 to be not only smarter than Life 1.0, but also more flexible. If the environment changes, 1.0 can only adapt by slowly evolving over many generations. 2.0, on the other hand, can adapt almost instantly, via a software update. For example, bacteria frequently encountering antibiotics may evolve drug resistance over many generations, but an individual bacterium won’t change its behavior at all, while a girl learning that she has a peanut allergy will immediately change her behavior to start avoiding peanuts. This flexibility gives Life 2.0 an even greater edge at the population level: even though the information in our human DNA hasn’t evolved dramatically over the past 50,000 years, the information collectively stored in our brains, books and computers has exploded. By installing a software module enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains, potentially surviving even after the original brain died. By installing a software module enabling us to read and write, we became able to store and share vastly more information than people could memorize. By developing brain-software capable of producing technology (i.e., by studying science and engineering), we enabled much of the world’s information to be accessed by many of the world’s humans with just a few clicks.
This flexibility has enabled Life 2.0 to dominate Earth. Freed from its genetic shackles, humanity’s combined knowledge has kept growing at an accelerating pace as each breakthrough enabled the next: language, writing, the printing press, modern science, computers, the internet, etc. This ever-faster cultural evolution of our shared software has emerged as the dominant force shaping our human future, rendering our glacially slow biological evolution almost irrelevant.
Yet despite the most powerful technologies we have today, all life forms we know of remain fundamentally limited by their biological hardware. None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years, enabling our universe to finally fulfill its potential and wake up fully. All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.
The boundaries between the three stages of life are slightly fuzzy. If bacteria are Life 1.0 and humans are Life 2.0, then you might classify mice as 1.1: they can learn many things, but not enough to develop language or invent the internet. Moreover, because they lack language, what they learn gets largely lost when they die, not passed on to the next generation. Similarly, you might argue that today’s humans should count as Life 2.1: we can perform minor hardware upgrades such as implanting artificial teeth, knees and pacemakers, but nothing as dramatic as getting ten times taller or getting a thousand times bigger brains.
In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:
· Life 1.0 (biological stage): evolves its hardware and software
· Life 2.0 (cultural stage): evolves its hardware, designs much of its software
· Life 3.0 (technological stage): designs its hardware and software
After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book.
Product details
- Publisher : Knopf; First Edition (August 29, 2017)
- Language : English
- Hardcover : 384 pages
- ISBN-10 : 1101946598
- ISBN-13 : 978-1101946596
- Item Weight : 1.64 pounds
- Dimensions : 6.5 x 1 x 9.5 inches
- Best Sellers Rank: #252,261 in Books (See Top 100 in Books)
- #58 in Robotics (Books)
- #112 in Robotics & Automation (Books)
- #156 in Computers & Technology Industry
About the author

Max Tegmark is an MIT professor who loves thinking about life's big questions, and has authored 2 books and more than 200 technical papers on topics from cosmology to artificial intelligence. He is known as "Mad Max" for his unorthodox ideas and passion for adventure. He is also president of the Future of Life Institute, which aims to ensure that we develop not only technology, but also the wisdom required to use it beneficially.
Customer reviews
Top reviews from the United States
Max Tegmark enthusiastically writes about what life will be like for us humans with the rise of AI (Artificial Intelligence), AGI (Artificial General Intelligence, intelligence on par with humans) and the possibility/probability of creating superintelligence (AI-enabled intelligence that far surpasses human intelligence and capabilities). He asks the reader to engage critically with him in imagining scenarios of what such an AI reality could mean for us, and to respond on his Age of AI website.
The book begins with the Tale of the Omega Team, a group of humans who decide to release an advanced AI, named Prometheus, surreptitiously and in a controlled way into human society. The tale unfolds as a world takeover by Prometheus, which in a final triumph becomes the world’s first single power, able to let life flourish for billions of years on Earth and spread throughout the cosmos.
If you have never read much post-modern futurology, Tegmark is a good way to take the plunge. He brings together much of the thinking about what humanity will have to deal with, the decisions it will have to make and the options it might have with the inevitable advancement of technology and specifically AI. Above all, he encourages the reader to believe that she/he has an important role to play in what the future will hold for us and that we need not, indeed cannot, succumb to fatalism. The most commendable, concrete and hopeful part of the book is his story of AI researchers coming to agreement about a path forward for AI that is proactive in addressing the challenges it presents and the impact it will have on human society. The end of the book lays out this path in the Asilomar AI Principles, which were created, critiqued, refined and agreed through a process initiated at an AI conference in Puerto Rico in January 2015. The takeaway for Tegmark is that AI research can now confidently go forward with the knowledge that impacts and consequences for humanity have been, and will be, addressed in the process to mitigate any negatives. He and his colleagues deserve credit for such engagement and thoughtful commitment in their endeavors.
For the above I gave the book four stars. The book is also fun to read and challenging to our common political and economic realities. There are, however, areas of concern that are either untouched or passed over lightly, to which I now turn:
1. The quest for truth - Tegmark assumes that we have an “excellent framework for our truth quest: the scientific method.” I start my critique here because this assumption is neither argued nor established. There is no argument against the formidable power of scientific methodology to give deep explanations of natural reality. However, the issue of truth is rightly not the purview of science, but of philosophy. This may seem nit-picky, but we are too used to the idea that science is the absolute arbiter of truth, as though it can offer a complete picture of reality, when in fact that is not within its job description.
2. The way Tegmark frames his definition of life is a case in point. He makes two moves: first, using the scientific method, he deconstructs life in a reductionist move; second, he decenters biotic, human life in its importance and necessity in the unfolding of what he calls Life 3.0. Tegmark's first move reduces the definition of life to “a process that can retain its complexity and replicate itself.” With this highly generalized definition he can then reduce life further to atoms arranged in a pattern that contains information.
This broad definition is important for the second move which is the decentering of biotic human life. Here he offers a post-modern notion that human life (anthropocentric) can no longer be the measure of all things. Humans have been displaced from the center of the universe in great steps since Copernicus. If we are going to promote Life 3.0, we must continue this decentering to make room for the expanded definition of life he offers. Life must now be imagined as other than biotic. It must include the possibilities imagined by our new technologies of superintelligence housed in robust substrates where human consciousness or even non-human consciousness can reside for great lengths of time and go beyond earth to the reaches of the universe. If it sounds utopian, there is that clear melody line in Tegmark’s writing, in spite of some protestations to the contrary.
This is Tegmark’s book; he can define life however he sees fit. From my perspective, life was the good old-fashioned, highly unlikely emergence of biotic generativity, the beginning of which we do not yet know. Evolution did its trial-and-error number over four billion years to produce humans. If and when there is ever the need to call something non-biotic “life,” it will be apparent at that moment and not before. This does not mean that preparation for AI is not needed. It is that sapience is not sentience, nor does intelligence to some superhuman degree make something life, even if it can mimic or surpass human neurology. Call it what it is: a really smart human-made machine that is programmed to learn, replicate, maybe have what we call consciousness, and cause us all kinds of grief and gladness. Life? No.
3. It is good that Tegmark wades into the arena of ethics because they cry out for attention.
• First, can anyone actually account for or accurately quantify/qualify human behavior? History has yet to convince us that humans, whether naturally tending toward the moral or not, can be morally controlled. The evidence is in our history. And yes, there are many heroes, but there are many who are classified “evil.” One need only look at the current “fad” of mass shootings in the USA. We may blame mentally unstable people for this, but we are those people. Tegmark points out that AI is morally neutral and, like guns, is not the evil element in the equation. But AI is initially, and therefore ultimately, a human endeavor, and so it is imbued with human limitations. As good and needed an attempt as the Asilomar AI Principles are, we can be sure that AI will be used wrongly and perhaps fatally to all of life. Our certainty comes from knowing ourselves as humans. We are a product of Nature, which models the whole spectrum of behaviors from the deeply violent to the deeply loving. More species of life on earth have gone extinct than are alive today. Dare we think that humans might escape a similar fate because we are intelligent or have benign superintelligent buddies? Before anything else can be discussed regarding the deep future of humanity, humanity itself has to come to grips with itself. Though Tegmark rhetorically acknowledges such negative possibilities, he is full steam ahead in his assumptions and commitment to the development of superintelligence.
• Second, in our modern world moral absolutes are hard to come by. In a purely naturalistic setting all morality is relative, and therefore depends upon the decisions of humans within a cultural setting and within the personal psyches of the individuals making moral choices. It is not cynical to believe that if you scratch a beautiful public moral persona, you will get it to bleed a bewildering moral anomaly. Look at how many moral quibbles some of the scientists involved in developing atomic/nuclear weaponry had. When threatened, it seems “all options are on the table.” For all the good of Tegmark’s intentions, this is a very uncertain area. Even his examples of several Russian men who prevented nuclear holocaust are frightening enough for us to understand just how morally serious the moment in which we live is. So the question is: do we have a sufficient moral foundation and the will to unleash AI invention and use?
• Third, in spite of trying rhetorically throughout his book to move away from human-centeredness, Tegmark in the end does no better than anyone else at actually doing so. In fact, it is likely that humans will never be able to decenter themselves, because all our concepts, heuristic overlays, thought processes, bodily constraints and needs make it impossible. At any rate, Tegmark, without great explanation or justification, joins others in believing that humans must spread their life and intelligence throughout as much of the universe as possible, in order to unleash its potential! That very idea is human-centered: colonialist, exploitative, presumptive and perhaps idolatrous. In a universe where, as far as we know for sure, life is located only on our planet, why do we think life, our life, should interrupt that immense time/space with our angst? Do we think our machines will overcome human moral ambivalence? Why export our unfinished earthly project to more territory? Why not make a moral stand to address earth and human issues, so that until we have reached a greater potential morally, spiritually, intellectually, materially and relationally, we stay here and make sure our AI does too? Talk about a utopian dream! The point is that morally there is no good argument for taking human life and its issues elsewhere, especially because that means unleashing the whole spectrum of human experience.
• Fourth, though the book’s subtitle is “Being Human in the Age of Artificial Intelligence,” Tegmark does not address in any depth what happens to humanity, or even whether humanity can last, in the face of superintelligence. This holds even under the assumption that AI will be good for humans. Human and AI life forms are critically different from each other. Though there might be some compatibility between the two, AI is more akin to rocks and electrical switches than to humans. The human biotic substrate of our existence is, in comparison, obsolete. The issues this raises cannot be put aside cavalierly with the technological move of uploading our humanity into a more robust substrate. Humanity by definition is biotic. If one cannot accept Tegmark’s generous new definition of life, it means humans will be decentered in a devastating way.
4. One last thing needs mention: Tegmark’s use of the words “pessimistic” and “optimistic” in regard to the future path that AI will take. Both these words are unscientific. They describe a general psychological intuition or feeling about something, based on a foundation that seems solid or not. To use such words in the context of AI’s value and its possible future effects on humanity is misplaced; better to stick with more concrete descriptions. One can say the same about Tegmark and his colleagues’ enthusiasm for future technological wonderments. History again has to keep us grounded. Who would have thought (obviously no one did) at the beginning of the Industrial Revolution that its descendants would be threatened within a degree or two of their lives by the burning of plentiful fossil fuel? Whatever plans are put forth to mitigate the impact of humans messing around with nature, we can be assured that we will always miscalculate and create unintended consequences. Explorers, explore, but beware!
Top reviews from other countries
I have known Tegmark's work since I started my Ph.D. in cosmology; some of the early papers I read were written by him. I found him warm and remarkable: his approach was lively and engaging, not cold and authoritarian as is mostly the case with academics. His curiosity looked genuine and his enthusiasm childlike. In the early 2000s his profile was growing, with more enthusiasm and maybe less depth. His Scientific American article “Parallel Universes” was a blockbuster, and his account of the historical development of quantum mechanics remarkable; from the very beginning he had the privilege of being in the company of John Wheeler, Nick Bostrom and Frank Wilczek. When I got a chance to meet Tegmark in 2006 at ICTP, Italy, where he was giving a course and I was one of the attendees, I spent some time with him and had a long list of questions, most of which he answered. At that time there was not much hype about AI, and the occasional philosophers with roots in physics and astrophysics were more interested in the origin of the universe, the definition of life, free will, space-time singularities and the interpretation of quantum mechanics. Fast forward 15 years: AI has overshadowed other profound questions, and Tegmark has switched tracks and found himself engaging with AI questions. Some of the plus points of the book are as follows:
1. It reads as a single coherent story rather than a bunch of disconnected ones.
2. The approach is quite honest: Tegmark mostly asks big questions and never pretends to give definitive answers, since there are none!
3. This is a book written to be read, not to sit on a bookshelf. He brings in the minimum technical material required and fills the rest with ideas from other experts and his personal accounts.
4. Whether AI will overtake humanity, and what kind of safety protocols should be put in place, may be hard to converge on, but Max convincingly makes the case that the problem needs solving.
5. The book has important references, which readers can check for more details.
The book, or its approach, also has some negative points, such as the following.
1. Most of the ideas presented in the book are not new; Max has simply presented them in a new format.
2. Money is important: it can be a great enabler or a great distractor, and in most cases it is the latter. So it is not clear how securing funding for AI safety can be considered an achievement.
The last part of the book mostly discusses how we can regulate AI research to make it safe, and it offers many suggestions. But the problem is that we know this approach does not work. After WWII we created the United Nations, yet it failed to stop superpowers from inflicting misery on millions of people. The war in Ukraine is an example: the entire world is left at the mercy of a single dictator who can press the red button at any time. AI-empowered superpowers may be far more dangerous than nuclear-armed ones.
Apart from AI safety, the book raises many other questions, such as the purpose and meaning of life, computation, complexity and consciousness. In short, anyone interested in deep questions must read this book.
Reviewed in India on November 8, 2022
These questions ultimately concern everyone, not only those working in IT or IT-driven industries (which will soon be nearly everyone), and it is all the more important that ever more people, especially decision-makers for whom the internet is still "uncharted territory," engage seriously with the resulting challenges. In Life 3.0, Max Tegmark calls for exactly that.
Tegmark first engages intensively with the concept of intelligence itself and its substrate independence; this chapter, whose fascination is hard to surpass, lays the foundation for understanding artificial intelligence. The yardstick against which every artificial intelligence must be measured is human intelligence. If AI one day reaches that level and becomes human-level artificial intelligence, new and unsettling questions arise that affect us all.
Anyone who has seen films like Ex Machina knows that a human-level artificial intelligence can easily tip over into superhuman-level artificial intelligence / artificial general intelligence (AGI), the so-called singularity. The singularity denotes the moment when an AGI reaches a level of intelligence that allows it to rise above the will of its creators and accomplish its "breakout" from confinement into the world, that is, into the world's data networks. Ex Machina, which is also cited in the book, visualizes this threat vividly and in highly dramatized form in a humanized robot figure who kills her creator, imprisons and abandons to death the manipulated fool who handed her the keys to her confinement, and finally steps out into the light, into the world. Ex Machina is a cinematic allegory meant to ease identification, but the singularity could, in its essentials, play out in just that way: through targeted manipulation of the lesser intelligence (humans) by the greater one (AGI). AGI is by definition superior to humans in intelligence, in all relevant fields and not only in selected ones, as is (still) the case today.
How far are we from the singularity? Nobody knows, and Tegmark does not gloss over this fact; there are competing opinions. Some believe AGI will be a reality within 100 years, others that it will never be a reality at all. But that is not what matters. What matters is that the possibility of a singularity exists, and Tegmark puts his finger on the wound when he writes that we cannot begin searching for answers to the pressing questions only once the singularity is already at the door or has already happened; by then it is too late. That is why Tegmark argues for starting now to grapple with the question that could influence the fate of humanity, positively or negatively, more than any other:
What goals do we give the AGI? And how do we ensure that those goals cannot be interpreted in such a way that the AGI, even while fulfilling them, turns against the goals and interests of humanity?
Assuming these questions have been answered in a way satisfactory to humanity, Tegmark offers an outlook on a future in which humanity, through its self-created and quasi-evolutionarily developing AGI, spreads life across the entire universe. It is this science-fiction part of the work that grips like a thriller.
In the book's final chapter, Tegmark turns to the question of whether AGI could have consciousness. Alarmingly, even with respect to human beings we are not clear about what consciousness is, what produces it, and in what forms we may encounter it. Consciousness research has long been a stepchild, decried as esoteric. But this discipline, too, acquires new urgency, because our behavior toward and attitude about AGI will depend heavily on whether consciousness can be demonstrated in it. It is one thing to pull the plug on the MacBook on which this review was written, format the hard drive and reach for a new one; it is quite another to pull the plug on a consciousness.
This review can hardly do justice to the complexity and depth of the book, and is meant merely as an appetizer for those who shy away from IT topics. Yes, you have to channel your inner nerd, but it is well worth it.
Max Tegmark has created a wonderful work, consciousness-expanding in the truest sense. The book is a sensible companion to Homo Deus, and anyone who first needs an entertaining introduction should start with Future Rising.
I have just started reading and am already impressed. It looks like I got a book that will keep my mind occupied for the next few weeks.
Suggestion: read this book at an easy pace, because the content is good and important.