- Paperback: 386 pages
- Publisher: Lifeboat Foundation; 4 edition (November 23, 2017)
- Language: English
- ISBN-10: 0998413119
- ISBN-13: 978-0998413112
- Product Dimensions: 6 x 0.9 x 9 inches
- Shipping Weight: 1.4 pounds
- Average Customer Review: 47 customer reviews
- Amazon Best Sellers Rank: #1,611,647 in Books
The Human Race to the Future: What Could Happen - and What to Do Paperback – November 23, 2017
About the Author
Daniel Berleant, Ph.D. wrote this book for science-literate readers seeking a readable book on the future. Educated at the University of Texas at Austin and MIT, Berleant seeks to communicate the future, the science behind the future, and the past and present foundations of the future.
Showing 1-3 of 47 reviews
In this review I attempt to present some of the parts I liked, along with those many parts that bugged me. Perhaps the author can use these suggestions so that his next edition will get 5 stars from me (not that he should care since I see that many readers already assigned the book 5 stars). Anyway, here goes:
PARTS I LIKED:
(a) Wiki-wiki-wikipedia (Chapter 4) — Berleant explains how wikipedia could be much more useful if it could, in the near future, generate articles (on the fly) that consist of appropriate intersections of pre-existing wiki articles. For example, if one wants to know about the use of computers in developing, say, different perfumes, one currently has to search wikipedia for perfume topics; then search for computer topics, in the hope of finding an intersection. With a wiki-wikipedia, portions of articles would be accessed and combined (upon request), based on both topic and context (e.g. perfumes in the context of computers). This future capability would enable anyone to find articles about topic A within the context of topic B. Extending this idea would involve intersections of, say, topics A+B+C (thus the term wiki-wiki-wikipedia).
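To make the idea concrete, this intersection retrieval can be sketched with ordinary set operations. The tiny tagged "corpus" below is invented purely for illustration; nothing like it appears in the book:

```python
# Minimal sketch of "wiki-wiki" article intersection: given articles tagged
# with topics, retrieve those lying at the intersection of requested topics.
# The corpus and its tags are invented for this example.
articles = {
    "Perfume chemistry":          {"perfume", "chemistry"},
    "Computational scent design": {"perfume", "computers"},
    "History of perfume":         {"perfume", "history"},
    "Machine learning basics":    {"computers"},
}

def intersect(*topics):
    """Return article titles tagged with ALL of the given topics."""
    wanted = set(topics)
    return sorted(t for t, tags in articles.items() if wanted <= tags)

# Topic A ("perfume") in the context of topic B ("computers"):
print(intersect("perfume", "computers"))
```

A real system would intersect portions of articles rather than whole titles, but the retrieval logic is the same set intersection, extended to A+B+C by passing more topics.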
(b) Difficulties with prediction (Chapter 16) — It’s very savvy of the author, who is making predictions (sometimes way into the future), to discuss reasons why accurate predictions are impossible. The reasons given in his book stem from Chaos theory and Quantum theory.
However, given that the author is a professor of computer science, it is interesting to me that he did not bother to mention the impossibility of predicting the behavior of computing devices (or of the algorithms/programs that these devices compute). This problem is usually called “The Halting Problem”: it has been proven that it is theoretically impossible, in general, to predict whether even completely deterministic algorithms (programs) will halt. When I first came upon this result, as an undergrad computer science major, I was astounded. One can have a complete description of a given algorithm and be able to mimic its behavior entirely, and yet still not be able to predict its behavior — even though the algorithm is completely deterministic!
It turns out that it is not only impossible to predict when some algorithm will (or will not) halt, but it is also impossible to predict ANY type of behavior that an algorithm might produce (e.g. when it might or might not print a given value). One would think, if one can completely mimic its behavior, surely one should be able to predict when it will print a certain result.
The reason for this fundamental unpredictability is that programming languages make use of IF-THEN statements — the IF-part checks for some condition and if that condition holds, then the THEN-part executes. An algorithm can use an IF-THEN command to check to see what prediction has been made — about the algorithm itself. The algorithm can then execute the THEN-part to defeat that prediction. For example, if the prediction is that, say, a given algorithm will next generate the phrase “MOM LOVES ME”, one of the algorithm’s IF-THEN statements can check for this prediction, and instead, print the phrase “MOM HATES ME” (thus defeating the prediction).
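The trick is easy to make concrete. Here is a toy sketch (the function name and phrasing are mine, not the book's):

```python
def contrarian(prediction):
    """A deterministic program that consults the prediction made about it
    and then does the opposite, so no prediction about it can be correct."""
    if prediction == "MOM LOVES ME":
        return "MOM HATES ME"
    else:
        return "MOM LOVES ME"

# Whatever you predict the program will print, it prints something else:
print(contrarian("MOM LOVES ME"))   # MOM HATES ME
print(contrarian("MOM HATES ME"))   # MOM LOVES ME
```

This is essentially the diagonalization move behind the standard undecidability proof: feed any proposed predictor's output back to the program, and the program can always defeat it.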
We humans also have this capability. If I predict that my two siblings, who let’s say happen to be arguing with each other at the moment, will punch each other and they hear my prediction, they can (perversely) decide to hug each other, just to defeat my prediction. Since much of the future involves human behavior, and since humans are often aware of what predictors are predicting, they will naturally adjust their behavior, based on these predictions. (This is what makes the stock market so hard to predict.)
(c) Warm, Poison Planet (Chapter 17) — Over 15 years ago I first read about the ocean chemocline (a gradient change in the deep layers of the oceans). Below the chemocline the ocean is anoxic (lacking oxygen) and only anaerobic bacteria live. Above the ocean chemocline exist aerobic life forms, which produce the majority of oxygen in our atmosphere (more than produced by trees on land). The anaerobic bacteria produce hydrogen sulfide (H2S), which is highly toxic to aerobic forms of life (i.e. toxic to all fish in the seas and to all animals on land). If the chemocline were ever to rise up to the surface and “burp” large quantities of hydrogen sulfide, such an event would kill all fish in the oceans (as it rose upward) and then kill all land animals (as it “burped” into the atmosphere).
It is speculated by some scientists studying super-massive extinctions that such an event could have caused the Permian-Triassic extinction, in which 96% of all marine species (and 70% of all vertebrate land species) went extinct, along with many species of insects. A component of this theory is that increased carbon dioxide (CO2) levels in the atmosphere heated up the oceans, which then could not hold as much oxygen. Increased CO2 also caused stagnation in ocean currents, leading to a rise in the chemocline. It is speculated that volcanic eruptions spewed out enough carbon dioxide to raise CO2 levels above 1000 to 3000 ppm, at which point large extinction events occurred.
Recent human activity has been rapidly raising CO2 levels, now reaching 400 ppm. If the rate of increase continues, CO2 could rise (in just a few centuries) to precipitate an H2S-belching massive extinction event.
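A back-of-the-envelope extrapolation makes the "few centuries" claim concrete. The assumed constant rise of ~2.5 ppm/year is my assumption (roughly the recent trend), not a figure from the book:

```python
# Back-of-the-envelope: years until atmospheric CO2 reaches a hypothesized
# danger threshold, assuming the recent linear trend simply continues.
current_ppm   = 400      # approximate level cited above
rate_per_year = 2.5      # assumed constant rise (ppm/year), a rough recent figure

for threshold in (1000, 3000):
    years = (threshold - current_ppm) / rate_per_year
    print(f"{threshold} ppm reached in ~{years:.0f} years")
```

Even this crude linear model lands the lower extinction threshold only a couple of centuries out, which is what makes the threat feel less remote than the book suggests.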
Berleant does a reasonably good job of explaining how such an event might come about but, unfortunately, I think he discounts the potential seriousness (and relative nearness, in time) of this existential threat to life on Earth. (By the way, current rises in CO2 are not due to volcanic eruptions. Scientists know this because the carbon isotope ratios of fossil-fuel CO2 and volcanic CO2 differ, so they can show that the CO2 rise is being caused by human activity. Anthropogenic CO2 emissions exceed volcanic emissions by over 100 times.)
(d) Milankovitch cycles (in Chapter 23) — These are astronomical cycles involving the Earth’s movements, which go a long way toward explaining recurring ice ages over the past hundreds of millions of years. I came upon these cycles maybe 8 years ago and was really fascinated by them. Berleant’s book is the only book I have encountered (outside of geology books) that includes a discussion of these cycles. In addition, he does a very good job of explaining (and visualizing) them.
(e) New Plant Paradigms (Chapter 25) — I enjoyed the discussion of how some plants reproduce via seeds while others reproduce via spores, and how plants could be genetically engineered in the future to reproduce via BOTH methods, along with the benefits/dangers that might come about. I also enjoyed the discussion of how plants could be genetically engineered to extract and concentrate various precious metals.
PARTS THAT DROVE ME CRAZY:
(A) Omphalos hypothesis (in Chapter 19) — For some (unknown) reason, Berleant spends a lot of time discussing this hypothesis (first named “omphalos” in 1857) and how, if accepted by both scientists and religious fundamentalists, it would completely eliminate the conflict between the two groups.
Omphalos is Greek for “navel” (belly button). Based on biblical analysis, religious fundamentalists believe that God created the Universe (along with Adam and Eve) about 6,000 years ago. On the other hand, scientists have very strong evidence that the known universe is around 13.8 billion years old and that humans (and all life on Earth) evolved from earlier life forms, e.g. humans (such as Adam and Eve) evolved from primates that were common ancestors to both humans and modern great apes.
The omphalos hypothesis is that, when God created Adam and Eve, he would have given them navels (along with all other human characteristics, such as tiny facial hairs on their foreheads, which all humans have). Thus, if scientists were to find Adam and Eve’s remains, they would (incorrectly) conclude that Adam and Eve must have had parents, when in fact they did not, since God created them as the FIRST man and woman. Berleant goes on and on about how, if both scientists and fundamentalists would simply accept the omphalos hypothesis, then all conflict between religion and science would be eliminated. For him, the mystery is not the hypothesis itself, but why it is either ignored or, if known, not accepted by both groups.
What this hypothesis might have to do with the future, I have no idea and, annoyingly, the term “omphalos” does not appear in the index, so I had to wander through the book until I re-found it, on p. 158.
Why does this hypothesis eliminate religion-science conflict? According to Berleant, if it were accepted by both groups, then the fundamentalists could keep on claiming that the Earth is only 6000 years old and the scientists could go along with this (absurdity) by stating that, although God created the universe just 6,000 years ago, He created it to APPEAR as though it is really 13.8 billion years old. Berleant does not seem to understand why both groups might reject this hypothesis and so I will supply reasons here.
From the fundamentalists’ point of view: If fundamentalists were to accept that God made the universe to APPEAR to conform to Evolution and the Big Bang, then they might as well accept both theories, thus undermining Biblical analysis. Therefore, fundamentalists are not going to embrace this hypothesis. Fundamentalists hate any science that disagrees with their biblical interpretations and they don’t understand how science works and progresses.
From the scientists’ point of view: For God to make the universe APPEAR as though it is billions of years old, He would have had to perform all the calculations involved in passing the universe through all of its prior periods of time. Consider the placement of tiny hairs on human foreheads. If you mutate just one gene, you get humans with thick, coarse hair in that area (e.g. consider the hairy “wolf boys” of Loreto, Mexico). If we presume that (along with creating Adam and Eve with navels) God created Adam and Eve with tiny hairs on their foreheads, then to geneticists it would appear that human body-hair placement evolved from genetic variations, over time, in common ancestors (thus adding support to evolution). Thus, for God to create Adam and Eve with human-like hairs, positioned appropriately, with varying levels of length and coarseness, over their entire bodies, God would have had to calculate the evolution of the genes involved in hair growth and placement.
For an infinite being, simulating the entire past development of the universe (from the Big Bang) could be done in an instant, but the fact is that this labor WOULD have to have been done, in order to create the appearance of a 13.8 billion-year past. Since the labor was done anyway, why pick an arbitrary point, i.e. 6,000 years ago, to have the universe created by God? Why not pick one minute ago? We could claim that God created the universe just one minute ago, but gave us our memories so that we have the illusion that we have already lived for many years.
The reason the 6,000 year time period has been chosen (by fundamentalist nut-jobs) is due to their absurd insistence that the Bible is some kind of infallible truth from God (as opposed to a set of weakly documented histories and stories intended to convey moral principles). It drove me crazy that the author spent so much time on this crazy hypothesis and yet, I did enjoy reading it and I’ve gotten a bit of perverse pleasure in refuting it.
(B) A very glib (“toss off”) analysis of population growth on Mercury (Chapter 14) and on Mars (Chapter 22) — These chapters both drove me crazy! On p. 105 Berleant assumes that a human population could grow on Mercury at a rate of 2% each year, and thus the planet could have 10 billion inhabitants in 934 years! He assumes that all that is needed is to properly direct sunlight into tunnels (to supply solar power to the interior of the planet). He imagines, in early stages, children swiveling these external mirrors using ropes, pulleys and hand cranks! This discussion is so glib that I could not decide whether or not it was “tongue in cheek”. Berleant does not consider where the necessary elements (e.g. water, an atmosphere) will come from for humans to be able to live inside Mercury (let alone other problems, e.g. solar flares).
I have the same complaint about his (brief) chapter on populating Mars. With a 1% growth rate he assumes there will be 10 billion people on Mars in a little over 2,000 years. He assumes that the surface of Mars will be dotted with domes — each containing atmospheres, water, food, etc. Again, the analysis is extremely glib — no difficulties are discussed (e.g. dust storms coating solar panels).
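The compound-growth arithmetic behind these headline figures is simple enough to check. In the sketch below, the seed populations are my own guesses, chosen only to roughly reproduce the book's timescales; the book does not state them in the portions I recall:

```python
import math

def years_to_reach(target, seed, annual_rate):
    """Years for a population growing at a fixed annual rate to hit target:
    solve seed * (1 + r)^n = target for n."""
    return math.log(target / seed) / math.log(1 + annual_rate)

# Mercury: 2%/year; a seed of ~100 settlers reaches 10 billion in ~930 years.
print(round(years_to_reach(10e9, 100, 0.02)))
# Mars: 1%/year; a seed of ~25 settlers reaches 10 billion in ~1990 years.
print(round(years_to_reach(10e9, 25, 0.01)))
```

Note what the glibness hides: the exponential math is trivial, and the entire difficulty lives in the unexamined assumption that the growth rate can be sustained at all.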
My own view (as a professor of Artificial Intelligence) is that, way before humans arrive on Mars, populations of intelligent robots should have first arrived and built an entire city, containing atmosphere-generating stations, agricultural stations, energy plants, water-production stations, tool-manufacturing plants, medical facilities, etc. Without such structures already in place (along with continuing robotic help) it will be too difficult for humans to arrive and maintain survivability on their own.
(C) Daylight Saving Time R.I.P. (Chapter 20) — The author is enamored with replacing Daylight Saving Time with a system of completely individual times that are accurate WRT the position of the Sun at each geographic location. As a result, people’s clocks would show slightly different times even when they are just miles away from each other. Berleant assumes that computers will take care of the geographically based computations needed to maintain these clocks, but he doesn’t consider the complete chaos in human social interactions that this would cause. For example, suppose I plan to meet with someone. We want to meet at 4pm but are unsure as to WHERE we are going to meet. Since each location will have its own time, the problem of scheduling becomes extremely difficult.
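For what it's worth, the per-location computation Berleant hands off to computers really is trivial: mean solar time shifts by 4 minutes per degree of longitude (1440 minutes per 360 degrees). The example longitudes below are just illustrations:

```python
def solar_offset_minutes(longitude_deg):
    """Offset of local mean solar time from UTC, in minutes.
    Earth rotates 360 degrees in 1440 minutes, i.e. 4 minutes per degree."""
    return longitude_deg * 4

# Two people 2.5 degrees of longitude apart already disagree by 10 minutes:
print(solar_offset_minutes(-74.0))   # roughly New York City's longitude
print(solar_offset_minutes(-71.5))   # a location 2.5 degrees to the east
```

The computation is not the problem; the problem is exactly the scheduling chaos described above, which no amount of computing power removes.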
This kind of “toss out” suggestion reminds me of economists who argue that, not only should each nation print its own currency, each private company should print its own currency (i.e. private currency). These suggestions (usually made by super-libertarians) do not consider such issues as loss of economies of scale and the costs of dealing with counterfeiting.
(D) Guesstimating when the universe might end (Chapter 31) — Berleant spends a lot of time discussing how one might guesstimate the chance of a largely unknown event occurring, based on how long it’s been without that event having yet occurred. I found this analysis very unconvincing because it does not rely on any sampling (of multiple prior occurrences of the event in question). Normally, when we calculate the probability of an event, we do the following: (1) assume that the future will be like the past and (2) we examine multiple prior occurrences of the event in question to determine the likelihood of its occurrence in the future.
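For readers curious about the formal version of this age-only reasoning: it resembles J. Richard Gott's "delta-t argument" (I am assuming Berleant's method is in this family; the book may present it differently). If you observe a phenomenon at a random moment within its lifetime, then with 95% confidence its remaining duration lies between 1/39 and 39 times its age:

```python
def gott_interval(age, confidence=0.95):
    """Gott's delta-t argument: if 'now' is a uniformly random point in a
    phenomenon's lifetime, its remaining duration lies, with the given
    confidence, between age*(1-c)/(1+c) and age*(1+c)/(1-c)."""
    c = confidence
    return age * (1 - c) / (1 + c), age * (1 + c) / (1 - c)

lo, hi = gott_interval(13.8e9)   # universe's age in years
print(f"remaining lifetime between {lo:.2e} and {hi:.2e} years (95% conf.)")
```

The interval spans three orders of magnitude, which illustrates my complaint: with no sampling of prior occurrences, the method can only produce bounds too loose to be useful.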
With the universe ending, cosmologists normally look at the expansion rate (the Hubble constant) and its acceleration. Currently, it appears that the expansion is accelerating (which, by the way, is completely counterintuitive and requires postulating some kind of “dark energy” anti-gravitational force), and so it appears that the universe will end in a “Big Freeze” (or, if dark energy strengthens over time, a “Big Rip”). Berleant’s analysis has little to do with any cosmological evidence.
(E) What-to-do sections (at end of each chapter) — I found the advice in these sections very shallow. I would recommend eliminating these sections.
(F) Overall organization — The book is organized into major segments. The segments are by time periods:
Next 100 Years: Chs 1 - 13
Next 1,000 Years: Chs 14 - 21
Next 10,000 Years: Ch 22
Next 100K Years: Chs 23 - 24
Next 1M Years: Ch 25
Next 10M Years: Chs 26 - 27
Next 100M Years: Chs 28 - 29
Farthest Reaches: Chs 30 - 31
My three complaints about this organization are: (1) The author has little to say about certain segments (e.g. only one chapter for the Next 10,000 Years). (2) Some of the chapters are contentless “filler”. For example, Ch. 24 (Robogenesis) is very short and assumes that intelligent robots have somehow lost the knowledge that they were created by humans and instead consult some kind of stupid, incomplete “robible” for understanding their distant past. This chapter should be deleted. (3) Many of the technological innovations that the author places in more distant futures (tens of thousands to millions of years away) will actually come about within the next few hundred years (especially certain genetic-engineering technologies).
(G) Glibness Throughout — The author is always doing some glib “toss out” without really considering any technical problems or relative cost/benefit analysis. For example, on p. 289 (Chapter 29, Next 100M Years) he tosses out the suggestion that humans modify their gut flora to include bacteria found in ruminants (e.g. cows, sheep), so that humans can directly digest cellulose. First, why is this in the 100M-year future and not in the next 100 years? Second, why aren’t any potential problems (involved in digesting cellulose along with ordinary human foods) considered or discussed? Finally, why bother doing it at all? Instead of facing the dangers and difficulties of altering human digestion, humans could simply engineer bacteria to digest cellulose in a vat and produce some healthy and/or tasty food that humans would then consume.
On p. 283, the author tosses out the suggestion that humans engineer ocean-floating plants to be edible to humans but inedible to other animals. How this nifty feat might be achieved is not discussed.
(H) Missed opportunities and misinformation — For example, in Chapter 9 the author discusses the production of artificial meat but does not explain what technologies are currently being used to produce such meat. He does not discuss how cultured animal cells are currently being used, or the issues in producing meat that has the texture of muscle tissue. A better discussion can be found by googling “lab-grown meat”.
Occasionally, there is an error of fact. For example, on p. 65 the author states that cows generate methane by farting. That is incorrect. 95% of methane comes from cows’ burps (i.e. out their mouths), not from farting.
In Chapter 7 (Genomes Get Cheap) the author discusses CRISPR technology briefly but really doesn’t explain it, and fails to discuss its use in creating CRISPR/Cas9 “gene drives” which, for example, could permanently eliminate the mosquito species that carry malaria, dengue and Zika virus. This current technology could radically alter ecosystems but is not discussed.
In Chapter 10 (Short-Term Action) the author rails against any short-term, shifting focus and claims that this approach is always inferior to a long-term goal focus. This claim is incorrect. One reason that focus on a long-term goal might be inferior is that a given goal may, over time, become irrelevant or incorrect (with outdated assumptions built into it). If prior goals are becoming irrelevant (e.g. due to technological innovations), then a short-term strategy that shifts focus over time to different goals will perform better over the long run. The subfield of Machine Learning, for example, offers cases in which a short-term reinforcement learning strategy beats a long-term supervised learning strategy. Consider learning over just 4 time steps, with the state at time step 4 being a goal state. In reinforcement learning, the system adjusts its behavior by comparing step 1 with 2, then step 2 with 3, and finally step 3 with 4. In contrast, the supervised learning strategy considers the state at future time step 4 and compares step 1 with 4, then step 2 with 4, followed by step 3 with 4. It can be shown that there are situations in which the short-term, reinforcement-based learning strategy does much better than the long-term, supervised learning strategy.
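The two comparison patterns can be shown side by side. This is my own minimal sketch of the update targets (TD(0)-style bootstrapping versus a final-outcome supervised target); the values are illustrative, not from the book:

```python
# Sketch of the two update patterns over 4 time steps.
# values[t] is the value estimate at step t; values[3] = 1.0 is the goal state.
values = [0.0, 0.2, 0.5, 1.0]
alpha = 0.5                          # learning rate

# Short-term (TD-style): each step's target is the NEXT step's estimate.
td = values[:]
for t in range(3):                   # compare step t with step t+1
    td[t] += alpha * (td[t + 1] - td[t])

# Long-term (supervised): each step's target is the FINAL outcome.
sup = values[:]
for t in range(3):                   # compare step t with step 4 (index 3)
    sup[t] += alpha * (sup[3] - sup[t])

print(td[:3])    # each estimate nudged toward its neighbor
print(sup[:3])   # each estimate nudged toward the final goal value
```

The short-term scheme only ever consults the adjacent step, so when the goal itself changes mid-learning, it adapts, whereas the supervised scheme keeps dragging every estimate toward a possibly stale endpoint.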
In Chapter 15 (Singularity), the author claims that “there will never be a moment when computers become smarter than humans and take over” (p. 130). This is false. Berleant’s argument is that intelligence is too difficult to measure. He is confusing the issue of recognizing exactly when computers become smarter or take over with the issue of whether a time will arise when it is clear to humans that they HAVE been taken over. The analysis here is very glib. For example, he uses the argument that it takes a large number of distributed resources (and people controlling those resources) to make any product (e.g. a pencil); thus, robots could never take over.
Let me give a simple counter-example. Assume that a factory (that produces robots) has started producing a new model of robots that do not obey their human masters. Humans decide to shut off electricity to this factory. Unfortunately, it turns out that robots are running the power plant that generates the electricity so that strategy fails. Next, humans attempt to issue an order to quit shipping needed materials to the factory. Unfortunately, robots are the ones driving the trucks that bring the materials to the factory, and so on.
In Chapter 30 (Accelerating Evolution) the author claims that evolution will always produce more and more biological species (and of greater complexity) on Earth. He does not consider that a phase shift might occur. For example, if Artificial Intelligence (AI) robots come about and take over, then robotic life forms might proliferate while biological life forms are driven to extinction. If robots remain silicon-based and run on electric-powered batteries, then they might have no need for oxygen in the atmosphere and no need for biologically produced products or biological entities.
(I) Unexplained terms — Berleant often assumes that the reader will be another geeky technophile (like him and myself) and so he often does not bother to explain some term, e.g. “nuclear winter” (p. 37), “Stuxnet computer worm” (p. 38), “slingshot” water recycler (p. 44). In these cases (with no footnote supplied) the reader will have to google it.
A final problem with Berleant’s book is that, in general, for any particular topic, there are other articles and other books that do a better job of discussing that topic. However, the author does bring together lots of different topics and this is part of the pleasure of reading his book.
Some people judge a book based on whether or not they agreed with it. In contrast, I judge a book based on whether or not I enjoyed reading it, even if parts of it drove me crazy. :-)
To answer a couple of reviews: yes, this IS non-fiction; no, it's not a novel or "sci fi" fiction story. HOWEVER, any futuristic, speculative book (even non-fiction) will by definition be a blend of sci fi and science. The plausibility of a futurist book depends on one of two things: 1. How much we can suspend our disbelief due to the pure enjoyability of the presentation and ideas, and 2. How well grounded the ideas are in hard science.
That blend makes or breaks a futurist author, and this promises to be a classic-- outstanding science, but with the courage to go WAY far forward to imaginative potentials in robotics, medicine, economic solutions, and yes, a human that looks a lot more Godlike/angel-like than animal. The conjectures ARE believable, but more importantly, thoughtful, fun and way entertaining. Whether you just need an inexpensive page turner/thought-provoking time filler or actually plan/create/discover for the future, you'll find the ideas innovative and relevant.
Thankfully, the author stays relatively scientific, instead of veering off into "save the green" ideologies (right or wrong) like the Day the Earth Stood Still reprise. I personally like my futurism without politics (we get enough of that on MSNBC and FOX). Yes, there are "path integral" solutions and choices that show better vs. worse paths to more glowing futures, but we all like to concentrate on things we think we can DO something about (missiles vs. meteors?) as opposed to politically spun "you won't get this wonderful future unless you vote for..." THANK YOU for not going there!
There are several books out right now that just "extend" genetic engineering, robotics, virtual reality, etc. and give some alternatives. This is much better than that-- the author has some truly surprising twists that exhibit extreme creativity, insight and genuine vision. For the price, or on Kindle, if you enjoyed the Matrix, Blade Runner, Minority Report, Avatar... or watching Sagan with his billions and billions-- you'll love it. If you're a sci fi writer, producer, artist... this one's a MUST, at any price.
Library Picks reviews only for the benefit of Amazon shoppers and has nothing to do with Amazon, the authors, manufacturers or publishers of the items we review. We always buy the items we review for the sake of objectivity, and although we search for gems, are not shy about trashing an item if it's a waste of time or money for Amazon shoppers. If the reviewer identifies herself, her job or her field, it is only as a point of reference to help you gauge the background and any biases.