Profile for Peter McCluskey > Reviews


Peter McCluskey's Profile

Customer Reviews: 250
Top Reviewer Ranking: 32,604
Helpful Votes: 2003


Reviews Written by
Peter McCluskey (Berkeley, CA USA)
(REAL NAME)

How the West Won: The Neglected Story of the Triumph of Modernity
by Rodney Stark
Edition: Hardcover
Price: $21.37
57 used & new from $17.37

1 of 3 people found the following review helpful
3.0 out of 5 stars Many interesting ideas, but not very rigorous, September 9, 2014
This book is a mostly entertaining defense of Christian and libertarian cultures' contribution to Western civilization's dominance.

He wants us to believe that the industrial revolution resulted from mostly steady progress starting with Greek city-states, interrupted only by the Roman empire.

He defends the Catholic church's record of helping scientific progress and denies that the Reformation was needed, although he suggests the Catholic church's reaction to the Reformation created harmful anti-capitalist sentiments.

His ideas resemble those in Fukuyama's The Origins of Political Order, yet there's little overlap between the content of the two books.

The early parts of the book have too many descriptions of battles and other killings whose relevance is unclear.

I was annoyed at how much space he devoted to attacking political correctness toward the end of the book.

In spite of those problems, he presents many interesting ideas. Some are fairly minor, such as changes in privacy due to the Little Ice Age triggering the invention of chimneys. Others provide potentially important insights into differences between religions, e.g. "many influential Muslim scholars have held that efforts to formulate natural laws are blasphemy because they would seem to deny Allah's freedom to act."

Alas, I can only give the book a half-hearted endorsement because I suspect many of his claims are poorly supported. E.g. he thinks increased visibility of child labor in the 1800s caused child labor laws via shocked sensibilities. Two alternatives that seem much more plausible to me are that the increased visibility made the laws feasible to enforce, and the increased concentration of employers into a separate class made them easier scapegoats.


Superintelligence: Paths, Dangers, Strategies
Price: $11.69

2 of 3 people found the following review helpful
5.0 out of 5 stars Valuable progress on important subjects, but still disappoints in places, August 1, 2014
This book is substantially more thoughtful than previous books on AGI risk, and substantially better organized than the previous thoughtful writings on the subject.

Bostrom's discussion of AGI takeoff speed is disappointingly philosophical. Many sources advise us to use the outside view to forecast how long something will take. We've got lots of weak evidence about the nature of intelligence, how it evolved, and about how various kinds of software improve, providing data for an outside view. Bostrom assigns a vague but implausibly high probability to AI going from human-equivalent to more powerful than humanity as a whole in days, with little thought of this kind of empirical check.

Bostrom's discussion of how takeoff speed influences the chance of a winner-take-all scenario makes it clear that disagreements over takeoff speed are pretty much the only cause of my disagreement with him over the likelihood of a winner-take-all outcome. Other writers aren't as clear about this. I suspect those who assign substantial probability to a winner-take-all outcome even if takeoff is slow will wish he'd analyzed this in more detail.

I'm less optimistic than Bostrom about monitoring AGI progress. He says "it would not be too difficult to identify most capable individuals with a long-standing interest in [AGI] research". AGI might require enough expertise for that to be true, but if AGI surprises me by only needing modest new insights, I'm concerned by the precedent of Tim Berners-Lee creating a global hypertext system while barely being noticed by the "leading" researchers in that field. Also, the large number of people who mistakenly think they've been making progress on AGI may obscure the competent ones.

The best parts of the book clarify many issues related to ensuring that an AGI does what we want.

He catalogs more approaches to controlling AGI than I had previously considered, including tripwires, oracles, and genies, and clearly explains many limits to what they can accomplish.

He briefly mentions the risk that the operator of an oracle AI would misuse it for her personal advantage. Why should we have less concern about the designers of other types of AGI giving them goals that favor the designers?

If an oracle AI can't produce a result that humans can analyze well enough to decide (without trusting the AI) that it's safe, why would we expect other approaches (e.g. humans writing the equivalent seed AI directly) to be more feasible?

He covers a wide range of ways we can imagine handling AI goals, including strange ideas such as telling an AGI to use the motivations of superintelligences created by other civilizations.

He does a very good job of discussing what values we should and shouldn't install in an AGI: the best decision theory plus a "do what I mean" dynamic, but not a complete morality.

I'm somewhat concerned by his use of "final goal" without careful explanation. People who anthropomorphise goals are likely to misread at least the first few references to "final goal" as working like a human goal, i.e. something that the AI might want to modify if it conflicted with other goals.

It's not clear how much of these chapters depends on a winner-take-all scenario. I get the impression that Bostrom doubts we can do much about the risks associated with scenarios where multiple AGIs become superhuman. This seems strange to me. I want people who write about AGI risks to devote more attention to whether we can influence whether multiple AGIs become a singleton, and how they treat lesser intelligences. Designing AGI to reflect values we want seems almost as desirable in scenarios with multiple AGIs as in the winner-take-all scenario (I'm unsure what Bostrom thinks about that). In a world with many AGIs with unfriendly values, what can humans do to bargain for a habitable niche?

He has a chapter on worlds dominated by whole brain emulations, probably inspired by Robin Hanson's writings but with more focus on evaluating risks than on predicting the most probable outcomes. Since it looks like we should still expect an em-dominated world to be replaced at some point by AGI(s) that are designed more cleanly and able to self-improve faster, this isn't really an alternative to the scenarios discussed in the rest of the book.

This book represents progress toward clear thinking about AGI risks, but much more work still needs to be done.


The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos
by Brian Greene
Edition: Paperback
Price: $11.04
109 used & new from $5.95

1 of 1 people found the following review helpful
4.0 out of 5 stars Entertaining, July 8, 2014
Verified Purchase
This book has a lot of overlap with Tegmark's Our Mathematical Universe.

Greene uses less provocative language than Tegmark, but makes up for that by suggesting 5 more multiverses than Tegmark (3 of which depend on string theory for credibility, and 2 that Tegmark probably wouldn't label as multiverses).

I thought about making some snide remarks about string theory being less real than the other multiverses. Then I noticed that what Greene calls the ultimate multiverse (all possible universes) implies that string theory universes (or at least computable approximations) are real regardless of whether we live in one.

Like Tegmark, Greene convinces me that inflation which lasts for infinite time implies infinite space and infinite copies of earth, but fails to convince me that he has a strong reason for assuming infinite time.

The main text is mostly easy to read. Don't overlook the more technical notes at the end - the one proposing an experiment that would distinguish the Many Worlds interpretation of quantum mechanics from the Copenhagen interpretation is one of the best parts of the book.


The Rule of the Clan: What an Ancient Form of Social Organization Reveals About the Future of Individual Freedom
Offered by Macmillan
Price: $4.99

4.0 out of 5 stars Good historical insights, poor comments about modern policy, June 16, 2014
Verified Purchase
This book does a good job of explaining how barbaric practices such as feuds and honor killings are integral parts of clan-based systems of dispute resolution, and can't safely be suppressed without first developing something like the modern rule of law to remove the motives that perpetuate them.

He has a coherent theory of why societies with no effective courts and police need to have kin-based groups be accountable for the actions of their members, which precludes some of the individual rights that we take for granted.

He does a poor job of explaining how this is relevant to modern government. He writes as if anyone who wants governments to exert less power wants to weaken the rule of law and the ability of governments to stop violent disputes. (His comments about modern government are separate enough that they don't detract much from the rest of the book.)

He implies that modern rule of law and rule by clans are the only stable possibilities. He convinced me that it would be hard to create good alternatives to those two options, but not that alternatives are impossible.

To better understand how modern individualism replaced clan-based society, read Fukuyama's The Origins of Political Order together with this book.


Our Mathematical Universe: My Quest for the Ultimate Nature of Reality
Offered by Random House LLC
Price: $11.40

2 of 4 people found the following review helpful
4.0 out of 5 stars Entertaining, occasionally insightful, April 14, 2014
Verified Purchase
His most important claim is the radical Platonist view that all well-defined mathematical structures exist, therefore most physics is the study of which of those we inhabit. His arguments are more tempting than any others I've seen for this view, but I'm left with plenty of doubt.

He points to ways that we can imagine this hypothesis being testable, such as via the fine-tuning of fundamental constants. But he doesn't provide a good reason to think that those tests will distinguish his hypothesis from other popular approaches, as it's easy to imagine that we'll never find situations where they make different predictions.

The most valuable parts of the book involve the claim that the multiverse is spatially infinite. He mostly talks as if that's likely to be true, but his explanations caused me to lower my probability estimate for that claim.

He gets that infinity by claiming that inflation continues in places for infinite time, and then claiming there are reference frames for which that infinite time is located in a spatial rather than a time direction. I have a vague intuition why that second step might be right (but I'm fairly sure he left something important out of the explanation).

For the infinite time part, I'm stuck with relying on argument from authority, without much evidence that the relevant authorities have much confidence in the claim.

Toward the end of the book he mentions reasons to doubt infinities in physics theories - it's easy to find examples where we model substances such as air as infinitely divisible, when we know that at some levels of detail atomic theory is more accurate. The eternal inflation theory depends on an infinitely expandable space which we can easily imagine is only an approximation. Plus, when physicists explicitly ask whether the universe will last forever, they don't seem very confident. I'm also tempted to say that the measure problem (i.e. the absence of a way to say some events are more likely than others if they all happen an infinite number of times) is a reason to doubt infinities, but I don't have much confidence that reality obeys my desire for it to be comprehensible.

I'm disappointed by his claim that we can get good evidence that we're not Boltzmann brains. He wants us to test our memories, because if I am a Boltzmann brain I'll probably have a bunch of absurd memories. But suppose I remember having done that test in the past few minutes. The Boltzmann brain hypothesis suggests it's much more likely for me to have randomly acquired the memory of having passed the test than to have actually done the test. Maybe there's a way to turn Tegmark's argument into something rigorous, but it isn't obvious.

He gives a surprising argument that the differences between the Everett and Copenhagen interpretations of quantum mechanics don't matter much, because unrelated reasons involving multiverses lead us to expect results comparable to the Everett interpretation even if the Copenhagen interpretation is correct.

It's a bit hard to figure out what the book's target audience is - he hides the few equations he uses in footnotes to make it look easy for laymen to follow, but he also discusses hard concepts such as universes with more than one time dimension with little attempt to prepare laymen for them.

The first few chapters are intended for readers with little knowledge of physics. One theme is a historical trend which he mostly describes as expanding our estimate of how big reality is. But the evidence he provides only tells us that the lower bounds that people give keep increasing. Looking at the upper bound (typically infinity) makes that trend look less interesting.

The book has many interesting digressions such as a description of how to build Douglas Adams' infinite improbability drive.


The Great Degeneration: How Institutions Decay and Economies Die
by Niall Ferguson
Edition: Hardcover
Price: $15.43
126 used & new from $0.53

5 of 6 people found the following review helpful
2.0 out of 5 stars Read Reinhart and Rogoff instead, March 2, 2014
Verified Purchase
Read (or skim) Reinhart and Rogoff's book This Time is Different instead. The Great Degeneration contains little value beyond a summary of that book.

Aside from that summary, the part that comes closest to analyzing US decay is a World Bank report on governance quality from 1996 to 2011, which shows the US in decline from 2000 to 2009. He makes some half-hearted attempts to argue for a longer trend using anecdotes that don't really say much.

Large parts of the book are just standard ideological fluff.


Our Final Invention: Artificial Intelligence and the End of the Human Era
Offered by Macmillan
Price: $9.99

4 of 6 people found the following review helpful
4.0 out of 5 stars A light-weight introduction to AI risk, February 14, 2014
Verified Purchase
This book describes the risk that artificial general intelligence will cause human extinction, presenting the ideas propounded by Eliezer Yudkowsky in a slightly more organized but less rigorous style than Eliezer has.

Barrat is insufficiently curious about why many people who claim to be AI experts disagree, so he'll do little to change the minds of people who already have opinions on the subject.

He dismisses critics as unable or unwilling to think clearly about the arguments. My experience suggests that while there's usually some argument a given critic hasn't paid much attention to, that's often because the critic has thoughtfully rejected some other step in Eliezer's reasoning and concluded that the ignored argument wouldn't change their conclusions.

The weakest claim in the book is that an AGI might become superintelligent in hours. A large fraction of people who have worked on AGI (e.g. Eric Baum, author of What is Thought?) dismiss this as too improbable to be worth much attention, and Barrat doesn't offer them any reason to reconsider. The rapid takeoff scenarios influence how plausible it is that the first AGI will take over the world. Barrat seems only interested in talking to readers who can be convinced we're almost certainly doomed if we don't build the first AGI right. Why not also pay some attention to the more complex situation where an AGI takes years to become superhuman? Should people who think there's a 1% chance of the first AGI conquering the world worry about that risk?

Some people don't approve of trying to build an immutable utility function into an AGI, often pointing to changes in human goals without clearly analyzing whether those are subgoals that are being altered to achieve a stable supergoal/utility function. Barrat mentions one such person, but does little to analyze this disagreement.

Would an AGI that has been designed without careful attention to safety blindly follow a narrow interpretation of its programmed goal(s), or would it (after achieving superintelligence) figure out and follow the intentions of its authors? People seem to jump to whatever conclusion supports their attitude toward AGI risk without much analysis of why others disagree, and Barrat follows that pattern.

I can imagine either possibility. If the easiest way to encode a goal system in an AGI is something like "output chess moves which according to the rules of chess will result in checkmate", then turning the planet into computronium might help satisfy that goal.

An apparently harder approach would have the AGI consult a human arbiter to figure out whether it wins the chess game - "human arbiter" isn't easy to encode in typical software. But AGI wouldn't be typical software. It's not obviously wrong to believe that software smart enough to take over the world would be smart enough to handle hard concepts like that. I'd like to see someone pin down people who think this is the obvious result and get them to explain how they imagine the AGI handling the goal before it reaches human-level intelligence.

He mentions some past events that might provide analogies for how AGI will interact with us, but I'm disappointed by how little thought he puts into this.

His examples of contact between technologically advanced beings and less advanced ones all refer to Europeans contacting Native Americans. I'd like to have seen a wider variety of analogies, e.g.:

* Japan's contact with the west after centuries of isolation

* the interaction between neanderthals and humans

* the contact that resulted in mitochondria becoming part of our cells

He quotes Vinge saying an AGI 'would not be humankind's "tool" - any more than humans are the tools of rabbits or robins or chimpanzees.' I'd say that humans are sometimes the tools of human DNA, which raises more complex questions of how well the DNA's interests are served.

The book contains many questionable digressions which seem to be designed to entertain.

He claims Google must have an AGI project in spite of denials by Google's Peter Norvig (this was before it bought DeepMind). But the evidence he uses to back up this claim is that Google thinks something like AGI would be desirable. The obvious conclusion would be that Google did not then think it had the skill to usefully work on AGI, which would be a sensible position given the history of AGI.

He thinks there's something paradoxical about Eliezer Yudkowsky wanting to keep some information about himself private while putting lots of personal information on the web. The specific examples Barrat gives strongly suggest that Eliezer doesn't value the standard notion of privacy, but wants to limit people's ability to distract him. Barrat also says Eliezer "gave up reading for fun several years ago", which will surprise those who see him frequently mention works of fiction in his Author's Notes on hpmor.com.

All this makes me wonder who the book's target audience is. It seems to be someone less sophisticated than a person who could write an AGI.


Self Comes to Mind: Constructing the Conscious Brain
by Antonio R. Damasio
Edition: Paperback
Price: $14.36
75 used & new from $7.61

1 of 1 people found the following review helpful
3.0 out of 5 stars Mostly Unenlightening, November 20, 2013
Verified Purchase
This book describes many aspects of human minds in ways that aren't wrong, but the parts that seem novel don't have important implications.

He devotes a sizable part of the book to describing how memory works, but I don't understand memory any better than I did before.

His perspective often seems slightly confusing or wrong. The clearest example I noticed was his belief (in the context of pre-historic humans) that "it is inconceivable that concern [as expressed in special treatment of the dead] or interpretation could arise in the absence of a robust self". There may be good reasons for considering it improbable that humans developed burial rituals before developing Damasio's notion of self, but anyone who is familiar with Julian Jaynes (as Damasio is) ought to be able to imagine that (and stranger ideas).

He pays a lot of attention to the location in the brain of various mental processes (e.g. his somewhat surprising claim that the brainstem plays an important role in consciousness), but rarely suggests how we could draw any inferences from that about how normal minds behave.


Reinventing Philanthropy: A Framework for More Effective Giving
by Eric Friedman
Edition: Hardcover
Price: $23.11
59 used & new from $6.10

4.0 out of 5 stars A step in the right direction, September 25, 2013
This book will spread the ideas behind effective altruism to a modestly wider set of donors than other efforts I'm aware of. It understates how much the effective altruism movement differs from traditional charity and how hard it is to implement, but given the shortage of books on this subject any addition is valuable. It focuses on how to ask good questions about philanthropy rather than attempting to find good answers.

The author provides a list of objections he's heard to maximizing the effectiveness of charity, a majority of which seem to boil down to the worry that "diversification of nonprofit goals would be drastically reduced", leading to many existing benefits being canceled. He tries to argue that people have extremely diverse goals which would result in an extremely diverse set of charities. He later argues that the subjectivity of determining the effectiveness of charities will maintain that diversity. Neither of these arguments seems remotely plausible. When individuals explicitly compare how they value their own pleasure, life expectancy, dignity, freedom, etc., I don't see more than a handful of different goals. How could it be much different for recipients of charity? There exist charities whose value can't easily be compared to GiveWell's recommended ones (stopping nuclear war?), but they seem to get a small fraction of the money that goes to charities that GiveWell has decent reasons for rejecting.

So I conclude that widespread adoption of effective giving would drastically reduce the diversity of charitable goals (limited mostly by the fact that spending large amounts on a single goal is subject to diminishing returns). The only plausible explanation I see for peoples' discomfort with that is that people are attached to beliefs which are inconsistent with treating all potential recipients as equally deserving.

He's reluctant to criticize "well-intentioned" donors who use traditional emotional reasoning. I prefer to think of them as normally-intentioned (i.e. acting on a mix of selfish and altruistic motives).

I still have some concerns that asking average donors to objectively maximize the impact of their donations would backfire by reducing the emotional benefit they get from giving more than it increases the effectiveness of their giving. But since I don't expect more than a few percent of the population to be analytical enough to accept the arguments in this book, this doesn't seem like an important concern.

He tries to argue that effective giving can increase the emotional benefit we get from giving. This mostly seems to depend on getting more warm fuzzy feelings from helping more people. But as far as I can tell, those feelings are very insensitive to the number of people helped. I haven't noticed any improved feelings as I alter my giving to increase its impact, and the literature on scope insensitivity suggests that's typical.

He wants donors to treat potentially deserving recipients as equally deserving regardless of how far away they are, but he fails to include people who are distant in time. He might have good reasons for not wanting to donate to people of the distant future, but not analyzing those reasons risks making the same kind of mistake he criticizes donors for making about distant continents.


War in Human Civilization
by Azar Gat
Edition: Paperback
Price: $24.46
55 used & new from $9.74

5 of 5 people found the following review helpful
3.0 out of 5 stars A few good sections hidden in a long, tedious book, August 31, 2013
This ambitious book has some valuable insights into what influences the frequency of wars, but is sufficiently long-winded that I wasn't willing to read much more than half of it (I skipped part 2).

Part 1 describes the evolutionary pressures which lead to war, most of which ought to be fairly obvious.

One point that seemed new to me in that section was the observation that for much of the human past, group selection was almost equivalent to kin selection because tribes were fairly close kin.

Part 3 describes how the industrial revolution altered the nature of war.

The best section of the book contains strong criticisms of the belief that democracy makes war unlikely (at least with other democracies).

Part of the reason for the myth that democracies don't fight each other was people relying on a database of wars that only covers the period starting in 1815. That helped people overlook many wars between democracies in ancient Greece, the 1812 war between the US and Britain, etc.

A more tenable claim is that something associated with modern democracies is deterring war.

But in spite of the number of countries involved and the number of years in which we can imagine some of them fighting, there's little reason to consider the available evidence for the past century to be much more than one data point. There was a good deal of cultural homogeneity across democracies in that period. And those democracies were part of an alliance that was unified by the threat of communism.

He suggests some alternate explanations for modern peace that are only loosely connected to democracy, including:

* increased wealth makes people more risk averse
* war has become less profitable
* young males are a smaller fraction of the population
* increased availability of sex made men less desperate to get sex by raping the enemy ("Make love, not war" wasn't just a slogan)

He has an interesting idea about why trade wasn't very effective at preventing wars between wealthy nations up to 1945 - there was an expectation that the world would be partitioned into a few large empires with free trade within but limited trade between empires. Being part of a large empire was expected to imply greater wealth than a small empire. After 1945, the expectation that trade would be global meant that small nations appeared viable.

Another potentially important historical change was that before the 1500s, power was an effective way of gaining wealth, but wealth was not very effective at generating power. After the 1500s, wealth became important to being powerful, and military power became less effective at acquiring wealth.

