300 of 322 people found the following review helpful
This review is from: The Predictioneer's Game: Using the Logic of Brazen Self-Interest to See and Shape the Future (Hardcover)
Vine Customer Review of Free Product
This review has been edited to correct some misstatements pointed out by the author. I was working from a prepublication version that did not have all the end-notes, nor a reference to the website. Moreover, the author's comment to this review adds some useful material. On the basis of that, I would raise my rating from three stars to three and a half if that were allowed, but my basic opinion has not changed.
This book is likely to teach you some fascinating and useful material, but I can't recommend it wholeheartedly because it may drive you crazy as well. The basic idea is simple. Experts know a lot, but are bad at making predictions about human affairs. Simple models based on quantitative game theory are more accurate, and even when they're incorrect they expand your thinking in useful ways. Moreover, these models allow you to simulate alternatives and generate outcomes as good as or better than what the best human strategists can achieve.
To evaluate this book, it's useful to separate that claim into two parts. I'm a quant, and therefore I think it's pretty well established that you make better decisions by asking experts what they know and letting a computer trace the logical implications than by following the experts' recommendations. I also accept that simple quantitative models do a remarkably good job, and are only rarely surpassed by complex qualitative analysis. If you don't accept those positions, there's no point in even opening this book. So to me, and probably to you if you're still reading, asking experts simple questions with answers on a scale from 0 to 100 and combining the results in a reasonable way is an excellent approach to most decisions. Call this the basic quant position.
The author goes further than the basic quant position in three respects. First, he makes much stronger claims for the superiority of his approach. Second, he advocates specific game theory analysis that involves complex modeling, as opposed to simple rules such as guessing that the outcome will be somewhere near the average opinion weighted by salience and power. He doesn't justify his methods as practical shortcuts that seem to work; he repeatedly claims that they are backed by science and logic, unlike the alternatives. Finally, he goes beyond prediction to use his model to engineer outcomes. This is quant on steroids.
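To make the "basic quant" baseline concrete, here is a minimal sketch of the simple rule mentioned above: each expert states a preferred outcome on a 0-100 scale, along with a power (influence) and a salience (how much they care), and the naive forecast is the position average weighted by power times salience. All the numbers and the stakeholder labels below are hypothetical, purely for illustration.

```python
def weighted_forecast(players):
    """players: list of (position, power, salience) tuples, each on a 0-100 scale.

    Returns the power-and-salience-weighted average position.
    """
    total_weight = sum(power * salience for _, power, salience in players)
    if total_weight == 0:
        raise ValueError("at least one player must have nonzero weight")
    return sum(pos * power * salience
               for pos, power, salience in players) / total_weight

# Three hypothetical stakeholders in some negotiation:
players = [
    (90, 80, 100),  # hardliner: extreme position, powerful, cares intensely
    (40, 60, 50),   # moderate: middle position, moderately engaged
    (10, 100, 20),  # great power: opposed, but the issue barely registers for it
]
print(round(weighted_forecast(players), 1))  # -> 66.2
```

The forecast lands near the hardliner because power and salience multiply: the strongest player's indifference discounts its extreme position.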
It made me recall that John Nash, the most important game theorist with respect to this kind of work, maintained he was Emperor of Antarctica. To paraphrase Dizzy Dean, it ain't megalomania if you can back it up. This is where it starts to get irritating. In the preface, before you get to page 1, he writes, "I've been predicting future events for three decades, often in print before the fact, and mostly getting them right. . . . I have made hundreds, even thousands, of predictions--a great many of them in print, ready to be scrutinized by any naysayer." This guy makes extraordinarily bold claims about his quantitative prediction ability, and he doesn't keep track of his record? He can't even recall within a factor of ten how many predictions he has made? Who decided he was "mostly right"? And why is anyone who wants to know the record a "naysayer"? Even if you did all the work, unearthed all the printed predictions (he references only a few in the book) and found he was zero for 200, he could just say he got thousands of other ones right, ones you didn't find. This sounds more like a Nostradamus defender than the "science" he's always claiming. To bookend that frustration: by the end of the book, despite frequent promises, he has not revealed his model! He does have a version online that allows you to play around with it, and has more details on how it works, but still no clear, top-down description. In the book you get hints and bits and pieces, but no clear explanation of how he arrives at his predictions. And this is not the only broken promise; there are frequent comments that he will "go into this further" or "provide more details" later, yet I can't find one example where he follows through. On the other hand, there are a few facts that get repeated far too many times. Those two things would be enough for a lot of people to conclude he's a fraud. But there's an awful lot of good, clear, insightful analysis in between.
He gives examples of political and diplomatic predictions he has made, discussing the inputs and basic form of the analysis. There are accounts of corporate and legal struggles where he maneuvered to an outcome favorable to his clients. He also applies the methods to history, to ask what might have happened. This is all fascinating stuff, and the data and conclusions speak for themselves. They show plausibly that this approach could work, that it is practical to implement, and that it leads to conclusions that are surprising but can be shown to be logical. Without a lot more detail these stories don't prove the model works, but they represent a coherent claim that it does. However, this brings us to another problem. Some of the accounts are not credible. An account of how he maneuvered a fifth-choice candidate into a CEO job requires us to believe that the board of directors split into five groups of three that agreed among themselves on the exact preference order for the five candidates, that these generated a cyclic preference order including all five candidates (a mathematically possible but unlikely result previously observed only in game theory textbooks), that all this information was known with certainty beforehand, and that none of the board members was smart enough to consider coalition-building or voting a second choice when it was clear a first-choice vote would be wasted. Moreover, the Rube Goldberg scheme that worked seems far less promising than simple politicking to either build a coalition or change a slight preference. This is the least credible account (unless you include the million dollars he was offered by Libya to engineer the removal of Anwar Sadat from power in Egypt, or the 10% of Zaire dictator Mobutu's external wealth offered to keep him in power), but none of the stories includes basic information to allow fact-checking.
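The cyclic group preference I find implausible above is the classic Condorcet paradox, and it's easy to see in miniature. The ballots below are a textbook three-faction, three-candidate illustration of the phenomenon, not the actual board's preferences (which the book does not disclose):

```python
# Each faction votes as a bloc; preferences are listed best-to-worst.
# These are the standard textbook cycle-producing ballots, purely illustrative.
factions = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of factions ranks candidate x above candidate y."""
    wins = sum(1 for pref in factions if pref.index(x) < pref.index(y))
    return wins > len(factions) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, yet C beats A,
# so there is no stable "group favorite" -- the group preference is non-transitive.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

With three candidates a cycle already needs this kind of contrived symmetry; a clean five-candidate cycle across five blocs of three directors, known with certainty in advance, is what strains belief.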
In some cases the need for confidentiality is clear, but why no names of the government officials who hired him, or the partners at Arthur Andersen? Why is he protecting the agents of Libya and Mobutu? Does the brokerage firm that bought his advice in 1992 still insist on remaining anonymous? (And does it even still exist?) Some discretion is understandable, but this book reminds me of the fictional spies who have all labels removed from their clothing and possessions. The final irritation is that only one account of a missed prediction is given, and it is explained implausibly. The author predicted Hillary Clinton's healthcare plan would pass in 1994. He claims that the outcome was changed by the Rostenkowski scandal. That's hard to believe, since Rostenkowski resigned from his leadership position before the first bill came to Congress. Rostenkowski was not a strong supporter of healthcare reform. There's no doubt that his political skills would have been useful, had he chosen to push the plan, and the scandal did weaken the Democrats in general, but many other things happened that seemed at least as important. So either the prediction was dependent on lots of unpredictable events, and therefore should have been given as a probability distribution instead of a point estimate, or Rostenkowski was special, in which case the prediction should have been that healthcare would pass or fail based on how Rostenkowski did. And why was the prediction not updated as the scandal worsened? I know this is a long review, but I've only covered some of the bigger irritations. If you're an easy-going, tolerant sort who wants to learn some important practical and theoretical aspects of prediction, by all means read this book. If not, you might want a blood pressure check before you attempt it.
Initial post:
Oct 2, 2009 11:19:26 AM PDT
Robert Horowitz says:
Wow. Incredibly insightful and thoughtful review. I'll buy your book when you write it!
Posted on
Oct 2, 2009 5:19:44 PM PDT
Abacus says:
Aaron, that's an excellent review. I am halfway through the book, and so far I love it for the positive reasons you mentioned. At first, I thought his self-proclaimed track record of 90% accuracy in forecasting the future was way too good to be true (in all fairness, he states that a CIA study confirms this too). I am kind of a quant too. And in the quant world you typically deal with and accept uncertainty (a probabilistic distribution of outcomes). Instead, he comes up with a single outcome. How could he do that? But the more I read, the more he beats down my skepticism. The only thing is, I wish I could get an expert mathematician to confirm his model. His model algorithm is proprietary (otherwise he would not make a living for long peddling his unique competitive advantage). Thus, there are no publicly available peer-reviewed studies of his model. This leaves you wanting confirmation.
In reply to an earlier post on
Oct 3, 2009 7:26:32 AM PDT
Aaron C. Brown says:
Thanks, Robert, for the kind words.
Yes, the CIA study is some comfort, but (a) no citation is given, so you can't check it, and (b) the claim is that the forecasts he made for the CIA were twice as accurate as their experts'. That second part is open to a lot of interpretation, and I have heard much stronger claims for Intrade and other prediction markets. So I'd like to see the study, or at least a longer summary. He has published some models, so there is some basis to judge. But if all he has is a proprietary black box, putting its results in a book is an advertisement, not a non-fiction book. He should tell you how to do what he claims. He doesn't have to give away all the proprietary tweaks that make his consulting services valuable, but it's not enough to hint around. How about a website that implements a simplified version of the model and details all the computations? What bothers me is that the missing component is all the game theory. You don't need game theory to write down people's desires and powers; the theory helps you predict coalition building, strategy shifts and things like that. That means you need some kind of model for the decision-making process. The entire point of his method is game theory, yet the book contains only two toy examples (the prisoner's dilemma and non-transitive voting), both well-known standards.
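For readers who haven't seen it, the prisoner's dilemma mentioned above takes only a few lines to work through. The payoffs below are the usual illustrative years-in-prison numbers (lower is better), not anything from the book:

```python
# Standard prisoner's dilemma payoffs: (row_move, col_move) -> (row_years, col_years).
# The specific numbers are the common textbook ones, chosen for illustration.
payoffs = {
    ("quiet", "quiet"): (1, 1),
    ("quiet", "confess"): (10, 0),
    ("confess", "quiet"): (0, 10),
    ("confess", "confess"): (5, 5),
}
moves = ["quiet", "confess"]

def best_response(opponent_move, player):
    """The move minimizing this player's prison years against a fixed opponent move."""
    if player == 0:  # row player
        return min(moves, key=lambda m: payoffs[(m, opponent_move)][0])
    return min(moves, key=lambda m: payoffs[(opponent_move, m)][1])

# Whatever the other prisoner does, confessing is strictly better, so
# (confess, confess) is the unique equilibrium -- even though both players
# would prefer the (quiet, quiet) outcome.
assert best_response("quiet", 0) == "confess"
assert best_response("confess", 0) == "confess"
```

It's a fine warm-up, but as a standalone example it says nothing about the multi-player coalition dynamics the book's predictions supposedly rest on, which is the gap I'm complaining about.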
In reply to an earlier post on
Oct 3, 2009 10:26:41 AM PDT
Abacus says:
Aaron, well put. I agree with you on all counts.
Posted on
Oct 4, 2009 7:14:10 AM PDT
pmpncali says:
Thank you for such a great review. I've been interested in learning about game theory for several months now, and after seeing De Mesquita on the Daily Show, thought, ok, the time is now. Your points, so well presented, will help me delve into this somewhat intimidating topic with eyes wider open. I'm with Robert Horowitz, and plan to check out your column and books. Thank you.
Posted on
Oct 4, 2009 10:08:37 AM PDT
Arlene Bueno De Mesquita says:
Aaron, I generally try to ignore reviewers but you have obviously tried to do a careful and thoughtful review and yet you make fundamentally incorrect factual statements that should not be left unanswered as they are likely to lead people to wrong inferences. You say that I do not provide a citation for the CIA evaluation. This just isn't so. I report the CIA evaluation on p. xix, "According to a declassified CIA assessment, the predictions for which I've been responsible have a 90 percent accuracy rate.6" Endnote 6 (see p. 236) contains the citation: Stanley Feder, "Factions and Policon: New Ways to Analyze Politics," in H. Bradford Westerfield, ed., Inside CIA's Private World: Declassified Articles from the Agency's Internal Journal, 1955-1992 (New Haven: Yale University Press, 1995) and James L. Ray and Bruce M. Russett, "The Future as Arbiter of Theoretical Controversies: Predictions, Explanations and the End of the Cold War," British Journal of Political Science 26, no. 4 (October 1996): 441-70. These are the CIA and academic evaluations respectively (there is a more recent article by Feder in the Annual Review of Political Science, 2002, that even more strongly documents the model's effectiveness). Feder is cited again in chapter 4 when it is pertinent.
On the allegedly proprietary formulas -- they are not proprietary (only the implementation software is, because I do not own the commercial rights to it; a firm with which I am no longer associated owns those rights). Endnote 1 to chapter 5, p. 238, provides the equations necessary to create a program that (other than arbitrary assumptions of mine to make things doable in a practical way -- others can make their own arbitrary assumptions) is essentially the same as the model I used to produce most of the results reported in the book. The latter chapters are based on a new model of mine whose math is laid out in a paper I gave at the 2009 meetings of the International Studies Association and that can be found online. Several people and organizations have created their own software based on articles and books I have published on my models. A more detailed list of references can be found on the web page I created for the book (www.predictioneersgame.com) -- I provide a long list of citations there. Of course, in a book like The Predictioneer's Game one does not go into the technical details -- that is for the academic audience, not a general readership.
Finally, however skeptical a reader may be of the examples from my consulting life, they are all accurate and based on real experiences. And yes, it is true that the only wrong prediction I discuss at length is the 1993-1994 health care prediction. When 90 percent are right, there aren't that many interesting wrong predictions. But of course there are others -- I predicted that China would depeg the Hong Kong dollar and they didn't (see my 1996 book with David Newman and Alvin Rabushka, Red Flag Over Hong Kong), etc. But the point is that 90 percent are right. Hundreds are published in peer-reviewed journals -- see the book's web page for citations, go read them and do your own assessment of accuracy.
Sorry to take your time but I do hope at least you will correct the claim that I do not provide citations to the CIA evaluation and to the essential technical material. And please do check out the many more publications cited at www.predictioneersgame.com.
In reply to an earlier post on
Oct 4, 2009 10:43:45 AM PDT
Aaron C. Brown says:
I apologize for the misstatements, I will correct them. I should have noted that I had a prepublication version of the book that apparently did not have all the endnotes (nor did it have an index) as I do not see the ones you mentioned.
For all of this, I am still irritated at the attitude that the reader should find and evaluate hundreds of predictions, as if the burden of proof falls on people who disagree with you. Just finding and reading them is a huge job. To evaluate each one you have to figure out when it was made (the publication dates are often long after the prediction dates, after the outcome was known), what the consensus was or what a naive model would have predicted at the time, how specific the prediction was, and how closely it matched the outcome. Why not put a table in the appendix, or online, listing each published prediction with date made, date published, citation, a brief statement of the prediction, and a brief statement of the outcome? Then people could evaluate without looking at every prediction. They could pick a few they know something about and see what you consider correct, and how surprising and specific your predictions are. If they are skeptical, they could refer to the citation. You must have this information; otherwise you couldn't know you were right 90% of the time. It's not giving anything away, since everything is already published. It makes it possible for someone who is neither a born believer nor a naysayer to audit your claims and form a reasonable opinion with a reasonable amount of work. In my field, finance, there are thousands of people who collectively charge tens of billions of dollars per year to make predictions that can easily be shown to be less accurate than random choices. All of them sound convincing and have degrees and credentials to back up their expertise. They write books and articles, and discuss things on CNBC every day. Most of them would probably claim to be "mostly right," or even right 90% of the time, if the SEC didn't discourage that kind of claim. This has been going on for decades, and their numbers and compensation continue to grow each year. Then there are astrologers, newspaper columnists, politicians and gambling advisors.
In such a world, I think any predictioneer should publish a well-defined track record to be taken seriously.
Posted on
Oct 17, 2009 2:41:53 PM PDT
Michael Palmer says:
I have read roughly one-third of this book and am every bit as disappointed and frustrated as Mr. Brown appears to be. I am not really sure why his agent suggested he write it (except to make money). Perhaps it contains no math because publishers believe that even one formula appearing in print drives down sales considerably.
The basic components of the model are insufficiently explained, in my view. I get the sense that he was concerned about giving away too much information, worried about making it possible for a potential competitor to reverse engineer what he created. Those with access to scholarly journals may want to check out his 1990 article in International Organization, which lays out some basic formulas (though not the proprietary algorithm). It seems to me that the most challenging aspect of his approach is correctly identifying who has what interests, how much influence they have, and how important (his word is "salient") their various interests are to them. This has always been a major challenge in multi-party negotiations. Having a quantitative model with which to manage the complexity does not provide a solution to this problem. The GIGO principle applies.
What makes the book so tantalizing in theory and so aggravating in fact is his prediction accuracy. True, as Mr. Brown observes, he may be guilty of hyperbole and self-aggrandizement. But given the track records of experts such as Paul Wolfowitz who are catastrophically wrong in their predictions, the creation of a model that helps us get such forecasts right even half the time would be a great improvement. If Bruce Bueno de Mesquita has actually accomplished that result, his model should be acquired by the State Department, the Defense Department, and any other government agency that commits large resources based on predictions. Perhaps a second edition can clean up some of these problems.
In reply to an earlier post on
Oct 19, 2009 1:23:56 PM PDT
Last edited by the author on Oct 19, 2009 1:25:02 PM PDT
Aaron C. Brown says:
I mostly agree with you, although you may have overstressed the GIGO principle. People sometimes spend far too much effort getting the assumptions precisely correct, or give up because they cannot do that. It's often the case that results are robust to details of the assumptions. Some algorithms can turn half-reasonable guesses into somewhat reliable conclusions (of course, other algorithms can turn perfect data into nonsense). I'm not sure getting all the parties modeled right is important, as long as you get the major interests directionally correct and are close on power and salience.
Personally, my guess is that it's more important to model the decision process properly. For example, insider nominations of solutions followed by coalition building and a power-weighted vote could give an entirely different outcome for the same set of decision makers than a period of open coalition building followed by an insider negotiation would. In fact, I'm suspicious of predictions that aren't conditional on that. I would expect output like, "There's a 60% chance of a quick decision to do A by acclamation, but if that doesn't happen by January, B is likely to prevail by June." But we both agree that it all comes down to the results, and as you say, they need not be 90% accurate. To paraphrase the old joke about women, "In order to be thought 1% as good as an expert, a computer must be 100 times as accurate. Fortunately that is not difficult."
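The point that the decision process can matter more than the player data can be made concrete with a toy sketch: the same players, run through two different aggregation rules, produce very different outcomes. Everything here (positions, powers, the two rules) is hypothetical and purely illustrative.

```python
# Same hypothetical players under two decision processes.
players = [(0, 60), (100, 25), (100, 25)]  # (position on 0-100, power)

def negotiated_compromise(players):
    """Process 1: a bargained middle ground, modeled as a power-weighted average."""
    total = sum(power for _, power in players)
    return sum(pos * power for pos, power in players) / total

def weighted_vote(players, proposals=(0, 100)):
    """Process 2: each player backs the nearer proposal; the most backing power wins."""
    support = {prop: 0 for prop in proposals}
    for pos, power in players:
        support[min(proposals, key=lambda prop: abs(prop - pos))] += power
    return max(proposals, key=lambda prop: support[prop])

print(round(negotiated_compromise(players), 1))  # -> 45.5: a compromise outcome
print(weighted_vote(players))                    # -> 0: the strongest bloc wins outright
```

Under negotiation the two smaller players pull the outcome toward 100; under a straight power-weighted vote the single 60-power player takes everything. A prediction that isn't conditional on which process operates is glossing over exactly this.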
Posted on
Nov 17, 2009 9:29:49 AM PST
NN says:
Thanks for the review. If you have any suggestions at all on alternative titles that readers might want to take a look at, that would be very useful.