- Hardcover: 352 pages
- Publisher: Princeton University Press; 4th Printing edition (July 25, 2005)
- Language: English
- ISBN-10: 0691123020
- ISBN-13: 978-0691123028
- Product Dimensions: 9.5 x 6.4 x 1.1 inches
- Shipping Weight: 1.4 pounds
- Average Customer Review: 37 customer reviews
- Amazon Best Sellers Rank: #1,660,084 in Books
Expert Political Judgment: How Good Is It? How Can We Know? 4th Printing Edition
Winner of the 2006 Grawemeyer Award for Ideas Improving World Order
Winner of the 2006 Woodrow Wilson Foundation Award, American Political Science Association
Winner of the 2006 Robert E. Lane Award, Political Psychology Section of the American Political Science Association
"It is the somewhat gratifying lesson of Philip Tetlock's new book . . . that people who make prediction their business--people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables--are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. . . . It would be nice if there were fewer partisans on television disguised as "analysts" and "experts". . . . But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself."--Louis Menand, The New Yorker
"The definitive work on this question. . . . Tetlock systematically collected a vast number of individual forecasts about political and economic events, made by recognised experts over a period of more than 20 years. He showed that these forecasts were not very much better than making predictions by chance, and also that experts performed only slightly better than the average person who was casually informed about the subject in hand."--Gavyn Davies, Financial Times
"Before anyone turns an ear to the panels of pundits, they might do well to obtain a copy of Phillip Tetlock's new book Expert Political Judgment: How Good Is It? How Can We Know? The Berkeley psychiatrist has apparently made a 20-year study of predictions by the sorts who appear as experts on TV and get quoted in newspapers and found that they are no better than the rest of us at prognostication."--Jim Coyle, Toronto Star
"Tetlock uses science and policy to brilliantly explore what constitutes good judgment in predicting future events and to examine why experts are often wrong in their forecasts."--Choice
"[This] book . . . Marshals powerful evidence to make [its] case. Expert Political Judgment . . . Summarizes the results of a truly amazing research project. . . . The question that screams out from the data is why the world keeps believing that "experts" exist at all."--Geoffrey Colvin, Fortune
"Philip Tetlock has just produced a study which suggests we should view expertise in political forecasting--by academics or intelligence analysts, independent pundits, journalists or institutional specialists--with the same skepticism that the well-informed now apply to stockmarket forecasting. . . . It is the scientific spirit with which he tackled his project that is the most notable thing about his book, but the findings of his inquiry are important and, for both reasons, everyone seriously concerned with forecasting, political risk, strategic analysis and public policy debate would do well to read the book."--Paul Monk, Australian Financial Review
"Phillip E. Tetlock does a remarkable job . . . applying the high-end statistical and methodological tools of social science to the alchemistic world of the political prognosticator. The result is a fascinating blend of science and storytelling, in the the best sense of both words."--William D. Crano, PsysCRITIQUES
"Mr. Tetlock's analysis is about political judgment but equally relevant to economic and commercial assessments."--John Kay, Financial Times
"Why do most political experts prove to be wrong most of time? For an answer, you might want to browse through a very fascinating study by Philip Tetlock . . . who in Expert Political Judgment contends that there is no direct correlation between the intelligence and knowledge of the political expert and the quality of his or her forecasts. If you want to know whether this or that pundit is making a correct prediction, don't ask yourself what he or she is thinking--but how he or she is thinking."--Leon Hadar, Business Times
From the Inside Flap
"This book is a major contribution to our thinking about political judgment. Philip Tetlock formulates coding rules by which to categorize the observations of individuals, and arrives at several interesting hypotheses. He lays out the many strategies that experts use to avoid learning from surprising real-world events."--Deborah W. Larson, University of California, Los Angeles
"This is a marvelous book--fascinating and important. It provides a stimulating and often profound discussion, not only of what sort of people tend to be better predictors than others, but of what we mean by good judgment and the nature of objectivity. It examines the tensions between holding to beliefs that have served us well and responding rapidly to new information. Unusual in its breadth and reach, the subtlety and sophistication of its analysis, and the fair-mindedness of the alternative perspectives it provides, it is a must-read for all those interested in how political judgments are formed."--Robert Jervis, Columbia University
"This book is a landmark in both content and style of argument. It is a major advance in our understanding of expert judgment in the vitally important and almost impossible task of political and strategic forecasting. Tetlock also offers a unique example of even-handed social science. This may be the first book I have seen in which the arguments and objections of opponents are presented with as much care as the author's own position."--Daniel Kahneman, Princeton University
Top customer reviews
Were the experts better at anything? Well, they were pretty good at making excuses. Here are a few: 1. I made the right mistake. 2. I'm not right yet, but you'll see. 3. I was almost right. 4. Your scoring system is flawed. 5. Your questions aren't real world. 6. I never said that. 7. Things happen. Of course, experts applied their excuses only when they got it wrong... er... I mean almost right... that is, about to be right, or right if you looked at it in the right way, or what would have been right if the question were asked properly, or right if you applied the right scoring system, or... well... that was a dumb question anyway, or....
Not only did experts get it wrong, but they were so wedded to their opinions that they failed to update their forecasts even in the face of mounting evidence to the contrary. And then a curious thing happened -- after they got it wrong and exhausted all their excuses, they forgot they were wrong in the first place. When Tetlock asked follow-up questions at later dates, experts routinely misremembered their predictions. When the experts' models failed, they merely updated them post hoc, giving themselves the comforting illusion that their expert judgment and simplified model of social behavior remained intact. Compare this with another very complex system -- predicting the weather. In this latter case, there is a very big difference in the predictive abilities of experts and lay persons. Meteorologists do not use over-simplified models like "red sky in the morning, sailor's warning." They use complex modeling, statistical forecasting, computer simulations, etc. When they are wrong, weathermen do not say, well, it almost rained; or, it just hasn't rained yet; or, it didn't rain, but predicting rain was the right mistake to make; or, there's something wrong with the rain gauge; or, I didn't say it was going to rain; or, what kind of a question is that?
Political experts, unlike weathermen, live in an infinite variety of counterfactual worlds; or as Tetlock writes, "Counterfactual history becomes a convenient graveyard for burying embarrassing conditional forecasts." That is: sure, given x, y, and z, the former Soviet Union collapsed; but if z had not occurred, the former Soviet Union would have remained intact. Really? Considering the expert got it wrong in the first place, how could they possibly know the outcome in a hypothetical counterfactual world? At best, this is intellectual dishonesty. At worst, it is fraud.
But some experts did better than others. In particular, those who were less dogmatic and frequently updated their predictions in response to countervailing evidence (Tetlock's "foxes") did much better than the opposing camp (termed "hedgehogs"). The problem is that hedgehogs climb the ladder faster and have positions of greater prominence. My Machiavellian take? You might as well make dogmatic pronouncements because all the hedgehogs you work for aren't any better at predicting the future than you are -- they're just more sure of themselves. So, work on your self-confidence. It is apparently the only thing anyone pays any attention to.
Tetlock's first critical conclusion is that, in forecasting complex political events, "we could do as well by tossing coins as by consulting experts". This is based on a massive set of surveys of expert opinion that were compared with real-world outcomes over many years. The study was enormously complex to set up; designing an experiment in the social sciences raises the problems that constantly arise in making judgments in these sciences (what does one measure, and how? how can bias be measured and eliminated?). Much of the book is devoted to the problems encountered in constructing the study, and how they were resolved.
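To get a feel for what "scoring expert forecasts against outcomes" and "tossing coins" mean in practice, here is a minimal Python sketch using a Brier-style probability score, the kind of calibration measure Tetlock works with. The numbers are invented for illustration and are not from the book's data.

```python
# Minimal sketch (invented numbers, not from the book): score probability
# forecasts against what actually happened using the Brier score, and
# compare with the "tossing coins" baseline of always forecasting 50%.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes
    coded 0/1. Lower is better; a constant 0.5 forecast scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1, 0]             # what actually happened
expert   = [0.9, 0.8, 0.3, 0.4, 0.7]   # a hypothetical overconfident expert
coin     = [0.5] * len(outcomes)       # the chance baseline

print(brier_score(expert, outcomes))   # 0.318 -- worse than chance here
print(brier_score(coin, outcomes))     # 0.250
```

An expert only beats the coin toss by being both directionally right and well calibrated; confident misses are punished heavily, which is the book's central point.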
His second key conclusion is that, while that may be true of experts as an undifferentiated group, some experts do significantly better than other experts. This does not reflect the level of expertise involved, nor does it reflect political orientation. Rather, it reflects the way the experts think. Poorer performers tend to be what Tetlock characterizes as "hedgehogs" -- people who apply theoretical frameworks, who stick with a line of argument, and who believe strongly in their own forecasts. The better performers tend to be what he calls "foxes" -- those with an eclectic approach, who examine many hypotheses, and who are more inclined to think probabilistically, by grading the likelihood of their forecasts.
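To make the fox/hedgehog contrast concrete, here is a second sketch, again with made-up numbers: both forecasters lean the same way on every event and miss the same two calls, but the hedgehog commits to near-certainties while the fox grades the likelihoods, so the fox pays a much smaller penalty for the same misses.

```python
# Made-up illustration of why graded probabilities help: same directional
# calls, same two misses, but the hedgehog states near-certainties while
# the fox hedges, so the fox's misses cost far less under the Brier score.

def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 0, 0, 1]
hedgehog = [0.95, 0.90, 0.95, 0.10, 0.05, 0.10]   # theory-driven near-certainties
fox      = [0.70, 0.60, 0.70, 0.35, 0.30, 0.40]   # same leanings, hedged

print(round(brier_score(hedgehog, outcomes), 3))  # 0.273
print(round(brier_score(fox, outcomes), 3))       # 0.185 -- lower is better
```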
But, as he notes, the forecasters who get the most media exposure tend to be the hedgehogs, those with a strong point of view that can be clearly expressed. This makes all the sense in the world; someone with a clear-cut and compelling story is much more fun to listen to (and much more memorable) than someone who presents a range of possible outcomes (as a former many-handed economist, I know this all too well).
What does that mean for those of us who use forecasts? We use them in making political decisions, personal financial decisions, and investment decisions. This book tells us that WHAT THE EXPERTS SAY IS NOT LIKELY TO ADD MUCH TO THE QUALITY OF YOUR OWN DECISION MAKING. And that means being careful about how much you pay for expert advice, and how much you rely on it. That of course applies to experts in the social sciences, NOT to experts in the hard (aka real) sciences. Generally, it is a good idea to regard your doctor as a real expert.
Because it makes these conclusions impossible to avoid, I gave this book five stars; this is very important stuff. I would not have given it five stars for the way in which it is written. For me, it read as if it had been written for other academics, rather than for the general reader. That is hard to avoid, but some other works in the field do manage it -- for example, "Thinking, Fast and Slow". Don't skip the book because it is not exactly an enjoyable read, however: its merit far outweighs its manner.