Expert Political Judgment: How Good Is It? How Can We Know? Paperback – August 20, 2006
Winner of the 2006 Grawemeyer Award for Ideas Improving World Order
Winner of the 2006 Woodrow Wilson Foundation Award, American Political Science Association
Winner of the 2006 Robert E. Lane Award, Political Psychology Section of the American Political Science Association
"It is the somewhat gratifying lesson of Philip Tetlock's new book . . . that people who make prediction their business--people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables--are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. . . . It would be nice if there were fewer partisans on television disguised as "analysts" and "experts". . . . But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself."--Louis Menand, The New Yorker
"The definitive work on this question. . . . Tetlock systematically collected a vast number of individual forecasts about political and economic events, made by recognised experts over a period of more than 20 years. He showed that these forecasts were not very much better than making predictions by chance, and also that experts performed only slightly better than the average person who was casually informed about the subject in hand."--Gavyn Davies, Financial Times
"Before anyone turns an ear to the panels of pundits, they might do well to obtain a copy of Phillip Tetlock's new book Expert Political Judgment: How Good Is It? How Can We Know? The Berkeley psychiatrist has apparently made a 20-year study of predictions by the sorts who appear as experts on TV and get quoted in newspapers and found that they are no better than the rest of us at prognostication."--Jim Coyle, Toronto Star
"Tetlock uses science and policy to brilliantly explore what constitutes good judgment in predicting future events and to examine why experts are often wrong in their forecasts."--Choice
"[This] book . . . Marshals powerful evidence to make [its] case. Expert Political Judgment . . . Summarizes the results of a truly amazing research project. . . . The question that screams out from the data is why the world keeps believing that "experts" exist at all."--Geoffrey Colvin, Fortune
"Philip Tetlock has just produced a study which suggests we should view expertise in political forecasting--by academics or intelligence analysts, independent pundits, journalists or institutional specialists--with the same skepticism that the well-informed now apply to stockmarket forecasting. . . . It is the scientific spirit with which he tackled his project that is the most notable thing about his book, but the findings of his inquiry are important and, for both reasons, everyone seriously concerned with forecasting, political risk, strategic analysis and public policy debate would do well to read the book."--Paul Monk, Australian Financial Review
"Phillip E. Tetlock does a remarkable job . . . applying the high-end statistical and methodological tools of social science to the alchemistic world of the political prognosticator. The result is a fascinating blend of science and storytelling, in the the best sense of both words."--William D. Crano, PsysCRITIQUES
"Mr. Tetlock's analysis is about political judgment but equally relevant to economic and commercial assessments."--John Kay, Financial Times
"Why do most political experts prove to be wrong most of time? For an answer, you might want to browse through a very fascinating study by Philip Tetlock . . . who in Expert Political Judgment contends that there is no direct correlation between the intelligence and knowledge of the political expert and the quality of his or her forecasts. If you want to know whether this or that pundit is making a correct prediction, don't ask yourself what he or she is thinking--but how he or she is thinking."--Leon Hadar, Business Times
From the Inside Flap
"This book is a landmark in both content and style of argument. It is a major advance in our understanding of expert judgment in the vitally important and almost impossible task of political and strategic forecasting. Tetlock also offers a unique example of even-handed social science. This may be the first book I have seen in which the arguments and objections of opponents are presented with as much care as the author's own position."--Daniel Kahneman, Princeton University, recipient of the 2002 Nobel Prize in economic sciences
"This book is a major contribution to our thinking about political judgment. Philip Tetlock formulates coding rules by which to categorize the observations of individuals, and arrives at several interesting hypotheses. He lays out the many strategies that experts use to avoid learning from surprising real-world events."--Deborah W. Larson, University of California, Los Angeles
"This is a marvelous book--fascinating and important. It provides a stimulating and often profound discussion, not only of what sort of people tend to be better predictors than others, but of what we mean by good judgment and the nature of objectivity. It examines the tensions between holding to beliefs that have served us well and responding rapidly to new information. Unusual in its breadth and reach, the subtlety and sophistication of its analysis, and the fair-mindedness of the alternative perspectives it provides, it is a must-read for all those interested in how political judgments are formed."--Robert Jervis, Columbia University
"This book is just what one would expect from America's most influential political psychologist: Intelligent, important, and closely argued. Both science and policy are brilliantly illuminated by Tetlock's fascinating arguments."--Daniel Gilbert, Harvard University--This text refers to an out of print or unavailable edition of this title.
Top customer reviews
Thus, brace yourself: this book is not an easy read. If you are not well versed in probability notation and Bayesian statistics, you may not get all the content. This is especially true of the long Technical Appendix at the end of the book. A good test of your proficiency in those areas is the Bayesian decision tree on page 306; if you can readily understand it, you are very good at that stuff. Along the same lines, the graphs are often visually complex, and you sometimes have to digest very slowly what you are looking at. However, this book is well worth the effort.
So, what are the main findings of Tetlock’s study? Experts can’t predict much of anything. Their forecasts were typically no better than random guessing, and were worse than models using simple algorithms. That was already a violent shock to the established intellectual hierarchies in academia, business, and government.
When Tetlock drilled down, things got even more interesting. He found that even though the performance of the average expert forecaster was very weak, forecasters could readily be differentiated into two main groups: Hedgehogs and Foxes. The Hedgehog expert is heavily credentialed (think Ivy League PhDs), often very well respected in the profession, and very successful with the media. If you watch TV or read the paper, the vast majority of quoted experts are Hedgehogs. Tetlock found that they also make the very worst forecasters and, in essence, drag the average down. Why? Because of the way they think. They have a strong dogmatic bias toward their own theories, and this dogma contaminates every forecast they undertake. They believe their own voice. They are chronically overconfident. Not only can they not predict the future; they often cannot even analyze or interpret the past, because they suffer from pronounced hindsight bias. On page 139, Tetlock shows a graph indicating how much more prone to hindsight bias Hedgehogs are relative to Foxes. The Hedgehogs' dogmas are like foggy lenses that prevent them from seeing the past, present, and future clearly. In other words, they explain everything according to the theories in which they have vested their careers. They are stuck in a rut.
Fortunately, Tetlock found that the Foxes made much better forecasters than the Hedgehogs. Again, the difference lay in the way they thought. Foxes are not theory driven; they view the world as a very complex system that cannot be reduced to a single theory. As a result, Foxes are better at aggregating information from many different sources, including conflicting ones, whereas Hedgehogs typically cherry-picked the data that supported their theories. Foxes also revised their forecasts far more frequently; they were much more flexible and able to shift in the correct direction. On page 127, Tetlock has a very interesting graph showing how Foxes changed their minds faster and more effectively than Hedgehogs, coming much closer to the ideal change in probability assessment prescribed by Bayesian statistics. Thus, Tetlock states, Foxes are simply better Bayesians than Hedgehogs.
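To make that Bayesian benchmark concrete, here is a minimal sketch (my own illustration with invented numbers, not from the book) of the update rule against which Tetlock grades how forecasters revise their probabilities when new evidence arrives:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Hypothetical example: a forecaster assigns P(regime collapses) = 0.30,
# then sees evidence 3x as likely under collapse (0.60) as under stability (0.20).
posterior = bayes_update(0.30, 0.60, 0.20)
# The ideal Bayesian moves from 0.30 up past 0.56; a Fox tracks this shift,
# while a Hedgehog tends to barely budge from the prior.
```

The point of the page-127 graph, as I read it, is that Foxes' actual revisions land closer to this ideal posterior than Hedgehogs' do.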
Throughout the book, Tetlock demonstrates a superior ability to synthesize his findings. One of the most spectacular examples is the flow-chart path analysis on page 163, which outlines why the Foxes' cognitive style yields superior forecasting ability relative to the Hedgehogs'. A warning: if you jump to that page without the preceding context, it may not make much sense.
The Fox-Hedgehog dimension is not the only binary indicator of forecasting success; Tetlock uncovered others, and some of them are rather counterintuitive. For instance, expertise is strongly negatively correlated with forecasting accuracy: for both Foxes and Hedgehogs, the more expertise they had relative to a given question, the less accurate their forecasts. Tetlock also found a negative correlation between an expert's fame and his forecasting accuracy. Forecasters often did better outside their field of expertise than within it; presumably, outside their field, the theoretical dogmas that impaired the Hedgehogs largely evaporated. For Hedgehogs: more expertise = more causal inference = more overconfidence = less forecasting accuracy. More expertise also impaired the Foxes' forecasting ability, but, as expected, to a somewhat lesser degree.
In chapter 7, Tetlock discloses a truly perplexing finding: there is one area where Fox thinking fails relative to Hedgehogs, and that is when forecasters are asked to disaggregate a question into various subquestions. Tetlock calls this scenario planning or "unpacking." When forecasters did so, two flaws emerged. First, their probabilities became incoherent, meaning that the probabilities of the various scenarios summed to more than 1, or 100% (sometimes far more). Second, the related forecast accuracy weakened, even after prorating the scenario probabilities so that they coherently sum to 1. Here the Foxes were much more vulnerable to scenario planning and unpacking. The Hedgehogs, for once, were relatively protected by their stubborn belief in their respective theories, which left them less distracted and confused by speculative scenarios. This effect on Foxes is very strong and affects not only their forecasts (graphs on pages 196 and 200) but also their hindsight analysis (graph on page 208). This is especially true when Foxes rendered assessments within their field of expertise (the nefarious side of expertise again).
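To illustrate the incoherence Tetlock describes, here is a small sketch (my own, with made-up scenario names and numbers, not the book's data) of unpacked probabilities that sum to more than 1, and of the prorating fix; note that per the book, prorating restores coherence but not the lost accuracy:

```python
def prorate(scenario_probs):
    """Rescale scenario probabilities so they sum to 1 (restore coherence)."""
    total = sum(scenario_probs.values())
    return {name: p / total for name, p in scenario_probs.items()}

# A forecaster, asked about each unpacked scenario separately, assigns
# probabilities that together exceed 100% -- the incoherence Tetlock found:
raw = {"status quo": 0.5, "reform": 0.4, "collapse": 0.3}  # sums to 1.2
coherent = prorate(raw)  # each value divided by 1.2; now sums to 1.0
```

This just rescales: the relative ranking of the scenarios is unchanged, which may be why, per Tetlock, prorating alone could not repair the accuracy lost to unpacking.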
The Foxes' forecasting ability falling apart under scenario planning or "unpacking" seems to contradict Tetlock's later findings. In Superforecasting, Tetlock's second of "Ten Commandments for Aspiring Superforecasters" reads: "break seemingly intractable problems into tractable sub-problems." That is synonymous with the "unpacking" described in this book. I can see how Superforecasters might protect themselves from probability incoherence (a sum of probabilities greater than 1) by using Bayesian models in spreadsheets. But in this book, Tetlock indicated that even after prorating the scenario probabilities so they sum to a coherent value of 1, unpacking still reduced forecasting accuracy.
I would welcome feedback from the author to clarify this conundrum; I may very well have misinterpreted the text. It in no way detracts from the outstanding quality of both books. If you find them interesting, I also strongly recommend Everything Is Obvious: How Common Sense Fails Us.
His first critical conclusion is that, in forecasting complex political events, "we could do as well by tossing coins as by consulting experts". This is based on a massive set of surveys of expert opinion that were compared to outcomes in the real world over many years. The task was enormously complex to set up; defining an experiment in the social sciences presents the problems that constantly arise in making judgements in these sciences (what does one measure, and how? How can bias be measured and eliminated? etc. etc.) Much of the book is devoted to the problems in constructing the study, and how they were resolved.
His second key conclusion is that, while that may be true of experts as an undifferentiated group, some experts do significantly better than other experts. This does not reflect the level of expertise involved, nor does it reflect political orientation. Rather, it reflects the way the experts think. Poorer performers tend to be what Tetlock characterizes as "hedgehogs" -- people who apply theoretical frameworks, who stick with a line of argument, and who believe strongly in their own forecasts. The better performers tend to be what he calls "foxes" -- those with an eclectic approach, who examine many hypotheses, and who are more inclined to think probabilistically, by grading the likelihood of their forecasts.
But, as he notes, the forecasters who get the most media exposure tend to be the hedgehogs, those with a strong point of view that can be clearly expressed. This makes all the sense in the world: someone with a clear-cut and compelling story is much more fun to listen to (and much more memorable) than someone who presents a range of possible outcomes (as a former many-handed economist, I know this all too well).
What does that mean for those of us who use forecasts? We use them in making political decisions, personal financial decisions, and investment decisions. This book tells us that WHAT THE EXPERTS SAY IS NOT LIKELY TO ADD MUCH TO THE QUALITY OF YOUR OWN DECISION MAKING. And that says be careful how much you pay for expert advice, and how much you rely on it. That of course applies to experts in the social sciences, NOT to experts in the hard (aka real) sciences. Generally, it is a good idea to regard your doctor as a real expert.
Because it makes it impossible to avoid these conclusions, I gave this book five stars; this is very important stuff. I would not have given it five stars for the way in which it is written. For me, it reads as if it had been written for other academics rather than for the general reader. This is hard to avoid, but some other works in the field do manage it -- for example, "Thinking, Fast and Slow". Don't skip the book because it is not exactly an enjoyable read, however: its merit far outweighs its manner.