| Print List Price: | $18.00 |
| Kindle Price: | $14.99 Save $3.01 (17%) |
| Sold by: | Random House LLC (price set by seller) |
Superforecasting: The Art and Science of Prediction Kindle Edition
“The most important book on decision making since Daniel Kahneman's Thinking, Fast and Slow.”—Jason Zweig, The Wall Street Journal
Everyone would benefit from seeing further into the future, whether buying stocks, crafting policy, launching a new product, or simply planning the week’s meals. Unfortunately, people tend to be terrible forecasters. As Wharton professor Philip Tetlock showed in a landmark 2005 study, even experts’ predictions are only slightly better than chance. However, an important and underreported conclusion of that study was that some experts do have real foresight, and Tetlock has spent the past decade trying to figure out why. What makes some people so good? And can this talent be taught?
In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government-funded forecasting tournament. The Good Judgment Project involves tens of thousands of ordinary people—including a Brooklyn filmmaker, a retired pipe installer, and a former ballroom dancer—who set out to forecast global events. Some of the volunteers have turned out to be astonishingly good. They’ve beaten other benchmarks, competitors, and prediction markets. They’ve even beaten the collective judgment of intelligence analysts with access to classified information. They are "superforecasters."
In this groundbreaking and accessible book, Tetlock and Gardner show us how we can learn from this elite group. Weaving together stories of forecasting successes (the raid on Osama bin Laden’s compound) and failures (the Bay of Pigs) and interviews with a range of high-level decision makers, from David Petraeus to Robert Rubin, they show that good forecasting doesn’t require powerful computers or arcane methods. It involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course.
Superforecasting offers the first demonstrably effective way to improve our ability to predict the future—whether in business, finance, politics, international affairs, or daily life—and is destined to become a modern classic.
- Language: English
- Publisher: Crown
- Publication date: September 29, 2015
- File size: 3925 KB
Editorial Reviews
Review
• "Superforecasting is the most important scientific study I've ever read on prediction." --The Bloomberg View
About the Author
Dan Gardner is a journalist and the author of Risk: The Science and Politics of Fear and Future Babble: Why Pundits Are Hedgehogs and Foxes Know Best.
Excerpt. © Reprinted by permission. All rights reserved.
An Optimistic Skeptic
We are all forecasters. When we think about changing jobs, getting married, buying a home, making an investment, launching a product, or retiring, we decide based on how we expect the future will unfold. These expectations are forecasts. Often we do our own forecasting. But when big events happen--markets crash, wars loom, leaders tremble--we turn to the experts, those in the know. We look to people like Tom Friedman.
If you are a White House staffer, you might find him in the Oval Office with the president of the United States, talking about the Middle East. If you are a Fortune 500 CEO, you might spot him in Davos, chatting in the lounge with hedge fund billionaires and Saudi princes. And if you don’t frequent the White House or swanky Swiss hotels, you can read his New York Times columns and bestselling books that tell you what’s happening now, why, and what will come next.1 Millions do.
Like Tom Friedman, Bill Flack forecasts global events. But there is a lot less demand for his insights.
For years, Bill worked for the US Department of Agriculture in Arizona--“part pick-and-shovel work, part spreadsheet”--but now he lives in Kearney, Nebraska. Bill is a native Cornhusker. He grew up in Madison, Nebraska, a farm town where his parents owned and published the Madison Star-Mail, a newspaper with lots of stories about local sports and county fairs. He was a good student in high school and he went on to get a bachelor of science degree from the University of Nebraska. From there, he went to the University of Arizona. He was aiming for a PhD in math, but he realized it was beyond his abilities--“I had my nose rubbed in my limitations” is how he puts it--and he dropped out. It wasn’t wasted time, however. Classes in ornithology made Bill an avid bird-watcher, and because Arizona is a great place to see birds, he did fieldwork part-time for scientists, then got a job with the Department of Agriculture and stayed for a while.
Bill is fifty-five and retired, although he says if someone offered him a job he would consider it. So he has free time. And he spends some of it forecasting.
Bill has answered roughly three hundred questions like “Will Russia officially annex additional Ukrainian territory in the next three months?” and “In the next year, will any country withdraw from the eurozone?” They are questions that matter. And they’re difficult. Corporations, banks, embassies, and intelligence agencies struggle to answer such questions all the time. “Will North Korea detonate a nuclear device before the end of this year?” “How many additional countries will report cases of the Ebola virus in the next eight months?” “Will India or Brazil become a permanent member of the UN Security Council in the next two years?” Some of the questions are downright obscure, at least for most of us. “Will NATO invite new countries to join the Membership Action Plan (MAP) in the next nine months?” “Will the Kurdistan Regional Government hold a referendum on national independence this year?” “If a non-Chinese telecommunications firm wins a contract to provide Internet services in the Shanghai Free Trade Zone in the next two years, will Chinese citizens have access to Facebook and/or Twitter?” When Bill first sees one of these questions, he may have no clue how to answer it. “What on earth is the Shanghai Free Trade Zone?” he may think. But he does his homework. He gathers facts, balances clashing arguments, and settles on an answer.
No one bases decisions on Bill Flack’s forecasts, or asks Bill to share his thoughts on CNN. He has never been invited to Davos to sit on a panel with Tom Friedman. And that’s unfortunate. Because Bill Flack is a remarkable forecaster. We know that because each one of Bill’s predictions has been dated, recorded, and assessed for accuracy by independent scientific observers. His track record is excellent.
Bill is not alone. There are thousands of others answering the same questions. All are volunteers. Most aren’t as good as Bill, but about 2% are. They include engineers and lawyers, artists and scientists, Wall Streeters and Main Streeters, professors and students. We will meet many of them, including a mathematician, a filmmaker, and some retirees eager to share their underused talents. I call them superforecasters because that is what they are. Reliable evidence proves it. Explaining why they’re so good, and how others can learn to do what they do, is my goal in this book.
How our low-profile superforecasters compare with cerebral celebrities like Tom Friedman is an intriguing question, but it can’t be answered because the accuracy of Friedman’s forecasting has never been rigorously tested. Of course Friedman’s fans and critics have opinions one way or the other--“he nailed the Arab Spring” or “he screwed up on the 2003 invasion of Iraq” or “he was prescient on NATO expansion.” But there are no hard facts about Tom Friedman’s track record, just endless opinions--and opinions on opinions.2 And that is business as usual. Every day, the news media deliver forecasts without reporting, or even asking, how good the forecasters who made the forecasts really are. Every day, corporations and governments pay for forecasts that may be prescient or worthless or something in between. And every day, all of us--leaders of nations, corporate executives, investors, and voters--make critical decisions on the basis of forecasts whose quality is unknown. Baseball managers wouldn’t dream of getting out the checkbook to hire a player without consulting performance statistics. Even fans expect to see player stats on scoreboards and TV screens. And yet when it comes to the forecasters who help us make decisions that matter far more than any baseball game, we’re content to be ignorant.3
In that light, relying on Bill Flack’s forecasts looks quite reasonable. Indeed, relying on the forecasts of many readers of this book may prove quite reasonable, for it turns out that forecasting is not a “you have it or you don’t” talent. It is a skill that can be cultivated. This book will show you how.
The One About the Chimp
I want to spoil the joke, so I’ll give away the punch line: the average expert was roughly as accurate as a dart-throwing chimpanzee.
You’ve probably heard that one before. It’s famous--in some circles, infamous. It has popped up in the New York Times, the Wall Street Journal, the Financial Times, the Economist, and other outlets around the world. It goes like this: A researcher gathered a big group of experts--academics, pundits, and the like--to make thousands of predictions about the economy, stocks, elections, wars, and other issues of the day. Time passed, and when the researcher checked the accuracy of the predictions, he found that the average expert did about as well as random guessing. Except that’s not the punch line because “random guessing” isn’t funny. The punch line is about a dart-throwing chimpanzee. Because chimpanzees are funny.
I am that researcher and for a while I didn’t mind the joke. My study was the most comprehensive assessment of expert judgment in the scientific literature. It was a long slog that took about twenty years, from 1984 to 2004, and the results were far richer and more constructive than the punch line suggested. But I didn’t mind the joke because it raised awareness of my research (and, yes, scientists savor their fifteen minutes of fame too). And I myself had used the old “dart-throwing chimp” metaphor, so I couldn’t complain too loudly.
I also didn’t mind because the joke makes a valid point. Open any newspaper, watch any TV news show, and you find experts who forecast what’s coming. Some are cautious. More are bold and confident. A handful claim to be Olympian visionaries able to see decades into the future. With few exceptions, they are not in front of the cameras because they possess any proven skill at forecasting. Accuracy is seldom even mentioned. Old forecasts are like old news--soon forgotten--and pundits are almost never asked to reconcile what they said with what actually happened. The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction, and that is enough. Many have become wealthy peddling forecasting of untested value to corporate executives, government officials, and ordinary people who would never think of swallowing medicine of unknown efficacy and safety but who routinely pay for forecasts that are as dubious as elixirs sold from the back of a wagon. These people--and their customers--deserve a nudge in the ribs. I was happy to see my research used to give it to them.
But I realized that as word of my work spread, its apparent meaning was mutating. What my research had shown was that the average expert had done little better than guessing on many of the political and economic questions I had posed. “Many” does not equal all. It was easiest to beat chance on the shortest-range questions that only required looking one year out, and accuracy fell off the further out experts tried to forecast--approaching the dart-throwing-chimpanzee level three to five years out. That was an important finding. It tells us something about the limits of expertise in a complex world--and the limits on what it might be possible for even superforecasters to achieve. But as in the children’s game of “telephone,” in which a phrase is whispered to one child who passes it on to another, and so on, and everyone is shocked at the end to discover how much it has changed, the actual message was garbled in the constant retelling and the subtleties were lost entirely. The message became “all expert forecasts are useless,” which is nonsense. Some variations were even cruder--like “experts know no more than chimpanzees.” My research had become a backstop reference for nihilists who see the future as inherently unpredictable and know-nothing populists who insist on preceding “expert” with “so-called.”
So I tired of the joke. My research did not support these more extreme conclusions, nor did I feel any affinity for them. Today, that is all the more true.
There is plenty of room to stake out reasonable positions between the debunkers and the defenders of experts and their forecasts. On the one hand, the debunkers have a point. There are shady peddlers of questionable insights in the forecasting marketplace. There are also limits to foresight that may just not be surmountable. Our desire to reach into the future will always exceed our grasp. But debunkers go too far when they dismiss all forecasting as a fool’s errand. I believe it is possible to see into the future, at least in some situations and to some extent, and that any intelligent, open-minded, and hardworking person can cultivate the requisite skills.
Call me an “optimistic skeptic.”
The Skeptic
To understand the “skeptic” half of that label, consider a young Tunisian man pushing a wooden handcart loaded with fruits and vegetables down a dusty road to a market in the Tunisian town of Sidi Bouzid. When the man was three, his father died. He supports his family by borrowing money to fill his cart, hoping to earn enough selling the produce to pay off the debt and have a little left over. It’s the same grind every day. But this morning, the police approach the man and say they’re going to take his scales because he has violated some regulation. He knows it’s a lie. They’re shaking him down. But he has no money. A policewoman slaps him and insults his dead father. They take his scales and his cart. The man goes to a town office to complain. He is told the official is busy in a meeting. Humiliated, furious, powerless, the man leaves.
1. Why single out Tom Friedman when so many other celebrity pundits could have served the purpose? The choice was driven by a simple formula: (status of pundit) × (difficulty of pinning down his/her forecasts) × (relevance of pundit’s work to world politics). Highest score wins. Friedman has high status; his claims about possible futures are highly difficult to pin down--and his work is highly relevant to geopolitical forecasting. The choice of Friedman was in no way driven by an aversion to his editorial opinions. Indeed, I reveal in the last chapter a sneaky admiration for some aspects of his work. Exasperatingly evasive though Friedman can be as a forecaster, he proves to be a fabulous source of forecasting questions.
2. Again, this is not to imply that Friedman is unusual in this regard. Virtually every political pundit on the planet operates under the same tacit ground rules. They make countless claims about what lies ahead but couch their claims in such vague verbiage that it is impossible to test them. How should we interpret intriguing claims like “expansion of NATO could trigger a ferocious response from the Russian bear and may even lead to a new Cold War” or “the Arab Spring might signal that the days of unaccountable autocracy in the Arab world are numbered” or . . . ? The key terms in these semantic dances, may or could or might, are not accompanied by guidance on how to interpret them. Could could mean anything from a 0.0000001 chance of “a large asteroid striking our planet in the next one hundred years” to a 0.7 chance of “Hillary Clinton winning the presidency in 2016.” All this makes it impossible to track accuracy across time and questions. It also gives pundits endless flexibility to claim credit when something happens (I told you it could) and to dodge blame when it does not (I merely said it could happen). We shall encounter many examples of such linguistic mischief.
3. It is as though we have collectively concluded that sizing up the starting lineup for the Yankees deserves greater care than sizing up the risk of genocide in the South Sudan. Of course the analogy between baseball and politics is imperfect. Baseball is played over and over under standard conditions. Politics is a quirky game in which the rules are continually being contorted and contested. So scoring political forecasting is much harder than compiling baseball statistics. But “harder” doesn’t mean impossible. It turns out to be quite possible.
There is also another objection to the analogy. Pundits do more than forecasting. They put events in historical perspective, offer explanations, engage in policy advocacy, and pose provocative questions. All true, but pundits also make lots of implicit or explicit forecasts. For instance, the historical analogies pundits invoke contain implicit forecasts: the Munich appeasement analogy is trotted out to support the conditional forecast “if you appease country X, it will ramp up its demands”; and the World War I analogy is trotted out to support “if you use threats, you will escalate the conflict.” I submit that it is logically impossible to engage in policy advocacy (which pundits routinely do) without making assumptions about whether we would be better or worse off if we went down one or another policy path. Show me a pundit who does not make at least implicit forecasts and I will show you one who has faded into Zen-like irrelevance.
Product details
- ASIN: B00RKO6MS8
- Publisher: Crown (September 29, 2015)
- Publication date: September 29, 2015
- Language: English
- File size: 3925 KB
- Text-to-Speech: Enabled
- Screen Reader: Supported
- Enhanced typesetting: Enabled
- X-Ray: Enabled
- Word Wise: Enabled
- Print length: 328 pages
- Page numbers source ISBN: 0804136696
- Best Sellers Rank: #74,865 in Kindle Store
About the authors

Philip E. Tetlock (born 1954) is a Canadian-American political science writer, and is currently the Annenberg University Professor at the University of Pennsylvania, where he is cross-appointed at the Wharton School and the School of Arts and Sciences.
He has written several non-fiction books at the intersection of psychology, political science and organizational behavior, including Superforecasting: The Art and Science of Prediction; Expert Political Judgment: How Good Is It? How Can We Know?; Unmaking the West: What-if Scenarios that Rewrite World History; and Counterfactual Thought Experiments in World Politics. Tetlock is also co-principal investigator of The Good Judgment Project, a multi-year study of the feasibility of improving the accuracy of probability judgments of high-stakes, real-world events.
For more see here: https://en.wikipedia.org/wiki/Philip_E._Tetlock
For CV: https://www.dropbox.com/s/uorzufg1v0nhcii/Tetlock%20CV%20%20march%2018%2C%202016.docx?dl=0
Twitter: https://twitter.com/PTetlock
LinkedIn: https://www.linkedin.com/in/philip-tetlock-64aa108a?trk=hp-identity-name
For an interview: https://www.edge.org/conversation/philip_tetlock-how-to-win-at-forecasting

Dan Gardner is the New York Times best-selling author of books about psychology and decision-making. His work has been called "an invaluable resource for anyone who aspires to think clearly" by The Guardian and "required reading for journalists, politicians, academics, and anyone who listens to them" by Harvard psychologist Steven Pinker.
Gardner’s books have been published in 25 countries and 20 languages.
In addition to writing, Gardner lectures on forecasting, risk, and decision-making.
Prior to becoming an author, Gardner was a newspaper columnist and feature writer whose work won or was nominated for every major award in Canadian newspaper journalism.

Customer reviews
Customers say
Customers find the book insightful, informative, and interesting. They describe it as readable, well-written, and worthwhile. However, some find the content repetitive and filler-heavy.
AI-generated from the text of customer reviews
Customers find the book insightful, informative, and useful. They say it has lots of stories and examples of experts and amateurs being right or wrong. Readers mention the book is a good guide to promoting analytical skills. They also say it helps them think better and put structure to everyday analysis.
"...actively open-minded, reflective, numerate, pragmatic, analytical, probabilistic, belief updaters, intuitive psychologists, growth mindset...." Read more
"...apply these principles to your everyday life, this is still an interesting story and we could use the way these superforecasters think as a model to..." Read more
"...INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges..." Read more
"...As you read the book, you’re also reading an excellent review of cognitive biases.I loved the many historical examples...." Read more
Customers find the book very readable, well-written, and worth reading. They say it's written for a general audience, with anecdotes. Readers also appreciate the clear presentation of what makes a superforecaster. In addition, they mention that the book follows its own logic and is well structured.
"...Researched qualities to strive for as a forecaster: cautious, humble, nondeterministic, actively open-minded, reflective, numerate, pragmatic,..." Read more
"...It's a harrowing underdog story.The author does a good job on showing how to predict the future when it comes to financial and socio-..." Read more
"...Very well written with much to learn from its subject matter." Read more
"...of applicable information, lost in a sea of superfluous and blatantly obvious content...." Read more
Customers find the book straightforward, approachable, and interesting. They say it doesn't require any advanced math and is a good elementary introduction. Readers also mention the book is well-written and hard to put down.
"...Hedgehogs tell tight, simple, clear stories that grab and hold audiences.Hedgehogs are confident...." Read more
"...of complex equations and probabilities relax - Tetlock's book is straight forward and offers ideas that are easy to apply in our everyday forecasting..." Read more
"...The language is not very sophisticated and easy understandable for laypeople...." Read more
"A well-written, easy to read book about the pitfalls of forecasting anything, but mainly world events in politics, society and economy...." Read more
Customers find the content too verbose, repetitive, and boring. They say the book has too much filler and belabors its points. Readers also mention the book lacks detail and offers only a tiny sliver of applicable information.
"...An example of blatantly obvious filler content: three entire sections of one chapter are actually devoted to talking about how sometimes adjusting..." Read more
"...the read - I think the text is far too verbose and much of the content either repetitive or not very interesting...." Read more
"...I gave it three stars, because it wasn't a bad book, and moved along quickly...." Read more
"...some readers will progressively realize that the book has too much filler...." Read more
Top reviews from the United States
The book repeatedly emphasizes the importance of measurement for assessing and revising forecasts and programs. Most people create no metrics at all; they simply make unverifiable, chronologically ambiguous declarations.
The book stresses the feedback on predictions that measurement makes possible, because there is a well-studied gap between confidence and skill in judgment. We tend to be uninterested in accumulating counterfactuals, and so we fail to learn from them. If forecasts are not made at all, or are unquantified and ambiguous, we cannot receive clear feedback, and the thought process that led to the forecasts cannot be improved. Even feedback is vulnerable to the psychological trap of hindsight bias: once we know the outcome, that knowledge skews our memory of what we actually believed at the time of the prediction.
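To make that notion of clear, quantified feedback concrete, here is a minimal sketch (my own illustration, not code from the book) of the Brier score, the scoring rule used in the forecasting tournaments the book describes. It measures the squared gap between a probability forecast and the 0-or-1 outcome; note that the book's variant sums the error over both possible outcomes and runs from 0 to 2, while the common binary variant below runs from 0 to 1.

```python
# Illustrative sketch (not from the book): binary Brier scoring for yes/no forecasts.
# Each score is (forecast probability - outcome)^2, with outcome 1 if the event
# happened and 0 if it did not. Lower is better: 0.0 is perfect, and an unvarying
# 50/50 "maybe" forecast earns 0.25 on every question.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    assert forecasts and len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Example with invented numbers: three resolved forecasts.
print(round(brier_score([0.9, 0.2, 0.6], [1, 0, 0]), 3))  # 0.137
```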
The main qualities for successful forecasting are open-mindedness, care, and focused self-critical thinking, none of which is effortless. Commitment to self-improvement is the strongest predictor of long-term performance in measured forecasting; this is essentially the popular concept of grit. Studies show that individuals with fixed mindsets do not pay attention to new information that could improve their future predictions. Similarly, forecasts tend to improve when probabilistic thinking is embraced over fatalistic thinking that treats certain events as inevitable.
A few interesting findings that the authors expand upon in the book: experience matters because it builds the tacit knowledge essential to the practice of forecasting, and grit, or perseverance in making great forecasts, is roughly three times as important as intelligence.
Practices to undertake when forecasting: break the question into components so you can identify and scrutinize your assumptions; work backward by asking what you would need to know to answer the question, then make numerical estimates for those sub-questions; develop the outside view, which means anchoring on the past experience of comparable cases and at first downplaying the problem's uniqueness; explore other potential views of the question; and finally synthesize all of these perspectives into a single number that can be manipulated and updated (see the sketch below).
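As a minimal sketch of that workflow (my own illustration with invented numbers, not an example from the book): anchor on an outside-view base rate drawn from comparable cases, then adjust it with case-specific, inside-view evidence before committing to a single probability.

```python
# Hypothetical question (invented numbers): "Will the incumbent CEO still be in
# the job two years from now?"

base_rate = 0.70  # outside view: share of comparable CEOs who lasted two years

# Inside view: case-specific evidence nudges the anchor up or down.
adjustments = {
    "activist investor pressure": -0.10,
    "strong recent earnings":     +0.05,
}

forecast = base_rate + sum(adjustments.values())
forecast = min(max(forecast, 0.0), 1.0)  # clamp to a valid probability
print(f"forecast: {forecast:.2f}")       # forecast: 0.65
```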
Psychological traps discussed in the book include confirmation bias, the tendency to seek out information that confirms your hypothesis while avoiding information that may contradict it (the opposite of hunting for counterfactuals); belief perseverance, related to cognitive dissonance, in which individuals rationalize away new evidence rather than update a cherished belief; scope insensitivity, which is failing to factor an important aspect of the question's scope, such as its timeframe, into the forecast; and thought type replacement, which is replacing the hard question under analysis with a similar but easier question that is not actually equivalent.
Researched qualities to strive for as a forecaster: cautious, humble, nondeterministic, actively open-minded, reflective, numerate, pragmatic, analytical, probabilistic, belief updaters, intuitive psychologists, growth mindset.
The authors then turn to another practical dimension of forecasting: teams. Psychological traps for teams include the well-known phenomenon of groupthink, in which small cohesive groups unconsciously develop shared illusions and norms biased in favor of the group, interfering with critical thinking about objective reality. Members also tend to leave the hard work of critical thinking to others on the team rather than sharing it, which, combined with groupthink, leads the group to feel a sense of completion as soon as it reaches agreement. One idea to keep in mind when managing a group is that its collective thinking is a product of the group's communication, not merely the sum of the individual members' thinking.
Some common perceived problems with forecasting also receive attention in the book: the wrong-side-of-maybe fallacy, judging a forecast bad because it was above 50% but the event did not occur, which can make forecasters unwilling to commit to probabilities at all; publishing forecasts for all to see, where research shows that publicly posting forecasts under one's own name creates more open-mindedness and better performance; and the fallacy that because many factors are genuinely unquantifiable in their complexity, the use of numbers in forecasting is therefore not useful.
Some concepts that I noted for further research: Bayesian belief updating, which is essentially a mathematical way of weighing how strong your prior belief was relative to some specific new information; chaos theory; game theory; Monte Carlo methods; and the systematic intake of news media. These are concepts I was particularly interested in based on my own interests and have continued to explore. The book was very valuable for cohesively bringing these ideas together in the context of a compelling story, based on the IARPA forecasting tournament that the author's team won as a product of the research behind this groundbreaking book.
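For the Bayesian belief updating the reviewer flags for further study, a standard statement in odds form may help (my own illustration, not quoted from the book): the posterior odds on a hypothesis equal the prior odds multiplied by the likelihood ratio of the new evidence.

```latex
% Bayes' rule in odds form (illustrative):
\[
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \lnot H)} \times \frac{P(H)}{P(\lnot H)}
\]
% Worked example with invented numbers: prior odds 1:3 (P(H) = 0.25) and
% evidence four times likelier under H than under not-H give posterior odds
% 4:3, i.e. P(H | E) = 4/7, roughly 0.57.
```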
The author does a good job of showing how to predict the future when it comes to financial and socio-political forecasts, but he doesn't go far enough in explaining how we could use these techniques in daily life for everyday things: whether to save or spend money, and how much; where to go to school; what career to stay in; whether a relationship will last; how long a given business will stay afloat. After all, we make these big decisions based on future forecasts!
The author does state at the beginning of the book that we make forecasts all the time in our lives, but I'm not sure to what degree we're able to consciously apply forecasting principles to everyday situations. If that were the point, he could have given more practical examples.
He does say, "Just as you can't learn to ride a bicycle by reading a physics textbook, you can't become a superforecaster by reading training manuals. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding." So going off that, you can't expect to automatically become a good forecaster just by reading this book. You have to get out, make a lot of forecasts, get feedback, and revise the way you do things accordingly. The problem is I'm not sure how many readers would be motivated to go out of their way to do this.
Still, I don't want to deter you from reading this book, because it truly was a good read. Just reading about the way these superforecasters think and go about things should inspire us to do the same. They didn't see their views as "treasures to be guarded but as hypotheses to be tested." They were able to look at multiple perspectives and handle the cognitive dissonance (something most ideologically driven people cannot bear to do). They practiced "active open-mindedness," going out of their way to have others falsify their views so they could sharpen their perspective. They tapped into the "wisdom of the crowd" by getting into lengthy internet discussions with other forecasters in which they would "disagree without being disagreeable." They had the "growth mindset," treating every failure not as a blow to the ego but as a learning opportunity, holding lengthy postmortems on failed predictions. And they had the intellectual humility to recognize that reality is complex, paired with the confidence in their abilities to execute their task in a determined way. And so on.
So regardless of whether or not you are able to successfully apply these principles to your everyday life, this is still an interesting story, and we can use the way these superforecasters think as a model for how we should approach our beliefs about the outside world.
Top reviews from other countries
It runs through the material sequentially so you understand the key techniques, while also explaining the theory and reasoning behind them.
I would say it's worth reading Thinking, Fast and Slow first, as some of its key terms and ideas are referenced in this book.
This is well written and insightful. I took away ideas that will help me and that I can use.