Sunday, September 27, 2015

Tetlock & Gardner, Superforecasting

We all crave knowledge of the future. Is it going to rain this weekend? Where will the equity markets be in a year? Next week? Five minutes from now? Predictive models, many using big data and statistical algorithms, have begun making inroads into this problem. But IBM Watson’s chief engineer, David Ferrucci, doesn’t think that machines will ever completely replace subjective human judgment. In forecasting, combinations of machines and experts may prove more robust than pure-machine or pure-human approaches. “So,” say the authors of Superforecasting, “it’s time we got serious about both.” (p. 24)

Philip E. Tetlock, a professor at the University of Pennsylvania and co-leader of a multiyear online forecasting study, the Good Judgment Project, and Dan Gardner, a journalist, teamed up to produce one of the best books I’ve read this year. Superforecasting: The Art and Science of Prediction (Crown Publishers, 2015) argues that “it is possible to see into the future, at least in some situations and to some extent, and that any intelligent, open-minded, and hardworking person can cultivate the requisite skills. … Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs.” (pp. 9, 20)

It’s pretty easy to get started learning to forecast more accurately. A tutorial for the Good Judgment Project, covering some of the basic concepts in this book (summarized in its Ten, actually eleven, Commandments appendix), “took only about sixty minutes to read and yet it improved accuracy by roughly 10% through the entire tournament year. …And never forget that even modest improvements in foresight maintained over time add up. I spoke about that with Aaron Brown, an author, a Wall Street veteran, and the chief risk manager at AQR Capital Management, a hedge fund with over $100 billion in assets. ‘It’s so hard to see because it’s not dramatic,’ he said, but if it is sustained, ‘it’s the difference between a consistent winner who’s making a living, or the guy who’s going broke all the time.’” (p. 20) Did that get your attention?

Admittedly, Superforecasting doesn’t focus on the financial markets because the authors recognize that they are rife with aleatory uncertainty (the unknowable), not just epistemic uncertainty (the unknown but potentially knowable). “Aleatory uncertainty ensures life will always have surprises, regardless of how carefully we plan. Superforecasters grasp this deep truth better than most. When they sense that a question is loaded with irreducible uncertainty—say, a currency-market question—they have learned to be cautious, keeping their initial estimates inside the shades-of-maybe zone between 35% and 65% and moving out tentatively.” (p. 116) Note that, even here, superforecasters don’t just throw up their hands and say 50-50.

In a second reference to the markets, the authors compare superforecasting investing to black swan investing. Playing the low probability, high reward card is not the only way to invest. “A very different way is to beat competitors by forecasting more accurately—for example, correctly deciding that there is a 68% chance of something happening when others foresee only a 60% chance. … It pays off more often, but the returns are more modest, and fortunes are amassed slowly.” (p. 195)
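To see why that modest edge pays, it helps to put numbers on it. The following sketch is my own illustration, not the authors’ example: it assumes a market that prices an event as a 60% chance while a well-calibrated forecaster knows the true chance is 68%, and computes the expected profit per unit staked at the market’s implied odds.

```python
# Toy illustration (not from the book): the value of a small probability edge.
# Assumes a market pricing an event at 60% while the true chance is 68%.

def edge_expected_value(true_p: float, market_p: float, stake: float = 1.0) -> float:
    """Expected profit per stake when betting at odds implied by market_p
    while the event's true probability is true_p."""
    # A fair bet at market_p pays (1 - market_p) / market_p per unit staked.
    payout = (1 - market_p) / market_p
    return true_p * payout * stake - (1 - true_p) * stake

ev = edge_expected_value(0.68, 0.60)
print(f"Expected profit per $1 staked: ${ev:.3f}")  # about $0.133 -- modest, but positive
```

An eight-point edge earns roughly thirteen cents per dollar per bet in expectation: nothing dramatic on any single wager, which is exactly Aaron Brown’s point about sustained, undramatic gains.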

At its core, Superforecasting teaches its readers how to think probabilistically, something that doesn’t come naturally to most people. We tend to use a two- or three-setting mental dial. Something will happen, won’t happen, or may happen. But this way of thinking gets us into trouble. The “will” and “won’t” settings reflect a faulty view that reality is fixed. Even death and taxes may not be certain someday. And the “maybe” setting “has to be subdivided into degrees of probability. … The finer grained the better, as long as the granularity captures real distinctions—meaning that if outcomes you say have an 11% chance of happening really do occur 1% less often than 12% outcomes and 1% more than 10% outcomes.” (p. 117)
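The granularity claim is testable: fine-grained probabilities are only meaningful if they track observed frequencies. A minimal calibration check, my own sketch rather than anything from the book, bins a forecaster’s stated probabilities and compares each bin with how often those events actually happened.

```python
# Minimal calibration sketch (my illustration, not the authors' code):
# fine-grained probabilities carry real information only if events given
# p% forecasts actually occur about p% of the time.
from collections import defaultdict

def calibration_table(forecasts, outcomes, places=1):
    """Bin forecasts to the nearest tenth (by default) and report the
    observed hit rate per bin; a calibrated record lines the two up."""
    bins = defaultdict(list)
    for p, hit in zip(forecasts, outcomes):
        bins[round(p, places)].append(hit)
    return {b: sum(hits) / len(hits) for b, hits in sorted(bins.items())}

# A perfectly calibrated toy record: 70% calls come true 7 times in 10,
# 30% calls come true 3 times in 10.
forecasts = [0.7] * 10 + [0.3] * 10
outcomes = [1] * 7 + [0] * 3 + [1] * 3 + [0] * 7
print(calibration_table(forecasts, outcomes))  # {0.3: 0.3, 0.7: 0.7}
```

With real tournament data one would use narrower bins and far more forecasts; distinguishing 10% from 11% from 12%, as the quoted passage demands, takes a long, scored track record.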

What does it take to be a superforecaster? Well, for starters, a lot of time and mental energy. Those who have a superabundance of both can join the thousands of people predicting global events at the Good Judgment Project. The rest of us can use this book to improve our own, most likely more modest, predictions. If, that is, we have, or are willing to cultivate, certain qualities. Superforecasters are foxes, not hedgehogs. They look at problems from multiple perspectives. They tend to be, among other things, cautious, humble, nondeterministic, actively open-minded, intellectually curious, reflective, numerate, pragmatic, and analytical, with a growth mindset and grit. “The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.” (p. 155)

Superforecasting is a must-read book for everyone who is sick to death of “the guru model that makes so many policy debates so puerile: ‘I’ll counter your Paul Krugman polemic with my Niall Ferguson counterpolemic, and rebut your Tom Friedman op-ed with my Bret Stephens blog.’” (p. 24) As the authors write, “All too often, forecasting in the twenty-first century looks too much like nineteenth-century medicine. There are theories, assertions, and arguments. There are famous figures, as confident as they are well compensated. But there is little experimentation, or anything that could be called science, so we know much less than most people realize. And we pay the price. Although bad forecasting rarely leads as obviously to harm as does bad medicine, it steers us subtly toward bad decisions and all that flows from them—including monetary losses, missed opportunities, unnecessary suffering, even war and death.” (p. 42) It’s time for a change—for all of us to change.
