Improve mental models, not metaphorical balls
Writing in the magazine Foreign Affairs, Scoblic & Tetlock highlight that the US spends over a trillion dollars a year on national security yet is continually surprised by events. They put this down to taking the wrong approach to thinking about the future.
The tendency (not just in the US) is to extrapolate from, and so plan for, past events, to focus too much on some issues, and to dismiss others too quickly.
Making better predictions
Scoblic & Tetlock propose combining two approaches to be better prepared for the future. The first, scenario planning, often considers futures in terms of plausibility rather than probability. The second, probabilistic forecasting (in which Tetlock’s research on “superforecasters” has become well known), tries to calculate the odds of specific short-term outcomes.
They note that forecasters, policy makers, and strategists often view scenarios as vague, possibly misleading, and failing to identify courses of action. On the other hand, policy makers can view the questions that probabilistic forecasters focus on as exceedingly narrow, so that they too fail to provide enough information to inform decision-making.
They take an American-centric view of scenario planning. In their description, rigorous scenario planning exercises involve identifying key uncertainties and then imagining how different combinations of them could yield situations quite different from what extrapolating the present would suggest. This is the classic business consultant scenario approach.
I think that is too simplistic, and old-fashioned. As Pierre Wack pointed out decades ago (in that bastion of American centrism, the Harvard Business Review):
“… simply combining obvious uncertainties did not help much with decision making. That exercise brought us only to a set of obvious, simplistic, and conflicting strategic solutions.”
Such scenario matrices give the false impression that the uncertainties, once identified, can be managed. Wack noted that these types of scenarios are only a stepping stone to developing more useful ones that aid decision making.
Scoblic & Tetlock do acknowledge that the role of scenarios is to be provocative, not predictive:
“… challenging planners’ assumptions, shaking up their mental models of how the world works, and giving them the cognitive flexibility to better sense, shape, and adapt to the emerging future.”
Still, if you develop a set of simple scenarios you may not be challenging assumptions as much as you need to.
The power of a good set of questions
They suggest that once these scenarios have been developed, sets of questions need to be crafted that give early, forecastable indications of which scenario (if any) is likely to emerge - signposts for what is unfolding.
This seems fairly obvious. It is already adopted to some extent in European and Asian foresight work, and by some US practitioners.
But Scoblic & Tetlock's approach is also inconsistent: they say the scenarios are provocative, not predictive, but then treat them as predictive by developing questions to test them.
However, their emphasis on the need for precise questions is valuable. They call these “diagnostic questions”, which they suggest help identify trajectories.
The questions should, they stress, leave no wiggle room. They must pass a “clairvoyance test”: a genuine clairvoyant (if one existed) should be able to answer them without having to ask for clarification.
“Will we ‘bounce back’ from the pandemic?” would not pass the test. “Will revenue from international tourism be at least 75% of January 2019 levels by January 2022?” would.
A diverse set of questions is also required, to ensure that as much information as possible about an emerging future is gathered and to avoid giving too much weight to a potentially unimportant signal or signpost. Asking diagnostic and diverse questions also helps reduce confirmation bias.
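To make the contrast concrete, here is a minimal sketch (in Python, with placeholder figures of my own, not data from the article) of what a question with no wiggle room looks like: it hinges on a single observable, a fixed threshold, and a resolution date, so anyone can settle it without asking for clarification. “Will we bounce back?” can’t be encoded this way.

```python
# Illustrative sketch only: encoding a "clairvoyance test" question as an
# unambiguously resolvable check. The class, field names, and figures are
# hypothetical placeholders of my own, not anything from the article.

from dataclasses import dataclass
from datetime import date


@dataclass
class DiagnosticQuestion:
    text: str               # the question as asked
    baseline: float         # reference value (e.g. a January 2019 revenue index)
    threshold_ratio: float  # e.g. 0.75 for "at least 75% of baseline"
    resolve_by: date        # the date on which the question is settled

    def resolve(self, observed: float) -> bool:
        """Settle the question from a single observed number - no wiggle room."""
        return observed >= self.threshold_ratio * self.baseline


q = DiagnosticQuestion(
    text=("Will revenue from international tourism be at least 75% of "
          "January 2019 levels by January 2022?"),
    baseline=100.0,          # hypothetical index value for January 2019
    threshold_ratio=0.75,
    resolve_by=date(2022, 1, 31),
)

print(q.resolve(observed=81.0))  # True: the question resolves "yes"
print(q.resolve(observed=60.0))  # False: the question resolves "no"
```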
Puzzles vs mysteries
Scoblic and Tetlock's approach is superficially attractive, but it has several weaknesses.
A few quick and simple scenarios with some focused questions can provide an illusion of rigour and insight. Four scenarios based on an uncertainty matrix do not cover the plausible range of futures, and if none of the scenarios seems to be developing then you are back where you started.
I also think they may be misled by focusing on events and questions that are predictive – will? what? when? how? In their portrayal there is little scope for questions that develop an understanding of what’s really going on – “Why is the situation emerging in this way?”
Scoblic and Tetlock seem to view futures as a puzzle, like a jigsaw, where the picture emerges once you have enough pieces (answers to diagnostic questions). The CIA pointed out in 1999, in Richards Heuer's Psychology of Intelligence Analysis, that this isn’t how such analyses usually work. Analysts, the CIA claim, usually start with a picture and select the pieces that fit. One piece can often fit several different puzzles.
Scoblic and Tetlock’s diverse question sets are intended to avoid this, but they are probably daunting to compile if you only have descriptions of change, not hypotheses.
The CIA view intelligence analysis as more like a medical diagnosis (or scientific hypothesis testing). Indicators (symptoms) of what is happening are collected, then knowledge of how the body works is used to develop hypotheses that might explain the observations. Tests, or the collection of additional information, are used to evaluate the hypotheses, and then a diagnosis is made. This can be more “mystery” than “puzzle”.
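One way to picture that diagnostic framing is as weighing competing hypotheses against incoming indicators, where a single indicator can be consistent with more than one hypothesis. The Bayesian updating in the sketch below is my own illustration, not a calculation the CIA chapter or Scoblic & Tetlock prescribe, and the hypotheses, indicators, and numbers are placeholders.

```python
# Illustrative sketch only: the "diagnosis" framing as weighing competing
# hypotheses against incoming indicators. Bayesian updating is my own
# illustration here, not something the CIA chapter or Scoblic & Tetlock
# prescribe; the hypotheses, indicators, and likelihoods are placeholders.

# Prior belief in two competing explanations of what is going on.
priors = {"hypothesis_A": 0.5, "hypothesis_B": 0.5}

# How likely each observed indicator is under each hypothesis. Note that
# both hypotheses can account for the same indicator to some degree -
# one puzzle piece fitting several puzzles.
likelihoods = {
    "indicator_1": {"hypothesis_A": 0.8, "hypothesis_B": 0.3},
    "indicator_2": {"hypothesis_A": 0.4, "hypothesis_B": 0.7},
}


def update(beliefs: dict, indicator: str) -> dict:
    """Revise belief in each hypothesis after observing one indicator."""
    unnormalised = {h: p * likelihoods[indicator][h] for h, p in beliefs.items()}
    total = sum(unnormalised.values())
    return {h: v / total for h, v in unnormalised.items()}


beliefs = priors
for observed in ["indicator_1", "indicator_2"]:
    beliefs = update(beliefs, observed)

print(beliefs)  # roughly {'hypothesis_A': 0.60, 'hypothesis_B': 0.40}
```

The point is only that hypotheses come first and indicators are used to discriminate between them, rather than pieces being accumulated until a picture appears.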
While there are similarities between the two approaches, Scoblic & Tetlock's already has a narrow range of "solutions" in mind, which will likely influence what questions get asked.
It is worth noting that the CIA don’t have a stellar foresight record. Tetlock’s own work found that CIA analysts weren’t that good at probabilistic forecasting.
Improve mental models, not metaphorical balls
That CIA chapter, called “Do You Really Need More Information?”, is particularly helpful for futurists and forecasters. It points out how few variables people actually use to make informed judgements. In experiments (some involving horse betting), the researchers found that providing additional information didn’t tend to improve accuracy, but it did make analysts more confident, and sometimes overconfident. This is similar to some of Tetlock’s superforecasting findings.
But the CIA chapter discusses mental models, tools for thinking, and cognitive biases in much more depth. Lack of information, it argues, was not the main obstacle to accurate intelligence analysis. The chapter concludes that
“efforts should focus on improving the mental models employed by analysts to interpret information and the analytical processes used to evaluate it.”
It acknowledges that this is difficult, but argues it would have a greater impact than improving the collection of information. How we see (that is, what we pay attention to and how we interpret it) is more important than what we see.
Tetlock’s work with the Intelligence Advanced Research Projects Activity was called the “Good Judgment Project”, so it’s surprising that Scoblic and Tetlock mention mental models only in passing. Mental models are the frame, or window, through which we perceive and make sense of things.
They called their article “A Better Crystal Ball” (or at least the editor made that the title), reflecting a need for certainty rather than for a better way of looking at, and thinking about, the world. A crystal ball isn’t much use if you are looking at it through an old, cracked window.