Make better business forecasts by changing how you think

Tomorrow's heroes

One Canadian psychologist became the world’s leading expert on forecasting – a skill he hopes to see cultivated in all major industries.

In the mid-1980s, Canadian psychologist Philip Tetlock gathered a group of economic gurus, media commentators, academics and general know-it-alls to make predictions about the important issues of the day. The research took 20 years to complete, and the findings were surely uncomfortable for those who make a living from forecasting the future.

The average current-events pundit, according to Tetlock’s study, performed about as well as a theoretical dart-throwing monkey. In some instances, the monkey beat them.

Tetlock, a professor of political psychology at the University of Pennsylvania, has spent decades studying the limitations of expert judgement. The results of this first study, which compared pundits with primates, appeared in his 2005 book, Expert Political Judgment: How Good Is It? How Can We Know?

In the past decade, Tetlock has expanded his gaze far beyond political commentary. His new book, Superforecasting: The Art and Science of Prediction (co-written with journalist Dan Gardner), springs from a four-year, US Government-funded forecasting tournament – a contest designed by the US intelligence community to identify good forecasters of all sorts of events.

The tournament required five teams of forecasters to predict complex geopolitical events, from Chinese military expenditure to the stability of the eurozone. A small percentage of forecasters were so consistently accurate in their predictions that they outperformed even the intelligence analysts who had access to classified information. Tetlock calls them “superforecasters”. His latest book shows how others can learn from their approach.

Hazy predictions

Everyday forecasting is riddled with holes. Predictions are often made to advance political agendas or galvanise action. They frequently lack timelines against which their accuracy may be measured, and they are often formulated using ambiguous language, such as “could”, “might”, “likely” and “good chance”.

Tetlock has been troubled by the hazy nature of expert forecasting since the Cold War era, when US public debate was dominated by unverifiable predictions.

“It struck me as worrisome,” he says. “I thought it would be harder to engage in slippery thinking if you honestly and explicitly stated what your expectations were up-front and had to openly confront the discrepancies between your probability judgement and reality. It’s of interest to me, as a citizen, that public policy debates are conducted at such low cognitive levels of functioning.”

It’s not just issues of public policy. Business leaders and stock market investors make decisions every day based on forecasts of untested quality. Many of these predictions are dressed in what Tetlock describes as fuzzy language.

“It’s hard to tell whether they’re on the money or off the money, because they’re so vague,” he says.

Take the example of Harvard economic historian and popular US commentator Niall Ferguson, who told an interviewer in January 2012 that the “Greek default may be a matter of days away”. Which definition of “default” was Ferguson using, asks Tetlock, and why did he choose a hollow word such as “may”?

Tetlock says that when a prediction is framed in fuzzy language it is difficult to disprove – so whatever the outcome, the forecaster’s reputation stays safe.

“If you exist in a blame game culture like Washington DC, you’re much safer saying ‘there is a distinct possibility that the Iranians will cheat on a nuclear agreement’ than to say ‘there’s a 20 per cent or 80 per cent possibility’,” he states.

“People will jump on you if you’re on the wrong side of maybe.”

Battle of the forecasters

What if verifiable predictions replaced vague verbiage? What if every forecast arrived with precise time frames, unambiguous language and numerical probabilities? Predictions could be rigorously examined for accuracy and the best ways of thinking about problems could be discerned. These were the basic ground rules of the forecasting tournament that produced Tetlock’s superforecasters.

It’s easy to assume that the superforecasters are more knowledgeable or intelligent than the rest of us. In fact, Tetlock crowdsourced his tournament team and ended up with a group of relatively ordinary people – retired computer programmers, former ballroom dancers, pharmacists and economic analysts. In the US Government tournament, this group predicted the future with 60 per cent greater accuracy than the other forecasters.

All the forecasting teams tackled tough questions usually reserved for intelligence analysts. Would the president of Tunisia flee to a cushy exile in the next month? Would Serbia be officially granted European Union candidacy by 31 December 2011? The forecasters updated their predictions in real time and their accuracy was evaluated using Brier scores, which measure the gap between forecast probabilities and actual outcomes.
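
For readers who want to see the arithmetic, here is a minimal sketch of a Brier score in Python. Tetlock’s book uses the original formulation, where scores run from 0 (perfect) to 2 (perfectly wrong); the simpler binary squared-error version below conveys the same idea. The function name and sample figures are illustrative, not taken from the study.

```python
# Brier score: the mean squared difference between forecast
# probabilities and what actually happened (1 = event occurred,
# 0 = it did not). Lower is better; always guessing 50% scores 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative data: three yes/no questions.
probabilities = [0.9, 0.2, 0.7]  # the forecaster's stated chances of "yes"
happened = [1, 0, 0]             # what actually occurred

print(round(brier_score(probabilities, happened), 3))  # 0.18
```

A forecaster who hedges everything at “maybe” can never do better than 0.25 on this measure – which is why the tournament’s insistence on precise probabilities makes accuracy measurable at all.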

Quality thinkers

With the tournament over, Tetlock examined the performance of its top 2 per cent – the superforecasters – to identify common traits. He discovered that their accuracy was based not on what they thought but on how they thought.

“To cope in a complex world, there is an advantage to thinking in complex, multi-dimensional, flexible ways,” he says.

“The thing that unites the superforecasters is a willingness to explore the limits of precision.”

Philip Tetlock put forecasters under the microscope to distinguish the good from the great. Photographer: Eric Mencher

Foresight isn’t a mysterious gift bestowed at birth, says Tetlock. Rather, it’s the result of particular ways of thinking: gathering information, breaking questions down into manageable parts, and updating and revising forecasts as new evidence arrives. Superforecasters scored highly on measures of “actively open-minded thinking”. They believed ideas should be tested. They seriously considered the possibility that they could be wrong.
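
One disciplined way to revise a forecast in measured steps – an approach Tetlock’s book discusses – is Bayes’ rule. The sketch below uses invented numbers purely for illustration; nothing here comes from the tournament data.

```python
# Illustrative only: updating a probability with Bayes' rule as new
# evidence arrives. All figures here are invented for the example.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(event | evidence) from a prior P(event)."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Start at 30% that a regime falls this year. A credible report of
# mass defections is assumed twice as likely if the regime will fall.
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.6,
                         p_evidence_if_false=0.3)
print(round(posterior, 2))  # 0.46 -- a measured revision, not a lurch
```

The point is the discipline: each new fact moves the number by a defensible amount, rather than flipping the forecast wholesale.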

“Superforecasters working on teams are continually questioning each other,” says Tetlock.

“That’s one of the key reasons why they don’t fall for groupthink. They rigorously question each other’s assumptions, but they do it in a fairly diplomatic way.”

Committing to accuracy

Forecasting tournaments create artificial environments in which competitors are playing for one thing.

“They aren’t trying to be politically correct for one side or another,” says Tetlock.

“They are simply trying to be as accurate as possible and don’t have to care about recriminations. Let the chips fall where they may; all that matters is accuracy and a laser-like attention to the details of a question and the details of the world.”

Superforecasters must be more data-driven than theory-driven in their thinking, and Tetlock says the tournaments impose an almost monastic discipline on their competitors.

“It takes all the fun out of politics,” he laughs.

Related: Why economic forecasting is a flawed science

Fun aside, Tetlock believes these forecasting methods could improve the nature of public discourse, if only they were applied in the real world.

“There is a perverse inverse relationship between what the public values in leaders and the factors that promote accurate judgement in forecasting tournaments,” he says.

“The public values boldness and decisiveness and simplicity. Forecasting tournaments reward nuance and flexibility and complexity.

“When you look at what people vote for in elections, they are often attracted to simple, decisive rhetoric more than complex rhetoric. They prefer candidates who step on the accelerator in their arguments.”

Tetlock maintains this should not dissuade anyone from developing their forecasting capability.

“The simplest and biggest message of this research program is that [forecasting] is a skill and, if you take it seriously, you can cultivate it and you can get better at it,” he states.

“The real judgement call is whether or not you value the accuracy enough to justify the effort of cultivation.”


