In parts 1 and 2 of this series we looked at the effect cognitive biases have on our view of the past and the present. In this third and final part we’ll look at how such biases affect our view of the future.
Physicist Niels Bohr (a contemporary and collaborator of Einstein) said “prediction is difficult, especially about the future”. If humans are biased in our views of the past and present, our views of the future are even more fraught. We are caught between the Scylla of hopes, fears, emotional attachments, self-justification, identity, psychological projection and the Charybdis of our limited cognitive abilities in areas such as planning, probability, judgment, and causal inference.
Just what do future-oriented biases look like, and what can be done about them by leaders contemplating major change?
Future-oriented biases
“Lake Wobegon… where all the women are strong, all the men are good looking, and all the children are above average” (Garrison Keillor)
For executives making strategic decisions and decisions involving major change, three areas are important: optimism biases, thinking probabilistically, and the planning fallacy.
Optimism: a double-edged sword
“Ver la vida como es y no como debería ser [1].” (Cervantes)
Scan the internet for quotations and articles on optimism and you will find an overwhelmingly optimistic view of optimism. Yet optimism proves to be at the root of several biases which doom change from the start, and which sometimes destroy businesses. Optimism renders us “confident without competence” in our judgments, leads us to analyze risk and probability badly, and makes us Pollyannaish when planning change.
Lest it be thought that optimism is a bad thing: in general terms it is a very good thing. An optimistic outlook means you live longer, are healthier and more emotionally resilient, earn more, attempt more, and succeed more often[2]. To be sure, when a new strategy or change program is contemplated, there is enormous value in full-throated commitment and enthusiasm.
The challenge for leaders, therefore, is not to stamp optimism out; it may even be right to stoke it. But leaders must be conscious of when optimism becomes hubris, and when a positive outlook becomes self-delusion. They must know whether to abandon, forge ahead, or proceed with caution.
British Petroleum (BP) engineers and staff faced such a decision on the Deepwater Horizon drilling platform. In that case they did indeed forge ahead with the simplest and fastest approach to the drilling despite very clear warning signs of danger. What were those warnings?
- An internal BP report recommended against a “long string casing”, but engineers on the Horizon countered that using one would “save at least 3 days, and $7 – $10m dollars”. Expedience won.
- Halliburton modelling recommended using 21 “centralizers”. Some BP engineers agreed: “we need to honor the modelling having chosen the riskier casing”. Other engineers argued “it’s a vertical hole so hopefully the pipe stays centralized due to gravity”. Hope (and time and money) won again, and BP used only six centralizers.
- There were portents that the well had gas leak problems, and a test could have validated or disproven that, but other engineers argued that the test “was only a model” and could wait until full production began.
Here we see optimism’s dark side: denial of expertise, mistrust of models, and hope winning out over prudence. We see people trusting their gut where hundreds of billions of dollars, and many lives, are at stake. My Irish grandmother used to say “don’t get a dog and bark yourself”, yet in the Deepwater Horizon disaster, recommendations, models, and formal decision processes were wilfully ignored because decision makers preferred the sound of their own bark to expert advice.
There is no easy answer to when optimism becomes denial, or hubris. Had things gone smoothly, the managers involved might have been applauded for their can-do practical attitudes. The world of risk means that very poor decisions can turn out well, or “perfect” decisions can fall prey to force majeure.
Confidence without Competence
The problem is not just that our thinking is flawed, but that we are confident in our flawed thinking.
In my Strategic Decision Making program at US business schools, I ask executives to answer ten questions, some of which are:
- How many books in the Old Testament?
- How long is the Nile River?
- How many neurons are in the human brain?
- How far is it from Earth to the Moon?
Nobody but the saddest nerd would get them all exactly right, so I ask them to provide each answer as a range within which they are 90 percent sure the true value falls, what statisticians call a 90 percent confidence interval. The most common score is 40 percent: four out of ten true answers fall within their ranges. My executives get more than half wrong despite being 90 percent sure of every one!
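To make the scoring concrete, here is a minimal Python sketch. The (low, high) intervals are hypothetical responses, not data from the program; the “true” values are commonly cited reference figures. A well-calibrated respondent’s 90 percent intervals should capture roughly nine answers in ten.

```python
# Minimal calibration-scoring sketch. The (low, high) ranges are hypothetical
# responses; the "true" values are commonly cited reference figures.
quiz = [
    # (low, high, true value)
    (30, 45, 39),                 # books in the (Protestant) Old Testament
    (2_000, 3_500, 4_130),        # length of the Nile, miles
    (1e9, 1e10, 8.6e10),          # neurons in the human brain
    (150_000, 300_000, 238_855),  # average Earth-Moon distance, miles
]

hits = sum(1 for low, high, truth in quiz if low <= truth <= high)
print(f"{hits} of {len(quiz)} true values fall inside the 90% intervals")
# A well-calibrated respondent would score ~90%; most executives score ~40%.
```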
This phenomenon does not just apply to subjects where our knowledge is limited. Berkeley professor Philip Tetlock has spent a career studying expert predictions. The experts he studied came from every imaginable area of expertise, and despite their credentials and reputations they fared little better than pure chance, and worse than simplistic methods (such as merely averaging the election results in a precinct over the last few cycles). Fully 15 percent of events the experts said had no chance of happening did happen, and a whopping 25 percent of events they were absolutely sure of failed to happen.
Tetlock categorized experts as hedgehogs or foxes. Hedgehogs have one Big Idea, and are specialized and ideological. Foxes know a little about lots of things, are self-critical, study the evidence, and are cautious and flexible. It sounds as if we would want our world run by foxes, but quite the opposite is the case. We vote for hedgehogs, and employ them as pundits, journalists, and writers. And as CEOs. Al Dunlap built a very (very) lucrative business career as a master cost-cutter, earning him the moniker “Chainsaw Al”. He slashed and burned his way through CEO roles with charisma and competence. It worked well when cost-cutting was the correct strategy, and abysmally when it was the wrong one.
Humans are confident without necessarily being competent, but in boardrooms it is worse. “Strong leadership”, “knowing one’s stuff”, and “exuding confidence” are good personas to project. Are not leaders supposed to be sure of themselves and have the answers? To demur, to reflect, or to change one’s views is seen as “flip-floppy” (even more so in the national political arena). To employ Tetlock’s dichotomy, boardroom culture is heavily hedgehog.
Hedgehogs respond badly to challenge, deal badly with complexity, like straightforward solutions, and generally do not play well with foxes. Those are exactly the attributes that often win promotions, yet they are very poor ones for managing major change or complex, risky endeavors (such as ultra-deep offshore oil wells). Which is your team’s culture?
Thinking about risk and probability
The illusion of control deludes leaders into banking on a single outcome in situations that are highly risk dependent. (Kahneman)
Imagine an ambitious vice president who presents a 30-slide business case and concludes with statements such as:
- “This project will take 18 months and cost $65 million; however, there is a 40 percent chance that it will take twice as long and cost 75 percent more.”
- “This $10 million investment clears our hurdle rate, but 50 percent of the time we will only break even and 15 percent of the time we will lose money.”
- “Based on my analysis the probability of this product succeeding is 35 percent.”
Our VP will not last long: corporate culture does not applaud probabilistic analyses. President Truman (a hedgehog) once moaned, “Can someone please find me a one-handed economist?”, because foxes bore the pants off hedgehogs with their “on the other hands” and “howevers”.
While probabilistic reasoning is tedious and leads to more ambiguous conclusions, reality operates probabilistically. The more complex the system, the more variable (risky) the outcomes. The profound implications of this essential feature of reality still elude us in all the practical disciplines. A complex project, or an oil rig, or an economy, can be blown off course (or blown up) by a myriad of factors. Each part of a project, each small task, each person who must be persuaded, each milestone to be met has its own distribution of possible outcomes. The expected delivery date of a project that has 10,000 tasks is the sum[3] of all those distributions.
When leaders consider a single plan and budget, they are trying to land on a dime how long 10,000 tasks will take and how much they will cost. They arrive at a best-case estimate, or perhaps a most-probable one. In the hundreds of consulting proposals I have written or reviewed, not a single one offered anything other than a single end date and cost[4].
In a world where reality looks like bell-shaped curves, we fall back on proxies for reality: means, medians, and modes. However, if you cannot swim, wading across a river that is four feet deep on average, or four feet deep in most places, will get you into big trouble. The river’s depth is a distribution, and using just one slice of the distribution is, well, absurd.
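To make this concrete, here is a minimal Monte Carlo sketch in Python. Every number in it (the task count, the optimistic/most-likely/pessimistic durations, the shared risk factor) is invented for illustration; the point is only that a plan built from most-likely estimates sits well below the simulated mean, and further still below the 90th percentile.

```python
import random

# Monte Carlo sketch (invented numbers): each task has a right-skewed
# optimistic / most-likely / pessimistic duration, and all tasks share a
# project-wide risk factor so that delays are correlated rather than
# independent -- an assumption, but one that keeps the tail realistic.

random.seed(1)
N_TASKS, N_TRIALS = 200, 2_000
OPTIMISTIC, LIKELY, PESSIMISTIC = 2.0, 3.0, 8.0   # days per task (hypothetical)

plan = N_TASKS * LIKELY   # the single number that ends up on the slide

totals = []
for _ in range(N_TRIALS):
    shared_risk = random.uniform(0.9, 1.4)   # correlated, project-wide factor
    work = sum(random.triangular(OPTIMISTIC, PESSIMISTIC, LIKELY)
               for _ in range(N_TASKS))
    totals.append(work * shared_risk)

totals.sort()
mean = sum(totals) / N_TRIALS
p90 = totals[int(0.9 * N_TRIALS)]

print(f"Planned (most-likely) duration: {plan:,.0f} days")
print(f"Simulated mean duration:        {mean:,.0f} days")
print(f"90th-percentile duration:       {p90:,.0f} days")
```

The shared risk factor matters: if the task durations were fully independent, the law of large numbers would make the total look deceptively precise, which is exactly the false comfort a single point estimate provides.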
Venture capitalists and traders think in distributions: they know they will get only some bets right, and that their job is to get the big ones right and the small ones wrong[5]. Every endeavor has a distribution of returns, and a business has to be resilient enough to handle deviations from the mean.
Planning fallacy
In preparing for battle, plans are useless, planning is indispensable. (Eisenhower)
Professor Bent Flyvbjerg, of Oxford’s Saïd Business School, specializes in analyzing major projects. He studied 1,471 such projects and found an average cost overrun of 27 percent. That number may strike you as tolerable, perhaps even good. But bearing in mind our caution about crossing a river that is four feet deep on average, that figure should raise eyebrows rather than give comfort. Flyvbjerg also found that one in six projects had a cost overrun of 200 percent and a schedule overrun of almost 70 percent.
To recap some of the more grotesque overruns from chapter 1:
- US Air Force: 100 percent ($600 million overrun)
- NHS Connecting for Health: 700 percent ($15 billion overrun)
- Boston’s Big Dig: 190 percent ($12 billion overrun)
- Denver International Airport: 100 percent ($3 billion overrun)
Lest you be seduced into thinking this phenomenon only applies to projects of enormous complexity, recall that Flyvbjerg studied 1,471 projects. It applies equally to projects of tiny scale. One study asked students to estimate how long it would take to complete a term paper (a) if everything went as well as it could, and (b) if everything went as poorly as it could. They provided a range of 27 to 49 days. Term papers were handed in after an average of 55.5 days.
Kahneman summarizes as follows, “Executives make decisions based on delusional optimism rather than on a rational weighting of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or to deliver the expected returns — or even to be completed.”
What does this suggest for change strategy?
The first is the management of capital. Banks “stress test” their portfolios against numerous scenarios (not very well, as 2007 proved). Corporations must do likewise, estimating their ability to shoulder the one-in-six 200 percent overrun, and must also weigh the investment against middle-of-the-distribution events, such as realizing only 50 percent of the benefits.
The second is to diagnose and name “fat-tail” events and to rationally reassess prospects. Fat tails are part of the fabric of the universe, and even the best decisions will encounter them. Executives must use this language, “fat tails” and “escalation of commitment”, to rigorously reassess projects, bearing in mind sunk-cost bias and impression-management effects (that is, not wishing to look bad to the business, to Wall Street, or to customers).
The third is a procedure proposed by Kahneman, who did the pioneering work on the planning fallacy in 1979. He suggests the following steps for “grounding” planning in reality (a brief sketch of the arithmetic follows the list):
- Identify an appropriate reference class (e.g., school building project, IT project, family room addition, etc.)
- Obtain the statistics of the reference class (e.g., percentage by which expenditures exceeded budget, project delays, cost per square foot, etc.). Use this objective research to generate a baseline prediction.
- If, despite your disciplined efforts, you believe optimism bias is still at play, adjust the baseline prediction as necessary.
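A minimal Python sketch of these steps follows. All of the reference-class overrun ratios and the $10 million inside-view estimate are hypothetical, and the nearest-rank percentile helper is used only for illustration.

```python
# Minimal sketch of reference-class forecasting (hypothetical numbers throughout):
# take the distribution of cost overruns observed in a reference class of similar
# projects and use it to turn an "inside view" estimate into a baseline prediction.

# Steps 1-2: overrun ratios (actual cost / budgeted cost) from a hypothetical
# reference class of comparable IT projects.
reference_overruns = [1.05, 1.10, 1.20, 1.25, 1.30, 1.45, 1.60, 2.00, 3.00]

inside_view_estimate = 10_000_000   # our own bottom-up cost estimate, in dollars

def percentile(sorted_values, p):
    """Nearest-rank percentile of an already-sorted list."""
    idx = min(len(sorted_values) - 1, int(p * len(sorted_values)))
    return sorted_values[idx]

overruns = sorted(reference_overruns)
median_overrun = percentile(overruns, 0.5)
p80_overrun = percentile(overruns, 0.8)

# Step 2: anchor the baseline on the reference class, not on our own plan.
baseline = inside_view_estimate * median_overrun
stress_case = inside_view_estimate * p80_overrun

print(f"Inside-view estimate: ${inside_view_estimate:,.0f}")
print(f"Baseline (median overrun x{median_overrun:.2f}): ${baseline:,.0f}")
print(f"80th-percentile case (x{p80_overrun:.2f}): ${stress_case:,.0f}")

# Step 3: only after this, adjust the baseline for any residual optimism bias
# you can explicitly justify.
```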
Most writing on leadership contains a great deal on creating a compelling and optimistic view of the future. Little is written about its darker sides: hubris and denial. Less still is written about how to balance those forces. Given the uncertain and probabilistic nature of change, leaders must be conscious of how often 100 percent confidence means only 75 percent correct (as Tetlock showed), must ensure that big decisions are evaluated probabilistically, and must ensure that the multiple biases (the illusion of control, the planning fallacy) that affect project budgets and timescales are factored in.
Conclusion
One could conclude, after this tour of human fallibility, that it is miraculous how successful we are at building cathedrals, computers, and corporations given how wrong our judgments can be. There are interesting hypotheses about the roots of bias, and about how the evolution of our brains and our social technologies (such as decision making) has not kept up with our technological prowess.
Contributed article: This article is part three of a three-part series by Paul Gibbons who is writing The Science of Organizational Change: How to Leverage 21st Century Intelligence on Human Behavior. He blogs at www.paulgibbons.net and his most recent book Reboot Your Life: A 12-day Program for Ending Stress, Realizing Your Goals, and Being More Productive is available on Amazon.com and from www.paulgibbons.net/rebootyourlife
- [1] To see life as it is and not as it should be.
- [2] As in most social science research, the arrow of causality could run either way, or more probably both. Healthier, wealthier people are (understandably) more optimistic.
- [3] It is not really a sum; combining the distributions has the same sort of mathematical complexity as a weather system.
- [4] Assessing variability and risk of projects is in fact an important and relatively recent addition to the core capabilities of big consulting firms.
- [5] Strategies vary and are more complex than this, but both VCs and traders expect to lose some of the time.