“The most important failure was one of imagination.”
Executive Summary, Final Report of the National Commission on Terrorist Attacks Upon the United States (“The 9/11 Commission Report”)
“Use imagination
As a destination
Use imagination
As a destination
As long as long as you believe
Then you hold the key
Use imagination And arrive
Justice, “Pleasure”
It's a phrase that has become rote. It's hauled out every time something happens that people think, justifiably or not, “should” have been foreseen. Business failures, engineering failures, strategic surprises. Everything gets called a “failure of imagination.”
What is a failure of imagination?
It seems like a simple concept. You read the 9/11 Commission Report and think to yourself, “yup, should have seen it coming.” You, of course, could have read “Bin Ladin Determined To Strike in US” and known instantly it meant that al Qaeda planned to crash airplanes into buildings, not hijack airliners and hold the passengers hostage, or plant bombs on them and blow them up in midair. Because, as a clichéd but nevertheless true saying puts it, “hindsight is 20/20.”
It is completely understandable that the American people demanded accountability from their leaders for the attacks. And if accountability was not forthcoming - no one was fired or resigned after the September 11, 2001 terrorist attacks on the U.S. - they were entitled, at the very least, to an explanation of what went wrong and why terrorists were able to kill nearly 3,000 people. An explanation of why their multibillion-dollar intelligence-gathering apparatus, with SIGINT, ELINT, IMINT, and HUMINT on a scale probably greater than that of any other country, did not detect and prevent the attacks. The ultimate answer boils down to a failure of imagination - no one thought of the possible attack vector, so no efforts were devoted to stopping it.
But the phrase bothers me. Because it is a very mild reprimand, but also a blank cheque.
The specific content of a failure of imagination, in regard to the 9/11 Attacks, the 2008 Financial Crisis, and the rise of Daesh (also known as ISIS), is a failure of divergent thinking. One potential ‘read’ on the data, one potential story, is picked out as the explanation for the information. Analysts don't bother to come up with any more potential scenarios or interpretations beyond the first, most plausible one or two or three. Then the process of filtering raw intelligence into reports, which get filtered into higher-level summaries, which get filtered into the President’s Daily Brief, removes all nuance and subtlety, until there is just a paragraph or two summarizing potentially tens of thousands of man-hours of collection, analysis, and processing. Bold speculations or hypotheses don’t make it very far up the chain.
Now, I am the very last person in the world who will ridicule imagination. I'm an artist and a writer. Among the most fulfilling parts of my life is the time I spend creating, taking dreams from within my mind and making them things apart from myself. And even if I did not derive great satisfaction from that, my own life has been immeasurably enriched by the imaginative productions of others. But I create freely, without limits other than my own time, attention, and interest. Analysts, whether of national security, politics, or economies and financial markets, have to produce actionable information.
The range of potential scenarios has to be restricted. There have to be limits to imagination in any practical, time- and resource-limited context. Not just to save time, but because if you start using imagination to guide policy, you can wind up in some very dark places.
He’s gonna take you back to the past…
Let's go back seventy years or so. To the Free World (and I use the term “Free World” without irony), the Soviets appeared to be an enormous military colossus. They had the largest army in human history, with seemingly endless arrays of tanks and warplanes. They had conquered half of Europe and gave every appearance of intending to take the rest when the time was right. The most populous nation on Earth, China, had become Communist.
And the Soviets had nuclear weapons. Their spies had stolen bomb designs from Los Alamos, and they had the means to deliver them. Khrushchev boasted that their intercontinental ballistic missiles were rolling off the production line “like sausages.”
If imagination - also known as fear, uncertainty, and doubt - had ruled the day, the Americans would likely have launched a preventive war at the slightest provocation, the way the Japanese had launched one against them. When you perceive your opponent to be ever strengthening and your position to be ever weakening, it's an understandable move.
But rather than be ruled by imagination, President Eisenhower, his advisors, and people in the United States intelligence community asked a smart question:
If we cannot figure out what the Soviets intend to do, can we figure out what they could do?
Intelligence agencies, when they aren't the subject of paranoid fantasies, are often the subject of mockery. But what is missing in many of those finger-pointing exercises is a distinction between the kinds of intelligence estimates, because some kinds fail far more often than others. The kind that fails most often is the intention estimate: the attempt to figure out what the leaders of other nations plan to do. Even the leaders of allied countries do not keep each other constantly updated on everything they plan to do, especially in any area where the two nations compete or are seeking zero-sum goods in negotiations. The situation is even harder when the other country is hostile and a closed society.
The kind of intelligence estimate that fails less often is the capability estimate: what can a nation do? Factories and tanks can be counted, missile silos can be located, and troop deployments and movements can be mapped. This can be combined with other intelligence sources, from electronic monitoring of radar emissions to listening in on decrypted radio communications. It might be hard to tell whether a country really intends to invade a neighbor or is just bluffing - its leaders aren't about to show their cards just because you ask nicely - but you can tell whether or not it could invade that neighbor if it wanted to.
To figure out what the Soviets could do, the CIA built the U-2, a high-altitude reconnaissance plane that could, in one flight, cross the Soviet Union, snapping high-resolution photographs of thousands of square kilometers. Later, they built the CORONA satellites - the first of the KEYHOLE series - which provided more regular coverage than the U-2 overflights could and didn't involve risking the lives of pilots.
From this, Eisenhower and his administration learned that the Soviet nuclear threats were a bluff. The Soviets had fewer than a dozen deployed ICBMs capable of reaching the United States, and a bomber force small enough that it could, in time of war, be intercepted and shot down by continental air defenses. Their land forces in Eastern Europe were substantial, but standing up for Western Europe did not, at that time, entail an existential risk to American citizens.
Of course, this changed, but the hair-trigger options evident in early nuclear war plans were shelved. The most dramatic example of this was during the Cuban Missile Crisis, when President Kennedy and his advisors contemplated - but did not act upon - the idea of a preemptive strike against both Cuba and the Soviet Union's strategic weapons. No better moment for a counterforce attack existed in the history of US/Soviet relations, but Kennedy decided against it. The Soviets were not as strong as they boasted they were. Khrushchev’s shoe-banging at the UN and his proclamation that “we will bury you” were all bluster. The US knew that this was the case. The Soviets could not follow through on their threats. The US stood its ground. The Soviets backed off, in (secret) exchange for the US removing its missiles from Turkey.
If the President and his advisors had relied on their imagination, instead of capability-based intelligence, much of modern Russia and Eastern Europe could be an irradiated wasteland, and the United States would bear a war guilt orders of magnitude greater than that of Japan for the Pacific War.
Retrospective analyses of intelligence failures are on point insofar as they cite specific bureaucratic, organizational, and political obstacles to intelligence collection, analysis, dissemination, and tasking. But to blame a “failure of imagination” is so broad as to be useless.
Maybe Al Qaeda could use rapidly developing drone technology to murder Americans from the skies. Maybe they could cause two tanker trucks, each loaded with half the components of a binary chemical weapon, to collide in a major city, creating a mass casualty event. Maybe they could buy firearms at gun shows without background checks and go on mass shooting rampages. Maybe they could hack critical infrastructure and cause power grids to fail, or dams to open their floodgates and kill thousands upon thousands of people downstream. Once you start imagining, the possibilities are endless.
Imagine this from the perspective of policymakers. You are presented with dozens of potential scenarios, all nightmarish, all competing for a vast but still finite amount of funding.
And maybe Saddam Hussein is lying about having abandoned his weapons of mass destruction. Maybe he has built mobile labs to culture lethal pathogens and produce mustard gas and nerve agents. Maybe he didn't destroy all of his long-range missiles.
And maybe we need to attack now, because we don't want “the smoking gun [to be] in the form of a mushroom cloud.”
More Responsible Imagination
Imagined scenarios of bad things that could happen have a lot of flaws. Most basically:
They tend to be monocausal (A causes B, and B causes C, and C causes…), while the real world is multi-causal and causality between elements can flow in multiple directions. The world is not a directed acyclic graph.
Any single story is highly improbable: each specific detail added to the chain is one more condition that has to hold, so the probability of that exact scenario shrinks with every elaboration.
They can railroad imagination: a particularly vivid scenario is elaborated automatically by the human mind, while a less vivid one is either ignored or given only cursory consideration. You tell people a story about terrorists using AI to figure out how to make bioweapons in their apartment and then, before you know it, people are holding symposia on preventing exactly that and only that problem.
The responsible way to use imagination in these situations is to generate a very large number of scenarios while committing to none of them. After generating them (ideally with multiple, independent teams, to prevent idea fixation), step back and see what the scenarios have in common. Identifying themes and clustering the scenarios by similarity will surface the factors that lead to the bad outcomes, and that is where efforts of collection, analysis, and mitigation can be focused.
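To make that clustering step concrete, here is a minimal sketch in Python. Everything in it - the scenario names, the factor tags, the similarity threshold - is hypothetical and purely for illustration; in practice the tagging would be done by the analyst teams themselves, and the clustering could be as informal as index cards on a whiteboard.

```python
from collections import Counter

# Hypothetical scenario set. Each scenario is tagged with the enabling
# factors an analyst thinks make it possible. Names and tags are invented.
scenarios = {
    "hijacked airliner flown into a building": {
        "lax cockpit security", "agencies not sharing information", "visa screening gaps"},
    "binary chemical weapon assembled from tanker trucks": {
        "easy access to precursors", "agencies not sharing information"},
    "mass shooting with legally purchased firearms": {
        "easy weapon access", "agencies not sharing information"},
    "cyberattack opens a dam's floodgates": {
        "unpatched control systems", "poor monitoring of infrastructure"},
    "cyberattack blacks out a power grid": {
        "unpatched control systems", "poor monitoring of infrastructure"},
}

def similarity(a, b):
    """Jaccard similarity between two factor sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Greedy single-link clustering: put a scenario into the first cluster that
# already contains a sufficiently similar scenario, otherwise start a new cluster.
THRESHOLD = 0.25
clusters: list[list[str]] = []
for name, factors in scenarios.items():
    for cluster in clusters:
        if any(similarity(factors, scenarios[member]) >= THRESHOLD for member in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])

# The payoff: which enabling factors recur across otherwise different scenarios?
factor_counts = Counter(f for tags in scenarios.values() for f in tags)

print("Scenario clusters:")
for cluster in clusters:
    print("  -", "; ".join(cluster))

print("Most common enabling factors:")
for factor, count in factor_counts.most_common(3):
    print(f"  {factor}: appears in {count} of {len(scenarios)} scenarios")
```

The particular clustering method matters much less than the habit it encodes: the output you act on is the list of recurring enabling factors, not any single vivid scenario.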
This technique wouldn’t have prevented the 9/11 attacks. The root cause seems, from my reading, to have been twofold: (1) a Clinton administration unwilling to commit to killing enemies abroad whom it could not reach through law enforcement, and (2) bureaucratic structures that not only made it difficult for the CIA and FBI to share information with each other but actively encouraged them not to. But maybe a sufficiently robust analysis would have also asked, “in all these scenarios, what, internal to the United States and its intelligence community, would stop us from preventing the attack?” An inverse capability analysis: what about us makes us unable to stop our enemy?
Imagination is great, but it’s the first stage of solving problems, not the terminus.