I’ve been an analyst, or have managed analysts, almost all my career, either in intelligence at the National Intelligence Council (NIC) or in policy, especially at the RAND Corporation. These are the dos and don’ts I’ve learned, with stories to make them more vivid.
By Gregory F. Treverton
Note: The views expressed here are those of the author and do not necessarily represent or reflect the views of SMA, Inc.
I should lead with my chin: what I got wrong. Many things, I’m sure, but most recent, hence vivid, is the Russian invasion of Ukraine. As I watched the Russian troop buildup around Ukraine, I asked myself, why in the world would Vladimir Putin mount an all-out assault? I couldn’t think of any reason, and so I concluded this must be a show of force to push the United States and NATO to engage Russia in serious security negotiations. It was a rookie mistake, mirror-imaging. I’m not Putin. Meanwhile, my former colleagues in intelligence got it right. They saw that the number of troops Russia had deployed was far too many for a show of force. An invasion impended. They were right and I was wrong.
That leads to my first lesson about analysis: do pay attention to the data and the numbers. I should have learned that lesson when I was Vice Chair of the NIC in the 1990s. During the peso crisis in Mexico, I had a wonderful National Intelligence Officer for Warning, who watched the peso fall in value (in fact the fall was even greater than she observed because the Mexicans were cooking the books). She concluded that the country would have to devalue. But both the Treasury Department and Wall Street told a complicated, reassuring story about why a new Mexican financial instrument, tesobonos, would save the day. Suffice it to say, she was right, and they were wrong. Pay attention to the data.
My second do is do use some method when appropriate. Some years ago, one of RAND’s competitors did a project I would have loved to do, looking across the U.S. intelligence agencies to see what methods they used in forecasting. What it found was doubly bad: mostly, the agencies used no method at all, and when they did, the method was brainstorming to reach an agreed forecast. Research has shown that to be a recipe for groupthink. It is much better to ask experts individually, then somehow aggregate their answers.
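As a minimal sketch of what that aggregation might look like (the figures and the trimmed-mean rule are my illustration, not anything from the study), consider polling experts separately and then combining their probability estimates:

```python
# Minimal sketch: aggregate individually elicited expert forecasts
# instead of brainstorming toward a single agreed number.
# The estimates and the trimming rule are illustrative only.

def aggregate_forecasts(estimates, trim=1):
    """Trimmed mean of individual probability estimates (0.0 to 1.0).

    Dropping the most extreme answer on each side damps outliers
    without forcing the group toward a negotiated consensus.
    """
    ordered = sorted(estimates)
    kept = ordered[trim:len(ordered) - trim] if trim else ordered
    return sum(kept) / len(kept)

# Five experts asked separately: "How likely is an invasion?"
individual = [0.85, 0.70, 0.90, 0.40, 0.75]
print(f"Aggregate estimate: {aggregate_forecasts(individual):.2f}")  # 0.77
```

The point is less the particular rule than the discipline: each expert commits to a number before seeing anyone else’s, so no dominant voice can pull the group toward a premature consensus.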
My sample size is small, but I also have the sense that decision-makers accept bad news better if it derives from some method. This may be an artifact of my time in Washington, where things tend to get taken personally. Methods depersonalize the bad news, suggesting some rigor as the analyst pours cold water on a pet scheme of the decision-maker. At the beginning of the Clinton Administration, I commissioned an exercise called Factions, a way of aggregating the strength of various political actors and their positions—things intelligence analysts know. The question was what it would take to make Serbia change its goals in Bosnia. The answer was airstrikes on industrial targets in Serbia proper, a step that the Administration was years away from even considering.
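The mechanics of Factions itself aren’t described here, but the arithmetic behind such tools is typically simple. What follows is a hypothetical sketch of aggregating actor strength and position; every actor name, weight, and scale is invented for illustration:

```python
# Hypothetical sketch of a Factions-style aggregation. The real
# exercise's mechanics are not described in the text; every actor
# name, strength, position, and salience below is invented.

# Each actor: (relative strength, position on a 0-100 policy scale,
# salience, i.e., how much the actor cares about this issue).
actors = {
    "Faction A": (0.50, 20, 0.9),
    "Faction B": (0.30, 70, 0.6),
    "Faction C": (0.20, 90, 0.8),
}

def weighted_position(actors):
    """Strength- and salience-weighted mean of actors' positions."""
    total_weight = sum(s * sal for s, _, sal in actors.values())
    return sum(s * pos * sal for s, pos, sal in actors.values()) / total_weight

print(f"Expected policy outcome: {weighted_position(actors):.1f}")  # 45.6
```

Even a model this crude does useful work: it forces the analyst to write down who matters, how much, and where they stand, and it lets the decision-maker argue with the inputs rather than with the analyst.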
A third do is do ask yourself, “why do I like this analysis?” Is it because the conclusion is convenient? During the Cold War, when Solidarity in Poland began its protests, intelligence agencies expected a crackdown. But they expected that to be Soviet tanks rolling into Warsaw, and that scenario was what drove planning. When the Poles imposed martial law on themselves, that was in some sense better for the world. But it was inconvenient. It hadn’t been planned for.
A fourth do is do be humble. I wish intelligence would more often say “we don’t know.” At the NIC, we worked hard to be more systematic about likelihood. Each of our products had an appendix of definitions: when we say “probably,” we mean between 50 and 70 percent likely, for instance. I’m sure that none of our policy counterparts ever noticed the appendix, much less paid attention to it. But it disciplined us.
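The appendix itself isn’t reproduced here, but the discipline is easy to picture. In the sketch below, only the “probably” band of 50 to 70 percent comes from the text; the other bands are placeholders I have invented, not the NIC’s actual standard:

```python
# Sketch of a likelihood lexicon like the NIC appendix described above.
# Only the "probably" band (0.50-0.70) comes from the text; every other
# band is an invented placeholder, not the NIC's actual standard.

LIKELIHOOD_BANDS = {
    "remote":         (0.00, 0.10),  # placeholder
    "unlikely":       (0.10, 0.40),  # placeholder
    "roughly even":   (0.40, 0.50),  # placeholder
    "probably":       (0.50, 0.70),  # from the text
    "very likely":    (0.70, 0.90),  # placeholder
    "almost certain": (0.90, 1.00),  # placeholder
}

def word_for(probability):
    """Map a numeric estimate to the disciplined vocabulary."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    for word, (low, high) in LIKELIHOOD_BANDS.items():
        if low <= probability < high or (high == 1.0 and probability == 1.0):
            return word

print(word_for(0.65))  # -> probably
```

Whether or not readers ever consult the table, writing to it keeps the analyst honest about what “probably” is supposed to mean.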
By contrast, our work on confidence was just a hand wave. I found in one of our products a footnote that said something like, “unless otherwise indicated, we have moderate confidence in all this paper’s assessments.” That would have earned the comment I write on student papers: “NSS,” for “no shit, Sherlock.” It was meaningless. The furthest I have gotten in doing better is to ask three questions of any analytic conclusion: How reliable is the available evidence? What is the range of opinion? And, perhaps most important, what would it take for me to change my view?
A final do is do think of analysis as a team effort. At the NIC, we took peer reviews very seriously, conducting them both on terms of reference and on finished products. In almost every conversation, as we went around the room, someone would make a comment that was immensely helpful in crystallizing a doubt or changing the perspective. We often think of peer reviews as tedious at best, so I will always value the comment one National Intelligence Officer, a career CIA analyst, made at his retirement celebration. He said that what he would remember most fondly from the NIC was the peer reviews.
I have two don’ts. The first is don’t be too linear in thinking about the future. To be sure, tomorrow is likely to look a lot like today, but the critical instances are often when it doesn’t. As part of the run-up to Global Trends—the unclassified look twenty years out that the NIC publishes every four years—I reviewed earlier versions, especially Global Trends 2015, which was done in the late 1990s. Since 2015 had come and gone, a review seemed fair, and on the whole the publication stood up well. My main critique was that it was perhaps too straight-line in extrapolating the present into the future. After all, while the undersides of globalization—terror and global “bads” moving as easily as goods—were apparent, in the late 1990s globalization looked pretty rosy.
I grappled with the same challenge in a book of my own about intelligence, one written before 9/11 but published afterward.[1] When I turned to the conclusions, I realized that my vision of the future of national intelligence presumed a fair-weather view of globalization. Accordingly, in the first section of the concluding chapter, I asked what might knock that rosy prospect off course. I came up with two possibilities—a major terrorist attack on the United States and a major global recession. Neither was rocket science, for a parade of blue-ribbon panels had warned that terrorism would come to America’s shores, and in any twenty-year period a recession of some size is almost inevitable. Still, I didn’t expect—and surely didn’t want—to bat two for two!
The final don’t is don’t be casual about assumptions. My years of trying to make analysis improve decisions have led me to one principal maxim: assumption is the parent of fuck-up. When I left the NIC, I had glass paperweights made for my closest colleagues. They bore the maxim, but in letters small enough to keep the paperweights presentable. The older I get, the more I think the maxim applies to more than intelligence. It applies to life as well.
[1] Treverton, Gregory F., Reshaping National Intelligence for an Age of Information (Cambridge: Cambridge University Press, 2002).