The Future of Intelligence

A funny thing happened between my preparing to leave the chair of the National Intelligence Council (NIC) at the end of last year and now: Donald Trump was elected president.

By Gregory F. Treverton

NOTE: The views expressed here are those of the author and do not necessarily represent or reflect the views of SMA, Inc.

As I prepared to work and speak about the future challenges of national intelligence, I thought I had a clear view of the issues on the horizon, if not always an equally clear view of what to do about them. Yet here, too, as in almost every aspect of US policy, Trump’s election has scrambled the deck, injecting enormous uncertainty. So this list begins with the challenges as I might have portrayed them in more normal times, then concludes with reflections on my puzzlement about how enduring and how momentous the distinctly non-normal times Trump has ushered in will be.

Balancing Strategic and Tactical. This is an enduring challenge; hand-wringing about the primacy of the urgent over the important has characterized all of my years as a student, consumer, and sometime practitioner of intelligence. Yet it is made worse by the shapelessness of the current world, which means that every crisis has to be approached afresh on its own terms, and, especially, by the nation’s hypersensitivity to the terrorism threat. That threat to the United States homeland remains minimal, but that is hardly the way it is perceived by the public—or characterized by politicians. From my perch at the NIC, the acute sensitivity was doubly deforming of our work. When we looked at Nigeria, there was not much Nigeria: it was Boko Haram. And even when we looked at Boko Haram, there was not much Boko Haram: it was all deciphering networks and targeting bad guys. We all wondered and worried: where do these people come from, and why are they doing what they’re doing? We did what we could at the NIC to understand root causes and motivations. But we were only scratching the surface.

In 2016, the NIC produced about 700 pieces of paper, and more than half of those were memorandums from a National Intelligence Officer to the National Security Adviser, her deputy, or another senior National Security Council official. They came directly from the deliberations of the two main policymaking bodies in the administration—the Principals Committee (the relevant cabinet secretaries) and, especially, the Deputies Committee (their deputies, and the focal point for assessing options and teeing up decisions). Not all those papers were purely tactical. Some were the “what ifs?” of the sort that should be the woof and warp of intelligence-policy relations: “if we do x, how will Putin respond?” Because we were at all the policy meetings, we knew what was going on. But my task, every day, was to find time and capacity not just to answer the questions policy officials asked but also to answer the more strategic ones they weren’t asking.

Building—and Adjusting—“Stories” in a Shapeless World. This is kin to the strategic/tactical challenge, and one that bears more directly on warning. I have come to think that intelligence is ultimately about telling stories, and most intelligence—or warning—failures derive from holding onto stories that events have outmoded. A story from another realm, Ebola, drives the point home. The medical community had a “story” about Ebola: because death was quick, its period of contagion was brief, thus it would flare up and die out in remote regions. The trouble was that much better transit from rural areas to urban ones had overtaken the story.

The shapelessness of the world both confounds and demands strategic analysis. If intelligence is storytelling, many of our current stories are suspiciously long in the tooth. In policy terms, for instance, we have been telling ourselves the same story about North Korea for a generation: with just the right combination of carrots and sticks, primarily the latter, and with China as a real partner, we can induce North Korea to forswear nuclear weapons. Meanwhile, North Korea has gone from an incipient nuclear power to a real one. Intelligence cannot prove and thus cannot say the truth: North Korea is a nuclear power and will remain one; that is all the regime has. But at least challenging the prevailing story would be a start.

For other critical issues, like the Middle East, we have no real story beyond demonizing terrorists and Iran. To be sure, the task is hard. Throughout my tenure at the NIC, I looked for strategic insights and found precious few, because the issues are complicated and the causal arrows tangled. The best I found came from our Australian colleagues, who divided the conflicts into three and a half factions—the ISIL-led Sunni extremists; the Saudi-led Sunni autocrats; the Iran-led Shias; and the missing half, the Muslim Brotherhood-led Sunni moderates, recognizing that the term “moderate” is relative at best. But the difficulty of the task is no justification for not trying it. Otherwise, we can all too easily blunder into major campaigns against minor threats or, still worse, create those threats.

Transparency and “Big Data.” These are two sides of the same coin. The same ubiquity of information that produces so much for intelligence agencies to assess also makes it impossible for their operatives to remain secret for long—and, alas, guarantees that there will be more leaks of methods, if not more Snowdens. Perhaps the vision of the future should be more akin to Silicon Valley, where secrets are kept but not for long, and where the premium is on collaboration, even if today’s partner may be tomorrow’s competitor.

But that data will be a godsend for intelligence. Indeed, the analytic challenge is greater for intelligence than for private businesses, most of which want to predict where I will be tomorrow so they can besiege me with ads for things I like. At the NIC, I started an experiment in the Africa account. Its premise was that while there isn’t a huge amount of intelligence information on Africa, there is a lot of data out there; the goal was an existence theorem: if the NIC, with a hundred analysts, could make use of data, any place in the Intelligence Community could. Not surprisingly, we found that social media and other available data were pretty good at predicting famine and disease. The next step was to cull “tips” from the data: where should analysts look, and what connections should they probe that they hadn’t considered?

The NIC also inherited a nifty bit of crowdsourcing that had been developed by the Intelligence Advanced Research Projects Activity (IARPA), intelligence’s counterpart to DARPA. There were two prediction markets, one classified and composed of intelligence professionals, and the other unclassified. The open one was the creation of Philip Tetlock, and it had made two important discoveries. Just as some people are better athletes than others, so, too, some people are better predictors; his open market came to feature “super-predictors.” Even better, a small amount of training improves prediction. Unsurprisingly, the burden of that training is helping people keep an open mind just a few seconds longer. I used the internal market as a kind of “red cell”: if the experts thought development x was y percent likely but the market was betting 2y, what was going on? I didn’t care about the numbers; it was the conversation that mattered. And I hoped to move the market from fairly short-run predictions, which could be settled soon, to longer, more strategic questions. For them, I hoped we might create way stations on which to bet and, in the process, perhaps do better at constructing what intelligence calls “indicators.”

Breaking the Cycle. It has often been said that the canonical intelligence cycle, from requirements through collection to analysis and dissemination, is often short-circuited. That is true enough—no matter how much intelligence agencies dislike it, policy officials will hanker for the next “raw” spy report or intercept. But as a paradigm the cycle is increasingly unhelpful. In this, as in many other ways, what worked tolerably well in the Cold War is dysfunctional now. Then, with one overarching and secretive foe, it made a certain sense to ask, in a linear way, what we needed to know and how we might collect it. Even analysis had a certain industrial quality about it: a friend who was an NSA Soviet analyst recalls starting the day with a large stack of “her take,” the overnight SIGINT collection relevant to her account.

Before I returned to the NIC, I had become a fan of “activity-based intelligence,” or ABI. It was developed in the war zones in Afghanistan and Iraq primarily to unravel terrorist networks and identify bad guys. Identifying Osama bin Laden’s driver was one of its successes. It amassed information from many sources around particular locations, and then used correlations to develop “patterns of life” that would distinguish potential terrorists from ordinary pious Muslims at prayer. For me, its side benefit was creatively disrupting the canonical cycle. It was “sequence neutral”: we might find the answer before we framed the question. Think how often in life that occurs; you don’t know you were puzzled about something until you find the answer. And in a world of ubiquitous information, ABI doesn’t prize secret sources: if information is useful, it’s good; if not, it’s not. Finally, perhaps advancing age has made me skeptical of the causation that infuses the canonical cycle. I feel more comfortable with correlation, while recognizing that many of the correlations will be spurious.

Intelligence as an Argument for Policy. This, too, is hardly new. In the past, in times of divided government, Congress was tempted to, in effect, turn intelligence issues into policy choices by mandating that if intelligence caught Iran exporting x, then y sanctions would be automatic. To be sure, the practice was more than uncomfortable for intelligence, for it meant asking intelligence to put a gun to the heads of its policy counterparts in an administration! More recently, in days of intense partisanship, administrations have been tempted to use intelligence to argue for their policy choices. So it was in the run-up to the 2003 invasion of Iraq. The intelligence assessment that Iraq had weapons of mass destruction made it difficult for Democrats in Congress to oppose the invasion and provided policy cover for supporting it. So, future administrations will be tempted to turn intelligence findings into policy choices: imagine if the Intelligence Community found what it so far has not—evidence that Iran was persistently cheating on its obligations under the nuclear deal.

New Competitors, New Colleagues. Intelligence has always worried about the competition. A generation ago, that was CNN: was intelligence always to be scooped by CNN? (I always thought that concern was misplaced: better to get it right than to get it wrong first.) Now, though, the list of sophisticated private organizations doing “intelligence” is a long one, from Eurasia Group through Bloomberg and Oxford Analytica to Stratfor. The cyber arena is a striking example of the change. In the traditional process, if a major hack occurred, it would fall to the Intelligence Community to attribute it to the perpetrator; then policy would decide on a response: name and shame, seek indictments, or whatever. Now, however, that tidy process is disrupted, for while intelligence is doing attribution, so are a host of private companies. And they will not be shy about identifying the perpetrator, never mind what the government might prefer. In the short run, this seems like competition; in the long run, I hope it will become creative collaboration.

Truth as Malleable. So much for my list in normal times. It might be useful, even impressive, for my graduate students. Yet it seems overwhelmed now by the prospect that “truth” will be widely regarded as personal, or political, or partisan. Mr. Trump’s “false facts” are the poster child, but the question is how deep and abiding this trend will be. Intelligence, still more than other endeavors, has always known how elusive the truth can be. And our language, like “true enough,” is mirrored in the distinction between intelligence and law enforcement: true enough for policy is a looser standard than true enough for a court of law. (In passing, while I’ve come to admire the marble entrance to the CIA, I’ve always found the Biblical quotation from John odd, and oddly placed there. In fact, and even in intent, intelligence’s truth is more likely to constrain policy than to “set it free.”) One of the great paradoxes of our times is that all the wonderful technology created to connect people has ended up segmenting them into “echo chambers” in which they hear only what they want and learn only what they already thought.

So far, I see no better response for intelligence than to double down on trying to distinguish what is likely true from what is not. False facts, in principle, make real ones more valuable, and their identification more pressing. The question is: will anyone listen? In the short run and for the Trump Administration, my guess is that the sheer complexity of the issues will continue to turn it toward intelligence and toward a real interest in what is really happening. It is one thing to believe false facts about the turnout at the Inauguration but quite another to believe them while committing GIs to combat in Syria.

Or at least I fervently hope so.

Published on September 20, 2017 by Dick Eassom, CF APMP Fellow