Editor’s note: This article is part of a series of short articles by analysts involved in the Cyberspace Solarium Commission, among others, highlighting and commenting upon aspects of the commission’s findings and conclusions.
The international community lacks a firm grasp of the cyber domain as an operating space—a key point recognized by the Cyberspace Solarium Commission early in the process of putting together its report. Many grand statements are made about the structure of the system with little knowledge of the basic patterns. Strategies in development all promise some sort of impact in relation to a threat, but how should threat be measured? How should the government plan to evaluate effectiveness and success?
The data available in cybersecurity are generally drawn from rival interactions, painting a picture of the domain skewed toward conflict because of the focus on the actors most likely to fight. In other cases, the data are selected in an ad hoc fashion with no consideration of statistical methodologies. A complete picture of cyber interactions would highlight the diversity of players and the dynamic patterns of conflict globally, illustrating a much different vision of cyber conflict than the current focus on major players.
Developing an effective strategy to combat cyber challenges requires measuring the overall level of threat. The U.S. needs to understand the baseline of cyber activity in order to judge when its efforts are having the desired impact. Only by understanding which behaviors are common can we observe changes in adversary behavior.
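The idea of a measured baseline can be made concrete with a toy calculation. The sketch below (all counts are hypothetical illustrations, not real incident data) establishes a norm from past observed activity and expresses recent activity as a deviation from that norm:

```python
import statistics

# Hypothetical monthly counts of observed adversary operations.
baseline_months = [42, 38, 45, 40, 44, 41, 39, 43]  # before a policy intervention
recent_months = [31, 29, 33]                        # after the intervention

baseline_mean = statistics.mean(baseline_months)
baseline_sd = statistics.stdev(baseline_months)
recent_mean = statistics.mean(recent_months)

# A simple z-style signal: how many baseline standard deviations
# the recent average sits below (or above) the established norm.
shift = (recent_mean - baseline_mean) / baseline_sd
print(f"baseline {baseline_mean:.1f}/mo, recent {recent_mean:.1f}/mo, shift {shift:+.2f} sd")
```

Without the baseline series, the recent numbers on their own say nothing; the comparison is what turns raw counts into a judgment about whether behavior has actually changed.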
With a collaborative effort, we can create viable indicators to measure conflict in cyberspace. The Cyberspace Solarium Commission report proposes a Bureau of Cyber Statistics that would be key to this project. The bureau could collaborate with industry partners producing cyber threat intelligence and critical infrastructure reports, researchers leveraging machine learning techniques to scrape data from digital sources, and analysts in the intelligence community and elsewhere, both inside and outside government.
What data are needed exactly? The community needs to know who or what the targets are and how often they are attacked, the types of operations utilized and tools leveraged to attack a target, connections to information operations, the severity of attacks (vertical escalation), and the spread of attacks (horizontal escalation). With these data, we can start to understand who the threats are and, more importantly, what sorts of operations can alter the trend.
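The fields listed above can be thought of as a minimal incident record. The sketch below (field names, severity scale, and sample data are all hypothetical, not a commission specification) shows how the two escalation measures could be computed from such records: vertical escalation as rising severity, horizontal escalation as spread across distinct targets.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    year: int
    target: str          # who or what was attacked
    operation: str       # type of operation or tool leveraged
    info_op_link: bool   # connection to an information operation
    severity: int        # e.g., 1 (nuisance) to 5 (destructive)

# Hypothetical sample data, for illustration only.
incidents = [
    Incident(2018, "utility_a", "espionage", False, 2),
    Incident(2018, "bank_b", "disruption", True, 3),
    Incident(2019, "utility_a", "disruption", False, 4),
    Incident(2019, "agency_c", "espionage", True, 2),
    Incident(2019, "bank_b", "degradation", False, 5),
]

def vertical_escalation(records, year):
    """Mean severity of incidents in a year; a rising value signals vertical escalation."""
    sev = [r.severity for r in records if r.year == year]
    return sum(sev) / len(sev) if sev else 0.0

def horizontal_escalation(records, year):
    """Count of distinct targets attacked in a year; a rising value signals spread."""
    return len({r.target for r in records if r.year == year})

print(vertical_escalation(incidents, 2019))    # mean severity for the year
print(horizontal_escalation(incidents, 2019))  # distinct targets for the year
```

The point of a common record format is exactly this: once incidents are collected in one place with consistent fields, simple year-over-year indicators like these become possible to compute and compare.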
With data on cyber interactions, we can better measure cyber instability. What states and systems are most at risk? Is our strategy having the desired effect of compelling the adversary to scale back its attacks? What conflict thresholds are in operation, and how do they change over time? The commission has recommended that these questions be answered in the annual classified Cyber Posture Review—but researchers whose views might differ from the official orthodoxy should also weigh in. We need many cooks in the kitchen to understand patterns in this domain, and keeping information behind closed doors will only make the problem worse.
A wealth of data points is available to scholars; they are simply not collected in one place under a stable method of evaluation. Trust between government and the public is at a low point, so an independent process for evaluating threat and success is key to building a committed cohort of policymakers and researchers who can use data-driven indicators to track the evolution of cyber threats. To do less would be irresponsible, akin to flying blind in this operating environment.