Americans have received urgent warnings of cybercrime, cyberespionage, and even digital Pearl Harbors since before the first Gulf War. The problem, a quarter century later, is that the public debate remains stuck at the level of tropes. Consider the recent argument over whether Russia interfered in the 2016 election. “Yes,” says Hillary Clinton, invoking the traditional argument that large-scale espionage and violence are expensive and imply state-level backing. “No,” says President Trump, things are different in the 21st century—“It also could be somebody sitting on their bed that weighs 400 pounds, OK?” Absent more information, it’s hard to choose.
It is time to look behind the tropes. When you do, you immediately see that concepts like “cyberwar” and “cybercrime” cover wildly different activities. Some are dangerous (crashing electricity grids) while others (defacing web sites) are basically nuisances. Some can be done by President Trump’s 400-pound hacker, others require the equivalent of a well-funded Silicon Valley startup. Finally, many activities overlap, so that the profits from crime end up subsidizing espionage and warfare. Sensible policies need to understand and exploit these nuances.
The reality is neither Clinton nor Trump. While “400-pound hackers” are indeed responsible for many crimes, genuine cyberwar capabilities still require big budgets and dedicated teams. That said, the old 20th century line between private criminals and warriors is dissolving. Sometimes this magnifies threats, letting states that conscript cybercriminals punch above their weight.
But it can also favor defenders. It turns out that cyber-spies and warriors need an edge to overcome sophisticated defenses. In practice, this almost always means finding unpublished software flaws called “zero-day vulnerabilities” or just “zero-days.” In the 20th century, this information would have been a military secret. But today, private citizens often know more than governments—and will happily sell their zero-days to the highest bidder. This opens the remarkable possibility that defenders may be able to “drain the swamp” by buying up cyber threats on the open market.
Learning from Stuxnet. Public discussions of cyberwar and especially digital Pearl Harbors were mostly hypothetical before 2011. This changed after Western governments used the Stuxnet virus to destroy centrifuges inside Iran’s Natanz nuclear facility. Twenty years ago, the details (and even existence) of this operation would have been classified. But today, computer security firms often have better sensors than the government itself. Symantec’s installed base of 63.8 million computers in 157 countries gives us nearly as much information as the combatants themselves.
So what does cyberwar look like? First, Stuxnet required hands-on management. For President Trump’s 400-pound hacker, viruses are a “fire and forget” weapon: Once written, they operate on their own. By comparison, Stuxnet required months of cat-and-mouse probing inside the adversary’s network, with software constantly reporting back to its creators and being rewritten as necessary. This is expensive. Symantec estimates that Stuxnet employed five to 10 core developers plus additional management and quality control personnel for about six months—roughly equivalent to a Silicon Valley startup company. Second, Stuxnet is just one point on a continuum: The methods that drive cybercrime, cyberespionage, and cyberwarfare are broadly similar but are of widely varying complexity. The resulting consequences span everything from $10 million thefts to crashing the Ukrainian electricity grid in 2015.
Just the same, practically all high-value attacks require at least one zero-day—that is, a software vulnerability that the target does not know exists. The targets of cyber attack know who they are and appropriate large budgets to defend themselves. So known vulnerabilities are just a commodity: Well-heeled defenders can buy whatever monitoring they need from vendors to protect against vulnerabilities that have already been documented. By comparison, zero-days are game changers: Their existence must be teased out (if at all) by noticing subtle changes in traffic patterns that attackers like Stuxnet do their best to hide. This makes even the best-defended targets vulnerable. Yet this advantage is brittle and fleeting: As soon as the zero-day is noticed and becomes public, sophisticated attackers invariably stop using it.
Cybercrime economics. Most of the cybercrimes the general public experiences are automated and untargeted. Common examples include ransomware, adware fraud, credit card theft, identity theft scams, and taking over consumers’ computers for denial of service attacks. These are volume businesses, with hackers probing millions of consumers in hopes that a few attacks will get through. At the same time, the chances of success in any particular case are too small for hackers to manage or even monitor individual attacks. This leads to a simple economics, in which hackers go on writing viruses until their profits fall below some minimum wage. Whether he literally exists, President Trump’s 400-pound hacker provides a nice shorthand for life in these low-skilled, atomistic, and miserably competitive markets.
But there is a second possibility. Lucrative targets like financial institutions or governments know who they are and can afford to defend themselves. Evading these defenses requires an organized and sustained effort to penetrate and map the target, and then exfiltrate data or money. Typically, this means hiring a core team of 10 to 40 people for an average of eight months or so, at an implied upfront cost of several million dollars. (This staffing level is consistent with the empirical observation that criminals sometimes mount attacks for paydays as small as $10 million.) However, the relatively small cost and size of these teams also raise a puzzling question: Given that there are hundreds and even thousands of possible targets, one might expect the number of gangs to be comparably large. Yet there seem to be just five Russian gangs, and this number has been stable for years.
But if targets are not the limiting factor, what is? The answer, it turns out, is the number of zero-days. Suppose for simplicity’s sake that these vulnerabilities become public after the first completed attack. Once the vulnerability is publicized, every other gang using the zero-day will likely be exposed and lose its investment. Formally, this logic is identical to a Silicon Valley patent race, in which the winner takes all and second place is worthless. Moreover, it leads to the same result: The number of gangs is equal to a few times the number of zero-days.
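The patent-race logic above can be made concrete with a toy free-entry model. The function below is an illustrative sketch, not an empirical estimate: the payoff and cost figures are loose assumptions drawn from the article’s $10 million paydays and multimillion-dollar team costs.

```python
# Toy free-entry model of the winner-take-all zero-day "patent race."
# All parameter values are illustrative assumptions.

def equilibrium_gangs(num_zero_days, payoff, attack_cost):
    """Estimate how many gangs the market supports.

    With n gangs racing to exploit one zero-day, each wins with
    probability 1/n, so expected profit is payoff/n - attack_cost.
    Entry continues while that is non-negative, i.e. until
    n = payoff // attack_cost gangs are racing per zero-day.
    """
    gangs_per_race = payoff // attack_cost
    return num_zero_days * gangs_per_race

# Example: a $10 million payday against a $3 million upfront team cost
# supports about three gangs per exploitable zero-day.
print(equilibrium_gangs(num_zero_days=2, payoff=10_000_000,
                        attack_cost=3_000_000))  # -> 6
```

On these assumptions the gang count scales linearly with the zero-day supply, which is the article’s point: shrink the supply and the number of viable gangs shrinks with it.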
Perhaps most important, the economics of cybercrime spills over into state hacking programs. Given the size of the US defense budget, Russia or China cannot possibly hope to match the Pentagon’s cyber-espionage and cyber-warfare capabilities dollar-for-dollar. Then again, they don’t have to. Instead, they often conscript the talent they need from civilian and, especially, criminal IT experts. This immediately saves training costs and lets governments repurpose illicit technologies for their own ends. Also, states often require gang members to perform “patriotic” attacks in exchange for prosecutors’ willingness to look the other way. But gangs will only buy this “protection” if crime is profitable. This implies that cyberwar threats are at least partly limited by foreign criminals’ ability to steal.
What we know about zero-days. I have argued that reducing the number of available zero-days would reduce both the number of cyber-criminals and the state programs that depend on them. The question remains whether policymakers can do anything to drain this swamp; hackers are, obviously, always on the lookout for new vulnerabilities to exploit. One obvious starting point is to ask how many such zero-days exist. The naïve number, as reported by Symantec, is that 54 were detected in 2015. But this is really only an upper limit. First and most obviously, each zero-day relates to a specific piece of software. This means that the zero-day list for targets that aren’t running, say, Adobe Flash quickly gets shorter. Second, not all zero-days are equal: Zero-days that let attackers infect networks tend to be more valuable than those which merely allow them to escalate user privileges, evade security precautions, install malicious software, and avoid detection. The fact that just three vulnerabilities accounted for 97 percent of all zero-day attacks in 2013 shows that the number of relevant zero-days is often quite small. Given that high-end zero-days currently sell for $250,000 or so, budgeting $10 million to buy up zero-days ought to make a deep dent in the supply. This is small change in the context of US Cyber Command’s $7 billion annual budget.
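The budgeting claim is simple arithmetic, using only figures from the article ($250,000 per high-end zero-day, a hypothetical $10 million purchase budget, and Symantec’s count of 54 zero-days detected in 2015):

```python
# Back-of-the-envelope check of the buy-up arithmetic in the text.
price_per_zero_day = 250_000    # typical high-end market price
budget = 10_000_000             # hypothetical purchase budget
detected_2015 = 54              # Symantec's count, an upper limit

purchasable = budget // price_per_zero_day
print(purchasable)  # -> 40 zero-days
```

A $10 million budget covers about 40 zero-days at prevailing prices, i.e. most of the 54 detected in 2015, and that count was itself an upper limit on the zero-days relevant to any given target.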
Of course, a government program to buy up zero-days matters little, if new zero-days are easy to find. Consider that all new software contains some fixed (but unknown) number of defects. If the number is small, buying up vulnerabilities should indeed reduce and even drain the swamp of available hacking vulnerabilities. But if the number is large, we might just as well try to empty the ocean. Which is it? No one knows for certain. But there are three lines of evidence that suggest the supply is limited. The first and least scientific intuition is economic: The fact that many zero-days sell for $250,000 suggests scarcity. The second involves independent discovery: If the total number of defects is small, new searches should discover the same examples over and over. So we should be encouraged that, anecdotally at least, zero-days seem to have a short shelf-life. Scholarly estimates based on small data sets are similarly consistent with a three-year half-life. Third, we should sometimes see evidence that swamps are actually drained, i.e. that mature software needs fewer and fewer patches. Apart from one early study, scholars uniformly claim to see this effect.
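The cited three-year half-life implies that today’s stock of secret zero-days decays quickly even without intervention. A minimal decay calculation makes the implication explicit (the half-life figure is the scholarly estimate mentioned above; the exponential-decay form is my simplifying assumption):

```python
# Share of today's zero-days still secret after a given number of
# years, assuming exponential decay with a three-year half-life.

def fraction_surviving(years, half_life=3.0):
    """Fraction of the current zero-day stock still undiscovered."""
    return 0.5 ** (years / half_life)

for t in (1, 3, 6):
    print(t, round(fraction_surviving(t), 2))
# After 3 years half the stock is gone; after 6 years, three-quarters.
```

If independent rediscovery is already retiring half the stock every three years, a buy-up program only has to outpace the inflow of newly found defects, not the entire historical stock.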
Finally, zero-day searches are about more than man hours. Empirical studies show that, as in most creative activities, different IT specialists tend to notice different vulnerabilities, depending on their talent, knowledge, and life histories. This suggests that diversity matters. As open source enthusiasts like to say, search institutions should enlist as many “eyeballs” as possible.
Vulnerability markets. Since the 1980s, innovators have tried various schemes for buying up software vulnerabilities. Probably the simplest model involves bug bounties, in which software makers reward submitted vulnerabilities with prizes ranging from simple name recognition to $20,000 cash payments, with isolated paydays sometimes reaching $200,000. That may not seem like a lot, but consider: Suppose you are an IT professional who notices a vulnerability in the course of your day job. Given that even a small reward is better than none, why not collect your payment? (This is especially true since hackers who wait for a better price risk losing out to rivals who discover the bug independently.)
Still, the game is limited: Software makers have very little incentive to offer the kinds of prizes that would encourage hackers to seek out new vulnerabilities. The reason is contract law’s “consequential damages” rule, which limits the financial exposure of software vendors to the levels of losses that an ordinary client would suffer. This immediately excludes the kind of high-end targets—including governments and major corporations—to which zero-days matter most.
To some extent, this gap is mitigated by bigger prizes from firms that specialize in serving high-end clients. That said, it seems intuitive that criminals will often pay more than defenders. The reason has to do with the deep asymmetry between attack and defense: While hackers need to buy only one zero-day to succeed, targets must buy all of them. The saving grace, so far, is that most real zero-day markets don’t work very well. The reason is trust. On the one hand, sellers need to describe the zero-day in enough detail for buyers to want it. But if they do, buyers may be able to find the bug for themselves and pay nothing. On the other hand, buyers need to know that the bug actually works and, if it does, that sellers will not sell it over and over again until it leaks to the public. Such promises are almost impossible to enforce.
One partial solution is to make markets less anonymous. For example, Russian crime groups almost always rely on invitation-only auctions, in which dishonest sellers can be excluded from future transactions or even physically punished. Similarly, Western businesses favor trusted brokers that can vouch for both sides and later act as escrow agents, releasing payment when a working bug is revealed. The downside for both fixes is that the number of buyers and sellers is painfully limited. For sellers, this means that prices differ wildly from one offer to the next, suggesting that some of the best offers may never get made at all. Conversely, buyers know that they are limited to a tiny number of hackers. This sacrifices zero-days that a more diverse group would discover.
It is natural to think that legal markets—which are, after all, backed by courts and contracts—could do better. So far the evidence is ambiguous. Firms like WabiSabiLabi, iDefense, TippingPoint, and Digital Armaments experimented with public auctions in 2007. This was followed in 2015 by new entrants Zerodium, Netragard, Endgame, Northrop Grumman, and Raytheon. These public sites routinely promise up to $250,000 with occasional payouts up to $1 million. On the other hand, it is equally true that WabiSabiLabi and Netragard have both exited the industry while competition seems to be limited. Evidently, the business remains marginal.
Draining the swamp? The question remains whether a government program could buy up zero-days more efficiently than existing institutions. Testing such a possibility is partly just a matter of money. Current prices seldom reach $1 million. If the United States is really worried about digital Pearl Harbors, it can afford to write much bigger checks than that. Then, too, a government program would probably work better than existing auctions. The reason is that a program designed to destroy zero-days does not need to guarantee secrecy or prevent resale. Instead, it should be enough to show (a) that participating hackers will be rewarded, and (b) that governments do not consistently overpay.
These are still deep problems that demand careful institutional design. (The challenges posed by this “mechanism design” problem are in fact closely related to research that earned the Nobel Prize for Economics in 2007.) Nevertheless, we can readily imagine solutions. Consider for the sake of definiteness a government-backed prize authority that (a) collects and publishes zero-days and (b) pays out prizes on a one-month cycle. Given our limited knowledge of zero-days, any offer is bound to be arbitrary at first. Suppose, then, that the authority announced a $1 million prize to each of the “top five” zero-days submitted and nothing else. Then hackers wondering whether to submit zero-days would see two kinds of risk. The first is the possibility of competing against other hackers and losing. This, however, is fundamentally no different than the decision to enter a lottery or, for that matter, a patent race. So long as the authority promises a higher “expected reward” than anyone else, economists expect rational hackers to participate. (In principle, criminal groups could outbid the authority. This would already be a good thing to the extent that the authority made crime more expensive. The deeper point is that criminal auctions are “invitation-only” and, to that extent, never reach most hackers. This suggests that a government-backed authority would predictably rediscover the same zero-days so that they became useless.)
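The lottery logic above reduces to a simple expected-value calculation. The sketch below assumes, simplistically, that each submission is equally likely to land in the top five; the submission count of 50 is a hypothetical number for illustration.

```python
# Expected reward facing a hacker who submits to the hypothetical
# "top five" prize authority, under an equal-chances assumption.

def expected_reward(num_submissions, prizes=5, prize_value=1_000_000):
    """Expected payout per submission in a winner-take-slots lottery."""
    win_probability = min(prizes / num_submissions, 1.0)
    return win_probability * prize_value

# With 50 submissions per month, each hacker faces a lottery worth
# $100,000 in expectation, while the authority pays out $5 million.
print(expected_reward(50))  # -> 100000.0
```

So long as that expected value beats a hacker’s best outside offer, which the invitation-only criminal auctions never extend to most hackers anyway, rational sellers should participate.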
The deeper problem, as we have seen, is reassuring hackers that the authority would not take their information without paying. This, however, leads to a subtle distinction: Since the prize authorities do not care which hacker receives the money, showing that they paid the money to any third party is normally sufficient. Dropping anonymity makes this easy to prove. (Late-night comedians have long purchased humor from writers who submit jokes by fax. The problem, in practice, is that jokes are often similar. On the face of things, one might expect joke writers to be suspicious. But in fact, writers continue to participate so long as someone is paid every morning.)
The question remains: What can the government do to avoid overpaying for zero-days? The initial answer is almost certainly “not much.” That said, things would get better. The reason involves learning: Following the first auction, we expect the prize authority to examine the fifth-place zero-day and ask whether it was really worth $1 million. If the answer were no, it would scale back the prize. But if the answer were yes, a rational authority would expand the list of $1 million rewards to include, say, the first 10 entries. Over time, an authority could expect to get value for its money.
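The learning rule just described can be stated as a one-line adjustment policy. This is a minimal sketch of the feedback loop; the valuation of the marginal zero-day is assumed to come from some outside appraisal, which the article leaves unspecified.

```python
# Sketch of the prize authority's round-by-round learning rule:
# widen the winners' list when the marginal (lowest-ranked) winning
# zero-day was worth its prize, shrink it when the authority overpaid.

def adjust_prize_count(current_count, marginal_value, prize_value):
    """Return next round's number of $1 million prizes."""
    if marginal_value >= prize_value:
        return current_count + 5   # good value: e.g. top 5 -> top 10
    return max(current_count - 1, 1)  # overpaid: narrow the list

# Round 1: the fifth-place zero-day is appraised at $1.2 million,
# so the authority expands from five prizes to ten.
print(adjust_prize_count(5, 1_200_000, 1_000_000))  # -> 10
```

Repeated over many monthly cycles, this kind of rule lets the authority converge on a prize schedule that tracks the true marginal value of a zero-day, which is all “getting value for its money” requires.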
A strategic choice. The real question, of course, is whether American policymakers want to drain the swamp. Critics already accuse government of stockpiling vulnerabilities to use against enemies, even though keeping zero-days secret leaves US civilians open to attack. While US officials claim to weigh this risk against the country’s intelligence needs, it is reasonable to think that agencies like NSA and FBI place undue value on offensive cyber capabilities. This possibility suggests the need for a comprehensive, government-wide decision on buying zero-day vulnerabilities. In the end, the choice comes down to two basic questions. First, is the US government’s current perceived edge in cyber-espionage and -warfare against other states large enough to justify the private sector’s losses to cybercrime? And second, even if it is, would the net benefits be larger still if the United States’ absolute capabilities were reduced, while leaving its relative dominance intact? For those who take the threat of “Digital Pearl Harbors” seriously, the answer to the latter question is almost certainly “yes.”
There is also the question of who should pay. If Russia and China hate cyberwar as much as they claim, why not invite them to join in collaboration? Then, if they refused, the world would at least see their hypocrisy. This is fundamentally no different than President Truman’s strategy in offering to share the secret of the atomic bomb after World War II, or President Eisenhower’s in calling for an open skies policy to reduce the danger of surprise attack.
And if China and Russia did balk, the US could always threaten to go ahead, while keeping the zero-days secret. As President Trump recently said in a different context, “Let it be an arms race, we will outmatch them at every pass. And outlast them all.” More optimistically, suppose that China and Russia accepted. Then a public prize program would immediately show everyone how many zero days were being discovered each month and, by implication, how well the swamp had been drained. By normal arms control standards, this kind of transparency is unprecedented.
The policy establishment has repeatedly criticized President Trump for not knowing the conventional wisdom about, say, the “one China policy” or the reasons for NATO. But learning runs both ways. I have argued that in the 21st century, markets can matter just as much as militaries. Electing a businessman president could have its compensations.