Small groups, dangerous technology: Can they be controlled?

By Andrew Snyder-Beattie, May 3, 2015

Attempting to usher in the apocalypse in the 1990s, the Japanese doomsday cult Aum Shinrikyo managed to procure VX nerve gas and a military helicopter. Fortunately, a botched chemical weapons deployment limited the casualties to only a few thousand injured and about a dozen killed. Despite having a doctorate in molecular biology, Aum Shinrikyo's Seiichi Endo couldn’t access any truly catastrophic bioweapons—in 1995, nation-states were still by and large the only entities that could realistically kill millions. How long this limitation will hold is still an open question.

The technology and knowledge needed to create dangerous viruses are becoming increasingly democratized. Only six years after the Tokyo attack, scientists inadvertently created a more lethal strain of mousepox that also rendered the standard vaccine ineffective. Five years after that, a journalist successfully mail-ordered modified smallpox DNA without interference. The number of individuals with the lab knowledge required to produce a working virus from synthesized sequences is growing as well, and many protocols for designing such viruses are openly available in the scientific literature.

We’re entering an era in which smaller and smaller groups can project violence in unprecedented ways, even rivaling the destructive capacity of states. This shift in power dynamics is the topic of The Future of Violence, a new book by Benjamin Wittes at the Brookings Institution and Gabriella Blum at Harvard Law School. The book addresses one of the most fundamental challenges of our century—how can we structure our society so that these newfound technological powers don’t end in catastrophe?  

Wittes and Blum don’t pretend to have all the answers, but they review the major options as well as some unconventional ones. They consider the merits of technology regulation (e.g., outright bans), industry standards (e.g., international licensing of DNA synthesizers), and even unilateral action (i.e., drone strikes). Three of their unconventional options are particularly interesting: platform surveillance, liability regimes, and empowered citizens.

Surveillance raises concerns about both privacy and overpowered governments, but Wittes and Blum are quick to point out that improved security needn’t come at the cost of personal liberty. Screening mail-order genomes for malicious sequences and installing video feeds in particularly risky laboratories constitute a limited form of surveillance—what Wittes and Blum call "platform surveillance." Rather than targeting individuals, platform surveillance would operate on the technological infrastructures that pose the largest risks to human life.
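To make the screening idea concrete, here is a minimal sketch of how an order-screening check might work, assuming a toy database of flagged sequences and simple exact k-mer matching. The sequence names, the sequences themselves, and the match length are invented for illustration; they do not describe any real screening system or database.

```python
# Illustrative sketch of "platform surveillance" for DNA synthesis orders:
# flag any order that shares long exact subsequences with known sequences
# of concern. The sequences, k-mer length, and example below are made up;
# real screening uses curated databases and much more sophisticated matching.

FLAGGED_SEQUENCES = {
    "variola_fragment_example": "ATGCGTACGTTAGCCTAGGCATCGATCGGATCCTAGCTAGGCT",
    "toxin_gene_example":       "TTGACCGGTAACCTTGGAAGCTTCCGGAATTCCGGTTAACCGG",
}

K = 20  # length of exact match that triggers a flag (illustrative)


def kmers(seq: str, k: int) -> set[str]:
    """Return the set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def screen_order(order_seq: str) -> list[str]:
    """Return names of flagged sequences sharing any k-mer with the order."""
    order_kmers = kmers(order_seq.upper(), K)
    return [
        name
        for name, flagged in FLAGGED_SEQUENCES.items()
        if order_kmers & kmers(flagged, K)
    ]


if __name__ == "__main__":
    # An order that embeds part of a flagged sequence should raise a flag.
    suspect = "CCCC" + FLAGGED_SEQUENCES["toxin_gene_example"][:25] + "GGGG"
    print("Flags raised:", screen_order(suspect) or "none")
```

The design point the sketch is meant to show is that the check runs on the synthesis platform itself, not on any individual customer.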

These risks could also be reduced by clearly defining liability, as well as by mandating liability insurance for particularly risky projects. For example, researchers in the Netherlands and Wisconsin have modified H5N1 viruses so that they would likely kill at least two million people (and possibly more than a billion) if they were to escape from the laboratory and start a pandemic. Liability insurance requirements would force these researchers to account for their externalities, using the market to price this risk more appropriately. And if no insurance company were willing to accept such risks, then neither should we as a society.
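As a rough illustration of how an insurer might translate such a risk into a price, the sketch below computes an actuarially fair premium from an assumed escape probability and an assumed value per life. Every number is an assumption invented for the example, not an estimate from the book or from the researchers involved.

```python
# Back-of-the-envelope illustration of pricing lab-escape risk. All inputs
# below are assumptions made up for this example, not real risk estimates.

p_escape_per_year = 1e-4          # assumed annual probability of a lab escape
p_pandemic_given_escape = 0.1     # assumed chance an escape seeds a pandemic
expected_fatalities = 2_000_000   # lower bound cited above, used as-is
value_per_life = 10_000_000       # assumed statistical value of a life (USD)

expected_annual_loss = (
    p_escape_per_year * p_pandemic_given_escape
    * expected_fatalities * value_per_life
)

# An actuarially fair premium equals the expected loss; a real premium would
# add loadings for uncertainty, tail risk, and administrative costs.
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
# -> Expected annual loss: $200,000,000
```

A premium of that size, or an insurer's refusal to underwrite the work at all, would itself say something about whether the research is worth its risks.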

But perhaps the most interesting of their suggestions is the role that empowered individuals can play. The technologies that democratize dangerous biotechnology also give us the tools to design countermeasures. Informed individuals can shape the policy agenda with analysis and advocacy (see the seminal paper on strategic terrorism by inventor and technology strategist Nathan Myhrvold), and a sufficiently clever patent acquisition strategy might even enable a single investor to help steer humanity safely through the biotechnology transition by altering the order in which technologies are adopted.

Another option could be strategic philanthropy. The Open Philanthropy Project, backed by Facebook co-founder Dustin Moskovitz, suggests that the low profile and high stakes of global catastrophic risks (e.g., geomagnetic storms, nuclear exchanges, and the risks of distributed biotechnology) could make reducing them an uncrowded and high-leverage way for a philanthropist to do a massive amount of good.

In The Better Angels of Our Nature, Harvard psychology professor Steven Pinker amasses evidence demonstrating a large and sustained decline in violence over the course of human history, attributing the trend to factors such as state formation, literacy, and feminism. The evidence is compelling, but Pinker’s story is one of aggregates, not of individuals. And in a world where individuals could kill millions, even one person outside the reach of an empathetic upbringing or the Leviathan is too many. To keep humanity’s trajectory peaceful, we will need driven and empowered individuals on the other side as well—those with the ambition to make humanity’s future as bright as possible.
