Most AI research shouldn’t be publicly released

By Nate Sharadin | June 8, 2023

Last month, researchers demonstrated that it was possible to use a generative artificial intelligence (AI) system to autonomously design, plan, and execute scientific experiments with zero human oversight. The system wrote instructions for synthesizing chemicals that could then be carried out by a remote lab, with no humans in the loop.

Separately, last year, researchers reported an AI model’s ability to discover novel compounds even more toxic than VX, a nerve agent widely regarded as being among the most toxic compounds ever discovered. One of the researchers said in an interview that for him, “the concern was just how easy it was to do” and that “in probably a good weekend of work” someone with little money, little training, and a bit of know-how could tune a model to produce these worrying results.

According to the National Institutes of Health, research is “dual-use” if “the methodologies, materials, or results could be used to cause harm.” Gain-of-function research on human pathogens is a case in point. So is AI research.

Worrying misuse of AI research isn’t limited to the synthesis of dangerous, illegal chemical compounds such as VX. AI is already deployed to defend against network intrusions on critical infrastructure, and it’s also being used, much less effectively, to develop novel techniques for compromising networks. But as research on cybersecurity AI continues, it’d be pollyannaish to expect that the present equilibrium favoring defensive AI tools over offensive ones will continue. In part, this is for technical reasons: AI tools designed for network intrusion are likely to operate independently of command and control, making them largely immune to present detection techniques. The equilibrium is unstable for more prosaic reasons, too: The incentives to develop offensive AI systems run into the billions of dollars.

The opportunities for misuse are even more expansive. AI is already an integral component of private and state surveillance regimes. Private firms use it to exclude people from their premises. State actors use it to surveil, monitor, and control their citizens in ways that would be impossible using only human agents. If authoritarianism relies on an ability to detect compliance with rules, then authoritarianism is about to become much more effective. It’s pretty uncontroversial to say that more effective authoritarianism is a bad outcome of AI.

Why is there so much opportunity for the misuse of AI research? AI research labs themselves hint at the answer. The stated aim of many major AI labs is to achieve and ideally exceed human-level performance on a wide range of tasks; iterative success in achieving this aim may lead to artificial general intelligence (AGI). But whatever the prospects for AGI, intelligence is the paradigmatic dual-use capability, and machine intelligence is the paradigmatic dual-use technology.


The fact that AI research is liable to such broad misuse generates a serious, unresolved tension between norms of transparency and the need to mitigate risks of harm. Transparency in scientific research is undeniably valuable: It speeds discovery, connects disciplines, spreads information, improves reproducibility, and rewards accuracy.

One natural idea would therefore be that all AI research should be completely transparent in all respects under all circumstances. This would be a mistake.

Researchers have long recognized a need to temper transparency with a broader responsibility to mitigate the risk of harm. Research on other dual-use technologies, especially those that, like AI, have weapons and biomedical applications, is already subject to restrictions on transparency. To their credit, AI researchers have started coming to grips with the dual-use nature of AI. But despite movement toward responsibly withholding particular research artifacts (code, training procedures, model weights), the norms surrounding publication are insufficiently sensitive to the trajectory of AI research, which is toward increasingly robust capabilities on an increasingly wide range of tasks at decreasing levels of compute.

Researchers aren’t the only ones who can mitigate the risk of misuse from their research. But they have special leverage over this problem: They can choose what parts of their research to publicly report.

On the assumption that AI researchers are making good-faith efforts to live up to their responsibilities, we can extrapolate their existing picture of those responsibilities from present practice. But as we’ve noted, present practice apparently includes transparently reporting research, or offering relatively unrestricted access to research artifacts, in enough detail to enable other researchers to derive formulae for deadly chemical compounds. That degree of moral caution is far too lax by any reasonable standard.

A stricter standard is much more appealing. We should borrow a way of thinking about this moral problem from the law. In law, the three-fold idea of a default, a criterion for reassessing the default, and a procedure for doing so is familiar. For instance, the default is that people are innocent of a crime; the criterion for reassessment is being charged with that crime; and the procedure for reassessment is a criminal trial. Carrying the idea over: The default is that AI research is liable to misuse (and so should not be made public); the criterion for reassessment is a decision to be transparent about that research; and the procedure is… well, what, exactly?

There is no uniquely correct procedure, but whatever procedure is adopted should, like those associated with the criminal justice system, be broadly credible. The easiest way to ensure a procedure is credible is to say exactly what it is, and then have it executed by credible actors. One way to operationalize this is to employ independent examiners (“red teams”) to assess, and certify, whether one’s research meets the relevant moral standard for public release.


For example, consider the research that delivered today’s large language models—the paper that put the “T” in GPT, or generative pre-trained transformer. The research reported in that paper is what allowed other scientists, including those at OpenAI, to begin training immensely capable large language models at scale, at speed, and on vast data sets. So it turns out that this research was broadly liable to misuse, in the sense that the technology it enabled allowed researchers to build systems that are themselves broadly liable to misuse. For the reasons just suggested, then, the default today should be against publishing that paper. Again, this is because, as I’ve suggested, all AI research is liable to misuse, and so the default ought always to be against publication. This does not mean the paper should not have been, or would not be, published; rather, it means that the decision whether to publish should be subject to a (credible) procedure for assuring that the research is not liable to misuse.

But what about the reasons in favor of transparency? How do they weigh against the reasons to stop publishing AI research? There are broadly two kinds of relevant reason: pragmatic and epistemic. For instance, there’s the (pragmatic) reason to publish that involves flexing for investors, attracting top talent, and otherwise hyping one’s research. I assume no one thinks such pragmatic reasons carry sufficient weight to justify publishing results that could have detrimental consequences. The relative weight of epistemic considerations is more complicated; the point here isn’t that transparency doesn’t have some epistemic benefits, such as enabling other researchers to learn from one’s work. Instead, the point is that these epistemic benefits are not presently being appropriately weighed against the moral costs associated with predictable future misuse.

Researchers have a lot on their minds. Their laboratories could lighten the moral load on individual researchers by instituting an automatic process whereby capabilities and foundational research are subject to some credible procedure like the one just sketched. Doing so is relatively costless; even better, it cheaply signals that labs take their responsibilities seriously.

I expect to see commercial entities with strong investment in AI research pursuing such a route in the coming months. Increased secrecy will be greeted by some—especially those concerned with democratic oversight—with mistrust. But opaque reporting of research isn’t incompatible with democratic governance any more than secrecy concerning bioweapons research is incompatible with democratic governance; and because it reduces risk, researchers might welcome it.


