
Why we need a treaty to regulate weapons controlled by … thinking

By Filippa Lentzos, Isobel Butorac | April 28, 2020

A brain-computer interface. Neurotechnology could help people with disabilities use their thoughts to control devices in the physical world. It may also be useful in weapons systems. Private companies, militaries, and other organizations are funding neurotechnology research. Credit: US Army.

Elon Musk’s newest venture, Neuralink, is attempting to wire brains directly to computers. The start-up’s vision is to insert thousands of tiny threads into the neurons of your brain. The other ends of the threads are attached to chips, embedded under the skin on your head and wirelessly connected to a detachable Bluetooth ‘pod’ behind your ear, enabling you to control a phone or another device with your thoughts. Sound far-fetched? The company has already successfully tested the technology in monkeys and aims to start testing it in humans later this year.

Neuralink’s brain-machine interface could potentially help people with brain and spinal cord injuries who have lost the ability to move or sense, as Musk highlighted at the company’s livestreamed launch event. Even more ambitiously, Musk said his long-term goal is “to achieve a sort of symbiosis with [artificial intelligence].” He wants to build what he calls a digital superintelligence layer to complement the parts of the brain responsible for thinking and planning (the cerebral cortex) and for emotions and memory (the limbic system). In fact, he said, “you already have this layer.” It is your phone and your laptop. But you are limited by how quickly you can process what you see, and how quickly you can type a response. The answer, Musk says, is to increase the bandwidth of the brain-machine interface.

Neuralink is just one of the organizations developing cutting-edge neurotechnology, although others, like teams at Carnegie Mellon, Rice University, and Battelle, are not proposing drilling through people’s skulls and inserting microscopic threads into their brains, opting instead for electromagnetics, light beams, and acoustic waves.

It’s also not difficult to imagine neurotechnology being used for darker purposes, unrelated to the goals of the researchers developing it. A brain-machine interface could, for instance, be hacked and used to spy on or deliberately invade someone’s innermost thoughts. It could be used to implant new memories, or to extinguish existing ones. It could even be used to direct bionic soldiers, remotely pilot aircraft, operate robots in the field, or telepathically control swarms of artificial-intelligence-enabled drones.

A monkey has already controlled a computer with its thoughts, according to Elon Musk. His startup Neuralink aims to start testing its neurotechnology on people this year. Credit: Steve Jurvetson. CC BY 2.0.

In the case of biological, chemical, and nuclear technologies, international rules exist to ensure these are not used for developing weapons. There are also controls to ensure things like certain electronics, computers, software, sensors, or telecommunications technology are not used in conventional weapons. In all cases, the underlying technologies in question have useful and beneficial purposes. But these regulations do not directly apply to neurotechnologies. Of more relevance are discussions taking place at the United Nations on lethal autonomous weapons systems, particularly around aspects associated with human-machine interactions, the loss of human control, and accountability. While these are limited to weaponry, informal discussions at the United Nations are also examining broader issues around artificial intelligence and militarization, including military decision-making, intelligence-gathering, and command and control systems.

Yet none of these international regimes or current discussions provides guidance on how people should weigh the beneficial and harmful potential of neurotechnology, a question attracting growing scholarly attention as militaries begin developing the technology.

Building on formative work by researchers like Jonathan Moreno, Malcolm Dando, James Giordano, and Diane DiEuliis, we talked to eight senior neurotechnologists from labs at established universities in the United States, the United Kingdom, and Australia about the risks they saw with the new technology and about who has responsibility for safely developing it. The interviews were part of a pilot project, in which participation was confidential and identifying information was removed from the data, as is usual practice in social science research.

In addition to brain-computer interfaces, the technologists were working on cutting-edge technologies like neuromorphic computing, a field that aims to design computer systems mimicking the structure of the human brain, and cognitive robotics, an enterprise concerned with designing robots that can interact more seamlessly and empathetically with people. The technologists we talked to didn’t see the potential for their particular technologies to be used as weapons or to pose security concerns. They saw themselves as being “away from the front line.” Yet, at the same time, six of the eight technologists, from each of the three countries, had previously received direct or indirect Pentagon funding.

Some also said that technology they had created in the past had gone on to be used for entirely unexpected purposes that would have been impossible to predict. One, for instance, designed a component for airbags that eventually found its way into tech products like smartphones.

As neurotechnology advances and applications with potential military as well as civilian uses are developed, debates about the so-called dual-use risks it poses will become more acute.

Military neurotechnology and the definition of dual use. A common way to think about the concept of dual use relates to technology transfers between civilian and military organizations. Civilian and military research and development are thought to go hand in hand, with innovations like the internet and GPS maximized for the mutual benefit of both civilian and military stakeholders in a win-win scenario. Technologies are spun in from basic research to military application or spun out from military research to civilian application. The main drivers behind this form of dual use, however, are economic interests.

When the focus shifts to international security, the dual-use concept becomes more complicated. Here, civilian and military uses stand in opposition to one another, and controls on technology transfer focus on preventing civilian technologies from migrating to foreign or non-aligned militaries. Under the export controls agreed on by the Australia Group, a group of many of the world’s major economies that have agreed to harmonize regulations to control the spread of technology that could be used in chemical or biological weapons, a company in the United States couldn’t, for instance, export a 20-liter fermenter capable of growing bacteria without a license. A license would be denied if the company were exporting to a country suspected of having a biological weapons program, regardless of whether the recipient was explicitly a military entity. Dual use, then, involves not just a civilian-versus-military distinction but also a distinction between what are considered legitimate and illegitimate uses.

Representatives to the Biological Weapons Convention, the international treaty banning bioweapons activity, meet in 2015. Credit: Eric Bridiers/US Mission Geneva. CC BY-ND 2.0.

International disarmament and nonproliferation treaties like the Biological Weapons Convention introduce yet another distinction. They do not use the term dual use but instead differentiate between peaceful and non-peaceful purposes of research and development activities. Originally aimed at curtailing proliferation by states, the Biological Weapons Convention has, since 9/11, broadened in scope to also encompass proliferation by non-state actors like terrorists and criminals. This shift has layered on the idea that dual use must also be thought of as a juxtaposition of benevolent and malevolent purposes.

The technologists we spoke to found these security concepts of dual use too abstract to relate to their own work. The problem is that whichever concept of dual use is applied—civilian versus military, legitimate versus illegitimate, peaceful versus non-peaceful, benevolent versus malevolent—there is very little practical guidance for how to assess the risks of neurotechnology research being used for harm, or to determine the potential contribution of neurotechnologies to a military program. It’s easy to understand how a fermenter that grows bacteria could be used in biological weapons; countries have done that sort of thing before. There is no such direct line between existing neurotechnology and an already developed weapons system.

Developing clear guidance for neurotechnologies is increasingly urgent, because as it stands, militaries are already developing neurotechnology. The US Defense Department’s research wing, the Defense Advanced Research Projects Agency (DARPA), is significantly expanding brain-machine interfaces for use in military applications. It is “preparing for a future in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” Al Emondi, manager of DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program, said.

The N3 program is pushing for “a neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters,” according to its funding brief, and is funded at approximately $120 million over four years. But DARPA also funds many other programs, as do military research and development units in other countries. These various programs are expanding the reach of neurotechnologies into military intelligence gathering, image analysis, and threat and deception detection, as well as developing technology to manipulate emotional states and to incapacitate adversaries.

The technologists we spoke to talked about the “capabilities race” they saw developing within countries and internationally, and said that “technological supremacy” was at the forefront of many researchers’ minds. Despite this, none of the six technologists who had received DARPA funding believed their scientific work was being developed for military application. The other two neurotechnologists we talked to said they would refuse military funding on the grounds that they did not promote warfare and that such funding might instigate political tensions within their labs—echoing the mixed perspectives on defense dollars from the synthetic biology field.

Of course, militaries aren’t the only organizations funding neurotechnology. Universities, major brain initiatives like the European Union’s Human Brain Project, and national health funding schemes fund projects as well. But it is private funders that really get technologists excited. According to an article last year in the journal Brain Stimulation, the technologies may constitute a $12 billion annual market by 2021.

The pursuit of private capital led two of the neurotechnologists we spoke with to move to Silicon Valley in California, a place where, as one of them said, “You don’t even have to explain it.” Half of the people we talked to had spinout companies, separate from their university research. These ventures may promote benefits by creating wider access to neurotechnology, but they also create privacy and other ethical dilemmas separate from concerns about whether a technology could be weaponized. For instance, as private companies potentially become gatekeepers of large amounts of personal brain data, they could choose to monetize it.

How can scientists and institutions account for the potential of misuse inherent in the development of neurotechnology? “Boundaries are not always so obvious when people are crossing them,” one of the technologists we spoke to said. “It is only in hindsight that people think, ‘yeah this is bad.’” Different people have different boundaries. Perceptions of beneficial technology can vary, too.

Often the benefits or potential harms associated with a technology are tightly wrapped up in a particular implementation. Even if technologists hold “good” intentions, later applications of their technology are not always within their control. Talking with neurotechnologists underscores that what is and isn’t a dual-use technology is often in the eye of the beholder, even when militaries are paying to develop the products.

No treaty currently regulates neurotechnology, and safely developing this sci-fi-like technology calls for a new framework that articulates specific harmful or undesirable uses of the technology in the political, security, intelligence, and military domains. It would be better to develop that framework now, while many entrepreneurs are still more focused on telepathically controlling smartphones than on the weapons of the future.

