Top US Army official: Build AI weapons first, then design safety

By Matt Field | October 22, 2019

Illustration by Matt Field. Based in part on photos by gloucester2gaza and Julian Hertzog via Wikimedia Commons. CC BY-SA 2.0 / CC BY 4.0. Stylized.

Even as the United Nations continues a long-running debate on how to regulate lethal autonomous weapons, a top US Army official is doubling down on his vision for fully autonomous systems that can categorize threats, select targets, and fire artillery without any human involvement.

After that sort of system has been developed, the Army’s acquisitions chief, Bruce Jette, said, an interface can be added for any “safety concerns.” Jette, a former tank operator with a doctorate from MIT, made the comments at the recently concluded 2019 Association of the United States Army conference. There, Jette talked about building a tank turret hooked to an artificial intelligence system that, he said, could distinguish between a Volkswagen and an infantry fighting vehicle and then “shoot it.” Defense News reported on Jette’s call for fully autonomous weapons.

“Did you hear me anywhere in there say ‘man in the loop?’” Jette said. “Of course, I have people throwing their hands up about Terminator. I did this for a reason. If you break it into little pieces and then try to assemble it, there’ll be 1,000 interface problems. I tell you to do it once through, and then I put the interface in for any safety concerns we want. It’s much more fluid.”

If Jette seemed to anticipate the kind of response his comments would elicit, it’s because he has said similar things before. At a May event for the same association, he talked about the tank turret in a video that has been posted to YouTube but appears to have received scant attention.

“Notice I didn’t say ‘then it asks to fire.’ It just shoots. I flip it on. It hunts for targets and then goes and kills them,” Jette said. “I got that going because it’s easier to put in breakpoints than it is to have piecemeal functions come together. So now I’m going to have this turret that fundamentally I can allow to go kill targets, and it’ll be [fast] as a fly. It’s a lot faster than me. I can’t see and think through some of the things it can calculate nearly as fast as it can.”

The Army didn’t respond to a request for comment on Jette’s remarks.

The Defense Department has a policy that mandates human control in autonomous weapons systems. One of the authors of the policy, Paul Scharre, a former Defense Department official and now a senior fellow with the Center for a New American Security, talked to Quartz in February about the program that Jette appears to have been discussing at the recent association event. Scharre compared it to a system that warns a driver if he’s about to hit something in his blind spot. But that characterization doesn’t seem to fully capture the autonomous functionality that Jette later described, both in the May video and at the association event: Jette’s tank will be designed to kill at the flip of a switch, with some sort of safety interface to be developed afterward.

Some artificial intelligence experts and activists don’t have much faith that militaries will always require a human to have meaningful control over autonomous weapons. University of California, Berkeley professor and AI expert Stuart Russell told Quartz that he’s concerned that requirements for human control “will be dropped as soon as it’s politically convenient to do so.”

Ariel Conn, who as communications and outreach director for the nonprofit Future of Life Institute testified at the United Nations for a ban on lethal autonomous weapons, points out that Jette himself seems to suggest meaningful human control is difficult to implement. “At the United Nations, many countries, including those developing increasingly autonomous offensive and defensive weapons systems, insist there will always be some level of meaningful human control over the systems. However, as Jette points out, that’s unlikely to be feasible,” she said in an email to the Bulletin.

“We need to decide if we want to live in a world in which autonomous weapons systems identify and attack targets faster than humans can think.”

