
“As much death as you want”: UC Berkeley’s Stuart Russell on “Slaughterbots”

By Lucien Crowder, December 5, 2017

Not many films advocating arms control will get hundreds of thousands of hits on YouTube. But not every film advocating arms control comes with a title such as “Slaughterbots.”

At 7 minutes and 47 seconds, “Slaughterbots” is fast-moving, hyper-realistic, anxiety-laden, and deeply creepy. If you’ve never heard of swarming drones before, this is just the short film to turn you against them forever. If you never dreamed that those toy-like drones off the shelf at the big-box store could be converted—with a bit of artificial intelligence and a touch of shaped explosive—into face-recognizing assassins with a mission to terminate you—well, dream it.

The set-up is simple enough. The CEO of something called StratoEnergetics takes to a stage and demonstrates to a live audience his company’s newest product: a tiny drone equipped with face recognition technology, evasive capabilities, and a deadly explosive charge. The drone, after showing off some tricks, blasts open the skull of a luckless mannequin. Things get much weirder from there.

The prime mover behind the film is Stuart Russell, a professor of computer science at the University of California, Berkeley. Here, Russell checks in with the Bulletin to explain how the film was made, how little stands between us and the drone apocalypse, and what the prospects are for banning autonomous weapons before they get truly out of hand.

LUCIEN CROWDER: The slaughterbot video was really well done. It’s quite disturbing, as it was evidently intended to be. I won’t be showing it to my 10-year-old son. How was that video put together?

STUART RUSSELL: It started with a thought that I had. We were failing to communicate our perception of the risks [of autonomous weapons] both to the general public and the media and also to the people in power who make decisions—the military, State Department, diplomats, and so on. So I thought if we made a video, it would be very clear what we were talking about.

So just to give you one example of the level of misunderstanding, I went to a meeting with a very senior Defense Department official. He told us with a straight face that he had consulted with his experts, and there was no risk of autonomous weapons taking over the world, like Skynet [the runaway artificial intelligence from the Terminator movies]. If he really had no clue what we were talking about, then probably no one else did either, and so we thought a video would make it very clear. What we were trying to show was the capacity of autonomous weapons to turn into weapons of mass destruction automatically, because you can launch as many as you want.

So I wrote a one-page treatment of how I thought a short video would go. I happened to meet some people who were capable of producing the movie, and we exchanged a few ideas. Eventually they produced a script, and we iterated on it—I would say not much of my original treatment remained. The idea for the CEO presentation came entirely from the production company. So then once we had roughly agreed on how a script might look, we got funding from the Future of Life Institute, and then we did the production.

CROWDER: Well, it came out great. As you’re aware, it can be very, very hard to communicate risk to the public in a way that really makes an impression, and I think this succeeded in a way that very few things do. Where did the name “slaughterbots” come from? That’s catchy.

RUSSELL: We were casting around because we kept calling them, for want of a better word, “drones”—even though we know that a drone is a remotely piloted vehicle and it upsets the American [government and military] terribly when we use the word “drones” to refer to autonomous weapons. The Americans, I guess for good reason, do not want their remotely piloted drones to be caught up in this whole treaty discussion at all. We just thought and thought and thought, and we came up with dozens of different ideas for what they might be called [once] they’re already in common use. I think “slaughterbots” came from the production team, but it was one of 10 or 15 names that we came up with.

CROWDER: Well, I think you chose the right one. The slaughterbot shown in the opening scene, the one that recognized and killed the mannequin—how was that done? I imagine that it was a remote-control drone and not an AI-enabled device—is that right?

RUSSELL: It’s completely computer-generated.

CROWDER: There was no physical flying vehicle at all? Well, it was quite realistic.

RUSSELL: No, they did a great job. Even the one that sits in [the CEO’s] hand, it’s all computer-generated.

CROWDER: My goodness. Now, you say in the coda to the video that the dystopia it describes is still preventable. What part of the slaughterbot technology package isn’t available yet? I imagine it would be the AI, because the rest of it seems relatively simple.

RUSSELL: Well, the AI is basically available as well. All the bits [one would need], we know how to do. It’s probably easier than building a self-driving car, partly because [slaughterbots have] a much lower performance requirement. A slaughterbot only has to be 90 percent reliable, or even 50 percent would be fine. So the technology is already essentially feasible. I think it would still take a good engineering effort to produce something like what you see in the movie, but it doesn’t require research breakthroughs. People who say “Oh, all this is decades in the future” either don’t know what they’re talking about or are deliberately saying things they don’t believe to be true.

I think at the moment it would take a good team of PhD students and postdocs to put together all the bits of the software and make it work in a practical way. But if I wanted to do a one-off—a quadcopter that could fly into a building, find a particular person based on visual face recognition, and give them a rose or something like that—I think we could do that in a few months. And if you wanted to produce [something] high-quality, miniaturized, mass-produced, and weaponized, [you] would also probably want to have evasive maneuvers and the ability for many of them to attack an individual at once if necessary, and that kind of thing. So it would be more work, but—if you think about a wartime crash project like the Manhattan Project—I would guess [it would take] less than two years.

CROWDER: Well, that’s not very encouraging to hear.

RUSSELL: It’s not very encouraging. [But it doesn’t make sense to argue that a treaty on autonomous weapons] is completely pointless—that all you would achieve with the treaty is [to] put the weapons in the hands of the bad guys and not the good guys.

We have a chemical weapons treaty. Chemical weapons are extremely low-tech. You can go on the web and find the recipe for pretty much every chemical weapon ever made, and it’s not complicated to make them—but the fact that we have the Chemical Weapons Convention means that nobody is mass-producing chemical weapons. And if a country is making small amounts and using them, like Syria did, the international community comes down on them extremely hard. I think the chemical weapons treaty has been successful [even though] it is not hard for bad guys to make chemical weapons. The whole point is you keep large quantities off the market, and that has a huge impact. The same would be true with these kinds of weapons.

CROWDER: It seems to me it would be reasonably straightforward to enforce a ban on autonomous weapons in the hands of national militaries, but regulating slaughterbots in civilian hands would be a different issue—or I guess you just answered that?

RUSSELL: As time goes by, it will become easier for non-state actors to make autonomous weapons, at least in small quantities. But if you’re making small quantities, you may as well pilot them yourself. There is no real reason to make them autonomous. For the time being, human pilots are going to be more effective, and if you’re only doing a few dozen, you may as well have human pilots. So it’s only when you want to scale up, and go to tens of thousands, that you can’t use human pilots and you have to make them autonomous.

CROWDER: I see. Now wouldn’t one approach to lethal autonomous weapons in private hands be to include safeguards in commercially available drones? I saw a reference to this in a report on autonomous weapons by the Stockholm International Peace Research Institute. The report suggested that you could include hardware features that limited the devices’ functions, or software that allowed the devices to be deactivated. Do you think that sort of thing would be effective?

RUSSELL: Well, all computer security measures can be defeated, but it is still useful to have them. Geo-coding, so [a device] can’t go outside the country where you bought it, for example, would be good, because you certainly want to prevent them from being used to start wars. And the kill switch is something that the Federal Aviation Administration is talking about requiring. I don’t know if they have actually done it yet, but they’re talking about requiring it for all drones above a certain size in the US. As with the Chemical Weapons Convention, you would want industry cooperation, so [companies] would be required to verify the bona fides of customers, and they would be required to report orders above a certain quantity, and so on.

CROWDER: They would be required to verify the identities of customers?

RUSSELL: [Yes, as with] chemical companies—if someone orders 500 tons of some chemical that is a precursor of a chemical weapon, they can’t just ship it to them. They have to find out who they are.

CROWDER: That makes sense.

RUSSELL: So in some sense, [industry is a] party to the [Chemical Weapons Convention], and that was very important in its success. That wasn’t true for the Biological Weapons Convention—in fact, a big weakness was a lack of verification and a lack of requirements for industry cooperation.
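[To make the geo-coding and kill-switch safeguards Russell describes above a little more concrete, here is a minimal, hypothetical firmware sketch in Python. It is illustration only, not a description of any actual drone product, FAA requirement, or proposal from the interview; the class, function, and coordinates are invented.]

```python
# Hypothetical sketch of two firmware-level safeguards mentioned in the interview:
# a geofence (the drone refuses to operate outside the region where it was sold)
# and a remote kill switch. All names, fields, and values are invented.

from dataclasses import dataclass

@dataclass
class GeoFence:
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)

def may_arm(lat: float, lon: float, fence: GeoFence, kill_switch_engaged: bool) -> bool:
    # Refuse to arm the motors if a remote deactivation command has been received,
    # or if the drone is outside its permitted region.
    if kill_switch_engaged:
        return False
    return fence.contains(lat, lon)

# Example: a crude bounding box around the continental United States (illustrative only).
us_fence = GeoFence(min_lat=24.0, max_lat=49.5, min_lon=-125.0, max_lon=-66.0)
print(may_arm(37.87, -122.27, us_fence, kill_switch_engaged=False))  # True (Berkeley, CA)
```

[As Russell notes, such measures can be defeated, but like customer verification in the chemical industry, they raise the cost of large-scale misuse.]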

CROWDER: I see. Now, the CEO in the video says that these devices can evade pretty much any countermeasure and can’t be stopped. But military history, it seems to me, is pretty much a story of measures and countermeasures and further countermeasures, and weapons eventually becoming obsolete. Do you agree with what the CEO said, or were you having him engage in a bit of salesmanship?

RUSSELL: Well, of course, he would say that—wouldn’t he? But to my knowledge, there aren’t any effective countermeasures. There is a laser weapon the Navy is using that can shoot down one fixed-wing drone at a time. It seems that it has to be a fairly large fixed-wing drone, and [the laser] has to focus energy on it for quite a while to do enough damage to bring it down. But I suspect that would not be effective against very large swarms. People talk about electromagnetic pulse weapons [as countermeasures], but I think you can harden devices against that. And then we get into stuff that is classified, and I don’t know anything about that. I know that [the Defense Department] has been trying for more than a decade to come up with effective defenses and I’m not aware of any.

CROWDER: Now, is it implausible for me to think that if you were talking about two militaries, they could simply deploy drone swarms against each other—sort of like miniature air forces—and they could fight it out in the air?

RUSSELL: As a form of anti-swarm defense?

CROWDER: Yes, more or less. The same way that fighter planes go after bombers.

RUSSELL: Yeah, I mean that’s a possibility, but it means you kind of have to have them prepositioned pretty much everywhere that someone might attack.

CROWDER: Right.

RUSSELL: It [also] doesn’t fill me with confidence [when] some people say “Oh, yeah, we will just have personal anti-swarm defenses that we will carry around with us.”

CROWDER: I don’t particularly want to have to do that myself.

RUSSELL: No.

CROWDER: Now, in the coda to the video, you mention that artificial intelligence has enormous potential to benefit humanity, even in defense. I wonder if you were referring to the idea, which some people propose, that robotic soldiers might behave better in battle than humans do—more ethically, so to speak—because they lack emotions. Or were you talking about something else?

RUSSELL: No, I’m not talking about that. I’m talking about the fact that [artificial intelligence] can help with surveillance [and] analysis of intelligence data… It can help with logistical planning, tactics, strategy, and defensive weaponry—even current antimissile defense systems [use AI]. I mean, they are simple forms of AI, but they’re pretty effective. The [Defense Department] has been using AI already, in many of these areas, for a long time.

Some people mistake our goal as [banning] AI in the military, or even [banning] AI, and we’re not saying any of those things. We’re just saying [that] once you turn over the decision to kill to the machine… Just like Google can serve a billion customers without having a billion employees—how does it do that? Well, the software has a loop in it that says “for i = 1 to 1 billion, do.” And if you need more hardware, you just buy more hardware. It’s the same with death. Once you turn over the ability to kill to the machine, you can have as much death as you want.
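[Russell’s “for i = 1 to 1 billion, do” remark is the heart of the scaling argument. The short Python sketch below is written for this transcript purely as an illustration, not taken from any real system, and the names are invented; it shows how a single loop lets software act at whatever scale the available hardware allows.]

```python
# Illustrative sketch of the scaling point: software scales by looping,
# not by adding people. Names and functions are hypothetical.

def serve_all(customers, handle):
    """Apply the same automated decision to every item, however many there are."""
    for customer in customers:          # "for i = 1 to 1 billion, do"
        handle(customer)

# Serving ten "customers" and serving a billion is the same code; only the
# hardware it runs on changes. Russell's warning is that the same property
# holds once the decision to kill is delegated to software.
serve_all(range(10), lambda c: print(f"served customer {c}"))
```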

CROWDER: That’s a vivid way of putting it. Last month, the Convention on Certain Conventional Weapons held its first talks on autonomous weapons via a group of governmental experts. My impression is that basically they agreed to keep talking about it, though members of the Non-Aligned Movement came out in favor of a ban on autonomous weapons. Is that roughly accurate?

RUSSELL: Yes, I think that’s right. Some people are disappointed. I think it depends on how optimistic you were in the first place. I think some people were worried that various nations might just throw a spanner in the works and prevent the talks from moving ahead at all. The way that the [Convention on Certain Conventional Weapons] works, you kind of require a consensus from everyone in order to move ahead. So the fact that everyone agreed to continue the talks next year is a small victory, and certainly the Non-Aligned statement was pretty positive. In the normal scheme of how things move in the diplomatic process, I think we could say that progress was satisfactory. One would hope that over time, it will just become more and more the norm of international dialogue that countries will support a ban, or something resembling a ban. France and Germany actually tried to get agreement on what they called a political declaration—not a treaty, but a kind of statement of principle that people can sign up to, saying essentially that there has to be meaningful human control over lethal attacks. I don’t think that they got much momentum with that, but again, their goal was to avoid scaring off some of the countries that were not necessarily in favor of a treaty, so that things can keep moving forward.

CROWDER: Could you name the countries that are the prime suspects for throwing a spanner in the works?

RUSSELL: I think you would have to say, at the moment, Russia—based on some of the things they were saying. You got the sense that they didn’t really want this process moving forward, that they wanted the right to develop whatever weapons they felt like developing. Of course, they don’t always just say it like that. [They say], well, “we want to make sure that a treaty doesn’t infringe on peaceful uses of AI, which could be very beneficial to humanity.” Which of course is kind of nonsense. We have a Biological Weapons Treaty, but that hasn’t stopped us from doing research on biology for the benefit of humanity. Experts [on artificial intelligence] do not feel that a treaty would be a threat to their own research.

CROWDER: All right, then, let me ask one final question. How do you see the prospects for a treaty banning autonomous weapons—or, if not a ban treaty, an effective international instrument that would improve security?

RUSSELL: I would say that, if I was a betting person, I think the odds of having a ban in place within the next decade are less than 50/50. I could see something weaker than that, which could amount to sort of an informal moratorium, where nations could adopt something similar to what the US already has in Directive 3000.09 [a Defense Department policy statement declaring that “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force”]. There are [also] confidence-building measures, like notifying each other of new weapon systems, and that kind of stuff. There’s a whole continuum of measures that you can have. I think that we may see some of that, and it may be enough to give us time to work toward a treaty. A treaty may not be a total ban. It may be a partial ban—for example, a ban on antipersonnel weapons. But it might allow for autonomy in submarine warfare or aerial combat, where the weapon-of-mass-destruction characteristic doesn’t apply so much.
