In July, researchers in artificial intelligence and robotics released an open letter—endorsed by high-profile individuals such as Stephen Hawking—calling for "a ban on offensive autonomous weapons beyond meaningful human control." The letter echoes arguments made since 2013 by the Campaign to Stop Killer Robots, which views autonomous weapons as "a fundamental challenge to the protection of civilians and to … international human rights and humanitarian law." But support for a ban is not unanimous. Some argue that autonomous weapons would commit fewer battlefield atrocities than human beings would, and that developing such weapons might even be considered morally imperative. Below, authors from Brazil, India, and the United States debate two questions: Would deployed autonomous weapons promote or detract from civilian safety? And is an outright ban, or rather effective international regulation, the proper response to the development of autonomous weapons?