By John Mecklin | March 3, 2016
Last year, researchers in artificial intelligence and robotics released an open letter, endorsed by high-profile individuals such as Stephen Hawking, calling for "a ban on offensive autonomous weapons beyond meaningful human control." The letter echoed arguments, made since 2013 by the Campaign to Stop Killer Robots, that autonomous weapons are "a fundamental challenge to the protection of civilians and to … international human rights and humanitarian law." But support for a ban is not unanimous. Some argue that autonomous weapons would commit fewer battlefield atrocities than human beings, and that their development might even be considered morally imperative. Here, Paulo E. Santos of Brazil, Heather Roff of the United States, and Monika Chansoria of India debate whether deployed autonomous weapons would promote or detract from civilian safety, and whether these weapons ought to be banned or merely regulated.
Issue: Bulletin of the Atomic Scientists, Volume 72, Issue 2