When it comes to future autonomous weapons, many governments say they want to ensure humans remain in control over lethal force. The heavily automated air defense systems that militaries already use to guard protected airspace show how difficult that will be.
As countries around the world race to incorporate AI and greater autonomous functionality into weapons, the years-long debate at the United Nations over what, if anything, to do about lethal autonomous weapons has not produced an answer. Here's one path forward.
A former Army Ranger who led the team that established Defense Department policy on autonomous weapons explains in a Bulletin interview what these weapons are good for, what they’re bad at, and why banning them will be so difficult.
The failure of the chemical weapons ban in Syria is not a strike against a proposed global ban on autonomous weapons. Bans derive their strength from morality, not practicality.
Should legal and regulatory norms be adjusted to address the threat of hyperintelligent autonomous weapons in the future? Maybe—but dumb autonomous weapons are altering norms right now.
Some say the effort to use the Convention on Certain Conventional Weapons to pre-emptively ban lethal autonomous weapons systems has failed and should therefore be abandoned. This argument is wrong.
If one country fields machines that autonomously target and kill humans, others could quickly follow, setting off destabilizing global arms races. And that’s only a small part of the problem.
The Turkish-made Kargu-2 drone can operate in autonomous mode and may have been used to attack retreating soldiers fighting against the UN-recognized government in Libya. There's an ongoing global debate about these sorts of weapons, and the Kargu-2 is evidence that it's happening none too soon.
If open-source analysts are right, a loitering munition capable of using AI to pick a target (a killer robot, in other words) was used in the Russia-Ukraine conflict. Autonomous weapons using artificial intelligence are here. And what’s more, the technology is proliferating fast.
The United Nations has debated whether to ban lethal autonomous weapons for years now. As countries make rapid progress in the autonomous capabilities of weapons systems, will any ban come too late to prevent these weapons from being used at borders or in war?
From Harvard University's Belfer Center, this study of artificial intelligence and its likely security implications is an outstanding one-stop primer on the subject.
The Bulletin produced a lot of great coverage of biosecurity, lethal autonomous weapons, and more. Take a look at some of our best disruptive technology stories of the year.
A compilation of quality nuclear policy news published on the Web, around the world, leading with North Korea: “Will North Korea’s long range missile success help Iran?”; “A Summit Without Fireworks Over North Korea”; “Australia should consider missile defence to counter North Korea: Kevin Rudd”; “North Korea’s missile could hit Canada, and we might not be protected: …”; and more.
Although activists are calling for an international ban on lethal autonomous weapons, incorporating AI into weapons systems may make them more accurate and result in fewer civilian casualties during war.