Search results for autonomous weapon


What the Campaign to Stop Killer Robots can learn from the antinuclear weapons movement

What today's campaigners against the battlefield use of A.I.-powered autonomous robots can learn from the successful antinuclear movements of yesteryear.

Beatrice Fihn explains why nuclear weapons are a scam

A year ago, a majority of the world’s countries—122 of them—voted to adopt a treaty with the highly ambitious goal of abolishing nuclear weapons entirely. To the treaty’s critics, it wasn’t so much ambitious as foolish, counterproductive, or irrelevant. But proponents and critics alike can at least agree that it was unprecedented. While the community …

Seven of 2017’s freshest perspectives on nuclear weapons, biological threats, and more

A sampling of the year's best "Voices of Tomorrow" essays.

Neuro, cyber, slaughter: Emerging technological threats in 2017

Looking back at our best coverage in 2017 of emerging technological threats. 

Rising experts at the Bulletin of the Atomic Scientists

In a time of growing global risk, it is more important than ever that rising leaders share their research, find their voices, and test their arguments to create a safer, healthier planet. In our Voices of Tomorrow section, the Bulletin proudly publishes the work of rising experts immersed in the issues central to our core interests—nuclear …

New digital journal: Security at sea, and under it

In their article on China’s security agenda in the South China Sea, experts John Lewis and Xue Litai quote Chinese president Xi Jinping: “History and experience tell us that a country will rise if it commands the oceans well and will fall if it surrenders them.” This special issue of the Bulletin of the Atomic …

US killer robot policy: Full speed ahead

The Defense Department's policy for autonomy in weapon systems may appear to reflect caution, but it allows the Pentagon to fund, test, buy, and use technology that could target and kill by machine decision.

If a killer robot were used, would we know?

After a recent UN report suggested that a Turkish-made Kargu-2 had autonomously hunted down retreating troops in Libya, numerous media outlets devoted coverage to the issue of so-called lethal autonomous weapons. But much of the coverage misses an important point: It will be extremely difficult to verify if and when such a weapon is used.

Ban killer robots? How about defining them first?

There’s a lot of talk about regulating autonomous weapons, but thoughtful, effective policy will be difficult to make if we can’t even agree on what they are.

An expert collection on the military applications of AI

Over the course of this week, the Bulletin, in partnership with the Stanley Foundation, is publishing essays by top experts on how to manage the explosion of military AI research and development around the world. Here’s what you need to know: The promise and peril of military applications of artificial intelligence Michael C Horowitz; @mchorowitz Published Monday, …

Manifestos and open letters: Back to the future?

Why UN discussions on the management of lethal autonomous weapons need greater participation by the scientific and research communities and representatives of the private sector. Statements of alarm are not enough.


Meet the future weapon of mass destruction, the drone swarm

Drone swarms are getting larger and, coupled with autonomous capability, they could pose a real threat. Think “Nagasaki” to get a sense of the death toll a massive drone swarm could theoretically inflict.

“As much death as you want”: UC Berkeley’s Stuart Russell on “Slaughterbots”

If you never dreamed that toy-like drones from off the shelf at the big-box store could be converted—with a bit of artificial intelligence and a touch of shaped explosive—into face-recognizing assassins with a mission to terminate you—well, dream it.

AI and the future of warfare: The troubling evidence from the US military

US military officers can approve the use of AI-enhanced military technologies that they don't trust. And that's a serious problem.
Illustration by Matt Field. Based in part on photos by gloucester2gaza and Julian Hertzog via Wikimedia Commons. CC BY-SA 2.0 / CC BY 4.0. Stylized.

Top US Army official: Build AI weapons first, then design safety

A top US Army official advocates for lethal autonomous weapons.

The promise and peril of military applications of artificial intelligence

The promise of AI—including its ability to improve the speed and accuracy of everything from logistics and battlefield planning to human decision making—is driving militaries around the world to accelerate research and development. Here’s why.

The Russian Uran-9 is an armed robot. Credit: Dmitriy Fomin via Wikimedia Commons. CC BY 2.0.

In Syria, Russia found the chance to showcase its swagger–and its robot weapons

The Syrian civil war gave Russia the chance to test and purportedly improve its robotic and autonomous weapons. Weapons makers showcased some of their products at a recent convention in Moscow.

Who’ll want artificially intelligent weapons? ISIS, democracies, or autocracies?

If you’re a dictator who can’t trust your own people in the military, you can still trust a machine to do your dirty work.

AI in war: Can advanced military technologies be tamed before it’s too late?

Countries urgently need to agree on common rules governing the development, deployment, and use of emerging military technology in war.

The best of the roundtables, 2015

A choice selection of pieces from our Development and Disarmament Roundtable series.