The authoritative guide to ensuring science and technology make life on Earth better, not worse.

Search results for autonomous weapon

Defending against “The Entertainment”

Amid the published angst about AI and its hypothetical threats, more attention ought to be given to the threat that AI-enabled entertainment poses to our brains and our civilization.

2024 Doomsday Clock announcement

PRESS RELEASE: Doomsday Clock remains at 90 seconds to midnight

In 2024, the Doomsday Clock remains at 90 seconds to midnight, the closest the Clock has ever been to midnight. This reflects the continued state of unprecedented danger the world faces. The Bulletin of the Atomic Scientists, stewards of the Doomsday Clock, emphasized in their announcement that the Clock can be turned back, but governments and people need to take urgent action. 

Sixty years after the Cuban Missile Crisis, how to face a new era of global catastrophic risks

Sixty years after the Cuban Missile Crisis, with the world again stumbling toward the precipice, policy makers must update Cold War risk-reduction measures for a new era.

Neurotechnology overview: Why we need a treaty to regulate weapons controlled by … thinking

Neurotechnology is advancing rapidly, in part due to funding from military agencies. Although neurotechnology may be useful in designing weapons, there is no international framework describing desirable and undesirable uses of the technology.

Just say no

Hypersonic flight may sound like screaming good fun—but it’s not meant for you. It’s meant for weapons that would probably be used only in the opening salvos of a nuclear war. No nation has yet succeeded in developing non-ballistic missiles that fly long distances at or above Mach 5 (five times the speed of sound), …

Do scientists need an AI Hippocratic oath? Maybe. Maybe not.

Artificial intelligence has benefited humanity, but scientists also recognize the possibility for a dystopic outcome in which AI-powered computers one day overtake humans. Would an ethical oath for AI scientists avert this outcome? Or is such an oath too simplistic to be useful?

Today’s AI threat: More like nuclear winter than nuclear war

Instead of a nuclear war analogy, a more productive way to approach AI is as a disruption that more closely resembles a nuclear winter.

Seven of 2017’s freshest perspectives on nuclear weapons, biological threats, and more

A sampling of the year's best "Voices of Tomorrow" essays.

Artificial intelligence: a detailed explainer, with a human point of view

Is artificial intelligence, AI, a threat to our way of life, or a blessing? AI seeks to replicate and maybe replace what human intelligence does best: make complex decisions. Currently, human decision-making may include AI as a support or backup. But AI could also be “let out of the box” to act …

A picture’s power to prevent

The most important legacy of Hiroshima and Nagasaki is in images that could avert worse wars to come.

2019 Doomsday Clock Statement

A new abnormal: It is still 2 minutes to midnight. 2019 Doomsday Clock Statement, Science and Security Board, Bulletin of the Atomic Scientists. Editor, John Mecklin. Statement from …

The argument for a hypersonic missile testing ban

A ban on the testing of hypersonic missiles would place a roadblock in the path of a destabilizing, technology-driven arms race.

Which drone future will Americans choose?

The decisions US leaders make now over unmanned aerial vehicles will have enormous consequences.

How science-fiction tropes shape military AI

Pop culture influences how people think about artificial intelligence, and that spills over to how military planners think about war—obscuring the more mundane ways AI is likely to be used.

New digital journal: Security at sea, and under it

In their article on China’s security agenda in the South China Sea, experts John Lewis and Xue Litai quote Chinese president Xi Jinping: “History and experience tell us that a country will rise if it commands the oceans well and will fall if it surrenders them.” This special issue of the Bulletin of the Atomic …

Hypersonic missiles: Junk nobody needs

Last September in the Bulletin, I proposed a moratorium and eventual ban on hypersonic missile testing. The following month, US Air Force Lt. Col. Jeff Schreiner proposed a similar idea in Stars and Stripes. The principal obstacle to enacting this proposal is just skepticism—the same skepticism that every successful arms control initiative initially encounters. My …

Rising experts at the Bulletin of the Atomic Scientists

In a time of growing global risk, it is more important than ever that rising leaders share their research, find their voices, and test their arguments to create a safer, healthier planet. In our Voices of Tomorrow section, the Bulletin proudly publishes the work of rising experts immersed in the issues central to our core interests—nuclear …

It is 30 seconds closer to midnight

The full text of the Bulletin Science and Security Board 2017 Doomsday Clock statement, which moved the Clock to two and a half minutes to midnight.

Artificial intelligence beyond the superpowers

Much of the debate over how artificial intelligence (AI) will affect geopolitics focuses on the emerging arms race between Washington and Beijing, as well as investments by major military powers like Russia. And to be sure, breakthroughs are happening at a rapid pace in the United States and China. But while an arms race between …
Yoshua Bengio, founder and scientific director of Mila, the Quebec AI Institute, during a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing in July. The hearing was titled "Oversight of A.I.: Principles for Regulation." Photo credit: Valerie Plesch/Bloomberg via Getty Images

‘AI Godfather’ Yoshua Bengio: We need a humanity defense organization

In this interview, AI godfather Yoshua Bengio discusses attention-grabbing headlines about AI, taboos among AI researchers, and why top AI researchers may disagree about the risks AI may pose to humanity.