By Lucien Crowder | December 21, 2017
Nuclear weapons have threatened humanity for more than 70 years. Anthropogenic climate change, though largely unrecognized until recent decades, had its beginnings in the 19th century. Wouldn’t it be nice if advances in technology stopped throwing new problems at the world? No such luck. Several emerging technological threats could—soon enough—come to rival nuclear weapons and climate change in their potential to upend (or eliminate) civilization.
In 2017, the cyber threat began to seem very serious and real. Experts in artificial intelligence pounded an ominous drumbeat about decision-making weapons and bloodless machines with superbrains. Researchers in biotechnology reported that advances in their field could lead to the deliberate, efficient spread of disease, among a host of mind-boggling dangers. Such thoughts aren’t very cheery this holiday season. But when it comes to treating emerging threats as seriously as their more established cousins, no time beats the present. The Bulletin’s work is to reveal where danger lurks and illuminate paths to safety. In 2017, the articles listed below cast a powerful light.
Project Maven brings AI to the fight against ISIS
By Gregory C. Allen
A crash Defense Department program designed to deliver AI to a combat theater within six months is a smashing success so far. But is the Pentagon ready for the enormous challenges that lie at the intersection of military power and artificial intelligence?
Neuroscience—and the new weapons of the mind
By Robert Bruner and Filippa Lentzos
Will neurotechnologies be deployed on the battlefield soon? Probably not—but pharmaceutical attacks and malevolent brain-brain networks are emerging risks that bear watching.
“Netwar”: The unwelcome militarization of the Internet has arrived
By Jonathan Zittrain
Once, the digital world was free from sustained state exploitation. Now it is suffused with it, and coping with this new intrusion will require new strategies.
“As much death as you want”: UC Berkeley’s Stuart Russell on “Slaughterbots”
An interview with Stuart Russell
If you never dreamed that toy-like drones from the big-box store could be converted (with a bit of artificial intelligence and a touch of shaped explosive) into face-recognizing assassins with a mission to terminate you, well, dream it.
Why “stupid” machines matter: Autonomous weapons and shifting norms
By Ingvild Bode and Hendrik Huelss
Does the future threat of hyperintelligent autonomous weapons require adjustments to legal and regulatory norms? Maybe—but dumb autonomous weapons are altering norms right now.
Growing threat: Cyber and nuclear weapons systems
By Page Stoutland
Cyberattacks on nuclear weapons systems are all too plausible. Can nuclear deterrence survive when the weapons themselves are at risk?