
To avoid an AI “arms race,” the world needs to expand scientific collaboration

By Charles Oppenheimer | April 12, 2023

The military applications of AI

Humans create technology using science and engineering. That process is as natural as the flowers in the field, a consequence of billions of years of the universe expanding and becoming what it is today. “As the ocean ‘waves,’ the universe ‘peoples,’” as British philosopher Alan Watts said. And as they multiply, people create—our cities, roads, boats, and bridges crusting the world, in much the same way that ants build a colony—with the planet now reaching an indisputable Anthropocene epoch, as one can clearly see from a nighttime airplane flight.

The arc of our collective evolution came to an inflection point on July 16th, 1945, in the form of a mushroom cloud created by the first atomic bomb explosion over the Jornada del Muerto in New Mexico, a test called Trinity. The atomic bomb wasn’t a singular isolated development that suddenly changed humanity but an indelible step in an ongoing evolution. Now, at this stage in that evolution, humans can control the natural world with their minds and tools—and control it so completely that they can destroy the very fabric of human society if they choose that path.

In 1945, there were those who recognized the change humanity was going through—Los Alamos lab director J. Robert Oppenheimer, Nobel laureate Niels Bohr, Secretary of War Henry Stimson, and Albert Einstein, among many others—and who advocated for a world of cooperation based on science. Some—those officials and bureaucrats who believed in power politics and in protecting budgets more than humanity—did not see the fundamental shift in human affairs that atomic weapons had wrought. Their simplistic understanding drove us-versus-them policies that echoed their Neolithic ancestors’ tribal fears. So in the aftermath of World War II, the world got a nuclear arms race instead of a new level of human collaboration.

The scientists who discovered the physical reality that allowed for the creation of atomic bombs were forced to consider what they should do about their extremely dangerous scientific and technological advance. On November 2nd, 1945, pouring his heart out to the scientists he had led in building the bomb at Los Alamos, Oppenheimer said: “If you are a scientist you believe that it is good to find out how the world works; that it is good to find out what the realities are; that it is good to turn over to mankind at large the greatest possible power to control the world and to deal with it according to its lights and its values.” The same considerations are being pondered today about other technological threats, including those posed by climate change and artificial intelligence.

History shows that humans will push science in new directions, regardless of whether some of those directions are dangerous. Even if an area of scientific inquiry and advance were simply too dangerous to pursue, past examples make clear that the advance couldn’t be stopped by a moral, political, or regulatory decision put forward by one group. If the world couldn’t put the brakes on something as purely evil as a thermonuclear weapon 1,000 times more powerful than the atomic bomb used on Hiroshima, it’s laughable to assume that anyone will stop the development of new ways for computers to output sequences of characters. If the research that advances AI isn’t done in the United States, somebody else will do it.

So if humanity will create technology, despite its level of danger, how will we manage it? That is always the question, and it is a question of human relations more than technical science. Our science may have advanced to new heights, but inside, human beings remain, to a significant degree, the tribal apes who grew together for millions of years in natural competition and conflict. There are, of course, some modern and evolving forms of cooperation, and of new consciousness. The question is whether humans can fundamentally change their ways of relating and create forms of international cooperation that are more akin to science-based policy than ancient tribal warfare.

With the benefit of hindsight, it’s clear the policy suggestions scientists made from mid-1945 through 1947 for dealing with nuclear weapons—placing them under international control, among other things—could have worked and prevented an arms race. It’s not surprising that US and other world leaders didn’t choose to work together back then. It’s only surprising that choosing to go into a wasteful and dangerous nuclear arms race hasn’t killed us all. Yet.

So what should we do now about artificial intelligence and other advances in technology that could pose catastrophic risks? The same thing we should have done in 1945, and what the smartest and wisest people in modern history advised doing: Expand scientific collaboration, instead of trying to use national borders and secrecy to grab power from our “enemies.” American, Chinese, and Russian scientists can get along, even if politicians in those countries foment fear and conflict.

With climate change, the way forward is clear: The solutions must be global and focused on producing carbon-free energy and driving energy innovation with Manhattan Project scale and urgency to meet our common climate challenge. Similarly, we could and should form new international bodies to deal with AI on a scientific rather than merely commercial basis. By forging and then expanding such productive alliances, humans could eventually unwind the cataclysmic threats they face—long before some humanity-threatening form of advanced AI is released.

Our technology has already proven it can kill us. It will always increase in power and scope. Collaboration and cooperation in managing the effects of technological and scientific advances are the areas humans need to improve on, to focus on, to invest in.

The best time to share and collaborate on dangerous technology is before trust erodes and before an arms race begins. But it’s no longer 1945; as the Chinese proverb goes, the second-best time to cooperate on managing our technological threats—by sharing scientific knowledge instead of hoarding it in secrecy for a projected advantage—is now.




