How science-fiction tropes shape military AI

By Ian Reynolds | March 28, 2023

Terminator cyborg skeleton. Photo by Daniel Oberhaus (2017), Creative Commons license

“Pop culture has this huge power to shape people’s thinking,” says Timnit Gebru, a leading AI researcher working on the implications of bias in artificial intelligence. Science-fiction movies, TV shows, and literature have conjured images of all-knowing robots acting for good or for ill, and these pop-culture representations have influenced public perceptions of intelligent machines.

The Terminator franchise, for example, began with a 1984 sci-fi film about a humanoid killing machine sent back in time by a hostile artificial-intelligence network of the future. For almost 40 years, it has helped shape public perceptions of artificial intelligence gone astray.

But it is not only the general public whose perceptions of AI are influenced by pop culture. The US defense bureaucracy also plugs into these stories. References to pop culture can function as “rhetorical repertoires” that defense officials use to explain the stakes, risks, and military uses of AI. By envisioning AI-enabled war as a world of Terminators, these repertoires may mask the more practical ways AI will broadly shape conflict and security in the near term—including what some may consider “mundane” applications of AI in data processing, analysis, and decision support.

For example, the defense technology company Palantir recently claimed that its software is being used to inform targeting decisions in Ukraine. If so, the data-processing capabilities of algorithmic systems are already being incorporated into the workflow of military decision making.

Pop culture seeping into military practices. Scholars have shown that elements of pop culture and science fiction can directly influence political outcomes and practices, whether by shaping how policy makers conceived of Cold War security problems or by serving as narratives for advocacy groups attempting to ban “killer robots.” Because stories of humans living with or fighting against intelligent machines are common in science fiction, it is worth investigating how these stories mediate between the technology of AI and the practices of war.

For example, the Strategic Defense Initiative, a Reagan-era defense project premised on combining highly advanced technologies for missile defense, became popularly known as “Star Wars,” after the 1977 sci-fi film. The program was widely considered infeasible by the technological standards of the era. David Parnas, a member of the initiative’s panel on computing, resigned in 1985 over concerns that its software components, including elements of AI, could not be made trustworthy enough.

The Terminator series has also proved influential. In 2016, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva discussed what he and media organizations called the “Terminator conundrum.” He wondered: “What happens when that [machine] can inflict mortal harm and is empowered by artificial intelligence?…How are we going to know what is in [its] mind?” Selva also wondered what adversaries would do with the technology.

Then-Deputy Secretary of Defense Robert Work also referred to the series: “If our competitors go to Terminators … and it turns out the Terminators are able to make decisions faster, even if they’re bad, how would we respond?” Work framed the integration of AI-enabled autonomous weapon systems into war as a serious problem for US defense policy.

Others in the US defense community have taken a different tack by broadly pointing to science fiction as a yardstick for advances in AI and machine learning. For example, articles in military journals have suggested that autonomous weapons and robots are “progressing” from science-fiction movies to the domain of war. The 2021 National Security Commission on Artificial Intelligence Report made similar claims, arguing that “AI will not stay in the domain of superpowers or the realm of science fiction.”

Military officials have also expressed anxieties that some may not realize the capabilities of AI with respect to military practice. As now-retired Air Force Lt. General John Shanahan noted in congressional testimony on the military applications of AI, “absent somebody getting to play with AI, it’s science fiction.” Later in his testimony he referenced science fiction again, this time as a counter to possible skeptics, arguing “nobody believes it yet, because they haven’t the benefit of actually seeing it work … We have to have people believe it’s real and not just science fiction.” Shanahan wanted people to believe that a technological world once existing only in the domain of the fictional had crossed over to the real.

Science fiction appears to serve as a point of orientation, at least for some in US defense circles, for debates about the integration of AI into military practice. For Robert Work, the Terminator series is a focal point for thinking about possible threats stemming from military technological competition involving the United States and other great powers, namely China and Russia. References to the films can encourage people to imagine a world in which “they” have Terminators and “we” do not.

Selva has similar worries, although he links international competition to the ethical and moral dilemmas of delegating aspects of war to non-humans. Here too, Terminator acts as a rhetorical platform for conveying those worries.

Rhetorical risks. Scholars of international relations have demonstrated the important role that rhetoric can play in circumstances such as great power politics and post-war reconstruction by shaping the terms and conditions in which political processes play out. Analysts have good reason to believe that similar rhetorical repertoires will serve as a type of grammar for discussing the intersection of technology and security policy.

However, using such language can be risky. First, these rhetorical repertoires can focus too much attention on the implications of future super-intelligent machines, while overlooking the effects of AI technologies that already exist. In a Twitter thread last year, AI ethicist Giada Pistilli wrote that “focusing on these sci-fi issues only perpetuates the collective panic that exists around these technologies while neglecting their actual risks.”

Such worries apply with particular force to a sector with violence at its core: war. Invoking Terminators as a rationale for competition in military technology risks feeding exactly that kind of “collective panic.” It also taps into a technological fetishism that has a long, but contested, tradition in the US military. Calls to worry over “them” having Terminators, and thus more advanced military technology, invoke sci-fi portrayals of AI, both negative and positive, that can “create unreasonably lofty expectations,” warns David C. Benson, who until early 2023 taught strategy and security studies at the School of Advanced Air and Space Studies.

Such rhetoric can also suggest that a faster pace of military technological development is the route to greater security for “us,” playing into narratives that technological sophistication can lead to either “easy” war or international peace through technological dominance. If a war broke out, who would want to be left without the Terminators on their side?

This is not to say that exploring the fictional is always bad, or that the defense officials quoted above acted with ill intent. Fictional scenarios can help analysts imagine new worlds and break out of codified ways of thinking, a practice the US military is experimenting with.

However, references to sci-fi depictions of military AI can also mask the more practical, and even mundane, ways that AI and machine learning will intersect with war and broader security ecosystems in the short term. These include AI integration into targeting practices, intelligence analysis, command and control, cyber operations, and other areas such as border security and law enforcement.

Military AI efforts such as the US Defense Department’s Joint All-Domain Command and Control (JADC2) and Project Maven appear to be focused on the acquisition, processing, analysis, and dissemination of data that will shape lethal decision making, delegating elements of war traditionally conducted by humans to algorithmic systems. These efforts are not aimed at building superintelligent Terminators, but they are nevertheless reconfiguring the ways in which the US military will fight.

There may be no red glowing eyes behind a titanium-alloy face, but AI technologies appear set to shape decisions and actions that have life-or-death consequences.

