
A new military-industrial complex: How tech bros are hyping AI’s role in war

By Paul Lushenko, Keith Carter | October 7, 2024

The current debate over the implications of AI for warfighting discounts critical political, operational, and normative considerations, which suggest AI may not have the revolutionary impacts that its proponents claim. Image: TSViPhoto via Adobe Stock

Since the emergence of generative artificial intelligence, scholars have speculated about the technology’s implications for the character, if not the nature, of war. The promise of AI on battlefields and in war rooms has beguiled them. They characterize AI as “game-changing,” “revolutionary,” and “perilous,” especially given the potential for a great power war involving the United States and China or Russia. In the context of great power war, where adversaries have parity of military capabilities, scholars claim that AI is the sine qua non of victory: absolutely required to win. This assessment is predicated on the presumed implications of AI for the “sensor-to-shooter” timeline, the interval between acquiring a target and prosecuting it. By adopting AI, or so the argument goes, militaries can shorten the sensor-to-shooter timeline and maintain lethal overmatch against peer adversaries.

Although understandable, this line of reasoning may be misleading for military modernization, readiness, and operations. While experts caution that militaries are confronting a “eureka” or “Oppenheimer” moment, harkening back to the development of the atomic bomb during World War II, this characterization distorts the merits and limits of AI for warfighting. It encourages policymakers and defense officials to follow what can be called a “primrose path of AI-enabled warfare,” which is codified in the US military’s “third offset” strategy. This vision of AI-enabled warfare is fueled by gross prognostications about, and over-determination of, emerging capabilities enhanced with some form of AI, rather than rigorous empirical analysis of AI’s implications across all (tactical, operational, and strategic) levels of war.

The current debate on military AI is largely driven by “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities. Despite their influence on the conversation, these tech industry figures have little to no operational experience, meaning they cannot draw from first-hand accounts of combat to further justify arguments that AI is changing the character, if not nature, of war. Rather, they capitalize on their impressive business successes to influence a new model of capability development through opinion pieces in high-profile journals, public addresses at acclaimed security conferences, and presentations at top-tier universities.

To the extent analysts do explore the implications of AI for warfighting, such as during the conflicts in Gaza, Libya, and Ukraine, they highlight limited (and debatable) examples of its use, embellish its impacts, conflate technology with the organizational improvements provided by AI, and draw generalizations about future warfare. It is possible that AI-enabled technologies, such as lethal autonomous weapon systems or “killer robots,” will someday dramatically alter war. Yet the current debate over the implications of AI for warfighting discounts critical political, operational, and normative considerations, which suggest AI may not have the revolutionary impacts that its proponents claim, at least not now. As suggested by Israel’s and the United States’ use of AI-enabled decision-support systems in Gaza and Ukraine, there is a more reasonable alternative. In addition to enabling cognitive warfare, AI will likely allow militaries to optimize workflows across warfighting functions, particularly intelligence and maneuver. This will enhance situational awareness; provide efficiencies, especially in terms of human resources; and shorten the course-of-action development timeline.

Militaries across the globe are at a strategic inflection point in preparing for future conflict. But this is not for the reasons scholars typically assume. Our research suggests that three related considerations have combined to shape the hype surrounding military AI, informing the primrose path of AI-enabled warfare. First, that path is paved by the emergence of a new military-industrial complex that is dependent on commercial service providers. Second, this new defense acquisition process is both a cause and an effect of a narrative suggesting a global AI arms race, which has encouraged scholars to discount the normative implications of AI-enabled warfare. Finally, while analysts assume that soldiers will trust AI, which is integral to the human-machine teaming that facilitates AI-enabled warfare, trust is not guaranteed.

What AI is and isn’t. Automation, autonomy, and AI are often used interchangeably but erroneously. Automation refers to the routinization of tasks performed by machines, such as the auto-ordering of depleted classes of military supplies, but with overall human oversight. Autonomy moderates the degree of human oversight of tasks performed by machines, such that humans are in, on, or off the loop. When humans are in the loop, they exercise ultimate control of machines, as is the case for the current class of “conventional” drones such as the MQ-9 Reaper. When humans are on the loop, they pre-delegate certain decisions to machines while retaining the ability to intervene, an arrangement scholars debate in terms of nuclear command and control. When humans are off the loop, they outsource control to machines, leading to a new class of “killer robots” that can identify, track, and engage targets on their own. Thus, automation and autonomy are protocol-based functions that largely retain a degree of human oversight, which is often high given humans’ inherent skepticism of machines.
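The distinction is easier to see in code. The sketch below is a minimal illustration, with hypothetical names and purely invented logic, of how the degree of human oversight gates whether a machine-proposed action is carried out; it models no real weapon system or doctrine.

```python
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "in"    # a human must authorize each action before it is taken
    ON_THE_LOOP = "on"    # the machine acts on pre-delegated authority; a human can veto
    OFF_THE_LOOP = "off"  # the machine acts without human involvement

def engagement_proceeds(oversight, human_decision=None):
    """Return True if a machine-proposed engagement would go ahead.

    human_decision is True (approve), False (veto), or None (no input given).
    Purely illustrative; this models no real system.
    """
    if oversight is Oversight.IN_THE_LOOP:
        # Nothing happens unless a human explicitly authorizes it.
        return human_decision is True
    if oversight is Oversight.ON_THE_LOOP:
        # Proceeds under pre-delegated rules unless a human intervenes.
        return human_decision is not False
    # OFF_THE_LOOP: the machine decides on its own, the "killer robot" case.
    return True

# The same machine recommendation, with no human input, yields different outcomes
# depending only on where the human sits relative to the loop.
for mode in Oversight:
    print(mode.name, engagement_proceeds(mode))
# IN_THE_LOOP False, ON_THE_LOOP True, OFF_THE_LOOP True
```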

Although AI can be embedded in automated and autonomous systems, its functionality, purpose, and scope are different. First, AI represents the integration of data, software, and computing power that is not reducible to protocol-based functions. Second, AI is designed to perform higher-order cognitive tasks, like reasoning, that are normally reserved for humans. Finally, observers often privilege a narrow (weak) rather than generative (strong) form of AI. The latter consists of generative pre-trained transformers, such as those used in ChatGPT, which stack algorithms into artificial neural networks that improve probabilistic reasoning to classify objects and forecast outcomes based on representative data, the veritable oil of AI-enabled warfare. Narrow AI is tailorable, if not constrained: It is deliberately designed for limited tasks, such as optimizing targeting workflows with algorithms trained on representative data. In both cases, AI provides nothing more than predictions of anticipated human behaviors. Artificial intelligence can neither peer into the future nor shape it.
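To make the “narrow AI” point concrete, the following sketch shows what such a system amounts to in practice: a statistical model trained on labeled, representative data that outputs probabilities rather than certainties. It uses scikit-learn, a common machine-learning library; the features, synthetic data, and labeling rule are invented for illustration and correspond to no real military system.

```python
# A minimal sketch of "narrow" AI: a classifier trained on labeled examples that
# outputs probabilities. The features and labels below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: each row is [size, speed, thermal_signature] for an
# observed object; the label records whether past analysts classified it as a vehicle.
X_train = rng.normal(loc=[[5, 30, 0.6]], scale=[[2, 10, 0.2]], size=(200, 3))
y_train = (X_train[:, 1] + 5 * X_train[:, 2] + rng.normal(0, 1, 200) > 32).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# The model does not "know" what the object is; it only estimates a probability
# based on patterns in the data it was trained on.
new_observation = np.array([[6.0, 35.0, 0.8]])
prob_vehicle = model.predict_proba(new_observation)[0, 1]
print(f"Estimated probability the object is a vehicle: {prob_vehicle:.2f}")
```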

The primrose path of AI-enabled warfare is paved by a new military-industrial complex. Countries typically acquire military technologies, such as drones, for reasons that relate to supply, demand, and status considerations.

If countries have the technical wherewithal, they can manufacture and export military capabilities to supplement their gross domestic product (GDP), a measure of overall economic activity. In the context of coalition operations, which characterize countries’ preferred approach to the use of force abroad following the Cold War, foreign military sales also enhance interoperability across countries, which enables burden-sharing. Assuming countries have the financial resources and absorptive capacity, defined as the technical competence to integrate novel technologies into their militaries, they may also purchase new technologies to offset perceived capability gaps. Even nations that lack the technical competence to meet their militaries’ needs may still purchase novel technologies to signal parity with other countries. While the intent of these defense acquisition approaches may differ, they are unified in their responsiveness to requirements outlined by military leaders.

The political economy of the primrose path of AI-enabled warfare is different. It flips these defense acquisition processes on their heads such that industry drives, rather than responds to, militaries’ requirements for new capabilities. This approach reflects the United States’ historical preference for technology standards that are based on a “bottom-up, laissez-faire corporate-led strategy,” which emphasizes the anticipated economic advantages of leading-sector innovation.

These industry drivers consist of businesses that are funded by venture capitalists, including Anduril, Black Cape, Inc., Clarifai, CrowdAI, and ScaleAI; established defense contractors such as AWS, ECS Federal, IBM, Maxar, Microsoft, Palantir, Raytheon, and the Sierra Nevada Corporation; and business magnates like Elon Musk, Palmer Luckey, and Eric Schmidt. Schmidt, for instance, argues that “profits could go to acquire new companies, bolstering the incentive structure for defense start-ups building a different future of American weaponry.” Problematically, and as scholars Terrell Carver and Laura Lyddon note, such wealth accumulation can drive the defense acquisition process “over and above the legitimatizing narratives of national security and patriotic pride.” In one glaring example of this possibility, Luckey, founder of Anduril, promises to “save western civilization…as we make tens and tens of billions of dollars a year.”

Similarly, Musk’s Starlink uses low-earth-orbit satellites to provide militaries with assured communications in expeditionary and contested environments. Earlier in Ukraine’s war with Russia, Musk decided whether Ukraine could use the Starlink satellite network, shaping the country’s military operations against Russia on the basis of his own fears of crisis escalation. Schmidt’s new start-up, White Stork (previously Swift Beat), is designed to develop fully autonomous drones. Schmidt, capitalizing on his previous roles as chair of the National Security Commission on AI and of the Defense Innovation Board, also instantiates the new military-industrial complex wherein business leaders are framing the future direction of war, despite their lack of military experience.

At first blush, this new military-industrial complex seems to reflect militaries’ broader shift toward dual-use technologies. Since the Cold War, militaries have increasingly acquired commercial-off-the-shelf capabilities, which are cheaper, albeit more expendable, than military-grade technologies. In this way, militaries have sought to offset capability shortfalls while recapitalizing cost savings elsewhere, including training and development of exquisite technologies such as advanced fighter jets. Yet there are key differences between the purchase of dual-use capabilities and this new military-industrial complex.

Notwithstanding the military’s interest in an “open architecture” to field capabilities, the new military-industrial complex is shaped by procurement of proprietary systems that can discourage such a plug-and-play approach to modernization. Research also suggests that some of these proprietary systems are based on dubious testing and experimentation that does not approximate realistic operating environments, which means purported technological readiness levels can be inflated. This is especially troubling for AI-enabled technologies. These capabilities are data-hungry, explaining why White Stork has a satellite campus in Poland to acquire data from the conflict in Ukraine. The workaround is synthetic data. However, existing research suggests synthetic data is also inherently biased.

As opposed to dual-use capabilities, the primrose path of AI-enabled warfare is also shaped by a military-industrial complex that provides technical warfighting solutions as a service, meaning these solutions often do not respond to validated military requirements. Thus, companies have hedged their bets, investing billions of dollars in end-to-end AI-enabled technologies that they assume militaries will need to purchase to maintain lethal overmatch of adversaries during future conflict. This also means that businesses, especially their software engineers, referred to as field engineers, are embedded within military organizations to an unprecedented degree, which may muddle the legitimate use of force, at least for some critics.

One example of the new military-industrial complex is the US Army’s 18th Airborne Corps’ experiment with AI-enabled warfare. The 18th Airborne Corps used a series of training exercises, called Scarlet Dragon, to establish an AI-enabled decision-support system, referred to as the Maven Smart System, that enhanced intelligence support to operations in Ukraine. According to one recent report, the Maven Smart System is “interesting because of how its development was managed with flexibility and speed, as well as the participation of numerous software and AI service providers in a development-security-operations cycle that relied first on commercial service providers.”

The primrose path of AI-enabled warfare threatens to discount the normative implications of conflict. The primrose path is also both a cause and an effect of a perceived AI arms race between China, Russia, and the United States, wherein these countries have precipitously expanded their defense spending to achieve a comparative military advantage over one another. Most champions of AI-enabled warfare claim that these great powers are attempting to acquire a “first-mover” advantage in AI-enhanced capabilities. Schmidt, along with Harvard professor Graham Allison, recently published a report to “sound an alarm over China’s rapid progress and the current prospect of it overtaking the United States in applying AI in the decade ahead.”

This narrative reflects an assumption that a monopoly over these technologies will result in economic gains that undergird military power and shape the global balance of power. Russian President Vladimir Putin argued that whoever leads the development of AI will dominate the world; President Xi Jinping intends for China to surpass the United States as the world’s leader in AI development by 2030; and the United States is outspending other countries on AI development. US Senator Roger Wicker, for instance, recently introduced the idea of increasing defense spending to five percent of GDP, thus matching 2009 levels during the surges in Afghanistan and Iraq. Further, survey research in the United States shows that support for AI-enabled warfare among both the public and the military is strongly shaped by a perceived global AI arms race.

This perspective has implications for the legal, moral, and ethical considerations that shape countries’ use of force, which scholars emphasize to greater or lesser degrees when characterizing future war. Skeptics caution that AI-enabled warfare will deskill humans and supplant their agency, leading to unintended consequences including crisis escalation, civilian casualties, and accountability and responsibility gaps for these outcomes. Advocates assume that AI will minimize confirmation bias, wherein operators discount evidence that contradicts their earlier assessments, thus enhancing the perceived accuracy of fully autonomous capabilities, such as killer robots. Still others, though in the minority, claim that such normative concerns are a distraction, mortgaging military advantages in the interest of evolving global norm-setting for the responsible use of AI that countries will not abide by anyway.

These positions impact public perceptions of legitimate wartime conduct in different ways that have consequences for the durability of operations abroad. Research suggests that the first and third perspectives, if adopted, will likely undermine the perceived legitimacy of novel battlefield capabilities augmented with AI. Carver and Lyddon further observe that such normative considerations are often “subjected to ruthless marginalization, barely tolerated at best, and they become targets for assiduous co-option by critics.” The second perspective, on the other hand, will likely do the opposite. Existing studies show that heightened precision, especially when capabilities are used lethally, will favorably shape the public’s perceived legitimacy of operations conducted with AI-enabled technologies.

Soldiers do not trust AI. The new military-industrial complex, and the narrative of an AI arms race that encourages it, assumes that soldiers will trust human-machine teaming. In a recent opinion piece with Schmidt, Mark Milley, former chairman of the US Joint Chiefs of Staff, pontificated that “soldiers could sip coffee in their offices, monitoring screens far from the battlefield, as an AI system manages all kinds of robotic war machines.” Despite this sanguine prediction, it is unclear what shapes soldiers’ trust in AI and encourages them to overcome their inherent skepticism of machines. To help fill this gap, our research addresses two key questions. First, will soldiers trust AI used for different purposes, tactical and strategic, and with varying degrees of human and machine oversight? Second, what factors shape soldiers’ trust in machines?

These questions can be studied using survey experiments across the US military, which yield rare insights into soldiers’ attitudes toward AI. By tapping into elite samples at US war colleges, where we polled the future generals and admirals who will be responsible for integrating AI across warfighting formations, we have gleaned valuable insights. Similarly, we polled cadets assigned to the Reserve Officers’ Training Corps to assess whether, or to what degree, generational differences shape soldiers’ trust in AI. Our findings suggest that soldiers’ trust in AI is not a foregone conclusion. Further, we found that trust is complex and multidimensional. Importantly, these findings are consistent across the military ranks.

First, senior officers do not trust AI-enhanced capabilities. To the extent they do demonstrate increased levels of trust in machines, their trust is moderated by how machines are used, either on the battlefield or in the war room, and with what degree of human oversight. Senior officers are least distrustful of AI used for strategic-level decision-making with human oversight, though their level of trust under these conditions is still very low. This skepticism is exacerbated by generational differences across the ranks. Cadets, though often referred to as “digital natives,” also maintain a conservative understanding of the appropriate use and constraint of AI. Compared to senior officers, however, they demonstrate more trust in AI-enabled technologies used for strategic-level deliberations and with human control.

Second, trust in machines is shaped by a tightly calibrated set of considerations, including technical specifications, perceived effectiveness, and regulatory oversight. The use of AI for non-lethal purposes, with heightened precision and a degree of human control, positively shapes soldiers’ trust. Trust is also favorably shaped by soldiers’ moral beliefs, wherein they attempt to balance the harms imposed on civilians and soldiers against mission success. Indeed, these conflicting moral logics exercise the biggest effect on military attitudes of trust in partnering with AI. International oversight also increases trust in AI, suggesting the importance of continued norm-setting for the responsible use of AI in the military. Though these results hold for cadets, these trainees are also more tolerant of false positives, or target misidentification, suggesting that they are more willing to accept civilian casualties. This could reflect cadets’ limited exposure to the consequences of operations that go awry. For both senior officers and trainees, trust is further shaped by the interaction between the autonomy and lethality of AI-enabled capabilities, with fully autonomous capabilities used lethally reducing trust.
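For readers unfamiliar with interaction effects, the sketch below shows, in purely illustrative terms, how such a relationship might be estimated from survey responses. The variable names, simulated data, and coefficients are ours, chosen for illustration; this is not the authors’ actual survey analysis.

```python
# A hypothetical sketch of how an interaction between autonomy and lethality might be
# estimated in survey data. Variables and data are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

df = pd.DataFrame({
    "autonomy": rng.integers(0, 2, n),   # 0 = human-supervised, 1 = fully autonomous
    "lethality": rng.integers(0, 2, n),  # 0 = non-lethal use, 1 = lethal use
})
# Simulated trust scores in which trust drops most when autonomy and lethality co-occur.
df["trust"] = (
    5.0 - 0.5 * df["autonomy"] - 0.4 * df["lethality"]
    - 1.0 * df["autonomy"] * df["lethality"] + rng.normal(0, 1, n)
)

# "autonomy * lethality" expands to both main effects plus their interaction term;
# a negative interaction coefficient corresponds to the pattern described above.
model = smf.ols("trust ~ autonomy * lethality", data=df).fit()
print(model.params)
```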

Visualizing the future of war. Still, it is not only possible but probable that AI will shape future warfare in unique ways. As discussed above, the development of AI for commercial applications is re-ordering the defense acquisition process. During earlier periods of technological innovation, defense systems such as the global positioning system and the internet were adopted for civilian use. In the short term, or within the several years that roughly align with the US Department of Defense’s “Future Years Defense Program,” it is likely that commercial AI applications will be adopted for military use rather than developed specifically for it. Surveying the commercial sector thus offers insights into how AI may be easily co-opted for military use, particularly for cognitive warfare and for intelligence and maneuver.

The use of AI to create novel text, images, and video will likely exacerbate the challenge of cognitive warfare. This form of non-lethal warfare involves the social engineering of adversaries’ beliefs, with the overall intent of affecting their defense priorities, military readiness, and operations. In this way, countries will attempt to harness AI to produce misinformation and disinformation, designed to mislead and deceive opponents, across the competition continuum ranging from peace to war.

During competition, countries will likely use AI to stoke social, political, and economic grievances among their opponents, such that their defense planning and military readiness are mired in increasing levels of partisanship, social unrest, and even political violence. Russia used AI to mislead and deceive Americans during the 2020 US presidential election and is reportedly attempting to do so again during the 2024 election. These operations are designed to aggravate partisan divisions between the Democratic and Republican parties, as well as to delegitimize democratic institutions, which can affect military preparedness. Inconsistent or stalled defense funding authorized by the US Congress undermines modernization timelines and politicizes the military, which has implications for recruitment and retention. Indeed, the perceived “wokeness” of the US military has resulted in the worst recruitment shortfalls in over half a century, threatening the viability of the all-volunteer force.

During armed conflict, the confusion created by AI-generated psychological operations will threaten the situational awareness required for timely decision-making. In worst-case scenarios, this could cause misidentification of friendly forces, leading to fratricide. It can also fracture alliances through suspicions of entrapment or abandonment; sow social unrest in theaters of operations, diverting attention, personnel, and resources to lower-priority missions; or foment domestic protests, which can undermine the public will required for prolonged military operations abroad. Countries could also use AI to mislead and deceive the families of servicemembers, ultimately affecting soldiers’ morale and combat performance. According to one recent study, Ukrainians are using memes, which are simple, catchy, and satirical cartoons, to create confusion and exacerbate cleavages among Russians, especially those conscripted into the military.

On the other hand, AI will likely aid military planning, especially for intelligence and maneuver. Algorithms enhanced with AI, and trained on military datasets, will vastly improve analytic quality and accelerate the production of the underappreciated staff work that supports modern military operations. These algorithms will be able to rapidly conduct terrain analysis using existing geographic and bathymetric data to facilitate maneuver planning and to forecast known, likely, and suspected enemy locations. Using data collected about the enemy through analysis of its doctrine, the terrain, and its current operations, these algorithms will be able to create situational templates, derive reconnaissance objectives, maneuver collection assets, and provide optimal targeting solutions. The acceleration of analytical support to military decision-making will exponentially increase the rate of lethality and may be adopted without the ethical reflection that accompanies discussions about lethal autonomous systems. Indeed, Israel has used AI applications in Gaza to predict the location of a suspected Hamas terrorist, the likelihood that a suspected Hamas terrorist is in a building, or whether a suspected Hamas terrorist has entered a compound, thus rapidly increasing the speed of targeting.
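To illustrate the kind of routine staff computation such algorithms would automate, here is a toy sketch that scores grid cells for likely enemy positions by combining terrain factors. The layers, weights, and data are invented for illustration and reflect no fielded system or actual doctrine.

```python
# A toy sketch of the staff work described above: scoring grid cells for likely
# enemy positions by combining terrain factors. Everything here is invented.
import numpy as np

rng = np.random.default_rng(2)
rows, cols = 5, 5

# Hypothetical terrain layers, one value per grid cell, each scaled to [0, 1].
concealment = rng.random((rows, cols))     # vegetation or urban cover
trafficability = rng.random((rows, cols))  # how easily vehicles can move through
observation = rng.random((rows, cols))     # fields of fire and lines of sight

# A simple weighted sum stands in for the "situational template": cells that offer
# cover, mobility, and good observation score higher as likely defensive positions.
weights = {"concealment": 0.4, "trafficability": 0.3, "observation": 0.3}
score = (weights["concealment"] * concealment
         + weights["trafficability"] * trafficability
         + weights["observation"] * observation)

# Rank cells to suggest where to focus reconnaissance first.
flat_order = np.argsort(score, axis=None)[::-1]
top_cells = [divmod(int(idx), cols) for idx in flat_order[:3]]
print("Highest-priority cells for reconnaissance (row, col):", top_cells)
```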

In the more distant future, as AI matures, militaries would likely delegate still more of their operations to autonomous systems. This is often referred to as minotaur warfare, in which machines direct humans during combat and across domains, ranging from patrols of soldiers on the ground to constellations of warships on the ocean to formations of fighter jets in the air. To achieve this vision of AI-enabled warfare, militaries will have to undergo radical organizational restructuring. Among other things, new career fields will emerge, different skill sets will be valued, the role of command and control will need to be reimagined, and centralization may usurp decentralization as a guiding principle. Navigating these strategic organizational challenges will be just as important to militaries as the integration of AI-enabled technologies on the battlefield, which Schmidt and other entrepreneurs emphasize. To manage this transformation, experts must step off the primrose path of AI-enabled warfare and better assess the merits and limits of new technologies.


