AI in war: Can advanced military technologies be tamed before it’s too late?

By Steven Feldstein | January 11, 2024

A Ukrainian military soldier with a quadcopter control panel with a joystick and a screen. Image by dvoinik via Adobe Stock

For nearly 14 years, Israeli operatives had targeted Iran’s top nuclear scientist Mohsen Fakhrizadeh, who oversaw a clandestine program to build a nuclear warhead. On November 27, 2020, in a move that stunned the world, Israeli intelligence officials assassinated the scientist. Fakhrizadeh and his wife had left the Caspian coast and were traveling in a convoy of four cars towards their family home in the Iranian countryside. As they approached a U-turn, a cascade of bullets shattered their windshield and struck Fakhrizadeh repeatedly.

The Israeli agent who carried out the assassination didn't have to flee the scene: The shooter had used a remote-operated machine gun, triggered from more than 1,000 miles away. The Mossad had customized a Belgian-manufactured rifle with an advanced robotic apparatus that fit into the bed of a pickup truck and was outfitted with a bevy of cameras, providing a full view of the target and the surrounding environment. The Mossad relied on artificial intelligence software to compensate for the time lag in transmitting signals to the weapon, the shaking of the truck as each bullet was fired, and the speed of Fakhrizadeh's vehicle.
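To illustrate the kind of correction such software must make, here is a minimal, purely hypothetical sketch in Python. None of the numbers or details come from reporting on the actual operation; the latency, bullet flight time, and vehicle speed are invented values used only to show how a system might compute how far to aim ahead of a moving vehicle when its fire command arrives after a communications delay.

```python
# Illustrative only: a toy calculation of how far to "lead" a moving target
# when the fire command arrives after a communications delay. All numbers
# are hypothetical and do not reflect the operation described above.

def lead_distance(signal_delay_s: float, bullet_flight_s: float,
                  target_speed_kmh: float) -> float:
    """Distance (in meters) the target moves between the moment the remote
    operator fires and the moment the bullet arrives."""
    target_speed_ms = target_speed_kmh / 3.6          # km/h -> m/s
    total_lag_s = signal_delay_s + bullet_flight_s    # command latency + time of flight
    return target_speed_ms * total_lag_s

if __name__ == "__main__":
    # Hypothetical figures: 1.6 s of link delay, 0.2 s bullet flight time,
    # vehicle traveling at 60 km/h.
    offset_m = lead_distance(signal_delay_s=1.6, bullet_flight_s=0.2,
                             target_speed_kmh=60.0)
    print(f"Aim roughly {offset_m:.1f} m ahead of the vehicle's current position.")
```

Even this toy version makes the core difficulty plain: every additional fraction of a second of delay translates into meters of error against a moving target.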

Far from being an outlier, the operation was a harbinger of innovation to come. Nations both large and small are racing ahead to acquire advanced drones, incorporate algorithmic targeting analysis, and develop an array of autonomous land- and sea-based weapons, all with little oversight or restriction. There is thus an urgent need for countries to agree on common rules governing the development, deployment, and use of these tools in war.

To enhance oversight and predictability, experts and policymakers should consider what steps leading AI powers could take. The United States could lead the way by pledging oversight concerning its own development of AI weapons. It could also team with other nations to create an independent expert monitoring group that would keep an eye on how AI is being used in war. Finally, countries should come to the table to decide norms of use for emerging military tech—before it’s too late.

From Ukraine to Gaza. AI systems relevant to national security span a range of applications but can be broadly classified into upstream tasks (intelligence, surveillance, and reconnaissance; command and control; information management; logistics; and training) and downstream tasks (target selection and engagement). Concretely, AI applications give militaries greater analytic capacity, to aggregate and analyze battlefield data, and greater operational capacity, for missile strikes and for deploying autonomous AI-powered drones.

Some experts argue that the United States cannot afford to stymie progress towards developing fully autonomous weapons lest the Chinese or Russians surpass its efforts. And to be sure, AI capabilities are rapidly proliferating. As the Ukraine war and the hostilities in Gaza show, without a common framework and agreed-upon limitations, states risk a race to the bottom, deploying successively more destructive systems with scant restrictions.

The current war in Ukraine has been described as a “super lab of invention” that has given tech companies and entrepreneurs an opportunity to test new tools directly on the battlefield. The conflict has revealed a major shift in how war is fought. One of the most consequential changes has been the introduction of integrated battle-management systems that offer up-to-the-minute transparency about troop movements and locations—all the way down to basic unit levels. “Today, a column of tanks or a column of advancing troops can be discovered in three to five minutes and hit in another three minutes,” Maj. Gen. Vadym Skibitsky, a senior official in Ukraine’s military intelligence service, cautions. “The survivability on the move is no more than 10 minutes.”


The Ukrainian front line has been flooded with unmanned aerial vehicles, which not only provide constant monitoring of battlefield developments but, when paired with AI-powered targeting systems, also allow for the near-instantaneous destruction of military assets. Naturally, both the Russians and Ukrainians have turned to counter-drone electronic warfare to negate the impact of unmanned aerial vehicles. But this has ushered in another development—a rapid push for full autonomy. As military scholar T.X. Hammes writes, "Autonomous drones will not have the vulnerable radio link to pilots, nor will they need GPS guidance. Autonomy will also vastly increase the number of drones that can be employed at one time."

Military AI is similarly shaping the war in Gaza. After Hamas militants stunned Israel's forces by neutralizing the high-tech surveillance capabilities of the country's "Iron Wall" (a 40-mile-long physical barrier outfitted with intelligent video cameras, laser-guided sensors, and advanced radar), Israel has reclaimed the technological initiative. The Israel Defense Forces (IDF) have been using an AI targeting platform known as "the Gospel." According to reports, the system is playing a central role in the ongoing invasion, producing "automated recommendations" for identifying and attacking targets. The system was first activated in 2021, during Israel's 11-day war with Hamas. In the current conflict, the IDF estimates that it attacked 15,000 targets in Gaza during the war's first 35 days. (In comparison, Israel struck between 5,000 and 6,000 targets in the 2014 Gaza conflict, which spanned 51 days.)

While the Gospel offers critical military capabilities, the civilian toll is worrisome. One source describes the platform as a "mass assassination factory" that emphasizes the quantity of targets over their quality. There is also the risk that Israel's reliance on AI targeting is leading to "automation bias," in which human operators become predisposed to accept machine-generated recommendations even in circumstances where, on their own, they would have reached different conclusions.

Is international consensus possible? As the wars in Ukraine and Gaza attest, rival militaries are racing ahead to deploy automated tools despite scant consensus about the ethical boundaries for deploying untested technologies on the battlefield. My research shows that leading powers like the United States are committed to leveraging “attritable, autonomous systems in all domains.” In other words, major militaries are rethinking fundamental precepts about how war is fought and leaning on new technologies. These developments are especially concerning in light of numerous unresolved questions: What exactly are the rules when it comes to using lethal autonomous drones or robot machine guns in populated areas? What safeguards are required and who is culpable if civilians are harmed?

As more and more countries become convinced that AI weapons hold the key to the future of warfare, they will be incentivized to pour resources into developing and proliferating these technologies. While it may be impractical to ban lethal autonomous weapons or to restrict AI-enabled tools, that doesn't mean nations cannot take more initiative in shaping how these technologies are used.

The United States has sent mixed messages in this regard. While the Biden administration has released a suite of policies outlining the responsible use of autonomous weapons and calling for countries to implement shared principles of responsibility for AI weapons, the United States has also stonewalled progress in international forums. In an ironic twist, at a recent UN committee meeting on autonomous weapons, the Russian delegation actually endorsed the American position, which argued that putting autonomous weapons under “meaningful human control” was too restrictive.


American policymakers can do better, with three ideas worth considering.

First, the United States should commit to meaningful oversight of the Pentagon's development of autonomous and AI weapons. The White House's new executive order on AI mandates developing a national security memorandum to outline how the government will deal with national security risks posed by the technology. One idea for the memo would be to establish a civilian national security AI board, possibly modeled on the Privacy and Civil Liberties Oversight Board (an organization tasked with ensuring that the federal government balances terrorism prevention efforts with protecting civil liberties). Such an entity could be given oversight responsibilities covering AI applications presumed to be safety- and rights-impacting, as well as tasked with monitoring ongoing AI processes, whether advising the Defense Department's new Generative AI Task Force or counseling the Pentagon about AI products and systems under development with the private sector. A related idea would be for national security agencies to establish standalone AI risk-evaluation teams. These units would oversee integrated evaluation, design, learning, and risk assessment functions, creating operational guidelines and safeguards, testing for risks, directing AI red-teaming activities, and conducting after-action reviews.

Second, the United States and like-minded democracies should push for the creation of an internationally sanctioned independent expert group to monitor the continuing effects of AI tools used in war. For example, if reports are true that “90 percent of the targets hit” in the Gaza conflict are due to AI-generated recommendations, then it behooves policymakers to have a more granular understanding of the risks and benefits of such systems. What are the civilian impacts of these targeting platforms? What parameters are being used and what level of oversight is being exercised over the targeting algorithms? What type of accountability procedures are in place? The purpose of the group would be to spotlight concerning areas of activity and offer recommendations for governments and international organizations about how to redress emerging problems.

Finally, states should agree on a baseline of conduct for how militaries will use emerging technologies in war. There is a Wild West quality to how nations are deploying new technologies to advance their security interests. The risk is that countries, particularly non-democratic regimes, will initiate a race to the bottom, using ever more lethal combinations of tools for destructive effect. Governments could agree on basic parameters—borrowing in part from military AI principles the United States and other countries have proposed—to ensure that the use of AI weapons is consistent with international humanitarian law and that safeguards are in place to mitigate the risk of inadvertent escalation and catastrophic failures.

This is hardly the first time that international leaders have confronted the devastating potential of new technologies. Just as global leaders reached consensus post-World War II to create guardrails of behavior through the Geneva Conventions, international leaders should undertake a similar effort for AI technologies. Liberal democracies can play a much greater role in setting norms and baseline conditions for the deployment of these powerful new technologies of war.



1 Comment

John Alic
3 months ago

Many years ago Admiral Chester Nimitz told a congressional committee, “no weapon that is effective and efficient has ever been outlawed” (full citation on p. 8 of “The US Politico-Military-Industrial Complex,” https://doi.org/10.1093/acrefore/9780190228637.013.1870). In other words, if the armed forces want something they’ll probably get it. And while there are international treaties intended to ban cluster munitions and (antipersonnel) land mines, the US has not signed on, so far as I recall, and of course both are killing people in Ukraine. So good luck with AI.