That’s the question Mary Cummings of Duke University puts forward in a new paper for the think tank Chatham House. Citing R&D spending in recent years, Cummings argues that companies like Google and Facebook could outpace militaries when it comes to the science of artificial intelligence, which in turn could lead to potentially dangerous technologies going into use before they have been properly tested.
The matter will probably come down to who will be the first to make automated systems truly autonomous—or at least more autonomous than they are now.
As Cummings explains, there’s a range of capabilities when it comes to processing information and making decisions. At one end there’s the pilot reading instrument displays and making quick adjustments at the controls—behavior that eventually becomes automatic. Machines do this well, due to the simplicity of the information coming in and the certainty of the response required. At the other end there’s the veteran pilot facing a complex crisis with no ready-made solution. Before taking action, this pilot must swiftly play out complicated scenarios filled with uncertainties. At the moment only humans can pull this off. The military drones currently in use can navigate on their own, and sometimes land, by reading maps and processing GPS data, but that’s about it. They are not yet capable of, for example, safely making complex, real-time combat decisions based on visual input and moral precepts.
Enter driverless cars. They’re not the only reason private firms have gotten in the AI game, but they’re a big part of it. Driving is more complicated than flying—at least for a computer—and the auto industry has an interest in perfecting this technology, as do the companies making the software. When it comes to research and development, these sectors spend much more than aerospace and defense. That means they’re more likely to draw the expertise and make the breakthroughs. Meanwhile, US military research on autonomous systems must compete against traditional projects and an institutional preference for restricting unmanned vehicles to a supporting role.
It’s unclear where all this will lead, but Cummings worries about two outcomes in particular—governments coming to rely on private industry for their AI know-how, and autonomous systems going into the field without the safeguards and testing that national militaries usually insist upon. Whether these possibilities will encourage Washington to take artificial intelligence more seriously is anybody’s guess.