Artificial intelligence beyond the superpowers

By Perry World House | August 16, 2018

Much of the debate over how artificial intelligence (AI) will affect geopolitics focuses on the emerging arms race between Washington and Beijing, as well as investments by major military powers like Russia. And to be sure, breakthroughs are happening at a rapid pace in the United States and China. But while an arms race between superpowers is riveting, AI development outside of the major powers, even where advances are less pronounced, could also have a profound impact on our world. The way smaller countries choose to use and invest in AI will affect their own power and status in the international system.

Middle powers—countries like Australia, France, Singapore, and South Korea—are generally prosperous and technologically advanced, with small-to-medium-sized populations. In the language of economics, they usually possess more capital than labor. Their domestic investments in AI have the potential to, at a minimum, enhance their economic positions as global demand grows for technologies enabled by machine learning, such as rapid image recognition or self-driving vehicles. But since the underlying science of AI is dual-use—applicable to both peaceful and military purposes—these investments could also have consequences for a country’s defense capabilities.

For example, a sensing algorithm that allows a drone to detect obstacles could be designed for package delivery, but modified to help with battlefield surveillance. An algorithm that detects anomalies from large data sets could help both commercial airlines and militaries schedule maintenance before critical plane parts fail. Similarly, robotic swarming principles that enable machines to coordinate on a specific task could allow for advanced nanorobotic medical procedures as well as combat maneuvers. Military applications will have special requirements, of course, including tough protections against hacking and stronger encryption. Yet because the potential for dual-use application exists at the applied science level, middle powers with strong economies but limited defense budgets could benefit militarily from AI investments in the commercial sector.
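To make the dual-use point concrete, here is a minimal, purely illustrative sketch of the anomaly-detection idea, written in Python with the open-source scikit-learn library. The simulated “engine vibration” readings, the contamination rate, and the maintenance scenario are all invented for illustration; no particular commercial or military system is implied.

```python
# Illustrative sketch: unsupervised anomaly detection on simulated sensor
# readings, the kind of technique that supports predictive maintenance.
# All numbers here are made up for demonstration purposes.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated sensor log: mostly normal vibration levels, plus a few outliers
# standing in for a part drifting toward failure.
normal_readings = rng.normal(loc=1.0, scale=0.1, size=(500, 1))
faulty_readings = rng.normal(loc=1.8, scale=0.3, size=(5, 1))
readings = np.vstack([normal_readings, faulty_readings])

# Fit an unsupervised anomaly detector; fit_predict marks outliers with -1.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(readings)

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} readings flagged for maintenance review")
```

The same few lines would run whether the readings came from an airliner or a military aircraft; what changes is the data, the operating environment, and the safeguards wrapped around the model.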

Middle-power investments and policy choices regarding AI will determine how all this plays out. Currently, many of these medium-sized countries are investing in AI applications to bolster their economies and improve their ability to provide for their own security. While AI will not transform middle powers into military superpowers, it could help them achieve existing security goals. Middle powers also have an important role to play in shaping global norms regarding how countries and people around the world think about the appropriateness of using AI for military purposes.

The other governments investing in AI. Currently, many middle powers are leveraging their private sectors to advance AI capabilities. “AI,” in this context, means the use of computing power to conduct activities that previously required human intelligence. More specifically, most countries are focusing on narrow applications of AI, such as using algorithms to conduct discrete tasks, rather than pursuing artificial general intelligence. (Advances in artificial general intelligence will likely require computing power well beyond the capabilities of most companies and states.) Even though it would be difficult to match the degree of invention taking place in the United States and China, given the massive investment necessary to generate the computational power for the most complex algorithms, many countries believe that incremental advances in narrowly focused AI, based on publicly available information, could prove very useful.

In France, for example, the government is embarking on a broad-ranging new effort to cultivate AI. It is investing $1.85 billion (USD) in the technology, and also aggregating data sets for developers to use. Many AI technologies use algorithms that must “train” against large amounts of information in order to learn and become intelligent, which is why compiling such data sets is particularly important. In addition to these efforts, France is attracting private-sector investment in research centers across the country, and other nations are following closely behind. In the United Kingdom, the government announced a public-private partnership that will infuse $1.4 billion into AI-related development. In Australia, the government recently released a roadmap for developing AI.

Even small but economically and technologically advanced states, such as Singapore, are articulating national strategies to develop AI. These countries, which could never hope to compete with the total research and development spending of large countries like China, are investing in AI directly and attracting investment from the private sector. “AI Singapore” is a $110 million effort to ignite growth in the field. While that level of government funding is modest compared to some national and corporate investments, Singapore uses its business-friendly investment climate and established research clusters to attract companies that want to further their own R&D efforts. One such company is the Chinese tech and e-commerce giant Alibaba, which recently set up its first research center outside of China in Singapore.

In turn, these countries will apply AI to their own security needs. For example, as a center of global trade and the world’s second-busiest port, Singapore will seek advances in AI that boost port security and efficiency. With a population of around 5.6 million, Singapore might also be more likely than a country with a large labor pool to use AI to substitute for some military occupational specialties, for example in logistics. In Israel, a small country long vaunted for its well-developed high-tech sector and its ability to attract private investment, the military already uses predictive analytics to aid decision-making. The Israel Defense Forces also employ software that predicts rocket launches from Gaza, and in 2016 they began deploying an automated vehicle to patrol the border.

Middle powers shape global norms. In Europe, some governments have tied their AI investments to broader moral concerns. For example, France’s declared national strategy on AI says that the technology should be developed with respect for data privacy and transparency. For France, it is important not just to develop AI but to shape the broader ethics surrounding the technology.

Other European nations are taking similar steps. In Great Britain, a 2017 parliamentary committee report called for the nation to “lead the way on ethical AI.” The report specifically focused on data rights, privacy, and using AI as a force for “common good and the benefit of humanity.” In Brussels, European Union members furthered this vision, signing the “Declaration of Cooperation on Artificial Intelligence” in April 2018. This agreement is designed to promote European competitiveness on AI and facilitate collaboration on “dealing with social, economic, ethical, and legal questions.” These governments believe it is impossible to influence the global debate on AI unless they also participate in its development.

By shaping norms, these nations can also influence some military applications of AI. Middle powers have often been mediators in international discussions about military technologies. Countries such as France, Norway, and Canada can play a critical role in shaping the conversation about military AI, due to their significant roles in international institutions like the Convention on Certain Conventional Weapons, a UN agreement under which states parties currently hold yearly discussions about lethal autonomous weapon systems.

Private sector progress. Beyond government and military spending, another major factor will influence how AI affects the future global order: the actions of large, profit-driven multinational firms whose investment far outstrips that of most governments. The McKinsey Global Institute estimates that the world’s biggest tech firms—like Apple and Google—spent between $20 billion and $30 billion on AI in 2016 alone. These companies also possess the rich data ecosystems and human talent required for AI breakthroughs. Furthermore, these firms have the power to transfer knowledge and know-how by placing research centers in particular locations, making the private sector a potential kingmaker in deciding which countries are the winners and losers of the AI revolution.

Because of the technology’s dual-use potential, private sector behavior will have an impact on international security, but how great an impact is an open question. It depends on the transferability of AI breakthroughs. There is no such thing as a seamless translation of technology. Machine learning algorithms learn to identify patterns and make predictions from data sets without being explicitly pre-programmed, but data always comes from specific contexts. So, for instance, a self-driving algorithm that works on the US road system might not suit the needs of a battlefield, which may be strewn with variables such as broken or nonexistent roads, improvised explosive devices, and enemy fighters.
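The point about context can be shown with a toy experiment: a simple classifier trained on data from one setting loses accuracy when the same kind of data is drawn from a shifted setting. The sketch below, again in Python with scikit-learn and entirely synthetic numbers, is meant only to illustrate that distribution shift, not any real driving or battlefield data.

```python
# Illustrative sketch of distribution shift: a model trained in one
# environment degrades when the input distribution changes.
# All data here is synthetic and purely for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, shift):
    """Two-class data; `shift` moves both classes, mimicking a new environment."""
    class0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    class1 = rng.normal(loc=2.0 - shift, scale=1.0, size=(n, 2))
    X = np.vstack([class0, class1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train under "commercial road" conditions, then test in a shifted setting.
X_train, y_train = make_data(500, shift=0.0)
X_shifted, y_shifted = make_data(500, shift=1.5)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy in training conditions:", model.score(X_train, y_train))
print("accuracy after distribution shift:", model.score(X_shifted, y_shifted))
```

With these made-up numbers, accuracy collapses in the shifted setting, which is the algorithmic face of the translation problem described above.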

Even if an AI-related advance has only a commercial benefit, though, it will give the host country an economic boost. If it is transferable to military use, the country will further benefit. Either way, government investment in narrow AI plus the ability to attract private investment in the sector could reduce smaller nations’ dependence on larger powers, enabling them to pursue their national interests more effectively. As nations like the United States and China continue to outspend the rest of the world on defense, this area of technology suggests a path for middle powers to influence the future economic and security landscape of the globe.

This column is by Itai Barsade (@ItaiBarsade) and Michael C. Horowitz (@mchorowitz). Barsade is a research fellow at the University of Pennsylvania’s Perry World House, where Horowitz is a professor of political science and associate director. This research was supported in whole or in part by the Air Force Office of Scientific Research and Minerva Research Initiative under grant #FA9550-18-1-0194. The views and conclusions contained in this report are those of the authors and should not be attributed to the US Air Force or Department of Defense.

