
How AI surveillance threatens democracy everywhere

By Abi Olvera | June 7, 2024

Illustration of a big eye watching over protesters. By Thomas Gaulkin / Vectorstock

In 2018, Singapore planned to embed facial recognition cameras in lampposts for nationwide monitoring. But rapid advances in battery technology and 5G networks enabled a pivot to an even more powerful and nimble surveillance system—mobile sensors and cameras capable of observing citizens and catching them in the act of littering, with artificial intelligence handling the data analysis. Around the same time, Malaysia partnered with China’s Yitu Technology to provide police with an AI-powered facial recognition system linked to a central database for real-time identification of citizens from body camera footage.

Around the world, a new breed of digital eyes is keeping watch over citizens. Although mass surveillance isn’t new, AI-powered systems are giving governments more efficient ways of keeping tabs on the public. According to the 2019 AI Global Surveillance Index, 56 out of 176 countries now use artificial intelligence in some capacity to keep cities “safe.” Among other things, frail non-democratic governments can use AI-enabled monitoring to detect and track individuals and deter civil disobedience before it begins, thereby bolstering their authority.

These systems offer cash-strapped autocracies and weak democracies the deterrent power of a police or military patrol without needing to pay for, or manage, a patrol force, explains Martin Beraja, an associate professor of economics at MIT and co-author of an analysis of trends in AI-powered autocratic surveillance. The decoupling of surveillance from costly police forces also means autocracies “may end up looking less violent because they have better technology for chilling unrest before it happens,” he says.

The spread of AI-powered surveillance systems around the world has already empowered governments seeking greater control, handing them tools that entrench non-democratic rule. To counter the decay of democracies that AI-powered surveillance causes, the international community will need to establish ethical frameworks and define clear limits and controls for these new, efficient instruments of control and oppression.


Digital scarecrow. A recent study in The Quarterly Journal of Economics suggests that fewer people protest when public safety agencies acquire AI surveillance software to complement their cameras. The mere existence of such systems, it seems, suppresses unrest. This result could be at least partly attributable to an unfortunate reality: Public security agencies often misrepresent the systems as being more powerful than they are, according to Steven Feldstein, a senior fellow in the Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace who has interviewed personnel at public security agencies worldwide.

The relationship between the acquisition of AI-driven surveillance systems and domestic unrest can be seen as a chicken-and-egg situation. Countries are more likely to buy surveillance AI after periods of domestic unrest. Unsurprisingly, countries also appear more likely to import such software when their democratic institutions or civil liberties have eroded. But statistical data on AI surveillance alone cannot establish whether procurement accelerates the erosion of democracy or whether eroding democracy made leaders want the AI systems in the first place.


Mature democracies did not experience democratic erosion when importing surveillance AI software, even from China, a problematic player in this arena, according to Beraja’s data. But weak democracies exhibited backsliding—a dismantling of democratic institutions and movement toward autocracy—regardless of whether the surveillance technology originated from China or the United States, which is more stringent about its exports.

China, the predominant provider of AI-powered surveillance systems, exhibits a significant bias in exporting these technologies to autocratic regimes—a trend not observed with other frontier technologies like wind turbines. The United States also exports surveillance AI to less-free nations, but, Beraja notes, it lacks China’s systemic tilt toward autocracies.

China’s export agenda. There are two main drivers for China’s exports of surveillance tech: autocracies’ higher demand for control tools and their stronger trade links with Beijing. Moreover, China-based firms have extensive experience tailoring products for repressive purposes, and autocratic governments may trust China more with their data than the United States. While the United States may restrict such sales, China possibly perceives these exports as beneficial, aligning with its 2015 Digital Silk Road initiative aimed at expanding global digital infrastructure.

More than a third of humanity already lives under autocratic rule, and the erosion of fragile democracies that AI surveillance technology can abet is concerning. Gradual democratic backsliding is one of the most common routes to authoritarianism. The Chinese digital communications technology conglomerate Huawei installs surveillance software via “safe city” agreements around the world. Between 2009 and 2018, more than 70 percent of those agreements involved countries rated, on average, “partly free” or “not free” by Freedom House, a US-based non-profit advocacy group. Countries rated “partly free” or “not free” score lower on political rights and civil liberties than “free” countries.

Historically, integrating countries into the global economy has been widely seen as nudging them toward democracy. China’s surveillance tech exports appear to work against that theory. Compounding the global risks those exports pose, Chinese research is focused on accelerating AI capabilities for surveillance tasks. These include crowd analysis, re-identification—which allows AI to track individuals across different camera views—and face spoof detection, which helps distinguish real faces from fake ones (such as photos or masks). Research on these capabilities grew more than 30 percent from 2015 to 2019, according to research by Georgetown University’s Center for Security and Emerging Technology. China accounted for 56 percent of person re-identification research alone. Left unchecked, this research trajectory could deepen the societal harms of surveillance technology. “The incentives driving research guide where you get innovation,” Feldstein said.
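To make re-identification concrete, consider a minimal sketch, in Python, of the matching step such systems depend on. It assumes an upstream neural network has already converted each camera sighting into a fixed-length numerical “embedding”; every name and data point below is hypothetical, and production systems use learned models far more elaborate than this.

```python
# Hypothetical sketch of the matching step in person re-identification.
# Assumes an upstream model has already mapped each camera sighting to a
# fixed-length embedding vector; all names and data here are invented.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two embeddings are (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def reidentify(query: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Return the gallery identity whose embedding best matches the query,
    or None if no candidate clears the similarity threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id


# Toy demo: sightings from "camera A" form the gallery; a noisy
# re-sighting from "camera B" is matched back to the same person.
rng = np.random.default_rng(seed=0)
gallery = {"person_1": rng.normal(size=128), "person_2": rng.normal(size=128)}
sighting = gallery["person_1"] + rng.normal(scale=0.1, size=128)
print(reidentify(sighting, gallery))  # prints: person_1
```

The design choice at the core of the sketch is what makes the capability scale so easily: once sightings are reduced to vectors, tracking a person across cameras becomes a cheap nearest-neighbor search.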

But do these AI-powered surveillance systems truly leverage the advanced methods that companies claim? Feldstein noted that bureaucratic silos often hinder data sharing across security forces, obscuring the actual performance of supposedly sophisticated AI capabilities. “We need realistic, empirical verification of capabilities,” he said, emphasizing that the rapid pace of innovation makes statistical assessments difficult.


Even though he is skeptical of the current abilities of these AI-powered surveillance systems, Feldstein warned that holistic “autocratic learning”—in which national authorities develop and share methods of suppressing citizen discontent—is accelerating. “Autocracies are cooperating intensively, translating geopolitical trends into common strategies,” Feldstein said.

Ultimately, countering authoritarian AI and surveillance trends demands a multi-pronged response. Democracies should establish ethical frameworks, mandate transparency, limit how mass surveillance data is used, enshrine privacy protections, and impose clear redlines on government use of AI for social control. Export controls and investment screenings, which would scrutinize and potentially restrict investments in entities or countries engaged in rights abuses, could cut off rights-violating regimes, though Beraja explained that any such initiatives must impose real costs on repressive leaders. Merely symbolic sanctions or controls, for example, are ineffective against regimes that do not rely heavily on trade with the nations attempting to enforce them.

Crucially, policymakers should factor in societal impact when setting international standards on artificial intelligence technology—much like accounting for the negative externalities of unethically sourced and polluting products. Otherwise, the loss of civil liberties won’t inform discussions on regulating AI exports.

Global efforts could also prioritize developing AI technologies that uphold democratic values like privacy and human rights, rather than enabling oppressive surveillance and control. As Oxford University researcher Anders Sandberg argues, governments should pursue a “differential technological development” approach—which would entail “speed[ing] up technology development of privacy enhancing technologies, or in this case, technology that protects against surveillance and control.” Examples of safety-enhancing technologies include anonymous browsing or communication enablers, advanced encryption methods for civil society use, and anti-surveillance tools like facial recognition blockers and anti-tracking software. Unlike other efforts, safety-enhancing technologies could be developed even without robust multilateral coordination.
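As one small, concrete illustration of what “advanced encryption methods for civil society use” can mean at the lowest level, the sketch below encrypts and authenticates a message with the Fernet recipe from Python’s open-source cryptography package. It is a toy demonstration of the building block, not a design for a complete secure-messaging tool, and the message text is invented.

```python
# Minimal demonstration of authenticated symmetric encryption using the
# open-source "cryptography" package (pip install cryptography). This is
# an illustrative building block, not a full secure-messaging design.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret; must be exchanged out of band
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")  # ciphertext plus auth tag
print(cipher.decrypt(token))  # b'meet at the usual place'

# Tampering with even one byte of the token causes decrypt() to raise
# cryptography.fernet.InvalidToken rather than return altered plaintext.
```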

The Organisation for Economic Co-operation and Development (OECD) is an international organization bringing together 38 member countries to develop policies for sustainable economic growth. In an April 2024 report, it called for “trustworthy technology development guided by democratic principles” like equality under the law, public accountability, and advancing the greater good. Innovation should be firmly anchored in liberal democratic values from the outset.

The Stasi, East Germany’s notorious secret police, operated from 1950 until 1990, without AI-powered lampposts to aid its surveillance. But the agency’s human monitoring was pervasive enough to prevent collective action or dissent from taking root for decades. Upholding democratic principles in an age of pervasive AI-powered surveillance will require vigilance, norm-setting, and a proactive defense of civil liberties. The future of artificial intelligence may define the future of democracy itself.

The views expressed in this piece are the author’s and do not represent those of the US government.


