
The future of technology: Lessons from China—and the US

In regions such as Xinjiang, China, authorities have built what is probably the most intensive government surveillance regime in the world; in some places, there are police checkpoints every 200 meters. But much of the surveillance elsewhere is less blatant and overt—if just as pervasive and powerful. Screen grab from Human Rights Watch video “China’s Algorithms of Repression” https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass

Not a day passes without some alarming headline about the Chinese government’s use of technology, whether it is about Huawei building 5G infrastructure around the world or about the European Union and the US banning TikTok from government phones.

In the United States, public discourse on technology and China is often framed in simplistic, black-and-white terms as a battle between democracy and authoritarianism. According to this logic, a US victory in this technological race is not just self-interested—it is vital to protecting rights and freedoms everywhere. But unfortunately, reality is more complicated. In fact, we would argue that this framing is not just flawed and oversimplified; it is also dangerous for human rights everywhere.

Before we elaborate on this argument, let us state the obvious: There is no doubt that the Chinese government has a terrible human rights record. Together with partner groups and investigative journalists, Human Rights Watch has documented the central role that technology plays in maintaining the power of the Chinese Communist Party—be it through invasive mass surveillance, policing technology, or tight censorship and control of domestic social media companies (Human Rights Watch, n.d.). We have also documented extensively how state surveillance is particularly suffocating in Xinjiang, where the authorities use mobile apps, biometric collection, artificial intelligence, and big data to pick out anyone they deem vaguely “untrustworthy,” arbitrarily detain them, and subject them to long years in prison, in an effort to control the region’s 13 million Uyghurs and other Turkic Muslims (Wang 2019).

But tech-facilitated surveillance in China is often framed in ways that leave out the important role that Western policing doctrines and technology companies have played—and continue to play—in inspiring their Chinese counterparts.

 

Influences

The very idea that intelligence should be at the core of “all strategic and operational decision-making” in policing, for instance, was pioneered in the United Kingdom and in fact inspired China’s shift toward more intrusive “intelligence-led policing” and tech-enabled surveillance (Schwarck 2018). The abusive US “War on Terror” has provided a framework on which the Chinese government hangs its repression of Uyghurs, which it increasingly portrays as part of a “global war on terror” (Taylor and Epstein 2022).

Chinese surveillance and policing technology companies are often quite eager to pay homage to US practices: The Chinese big data company Landasoft (Grauer 2021), a Xinjiang police supplier, openly states (Yan and Wang 2015) that it wants to become the “Palantir of China” (St. Vincent 2018)—referring to the controversial US company that is providing intrusive profiling and analytic capabilities to law enforcement and intelligence agencies globally (Brigida 2023).

Similar surveillance technologies are now being used around the world. In many countries, we have seen an alarming proliferation of artificial intelligence (“AI”) to monitor protests, make so-called “predictions” about crimes that will supposedly occur, and profile people from groups most affected by oppression in ways that gravely threaten all of our human rights. Both CCTV cameras and, increasingly, face-recognition technology are widely used by law enforcement globally.

The idea that a US-China technological competition is between democracy and authoritarianism also understates—and risks glossing over—some of the more severe and systemic rights abuses that technology is facilitating in the United States.

After the scope and invasiveness of some of the US government’s surveillance systems were exposed by former National Security Agency contractor Edward Snowden in 2013, it took seven years before core bulk data collection powers were allowed to expire, and other mass surveillance programs continue to operate largely unchecked (Wong 2014). Domestically, US mass surveillance and bulk data collection have undermined media freedom, the public’s right to know, the right to legal counsel, and the ability of people to hold the government to account (Human Rights Watch 2015).

Meanwhile, billions of people around the world have come to rely on the services of Big Tech—Amazon, Apple, Facebook, and Google—to exercise their basic human rights. But for many people, both within and outside the United States, the concentration of power in these companies has caused considerable harm, from the spread of hate speech and election-related misinformation to the promotion of ethnic cleansing.

 

The role of surveillance capitalism

The source of many human rights concerns associated with Facebook’s and Google’s services is their surveillance-based business model, which routinely undermines people’s rights by monetizing their data and turning it into ad revenue. Meanwhile, their algorithms are engineered to maximize “user engagement”—clicks, likes, and shares. Studies (Vosoughi, Roy, and Aral 2018) have shown that divisive and sensationalist content is more likely to drive engagement (Horowitz and Seetharaman 2020).


While Big Tech holds a disproportionate amount of data, its commercial exploitation of personal data leaves everyone—from activists to politicians—exposed to security threats, including blackmail. In 2017, for instance, Amnesty International showed that it was possible to purchase data on 1,845,071 people listed as Muslim in the United States for $138,380.

At the same time, the rapid digitization of core government functions, from policing to providing essential public services, means that it is becoming more challenging for independent regulators, media, and civil society to document and challenge rights abuses. Through an accumulation of factors—including governments not disclosing their use of surveillance and automation technologies, and the fact that these systems are often opaque and too complex for people to understand—there is often no access to remedy or redress for any injustice or abuse. Rights violations are often invisible to everyone, even the people targeted, excluded, or discriminated against, making abuses difficult to challenge.

What is different between China and the United States is not the way in which specific technologies are used and misused per se, but the radically different legal and political contexts in which they operate.

In China, the Chinese Communist Party holds all the cards: It controls the laws, the courts, the police, the prosecutors, and the media. Its police and policing surveillance systems—given their role as a major pillar to maintain the Party’s grip on power—have broad and nearly limitless powers to surveil and control the population. All private companies are subject to the Chinese government’s pressure, censorship, and surveillance. Given the lack of the rule of law and the lack of a free press in China, it is exceptionally difficult to obtain information from these companies or hold them accountable for abuses.

The United States, on the other hand, has an extensive network of law enforcement institutions and government agencies that have themselves committed very serious, documented human rights abuses. But they are also ostensibly subject to various levels of oversight, including through courts and elected representatives, public pressure from a (currently) free press, public opinion, and an active and engaged civil society. US agencies’ surveillance powers, unlike China’s, are also more fragmented and less centralized, due in part to some protections for privacy rights.

But democracy and the safeguards that protect people’s liberties are at risk in the United States. For example, robust civic engagement and the right to protest are increasingly criminalized or curtailed by laws that expansively define “rioting” and increase criminal penalties (Pedneault 2021). There is also a long history of problematic surveillance, from an extensive FBI program of surveillance and harassment against Martin Luther King Jr. to the New York City Police Department Intelligence Division’s religious profiling and surveillance of people perceived to be Muslims in New York City and beyond.

The decline in democratic protections and increasingly widespread surveillance powers on the part of US federal government, state government, and local law enforcement can form a vicious cycle. Over time, this cycle can accelerate the process of chipping away at both the principles and mechanisms that protect democracies from sliding into authoritarianism.

The reality is that people everywhere—including in China and the United States—are living in an increasingly digital world where surveillance has become ubiquitous, and accountability for rights abuses more challenging. It is misleading to characterize the future of technology as one hinged upon a battle between countries that are democratic versus those that are authoritarian. Rather, it is a battle to push back against the way in which technology is violating rights everywhere. Safeguarding and vigorously defending human rights in the development of technology is central to such efforts.

While these problems are complex, some solutions are readily available and long overdue. If the United States genuinely wants to protect human rights at home and abroad, then it needs to act much more boldly when it comes to technology.

 

Possible solutions

There is not one panacea for this human rights situation, but there are some important steps to take.

First, the United States should address the unchecked corporate power and monopolistic aspects of dominant tech companies—and make it easier for people and governments to hold platforms accountable, while creating the conditions for alternative models to emerge.

Second, in an increasingly digital world, the United States should adopt stronger rules that govern how personal data can and cannot be used, in order to protect the right to privacy. The United States needs a strong federal data protection law that regulates the collection, analysis, and sharing of personal data as a matter of urgency.


Corporate interests have often trumped the rights of people in the United States and have made these types of changes difficult. A large part of the US tech industry has pushed back hard against any regulation that affects its operations—such as data protection laws—or has tried to water such laws down. Its prime argument is that strong legislation would stifle innovation.

Ironically, the Chinese government may offer some guidance here—at least when it comes to finding the policy tools to rein in companies.

Wrestling with the power of its own tech companies, the Chinese government has passed a slew of legislation since 2016 that regulates data, including a Personal Information Protection Law that comprehensively regulates private companies’—but not the police’s—data collection, use, and storage. Beijing’s publicly stated reasoning is that a tech industry bound by a set of laws is important to promote user trust and therefore the “healthy development” of a data-driven economy (NPC Observer, n.d.).

It appears that the Chinese government believes that regulating private companies’ handling of personal data encourages—not stifles—innovation. The Personal Information Protection Law is modeled on the European Union’s pioneering comprehensive privacy law, the General Data Protection Regulation. (Though given China’s lack of the rule of law, the implementation of the Personal Information Protection Law is far more questionable.)

Third, the United States should also set regulatory limits on deployments of artificial intelligence that unduly restrict human rights, and ensure that AI is used only in ways that are safe, legal, and nondiscriminatory. This should include banning law enforcement from using facial recognition, because the technology constitutes an unprecedented form of mass surveillance, allowing governments to track and monitor people’s identities and habits (NIST 2019). Biases embedded in facial recognition algorithms also raise concerns that they fuel discriminatory policing practices.

Fourth, the United States should end rights abuses related to national security, and offer remedies for decades of abuse. Congress should start by reforming its national security surveillance laws, such as by repealing Section 702 of the Foreign Intelligence Surveillance Act to end bulk data collection (Human Rights Watch 2017).

Finally, the United States should go further than the minimum of just reining in the worst abuses. The US government should devote resources to experimenting with bolder proposals, such as technological systems that can have a positive impact on democracy. For example, the American nonprofit New_ Public has urged governments to develop “digital public spaces”—publicly owned online spaces designed to maximize public goods, where people can “talk, share, and relate without those relationships being distorted and shaped by profit-seeking incentive structures” (New_ Public, n.d.).

 

What the future holds

Other governments have shown how creative engagement with technology can have positive impacts on democratic values. The government of Taiwan has engaged with a civic hacker collective known as “g0v” (pronounced “gov zero”) to incorporate participatory decision-making processes into its governance. (In a promising sign, g0v collective member Audrey Tang later became the country’s digital minister.) And jurisdictions within the United States don’t have to wait for the federal government to lead, as experiments elsewhere show: Barcelona’s Decidim, for example, is a participatory democracy platform run at the municipal level.

The bottom line is: As the United States rushes to develop technologies to compete with China, it must impose human rights guardrails to guide these developments, or else it risks losing by winning. This is true for all governments around the world racing to deploy and regulate technologies and tech companies.

We have a very short window of time to ensure that human rights are front and center in emerging law and policy. Regulation and participatory innovation are key factors in ensuring that the technological infrastructure of our future does not come with built-in rights abuses that accelerate a retreat from democratic safeguards and democracy itself. The stakes could not be higher.

 

Acknowledgments

The authors wish to thank Tamir Israel, senior researcher at Human Rights Watch on technology and human rights, for contributing to the writing of this piece.

References

Brigida, A.C. 2023. “How Surveillance Tech Helped Protect Power – and the Drug Trade – in Honduras.” Coda Story. January 31. https://www.codastory.com/authoritarian-tech/honduras-surveillance-drug-trade/.

Grauer, Y. 2021. “Millions of Leaked Police Files Detail Suffocating Surveillance of China’s Uyghur Minority.” The Intercept. January 29. https://theintercept.com/2021/01/29/china-uyghur-muslim-surveillance-police/.

Horowitz, J., and D. Seetharaman. 2020. “Facebook Executives Shut Down Efforts to Make the Site Less Divisive.” The Wall Street Journal. May 26. https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499.

Human Rights Watch. 2017. “Fact Sheet: Impact of Warrantless Section 702 Surveillance on People in the United States.” Human Rights Watch. March 1. https://www.hrw.org/news/2017/03/01/fact-sheet-impact-warrantless-section-702-surveillance-people-united-states.

Human Rights Watch. n.d. “Mass Surveillance in China.” Human Rights Watch. Accessed March 27, 2023. https://www.hrw.org/tag/mass-surveillance-china.

Human Rights Watch. 2015. “US: Reject Mass Privacy Violations.” Human Rights Watch. April 23. https://www.hrw.org/news/2015/04/23/us-reject-mass-privacy-violations.

New_ Public. n.d. “For Better Digital Public Spaces.” New_ Public. Accessed March 27, 2023. https://newpublic.org/.

NIST. 2019. “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software.” NIST. December 19. https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software.

NPC Observer. n.d. “2020:10:21 Public Consultation Explanation.” NPC Observer. Accessed March 27, 2023. https://npcobserver.files.wordpress.com/2020/10/20201021-public-consultation-explanations.pdf.

Pedneault, J. 2021. “US States Take Aim at Protesters’ Rights.” Human Rights Watch. February 16. https://www.hrw.org/news/2021/02/16/us-states-take-aim-protesters-rights.

Schwarck, E. 2018. “Intelligence and Informatization: The Rise of the Ministry of Public Security in Intelligence Work in China.” The China Journal 80 (July): 1–23. https://doi.org/10.1086/697089.

St. Vincent, S. 2018. “SBH458 33RD18050916040 – Human Rights Watch.” Human Rights Watch, May 3, 2018. https://www.hrw.org/sites/default/files/supporting_resources/airforce_hve_foia_hrw_redacted.pdf.

Taylor, L., and E. Epstein. 2022. “Legacy of the ‘Dark Side.’” Human Rights Watch. January 9. https://www.hrw.org/news/2022/01/09/legacy-dark-side.

Vosoughi, S., D. Roy, and S. Aral. 2018. “The Spread of True and False News Online.” Science 359, no. 6380 (March 9): 1146–51. https://doi.org/10.1126/science.aap9559.

Wang, M. 2019. “China’s Algorithms of Repression.” Human Rights Watch. May 1. https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass.

Wong, C. 2014. “A Clear-Eyed Look at Mass Surveillance.” Human Rights Watch. July 25. https://www.hrw.org/news/2014/07/25/clear-eyed-look-mass-surveillance.

Yan, J., and M. Wang. 2015. “蓝灯科技:要做 ‘中国版Palantir’” [“Landasoft: To Become the ‘Chinese Version of Palantir’”]. Yicai Daily. April 24. https://web.archive.org/web/20230308090334/https:/finance.sina.cn/2015-04-24/detail-icczmvup0222227.d.html?from=wap.
