The authoritative guide to ensuring science and technology make life on Earth better, not worse.

It’s time to address facial recognition, the most troubling law enforcement AI tool

By Trenton W. Ford | November 10, 2021

A police officer holding a video camera at a protest. Credit: Mobilus In Mobili. CC BY-SA 4.0.

Since a Minneapolis police officer killed George Floyd in May 2020 and re-ignited massive Black Lives Matter protests, communities across the country have been re-thinking law enforcement, from granular scrutiny of the ways that police use force to overarching criticism of racial bias in policing. Minneapolis, where Floyd was killed, even held a vote on whether to do away with the police department and replace it with a social-service-oriented agency. Amid the push for reform, one trend in policing is in urgent need of overhaul: police departments’ expanding use of artificial intelligence, namely facial recognition, to aid crime fighting.

Police agencies are increasingly deploying advanced artificial intelligence-driven identification to fight crime. AI algorithms are now employed to identify individuals by face, fingerprint, and DNA, with varying degrees of success. Among these AI technologies, facial recognition technology is arguably the most troubling. Studies have documented the racial and gender biases of these systems, and unlike with fingerprint or DNA-analysis algorithms, police are using facial recognition in the field to make on-the-spot decisions. It’s already having a corrosive impact on society.

Take Robert Williams, a Black man living in Detroit, Michigan, who was called by the Detroit police and told to turn himself in on a shoplifting charge. “I assumed it was a prank call,” he told a congressional subcommittee in July, but police later showed up at his house and arrested him in front of his wife and children. They held him for 30 hours. The evidence? A surveillance photo depicting someone else. “I held that piece of paper up to my face and said, ‘I hope you don’t think all Black people look alike,’” Williams said, according to The Detroit News. Around the country, 18,000 police departments are using this generally unregulated technology, the committee chair, Rep. Sheila Jackson Lee, said. “To add untested and unvetted facial recognition technology to our policing would only serve to exacerbate the systemic issues still plaguing our criminal justice system,” she said.

Williams is free; the charges against him were dropped—so were the charges against Michael Oliver and Nijeer Parks, two other Black men arrested on the basis of faulty facial recognition matches. But Williams’s tense encounter with the police could have ended badly, as such moments have for others. “As any other Black man would be, I had to consider what could happen if I asked too many questions or displayed my anger openly—even though I knew I had done nothing wrong,” Williams wrote in The Washington Post. In an era of racially biased law enforcement—police killed more than 1,000 people in the year following Floyd’s murder, a disproportionate number of them Black—police continue to turn to largely unregulated facial recognition technology—software known to be significantly less accurate when it comes to identifying Black people and other minorities—to make decisions with potentially lethal consequences.

How facial recognition works. To understand the risks of police use of facial recognition, it’s helpful to understand how the technology works. Conceptually, these systems can be broken down into three main parts: the known-faces database, the algorithm, and the query image.

Known-face images can come from drivers’ license pictures, passport photos, mugshots, stills from CCTV cameras, social media images, and many other places.

Facial recognition algorithms are packaged into software by vendors, but the algorithms themselves can come from anywhere. Most often the underlying algorithms are created by researchers within universities, governmental organizations, and companies. But just about any entity can become a vendor by licensing, buying, copying, or developing a facial recognition algorithm and packaging it for easy use.

Query images are often captured by camera systems built into police cruisers, by security cameras, and by police body-worn cameras. Image quality depends heavily on the capture system used, lighting conditions, distance, and even the pose of the face being captured. These images are then matched against faces in the known-faces database.
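Conceptually, the matching step comes down to comparing numerical representations, or embeddings, of faces. The Python sketch below illustrates that structure under simplified assumptions; it is not any vendor's actual product. The embed_face function is a hypothetical placeholder for a proprietary embedding model, and the 0.6 similarity threshold is an arbitrary assumed value.

```python
import numpy as np

# Hypothetical stand-in for a vendor's proprietary face-embedding model:
# a real one maps an aligned face image to a fixed-length feature vector.
def embed_face(image: np.ndarray) -> np.ndarray:
    raise NotImplementedError("swap in a real embedding model here")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_query(query_image, known_faces, threshold=0.6):
    """Compare one query image against a known-faces database.

    known_faces maps an identity label (e.g., a mugshot ID) to a
    precomputed embedding. Returns (identity, score) if the best score
    clears the threshold, otherwise (None, best score).
    """
    query_embedding = embed_face(query_image)
    best_id, best_score = None, -1.0
    for identity, embedding in known_faces.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score >= threshold:
        return best_id, best_score  # reported to the officer as a "match"
    return None, best_score        # no candidate returned
```

How often such a system is right depends entirely on how well the embedding model separates different people and on where the threshold is set; both are choices made long before an officer in the field acts on a reported match.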

Under the right set of circumstances, these elements can conspire to produce a false match, underscoring the risk that facial recognition technology poses to civil liberties.

The danger of false positives. Facial recognition errors come in two types: false negatives and false positives. A false negative occurs when a query image, for example one captured by a police cruiser’s camera system, depicts a person who is in the known-faces database, perhaps a suspect in a crime, but the algorithm fails to detect the match. A false positive occurs when the algorithm erroneously matches a query image with a face from the known-faces database—potentially matching an innocent person’s face with that of a criminal. Both types of error can produce bad outcomes for the public, but a false negative does not create a harm that wouldn’t have occurred anyway in the absence of facial recognition. A false positive, by contrast, introduces a new danger for both police and everyday civilians.
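The two error types also trade off against each other through the match threshold. The short illustration below uses invented similarity scores, not data from any real system, to show the mechanics: raising the threshold suppresses false positives at the cost of more false negatives, and lowering it does the reverse.

```python
# Illustrative only: invented similarity scores, not output from any real system.
# Each pair is (similarity score, whether the two faces truly are the same person).
scored_pairs = [
    (0.91, True), (0.84, True), (0.72, True), (0.58, True),
    (0.81, False), (0.66, False), (0.49, False), (0.33, False),
]

for threshold in (0.9, 0.7, 0.5):
    false_positives = sum(1 for score, same in scored_pairs if score >= threshold and not same)
    false_negatives = sum(1 for score, same in scored_pairs if score < threshold and same)
    print(f"threshold={threshold:.1f}  false positives={false_positives}  "
          f"false negatives={false_negatives}")

# threshold=0.9  false positives=0  false negatives=3
# threshold=0.7  false positives=1  false negatives=1
# threshold=0.5  false positives=2  false negatives=0
```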

So, let’s look at just how bad these false positive rates are and what variables influence them.

A 2019 report by the National Institute of Standards and Technology (NIST) on the accuracy of facial recognition technology found that the rate of false positives varied heavily based on factors like query image quality, the underlying dataset, and the race of the faces being queried. False positive rates ranged from 3 errors per 100,000 queries (0.003 percent) under optimal conditions to 3 errors per 1,000 queries (0.3 percent). Moreover, the query images the federal institute tested were of better quality than those police in the field are likely to process.

In the same report, researchers tested the accuracy of facial recognition systems across demographic variables such as sex, age, and race and found that varying the race of the face-matching pairs produced false positive rates that were often two orders of magnitude higher for darker-skinned individuals. Images of women produced higher false positive rates than images of men. Images of East African women produced roughly 100 times more false positives (3 per 1,000) than images of white men (3 per 100,000).
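Some back-of-the-envelope arithmetic shows what a two-order-of-magnitude gap means in practice. The calculation below simply applies the NIST rates quoted above to an assumed query volume; the 100,000 queries per year is a hypothetical figure for illustration, not a number from the report.

```python
# Back-of-the-envelope arithmetic applying the NIST rates quoted above.
# The annual query volume is an assumed, illustrative figure.
rates = {
    "white men (optimal conditions)": 3 / 100_000,  # 0.003 percent
    "East African women": 3 / 1_000,                # 0.3 percent
}
queries_per_year = 100_000  # hypothetical volume, for illustration only

for group, rate in rates.items():
    expected_false_matches = rate * queries_per_year
    print(f"{group}: ~{expected_false_matches:.0f} expected false matches per year")

# white men (optimal conditions): ~3 expected false matches per year
# East African women: ~300 expected false matches per year
```

At that assumed volume, the gap is the difference between a handful of erroneous matches a year and hundreds of them.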

These false positive error rates are especially dangerous when combined with practical differences in where and when AI systems are deployed. For fingerprint and DNA analysis, AI systems operate on collected samples, generally in a lab setting where their findings can be reanalyzed by human experts; facial recognition systems, on the other hand, can be deployed by officers in the field. The final judgment on how accurate a match is can depend simply on an officer’s discernment.

Varying degrees of accuracy, little oversight. Generally, software vendors combine existing or proprietary facial recognition algorithms with user interfaces in software packages for use by law enforcement. Altogether, NIST has tested more than 600 facial recognition algorithms from more than 80 software vendors, many of them available for local law enforcement to purchase. The performance of these algorithms varies widely, and the underlying code and training data are often not available for inspection (as with most proprietary software), to protect vendors’ intellectual property. What’s more, vendors such as Clearview AI market directly to local law enforcement agencies, which are then responsible for vetting and procuring the facial recognition systems they deploy. And without an overarching framework for deploying these systems, many law enforcement agencies are not required to seek guidance before putting a system into local use. Given the variability among vendors and their software and the lack of oversight over how departments choose them, it’s little surprise that accuracy varies so much from one deployment to the next. It is worth reiterating: that varying accuracy can, and likely will, result in real-world harm.

As organizations like the American Civil Liberties Union, academic researchers, and activists raise the increasingly urgent issue of police facial recognition technologies in Congress, state houses, and town halls, a handful of cities, states, and even some countries have moved to ban the use of facial recognition by public agencies. Most recently, the state of Maine restricted the use of facial recognition to a specific set of instances. Elsewhere, however, efforts to rein in the technology have fallen short. In West Lafayette, Ind., Mayor John Dennis recently vetoed a bill written to ban the use of facial recognition by public agencies. Meanwhile, the technology is becoming more powerful. Clearview AI sells a system that has expanded the universe of known faces to some 10 billion photos from the public internet.

In written testimony at the congressional hearing where Williams spoke, New York University law professor Barry Friedman compared the regulatory environment surrounding facial recognition software to the “wild west.” It’s clear that facial recognition technology, however accurate it might be under ideal circumstances, can fall woefully short in practice. That is a failure of both policy and technology, and Williams is unlikely to be the last person to bear its costs.

Editor’s note: This article incorrectly stated that police had killed more than 1,000 Black people in the year following George Floyd’s murder. In fact, according to data from the group Mapping Police Violence analyzed by Al Jazeera, police killed 1,068 people, not all of whom were Black. According to the tracking group, Black people are three times more likely to be killed by police than white people. 


2 Comments
Brian Whit
3 years ago

What about the NSO Group’s spyware? You don’t even have to click on anything to be infected, then they have your camera, your microphone. Why on Earth would a private company just sell the spyware to governments, and not every player in the information game? I bet they did, and I bet we will never know. Probably every police force that got an MRAP also got spyware.

Trenton W. Ford
3 years ago
Reply to Brian Whit

That’s an interesting question. I’m just reading into the details of the allegations against this company. I’m interested in who their clients were. Thanks for reading and engaging. Please suggest any other topics that you might like to see covered in the realm of disruptive technologies.