
A new voice chimes in on regulating facial recognition and big tech

By Matt Field | January 30, 2019

Microsoft CEO Satya Nadella. Credit: World Economic Forum / swiss-image.ch / Valeriano DiDomenico CC BY-NC-SA 2.0.

Big tech companies in the United States have been grappling with criticism about how they guard against misinformation on social media, protect user data, and deal with the ever-growing surveillance capabilities of artificial intelligence. Last year, Facebook CEO Mark Zuckerberg acknowledged the need for regulation of social media companies.

Now Microsoft CEO Satya Nadella has opened the door to regulation of facial recognition software as well, telling a World Economic Forum audience recently that perhaps it’s time for governments to regulate the technology before tech companies compete their way toward bad results.

“We’ve said we’re going to have a set of principles that we will use to both build it and make sure that there’s fair and robust use of this technology and not any of the unintended consequences—I’ll call it self-regulation,” Nadella said. “But at the same time, we’ll also welcome any regulation that helps the marketplace not be a race to the bottom, because if you turn it to private enterprise, what happens, many times, is that we will have a race to the bottom.”

About the time Nadella gave his talk, a team of MIT and University of Toronto researchers released a follow-up on a previous study that found facial recognition software, including a Microsoft product, showed a glaring bias against women and people with darker skin. The new study showed that, in the wake of public criticism, Microsoft, IBM, and Megvii, a Chinese company, improved their software’s ability to identify the gender of politicians of various races around the world. It also showed that Amazon software, which the researchers had not studied previously, misjudged the gender of women with dark skin almost a third of the time.


Amazon has thus far taken a different approach to responding to such critical analysis. The American Civil Liberties Union conducted a test using Amazon’s Rekognition face identification product to compare pictures of members of Congress against a database of police mug shots and found that Rekognition disproportionately misidentified black lawmakers as arrestees. Amazon responded by contending that the civil liberties group had used the wrong settings for the software—in this case the default ones. Amazon also claimed that the MIT and University of Toronto team had used its technology in the wrong way.

Amazon has pitched Rekognition to law enforcement agencies and the US Immigration and Customs Enforcement agency. It has also submitted a patent application for a doorbell that would incorporate facial recognition, potentially contacting the police automatically about “suspicious” people.

Of course, facial recognition isn’t the only disruptive technology on offer in Silicon Valley, and it’s not the only one raising alarm bells. In its continued quest to “self-regulate” its way out of the misinformation bog that is its newsfeed, Facebook recently released a proposal for a council of perhaps 40 “experts” who would weigh in on whether to block content.

During the 2018 midterm elections last fall, Facebook deployed thousands of staff to combat the type of misinformation campaigns that plagued the 2016 presidential election. This new effort at self-regulation would empower the board of experts to “review Facebook’s most challenging content decisions.”

 



