
University bullshit experts: Fake face software signals a new era of AI-based BS

By Matt Field | July 19, 2019

Fake faces generated by the StyleGAN algorithm developed by Tero Karras, Samuli Laine, and Timo Aila at NVIDIA, via Carl Bergstrom and Jevin West. CC-BY-NC 4.0.

The sale of salt claiming to contain no genetically modified organisms, the little number taken out of context and made to seem like a big number, the faked photo that you know is fake but share online anyway because it syncs with your own opinions: There are countless examples in journalism, marketing, and politics of lies, distortions, and propaganda. To professors Carl Bergstrom and Jevin West, it all falls under one vast and growing umbrella: bullshit. You’ve heard of bullshit artists; meet two bullshit experts. For the past few years, they have taught a course at the University of Washington, Seattle on recognizing lies and misrepresentations. It is titled, of course, “Calling Bullshit.”

This year they’ll also be focusing on a troubling new type of BS made possible by rapidly advancing artificial intelligence technology, one that they predict catfishers and other bad actors will put to use: completely fake but incredibly real-looking AI-generated faces. One Associated Press investigation found an example of an AI-generated face being used in an espionage campaign run on the career networking site LinkedIn. To highlight this newer technology, Bergstrom and West created a website called Which Face is Real, where people can try to distinguish images of real people from images created by AI algorithms.

The photos are produced by what’s called a generative adversarial network, or GAN. Simply put, GANs are AI setups that pit two computer systems against one another. In the case of fake faces, one system tries to produce authentic-looking fake faces, and the other tries to figure out which images are fakes. The GAN used by Bergstrom and West was developed by researchers at graphics chip maker NVIDIA earlier this year. West hopes that with more awareness of this technology, the public will eventually be able to look at a fake and say, “Oh, it’s just been GAN-generated,” in the same way people these days might look at a doctored image and say “it’s been Photoshopped.”
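For readers curious what that two-system setup looks like in practice, here is a minimal sketch of a GAN training loop written in PyTorch. It is only a toy illustration of the generator-versus-discriminator idea described above, not the StyleGAN model NVIDIA released; the network sizes, learning rates, image dimensions, and the train_step helper are all illustrative assumptions.

# A minimal generative adversarial network (GAN) sketch in PyTorch.
# Toy illustration only: network sizes, learning rates, and the 28x28
# grayscale image shape are arbitrary choices, not NVIDIA's StyleGAN.
import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random "noise" vector the generator starts from
IMG_PIXELS = 28 * 28     # a small grayscale image, flattened to a vector

# The generator maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# The discriminator scores an image: closer to 1 means "looks real".
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool the updated discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Train the discriminator on a mix of real and generated images.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()   # don't update the generator here
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to make the discriminator label its output "real".
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Usage: feed batches of real images flattened to IMG_PIXELS values in [-1, 1],
# e.g. train_step(torch.rand(32, IMG_PIXELS) * 2 - 1) as a stand-in for real data.

Repeated over many batches, the two networks push each other: as the discriminator gets better at catching fakes, the generator is forced to produce images that look more and more authentic, which is how systems like the one behind Which Face is Real end up with convincing faces.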

“It’s in the transitions, where most of the public might not know that the technology exists, that sort of more nefarious things could happen,” West said. “People can be tricked more because they just don’t know it exists. So the faster that we can let the public know at least [that] it exists [the better]. And then we have to deal with it as a society.”

For now, many of the photos on Which Face is Real are fairly easy to discern as fake. Background patterns are unrealistically repeated. A faint, distorted ring can sometimes outline the faces. But the pictures will only get better, West said. The pair initially thought the software would not be able to produce more than one image of the same fake person. Three months later, researchers from the Samsung AI Center developed a way to make video (that is, a sequence of images) of the same fake person based on just one source image.

AI-generated faces can have some odd distortions. Face generated by the StyleGAN algorithm developed by Tero Karras, Samuli Laine, and Timo Aila at NVIDIA via Carl Bergstrom and Jevin West. CC-BY-NC 4.0.

Faked faces could prove particularly powerful in terms of influencing viewers. Faces, Bergstrom said, lull people into empathizing with the supposed people they are looking at. “I can imagine what it’d be like to talk to him, imagine how he might be hoping to have a career in data science, and all of this. And it triggers this capacity for empathy that I have,” Bergstrom said. “And I pour all that into this image, and build all of this up. And then I have to just shake myself and remind myself: ‘That person doesn’t even exist.’ And it’s really weird to have that human capacity for sort of emotional transfer being triggered by someone that’s not even real.”

Bergstrom and West cover a wide range of bullshit, from misleading data visualization to fake news, in their course, which fills up almost immediately during class registration periods. “Each of the week-long modules will explore one specific facet of bullshit,” the syllabus proclaims. And by the end of the course, students should be able to tell a scientist or their “crystals-and-homeopathy aunt or casually racist uncle” why a particular claim is BS. Bergstrom said he sees evidence that taking the course affects how students process information.

To Bergstrom and West, the fake face-making software shows that society is pushing further into an era of AI-based BS, a particular strain that will become increasingly difficult to identify.

“So the fact that you can create a human face with software in a matter of minutes using industrial level of GPUs that anyone can access from their home should tell us something about the many other things that you could be doing with this technology, things that aren’t just about mimicking a face, but are about mimicking voice and video and all the other things that, you know, we think are real,” West said. “It’s almost as if this new world we live in in 2019, we almost can’t believe what we see anymore. And I think that sort of coincides with this whole rise in misinformation and distrust and disinformation and increased propaganda.”

