Perhaps you’ve heard of the famous 1980 wager between population scientist Paul Ehrlich and business professor Julian Simon. Ehrlich wagered that, due to population pressures, a basket of five metals would increase in price over the subsequent 10 years. Simon bet that, due to countervailing market forces, prices would stay level or go down. They went down—by 50 percent. Ehrlich looked a little silly.
Today, researchers who study artificial intelligence and its social impacts are betting on the following proposition: By the end of this year, a “deep fake” video about a political candidate will receive more than 2 million views before it gets debunked. As reported by Jeremy Hsu of IEEE Spectrum, the bettors include Tim Hwang, director of an AI-related initiative at Harvard’s Berkman Klein Center, and Michael Horowitz, associate director of Perry World House at the University of Pennsylvania (and also a Bulletin columnist). At stake are cocktails. People betting “yes” on the proposition stand to win Manhattans. The “no” folks stand to win tropical concoctions of some kind. (If the wager were structured rationally, of course, the losers would have to drink tropical concoctions while the winners would avoid that fate.)
In case you’re unfamiliar with deep fakes, they are AI-manipulated videos that show people doing or saying things that they never did or said. They’ve been in the news because a certain species of internet pervert enjoys deep-faking the faces of Hollywood actresses onto the bodies of pornographic performers. That’s bad enough, but deep fakes have the potential for much wider mischief: undermining social trust or even provoking armed conflict. To appreciate how deceptive deep fakes can be, watch a widely circulated video in which Barack Obama appears to call Donald Trump “a total and complete dipshit.” Obama, of course, has said no such thing (at least not in public); the video is intended to demonstrate the dangers of deep fakes.
As for the wager, time will tell who wins. But in the long run, deep fakes are for real.
By the way, keep an eye out for the July/August issue of the Bulletin’s subscription-based magazine, in which Garlin Gilchrist—executive director of the University of Michigan’s Center for Social Media Responsibility—sits for an interview on deep fakes and the “information apocalypse.”
Of course, this is not an engineering problem. The Center is part of the Michigan School of Information, which seems to be what they call their IT program.
A more interdisciplinary approach would be helpful.