By Matt Field | October 29, 2019
In February, OpenAI touted a new artificial intelligence program that the research outfit described as “chameleon-like” in its ability to produce coherent paragraphs of text based on a given input. The program, known as GPT-2, can write essays or craft poems; it can even answer reading comprehension questions. But it may also be able to do something much more troubling: produce credible fake news at the click of a button. Sarah Kreps, a professor at Cornell University, tested this proposition, and what she and a colleague found alarmed her.
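To make that capability concrete: OpenAI has publicly released smaller versions of the GPT-2 model, and the snippet below is a minimal sketch of the kind of prompt-based generation described above, assuming the Hugging Face transformers library and its "gpt2" checkpoint. The prompt and generation parameters are illustrative only and are not drawn from Kreps's study.

```python
# A minimal sketch of prompt-based text generation with a publicly
# released GPT-2 checkpoint, via the Hugging Face transformers library.
# This is an illustration of the behavior described in the article,
# not code from OpenAI or from Kreps's research.
from transformers import pipeline

# Load the small released GPT-2 model as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Given a short prompt, the model continues it with coherent text.
prompt = "Scientists announced today that"  # illustrative prompt
outputs = generator(prompt, max_length=100, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Each run produces a different continuation, which is precisely what makes the technology attractive for generating disinformation at scale: plausible, non-repeating text on demand.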
Russia’s effort to disrupt the 2016 US presidential election required, among other elements, hiring a small army of internet trolls. Now it may be possible to automate a significant amount of that work, a technological development that could make it harder to discern truth from fiction.
Editor’s note: Sarah Kreps, featured in this video, has been collaborating with OpenAI to explore the potential impacts of GPT-2.