By John Cook, November 17, 2017
Heavily criticized for their role in spreading lies that influenced the 2016 US presidential election, the social media giants have begun to acknowledge what happened. In October, representatives of Google, Facebook, and Twitter testified before Congress and pledged to improve their response to the problem. The companies have even taken action to flag misinformation when it appears on their sites.
This is a welcome development. It is imperative that social media outlets push back against fake news, both in future elections and on critical scientific issues like vaccines and climate change. But it is also important to keep in mind that, with the wrong strategies, the fight against false statements could be ineffective or even undermine its own goals. Efforts to counter misinformation should be informed by scientific research into the efficacy of various approaches. Fortunately, psychological research on misinformation goes back decades. The solution lies in pairing psychology with technology.
Tech tries to undo the damage.
There is no question that social media have played a large role in amplifying the impact of misinformation over the last decade. A stunning one-fifth of online discussion about the 2016 US election originated from automated social media bots. (Bots are automated scripts that generate content and interact with people online.) Pro-Trump bots were significantly more prolific than pro-Clinton bots, with the result that most of this automated content supported Trump. To make matters worse, fake news stories were more widely shared than real stories, underscoring the harsh reality that truth is at a disadvantage when competing with often inflammatory fake news.
All these issues remain unresolved. To be sure, the social media giants are attempting to turn the fake news Titanic, developing tools and algorithms to counter the corrosive influence of misinformation. Google now includes authoritative fact-checks in its search results. Facebook has begun labelling links to fake news sources as false, and says it can reduce the spread of an untrue story by 80 percent.
However, efforts to counter misinformation can be ineffective, or worse, even counterproductive. When conspiracy theorists encounter debunking posts on Facebook, they double down by increasing their likes and comments on posts that support their conspiracies. General warnings about fake news cause people to have less trust in true headlines, showing that even well-intended efforts to fight fake news can boomerang unexpectedly.
Further, new research indicates another subtle but dangerous side effect of fact-checking. As fake news labels become more commonplace on platforms like Facebook and Google, people are coming to expect fake news to be labelled as such. As a result, when a fake news post is not tagged, people are more likely to believe it is true than if they had never been exposed to fake news labels in the first place. Researchers call this the “implied truth effect”—a reminder of the psychological minefield that is the human mind when dealing with misinformation.
Time for technocognition.
While fake news rocketed into public consciousness just over the last year, researchers have studied how to counter misinformation for many years, discovering myriad potential pitfalls along the way. When debunkings are poorly designed, they can be ineffective or even make matters worse by reinforcing the misinformation. For example, if a debunked myth is not replaced with a fact—in the same way that a defense lawyer tries to provide an alternative suspect in a murder trial—the myth is likely to return and continue to influence people. And when a debunking threatens a person’s deeply held beliefs, it can backfire and strengthen the misconception.
Without psychological research, we wouldn’t know any of this. But while such research is essential, it cannot alone stop post-truthism from swamping society. The tech world needs psychology to design effective, evidence-based strategies, but psychology needs technology in order to reach the masses. This has led to an interdisciplinary approach known as “technocognition.” The idea behind technocognition is that information architecture should incorporate principles from psychology, behavioral economics, and philosophy to undo the damage and polarization that fake news has inflicted through social media. Technology contributed to the problem, and it is an important part of the solution.
One of the most exciting areas of research into misinformation seeks the holy grail of fact-checking: the ability to automatically detect a claim in real time and instantly assess its accuracy. In a study published in 2015, researchers at Indiana University developed a novel method to computationally assess the accuracy of a given claim. The method involves turning information from Wikipedia infoboxes (the least-disputed sources on the site) into a network of subjects—such as “Socrates”—and objects—such as “person.” Any statement of fact, such as “Socrates was a person,” is represented by a link between the two nodes in the network. Once this network was constructed, the researchers could assess the veracity of any new claim by measuring the length of the path connecting its two nodes. For instance, the computer program found a long path between the terms “Obama” and “Muslim,” and so rated the claim “Obama is a Muslim” as having low truth value. (While the scientists built their system on Wikipedia, they write that they could “leverage any collection of factual human knowledge.”)
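To make the idea concrete, here is a minimal sketch in Python (using the networkx graph library) of path-length fact-checking. The toy facts, node names, and scoring function are illustrative assumptions, not the Indiana University team’s actual data or code; their published method also discounts paths that run through highly connected generic nodes, a refinement omitted here.

```python
# Minimal sketch of knowledge-graph fact-checking: statements are
# (subject, object) links, and a claim's plausibility falls as the
# shortest path between its two terms grows. Toy data, not the
# researchers' system.

import networkx as nx

# A tiny knowledge network of subject-object pairs, in the spirit of
# the Wikipedia-infobox triples described above (illustrative only).
facts = [
    ("Socrates", "person"),
    ("Socrates", "philosopher"),
    ("philosopher", "person"),
    ("Obama", "president"),
    ("president", "person"),
    ("Obama", "Christianity"),
    ("Christianity", "religion"),
    ("Islam", "religion"),
    ("Muslim", "Islam"),
]

graph = nx.Graph()
graph.add_edges_from(facts)

def truth_value(subject: str, obj: str) -> float:
    """Score the claim 'subject is obj' by graph proximity.

    A direct link scores 1.0; longer paths score progressively lower;
    no path at all scores 0.0.
    """
    if graph.has_edge(subject, obj):
        return 1.0
    try:
        distance = nx.shortest_path_length(graph, subject, obj)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return 0.0
    return 1.0 / distance

print(truth_value("Socrates", "person"))  # direct link  -> 1.0
print(truth_value("Obama", "Muslim"))     # 4-hop path   -> 0.25
```

Even this toy version reproduces the qualitative behavior described above: directly linked claims score high, while claims whose terms sit far apart in the network receive a low truth value.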
Advances in automatically detecting misinformation should be coupled with research into the relative effectiveness of different ways to present fact-checking results. For example, experiments show that explaining the rhetorical techniques used to mislead people is an effective way to counter misinformation, but this strategy has yet to be tested in a social media context. Another useful research finding is that fact-checks are more effective when they come from a friend. Based on this knowledge, a tech company could devise a way to make it easy for people to share fact-checking results with their social networks.
It is still early days, but interdisciplinary approaches combining technology and psychological research could yield ways to reduce the damage caused by online misinformation. Scholarly research, unfortunately, moves slowly, and elections over the next few years face a fake-news threat similar to the one we saw in 2016. But our understanding is growing. Now more than ever, we need collaboration between the social media giants and misinformation researchers.