The pandemic, protests, and election are putting social media content policies under a microscope

By Justin Sherman | July 10, 2020

President Donald Trump met with Facebook CEO Mark Zuckerberg last fall. The president recently issued an executive order meant to roll back some of the liability protections social media companies enjoy. Credit: White House.

In May, President Donald Trump dramatically escalated his campaign of grievances against social media companies he’s accused of bias against him. The president had tweeted lies about voting by mail and, in the wake of nationwide protests against police brutality and racism, tweeted that the military would shoot looters. In an unprecedented move for the platform, Twitter decided to label the former as election misinformation and the latter as glorifying violence. After Twitter called out the president for misinformation, Trump issued an executive order meant to modify federal law, specifically Section 230 of the Communications Decency Act, which Recode’s Sara Morrison summarizes quite well: it “says that internet platforms that host third-party content—think of tweets on Twitter, posts on Facebook, photos on Instagram, reviews on Yelp, or a news outlet’s reader comments—are not liable for what those third parties post (with a few exceptions).”

The Trump/Twitter feud exacerbates a long-simmering conflict about the responsibility and legal authority that social media platforms and governments have to regulate hate speech and misinformation on the internet. It’s a fight that’s playing out against the backdrop of a deadly global pandemic, global protests against systemic racism, and, in the United States, a presidential election.

Many experts were quick to point out that Trump’s executive order is probably legally baseless for several reasons, most obviously because presidents cannot generally modify federal laws—laws written and passed by Congress—with executive orders. Instead, the order is more accurately understood as political theater—playing to claims that Facebook and other social media platforms are “biased” against conservative speech—and as a bullying tactic aimed at the platforms (hence, for example, six mentions of Twitter in the order’s text).

But Republicans aren’t alone in their concerns about Section 230. Some Democrats, including former Vice President Joe Biden, are concerned that Section 230 lets platforms get away with not removing disinformation and other harmful content. As a recent CNET headline pointed out, however, if some Democrats and Republicans feel the law is flawed, “that’s about all they agree on.”

Twitter’s moves against Trump’s tweets put pressure on Facebook and other platforms to rein in hate speech, in particular by reining in the president. Trump posted the message threatening to shoot looters on Facebook, but the platform declined to follow Twitter’s lead and police the content, sparking widespread criticism and even a “walkout” by discontented Facebook employees. A range of advertisers have since pulled ads from the site amid its refusals to act against hate speech, and activists behind the #StopHateForProfit campaign are pushing the boycott far beyond Facebook.

The debate over what kind of content to police and how to police it continues, as do attempts to confuse that debate.

Trump and his supporters, including conservative pundits, claim with essentially no factual basis that Facebook is biased against conservatives and that Section 230 of the Communications Decency Act therefore needs to be amended. A 2019 study conducted by a former Republican senator and a law firm found no conservative bias on Facebook. A 2020 study of Facebook data in fact found the complete opposite: conservative news dominates the platform.

On June 30, when this piece was written, the top-performing link posts on Facebook came from Franklin Graham, Donald Trump for President, Ben Shapiro, Blue Lives Matter, and Dan Bongino—conservative pundits, politicians, and causes, one and all. It was only after public outcry that Facebook recently removed a Trump advertisement that contained Nazi symbols. And reports of the Trump White House asking Facebook, Twitter, and others to remove posts calling for Black Lives Matter protesters to break curfew or topple statues further underscore that the White House has political motives.

A second effort to confuse the debate comes from the social media platforms themselves, which have often given some version of the line, “we don’t want to be the arbiters of speech.” Facebook is among the chief promoters of this theme: Mark Zuckerberg made this exact claim a little over a month ago. There are legitimate concerns about the policies that dictate content takedowns, account suspensions, and fact-checks, and about how transparent the platforms are when they police content and suspend accounts. In some ways, the platforms’ argument about not wanting to weigh in too heavily on speech reflects these concerns.

But Facebook, Twitter, and other platforms—YouTube, Reddit, and even TikTok—already make decisions about what content can and cannot stay on their sites: sometimes users report questionable content and it stays up; sometimes reported content is taken down; sometimes employees review content and decide either way. For a firm like Facebook to claim it doesn’t want to be an arbiter of speech suggests—falsely—that the platform doesn’t already make content moderation decisions every day. Even independent auditors hired by Facebook criticized the company for prioritizing some idea of free speech over values such as civil rights; the auditors cited, for example, the company’s decision to leave up posts by Trump that violated its policies on hateful and violent speech.

Given that platforms already make content moderation decisions, the pressing questions should center on what those moderation policies are, how transparent they are, and how the humans and the algorithms that power the sites make those decisions.

Content curation goes beyond individual posts. These platforms shape news feeds through algorithms. On Facebook and Instagram, an algorithm picks the order of posts shown to users, guessing what they’ll like and engage with. Even on Twitter, where users can opt to view tweets chronologically, the platform still curates content by, for instance, deciding which tweets from accounts a user doesn’t follow appear in a given feed and recommending whom a user should follow in the future.
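To make that distinction concrete, here is a minimal, purely illustrative sketch (not any platform’s actual code) contrasting a chronological feed with an engagement-ranked one; the post fields, scoring weights, and function names are assumptions made for illustration.

from dataclasses import dataclass
from datetime import datetime
from typing import List

# Hypothetical post record. These fields and the weights below are
# illustrative assumptions, not any platform's actual ranking signals.
@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_likes: float = 0.0     # model's guess at likes from this user
    predicted_comments: float = 0.0  # model's guess at comments or replies

def chronological_feed(posts: List[Post]) -> List[Post]:
    # Newest first: the only editorial choice is ordering by time.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def ranked_feed(posts: List[Post]) -> List[Post]:
    # Engagement-ranked: a model's predictions decide what the user sees first.
    def score(p: Post) -> float:
        # Weighted guess at engagement; real systems use far richer signals.
        return 2.0 * p.predicted_comments + 1.0 * p.predicted_likes
    return sorted(posts, key=score, reverse=True)

The point of the sketch is simply that the ranked version embeds editorial judgment, however automated, about which posts a user sees first.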

While many people cite the First Amendment when these realities about content moderation surface, that constitutional protection constrains the government, not private companies, and it does not apply to speech on private internet sites. Social media platforms can delete people’s posts, curate their news feeds, suspend their accounts, or ban them entirely—all under the liability shield offered by Section 230. While there may be cries of “free speech” when tweets are flagged as misinformation or posts are removed for hate speech, the reality is that platforms could be far more aggressive against hate speech and lies if they desired.

The debate over what kind of content belongs on social media, and over the influence the platforms have on elections and other democratic institutions, reached a fever pitch after the 2016 presidential election, when Russian operatives shared disinformation and misinformation and amplified lies and divisions on Facebook, Twitter, Instagram, and other platforms to sow discord in the United States and sway the election’s outcome. The platforms were criticized for not acting against disinformation and misinformation then, and for dragging their feet again after terrorists took to social media to publicize their massacres.

Misinformation on COVID-19—from false cures to conspiracy theories about the virus being manufactured and released on purpose—has flourished on many platforms, including Facebook, Twitter, YouTube, Instagram, TikTok, and Reddit. In response, the platforms have acted more readily to tamp down the misinformation. They’ve put up links to reputable medical sources and have in some cases relied on third-party fact-checkers to flag false claims.

Those efforts are laudable, but there are still calls for the platforms to do more to keep users from circumventing these checks. And as the coronavirus pandemic continues amid an abysmal Trump administration response and governance failures at the state and local levels; as protests against police brutality and systemic racism march on; and as the US gets closer to the presidential election in November, questions about the efforts social media platforms are making to reduce misinformation, hate speech, and calls for violence—including apparent violations of standards by political leaders—will only grow more urgent.

