Do social media bots have a right to free speech?

By Matt Field | January 9, 2019

Illustration by Matt Field. Based in part on photos by Joe Ravi CC SA-BY 3.0 and Morio CC SA 3.0 via Wikimedia Commons.

People who tweet in their jobs—let’s say 21st century journalists, just for example—might say that writing two million tweets represents a daunting challenge. That’s the rough number, Twitter says, that a Russian-linked set of accounts cranked out on the 2016 US presidential race in just the 10 weeks leading up to the election. Of course, in that case, the prolific “authors” were a collection of about 50,000 automated accounts often called bots. A new law in California will soon force bots that engage in electioneering or marketing to declare their non-human identity.

While the Kremlin agents who interfered in the US election likely wouldn’t be bound by a state-level law in the United States, or deterred by one, domestic political campaigns and businesses might be. For at least one constitutional scholar, that possibility raises this question: Do bots, like citizens, have that most sacred right enshrined in the First Amendment to the US Constitution, the right to free speech? Laurent Sacharoff, a law professor at the University of Arkansas, thinks the people programming bots may want US courts to answer that in the affirmative.

Take a hypothetical bot that engages a voter around a shared concern like motherhood, for instance. “If it has to say, ‘Well look, I’m not really a mother, I’m a chatbot mother, a mother of other chatbots. And when I say I feel your pain, I don’t actually have feelings.’ That’s just not going to be very effective,” Sacharoff says.

A court ruling on whether bots have First Amendment free speech rights remains in the realm of conjecture. Meanwhile, bots on Twitter are very real. The Pew Research Center conducted a study in 2017 that looked at 1.2 million tweets that contained links. The researchers found that 66 percent of the links were posted by “suspected bots,” a share that grew much higher for certain types of content; bots posted about 90 percent of links to news aggregators, for instance. When a gunman killed 11 people in a synagogue in the US city of Pittsburgh last fall, Robhat Labs, which analyzes bots during big news events, found that bots drove 23 percent of the Twitter activity around the issue during one 24-hour period. Bots often amplify extremist views. Twitter claims it is making progress in combating “spammy or automated” accounts and is making it harder to spread fake election content.


As court rulings continue to define the scope of the First Amendment, it has grown to include corporations. A 2010 Supreme Court decision in the notorious Citizens United case opened the doors for corporations and unions to spend unlimited money directly on electioneering. In the eyes of the court, the elimination of limits on campaign spending was meant to ensure free speech. Sacharoff says a central point underpinning the ruling was that voters could benefit from hearing the “message of a corporation.” The benefit, he says, is for the audience, but the right lies with the corporation. The same principle could hold for bots.

“Even though bots are abstract entities, we might think of them as having free speech rights to the extent that they are promoting or promulgating useful information for the rest of us,” Sacharoff says. “That’s one theory of why a bot would have a First Amendment free speech right, almost independent of its creators.”

Alternatively, the bots could just be viewed as direct extensions of their human creators. In either case—whether because of an independent right to free speech or because of a human creator’s right—Sacharoff says, “you can get to one or another nature of bots having some kind of free speech right.”

In 1943, the Supreme Court ruled that public schools in West Virginia could not force students to recite the Pledge of Allegiance, finding that the requirement violated the First Amendment. After all, “to sustain the compulsory flag salute, we are required to say that a Bill of Rights which guards the individual’s right to speak his own mind left it open to public authorities to compel him to utter what is not in his mind,” Justice Robert H. Jackson wrote in the majority opinion. The so-called compelled speech doctrine was born.

“These are cases that say that the First Amendment prohibits the government not just from suppressing speech but also from compelling a person to state a message,” Sacharoff says. “And in this case, the message is, ‘I’m a bot, not a human being.’ If the bot or the human being behind it has a free speech right, then they have a right against being compelled to disclose that they’re a bot.”


In previous Bulletin coverage, the author of the new California law, State Sen. Robert Hertzberg, dismissed the idea that the law violates free speech rights, arguing that anonymous marketing and electioneering bots are committing fraud.

“My point is, you can say whatever the heck you want,” Hertzberg says. “I don’t want to control one bit of the content of what’s being said. Zero, zero, zero, zero, zero, zero. All I want is for the person who has to hear the content to know it comes from a computer. To me, that’s a fraud element versus a free speech element.”

Sacharoff believes that the issue of bots and their potential First Amendment rights may eventually be tested in court. Campaigns, he says, will find that bots are helpful and that their “usefulness derives from the fact that they don’t have to disclose that they’re bots.”

“If some account is retweeting something, if they have to say, ‘I’m a bot’ every time, then it’s less effective. So sure I can see some campaign seeking a declaratory judgment that the law is invalid,” he says. “Ditto, I guess, [for] selling stuff on the commercial side.”

In First Amendment cases, a court would consider whether there is a compelling reason to limit the scope of a person’s speech. In the case of electioneering social-media bots, the sanctity of elections would probably suffice, Sacharoff says. But then a court would weigh whether the anti-bot law is too broad or even effective.

“If you can accomplish the same goal some other way, or if this law doesn’t accomplish the goal, then the court will say, ‘We’re going to strike it down. Sure, this is a good, worthy goal, but this law just doesn’t advance it,’” Sacharoff says. “So it will be very fact specific.”


