In February, the office of California State Senator Robert Hertzberg created a Twitter bot. @Bot_Hertzberg automatically retweets live webcasts of legislative hearings, a fellow bot called Earthquake Robot, and messages containing the hashtag #SB1001. That stands for Senate Bill 1001, which, no coincidence, Hertzberg introduced. If it passes, which could happen this month, it will become the first US law to require automated social media accounts to disclose themselves as non-human. For an example of what disclosure might look like, see @Bot_Hertzberg’s Twitter bio, which reads, “*I AM A BOT.* Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I’m transparent about being a bot!”
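The mechanics behind an account like @Bot_Hertzberg are simple enough to sketch. The toy Python below is purely illustrative, not the senator's actual implementation: it stands in a timeline of invented tweets for Twitter's API and shows a bot that reposts anything tagged #SB1001 while prefixing every post with an explicit disclosure.

```python
# Toy simulation of a self-disclosing retweet bot. The timeline data and
# the helper functions are invented for illustration; a real bot would
# read from Twitter's API rather than a list of dictionaries.

DISCLOSURE = "[I AM A BOT] "

def find_tagged(timeline, hashtag):
    """Return tweets whose text contains the given hashtag (case-insensitive)."""
    return [t for t in timeline if hashtag.lower() in t["text"].lower()]

def retweet_with_disclosure(tweet):
    """Prefix every repost with an explicit non-human disclosure."""
    return DISCLOSURE + "RT @{}: {}".format(tweet["user"], tweet["text"])

timeline = [
    {"user": "advocate1", "text": "Support #SB1001 and label the bots!"},
    {"user": "bystander", "text": "Nice weather in Sacramento today."},
    {"user": "advocate2", "text": "Hearing on #sb1001 starts at noon."},
]

for tweet in find_tagged(timeline, "#SB1001"):
    print(retweet_with_disclosure(tweet))
```

The entire "disclosure" the bill asks for amounts to that one constant string; the automation itself is unchanged.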
In recent years, humans have used bots to inject automated messages into social media streams with the intent to sell things, exacerbate political divisions, and sway elections. During the 2016 US presidential election, for instance, social media bots created 19 percent of all tweets regarding the election, some 3.8 million in total. Twitter representatives told a Senate committee that a Russian intelligence agency had 2,752 Twitter accounts, 47 percent of which were bots. The difference between a single individual attempting to stir controversy and what a bot can accomplish is one of scale: A single bot can do what it would take many thousands of humans to do, allowing a lone crank—or foreign political operative—to create the appearance of a groundswell. Still smarting from Russian interference in their presidential election, at least some US voters are highly supportive of bills like Hertzberg’s. Free speech advocates, though, have weighed in with caution and criticism, noting that there are plenty of legitimate ways to use social media bots, including political and artistic speech, and that lawmakers should be wary of abridging the freedoms guaranteed by the First Amendment.
In Hertzberg’s telling, his bill simply aims to stop fraud. With new technology come new ways to con people, for example by creating machines that pose as human; therefore, new laws are required. (Nobody likes an undisclosed fake human—just see Blade Runner or Westworld.) The California bill, which has become narrower over successive drafts, is now very specific about what exactly it curtails. It defines a “bot” as “an automated online account on an online platform that is designed to mimic or behave like the account of a person.” So while in general the term “bot” can refer to many kinds of automated software, in this context it is much more particular. The California bill would ban anyone from using a bot to communicate with another person “with the intent to mislead the other person about its artificial identity” for the purpose of selling something or influencing an election—unless the bot is disclosed as such. A similar US Senate bill, introduced by California Democrat Dianne Feinstein in June, avoids the term “bot” altogether, instead defining its goal as to regulate the use of “automated software programs intended to impersonate or replicate human activity on social media.”
The social media leviathans have been less than enthused about attempts to regulate anything they do, but Hertzberg is not easy to paint as a techno-ignoramus who just doesn’t get the future. During a 12-year stint in the private sector he invested in solar energy, before returning to public office in 2014 to represent the San Fernando Valley and parts of Los Angeles. He is currently sponsoring a bill that would let companies issue and transfer corporate share certificates via the blockchain, the technology that keeps a secure record of all interactions with a distributed database. “Government can’t continue to be a dinosaur on this stuff,” he said when he announced that bill.
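That one-sentence gloss on blockchain can be made concrete with a toy hash chain. This is an illustrative sketch, not any production blockchain: each new record commits to the hash of the record before it, so quietly editing an old entry (say, a share-certificate transfer) invalidates every record that follows.

```python
# Minimal hash-chain sketch: each block stores the hash of the previous
# block, so altering any past entry breaks the chain from that point on.
import hashlib
import json

def make_block(data, prev_hash):
    """Build a block whose hash commits to both its data and its predecessor."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash},
                   sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and check each block links to the one before it."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"data": block["data"], "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# A hypothetical ledger of share-certificate transfers.
chain = [make_block("issue 100 shares to Alice", prev_hash="0" * 64)]
chain.append(make_block("Alice transfers 40 shares to Bob",
                        prev_hash=chain[-1]["hash"]))

print(chain_is_valid(chain))                       # the untampered chain verifies
chain[0]["data"] = "issue 1000 shares to Alice"    # a retroactive edit...
print(chain_is_valid(chain))                       # ...is immediately detected
```

Real systems like the one Hertzberg's bill contemplates add distribution (many parties holding copies of the chain) and consensus rules on top of this basic tamper-evidence.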
But this isn’t his first time fighting big tech. Hertzberg was the co-sponsor of a digital privacy bill passed in June that grants consumers more control over the spread of their personal information online. Tech companies hated the bill, but not as much as they hated the alternative, which was to risk a proposed ballot initiative for November that would have imposed even more restrictions on their ability to collect and sell personal data. The bill passed in dramatic fashion, just hours before the deadline to pull the initiative—which was polling with about 80 percent approval—from the November ballot. California’s digital privacy law is now one of the most comprehensive in the United States.
The Bulletin of the Atomic Scientists spoke with Senator Hertzberg about bots and blockchain by telephone in mid-August 2018. The interview below has been edited and condensed.
BAS: How are individuals being hurt by undisclosed social media bots?
RH: People who are creating bots in a fraudulent manner are intentionally trying to create the false impression that the communication is from a person. How does that manifest itself? I’ll give you three examples:
First, a company says “I’m going to the markets, I’m going to sell my company,” or “I’m going to IPO my company, and I have 10 million followers.” What happens is they defraud the investors, because they only have five million real followers. The other five million are not real people, they’re bots.
Second, there are companies out there that sell their ability to expand your social media platform. They get out there and say, “pay me, because I’ve got you an extra 10,000 followers,” when in fact they didn’t at all. They’re just made up in order to defraud the person into thinking they had real-people followers.
Third and most important, as we know from the headlines, is all about the issue of politics. We know that within hours after the horrible tragedy in Parkland, there were hundreds and hundreds of bots on both sides basically trying to incite people on the right and the left for guns or against guns. The funding mechanism now is: If you have outrageous headlines, and people share those headlines, people get paid. I know people in Bulgaria who were making two or three thousand dollars a day by creating headlines during the [US presidential] election. That’s why the election was so interesting. Not only was it an issue about messing with our system, but it also created this whole false impression of actually talking to people. They’re saying these incendiary things, all of which were informed not by political speech but by an effort to make money. So you’ve created this whole false political economy. It really is a very, very, very significant challenge, none of which existed 10 years ago.
BAS: Both your bill and the one in the US Senate have prompted people to point out that in the wide world of bots, there are good ones as well as bad ones. For example, there are bots that index web pages or conduct research. With that in mind, could you clarify how specific your legislation is about the kind of bot it aims to regulate?
RH: It’s all about fraud. If you’re indexing, or if you’re a bot answering a question on a web page or something like that, people know that that is artificial intelligence to assist people.
The idea [behind the bill] is where the intent is to defraud. That’s the distinction. This is all about a new kind of fraud. That’s all it is. Before we had checks in America, we had other ways to defraud people, in the form of phony IOUs. Then we came up with the checking system. Then we wrote the laws to make sure people couldn’t write phony checks and falsify checks. It’s the same thing.
It’s just the evolution of the law, of how it deals with the basic underlying principle that has been in existence since the beginning of time, which is, you don’t defraud people. That’s what we care about. You can assist them. You can use tools to help people, but you can’t defraud them. That has always been a core element of human relationships.
BAS: How do you envision compliance looking? I’m wondering, for example, whether bots would identify themselves only in their bios, or in every post?
RH: It’s been challenging, because the industry is very fidgety about this whole thing, and then there are First Amendment people fidgety about it. I just want to be able to have a little Good Housekeeping seal that says, “this is a bot.” You can say what you want. I don’t want to affect someone’s speech. I just want to make sure that they know they’re not dealing with a human being. That’s all.
If it’s a bot that is a good bot that you need to help you index your stuff, fine. But if it’s a bot that’s trying to persuade you to vote for or against gun legislation, then no.
BAS: Under your law, would big social media platforms have to have systems for identifying undisclosed bots on their platforms, so that the companies could then go shut them down?
RH: Well, that’s what I want. [The bill] doesn’t have that. I’ve been fighting these guys. I’m debating what to do about this, because that, to me, would make it meaningful.
Most of these companies can know and easily test whether or not they’re dealing with a human being. Just like when you’re online and you check a box that says “I’m not a robot.” There are clearly ways to do it. But they’re balking pretty heavily. We’re still in the final throes of the negotiations on this issue.
I just spent the last three months dealing with privacy, and it’s been a challenge dealing with all these companies. They argue, on the one hand, “we’re inventing the future.” But it’s hard sometimes to get them involved in a common-sense conversation about what the balances should be in inventing that future.
BAS: When you say they’re fighting you, do you mean that currently there are Twitter and Facebook lobbyists in Sacramento making their case?
RH: Yes, that’s right.
BAS: Do you not think there is legitimacy to the claim that they sometimes can’t identify a bot?
RH: I don’t know. Maybe we should set a standard if they can’t. But they should at least make a reasonable attempt. Anything to get rid of fraud in the system. I can’t tell you how many spam calls I get every day, but at least they are identified by the phone company as spam.
We’ve got these extraordinary tools that we celebrate and embrace. I just ordered a new refrigerator online, and it’s being delivered to my house today by Amazon. How easy is that? There are unbelievable benefits. Every time you’re talking to a kid, they’re looking everything up on Google. A fabulous advancement.
But as we have these advancements, we’ve got to figure out how to make sure that [people] are trustworthy and telling the truth. Because you’re not sitting across the counter from someone, we’ve got to continually be thoughtful about coming up with new ways to make sure that people are getting the truth. I don’t want to hurt the businesses. But if it’s easy to [protect us], the companies should do it. If it’s not easy, they should make some showing as to why they can’t. But they can’t just say, “I don’t want to do it.”
BAS: Your bill, if it passes, could become the first law of its kind in the United States. How would it regulate bots or platforms that are based outside of California?
RH: It doesn’t. The jurisdiction of this bill only extends to the borders of California. But I can tell you as someone who’s been in and around public service for more than 40 years … what happens is, because we have so many people located here, and we have jurisdiction over companies like Twitter and Facebook, certainly we impact their global footprint. Number two, states all across the country adopt a lot of what we do. I’ll give you an example. We changed the regulations in California on how much energy refrigerators can use. Well, because our markets are so big, the refrigerator manufacturers just complied with California law. Same thing with respect to low-carbon fuel when I was here last time as speaker. Because our markets are so big, companies simply conformed all of their aspects to the low-carbon fuel standard. It’s not a perfect system, because we’re in a globalized economy, and the federal government has a little bit of constipation in terms of getting things done. But … I’ve seen these things get adopted, both from a political perspective and more importantly from a market perspective, because we have so much market power.
BAS: The more thoughtful critics of this bill seem concerned that bot disclosure laws could violate free speech rights, especially if the law is too broad. Is that a concern of yours?
RH: Yeah. I’m a constitutional lawyer. My father was a constitutional lawyer. I’ve been in cases before the Supreme Court. Of course it’s a concern of mine. But I think some of the arguments I heard are a little over the top in terms of the constitutional protection. They misconstrue my point. My point is, you can say whatever the heck you want. I don’t want to control one bit of the content of what’s being said. Zero, zero, zero, zero, zero, zero.
All I want is for the person who has to hear the content to know it comes from a computer. To me, that’s a fraud element versus a free speech element. If this is coming from a computer, even if I as Mr. Hertzberg want five thousand computers out there, and I want to use whatever tools I want to use, at least they know it comes from that. It can say, “computer-generated by Mr. Hertzberg,” or whatever. But you are defrauding somebody when you try to create the false impression that I’m talking to a friend down the street when it’s not a friend down the street.
You do have a constitutional right to hide what you’re saying and to be anonymous. You don’t have a constitutional right to falsely represent that one thing is something else. That’s the difference. When the Federalists argued anonymously in The Federalist Papers and didn’t put their names down, fine. They’re anonymous people expressing their ideas. But they didn’t put their names down as somebody else’s name. That’s the difference.
You could also argue, “Well, you have pseudonyms, and people in order to protect themselves argue under a false name.” No problem either. It’s still a human being. But bots have a technological implication where you can create millions and millions of these things as we’ve seen and create the false impression that it’s a human being and have economic or political impact. That’s what I’m trying to get to.
BAS: I want to ask about something else you’re working on that could get back to online identity. You sponsored legislation enabling corporate share certificates via blockchain. Why is that important to you?
RH: One of the great challenges with government and technology is government is slow and deliberate. Thomas Jefferson would write letters to John Adams, and it took six weeks. They could think about stuff and whatever. Today, technology’s turning everything on its head.
Most of my colleagues don’t even know what blockchain is. They think it’s a fence around a swimming pool. Even the tech caucus people didn’t know what blockchain was. To the extent people even heard the term, they only hear it in the context of cryptocurrency and the like. It has extraordinary applications. Nine of Forbes’ 50 top fintech companies this year are blockchain startups. I’m trying to introduce a concept in California so we can start thinking about this as a tool.
I was initially going to do what Sweden does, do recordation of land titles. But the title companies went crazy thinking I was going to put them out of business. Then I went to do driver’s licenses, birth certificates, and death certificates on a distributed ledger, and the government said, “this is going to take 20 years” and blah, blah. So I tried to get something that is really simple, low-hanging fruit. Something about which people in Silicon Valley would say, “oh, this is neat. I’ll put my share certificates in a blockchain.” There was an issue in Delaware where they did something like this. I was trying to just introduce the concept, so I could stand on the floor of the California Senate, explain to the members what a blockchain was, and use the bill to introduce the idea. I’m going to introduce a blockchain bill next year and the year after and the year after, so we can continue to drill in and try to look at smart contracts. There are so many different applications.
The bottom line is, so much of government looks backwards. So much of government is about status quo. So much of government is informed by whoever is here in the halls of legislative power to tell you what they want to fight for, and they inform the thinking.
My role is, hopefully … to try to offer some vision and some willingness to take risks on bigger ideas and inform the future. We in California are in this great place where we think we’re this unbelievable leader. Well, we are in so many spaces, but we’re not in others. I’m trying to work on issues like blockchain, like bots, and other things that give people a little constipation and diarrhea at the same time. It moves the ball forward, man. That’s it.