
An interview about the 2024 election with Harper Reed, chief technology officer for Obama 2012

By John Mecklin | September 5, 2024

Nine months before the start of the Civil War, painter Frederic Edwin Church released his painting “Twilight in the Wilderness” to great fanfare in New York City. At a time before modern entertainment, the premiere of this five-foot-wide painting was a major media event. Ever since, historians have looked to it as a portent of events to come. According to the Cleveland Museum of Art, "the painting's subject can be perceived as symbolically evoking the coming conflagration; indeed, one scholar has memorably described the painting as a 'natural apocalypse.'" Image courtesy of The Cleveland Museum of Art.


If you Google “Harper Reed,” the first item that search returns is a link to his website[1] and the headline “Harper Reed ★ Probably one of the coolest guys ever.” And Reed has undeniably done many interesting things in his 46-plus years on the planet. They are summarized this way on his home page:

Harper Reed!
Technologist
Entrepreneur
Hacker

More specifically, Reed has been the chief technology officer for Threadless, an online marketplace that has its own level of cool, selling apparel, accessories, and home décor and “constantly searching for more products for our artists to use as canvases for their best, weirdest, nerdiest, most beautiful art.” (Disclosure: The Bulletin sells t-shirts, coffee cups, and other merch on Threadless.) He was also the founder and CEO of Modest, a retail solutions startup acquired by PayPal. He has had many interesting side affiliations, from the University of Southern California’s Annenberg Innovation Lab and the MIT Media Lab to the Royal United Services Institute. He claims, as well, to have opened for the rapper Sir Mix-a-Lot as a juggling act.[2]

But Reed is probably best-known for his work on Barack Obama’s successful 2012 presidential campaign, where he served as chief technology officer and in many ways ushered in the Tech Age of American politics. He and I spoke late in May about the innovations that he and a large contingent of Silicon Valley technologists brought to the Obama campaign, and how subsequent presidential campaigns have borrowed from and built on those innovations. Our conversation was fascinating and often humorous, but subsequently overtaken by events. The interview occurred before President Biden’s shocking debate performance in June and his subsequent withdrawal from the presidential race, the attempted assassination of former President Trump in July, and the Republican and Democratic national conventions. Just the same, Reed’s overview of the technological landscape offers important context for those involved in the last few months of the 2024 election, whether they be candidates, political operatives, or, most important, informed voters.

This interview and many of the other articles in this election and democracy issue will be updated on the Bulletin’s open website[3] in September, in October, and in November as needed to take into account fast-changing events in the US presidential campaign. The interview has been edited and condensed for clarity.

 

John Mecklin: I guess the first question I have to ask you is: Are you working on any political campaigns right now?

Harper Reed: I am not.

Mecklin: But you have in the past. In 2012, you were chief technology officer for the Obama campaign. And I just wanted to hear you talk a little bit about some of the innovations you and your team brought to that campaign.

Reed: I think the primary thing that we brought was a very broad understanding of technology and what we could do with technology. With a campaign, there’s no end to ideas. And campaigns are one of the few places that I have worked where they are very comfortable with metrics. Most businesses say they’re comfortable with metrics. But I think because so much is at stake with an election, it’s just a different level of comfort. And so they’re very open to experimentation, to trying things out—as long as the metrics align, or as long as they’re metrics-based. Jim Messina, the campaign manager, is famous for saying, and told me constantly, “If there’s not a metric, it doesn’t exist.”

What that meant is, when you bring a bunch of tech people in there, the campaign people could say, for instance, “We’re going to do X,Y, and Z.” And we’re like, “Okay, we can do it so that it’s much quicker, or much more effective.” And then they would say, “Great, we’re going to measure it and make sure it’s more effective.”

I think oftentimes in business—and in tech—things are a lot more performative and a lot less purely metric-driven. And so bringing a whole bunch of high-profile tech people in from Silicon Valley—they kind of flourished in that environment. And then we had a lot of really good expertise, obviously, from the political side that helped guide and define the products. And then we were just able to build very, very, very fast.

Mecklin: And what were some of the products? Because ordinary people won’t know what you’re talking about.

Reed: In the US, especially at that time, the real key within elections was about turnout. It wasn’t necessarily about persuasion. So most, if not all, of the technology we built was about getting people to turn out to vote. A huge part of a campaign from the Dem[ocratic] side, where they send millions of volunteers to go knock on millions of doors, is just to remind people to vote. So our job was to make that process as effective and quick as possible. What we like to say is that we were a force multiplier. And to be honest, if people know that they’re using our technology, we’ve done a bad job; we want to blend into the background.

[Campaigns] are organized much like an emergency response, with a very top-down [approach] that has a person that’s developing the strategy on the top, and then they have volunteers on the ground, doing the work and knocking on doors and whatnot. So we just needed to make sure that the technology we built allowed people to do that. We built things like a call tool that allowed the volunteers to make calls more effectively, and various door-knocking tools that allowed people to knock on the right doors. Since it was all about turnout, we didn’t knock on any Republican doors. The point is to just remind the Democrats to vote.

Now obviously there are examples of people doing persuasion during that time, but for the most part, our focus was on turnout.

Mecklin: So obviously, you did a good job. Obama won.

Reed: I think Obama did a great job. We did a fine job, but the one thing I learned is that the candidate matters more than the tech. We were good at bolstering Obama, but Obama was the one that really did the job to win. He could have won without us; he just wouldn’t have won as well.

Mecklin: Okay, then, in 2016 Donald Trump happens. Did you happen to notice or become aware that any of the techniques—things you and your team had worked on for Obama—were borrowed or continued into that campaign?

Reed: That’s a good question. And the answer is a little nuanced, in that a lot of what we invented—almost for lack of a better word, just because no one had done it before, so I’ll say “invented”—for 2012 was used somewhat [to spread] misinformation by lots of people before the election in 2016. A lot of it was about just how do you message to millions of people as fast as possible using social media—using Twitter, using Facebook, and whatnot. For instance, Cambridge Analytica[4] obviously got a lot of press; a lot of [it] was continuation of the work that we did in 2012. And I think of it as not so much that they borrowed as much as that’s just the table stakes. The table stakes were that you would go and work to communicate with as many people as possible via social media.

The big difference was we did it so early; the social media platforms didn’t yet have a policy around whether or not it is appropriate to do so. They then developed that process. So for instance, Facebook banned access to the data that we used, and that’s one of the reasons why Cambridge Analytica had to use a third-party to get that data, and so on and so forth. How I like to talk about this is: The core technology was largely the same; there were obviously innovations that the Trump campaign did specifically around some psychometrics and ad targeting and Facebook targeting and whatnot.

But the real thing to remember about campaigns is to reference a quotation from [Democratic political operative] David Axelrod, which was, “You’re never as dumb as they say you are when you lose; you’re never as smart as they say you are when you win.” Hillary lost by such a small amount that if she had won, we would have been talking about app strategies instead of Facebook strategies, and would be talking about all this other technology that I think was very good. But the candidate obviously was not as good, and the American people did not vote for her. So I kind of want to decouple that.

But with that said, the real thing that continued [after the 2012 Obama campaign] was use of social media to deploy targeted communications to people that they thought would do something. In our case, it was turnout; I think in [Trump’s] case, a little bit, it was persuasion. You know, maybe in some cases, negative persuasion, trying to get people to not vote, or try and get people to do something else.

Mecklin: So, I’ll ask you to get your crystal ball out here. What do you foresee this year in this presidential election, technology-wise, that is different or that is evolving from the past couple?

Reed: My prediction is a little bit of a reflection on the past, which I think all the best predictions are. One of the main things that happened in 2012 that was very different than before was that Obama 2012 hired all of these people onto the campaign. And I don’t recall the number, how much money was spent on technology. You can just look at FEC [Federal Election Commission] reports if that’s interesting to you. But hundreds of millions of dollars were spent on technology, whether it’s just ad tech, our team, data work, analytics, etc. It was a lot of money. And a lot of that money was spent on employees who worked for the campaign. That’s very different than what we have seen since then.

So Hillary hired a big tech team, and that didn’t work. Well, Hillary didn’t work, so therefore that didn’t work. Trump had lots of contractors, not a big tech team. Biden in 2020 did not have a big tech team and used lots of contractors and vendors. And my guess is that we will see the same thing around the vendors and contractors—that there won’t be a big team [of employees]. That means that there’s going to be a lot of people who are acting on behalf of the campaign, people hiring them to use their tool or paying them to do door-knocking, or what have you, to kind of help broaden that out. There’s also a lot of advocacy organizations on both sides that are going to be doing work that normally in my experience would have been on the campaign.

I don’t have necessarily a prediction about how data will be used. But I do think it’s going to be a little bit of “by any means necessary” on both sides. And that’s going to appear by using lots of different vendors and lots of different contractors to do pretty much anything. The benefit here is that you have a huge diversity of opportunities. You don’t have to have that expertise internally. So for instance, someone could say: “I’m very good at figuring out where to place the TV spots so they actually matter the most for the outcomes that are important, which is voting.” You can hire that vendor, instead of having that internally on the campaign, whereas, in 2012, that person worked in the campaign. And what this means is that you can hire five of them, probably, for the same costs.

But the complicated part, I think, is that this also could lead to some of these vendors acting a bit differently than they would if they actually worked for the president. Because they are not beholden to the same requirements that maybe they would be if they worked for President Biden, or even former President Trump. I’m not saying they would act unethically. I’m just saying that they have a little bit less oversight. For instance, whenever we were launching something, I spent many hours on the phone with a lawyer making sure that not only was [it] legal from an FEC standpoint and legal from an election or legal standpoint, but was very, very much within the scope of what the Obama campaign wanted to do from an ethics standpoint. When your collection is just vendors, that’s much harder to control.

Mecklin: I would guess so. There’s been a lot of hand-waving about artificial intelligence. There’s a story in The New York Times today about how what was expected—that AI would be messing with the presidential election—doesn’t appear to be happening. I just wonder what you see. Obviously, large language models are more capable than what had been before. Are they good enough yet to have a real impact on this election, or not?

Reed: I want to add a couple things to the last thing I said. I think that these campaigns will use any tool that will get them the votes, and since that is so aggressively metric-based, they will have good intelligence on what tool that is. And with AI right now, I think it’s a viable player for a lot of these tools. I remember in 2012, we didn’t use AI, but we used an awful lot of analytics and some of the predictive analytics. And we did do a lot of automation.

So for instance, we started doing automated polling, using some technology to call people and ask a few questions without having a person involved. That allowed us to get a very real-time view of what’s happening in the field, so that we could push out and deploy our people better. And I know that that’s something that’s happening quite a bit now. And I also know that the Biden campaign—I don’t know much about Trump campaign, just what I’ve seen in the news—the Biden campaign seems to be really leaning into doing some really neat tech stuff, through vendors and whatnot. And if I was in charge of finding the vendors, I wouldn’t be so worried about whether they are using AI or not using AI; it would just be about results.

And then we can just kind of follow that conclusion a bit, and that means that if AI works really well, they will use it. I personally think AI works really well. In the prediction world, I guess that it will be used quite a bit for anything from building polls to helping execute those polls. It could be used to generate content very quickly, because a lot of campaigns react to things very fast. It could be about generating responsive content; you could imagine someone on Twitter says something, and someone wants to respond very fast with a quip or something that is within the guideline of the campaign.

Harper Reed. Image courtesy Harper Reed

When I’m coaching a business, I coach the business to think of AI as an intern; it’s someone who’s very, very, very smart, but just doesn’t have any experience and is [therefore] kind of dumb. And so it’s this person that you can have do almost anything—but just, it might be wrong. If you have an intern helping you out, you would have a close eye on them, but you wouldn’t not give them work. You might not give them the most important work, but you give them a lot of work. You just would check it a couple of times. [Campaigns] are very good at using interns. And if they’re able to just think of AI the same way as they think about interns, I think they can get very far doing a lot of stuff with AI. Good and bad, from the constituents’ standpoint.

Mecklin: One of the Bulletin’s big concerns is internet-based disinformation and misinformation, as a multiplier of all sorts of evil things in the world. And, obviously, mis- and dis-information have been and will be part of political campaigns. All of the experts I’ve talked to sort of throw up their hands about what you can do about mis- and dis- information. Do you have any words of wisdom about how to at least start to get hands around the problem?

Reed: I think one of the main issues with generated content from AI is that it’s very difficult to figure out if it is AI-generated in the first place. And so maybe the scope of disinformation is inclusive of AI, but it’s not exclusive to AI. And what I mean by that is: What is the process to find and disarm misinformation and disinformation, just generally—human-generated, AI-generated, bot-farms-in-Somalia-generated? How does a process do that? Because I don’t think thus far that I have seen an effective way to just solve the AI problem.

And I worry about the many people who have fallen on the wrong side of AI plagiarism tests, where they’ll throw an essay through [some software], and a teacher will say, “This has been copied from AI”—and in fact it was written by the student. So if we just go to the logical conclusion that you can’t tell if it’s AI or not, that means that we have to have the same tools that we would for normal misinformation and disinformation; we’re just going to have to have more of them.

And in the same way that AI may be able to make those misinformation and disinformation processes more efficient and scale them up, I think that we’re going to have to use AI to build the antidote. And I do say antidote, because I don’t think that we can stop it before it starts. I think the cat is so far out of the bag—whether you’re using open APIs [application programming interfaces], whether you’re running these models locally, or building your own—I don’t think we can stop it before it starts. So I think this is something that we’re going to have to be reactive to. And I think we’re just going to have to do it in the same way we would do anything else. And I worry that that sets us up for a pretty gnarly arms race, to be honest. And I worry that it also sets us up to be cynical about the opportunity, maybe just lean into the bad parts, if that makes sense.

Mecklin: The idea of a somewhat rapid response to disinformation, to try to defang it somehow—I guess what you’re saying is that people should be thinking about employing artificial intelligence, to have the scale to be doing that?

Reed: We have to solve it by whatever means we have in front of us. And at this moment, if you’re not thinking about AI as a solution for some of these problems, I think we’re going to be in trouble. I just don’t think we have enough people.

Mecklin: First of all, thank you so much; I’ve really enjoyed the conversation. I have a sort of a last philosophical question that will let you go on and on at whatever length you wish. The big debate that the Bulletin has been writing about for a long time is whether artificial intelligence is going to reach AGI—artificial general intelligence—take over the world and make us all into pets or kill us all. Obviously, there’s a lot of argumentation on all sides of this. I was just interested in your view of the advance of artificial intelligence and how to make it so it doesn’t do horrible things.

Reed: I completely buy many of the doomsday theories about AI, both as a sci-fi fan and as someone who just watches and kind of can guess the logical conclusions. I’m not so pearl-clutching where I think that we need to protect ourselves from that right now. I think that maybe in the future, we’ll have some existential risk due to a consciousness we don’t understand. But right now, I don’t think we’re near that.

I don’t know what kind of timeframe; I don’t think it’s necessarily helpful to say: “Oh, maybe in 100 years, five years, two weeks, whatever.” Because I don’t think it’s in two weeks, I don’t think it’s in five years. I do think, though, that the risk we have in front of us, happening now, is maybe just as destructive and probably much easier to talk about, which is the centralization of power from all of these AIs in a small number of people. Which is, I think, almost even worse than the centralization of power that the US/Silicon Valley already has around tech. That, plus what I think is going to be a really big destabilizing event over the next two to five years, starting today, which is that a lot of people are going to be out of jobs with no real way to get their jobs back, because their job has been replaced by an AI.

I don’t mean an AI as a robot agent or something sitting in their chair doing their work. I was just coaching a [tech company] founder yesterday; they were talking about hiring a copy editor to get started on doing a bunch of copy and social media and whatnot for their company. And then they just did it with ChatGPT. That doesn’t mean they won’t hire a copy editor to close out. But that means that if [the job is] broken up into 100 pieces, they got all the way to 80 percent before needing to actually hire someone. That’s a number of people who don’t have jobs, just from this one startup.

I think we have time before this comes to fruition, and the canaries in the coal mines I think we should be watching for are: How do the big accounting firms and consulting firms handle this? Because most companies don’t have a how-much-revenue-per-employee number or metric that they follow. But the big accounting firms have a very good understanding of that, and the big consulting firms have a very clear understanding of that. Their hiring is dictated by that, their promotions are dictated by that, the creation of their offices—where they’re opening, closing, etcetera—are dictated by that.

And as we see these huge companies with hundreds of thousands of employees start doing layoffs and changing how they’re working, I think that’s where we can really see how it’s going to affect the rest of the knowledge workers within the US, the West, and the rest of the world. I think that’s going to happen much sooner than we’ll have AGI, and I’m much more worried about what happens when you have hundreds and hundreds and hundreds of thousands, if not millions, of knowledge workers suddenly out of a job with no chance to get a new job. What happens then, versus AGI? To be honest, AGI will happen, maybe. Maybe they’ll take over our brains. Maybe I’ll become, like, some internet router in the sky in 100 years. But I’m more worried about what happens when you have a million people in the Midwest who don’t have jobs and can’t get food.

 

Endnotes
[1] See: https://harperreed.com/

[2] See: https://harperreed.com/about/

[3] See: https://thebulletin.org/#navbar-brand

[4] For an overview of the Cambridge Analytica scandal, see: https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html

 


