By Kevin T. Greene | May 5, 2025
The closure of the State Department’s Counter Foreign Information Manipulation and Interference office, combined with cuts to other programs that study foreign influence operations, could leave the United States in the dark about what is happening in the information environment. Image via Wikimedia Commons
In mid-April, the State Department shuttered its Counter Foreign Information Manipulation and Interference office, which was tasked with fighting foreign influence campaigns. The move followed the earlier cancellation of the Defense Department’s Minerva Initiative, which supported research “on areas of strategic importance to the U.S. national security policy.”
Paired with broader proposed cuts to funding for the National Science Foundation (NSF), these recent actions put the United States in a precarious position: They hinder the country’s ability to study malign foreign influence operations, including attempts to undermine US interests and democratic elections around the world.
Damaging foreign influence efforts can thrive in today’s information landscape. The most recent version of the Online Political Influence Efforts Dataset, a collection of evidence of the covert use of social media to influence politics, documents more than 100 foreign influence operations targeting dozens of countries. The activities carried out in these operations are continually evolving. One tactic—increasingly used over the past several years—is the creation of malicious websites that appear to be genuine media outlets but instead push propaganda. This scheme has been used by both Russia—which mimicked news sites such as CNN and Fox News—and China, which has generated hundreds of domains that appear to be local news outlets.
Influence operations are also turning to artificial intelligence, both to generate content and as a novel means of spreading narratives at scale. Recently, a deepfake, allegedly created by Iran, depicted Israeli Defense Minister Yoav Gallant making statements critical of the United States; it was broadcast on Israeli television. The episode shows that influence operations can push AI-generated content directly into the mainstream. Further analysis from the American Sunlight Project, a nonprofit working to ensure citizens have access to trustworthy information, suggests that influence operations are also attempting to corrupt AI tools themselves: It found that a propaganda network is feeding pro-Russia content into large language model training datasets. This tactic offers influence campaigns a new way to inject propaganda into the global information network.
In the past year, foreign influence operations have targeted numerous elections around the globe, attempting to undermine US interests and erode trust in democratic institutions. While evidence about influence campaigns is growing, there are significant gaps in our understanding of their scale and whether they impact the behavior of individuals. To develop effective countermeasures and policy responses, experts need to do more research to fill these gaps. But the US government is hamstringing that research in a variety of ways.
Foreign interference during the “year of elections.” Last year, more than 70 countries around the world held elections. Over the past several months, I have led a team of students and researchers from Princeton University, Harvard University, and the University of Texas at Austin in compiling documented cases of foreign interference in each of the elections held in 2024. We define foreign efforts to interfere in elections as coordinated campaigns by a state or state-linked group to impact a democratic election in another country. To be classified as foreign interference, those efforts must go beyond routine diplomatic influence—through disrupting election infrastructure, undermining trust in the voting process, or attempting to alter political preferences or participation in the period leading up to the election. Thus far, the research team and I have compiled evidence of foreign actions directly targeting roughly a quarter of the elections carried out last year.
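To make the coding exercise concrete, the sketch below shows one way such cases could be recorded. It is a minimal, hypothetical schema in Python; the field names and the sample entry are illustrative assumptions, not the research team’s actual codebook.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterferenceCase:
    """One documented case of foreign election interference (illustrative schema only)."""
    target_country: str        # country whose 2024 election was targeted
    election_date: str         # ISO date of the election, e.g., "2024-06-01"
    suspected_sponsor: str     # e.g., "Russia-linked", "China-linked", or "unknown"
    tactics: List[str] = field(default_factory=list)  # e.g., fake news sites, deepfakes
    beyond_routine_diplomacy: bool = True              # must hold to count as interference
    attribution_confidence: str = "low"                # "low", "medium", or "high"

# Hypothetical sample entry -- not drawn from any real dataset record.
example = InterferenceCase(
    target_country="Example Country",
    election_date="2024-06-01",
    suspected_sponsor="unknown",
    tactics=["AI-generated video", "coordinated Telegram channels"],
    attribution_confidence="low",
)
```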
Beyond targeting specific political candidates or parties, several of these campaigns have actively aimed to undermine the United States and erode trust in democratic elections around the globe. In Georgia, a Russian-linked effort used Telegram and other online platforms to spread claims that the United States was organizing a coup in the lead-up to the election. Similar narratives appeared ahead of the election in the Solomon Islands, where both the Russia-linked Sputnik and the China-linked Global Times published articles alleging that the United States was plotting a coup.
Other efforts focused on undermining the legitimacy of democratic elections themselves. In Taiwan, China-linked efforts promoted claims of ballot stuffing, while state-linked media shared videos on TikTok alleging election rigging. In the days before Germany’s February 2025 election, a Russian-linked campaign spread fake videos attempting to cast doubt on the integrity of the vote. One video falsely claimed that the candidate of the Alternative for Germany (AfD)—a far-right, pro-Russia party—had been left off the ballot, while others purported to show ballots cast for the AfD being shredded.
Our team’s ongoing collection also underscores how hard it is to definitively determine the sponsors of disinformation attacks; for the most part, experts can reliably attribute only those influence operations that target a handful of countries, primarily in the geographic West. For instance, while South Africa’s 2024 election featured deepfake videos and AI-generated content claiming that the election was manipulated, the perpetrators who created and disseminated those videos remain unknown. Similar issues exist in Palau, a United States ally. Although Palau has been the target of Chinese efforts in the past, including an attempt to develop a media group in the country, researchers can for the most part only speculate about Chinese interference in the nation’s 2024 election.
Knowledge gaps. Beyond attribution of campaign actions, researchers currently lack fundamental knowledge about influence operations, including their reach and whether they impact individual behavior. For instance, while there have been efforts to estimate the breadth of Russia’s Internet Research Agency-linked content in Facebook and Twitter feeds, research has largely focused on a single campaign, at a single time, on a single platform. Experts have limited assessments that allow for comparisons over time or between influence actors; these types of comparisons are needed to craft effective interventions. Evidence suggests that influence operations are strategic, exploiting societal fractures by targeting their messages toward particular groups. An operation may have little overall reach but be highly effective at drawing engagement within specific segments of the population. At present, experts do not know if engagement patterns differ systematically across subsets of the population.
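A simple, hypothetical calculation illustrates why overall reach can understate targeted impact; all figures below are invented for illustration and do not describe any measured operation.

```python
# Hypothetical figures for a platform and a targeted community -- invented for
# illustration only, not measurements of any real influence operation.
total_audience = 10_000_000      # users on the platform
targeted_segment = 200_000       # members of the targeted community
engagements_overall = 12_000     # total engagements with the operation's content
engagements_in_segment = 9_000   # engagements coming from the targeted community

overall_rate = engagements_overall / total_audience       # 0.12% of the platform
segment_rate = engagements_in_segment / targeted_segment  # 4.5% of the targeted group

print(f"Overall engagement rate: {overall_rate:.2%}")
print(f"Engagement rate within the targeted segment: {segment_rate:.2%}")
```

In this sketch, the operation looks marginal at the platform level yet draws substantial engagement within the community it targets, which is exactly the kind of pattern current measurements cannot detect systematically.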
As with other topics related to the information environment, researchers often make assumptions about potential impacts of influence operations, but in relatively few cases do they have rigorous assessments of those impacts. There are notable exceptions, including studies that found no evidence that Russian influence efforts changed the outcome of the 2016 US presidential election. Experts need to move beyond macro-political outcomes or changes in self-reported attitudes, to examine how influence operations may contribute to mental health issues or increased social detachment. While research on the effectiveness of countermeasures in other areas of the information environment is growing, researchers know very little about their effectiveness against malign attempts to influence public attitudes.
For experts to craft appropriate responses through education, diplomacy, or other countermeasures, they need a deeper understanding of the scale and impacts of influence operations and the vulnerabilities they exploit. By quantifying the relative risks posed by particular actors and operations, these insights can help policymakers assess national security threats and identify which require a response and which can be left alone. That, in turn, can shape policies that target the influence activities likely to have the greatest effect. Further, as AI continues to advance, experts need rigorous means of evaluating how the technology affects the reach and effectiveness of influence operations.
There are legitimate concerns about the government’s role in addressing information operations and about the efficient spending of tax dollars. But remaining in the dark about what is happening in the information environment won’t further either goal. Investing in evidence-based research offers a cost-effective way to achieve better situational awareness of the information environment, steering funds toward interventions that are likely to work and away from those that may be counterproductive. Policymakers may ultimately find that some campaigns have little effect and that countermeasures are not worth the potential infringements on free speech. But making informed decisions requires careful and systematic research—and the resources to undertake it.