
Narrative warfare: How disinformation shapes the Israeli-Hamas conflict—and millions of minds

By Yusuf Can | November 27, 2023

Aftermath of the October 17, 2023 Al-Ahli Arab Hospital explosion. Credit: Tasnim News Agency, CC BY 4.0, via Wikimedia Commons

Earlier this month, the German television channel Welt claimed that a Palestinian Instagrammer had staged a deathbed scene in a hospital and subsequently posted a video showing them apparently unharmed amid the aftermath of a bombing in Gaza. The actual circumstances diverged significantly from this narrative, yet Welt propagated the "Pallywood" conspiracy theory (a portmanteau of "Palestine" and "Hollywood"), baselessly alleging that "amateur actors" were fabricating scenes in Gaza.

It turns out the individuals featured in the two videos were not the same person, and the hospital footage had been uploaded to social media several months before Hamas's October 7 attack on Israel. But the claim that the two were one and the same had already made the rounds and was also disseminated by Israel's official X (formerly known as Twitter) account, only to be removed later.

The Israeli-Palestinian conflict is a perennial, deeply entrenched issue that transcends mere geopolitics. It is a contest of narratives, a battle in which stories and perceptions wield as much power as physical force. In this intricate struggle, disinformation emerges as a potent weapon, skillfully wielded by those with ill intentions. A single tweet or a brief TikTok video cannot distill decades, if not centuries, of historical background, yet it can shape the minds of individuals, influence their reactions, and even sway policymaking. It is therefore necessary to examine how to address the rampant misleading information that is shaping the dynamics of human society.

Nobody is safe. Regardless of one's stance in this enduring conflict, dominant narratives are often handed down from generation to generation. Their remarkable ability to mold public opinion attests to their potency. Disinformation becomes a valuable instrument in cementing these entrenched perspectives, revealing how vulnerable individuals are when confronted with a barrage of misleading or outright fake information. Notably, even prominent figures with access to vast resources, including journalists, politicians, and, ironically, the heads of social media companies, can fall prey to its insidious influence.

In October, Elon Musk, the owner of X, shared an image featuring a map of Iran surrounded by more than two dozen American flags, purportedly symbolizing United States military bases. The caption on Musk's post mused, "Iran wants war. Look how close they put their country to our military bases." He also appended the graphic with the words, "Oh, the Irany." While Musk's true motivations remain elusive to anyone but himself, the incident is a notable example of misinformation, if not its more deliberate form, disinformation. The map purported to show 26 American military bases in Afghanistan, Pakistan, and Turkmenistan; in fact, those bases did not exist. After being duly corrected, Musk eventually acknowledged posting an inaccurate graphic. That the owner of X, a platform notorious for its struggle with disinformation, could not avoid falling into this trap himself is noteworthy.

Irrespective of his underlying motives, when a tech mogul like Musk takes part in spreading disinformation, it has a consequential impact on which narratives emerge victorious. It is one thing when a social media user with a handful of followers shares false information; when one of the most influential individuals on the planet does it, it shapes millions of minds around the globe. The same issue of magnitude applies to other actors as well, including the mainstream media.


The episode in which Welt spread the Pallywood conspiracy theory is a stark reminder that media organizations and governments remain susceptible to disinformation despite their substantial resources. These powerful entities play a crucial role in shaping narratives, whether through the unintentional spread of false information or the intentional manipulation of stories. This dual vulnerability, arising from both susceptibility to misinformation and purposeful narrative shaping, highlights the intricate landscape in which these actors operate. When they are perceived to employ disinformation to advance a preferred narrative, it not only undermines their credibility but also diminishes the public's trust in the information they provide. That erosion of trust creates a void that malicious actors can exploit, often to the detriment of innocent individuals.

Beyond social media. Since Hamas's attack last month, it has become nearly impossible to avoid images of destruction and pain. Online, it was already difficult to sift through the bombardment of disinformation, recycled footage from past conflicts, images from video games, and contradictory narratives to determine what is actually happening on the ground. Now, generative artificial intelligence tools are adding a new layer of complexity to an already growing problem with synthetic media. AI-generated images, videos, and audio related to the ongoing conflict are running rampant. Fake images of dead children designed to trigger emotions, hate-fueled memes targeting Jewish people, and deliberately manufactured efforts to mislead the public can be found in many corners of social media platforms.

For decades, tech moguls have promised a future in which the internet and artificial intelligence would enhance the quality of human life. If there were ever a moment when the overstated promises of such technologies could be put to the test, the Israel-Hamas conflict is one of them. Without a doubt, these ever-evolving technologies bring myriad benefits to human life. But the creation and dissemination of disinformation clearly expose the limitations, failures, and potential harm of tech utopianism.

Disinformation and the risk of apathy. Disinformation about Gaza can include incorrect details about the nature of the crisis, the affected areas, and the actions that need to be taken, leading people to make uninformed decisions. Emergency responders, for example, rely on accurate information to plan and execute efficient and effective responses, such as delivering food. Disinformation can divert resources to areas that do not need immediate assistance or delay the deployment of resources to areas that urgently require help. Such delays can have serious consequences, especially when time is of the essence.

But the dangers of disinformation are manifold and can have even more profound and long-term consequences.

Disinformation doesn't simply get people to believe a false thing is true; it also convinces them that a true thing is false. That is the contagion disinformation spreads into the atmosphere. When it comes in the form of a conspiracy theory, it erodes not only a person's knowledge base but also their trust in other people to tell the truth. Consider the bombing of a hospital during a conflict: Determining who is to blame for the explosion has real legal and humanitarian consequences around the globe, and it takes time to examine the evidence and establish the facts. But in a society where millions of people can access a flood of unvetted information within seconds, ill-intentioned actors exploit this confusing and convoluted influx to push public opinion toward trusting, or losing trust in, a particular actor.


In other words, one of the riskiest aspects of disinformation is that it can make individuals cynical, which plays right into the ill-intentioned actor's hands. Such actors can convince people to believe their narrative; even when they cannot, they can make people question the narrative they had believed so far and eventually demoralize them. Finally, they can make people feel that even trying to solve a problem is futile, leaving individuals apathetic.

Moving forward. The use of digital technology in politics has a relatively short history, although deception in warfare (and influencing a country's politics is a form of warfare) goes back a long time. Yet the scale of deception enabled by today's digital technology is dramatically more effective and drastically harder to control. The instances involving influential figures like Elon Musk and media organizations like Welt underscore how vulnerable even those with substantial resources are to the insidious influence of false information. Most other internet consumers are simply easy prey.

As technology, including artificial intelligence, intertwines with the Gaza conflict, the promise of a tech utopia is tested against the stark reality of disinformation’s harmful consequences.

The ongoing crisis in the Middle East once again exposes the limitations and potential harms of the tech world's overstated promises. The rapid spread of disinformation on social media platforms erodes knowledge and undermines trust in information sources, including governments. And the dangers of disinformation extend beyond false beliefs; it can breed cynicism, demoralizing individuals and fostering apathy toward problem-solving.

The critical need for global efforts to address pervasive disinformation hardly requires restating. In the context of emergency response, the impact is tangible, diverting resources and impeding timely assistance. Other effects are more general and diffuse. The current state of technological development, coupled with a lack of regulation and international consensus, exacerbates the spread of unreliable information, conspiracy theories, and real-life harm. As humans grapple with the multifaceted challenges of today's world, from climate change to great power competition, steps have been taken to address rampant disinformation, but those efforts are still in their early stages.

It may be impossible to completely counter rampant disinformation in real time, given how the internet and social media platforms are structured. Even so, the global community needs a consensus on how to approach this stark threat before it can collectively decide on next steps to at least limit its most malign impacts.

