INTELBRIEF

October 26, 2023

IntelBrief: AI-Powered Disinformation in the Israel-Hamas War and Beyond


Bottom Line Up Front

  • The ongoing war between Israel and Hamas has brought with it a slew of false or misleading information — including AI-powered imagery.
  • Disinformation and other forms of manipulated information have proliferated online, compounded by fake media accounts, the reuse of old imagery, deliberate disinformation campaigns by state-backed/aligned actors, and other malign information efforts.
  • An overall increase in AI-powered disinformation has been noted since the public gained widespread access to large language models (LLMs) and other generative AI tools in late 2022, with malicious actors seeking to run their influence campaigns in a more efficient and less resource-intensive manner.
  • AI-powered disinformation is likely to affect several upcoming elections in 2024, including in the U.S., where LLMs can generate false or misleading content quickly and at scale, with potentially significant offline consequences if that content is used for highly vitriolic narratives or calls to violent action.

The ongoing war between Israel and Hamas, which broke out on October 7 following Hamas’ cross-border assault in which over 1,400 people were killed and over 200 taken hostage, has been accompanied by a raft of false or misleading information, including AI-powered imagery. As the fog of war complicates and delays independent verification, and the demand for information on social media outpaces it, disinformation and other forms of manipulated information have proliferated online, at times reinforced by fake media accounts, the reuse of old imagery, deliberate disinformation campaigns by state-backed/aligned actors, and other malign information manipulation efforts. AI-powered imagery, including deepfakes, is especially insidious in disinformation campaigns because online users tend to trust what they can see, for example, in a seemingly real video. Even prior to the October 7 incursion and subsequent war, there had been a noticeable increase in AI-powered disinformation, which enables malign actors to produce influence campaigns more efficiently and with fewer resources. This has wide-reaching security implications for several upcoming elections in 2024, including the U.S. general election, as well as for future conflicts, great power competition, and geopolitics more broadly.

In today’s interconnected and digital world, disinformation, misinformation, and other forms of information manipulation are prominent features of war and conflict. The Israel-Hamas war has been no exception, and analysts, experts, and practitioners have warned of the high volume of false or misleading information proliferating on online platforms such as Telegram. As in Russia’s brutal war against Ukraine, unverified information appeared first on Telegram soon after Hamas’ attack and was subsequently shared, and rapidly spread, on other platforms. Online users have reused old or outdated imagery and videos from conflicts in different parts of the world, including the Syrian civil war, and portrayed the content as if it originated from the recent Israel-Hamas conflict. Accounts on X (formerly Twitter) have falsely claimed to be associated with traditional media outlets, including the BBC, the Jerusalem Post, and Al-Jazeera, in an attempt to capitalize on those outlets’ legitimacy and further proliferate false content. For example, a fabricated clip was widely shared on social media purportedly showing a BBC news story claiming that Hamas’ weapons had been supplied by Ukraine; the clip was amplified by Russian-backed/aligned networks. Moreover, a recent investigation by the Atlantic Council’s Digital Forensic Research Lab illustrates how accounts prominent in sharing false information about the Israel-Hamas conflict quickly increased their follower and subscriber counts.

The use of AI-generated images and videos is a concerning trend, particularly as different actors, state and nonstate, attempt to control the narrative of the current war and promote their own agendas and priorities in the geopolitical sphere. Last week, a haunting photo of a distraught baby seemingly crawling through rubble in Gaza spread online and was later analyzed and confirmed to be an AI-generated image: one of the baby’s hands appears to have too many fingers, a tell-tale sign of AI generation. A February 2023 deepfake video of U.S. President Joe Biden announcing a military draft reemerged on October 14, without any context, on TikTok, where it garnered over 200,000 likes and 11,000 comments and subsequently spread to other platforms. The false or misleading information has frequently been accompanied by graphic, violent, and traumatizing images and videos, which are intended to manipulate viewers’ emotions and humanity, further stoke tensions and divisions, and sometimes call for more acts of violence. The rapid spread of disinformation in the context of the Israel-Hamas war can have wide-ranging security implications, further inflaming an already deadly conflict, producing government miscalculations, and inspiring unrest or acts of violence in the region and beyond.

An overall increase in AI-powered disinformation has been noted since the public gained widespread access to large language models (LLMs) and other generative AI tools in late 2022. Malicious actors in the information space have seized on these tools to make their influence campaigns more efficient and less resource-intensive. Notably, the war in Ukraine has been accompanied by an onslaught of Russian deepfakes seeking to discourage Ukrainians from resisting the Russian invasion or otherwise to confuse international audiences about events in Ukraine. China-aligned actors have also been spreading deepfakes, seizing on tense U.S.-China relations to bolster China’s strategic interests. In the United States, political messaging and campaigning are already changing due to generative AI. The Republican National Committee, for example, released an AI-generated ad depicting an apocalyptic scenario resulting from Joe Biden's re-election, which spread widely after it was posted. In Poland, the main opposition party, PO, published a deepfake of Prime Minister Mateusz Morawiecki’s voice criticizing the internal politics of the ruling party, PiS. A deepfake audio recording of the leader of the Progressive Slovakia party, purportedly capturing him plotting to rig the election, spread just days before polls opened in Slovakia. Those with (geo)political agendas, however, are not the only ones misusing generative AI. Profit-driven trolls use LLMs to easily populate content farms with fake news designed to generate clicks, highlighting the economic incentives of misusing generative AI. The FBI has also alerted the general public to an increase in extortion cases involving deepfake explicit images and videos, i.e., “sextortion.”

The three most prominent AI companies (Alphabet, Anthropic, and OpenAI) have been making strides in alignment, the process of steering AI systems to perform as intended by humans in terms of goals, ethics, and preferences, through Reinforcement Learning from Human Feedback (RLHF). Despite these efforts, misuse of generative AI will continue to influence the information landscape. This has important implications for the 2024 U.S. general election, where LLMs are likely to be used to produce false or misleading content quickly and in large volumes, which may have significant offline implications if used for highly vitriolic narratives or calls to violent action. Several other high-stakes elections are also scheduled for 2024, including in Taiwan, India, Indonesia, and South Africa, as well as elections to the European Parliament. Additionally, geopolitical conflicts and tensions will likely see less human-generated disinformation and more AI-generated content in the future, which may allow automated bot accounts or cyborg accounts to operate more efficiently. Meanwhile, law enforcement organizations have raised the alarm about the potential use of deepfakes in serious crimes, such as the creation of non-consensual pornography. Content regulation is likely to become an important and contested issue as the negative impacts of poorly regulated generative AI tools become increasingly apparent to citizens.
