INTELBRIEF

December 7, 2023

IntelBrief: U.S.-EU Content Moderation Divide Crystallizes with Hamas-Israel Conflict


Bottom Line Up Front

  • The onslaught of disinformation related to the Hamas-Israel conflict has highlighted the distinct approaches of the European Union (EU) and the United States regarding content moderation on social media platforms.
  • The EU’s landmark Digital Services Act, which came into force for major online platforms in August, may aid the European Commission in combatting disinformation related to the conflict by keeping platforms accountable for illegal and harmful content.
  • The U.S. tradition of affording greater protections to free speech has manifested with less stringent content moderation compared to Europe amid the uptick in conflict-related mis- and disinformation.
  • While they attempt to shield their own populations from foreign information operations, China and Russia are accused of actively sowing division among Western audiences over the Middle East conflict.

Since the October 7 assault by Hamas on Israel, disinformation regarding the conflict, including AI-powered imagery, has rapidly spread and created further tensions and confusion. Distinct responses from the United States and European Union (EU) to this uptick in false or misleading narratives demonstrate their different approaches to moderating content on social media platforms. While U.S. tech giants, operating in a country where stringent constitutional free speech protections more severely limit the government’s ability to intervene, have primarily taken conservative approaches to policing hate speech, the EU has been more stringent in combatting hate speech as well as disinformation and misleading content online.

The landmark Digital Services Act (DSA) of the EU required that 17 designated “very large online platforms” (VLOPs) as well as two “very large online search engines” (VLOSEs) comply with its regulations beginning in August. These categories target sites with at least 45 million monthly active users and include familiar names such as Google, Facebook, TikTok, Twitter (now “X”), YouTube, and Alibaba. Beginning in February, all platforms will be subject to DSA regulations. This sweeping EU law seeks to regulate online intermediaries and platforms, specifically focusing on ensuring users’ safety, protecting fundamental rights, and creating a fairer and more open ecosystem of online platforms and intermediaries. As part of the new regulations, the EU deploys “trusted flaggers” – approved based on proven expertise as well as independence from the targeted tech firms – to report illegal content online. Targeted companies are required to create mechanisms for users to flag such content as well, and will have to respond swiftly to citations from both sources. The law also requires companies to undertake annual risk assessments regarding illegal content and disinformation and, conversely, how efforts to curb this problematic content impact free speech. Particularly impactful, the DSA applies to all providers offering their services to EU citizens regardless of where they are headquartered, meaning U.S. tech firms will also need to comply.

Last month, the European Commission commenced its second-ever investigation into VLOPs for disinformation under the DSA, demanding that Meta – which owns Facebook and Instagram – provide more information within one week about how its platforms are approaching the significant increase in disinformation and illegal content related to the Hamas-Israel conflict. The Commission made a similar demand of TikTok, though it did not reference the conflict specifically, and threatened both firms with fines for non-compliance. Last week, the Commission made a similar demand of X, which is already under investigation regarding its compliance with the DSA. Under the DSA, VLOPs that fail to comply could be fined up to six percent of their annual global revenue. However, the strength of DSA enforcement has yet to be tested.

The crackdown by the EU on harmful content stands in stark contrast with the current U.S. stance on social media platforms’ content moderation practices. While some U.S. representatives have called on technology platforms to do more to remove harmful or misleading content regarding the war, no U.S. government efforts have been initiated to bring greater accountability to platforms for false or hateful content related to the Hamas-Israel conflict. U.S. law – specifically, Section 230 of the Communications Decency Act – stipulates that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This clear division between the United States and the EU on content moderation is closely related to how each society views free speech with respect to the government’s responsibility to prevent incitement to violence, the spread of falsehoods, and other issues germane to a healthy information ecosystem. The United States has historically proven far more reluctant than most countries to curb free speech, and Section 230 has given social media companies broad insulation from accountability for the content users post on their platforms. Europe, by contrast, has seen greater value in imposing limits on free speech, as exemplified by Germany’s ban on Holocaust denial and prohibitions on other forms of hate speech in many EU countries.

Meanwhile, China and Russia have largely taken a two-pronged position on content moderation. While both have adopted highly restrictive measures to prevent foreign disinformation from influencing their own populations, they continue to promote and spread false and misleading content elsewhere when it serves their strategic interests. For example, TikTok, owned by Chinese technology company ByteDance, has been accused of pushing the Chinese position on the Middle East conflict on its platform. Islamophobic content that originated on the Western far-right has also proliferated on WeChat, a popular Chinese social media platform widely used among Chinese diaspora communities in the West. In the wake of October 7, Kremlin-linked Facebook accounts have reportedly ramped up their output by nearly 400 percent, with some accounts spreading false claims that Hamas terrorists are using NATO weapons to attack Israel and that British instructors trained Hamas militants. The French foreign affairs ministry has also accused a Russian-affiliated network of social media bots of amplifying images of antisemitic Star of David graffiti painted on buildings in Paris. The conflict in Gaza provides an opportunity for Russian President Vladimir Putin to foment discord in the West through this type of targeted social media activity – a long-standing tactic the Russian leader has used to catalyze real-world tensions, weaken Western democracies, and advance his geopolitical goals. The conflict also provides a convenient distraction from Russia’s war in Ukraine, diverting public attention from the Ukrainian theatre as some Western leaders and citizens – Slovakian Prime Minister Robert Fico among them – question the extent of support for Ukraine during a time of economic uncertainty.
