IntelBrief: Regulating Content in an Age of Digital Terrorism 


Bottom Line Up Front

  • Over the past decade, there has been a proliferation of terrorist propaganda on the Internet aimed at attracting potential recruits while striking fear in terrorists’ adversaries.
  • The aftermath of the Christchurch attack demonstrates how democracies have struggled with the implications—legal, moral, social—of removing and banning violent content.
  • Australia is just one example of a democratic government pushing to make progress against online hate and extremism.
  • Major online platforms still feature violent content, and enforcement is inconsistent and sometimes seems arbitrary rather than targeted.


Over the past decade, there has been a proliferation of terrorist propaganda on the Internet aimed at attracting potential recruits while striking fear in terrorists’ adversaries. Governments have long been concerned about the spread of violent extremist ideology, imagery, and detailed instructions on topics like bombmaking and target selection. Well before the digital era, books, images, and other publications were banned for similar reasons. But in the contemporary era, the Internet has accelerated the radicalization process. The March 15, 2019 attacks against two mosques in Christchurch, New Zealand, in which 51 people were murdered by a single gunman, are only the latest and most high-profile example of terrorism being live-streamed to millions of viewers around the world.

The aftermath of the Christchurch attack demonstrates how democracies have struggled with the implications—legal, moral, social—of removing and banning violent content. There have also been a range of technical challenges to removing content, as machine learning algorithms need to be synchronized with takedown efforts. Australia has gone further than most governments, harnessing the power of the state through its eSafety Commissioner to regulate violent extremist content on Australian servers and internet service providers (ISPs). Since Christchurch, the commission has been tasked with preventing images such as those of the mosque attacks from circulating online and with compelling host companies to remove such content where it does appear. The commission is also responsible for countering cyber-bullying and sexual exploitation.

The problem Australia faces as it attempts to regulate content is that effective application of the law requires context and nuance, neither of which is scalable or easily automated at present. An estimated 300 hours of video are uploaded to YouTube every minute, and an estimated 5 billion YouTube videos are watched every day. YouTube is also available in 76 languages, further compounding the spread of violent content. Australia’s commission relies on feedback from its community of users to help determine what constitutes threatening or violent content, an imperfect approach, since what one user deems a threat may not be defined as such by another. In Australia, the problem is not so much a lack of authority—though overreach is always a legitimate concern in these scenarios—but rather one of capability and competency. The commission has shut down some sites while allowing others back online, but it remains understaffed and inundated with content to scrutinize.

Australia is not alone in trying to strike a balance between promoting freedom of expression and regulating content on hateful websites, particularly those that encourage or openly praise violence. Tech companies are also struggling to navigate this space, especially in the post-Christchurch era. Facebook announced new steps just this week, highlighting efforts to improve detection and offer more transparency reporting, as well as laying out a new definition of what the company considers terrorism. Nevertheless, major online platforms still feature violent content, and enforcement is inconsistent and sometimes seems arbitrary rather than targeted. Researchers and human rights advocates have been suspended from prominent social media sites following coordinated campaigns by troll armies supported by authoritarian regimes. Meanwhile, white supremacist and misogynistic content can be found almost anywhere online. One of the major obstacles to progress is the issue of context, something that artificial intelligence is intended to help with, though solving the challenge entirely may never be possible. As such, governments, the private sector, and civil society must continue to push for reform and demand that hate and violence find no shelter, even in the farthest corners of the digital domain.


For tailored research and analysis, please contact: info@thesoufancenter.org
