IntelBrief: Social Media on the Political Agenda
Bottom Line Up Front
- Later this month, French President Macron and New Zealand Prime Minister Ardern will co-host a conference with the stated objective of regulating violent extremist content online.
- The conference is in response to the live-streamed and downloadable March 15 terrorist attacks at two mosques in Christchurch, New Zealand.
- A fierce debate continues to rage in Western democracies over the extent to which social media should be regulated, and whether the platforms should regulate themselves or governments must do it for them.
- Well-intended efforts to restrict inflammatory and even malicious content tend to produce unintended consequences and second- and third-order effects.
The spread of violent extremist content online continues to pose a serious challenge for Western democracies. On May 15, New Zealand Prime Minister Ardern and French President Macron will co-host a conference in Paris called ‘Tech for Humanity,’ which will include technocrats and officials from the Group of Seven (G7). The Ardern-Macron initiative will attempt to bring attendees, including tech giants like Facebook and Twitter, together with government representatives to forge an agreement dubbed the ‘Christchurch Call,’ in response to the social media dimension of the terrorist attacks in New Zealand in mid-March. The initiative seeks to have conference parties agree that social media was ‘used in an unprecedented way as a tool to promote an act of terrorism and hate’ and then to ensure that a live-streamed attack can never happen again. Beyond this specific matter, the conference will address the complicated issue of social media platforms continuing to host violent extremist content.
Blocking the live-stream transmission of a violent crime in progress is not controversial from a free speech perspective, but it remains uncertain how effective companies like Facebook can be at preventing violent live-streams from occurring in the first place. Further, once the images are uploaded, they are quickly disseminated and downloaded, meaning they are nearly impossible to contain completely. Companies can remove sites that host such content, but proactive action is difficult, especially under current laws.
While significant attention and resources have been spent on addressing the proliferation of violent jihadist content online, only now is attention being paid to how white supremacist groups are using these same platforms to great effect. Censoring this type of ideology, which has a long history in the West and particularly in the U.S., has drawn less broad support than taking down violent jihadist content. It remains imperative for governments and private firms alike to address this growing issue aggressively. The ‘Christchurch Call’ initiative is an important first step.
For tailored research and analysis, please contact: firstname.lastname@example.org