INTELBRIEF
May 3, 2019
IntelBrief: Social Media on the Political Agenda
- Later this month, French President Macron and New Zealand Prime Minister Ardern will co-host a conference with the stated objective of regulating violent extremist content online.
- The conference is in response to the live-streamed and downloadable March 15 terrorist attacks at two mosques in Christchurch, New Zealand.
- A fierce debate continues to rage in Western democracies over the extent to which social media should be regulated, and whether the platforms should police themselves or governments must step in.
- Well-intentioned efforts to restrict inflammatory and even malicious content tend to have unintended consequences and second- and third-order effects.
The spread of violent extremist content online continues to pose a serious challenge for Western democracies. On May 15, New Zealand Prime Minister Ardern and French President Macron will co-host a conference in Paris called 'Tech for Humanity,' which will bring together technology-sector leaders and officials from the Group of Seven (G7). The Ardern-Macron initiative will attempt to get attendees, including tech giants like Facebook and Twitter, to join government representatives in forging an agreement dubbed the 'Christchurch Call,' in response to the social media dimension of the terrorist attacks in New Zealand in mid-March. The initiative seeks to have conference parties agree that social media was 'used in an unprecedented way as a tool to promote an act of terrorism and hate' and then to ensure that a live-streamed attack can never happen again. Beyond this specific matter, the conference will address the complicated issue of social media platforms continuing to host violent extremist content.
Blocking the live-streamed transmission of a violent crime in progress is not controversial from a free speech perspective, but it remains uncertain how effective companies like Facebook can be at preventing violent live-streams from occurring in the first place. Further, once the images are uploaded, they are quickly disseminated and downloaded, making them nearly impossible to contain completely. Companies can remove sites that host that content, but proactive action is difficult, especially given current laws.
The particular matter of live-streamed violent content is part of a much broader conversation about the use of social media by violent extremist groups to spread propaganda to their followers and encourage new people to adopt their ideology. Whether private companies or governments should control social media regulation is a matter of ongoing debate. In the U.S., private sector companies like Facebook and Twitter are not subject to significant government regulation. Companies have their own Terms of Use and can ban or suspend accounts for a range of reasons. But as the spread of dangerous propaganda and ideology continues, with disastrous consequences, the appetite to impose government regulation grows, even in the U.S. And as social media platforms increasingly operate as news providers, they should be subject to regulation against blatantly misleading information and lies, just as television and print media are.
While much attention and many resources have been devoted to addressing the proliferation of violent jihadist content online, only now is attention being paid to how effectively white supremacist groups are using these same platforms. Censoring this type of ideology, which has a long history in the West and particularly in the U.S., has found less support than has taking down violent jihadist content. It remains imperative for governments and private firms alike to address this growing issue aggressively. The 'Christchurch Call' initiative is an important first step.
For tailored research and analysis, please contact: info@thesoufancenter.org