April 10, 2019

IntelBrief: Real Problems with ‘Fake News’

An Indian man browses the Twitter account of Alt News, a fact-checking website, in New Delhi, India, Tuesday, April 2, 2019. Last month, India's Election Commission announced that its code of conduct would have to be followed by social media companies as well as political parties (AP Photo/Altaf Qadri).
  • In India, the world’s largest democracy, politics continue to be disrupted by ‘fake news’ and deliberate disinformation campaigns targeting the nation’s elections.
  • WhatsApp is used by more than 200 million Indians and plays a central role in the spread of false news, while also feeding hyper-nationalistic rhetoric.
  • Because WhatsApp messaging is encrypted, it poses an even more formidable regulation challenge than other types of social media platforms.
  • Beyond India, there are enormous issues related to privacy and free speech as they relate to governments and corporate social media platforms.

The upcoming Indian elections are the largest in the world in terms of overall voters. They are also far and away the largest showcase of the voter-influencing power of disinformation campaigns and ‘fake news’ spread through social media and messaging apps. On April 11, India will begin its general election, with nearly 900 million eligible voters casting ballots in a five-week, seven-phase vote that ends on May 19. The votes are then tallied and the results announced on May 23, 2019. The election will fill the more than 500 seats in the Lok Sabha, the lower house of India’s bicameral parliament. The results will determine whether the ruling Bharatiya Janata Party (BJP), led by Prime Minister Narendra Modi, will win another five-year term to lead India. Polls show the BJP, a party with ties to Hindu nationalism that recently released its so-called ‘Manifesto’ outlining various political objectives, is likely to prevail over the opposition Indian National Congress, which had been the dominant party for decades prior to the 2014 election.

The factors driving the elections are as immense and varied as India itself, but one aspect is unfortunately prominent: the impact of intentionally misleading information and images designed to inflame hyper-nationalist or religious fervor. The 2016 elections in the United States were a showcase for how targeted memes and ‘fake news’ could sway and convince groups of voters, creating echo chambers and feedback loops impervious to empirical evidence. India is now grappling with the same problem, spread across hundreds of millions of Indian social media users. Chief among the platforms involved is WhatsApp, the messaging service owned by Facebook, which has publicly struggled in its efforts to counter or curb the spread of weaponized disinformation campaigns, some of which have been surreptitiously backed by authoritarian states. Some of these same authoritarian states have adopted a draconian approach to limiting the Internet in their own countries, mostly to stifle anti-government dissent. Facebook has acknowledged the difficulty of addressing these issues: effectively flagging content is hard, and fact checkers have been frustrated by lagging efforts to regulate the vast amount of data generated on its platform in India, which has more than 340 million users. In the lead-up to the elections, Facebook has gone to great lengths to prepare for inevitable attempts at deliberate disinformation. Ajit Mohan, Facebook’s India managing director and vice president, recently outlined steps the company is taking, which include requiring individuals or groups seeking to run a political ad to confirm their identity and location, while also providing details behind the financing of these ads. Disclaimers will also accompany the ads, which will remain in a searchable database for seven years, in an attempt to ensure transparency.

WhatsApp is particularly difficult to monitor, even for Facebook. The main selling point of the messaging app is its encryption and its claim to value the privacy of users; restricting that privacy would render the product useless. As private companies, Facebook and other social media firms retain the right to determine their respective terms of use and curtail behavior or content deemed objectionable or otherwise unpalatable. Governments can exert pressure on these companies to remove violent extremist content online, although there are emerging seams in the intersecting areas of politics, law, technology, and privacy. In the United Kingdom, the Department for Digital, Culture, Media and Sport (DCMS) just proposed the creation of an independent watchdog to lay out new guidelines for technology companies. Some governments have been even more aggressive, demanding a ‘back door’ to encrypted services like WhatsApp, which risks the potential for devastating security breaches while also generating widespread concerns over privacy and government overreach.

There are numerous examples of ‘fake news’ leading to real-world consequences. Social media has played a role in fomenting violence in countries like Sri Lanka and Myanmar, but pointing the finger solely at companies like Facebook and urging them to do more fails to capture the nuances of the immense public policy challenges in this area. Progress in countering the impact of aggressive disinformation campaigns on elections and high-profile policy decisions cannot be made without the full participation of both governments and multinational corporations, with significant input from civil society organizations. Public-private partnerships are notoriously difficult to craft but can have an outsized effect if appropriately managed.


For tailored research and analysis, please contact:
