To Believe or Not to Believe: Tackling Disinformation

Ardra Manasi
Feb 4, 2019

In June 2018, Reuters reported an incident in Madhya Pradesh: a mob of more than 50 villagers beat up two men on the suspicion that they were murdering people and harvesting their organs. The source of the rumour was a WhatsApp text message. In the weeks that followed, a WhatsApp hoax about child traffickers inflamed similar incidents of mob lynching and the killing of innocent people in parts of New Delhi.

With more than 200 million active users in India, WhatsApp has become the black sheep of the social media family after these recent fake news incidents. But examples from around the globe suggest that disinformation is a widespread problem, not an isolated one. In the months preceding the 2016 US presidential election, evidence shows how far-right groups and white nationalists in America used social media platforms such as Facebook, Twitter and YouTube to spread pro-Trump messages and conspiracy theories. Despite their seeming innocuousness, Twitter bots and Facebook memes became vehicles for manipulating public perception in service of a partisan agenda.

While the affordable and participatory nature of the internet has opened up new vistas of openness and collaboration, it has also unleashed a culture of hate, violence and misogyny. Freedom of expression exercised under the garb of anonymity, through fake profiles on social media, and the proliferation of fake news therefore present new challenges. These range from deceptively simple questions (what exactly is disinformation?) to elaborate campaigns run by intelligence agencies ("active measures," as the KGB used to call them).

InterAction, an alliance of NGOs, defines disinformation as "false or inaccurate information that is shared with the explicit intent to mislead." Disinformation per se is not a new phenomenon; the harder question is how it gains legitimacy and how it gets amplified and disseminated. In one effort to understand this phenomenon, the Data & Society Research Institute in New York launched the Media Manipulation Initiative, which studies how the internet can destabilize social institutions. Media manipulation through disinformation takes many forms: digital bots, troll armies, doxxing (publishing private information about an individual), gaming trending and ranking algorithms, and the use of multiple user accounts to force keywords and topics into our regular internet searches. In addition, journalists and public figures often inadvertently end up as targets of psychological manipulation.

Part of the difficulty in countering disinformation is the current technological and business environment in which the media and press operate. For instance, the heavy emphasis media analytics place on "stickiness" (the likes and shares that define an article's success) often drives media outlets to chase readily available sensational content rather than validate its authenticity. In some cases they cover trolls and memes, inadvertently propagating extremist agendas.

All of this may lead us to think that we are facing an enemy who cannot be vanquished. However, constructive efforts to fight disinformation continue. In June 2018, Google India launched an initiative to train over 8,000 journalists, in English and six regional languages, to identify and expose fake news through workshops focused on fact-checking, digital hygiene and online verification. In a novel educational initiative, "Satyameva Jayate," launched by the district administration in Kannur, Kerala, students of 150 government schools (Classes 8 to 12) are trained to verify and react to the fake news they encounter on social media. Responding to more than 20 deaths fuelled by recent fake news incidents in India, WhatsApp rolled out a feature in July 2018 that caps at five the number of chats to which a user in India can forward a message at once. After the Cambridge Analytica episode, Facebook introduced a feature requiring users to submit proof of identity in order to run ads with political content.

Thus, the war against disinformation requires concerted, multipronged counter-strategies from governments, NGOs and the corporate world. For instance, a basic understanding of what a "bot" is, or of the power wielded by "algorithms," can help organizations craft counter-messages to contain the spread of fake news targeting them.

In many cases, self-validation of information can go a long way too. Evolving technology platforms help us check the authenticity of the information presented before our eyes. For instance, Botometer (launched by the Observatory on Social Media) helps identify bots on Twitter based on profile activity. Similarly, Fake Domain Detective (created by Access Now) helps organizations and individuals identify fake websites that try to impersonate them.
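To give a flavour of what "identifying a bot based on profile activity" can mean, here is a deliberately simple sketch. Botometer itself uses a machine-learning classifier trained on hundreds of account features; the crude rule-based scorer below is only a toy illustration of the underlying idea, and every field name and threshold in it is an invented assumption, not Botometer's actual method.

```python
# Toy illustration of scoring an account's "bot-likeness" from profile
# activity. NOT how Botometer works: its real model is a trained ML
# classifier. All feature names and thresholds here are assumptions
# made up for this sketch.

def bot_score(profile: dict) -> float:
    """Return a crude score in [0, 1]; higher means more bot-like."""
    signals = []
    # Bots often post at superhuman rates.
    signals.append(profile.get("tweets_per_day", 0) > 100)
    # Bots often follow many accounts but attract few followers.
    followers = profile.get("followers", 0)
    following = profile.get("following", 0)
    signals.append(following > 10 * max(followers, 1))
    # A default avatar or an empty bio is a weak bot signal.
    signals.append(profile.get("default_avatar", False))
    signals.append(not profile.get("bio", ""))
    # Accounts that almost exclusively retweet rather than write posts.
    signals.append(profile.get("retweet_ratio", 0.0) > 0.9)
    # Fraction of signals that fired.
    return sum(signals) / len(signals)

suspicious = {"tweets_per_day": 400, "followers": 12, "following": 5000,
              "default_avatar": True, "bio": "", "retweet_ratio": 0.97}
human = {"tweets_per_day": 3, "followers": 250, "following": 180,
         "default_avatar": False, "bio": "Writer.", "retweet_ratio": 0.2}

print(bot_score(suspicious))  # 1.0
print(bot_score(human))       # 0.0
```

Real classifiers combine many more features (timing patterns, language, network structure) and output probabilities rather than firing fixed rules, which is why they are far harder to evade than a checklist like this.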

Such efforts to constructively employ technology against the very monster it unleashed require both diligence and vigilance. To this end, as individuals, it is good to start asking more questions the next time we see a forwarded message, a meme or a troll. As author Siri Hustvedt writes in her essay "The Delusions of Certainty," "Human beings are the only animals who kill for ideas, so it is wise to take them seriously, wise to ask what they are and how they come about."


Ardra Manasi

Development practitioner & writer | Interested in labor rights, migration, gender & technology for development.