‘Fake News’ – How Disinformation is Used to Cause Harm

Fake news is characterised as ‘falsehoods that are deliberately manufactured and circulated with an intent to mislead’ (Claire Wardle 2017). The term ‘fake news’ rose to prominence during the 2016 U.S. presidential election campaign, when Donald Trump branded any news story he did not deem favourable ‘fake news’. The term was adopted by many conservative politicians, has spread throughout the world, and is increasingly used to spread theories concerning the coronavirus pandemic. Richey argues that the fake news strategy is not to ‘establish falsehoods as true, but rather pollute political discourse such that news information consumers are led to doubt the very concepts of truth and objective political facts’ (2017). Although only recently popularised as a term, however, fake news has existed for centuries.

This article will analyse fake news created and used manipulatively to spread hate and cause harm, otherwise termed disinformation. The European Commission defined disinformation in 2018 as ‘verifiably false or misleading information that is created, presented and disseminated to deceive the public intentionally, and may cause public harm…Disinformation does not include inadvertent errors, satire and parody, or clearly identified partisan news and commentary’. Hate campaigns are a communication strategy that incorporates disinformation (deception) to target and scapegoat minorities, creating fear and generating power (Cherian George 2020). Disinformation is used to spread misconceptions, fabrications, lies and misleading contexts. The crucial indicator of disinformation is its intent to deceive.

K. Shu (2020) lists eight reasons for deception:

  • Persuade people to support individuals, groups, ideas, or future actions;
  • Persuade people to oppose individuals, groups, ideas or future actions;
  • Produce emotional reactions (fear, anger or joy) toward some individual, group, idea or future action in the hope of promoting support or opposition;
  • Educate (e.g., about vaccination threat);
  • Prevent an embarrassing or criminal act from being believed;
  • Exaggerate the seriousness of something said or done (e.g., use of personal email by government officials);
  • Create confusion over past incidents and activities;
  • Demonstrate the importance of detecting disinformation to social platforms.

In the internet age, fake news arguably has a more significant potential to cause harm. The internet allows fake news to be produced more readily and spread more extensively across continents. The legal and ethical guidelines produced by, and for, traditional publishers do not exist on the internet and social media, and anyone can create fake news: an ‘ordinary user’, a lone person sitting at home, can produce a viral fake news story. In some cases, however, fake news is created with great sophistication by ‘disinformation innovators’ linked to governments, security agencies, corporations or political parties. Udupa argues that ‘The more horizontal distributions of information agency makes it easier for foreign agents to tap into digital toxicity that transcends national boundaries’ (Sahana Udupa et al 2020). Algorithms designed to increase traffic for search engines and social media sites allow fake news to spread rapidly and go ‘viral’. The same algorithms also create echo chambers, in which a person is only shown information that fits their pre-existing views. When those views are harmful to others, they can become further entrenched by living in an echo chamber of exclusively damaging information. The internet lacks sufficient gatekeepers to filter out harmful and illegal content, and there is a concurrent lack of fact-checkers.

There is no internationally accepted definition of hate speech. The UN Strategy and Plan of Action on Hate Speech defines hate speech as ‘any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.’ International law requires that states prohibit ‘any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence’ (Article 20 of the International Covenant on Civil and Political Rights). Not all hate speech meets this incitement threshold; therefore, not all forms of hate speech are covered by international law. Unfortunately, hate speech is a phenomenon that is increasing in the digital age and continues to have severe consequences. The Secretary-General of the United Nations, António Guterres, has stated that hate speech is a common ‘precursor to atrocity crimes, including genocide’.

Daniels (2018) argues that hate speech and online aggression are rendered pleasurable and enjoyable in this age. Exclusionary ideologies are, in many cases, spread for fun, such as by appending emojis to a racist comment. Another popular approach is to belittle complaints about a hateful comment, for example by stating that the hate speech is ‘a joke that liberals don’t understand’. ‘Banalisation of online hate has become a new enabling ground for exclusionary politics to stabilise, complementing conventional strategies of “serious” appeal and dissemination’ (Sahana Udupa et al 2020). Han (2017) argues that the online community is a ‘digital swarm’ of individuals powered by incessant narcissism and self-interest. Online forums allow like-minded people to congregate without ever physically gathering. The problem with the individual power of online persons is that ‘when individuals become outraged, there is no space to decompress before the communication is created and sent…the outrage is usually self-serving instead of altruistic’ (M. Ivey 2017).

Morally, it is uncontroversial to state that the majority of individuals would like greater regulation. Logistically, however, this is challenging, with scholars such as Ganesh (2018) arguing that online hate culture is ungovernable. Legally, hate campaigns are difficult to penetrate. Campaigns backed by influential actors can be incredibly sophisticated technologically, structurally, legally and financially. Movements led by ‘ordinary users’ are sophisticated in their own way: they often use coded language to evade content moderation and operate in decentralised networks that are extremely difficult to pinpoint legally. Coded language in the form of pseudo-science is also used to lend legitimacy to false claims. Lexico defines ‘pseudo-scientific’ as ‘falsely or mistakenly claimed or regarded as being based on scientific method’. Pseudo-scientific language is prevalent in fake news surrounding the coronavirus pandemic.

Legally, much of the language used in hate campaigns does not break the law: under international law, for example, hate speech must incite discrimination, hostility or violence. Statements are deliberately crafted to be inoffensive in isolation, causing harm only when read in context. George writes that ‘There is a division of labor, with the most extreme language usually left to minor politicians and anonymous trolls, while the movement’s leaders keep their hands clean’. The inconsistency, and in some cases absence, of social media giants’ content policies also allows hate speech and disinformation to go unmoderated. As soon as a fake news article is removed from one social media platform, it appears on another. If a piece of hate-filled disinformation is successfully removed from the more popular social media platforms, the content moves to platforms with enhanced encryption that facilitates anonymity, such as Telegram or Gab. In many cases, it is impossible to pinpoint the source of the content in order to assign legal responsibility.

Additionally, hate campaigns are not limited to a few racial slurs; they can be multi-layered, comprising years of coded, interwoven messages and agendas. This makes it virtually impossible to stop disinformation and hate speech from spreading. A further issue is the established difficulty of fact-checking and disproving malicious disinformation. George poignantly writes that ‘These stories are curated in a highly selective and often malicious manner, thus creating a distorted picture of the world; but they are not all falsifiable’ (2020). Campaigns deliberately weave real news among fake news to manipulate consumers into believing all of the content. In addition, facts are taken entirely out of context to portray a particularly biased picture. This can make it harder for fact-checkers to contest some content, as the facts themselves are true yet stripped of their backstory and context. Furthermore, when fact-checkers create content to disprove fake news, it very rarely receives the same attention as the viral fake news it addresses, limiting its corrective effect. Once the fake news has spread extensively, ‘the train has effectively left the station’.

Ella Allen 
