Societal Consequences of Deepfake Videos

This article explores the wider societal effects of deepfake videos. It is essential to first define the term. Deepfakes are defined by the European Parliament as…

‘manipulated or synthetic audio or visual media that seem authentic, and which feature people that appear to say or do something they have never said or done, produced using artificial intelligence techniques, including machine learning and deep learning’ (EPRS, 2021)

Wahl-Jorgensen and Carlson further explain that…

‘The emergence of deepfakes results from recent technological developments in machine learning in which a program combines two distinct sets of images to create a single fabricated audiovisual image that is difficult to distinguish from unaltered video sources’ (Wahl-Jorgensen and Carlson, 2021)

Deepfakes were popularised following the release of an app that allowed individuals to easily fabricate pornographic videos of celebrities. Once the app took hold, the technology was used for various purposes, and a variety of public figures were victimised. Deepfakes can be presented as parody, meaning there is no intent to deceive. They can also be presented as disinformation, meaning content shared ‘with knowledge of its falsity and thus intention to deceive or otherwise do harm’ (Wardle, 2020).

The European Parliament reminds us that doctored imagery and media manipulation are not new, and deepfakes can therefore be described as ‘just a new technological expression of a much older phenomenon’ (EPRS, 2021). It is essential to recognise that, in comparison with more typical examples of fabricated news stories, deepfakes are currently not as prominent or popular. It has therefore been argued that research into the implications of deepfakes concerns their ‘future threat rather than a realised problem’ (Wahl-Jorgensen and Carlson, 2021). The impact of a deepfake is nevertheless wide-ranging and can cause psychological, financial, and societal harm (EPRS, 2021). A deepfake video can be targeted at an individual or group and may be used for defamation, bullying and intimidation, causing psychological harm to the victim(s). Financial harm can be caused by using deepfakes to commit extortion, theft, fraud, stock market manipulation or reputational damage. The focus of this article is the potential societal impact of deepfakes. A European Parliament report on deepfakes lists nine potential societal harms:

  1. News media manipulation
  2. Damage to economic stability
  3. Damage to the justice system
  4. Damage to the scientific system
  5. Erosion of trust
  6. Damage to democracy
  7. Manipulation of elections
  8. Damage to international relations
  9. Damage to national security

(EPRS, 2021)

An example of a viral deepfake depicted Barack Obama stating that Donald Trump was a ‘dipshit’. Shockingly, when a study by Vaccari and Chadwick asked viewers whether they believed this video was authentic, only 50.8% judged it false, despite the statement being highly improbable (Vaccari and Chadwick, 2020). This statistic highlights the significant potential of deepfakes to deceive.

Whereas this example can be argued to have been created for comedic value, other deepfakes could have dangerous consequences. RepresentUs, a non-partisan organisation, created deepfakes showing Vladimir Putin and Kim Jong-un declaring an impending collapse of American democracy. Global News claimed that these were ‘the most realistic fake videos of world leaders to date’ (Link). A deepfake video created in Malaysia, which depicted a political aide seemingly admitting to a homosexual relationship with a cabinet minister, contributed to the destabilisation of the coalition government (Link). This example demonstrates the potential of deepfakes to contribute to the manipulation of elections, highlighting the dangerous societal impact they can have.

As the technology advances, deepfakes become more challenging to detect, particularly for ordinary consumers. A study by Rössler et al. found that people correctly identify deepfakes in roughly 50% of cases, statistically equivalent to random guessing (Rössler et al., 2018). Deepfakes can additionally be used to further solidify existing disinformation: imagine, for example, a falsified video of a government official ‘confirming’ that COVID-19 vaccines contain tracking devices.

‘Deepfakes find fertile ground in both traditional and new media because of their often sensational nature. Furthermore, popular visual-first social media platforms such as Instagram, TikTok and Snapchat already include manipulation options such as face filters and video editing tools, further normalising the manipulation of images and videos’ (EPRS, 2021)

It can be seen, therefore, how political agendas could easily be swayed, discrediting candidates, elected officials and political parties, and consequently influencing people's voting habits. The targeting of senior political figures, as orchestrated by RepresentUs, additionally highlights the possibility of influencing foreign policy by manipulating videos of state leaders.

Deepfakes also have the potential for significant societal impact when combined with political microtargeting (PMT) techniques. PMT is a technique used in political campaigns, defined as ‘creating finely honed messages targeted at narrow categories of voters’ based on data ‘garnered from individuals’ demographic characteristics and consumer and lifestyle habits’ (Gorton, 2016). PMT can be seen as a form of behavioural advertising, tracking behaviour to build profiles of voters. It has also been described as a ‘form of political direct marketing, in which political actors target personalised messages to individual voters by applying predictive modelling techniques to massive troves of voter data’ (Rubenstein, 2014). Deepfakes can be used in conjunction with PMT methods to target political extremes with content that is easily believed. In the Donald Trump era of politics, certain groups are microtargeted, exploiting and exacerbating echo chambers of like-minded people to influence and solidify political views. This framing technique uses deepfakes for political propaganda, spreading conspiracy theories and other forms of disinformation. Deepfakes, when microtargeted, can manipulate individuals into committing democratically, politically, and societally dangerous acts.

The European Parliament, however, contends that ‘the most worrying societal trend that is fed by the rise of disinformation and deepfakes is the perceived erosion of trust in news and information, confusion of facts and opinions, and even “truth” itself’ (EPRS, 2021). Vaccari and Chadwick's study found that exposure to deepfake videos ‘decreased trust in news on social media indirectly, by eliciting higher levels of uncertainty’ (2020). There is a concern that ‘democratic governance is damaged by the lack of shared trust regarding the veracity of public information’ (Wahl-Jorgensen and Carlson, 2021). Galston poignantly writes that…

‘If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said (and even doing things they never did), seeing will no longer be believing, and we will have to decide for ourselves – without reliable evidence – whom or what to believe’ (Galston, 2020)

Scholars have increasingly analysed the implications for society of what has been described as ‘truth decay’. Kavanagh and Rich define truth decay by the following four trends:

  1. Heightened disagreement about facts and analytical interpretations of data.
  2. The blurred line between opinion and fact.
  3. Increased volume and influence of opinion and personal experience across the communications landscape.
  4. Diminished trust in formerly respected institutions as sources of factual information.

(Kavanagh and Rich, 2018)

One of the main drivers of truth decay has been identified as change in the information system, including the rise of social media and its effect on the dissemination of disinformation (Kavanagh and Rich, 2018). The consequence of truth decay is widespread mistrust of information among citizens, contributing to social and political alienation and disengagement. A lost sense of facts and reality stifles free public debate, a foundational pillar of democracy. More research is required into the implications of deepfakes in order to find methods of reducing their negative impact on society.

Ella Allen
