How deepfakes are shaping our online reality

A few years ago, the idea of “fake” was mostly associated with the way people shaped their image in the real world. Today, the spread of the internet and the ease of creating and posting content tie the concept of “fake” much more closely to the digital world. In particular, deepfakes are increasingly widespread, attracting attention both for their possible positive uses and, unfortunately, for manipulative ones. “Deepfake” is a recent term for a particular technique for processing images from videos and photos: an algorithm based on artificial intelligence builds photos or video frames from a reference model of one subject, which is then superimposed on the body of another.

In essence, the face and expressions of one person can be grafted onto the body of another in a thoroughly realistic way. Mouth movements can also be manipulated, which makes it easy to build ad hoc videos in which a politician makes bogus statements that are passed off as real. For this reason, deepfakes and fake news are closely related, all the more so because deepfake videos circulate very quickly on social media. The practice was born on Reddit, an entertainment-oriented social network widely used in the USA. At the end of 2017, a Reddit user managed to create fake videos starting from genuine footage of the same person downloaded from the internet, producing strikingly “realistic fakes”. From that point, bogus videos began to spread as a recreational activity: the first deepfakes replaced the faces of actors with those of other people from the entertainment world.

How do deepfakes work?

Deepfakes are part of the broader evolution of machine learning. The underlying technology is called deep learning, which builds on neural networks and on automatic techniques able to “train” an algorithm to recognize new situations. Deepfake techniques “learn” from a sample of similar data and reproduce it better and better, a bit like the Google search engine refining its results based on the information it acquires: the more information it has, the more accurate the results will be. Facial recognition is also at the basis of the techniques for seamlessly replacing one face with another. Using a mathematical algorithm that detects the contours of the face, a replacement can be mapped onto any other face, with truly remarkable results.
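The last step described above, placing a generated face inside a target frame along the detected contour, can be sketched as a simple alpha-blend. This is a toy illustration, not any particular tool's code: pixels are plain grayscale floats, and the soft `mask` stands in for the detected face contour (1 = fully swapped, 0 = original frame).

```python
# Conceptual sketch of the compositing step of a face swap:
# a generated face patch is blended onto the target frame using a
# soft mask that follows the detected face contour.

def blend_face(frame, patch, mask, top, left):
    """Alpha-blend `patch` onto `frame` at (top, left) using `mask`."""
    out = [row[:] for row in frame]          # copy the frame
    for i, patch_row in enumerate(patch):
        for j, p in enumerate(patch_row):
            a = mask[i][j]                   # 1.0 = fully swapped face
            y, x = top + i, left + j
            out[y][x] = a * p + (1 - a) * out[y][x]
    return out

frame = [[0.0] * 4 for _ in range(4)]        # dark background frame
patch = [[1.0, 1.0], [1.0, 1.0]]             # bright generated face
mask  = [[1.0, 0.5], [0.5, 0.0]]             # soft contour mask
result = blend_face(frame, patch, mask, 1, 1)
```

The soft mask is what makes the seam invisible: pixels near the contour mix both sources, so the swapped face fades smoothly into the surrounding skin instead of showing a hard edge.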

But are deepfakes new?

The first work in this area dates back to the early 2000s, when the first automatic facial recognition techniques were tested. As noted, facial recognition is at the basis of everything, so it is not a real novelty. Various open source programs – that is, software whose code the end user can freely access and modify; the most famous example is Linux – were already able to carry out the operation, even without the user knowing the theoretical foundations. These tools could recognize faces, but they could not go further than that: at the time, face detection only worked on pre-existing still images. What deepfakes do is overcome this limitation, applying the technique not only to images but also to video. They can detect faces, but also cut and paste them to reshape a person’s appearance, and even their words. The system works much like Snapchat filters, but with a far higher level of credibility.

What are the consequences of deepfake?

In cinema, this kind of technique has produced impressive results: the 2016 movie Rogue One featured a digitally recreated, younger Princess Leia, very difficult to distinguish from the original. In the movie, the actress appears as she looked in 1977.

The really disturbing thing is that this technology is within anyone’s reach: not because it is easy to use, but because the software is easy to find. The code behind these applications is almost entirely open source, even if this is rarely stated openly, and the deepfake programs and apps in circulation are accurate and perform well. Deepfakes are therefore a technology capable of manipulating reality, creating realistic fake videos, images and even news with an algorithm, in a credible and completely automated way. This alone should prompt greater skepticism whenever we come across news, videos and images that seem real but could have been generated by software.

But is it truly that easy to create a deepfake?

In theory, yes, but to work well the deepfake algorithm needs hundreds, if not thousands, of videos of the same person: it is worth thinking twice before posting hundreds of public videos of ourselves and leaving them available to anyone. There is also a tell-tale sign. Humans blink every 2 to 10 seconds, and each blink lasts between one tenth and four tenths of a second, so photos of people with their eyes closed are hard to find on the web. Because the algorithm learns from such material, blinking in artificial videos is almost completely absent, a limit currently inherent in the technique. To recognize a deepfake, then, pay attention to the blinks.
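The blink cue above can be made concrete with the “eye aspect ratio” (EAR), a heuristic from the blink-detection literature (Soukupová and Čech, 2016): given six landmark points around an eye, the ratio of vertical to horizontal distances drops sharply when the eye closes, so a video whose EAR never dips below a threshold is suspicious. The landmark coordinates and the threshold value here are illustrative, not taken from any specific detector.

```python
import math

def ear(pts):
    """Eye aspect ratio from six (x, y) eye landmarks p1..p6:
    two vertical distances over twice the horizontal distance."""
    d = math.dist
    return (d(pts[1], pts[5]) + d(pts[2], pts[4])) / (2 * d(pts[0], pts[3]))

# Toy landmark sets: an open eye is taller, a closed eye nearly flat.
open_eye   = [(0, 0), (1, 0.6), (2, 0.6), (3, 0), (2, -0.6), (1, -0.6)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

BLINK_THRESHOLD = 0.25   # illustrative cutoff between open and closed
```

In practice the landmarks would come from a facial landmark detector run on each video frame; counting how often the per-frame EAR crosses the threshold gives a blink rate that can be compared against the normal human range.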

How can we protect ourselves from online deepfakes?

First of all, it is good practice to consider multiple sources of information before accepting a story as true. A video of a politician speaking online may seem realistic, but if no other source corroborates it, it may well be a deepfake. MIT has also drawn up a list of eight criteria to help recognize a deepfake:

  1. Pay attention to facial details, which in deepfakes often show artifacts and imperfections;
  2. Pay attention to any inconsistencies in the foreheads and cheeks of the subjects;
  3. Pay attention to the eyes and eyebrows, which are often very coarse in deepfakes;
  4. Images featuring people with glasses may be reconstructed unfaithfully, since light-reflection effects are often beyond the technology in question;
  5. Beards, mustaches and hair in general can be another sign of imperfection;
  6. Facial moles are often another indication of a possible imperfection in the reconstruction;
  7. Blinking is, once again, an aspect often lacking in deepfakes;
  8. The color and the size of the lips can be another signal.

Carlotta Sofia Grassi