Deepfake, a social and economic threat?
What are deepfakes?
To begin with, it is essential to define the context around this technology, which is increasingly in the news.
The concept of modifying an original video and/or audio clip dates back to 1997 and the Video Rewrite program developed by Chris Bregler, Michele Covell and Malcolm Slaney at Interval Research Corporation. Based on artificial intelligence, it allowed the facial expressions, and more specifically the lip movements, in a video sequence to be modified to match another audio track. The aim was to adapt raw footage to a dub, synchronizing an actor's lip movements with a new soundtrack.
The term deepfake itself first appeared in 2017 on Reddit, where a user posting under that name shared pornographic videos in which AI had been used to replace the original faces with those of celebrities. The name is a contraction of deep learning and fake.
Deep learning, a branch of machine learning (the ability of a machine to learn autonomously from data), is a learning method based on artificial neural networks loosely inspired by the human brain. More specifically, deepfakes rely on generative adversarial networks (GANs), a class of models invented by Ian Goodfellow.
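The adversarial idea behind GANs can be illustrated with a deliberately tiny, hypothetical sketch: a one-layer "generator" tries to produce numbers that look like they come from the real data distribution, while a logistic "discriminator" tries to tell real from fake, and each is updated against the other. Everything below is illustrative toy code, not a real deepfake model (which would use deep convolutional networks, not single linear units).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: samples from a Gaussian centred at 4.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

# Generator: one linear layer mapping 1-D noise to a 1-D sample.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: one logistic unit scoring "real" (1) vs "fake" (0).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(z):
    return z @ g_w + g_b

def discriminate(x):
    return sigmoid(x @ d_w + d_b)

lr, n = 0.05, 32
for step in range(500):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    z = rng.normal(size=(n, 1))
    real, fake = real_batch(n), generate(z)
    d_real, d_fake = discriminate(real), discriminate(fake)
    grad_real = d_real - 1.0   # d(BCE)/d(logit) for real labels
    grad_fake = d_fake         # d(BCE)/d(logit) for fake labels
    d_w -= lr * (real.T @ grad_real + fake.T @ grad_fake) / n
    d_b -= lr * (grad_real + grad_fake).mean()
    # --- Generator step: push D(fake) toward 1, i.e. fool the discriminator ---
    z = rng.normal(size=(n, 1))
    fake = generate(z)
    d_fake = discriminate(fake)
    upstream = (d_fake - 1.0) * d_w.T   # backprop through D into the fake samples
    g_w -= lr * (z.T @ upstream) / n
    g_b -= lr * upstream.mean()

# After this adversarial tug-of-war, the generator's samples should have
# drifted toward the real data's region (around 4).
samples = generate(rng.normal(size=(1000, 1)))
print(float(samples.mean()))
```

The same dynamic, scaled up to millions of parameters and image data, is what lets a GAN-style model produce faces realistic enough to pass for genuine footage.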
The CERT division of the Software Engineering Institute (SEI) defines a deepfake as "a media file, usually videos, images, or speech depicting a human subject, that has been deceptively altered using deep neural networks to alter a person's identity."
How to explain its emergence?
As with every technological advance, malicious uses have emerged. In the case of deepfakes, two factors help explain this. The first is the acceleration, from 2015 onwards, of investment in artificial intelligence by large companies. The following year, Forbes cited McKinsey estimates putting the AI investments of tech giants, led by Google and Baidu, at between $20 billion and $30 billion in a single year. Between 2013 and 2016, mergers and acquisitions in the sector increased by around 80%.
In 2018, the public experienced its first viral deepfake. In a video published by BuzzFeed, viewers hear former US President Barack Obama begin a speech with the phrase: "We're entering an era in which our enemies can make anyone say anything at any point in time". Incoherent remarks follow, growing more belligerent, especially towards his successor Donald Trump. Behind these words is the American comedian Jordan Peele, who appears in the video alongside the ex-president and warns that "this is a dangerous time. In the future, we need to be more careful about how much we trust the internet. This is a time when we need to rely on reliable sources of information". Impressive when it was released, this first video now seems dated, and for good reason: the @DeepTomCruise account took things a step further in 2021, with videos of the actor so strikingly realistic that they fooled Internet users.
Alongside the progress made by professionals, the general public now has access to applications that make deepfakes easy to create, such as DeepFaceLab, launched on GitHub four years ago. This democratization of technologies that are certainly less capable, but easily accessible and inexpensive, has contributed to the proliferation of deepfakes realistic enough to cause a great deal of harm.
What are the known consequences, and what can be anticipated?
The first harms caused by deepfakes are easily identifiable and devastating for the victims. In 2019, the cybersecurity company Deeptrace published a study finding that more than 96% of deepfake videos were pornographic in nature, and that almost all of the victims were, and still are, women. The consequences for them, whether public figures or anonymous individuals, are particularly worrying from both a personal and a professional point of view. The victim is deprived of her identity, her speech and her actions; her reputation is damaged instantly and is far harder to restore than it was to sully.
Importantly, such content goes viral far more readily than any denial, and its spread is almost impossible to stop. In 2019, Donald Trump shared a video of Democratic House Speaker Nancy Pelosi that had been doctored to make her appear drunk. Such a gesture, coming from a sitting president, is questionable and reinforces the offensive nature of what is becoming a weapon of manipulation.
Difficult to characterize, quick to spread, and apt to feed fantasies or conspiracy theories, a deepfake behaves like a rumor, whose harmful effects have already been widely studied. Researchers also warn of a threat called the "liar's dividend": disputing the authenticity or veracity of legitimate information by falsely claiming that it is a deepfake.
Beyond seeking to discredit opponents, the use of deepfakes in politics constitutes a threat to the internal stability of states as well as to inter-state relations. The production of fake content that is difficult to identify as such, combined with almost immediate mass distribution, can generate or strengthen tensions between leaders and peoples, causing significant economic and social damage.
Deepfakes also represent a threat to economic actors. In a study by Euler Hermes, published at the end of 2021, two-thirds of the companies surveyed stated that they had been the victims of fraud attempts in the previous twelve months.
The study shows that intrusions into information systems account for only 32% of recorded frauds, while fraud involving fake directors accounts for 47% of cases and the impersonation of partners (lawyers, banks, etc.) for 38%. With the democratization of digital collaboration tools, is a new space opening up for deepfake-based fraud? The misadventure of a subsidiary manager who transferred €243,000 after being duped by swindlers impersonating his CEO suggests that it is.
Beyond these frauds, economic actors could also face destabilization attempts using fake, supposedly stolen videos showing company directors confessing, for example, to manipulation or fraud.
How to fight against deepfakes and their virality?
Faced with the multiplication of falsified videos, some of them designed precisely to alert public authorities and users, awareness of the problem seemed to take hold in 2019.
Mark Warner and Marco Rubio, two members of the US Senate Intelligence Committee, urged the eleven main American tech companies to deploy solutions to combat deepfakes. Twitter quickly announced that it was tightening its policy, and Facebook followed suit by banning deepfake videos from its platform.
Since 2020, Microsoft has been working on Video Authenticator, a tool that detects blending boundaries and grayscale elements that are undetectable to the human eye. Facebook is developing Reverse Engineering, which detects fingerprints left by an AI model, and Quantum Integrity is marketing an AI-based offering that tries to determine whether images or videos have been manipulated.
While Quantum Integrity claims an accuracy rate of 96%, no solution is completely reliable in the face of ever-improving media-generation methods.
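The fingerprint-based approach mentioned above rests on the observation that each generative model tends to leave a characteristic noise pattern in its outputs. The toy sketch below, built entirely on simulated data and hypothetical "models" (it is not any vendor's actual detector), imitates that idea: it estimates each model's fingerprint by averaging high-pass residuals of its images, then attributes a new image to whichever fingerprint its residual correlates with most.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_blur(img):
    # Crude 3x3 mean filter via shifted sums (edges wrap, which is fine here).
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

def residual(img):
    # High-pass residual: smooth content is removed, model noise remains.
    r = img - box_blur(img)
    return r / (np.linalg.norm(r) + 1e-9)

def make_image(model_noise):
    # Simulated "generated" image: random smooth content plus the model's
    # fixed noise pattern (a stand-in for the traces a real generator leaves).
    content = box_blur(rng.normal(size=(32, 32)))
    return content + 0.3 * model_noise

# Two hypothetical generator fingerprints.
noise_a = rng.normal(size=(32, 32))
noise_b = rng.normal(size=(32, 32))

# Estimate each model's fingerprint by averaging residuals over many samples:
# the varying content averages out, the fixed noise pattern survives.
fp_a = np.mean([residual(make_image(noise_a)) for _ in range(50)], axis=0)
fp_b = np.mean([residual(make_image(noise_b)) for _ in range(50)], axis=0)

def attribute(img):
    # Attribute the image to the model whose fingerprint correlates best.
    r = residual(img)
    return "A" if (r * fp_a).sum() > (r * fp_b).sum() else "B"

print(attribute(make_image(noise_a)))
print(attribute(make_image(noise_b)))
```

Real detection systems face a much harder version of this problem, since modern generators actively minimize such artifacts, which is one reason no detector reaches full reliability.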
Given the impossibility of relying massively on technology to combat deepfakes, it is essential to raise the awareness of the public as well as of institutional and economic actors about this growing phenomenon. This fight must be part of a global approach, because as engineer Thomas Scanlon points out, "no single organization will be able to have a significant impact on the fight against disinformation and falsification."