What are deepfakes?
Deepfakes are media forgeries created by computers using "deep learning" techniques and artificial neural networks. They are so realistic that they can hardly be distinguished from genuine videos or images. Deepfakes can be used to spread fake news, enable fraud, commit identity theft, and violate privacy rights.
The software used to create deepfakes learns a person's characteristic features from existing footage, such as the shape of the nose or the position of dimples. These learned features can then be swapped with those of a person already appearing in a video. The process is driven by advanced algorithms and the large amounts of data used to train the neural network.
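The shared-encoder, per-person-decoder architecture that early face-swap tools popularized can be sketched as follows. This is a deliberately tiny illustration in Python with NumPy: the dimensions are toy-sized, the weights are random and untrained, and the function names are my own, so it shows only the structure of the idea, not any real deepfake software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: flattened 8x8 grayscale "faces" and a small latent space.
FACE_DIM, LATENT_DIM = 64, 8

# The classic face-swap setup uses ONE shared encoder and one decoder per
# identity. The encoder learns identity-independent features (pose,
# expression, lighting); each decoder learns to render those features as
# one specific person. Here the weights are random stand-ins.
encoder = rng.normal(size=(FACE_DIM, LATENT_DIM))    # shared across identities
decoder_a = rng.normal(size=(LATENT_DIM, FACE_DIM))  # would render person A
decoder_b = rng.normal(size=(LATENT_DIM, FACE_DIM))  # would render person B

def encode(face):
    """Compress a face into the shared latent feature vector."""
    return np.tanh(face @ encoder)

def swap_to_b(face_of_a):
    """Encode a frame of person A, then decode with B's decoder:
    after training, B's face would appear with A's pose and expression."""
    return encode(face_of_a) @ decoder_b

frame = rng.normal(size=FACE_DIM)  # a stand-in for one video frame of A
fake = swap_to_b(frame)            # a "deepfaked" frame with B's identity
```

After training both decoders against the shared encoder, swapping is exactly this: encode with the common encoder, decode with the *other* person's decoder, frame by frame.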
Technology has advanced to the point where individuals can produce frighteningly realistic videos on their home computers without much effort, expense, or experience. As a result, deepfakes are no longer the preserve of experts; laypeople with access to the appropriate tools can create them as well.
The term "deepfake" goes back to a Reddit user of the same name who, in the fall of 2017, posted several videos in which the faces of female celebrities had been transferred into pornographic videos. The software this user employed is now freely available on the Internet under the name FakeApp.
Recently, the automated creation of images and videos of celebrities, including generated voice output, by current AI SaaS (Software as a Service) offerings has brought the relevance of deepfakes to the forefront once again. These services use advanced AI technologies to create realistic images, videos, and audio files that are often nearly indistinguishable from real ones. They have democratized the creation of deepfakes and made them more accessible, which has both positive and negative effects. On the positive side, artists and content creators can use these technologies to produce unique and creative works. On the negative side, they can be misused for harmful purposes such as spreading disinformation or identity theft. It is therefore crucial to be aware of the potential risks and to take appropriate measures to prevent misuse.
Legal problems and consequences
Personal rights violations
The majority of deepfakes available on the Internet today are pornographic in nature. The publication of such manipulated videos can lead to serious violations of personality rights, especially when a person's likeness is imitated. These deepfakes can not only cause emotional and psychological distress to the victim, but also damage their reputation and career.
As technology advances, this issue becomes more pressing, because fewer and fewer images are needed to create a convincing imitation. This lowers the barrier to creating deepfakes and makes it increasingly likely that people will appear in such videos without their knowledge or consent.
It is unclear whether the production or distribution of a deepfake without the consent of the person depicted is legally impermissible. This is an area where legislation is not yet clearly defined and needs further clarification, as does the relationship between the general right of personality, the rights under the German Art Copyright Act (KUG), and the General Data Protection Regulation (GDPR).
The KUG could be applicable via the opening clause of Art. 85 GDPR, in which case the question arises as to whether deepfakes are images within the meaning of § 22 KUG. If so, anyone who publishes a deepfake of a person without that person's consent could face cease-and-desist letters and legal action.
Another risk arises for press portals and other content providers who could fall for misinformation. At a time when "fake news" is becoming more common, deepfakes could help false information spread even faster and further. One example would be forged documents reminiscent of the infamous "Hitler diaries"; with today's technological possibilities, such counterfeits can be produced ever more easily.
At the same time, the effort to detect fake images or information is enormous. It takes specialized knowledge and technology to detect and expose deepfakes. This poses a significant challenge to media companies and other content providers who must rely on the dissemination of accurate and verified information.
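To illustrate the kind of technical analysis this detection effort involves, here is one simplified heuristic discussed in the research literature: measuring how much of an image's spectral energy sits in the highest frequencies, since some generation pipelines leave frequency-domain artifacts. This Python/NumPy sketch with synthetic data is purely illustrative; a real detector combines many such signals with trained classifiers, and this single ratio is not a reliable test on its own.

```python
import numpy as np

def high_freq_ratio(image):
    """Share of an image's spectral energy in the highest frequencies.

    Some generated images show unusual energy in the upper spectrum
    (a known, though unreliable, artifact of certain upsampling layers).
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    dist = np.hypot(y - cy, x - cx)          # distance from the DC component
    high = spectrum[dist > min(h, w) * 0.4].sum()
    return high / spectrum.sum()

rng = np.random.default_rng(1)
# A smooth, low-frequency test image vs. one with a flat (noisy) spectrum.
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
noisy = rng.normal(size=(64, 64))
# The smooth image concentrates its energy near the center of the spectrum,
# so its high-frequency ratio is far lower than that of the noisy image.
```

Production-grade detection stacks many such hand-crafted and learned features, which is precisely why the verification effort described above demands specialized knowledge and tooling.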
It is therefore critical that media companies and other content providers are aware of the risks of deepfakes and take steps to prevent their spread. They must be able to detect and expose falsification in order to maintain and strengthen public confidence in their reporting.
Damage to reputation
In addition to violations of personality rights, deepfakes can also constitute a criminal violation of honor. Such a violation may arise from damage to the reputation of the person concerned as a result of the manipulated statements or depictions, or where the person concerned is falsely accused of reprehensible behavior. The production and distribution of deepfakes can therefore have criminal consequences. Among other things, the personal honor of those affected is protected by §§ 185 ff. of the German Criminal Code (StGB), and the highly personal sphere of life by § 201a II StGB. Where financial loss occurs, fraud under § 263 StGB may also come into consideration. Among the ancillary criminal provisions, § 33 KUG, §§ 106, 108 of the Copyright Act (UrhG), and § 42 of the Federal Data Protection Act (BDSG) may be relevant. Grounds for justification will regularly not apply to deepfakes; at most, narrow exceptions are conceivable under the artistic freedom of Art. 5(3) sentence 1 of the Basic Law (GG), for satire among other things.
Liability issues may also arise. Who, for example, is liable for damages if a deepfake containing false information about a supposedly worthwhile investment circulates and an investor relies on it? This could amount to a form of fraud for which the creator of the deepfake could be held responsible. However, identifying and prosecuting perpetrators is often difficult because they can hide behind the anonymity of the Internet.
Another potential problem is the influencing of elections. Just imagine that the day before an election, a video goes viral in which a leading candidate allegedly makes racist remarks or, depending on the intended audience, advocates or demonizes drug use. Such a video could sway voters just before they cast their ballots on election day, and the candidate would have little time to react. This could not only affect the outcome of the election but also undermine confidence in the democratic process.
Finally, in the future, the objection that the video, image or sound file presented is a deepfake could be raised more and more frequently during the taking of evidence in court proceedings. This could undermine the credibility of evidence and make it more difficult to conduct court proceedings.
Since people's faces are recognizable in deepfakes, they constitute personal data. The use of images or videos of a person to create deepfakes can therefore be regarded as processing of personal data within the meaning of the General Data Protection Regulation (GDPR), whose general processing principles apply. In the case of pornographic deepfakes, the intimate sphere of the persons concerned is affected, which means that specially protected categories of personal data are processed.
Another avenue of protection comes into consideration via the GDPR. On this basis, an affected person will regularly be able to demand, among other things, injunctive relief and damages. The problem in the end, as is often the case, is enforcement. Often it will not be known who created a deepfake and first published it on the Internet. Even if the person responsible were known, he or she might be located outside Europe and would therefore be difficult for the German judiciary to reach. This underscores the need for international cooperation and legal frameworks to effectively combat the misuse of deepfakes.
Conclusion and outlook
The originally feared wave of political deepfakes has so far failed to materialize, and whether there will be another such wave at all is disputed among experts. Perhaps more dangerous than deepfakes themselves in the public domain is the general loss of trust that accompanies the emergence of these forgeries.
More important than new laws will be the increasing critical scrutiny of the authenticity of videos, photos and tape recordings. The corresponding learning process throughout society must be continued in order to further increase awareness of this issue.
In this context, it is important to mention the Artificial Intelligence Act recently proposed by the European Commission. This legislative proposal aims to establish harmonized rules for the development, marketing, and use of AI systems in the EU. It follows a risk-based approach and imposes requirements only on AI systems that are likely to pose high risks to fundamental rights and safety. For more information on this proposal, see this blog post.
I plan to publish a separate blog post soon that will discuss in detail the Artificial Intelligence Act and its implications for deepfakes and other AI-related challenges.