- Deepfakes are forgeries created with deep learning that are difficult to distinguish from genuine recordings.
- The technology makes it possible for anyone to create realistic videos without significant expense or expertise.
- The creation of deepfakes raises legal questions regarding personal rights and liability.
- Pornographic deepfakes can cause massive reputational damage and psychological stress for the victims.
- The GDPR could be invoked as a protection mechanism for personal data when creating deepfakes.
- The spread of deepfakes poses challenges for the media and, amid the fake news debate, for public trust in authentic content.
- A learning process for society as a whole is necessary in order to critically question the authenticity of media content.
What are deepfakes?
Deepfakes are forgeries of images, videos or sound recordings generated using artificial intelligence. The technology is based on deep learning, i.e. the trained behavior of artificial neural networks. These networks analyze and imitate specific characteristics of people – such as facial expressions, voice or gestures – so precisely that the result can hardly be distinguished from real material.
Thanks to publicly available software such as “FakeApp”, “DeepFaceLab” or browser-based SaaS services, the creation of such content is now also possible for less tech-savvy users. The democratization of this technology has led to a sharp rise in deepfakes – with far-reaching implications for personal rights, media trust, the protection of intellectual property and public order.
Originally popularized by deepfake pornography on platforms such as Reddit, the fields of application have now broadened considerably: from fake investment videos to political manipulation and synthetic audio simulation. The line between art, satire, technological progress and digital abuse is becoming increasingly blurred.
Legal risks and regulatory gaps
Personal rights violations
The general right of personality, derived from Art. 2 para. 1 in conjunction with Art. 1 para. 1 GG (German Basic Law), protects, among other things, the right to one’s own image and representation. If people are shown in manipulated videos or images without their consent, this regularly constitutes an infringement. This applies in particular to pornographic deepfakes, which violate privacy and human dignity.
Legal bases:
- § 22 KUG (German Art Copyright Act): protection against publication without consent
- § 201a StGB: violation of the highly personal sphere of life through image recordings
- Art. 6 in conjunction with Art. 9 GDPR: processing of biometric or particularly sensitive data
Particularly problematic: in many cases, the mere production of such content is not (yet) explicitly prohibited – only its distribution or commercial use can be prosecuted. Criminalizing the production of deepfakes itself would be worth discussing as a matter of legal policy.
Damage to reputation and defamation
Deepfakes can violate a person’s honor, especially if manipulated content portrays them in a compromising manner or attributes false statements to them.
Relevant standards:
- §§ 185 ff. StGB (insult, defamation, slander)
- § 263 StGB: fraud, in cases of financial loss caused by deception
- § 33 KUG, § 42 BDSG, §§ 106 ff. UrhG: interference with copyright or data protection rights
Satirical or artistic content could, under certain circumstances, be justified by artistic freedom under Art. 5 para. 3 GG. However, this justification is subject to strict scrutiny in each individual case and will regularly be ruled out where there is an intent to deceive.
Liability and potential for deception
Deepfakes harbor a considerable liability risk. If, for example, a video is created that falsely suggests that a celebrity recommends a certain financial product, this can lead to investment decisions – with legal consequences.
This raises questions of civil liability under the principles of tort law (§ 823 BGB) and of indirect deception within the meaning of § 263 StGB, as well as evidentiary problems in civil and criminal proceedings.
In addition, the deepfake objection – i.e. the claim that genuine evidence has been manipulated – could make it increasingly difficult to provide evidence in court in future.
Influencing elections and democratic processes
Deepfakes pose a particular danger in a political context. Even a short, manipulative clip – published shortly before an election – can potentially influence voting behavior. This possibility puts the principle of democracy to a new test. In extreme cases, this undermines trust in democratic institutions.
One legislative response could be to extend criminal law to include the offense of deliberately influencing elections through synthetic media – analogous to Section 108 of the German Criminal Code (electoral fraud).
Data protection assessment
Biometric data (face, voice) is regularly processed in deepfakes. As this is personal data within the meaning of Art. 4 No. 1 GDPR, the creation and use of such content is regularly subject to strict conditions. Processing can only take place on the basis of one of the legal bases specified in Art. 6 or Art. 9 GDPR.
In particular, those affected are entitled to:
- Right to information (Art. 15 GDPR)
- Right to erasure (Art. 17 GDPR)
- Claims for damages (Art. 82 GDPR)
However, one practical problem remains: in many cases, authors and distributors are anonymous or based outside Europe. This makes it considerably more difficult to enforce rights. Clear international framework conditions are therefore required, ideally via a multilateral agreement or the planned AI regulation at EU level.
Media, platforms and the responsibility of third parties
Platform operators, hosting service providers or media companies can also become liable if they knowingly or grossly negligently disseminate deepfakes. According to the current legal situation (in particular Sections 7-10 TMG and, in future, the Digital Services Act), platforms must take action if they become aware of illegal content in order to avoid liability.
Media companies, in turn, should invest more in deepfake detection technologies – also for professional reasons – and sharpen editorial standards in order to maintain their role as credible sources of information.
Technical detection and regulation
The development of methods for detecting deepfakes (e.g. using watermarks, hash values or AI-based detection systems) is just as important as their legal assessment. Future regulation could, for example, stipulate that every file created or modified using AI must be labeled as such.
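The hash-value approach mentioned above can be illustrated with a minimal sketch: if a publisher releases a cryptographic hash of the original file, anyone can later verify that a copy has not been altered. The function names are illustrative, and this only detects that a file was modified, not whether the modification was a deepfake.

```python
import hashlib


def file_sha256(path: str) -> str:
    """Compute the SHA-256 hash of a media file, reading it in chunks
    so even large video files do not need to fit into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def is_unmodified(path: str, reference_hash: str) -> bool:
    """Compare a file's hash against a published reference hash.
    Any change to the file, however small, changes the hash."""
    return file_sha256(path) == reference_hash
```

Such integrity checks complement, but do not replace, AI-based detection: they prove nothing about content that was synthetic from the start, which is why labeling obligations are discussed as well.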
In the European context, particular reference should be made here to the Artificial Intelligence Act (AIA), which pursues a risk-based regulatory approach. Deepfakes could fall under the category of “high-risk AI”, especially if they jeopardize public order, electoral processes or fundamental rights.
Conclusion and outlook
Deepfakes are far more than a technical phenomenon. They affect key legal protections such as honor, privacy, democratic participation and property interests. The legal system is faced with the challenge of applying existing standards to new constellations – and tightening them up where necessary.
The social handling of synthetic media requires not only a legal framework but also education, media literacy and technical countermeasures. The question will not only be what is real – but also what is believed.