Synthetic faces, simulated voices and AI testimonials have arrived in advertising practice. Virtual creators act as brand ambassadors, commercials are localized into several languages using voice clones, and variant tests run almost in real time. At the same time, the legal framework is becoming more stringent. The EU AI Act establishes transparency obligations for artificially created or significantly manipulated content; this is flanked by national civil, media and unfair competition law as well as a criminal-law backstop against abusive deepfakes. In Germany, the general right of personality, the right to one’s own image under Sections 22 and 23 KUG, the protection of names (Section 12 BGB), the prohibition of misleading advertising (Section 5 UWG) and the GDPR remain the central points of reference. What is needed is not an isolated disclaimer, but a consistent system of labeling, consent, rights chain and auditable documentation along the entire creative pipeline. Integrating these elements into the concept and the contracts before the first prompt combines creative scalability with legal resilience and brand security.
Regulatory framework and timetable: EU transparency, national civil and criminal law
The EU AI Act is an EU-wide set of rules with staggered application dates. For advertising, the obligation to make interactions with AI and synthetic or significantly manipulated media clearly recognizable is crucial. This covers fully synthetic representations as well as realistically altered recordings. Transparency under EU law overlaps with national tort and media law; it does not replace it. Added to this is the criminal-law level via a planned Section 201b StGB, which is intended to criminalize the production and dissemination of deceptively real deepfakes without the consent of those affected. The result is a multi-level framework: EU transparency as a minimum standard; civil-law defense and damages regimes for violations of personality and name rights; competition-law sanctions for misleading forms of advertising; criminal penalties for abusive extreme cases. The application roadmap of the AI Act provides for the transparency obligations to apply early, with guidelines fleshing out the details. Labeling standards should therefore be embedded permanently in processes rather than improvised project by project.
Labeling in ads and limits under unfair competition law
Labeling must take place where the risk of deception arises: in the asset itself or directly at the point of contact. A reference in the general terms and conditions is not enough. Moving-image formats carry the responsibility of making the artificial origin visible before the impression of authenticity has solidified. On-asset overlays, short, easy-to-read inserts and flanking information in accompanying copy, landing pages and ad libraries form a resilient standard. It must be clarified at the design stage whether fully synthetic content will be created, whether realistic material will be significantly edited or whether an AI system will interact with users. The closer an asset gets to real people, the stronger the additional protection mechanisms that apply. Under unfair competition law, the decisive factor is whether the content is misleading. If a synthetic testimonial is presented as real, this regularly constitutes a breach of the prohibition of misleading advertising. Advertising labeling and AI labeling pursue different purposes and must be considered cumulatively: the first clarifies the commercial character, the second the artificial origin. Neither may disappear into the small print. Beauty or performance retouching may also be subject to labeling requirements if it significantly changes the overall impression; the distinction requires an honest assessment of fidelity to reality.
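To make the cumulative nature of the two disclosures operational, teams can track them as structured metadata per asset and block publication while anything is missing. The following is a minimal sketch under the assumption of a hypothetical in-house release workflow; all field and function names are illustrative, not a legal or platform standard.

```python
# Illustrative sketch (not a legal standard): tracking the two separate
# disclosure duties -- commercial character and artificial origin -- per asset.
from dataclasses import dataclass

@dataclass
class AssetDisclosure:
    asset_id: str
    is_fully_synthetic: bool          # fully AI-generated content
    is_materially_edited: bool        # realistic material significantly altered
    ad_label_on_asset: bool           # e.g. "Ad"/"Anzeige" visible on the asset
    ai_label_on_asset: bool           # e.g. "AI-generated" overlay or insert
    ai_label_in_copy: bool            # disclosure repeated in captions/ad library

def release_blockers(a: AssetDisclosure) -> list[str]:
    """Return open items that should block publication of the asset."""
    issues = []
    if not a.ad_label_on_asset:
        issues.append("missing ad label on asset")
    if (a.is_fully_synthetic or a.is_materially_edited) and not a.ai_label_on_asset:
        issues.append("missing on-asset AI label")
    if (a.is_fully_synthetic or a.is_materially_edited) and not a.ai_label_in_copy:
        issues.append("AI disclosure missing in accompanying copy/ad library")
    return issues

print(release_blockers(AssetDisclosure("spot-042", True, False, True, False, True)))
# -> ['missing on-asset AI label']
```

A check of this kind does not replace legal review, but it keeps both labels from silently dropping out between versions.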
Personality rights, image, voice, name and data protection
The promotional use of a person’s likeness generally requires consent. AI-manipulated material legally remains an adaptation with reference to the original; without consent, it constitutes an infringement of the right to one’s own image and of the general right of personality. The integrity component protects against distortions and gross misrepresentations, even if they are technically perfectly simulated. In the case of celebrities, the pecuniary components of the right of personality continue to apply after death; legal successors can defend against unauthorized advertising exploitation. Even without a specific statutory provision, the voice is protected as an expression of personality. A voice clone may already be inadmissible if the person is recognizable or an advertising endorsement is implied. Without specific, informed consent, the commercial use of a voice double is risky, especially if it has the character of a recommendation. The name is protected against unauthorized appropriation under Section 12 of the German Civil Code (BGB); combinations of name, voice and synthetic image material therefore entail cumulative risks. Under data protection law, as soon as personal data is processed – such as voice samples or facial recordings used to create avatars – the basic principles of the GDPR apply. Biometric data is particularly sensitive; consent, purpose limitation, data minimization, deletion concepts and the clear contractual involvement of processors are mandatory. The AI Act creates transparency, while the GDPR remains the foundation. Both levels run in parallel and require documentation that maps both the rights chain and data protection compliance.
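Such documentation can be kept quite lean. The following sketch assumes a hypothetical internal record format for biometric source material; the field names and the retention check are illustrative assumptions, not a GDPR template.

```python
# Illustrative sketch only: documenting biometric source material (voice samples,
# facial recordings) for avatar creation with purpose limitation and a deletion date.
from dataclasses import dataclass
from datetime import date

@dataclass
class BiometricProcessingRecord:
    data_subject: str
    data_category: str          # e.g. "voice samples", "facial recordings"
    purpose: str                # e.g. "training of a licensed voice clone"
    legal_basis: str            # e.g. "explicit consent, Art. 9(2)(a) GDPR"
    processors: list[str]       # contracted processors (Art. 28 GDPR)
    consent_reference: str      # link to the signed consent/release
    delete_after: date          # agreed deletion date

    def is_overdue(self, today: date) -> bool:
        """True if the material should already have been deleted."""
        return today > self.delete_after

record = BiometricProcessingRecord(
    data_subject="talent-017",
    data_category="voice samples",
    purpose="training of a licensed voice clone for localized spots",
    legal_basis="explicit consent, Art. 9(2)(a) GDPR",
    processors=["voice-clone vendor GmbH"],
    consent_reference="release-2025-017",
    delete_after=date(2026, 12, 31),
)
print(record.is_overdue(date.today()))
```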
Contract design and rights chain: consents, creator deals, tool licenses
If real talent is to be synthetically extended, a classic image-use release is not sufficient. Explicit consent is required that expressly covers digital twins, voice clones, de-aging and comparable AI derivatives. In substance, this concerns the capture of voice and facial characteristics for the purpose of generation; the media, territories and duration of use; and editing, scaling, language versions and limits in sensitive contexts. Practical release and control mechanisms, such as preview rights and graduated approval processes, reduce conflicts during operation. An appropriate remuneration model for AI derivatives creates acceptance and predictability. Creator and influencer contracts should define in stages whether and to what extent synthetic replicas are permitted: from the complete exclusion of AI replicas, to strictly limited edits such as lip-syncing for localization purposes, to the use of virtual doubles or fully synthetic voices in return for additional remuneration, strict approvals and a precisely defined purpose. Blanket all-rights clauses carry risks under the law on general terms and conditions; clear purposes and areas of application increase the stability of the rights portfolio. In the case of fully synthetic avatars from AI tools, the advertising license depends on the tool’s terms and conditions. Not every provider grants clean commercial rights, and model releases remain necessary as soon as real reference data has been used. Without a reliable flow of rights, breaks in the rights chain loom, especially in international roll-outs and in secondary reuse in archives and ad libraries. An AI transparency clause in the production contract that makes on-asset labeling mandatory harmonizes legal and creative objectives.
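The staged consent model can also be mirrored in tooling, so that a planned use is checked against the agreed scope before production starts. The sketch below assumes hypothetical tier names and fields; it is an illustration, not contract language.

```python
# Hedged sketch: encoding staged consent scopes (no AI replicas, narrowly limited
# edits, full digital doubles) so a planned use can be checked against the contract.
from dataclasses import dataclass, field
from enum import Enum

class AITier(Enum):
    NONE = "no AI derivatives permitted"
    LIMITED_EDITS = "lip-sync/localization edits only"
    FULL_DERIVATIVES = "digital twin and voice clone permitted"

@dataclass
class TalentConsentScope:
    talent_id: str
    tier: AITier
    territories: set[str] = field(default_factory=set)
    valid_until: str = ""                 # ISO date of expiry
    approval_required: bool = True        # preview/approval before release

def use_permitted(scope: TalentConsentScope, use: str, territory: str) -> bool:
    """Rough coverage check for a planned use ('lip_sync' or 'voice_clone')."""
    if scope.tier is AITier.NONE:
        return False
    if use == "voice_clone" and scope.tier is not AITier.FULL_DERIVATIVES:
        return False
    return territory in scope.territories

scope = TalentConsentScope("creator-09", AITier.LIMITED_EDITS, {"DE", "AT"}, "2026-06-30")
print(use_permitted(scope, "lip_sync", "DE"))      # True
print(use_permitted(scope, "voice_clone", "DE"))   # False -> needs renegotiation
```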
Production process as compliance design: from discovery to post-campaign
Legal certainty arises when compliance is not a final checkpoint but a design principle. It starts with a clear use-case definition: fully synthetic content, significant processing of real material or interaction with users. This is followed by mapping the legal layers: AI Act transparency, personality and image rights, GDPR, UWG and, where applicable, criminal law. During sourcing, tool licenses, talent releases, creator terms, music and trademark rights must be secured; data protection roles are clearly assigned, data flows documented and retention periods defined. In production, on-asset labels are firmly scheduled, and prompts and parameters are versioned and archived in secure workflows. Human final checks prevent blind spots, especially with sensitive testimonials. On the distribution side, platform policies must be observed; accompanying copy and ad libraries support the disclosure. Complaints and takedown processes ensure that signals from the market are handled efficiently; incident response plans define responsibilities in the event of mislabeling or rights conflicts. Once a campaign has been completed, evidence-preserving archiving forms the anchor: approvals, label screenshots, prompt logs, parameter states and versions belong in an audit-proof dossier. Typical risk scenarios can thus be mitigated in advance. A generated celebrity clone without authorization and labeling accumulates personality, name and unfair competition violations and can reach a criminal dimension. Subtle facial refinements without disclosure jeopardize trust and compliance. Unclear voice-provider licenses or missing GDPR documentation take their toll in international use. User-generated ads with third-party AI avatars require clear UGC terms of use, warranties and, at a minimum, pre-moderation. In regulated sectors such as healthcare and financial services, mandatory information, warnings and youth protection standards must also be ensured in synthetic formats; synthetic children’s voices or avatars require particular caution and robust age filters.
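For the audit-proof dossier, each generation run can be archived as a single record. The following sketch assumes a hypothetical in-house format and uses a content hash as an integrity anchor; none of this is a prescribed standard.

```python
# Illustrative sketch: archiving prompts, parameters, approvals and label evidence
# per generation run, with a hash so later tampering with the record is detectable.
import hashlib
import json
from datetime import datetime, timezone

def dossier_entry(asset_id: str, prompt: str, model: str,
                  parameters: dict, approvals: list[str],
                  label_screenshot: str) -> dict:
    """Build one archivable record for a single generation run."""
    entry = {
        "asset_id": asset_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "model": model,
        "parameters": parameters,
        "approvals": approvals,                # e.g. legal, brand, talent sign-off
        "label_screenshot": label_screenshot,  # path/ID of the stored evidence
    }
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return entry

print(dossier_entry("spot-042", "testimonial, synthetic presenter, de-DE",
                    "video-gen-model-x", {"seed": 1234, "duration_s": 20},
                    ["legal", "brand"], "evidence/spot-042-label.png")["sha256"])
```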
Outlook and practical guard rails
The combination of AI Act labeling, civil-law protection of personality and a possible criminal-law provision is noticeably tightening the framework for AI-supported advertising. Tool providers are professionalizing licenses, watermarks and content credentials; technical standards for provenance marking will gain momentum. The operational core, however, remains unchanged: disclosure is not a fig leaf but a design element. A uniform, cross-platform labeling standard in the asset and in the accompanying copy prevents inconsistencies. A modular consent suite covers digital twins, voice clones, language versions, revocation and review. The rights chain and data protection run in sync, documented via data processing agreements, third-party notices and traceable data sources. A clear policy against the use of real people without consent forms the inner guard rail. Finally, professionalizing evidence management – with prompt and parameter logs, versioning, approvals and label evidence – not only avoids disputes but also speeds up approvals and audits. The result is scalable creative processes that translate legal obligation into creative freedom and measurably increase brand resilience.
























