On subscription platforms such as OnlyFans, AI workflows are becoming the production standard: face and body retouching, style transfer, background swapping, upscaling, voice clones for multilingual clips and even fully synthetic avatars. The leverage is clear: higher production frequency, variant testing, anonymization options, international reach. At the same time, the risk grows if it remains unclear where “just a beauty filter” ends and a “synthetic medium” with mandatory labeling begins, if consent is not formulated in an AI-specific manner, or if edits violate personal rights. This article summarizes the legal framework for Germany and translates it into practicable processes for agencies and models who want to publish AI-generated or AI-edited images, photos and videos in a legally compliant manner.
Transparency: when AI processing must be labeled
The EU AI Act requires clear recognizability of artificially created or significantly manipulated content. The distinction is relevant in practice: not every cosmetic correction triggers a labeling obligation. The decisive factor is whether the overall impression of the recording is “substantially” based on AI, or has been altered by AI in such a way that the audience assumes a real event that never took place. Examples of cases that regularly require labeling: realistic-looking face swaps, deepfake videos with synthetic facial expressions/lip sync, fully synthetic avatars, extensive body morphs (body shape, tattoos, scars, facial geometry), synthetic voices in personalized clips.
Labeling must take place where the risk of deception arises: in the asset itself or directly at the publication touchpoint. For videos, a short on-asset note (opening or closing frame) is recommended; for audio snippets, a short spoken note at the beginning. The note should also appear in the description, caption and paywall preview, if applicable. Formulations should be concise, clear and free of marketing phrases, such as: “This clip contains AI-generated or AI-edited elements.” or “Voice/parts of the face are AI-synthesized.” The advertising label (commercial communication) remains separate from this; the two notices serve different purposes.
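To keep this wording identical across asset, caption and preview, the label texts can be maintained centrally instead of retyped per post. A minimal sketch in Python; the touchpoint names and texts are illustrative choices, not wording prescribed by the AI Act:

```python
# Minimal sketch: central registry of AI-disclosure texts per touchpoint,
# so the same wording appears on the asset, in the caption and in previews.
# All identifiers and texts are illustrative, not legally mandated wording.

AI_LABELS = {
    "on_asset": "This clip contains AI-generated or AI-edited elements.",
    "caption": "Note: parts of this content are AI-generated or AI-edited.",
    "audio_intro": "Note: this voice message contains AI-synthesized audio.",
    "paywall_preview": "Preview contains AI-edited material.",
}

def label_for(touchpoint: str) -> str:
    """Return the standardized disclosure text for a touchpoint;
    fail loudly instead of silently publishing without a label."""
    try:
        return AI_LABELS[touchpoint]
    except KeyError:
        raise ValueError(f"No AI label defined for touchpoint '{touchpoint}'")
```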
Personal rights: image, voice, identity proximity
The right to one’s own image (§§ 22, 23 KUG) requires consent for every publication of recognizable persons. AI editing does not change this: legally, it remains an edit of the source material. Images and videos may not be published without consent, not even in “enhanced” form. If features are changed in such a way that another real person is suggested, the general right of personality and the right to a name (§ 12 BGB) also come into play: look-alike productions that create recognizability or suggest attributions are legally precarious, especially in sexualized contexts.
The voice is protected as an expression of personality, and voice clones require specific consent. Is a standard model release covering “sound recordings” sufficient? Generally not. A voice clone can be inadmissible even without any name being mentioned, if the voice is recognizable or creates the impression that the real person actually recorded the content. Distortions within the meaning of the author’s moral rights (§ 14 UrhG), such as drastic reinterpretations of the content, can additionally trigger claims for injunctive relief and damages.
Historical or prominent personalities are a special case. The commercial components of the personality right survive death; advertising use of a “digital double” without the consent of the legal successors may therefore be inadmissible. Such references are out of place on subscription platforms anyway; even distant allusions have the potential to escalate.
Rethinking consent: the AI rider for model release
Classic model releases regulate the recording, editing and publication of image/sound material. AI workflows require more precise modules. A modular AI rider is recommended, added to the existing contract, which regulates in particular (a machine-readable mirror of these modules is sketched after the list):
– Capture and use of voice and facial features for the purpose of AI generation/editing, including face/body morphing, lip sync, style transfer.
– Media, territories and terms of use; for subscription platforms typically “online worldwide”, but with clear archive and re-upload rules.
– Approvals for editing and context limits: no political statements, no health- or business-damaging messaging, no disclosure to third parties outside the agreed platforms; option for preview/acceptance.
– Revocation and takedown mechanism: practicably designed, with reasonable deadlines, without blocking publication altogether.
– Optional surcharges for AI derivatives (e.g. additional remuneration for voice clone usage or fully synthetic avatars).
– Labeling obligation: commitment that the use of AI is made recognizable on the asset/posting in accordance with the AI Act.
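Where production is systematized, it can help to mirror the signed rider in a machine-readable record that automated checks consult before publication. A minimal sketch in Python; all field names and permission categories are illustrative assumptions, not contract language, and the signed contract remains authoritative:

```python
# Minimal sketch: machine-readable mirror of the AI rider's key modules,
# useful for automated pre-publication checks. Field names are illustrative;
# the signed contract remains the legally binding document.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiderScope:
    model_id: str
    face_use: bool            # face morphing / face swap permitted
    voice_clone: bool         # voice cloning permitted
    fully_synthetic: bool     # fully synthetic avatar permitted
    platforms: set[str] = field(default_factory=set)  # agreed channels
    term_end: date | None = None  # None = until revocation
    context_limits: list[str] = field(default_factory=list)  # e.g. "no political statements"
    revoked: bool = False

def permits(scope: AIRiderScope, use: str, platform: str) -> bool:
    """Check a planned use against the recorded rider scope."""
    if scope.revoked or platform not in scope.platforms:
        return False
    return {"face": scope.face_use,
            "voice_clone": scope.voice_clone,
            "synthetic_avatar": scope.fully_synthetic}.get(use, False)
```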
It is important to clearly separate the rights to the source material (photo/video/audio) from the newly created AI outputs. Producers should obtain exclusive rights of use to the outputs, or at least a comprehensive, sublicensable license. Models should know the limits within which the synthetic replica may circulate. Both sides benefit from predictability instead of blanket clauses.
Data protection: biometric references, legal bases, deletion concepts
As soon as AI workflows operate on material that makes people identifiable, the GDPR applies. Pure style filters without any personal reference generate no personal data; retouching and face/voice models do. Biometric data enjoys special protection when it is processed for unique identification. In practice this means: consent as the legal basis (Art. 6 para. 1 lit. a GDPR; for biometric identification, explicit consent under Art. 9 para. 2 lit. a GDPR), with particular attention to necessity and purpose limitation.
Concrete to-dos in production: separate storage locations for raw material and published assets; short retention periods for training/reference data; documented data processing agreements (Art. 28 GDPR) with tool providers whenever personal data moves to the cloud; a prohibition on using third-party data for training without separate permission. Transparent information for models about which tools are used, and in what form, reduces misunderstandings and strengthens the effectiveness of consent.
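The short retention periods are easiest to keep when deletion is automated rather than manual. A minimal sketch of a scheduled deletion sweep, assuming reference data lives in its own directory; the 30-day period and the path are illustrative assumptions, not statutory values:

```python
# Minimal sketch: scheduled deletion sweep for training/reference data.
# Assumes reference material sits in its own directory, separate from
# published assets; the 30-day retention period is an illustrative value.

import time
from pathlib import Path

RETENTION_DAYS = 30
REFERENCE_DIR = Path("data/reference")  # hypothetical layout

def sweep_expired(reference_dir: Path = REFERENCE_DIR,
                  retention_days: int = RETENTION_DAYS) -> list[Path]:
    """Delete reference files older than the retention period and
    return the deleted paths for the deletion log."""
    cutoff = time.time() - retention_days * 24 * 3600
    deleted = []
    for path in reference_dir.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            deleted.append(path)
    return deleted
```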
Avoiding deception: distinguishing “optimization” from “synthetic”
From a legal and reputational point of view, the decisive factor is whether the audience understands the content as a documentary record of reality. Gentle optimizations (skin smoothing, color balance, noise reduction) hardly change the documentary character. Deepfake-like interventions become sensitive, and subject to labeling, as soon as bodily, facial or situational features are altered in such a way that a real event is simulated. Illustrative examples: a voice message that appears to have been recorded live but was produced with a voice clone, or a scene relocated to a place where the recording never took place. This is where the logic of transparency pays off: labeling avoids disappointment, especially in areas where trust in authenticity is part of the value creation.
In sexualized contexts, the following also applies: content that falsely suggests minors (e.g. through AI rejuvenation) is strictly taboo; the slightest doubt leads to a takedown. Realistic fakes of real third parties without consent are not only actionable under civil law but also sit in the red zone of planned criminal provisions. The line is clearer than often assumed: AI may aestheticize, anonymize and creatively stylize, but not deceive, misappropriate identities or interfere with the rights of third parties.
Platform rules and DSA mechanics: notice-and-action as daily routine
Subscription platforms set their own community guidelines for synthetic content. The common denominator: consent from all recognizable persons, a ban on depictions of minors, a ban on deceptive deepfakes and, increasingly, labeling requirements for AI assets. The DSA underpins moderation legally: reports must be processed efficiently, decisions must be justified and, on request, reviewed internally. For professional accounts, an internal SOP is therefore recommended to prioritize reports, attach supporting documents (consents, labeling screenshots, tool evidence) and document decisions in an audit-proof manner. This does not replace a legal review, but it does create speed.
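Such an SOP can be backed by a simple triage record that forces every notice into a priority class and keeps the supporting documents attached. A sketch with illustrative priority rules and field names; none of this is prescribed by the DSA:

```python
# Minimal sketch: triage record for incoming notices under the internal SOP.
# Priority rules and field names are illustrative, not taken from the DSA.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class NoticeRecord:
    notice_id: str
    category: str                 # e.g. "minor_suspicion", "missing_consent", "missing_label"
    evidence: list[str] = field(default_factory=list)  # consents, label screenshots, tool logs
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    decision: str | None = None   # documented outcome; reasoning kept separately

    @property
    def priority(self) -> int:
        """Lower number = handle first. Any suspicion involving minors
        outranks everything; consent issues outrank label issues."""
        order = {"minor_suspicion": 0, "missing_consent": 1, "missing_label": 2}
        return order.get(self.category, 3)

# Usage: sort the queue so the most sensitive reports are handled first.
# queue.sort(key=lambda n: (n.priority, n.received_at))
```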
Production pipeline as a compliance design
Pre-production: Determine for each motif whether optimization, substantial manipulation or fully synthetic content is planned; the closer to real people, the higher the requirements for consent and labeling. Choose tools with clear commercial licenses; check data processing agreements for cloud services. Have releases and AI riders signed before filming/production; define the approval and revocation process.
Production: Build the on-asset label into the templates; establish secure workflows for prompt/parameter logs and versioning; require a human final check before upload (four-eye principle for face swaps and voice clones). Carefully check sensitive features (tattoos, scars, references to third parties).
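The prompt/parameter log can be as lightweight as an append-only JSON-lines file written at render time, so every published version can be traced back to its inputs and its reviewer. A minimal sketch; field names, the log location and the JSONL format are illustrative choices:

```python
# Minimal sketch: append-only prompt/parameter log per asset, written at
# render time so every published version can be traced back to its inputs.
# Field names and the JSONL format are illustrative choices.

import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("logs/generation_log.jsonl")  # hypothetical location

def log_generation(asset_path: Path, tool: str, prompt: str,
                   params: dict, reviewer: str) -> None:
    """Record tool, prompt, parameters, reviewer and a hash of the
    output file, so a version can be matched to its log entry later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "asset": str(asset_path),
        "sha256": hashlib.sha256(asset_path.read_bytes()).hexdigest(),
        "tool": tool,
        "prompt": prompt,
        "params": params,
        "reviewed_by": reviewer,  # second reviewer per the four-eye check
    }
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```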
Publication: Standardized wording for the AI label at all touchpoints (asset, caption, landing page, preview). No diluting paraphrases. Keep labels consistent for serialized content.
Post-publication: Process reports and complaints swiftly; take content offline as a precaution in the case of substantial allegations, then review and re-decide on the basis of the supporting documents. Document every step: consents, label screenshots, review notes, decisions.
Archiving: Keep raw data, consents, AI riders, tool licenses, prompt/parameter logs, approvals and published final versions in a structured dossier. This speeds up platform reviews and reduces liability risks.
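The dossier is easiest to hand over if a manifest lists the required document types per asset and flags gaps before publication. A minimal sketch mirroring the list above; the document-type names are illustrative:

```python
# Minimal sketch: completeness check for the per-asset compliance dossier.
# The required document types mirror the archiving list above;
# the names themselves are illustrative.

REQUIRED_DOCS = {
    "raw_material",
    "model_release",
    "ai_rider",
    "tool_license",
    "generation_log",
    "approval",
    "published_final",
}

def missing_documents(dossier: dict[str, str]) -> set[str]:
    """Return the document types still missing from an asset's dossier;
    an empty set means the dossier is ready for a platform review."""
    return REQUIRED_DOCS - dossier.keys()

# Usage: block publication while missing_documents(dossier) is non-empty.
```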
Typical mistakes and how to avoid them
“The model release automatically covers AI.” Usually not. Without explicit clauses on face/body morphing, voice clones and context limits, the risk is high.
“An AI note in the caption is enough.” Not if the asset looks deceptively real. The note belongs in the asset itself or directly next to it.
“No naming, so no problem.” Recognizability is enough. Look-alike productions and characteristic voices can infringe personality rights, even without names.
“AI rejuvenation is just a filter.” Any visual impression that suggests a minor must be strictly avoided; the slightest doubt leads to a stop.
“The tool has a ‘commercial license’, so everything is safe.” Only if the license clearly regulates scope, editing, reuse and any watermarks/attribution, and if no prohibited training data was used. Cloud processing additionally requires GDPR-compliant data processing agreements.
Conclusion: Scale creatively without breaking the law
AI shifts production boundaries, not basic legal values. Those who take transparency seriously, define consents specifically for AI, close the chain of rights and master the moderation and documentation routine will publish AI-generated and AI-edited content in a legally compliant manner, even on sensitive subscription platforms. The operational difference lies in the preparation: labeling “by design”, AI riders instead of blanket clauses, clear SOPs for notices, and evidence that can be produced in seconds. This creates reliability for agencies, models and platforms, and turns AI from a permanent construction site into a competitive advantage.