The rapid development of artificial intelligence (AI) and virtual technologies means that virtual employees and AI influencers are increasingly becoming a reality. Companies are experimenting with digital avatars as service employees, synthetic presenters in videos and even fully AI-generated influencers on social media platforms. Content such as imitated voices (voice cloning) or computer-generated advertising characters is blurring the line between human and machine. These innovations offer enormous opportunities – from 24/7 availability and cost savings to novel marketing strategies – but they also raise complex legal questions and open up ethical gray areas.
This article will provide an in-depth legal analysis of how such virtual actors and synthetic content are currently to be legally classified. The focus is on the legal framework in the EU and in Germany in particular. In this context, media law requirements (e.g. from the Telemedia Act and the Interstate Media Treaty), civil liability issues, personal rights, trademark rights, data protection and competition law will be examined. The applicability of the upcoming EU AI Act (AI Regulation) will also be discussed.
We also compare international developments in other jurisdictions, particularly China – which is known for its use of virtual newsreaders and strict regulation – and the US, where the use of deepfakes, AI voices and digital actors is primarily governed by existing legal doctrines such as the right of publicity and privacy law, supplemented by individual new statutes, mostly at state level.
Finally, we will look at specific areas of application and risks, such as AI influencers on TikTok and Instagram (with questions of liability for misinformation or personality violations), AI-generated content on platforms such as OnlyFans (including virtual models and synthetically generated scenes) and corporate law issues relating to business models based entirely on virtual characters. The aim is to present opportunities and risks in a balanced way – on the one hand, to show the possibilities of modern AI strategies for start-ups, media companies and agencies, but on the other hand, to clearly warn against possible legal violations or ethical pitfalls.
Legal framework in the EU and Germany
In European and German law, there is already a network of regulations that apply to virtual actors and AI-generated content – even if these rules were often not created explicitly for AI. In many cases, existing legal principles are applied by analogy: an AI avatar or virtual influencer is not a legal subject in its own right, but its operators and developers must comply with existing laws. We look at the most important areas below.
Media law requirements (German Telemedia Act and Interstate Media Treaty)
Virtual influencers and moderators typically appear on online platforms or websites. They are therefore subject to media and telemedia law. In Germany, the Telemedia Act (TMG) and the Interstate Media Treaty (MStV) in particular regulate the conditions under which content must be published and labeled on the internet.
1. classification as telemedia or broadcasting:
AI-generated content on the internet – be it a blog post by a virtual employee or a video by an AI presenter – is generally considered telemedia. Telemedia are electronic information and communication services that do not constitute broadcasting. For example, an Instagram or TikTok profile of a (virtual) influencer is a telemedia service. Broadcasting regulations (such as licensing requirements for broadcasters) would only come into play if a virtual presenter were to offer a linear program with journalistic and editorial content that is accessible to a broad audience according to a broadcasting schedule. A 24/7 livestream with AI avatar news presenters could theoretically be classified as broadcasting and require a license under the MStV. However, the threshold is high: the content of virtual influencers is typically available on demand and is not organized as a full-fledged broadcast, so no license is required. Nevertheless, the state media authorities also monitor telemedia and intervene if media law obligations are violated.
2. imprint and provider responsibility:
For business-related telemedia, usually offered for a fee, the imprint obligation according to § 5 TMG applies. This means that behind every professionally operated website or social media presence (including that of a virtual influencer), a responsible service provider must be clearly identified by name and address. An AI influencer cannot be a provider themselves – this role is fulfilled by the company or person operating the account. Likewise, Section 18 MStV requires a person responsible for the content to be named for journalistic-editorial telemedia (which are suitable for contributing to the formation of opinion). This means that if a virtual news anchor or blog author is acting, a real person responsible must be named in the background who is liable for legal violations, for example. Start-ups that work with seemingly autonomous avatars must not succumb to the misconception that they can remain anonymous in the background – transparency about the operator is required by law.
3. identification of advertising and commercial communication:
Both the TMG and the MStV stipulate that advertising content must be clearly identifiable as such. Section 6 TMG requires service providers to clearly label commercial communication and to make identifiable the person or company on whose behalf the advertising is made. Similarly, Section 8 (1) MStV requires that advertising must be easily recognizable as such and clearly separated from other content. For social media, this means: if a (virtual) influencer presents a product in a post and does so for the advertising purposes of a company, the advertising character of the post must not be disguised. In practice, advertising labeling is therefore necessary (e.g. using clear terms such as “ad” or common hashtags such as #advertising or #ad). This applies equally to virtual and human influencers – an AI figure does not enjoy any special status here.
Influencer marketing has been a particular focus of recent rulings by the Federal Court of Justice (BGH) and of the legislator. In 2021, the BGH clarified in several decisions (the so-called influencer decisions) under which conditions social media posts constitute unfair, disguised advertising. In addition, Section 5a (4) UWG was introduced in 2022, which expressly stipulates that concealing the commercial purpose of an act is misleading and therefore unfair, unless the commercial purpose is immediately apparent from the circumstances. In practice, this means that if a virtual influencer posts a branded product – for example, because the operator receives something in return or wants to promote its own image or brand – the commercial purpose must be disclosed unless it is immediately obvious to the average user. In the case of purely private, non-sponsored statements (which are rare for a company avatar), no labeling would be required. However, as AI influencers are usually created by companies for marketing purposes, an intention to advertise will regularly be assumed – and with it the labeling obligation.
4. separation requirement and staging:
The separation principle is also relevant under media law: editorial content and advertising must not be mixed. Virtual moderators or avatars that read out news, for example, may not subtly mix this news with advertising without it being labeled. Caution is required precisely because AI avatars can appear deceptively real: An avatar in a YouTube video that first provides factual information and then praises a product “on the side” can violate the separation requirement. Companies should ensure clear visual or content-related separation, for example by displaying “advertising” in the video when the virtual character switches from information mode to advertising mode.
In addition, an extremely realistic-looking AI presenter who is mistaken for a human by the audience could raise questions of deception – more on this later under the AI Act. The Interstate Media Treaty requires transparency as to who is behind a medium. As a rule, users will assume that a social media profile of an attractive young person is also operated by such a real person. If it turns out that this is a fictional character, the audience may feel deceived. Although there is currently no explicit obligation to label the artificiality of a character as such, this will become relevant in future under European law (AI Act). Until then, it is advisable to disclose that the avatar is a virtual person for reasons of trust – especially if the avatar interacts with users (e.g. in comments or chats).
Civil liability for AI-generated content
A key issue is liability for the statements and actions of a virtual character. Since an AI or avatar is not a legal entity itself, the question arises: who is liable if an AI influencer insults someone, spreads false claims or commits other legal violations? Under German civil law, this question can be answered relatively clearly: the natural or legal persons behind the AI influencer are liable – i.e. typically the company, agency or operator that uses the AI avatar.
1. own content vs. third-party content:
Under German law (and EU law, e.g. the E-Commerce Directive), a distinction is made between own content and third-party content online. Anyone who creates content themselves or adopts it as their own is generally liable without limitation for its legality. A service provider does not have to check third-party content (e.g. user comments) in advance, but may have to remove it once there are indications of legal violations (notice and takedown). Applied to AI content: texts or posts that a virtual employee generates autonomously are treated by the law as the operator’s own content, because the operator uses the avatar as a tool to distribute content. A company cannot claim that “the AI said this on its own authority, we have nothing to do with it”. The statements of the virtual agent are made in the interest and within the sphere of the operator – comparable to a human employee, for whose statements the employer may also be liable (e.g. via vicarious liability under Section 831 BGB or on the basis of its own organizational fault).
Although Section 831 BGB (liability for vicarious agents) strictly speaking only applies to human assistants, if an AI figure makes mistakes, the company’s own breach of duty will usually be assumed (e.g. inadequate monitoring of the AI or negligent publication). In short: companies bear the full legal risk for what their AI influencers say.
2. violation of personal rights and damage to reputation:
It is conceivable that AI-generated content is offensive or damaging to someone’s reputation – for example, an AI influencer makes a derogatory remark about a real person in a comment, or a virtual news anchor spreads untrue facts about someone. In such cases, the general rules on violations of personality rights and defamation apply (Sections 823 (1), 824 BGB in conjunction with Articles 1 and 2 GG and, where applicable, Sections 185 et seq. StGB). The injured party can demand injunctive relief and compensation from the responsible party. It is not the “software” that is responsible here, but the person who operates the software or publishes the content. In a serious case, courts can be expected to treat the operator as if he had made the statement himself (or through an employee). A company that publishes AI-generated texts or videos must therefore carefully check in advance what is published.
The challenge is that advanced generative AI sometimes chooses its own formulations. If a language model is given free rein, it can produce unforeseen statements. Legally, however, this does not exempt the operator from liability – it only increases the risk and the monitoring effort. Companies should take technical and organizational measures to prevent the AI from making false or infringing statements (e.g. content filters, manual final checks of important posts, etc.). Otherwise, a warning letter or cease-and-desist demand can quickly follow, potentially with high amounts in dispute (especially in cases involving personality rights) and corresponding costs.
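How such a final check is organized is ultimately a workflow question. The following Python sketch illustrates one conceivable pre-publication gate under stated assumptions: the blocked patterns, the approval workflow and the post_to_platform placeholder are purely illustrative and are not taken from any specific platform API or legal requirement.

```python
import re

# Illustrative blocklist: claim patterns the operator never wants an AI avatar
# to publish without a human looking at the draft first (all entries assumed).
BLOCKED_PATTERNS = [
    r"\bcures?\b|\bheals?\b",   # health claims
    r"\bguaranteed\b",          # absolute promises
    r"\bbest price\b",          # superlative price claims
]

def needs_human_review(draft: str) -> bool:
    """Return True if the AI-generated draft matches any blocked pattern."""
    return any(re.search(pattern, draft, re.IGNORECASE) for pattern in BLOCKED_PATTERNS)

def publish(draft: str, approved_by: str | None = None) -> None:
    """Publish a draft only if it passed the filter or a named human has signed it off."""
    if needs_human_review(draft) and approved_by is None:
        raise PermissionError("Draft flagged: requires a manual final check before posting.")
    # post_to_platform(draft)  # placeholder for the actual platform API call
    print(f"Published (sign-off: {approved_by or 'automatic check'}): {draft[:60]}")

if __name__ == "__main__":
    publish("Our new drink is a refreshing companion on hot days.")
    publish("This supplement cures migraines, guaranteed!", approved_by="legal@example.com")
```

A simple keyword filter of this kind cannot replace legal review, but it documents that flagged drafts were deliberately routed to a human before publication.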
3. misleading and incorrect information:
Another aspect of liability: what if an AI avatar gives incorrect information – for example, a virtual customer advisor in the chat gives incorrect legal information or incorrect health advice? Several areas of law come into play here. On the one hand, warranty or contractual liability rules could be relevant: If, for example, a virtual financial advisor gives incorrect investment tips on behalf of a bank, the bank is liable in case of doubt due to errors in advice, just like a human advisor. Or if an AI chatbot concludes binding contracts or submits offers on a website, these declarations are binding for the company if the bot was used by the company in this way (keyword: electronic agent – legally, an automated system can make declarations of intent that are attributable to the operator if it is programmed accordingly, e.g. automatic order confirmation in an online store). A customer can generally trust that the communication is valid, even if they are talking to a machine. However, problems arise when the bot makes mistakes that a human representative would not have made. Exclusions of liability are often used in the drafting of contracts, but the limits are narrow in relation to consumers (see the law on general terms and conditions, Section 309 No. 7 BGB prohibits exclusions of liability for physical injury and gross negligence, etc.).
On the other hand, tortious liability for the violation of protective statutes can come into consideration if, for example, an AI system gives dangerous advice (think of a virtual fitness coach whose tips are harmful to health – in extreme cases, this could even amount to bodily injury by omission if a duty to act as guarantor exists). This will rarely happen in everyday life, but providers should bear in mind that negligent misinformation can be relevant under liability law, especially where there is a special position of trust (e.g. an AI doctor’s assistant in a health app: the provider must make it clear that this is not medical advice, or it could be liable).
4. product liability and new EU liability rules:
A special case: if AI software is considered a product, product liability could also become an issue. Until now, product liability (ProdHaftG) has applied primarily to physical products. However, the EU is planning to modernize the Product Liability Directive so that software and AI are also covered. If an AI system causes damage (e.g. material damage due to the malfunction of an autonomous AI in an application), the manufacturer’s strict liability could apply. However, this is still under development. At the same time, an AI Liability Directive is being discussed, which would make it easier for injured parties to provide evidence when AI is involved. For our constellation – an AI influencer damages a company’s image or spreads false information, for example – product liability would presumably not be the relevant regime, as the focus here is on “content” rather than physical damage.
The bottom line is that companies need to take contractual and organizational precautions: when AI avatars interact with customers, there should be clear terms of use that regulate obligations and liability issues. Internal monitoring is needed so that a human can intervene if necessary. In legal terms, this can be summarized as follows: AI never acts in a legal vacuum – a natural or legal person behind it is always responsible and, in case of doubt, liable as for its own conduct.
Protection of personal rights, trademark rights and data protection
Virtual characters and synthetic content can interfere significantly with the protected rights of third parties. Particularly relevant are personal rights (if the likeness, voice or identity of a real person is imitated), trademark rights (if protected trademarks are used) and data protection law (if personal data comes into play during generation or use).
1. general right of personality and right to one’s own image/voice:
The general right of personality protects every person from unauthorized representation of their person. The right to one’s own image is specifically regulated in Sections 22 and 23 of the German Art Copyright Act (KUG): According to this, images of a person may only be distributed or publicly displayed with their consent, unless an exceptional situation (contemporary history, assembly, etc.) applies. The right to one’s own voice is also recognized – the courts have ruled that imitating a person’s distinctive voice for advertising purposes without consent also violates personal rights (example: in the past, a well-known sports presenter was imitated on the radio by an impersonator in order to advertise products; this was prohibited as the listeners attributed the voice to the real person and this appropriation was inadmissible without permission).
Applied to AI content, this means that voice cloning of a real person without consent can constitute a violation of personal rights. Anyone who imitates the voice of a celebrity using AI, e.g. to create the illusion in advertising or videos that the person is speaking, encroaches on that person’s right to the spoken word. The same applies to deepfake videos in which the faces of real people are used: the right to one’s own image is affected as soon as these deepfakes are published. A virtual influencer who is deliberately designed to look exactly like a real-life model (without the latter’s involvement) would be problematic. There have already been cases abroad in which, for example, the facial features of an actress were transferred to other bodies using AI (particularly in the reprehensible form of non-consensual pornographic deepfakes). In Germany, an affected person could defend themselves under civil law – by means of injunctive relief under Sections 823 and 1004 BGB (by analogy) in conjunction with the general right of personality, and possibly claims for damages (including monetary compensation in the event of serious interference). The dissemination of certain deepfakes can also be relevant under criminal law (insult, defamation; in addition, Section 33 KunstUrhG makes the unauthorized dissemination of images a criminal offense).
For companies, this means that caution is required when using real people as models. If, for example, you want to create a digital avatar of a well-known presenter, you need to clear the rights – in the form of a license or contractual agreement. Otherwise you could face legal action. Many celebrities nowadays have the rights to their name, image and voice protected by contract (keyword: “right of publicity”, especially in the USA; in Germany, these are valuable commercial components of the right of personality). Less well-known people could also have claims if their identity is recognizably instrumentalized.
Practical tip: It is best to create virtual influencers from scratch, without a 1:1 template. If an existing role model is to be recreated (e.g. “reviving” a former star using AI for advertising purposes), express consent or contractual protection with the person or – in the event of death – their heirs is required. In the famous Marlene Dietrich case, the Federal Court of Justice ruled that the commercial personality right (especially the commercial value of the portrait) can also be exploited by the heirs post-mortem. The same would apply to an AI-generated “digital twin” of a deceased person: Without the consent of the rights holders, one violates the post-mortem right of personality.
2. name rights and misrepresentation of identity:
In addition to image and voice, a person’s name is also protected (Section 12 BGB). If a company were to name an AI influencer exactly “Angela Merkel” and allow her to appear as such, this would constitute an unlawful usurpation of the name. Of course, no one would be so clumsy; it is more realistic that virtual characters take on traits of real people that the audience then associates with them. This can amount to a misrepresentation of identity if, for example, an avatar is passed off as a real person even though no such person exists, and the audience believes it is dealing with a real individual. Fake profiles are a useful illustration: if someone poses as a specific existing person (identity theft on social media), this is illegal. AI influencers will more often not copy a specific person but represent a fiction – in which case there is no direct victim in terms of name rights. Nevertheless, it should be made clear that avatar “X” is not the real person “X” (where a similar name occurs by chance or intentionally) in order to avoid confusion.
3. trademark rights and advertising figures:
Two constellations are conceivable under trademark law: On the one hand, virtual influencers could become a trademark themselves; on the other hand, they could unintentionally infringe third-party trademarks.
For the first case: many virtual characters have recognition value – think of “Lil Miquela” (a well-known international virtual influencer) or “Noonoouri” (a virtual avatar from Germany/Switzerland that is used in the fashion world). These names and logos can be protected as trademarks (e.g. as a word mark or figurative mark at the DPMA/EUIPO), which is recommended in order to prevent imitations and enable merchandising. Even without trademark registration, fictional characters enjoy a certain degree of title protection or can function as a business designation, but formal trademark protection offers clearer rights. Start-ups developing a virtual character should therefore consider registering its name and possibly the avatar logo as a trademark.
At the same time, you have to make sure that the chosen name does not conflict with existing trademarks. If the invented AI influencer is called “Lexa”, for example, and a protected trademark “Lexa” already exists in the entertainment industry, there is a risk of a trademark dispute. The appearance or slogans of an avatar could also infringe third-party rights under copyright or trademark law (if the avatar is modeled too closely on a well-known character). It is therefore advisable to conduct a trademark search before launching.
Second constellation: virtual advertising that uses brands. The usual rules apply here: if an AI-generated commercial shows, for example, a car with a recognizable trademark logo without the trademark owner’s involvement, this may constitute trademark infringement (Sections 14, 15 MarkenG) unless it is covered by one of the statutory limitations of trademark law. In advertising, the use of third-party trademarks is generally critical – except as comparative advertising under strict conditions or with permission. So if an AI influencer praises a certain drink in a post and mentions its brand name, this should ideally be done in consultation with the brand owner (which is usually the case in influencer marketing). It becomes problematic when AI content adopts brand logos or characters without authorization – e.g. an AI-generated comic-style advertising clip that suddenly incorporates well-known comic figures or brand characters in order to attract attention. This would infringe the trademark and often also the copyright in the characters.
To summarize: virtual actors can themselves be treated like a trademark and enjoy corresponding protection, but they must not interfere with third-party trademark rights without permission. Especially because AI tools sometimes mix content, care must be taken to ensure that no protected third-party elements (logos, product designs, names) end up in the generated content. Otherwise a warning letter for trademark or design infringement could follow. A thorough check of the output helps here as well.
4. data protection (GDPR):
Data protection law plays a role on several levels:
Data protection when creating synthetic content: Modern AI models are often trained with large amounts of data, including images, voices or texts that contain personal data. If, for example, thousands of photos of real people or celebrities were used as training data for an AI presenter, this may constitute processing of personal data. Under the GDPR, any processing of personal data requires a legal basis (Art. 6 GDPR) and must be transparent. Training AI with publicly available images is currently a gray area. Some AI companies rely on legitimate interest or exceptions for scientific purposes; others simply do not obtain consent and thereby take legal risks. For a company that trains its own AI, the following applies: do not use data of identifiable persons without checking the legal basis. In case of doubt, the consent of the data subjects is required, especially where sensitive data is concerned (biometric data such as faces or voices can constitute a special category under Art. 9 GDPR if it is used for identification). The GDPR sets high hurdles – the image of a person is already personal data; if you want to use it to create an avatar, you generally need the person’s consent or at least have to anonymize the data sufficiently so that there is no longer any personal reference.
Data protection when operating virtual employees: If a virtual employee is used in customer service, it typically processes customer data (e.g. names, concerns, perhaps account information). In this case, the operator is the controller within the meaning of the GDPR and must fulfill all obligations: from the duty to provide information (Art. 13 GDPR, a privacy policy that should also mention that an AI answers the inquiries) to data minimization and technical and organizational measures for data security (Art. 32 GDPR). You should also check whether a data processing agreement is necessary if you involve external AI services that process data on your behalf. For example, a chatbot may use a cloud AI API – in this case, customer data flows to this service and GDPR-compliant contracts must be in place.
A special aspect is Article 22 GDPR: This gives individuals protection from fully automated decisions with legal or similarly significant effect, without human involvement. This is less of an issue for virtual influencers, but it is for virtual employees: let’s imagine an AI system decides whether a customer gets a loan (fully automated) or an AI recruitment agent decides on a job application – then Article 22 applies and requires, among other things, a right to human intervention. In our context, however, AI moderators or influencers usually remain in the area of content/marketing, where no such individual decisions are made. However, AI influencers could have an indirect influence on user decisions (e.g. purchase recommendations). This is more relevant under competition law than under Art. 22, as the user does not experience an automated decision against them, but is only courted by AI advertising.
Data protection and personal rights – overlap: Once again on voice cloning: a person’s voice is also a biometric characteristic and therefore particularly worthy of protection. Using a recorded voice to create a clone is data processing. Without consent, this is likely to violate both the GDPR (processing without a legal basis) and personal rights. As we can see, data protection and personality rights are often intertwined: where personal data (image, voice, name) is used by AI, both the GDPR and the civil-law protection of personality require the person’s consent. In practice, you therefore always have to think along two tracks: first, ensure data protection compliance, and second, obtain the necessary licenses under personality rights law.
5. exemplary problem case – AI in the gray area:
Suppose a startup creates a virtual avatar for social media from composite real models: The facial features are based on photos of several models to form an “ideal” character. These photos were taken from the internet without asking the models. Even here there are GDPR problems (the models are identifiable in the source photos) and copyright issues (photos have authors). Even if the end result is a unique face that does not correspond to a single real person, the data processing may be unlawful. Moreover, what if the generated face happens to strongly resemble a real person? Theoretically, someone could recognize themselves and make a claim. Such risks need to be weighed up. It is better to fall back on available licensed data sets or only use features that do not allow any conclusions to be drawn about specific individuals.
Competition law: limits of synthetic advertising
An AI influencer may be innovative, but it remains a marketing tool – and marketing in the B2C sector is subject to the strict rules of competition law (the Act against Unfair Competition, UWG, and the EU rules against unfair commercial practices). Synthetic advertising must not be deceptive, is subject to the same limits on aggressive practices as conventional advertising, and must observe certain boundaries.
1. prohibition of misleading statements:
According to Section 5 UWG, misleading commercial practices are prohibited. Misleading can result from false statements or from a deceptive overall presentation. In the case of AI-generated content, the question arises, for example, whether the public is being misled about its authenticity. Is it misleading per se if a consumer believes that a real person is advertising a product when in reality it is a computer-generated avatar? Here we are in a legal gray area; so far, such a case is not explicitly addressed in the law. One could argue that the average consumer primarily pays attention to the product information, not to whether the person is real. Nevertheless, a core element of influencer advertising is often authenticity – the feeling that a real person is recommending something out of conviction. If this person does not exist, the consumer might say afterwards: “If I had known that this was just an animation, I would have believed the recommendation less.” Such deception about the nature of the advertising figure could well be considered material in the future.
Under the current legal situation, an approach could be attempted via Section 5 (1) UWG: misleading about material circumstances is prohibited there, including where the advertiser feigns a false identity. If a company deliberately makes an AI influencer appear like a real, independent opinion leader, even though it is a controlled artificial figure, this can be considered misleading – especially if the advertising character is actively concealed (in which case Section 5a UWG with its labeling requirements applies anyway). The blacklist of per se unfair practices annexed to the UWG also expressly prohibits falsely posing as a consumer. A virtual profile that appears to be a normal user but is actually controlled by a company would violate this prohibition. This means that fake testimonials or fake influencers posing as satisfied customers are not permitted. An openly operated virtual influencer profile is not, strictly speaking, a case of a business posing as a consumer, but there is a potential lack of transparency as to who is behind it.
The best approach is therefore to disclose that it is a virtual character operated by company XY. Then the addressee knows that it is not a neutral individual speaking, but ultimately advertising. This transparency is also required by the spirit of the UWG regulations. Under competition law, any concealment of the sender can be considered unfair.
2. surreptitious advertising and labeling requirements (again from the UWG perspective):
We have already discussed labeling under media law. This is flanked by the UWG: Section 5a (4) UWG in its new version requires the commercial purpose to be identified. A violation (i.e. surreptitious advertising) is not only a breach of the law in itself, but can also be pursued by competitors or consumer protection associations by way of warning letters. A company that advertises with an AI influencer must therefore observe exactly the same labeling obligations as with human influencers. There is no free pass for “virtual” advertising – whether human or AI, advertising remains advertising and must be clearly recognizable as such.
3. aggressive business practices and targeting children:
AI influencers can be particularly appealing to younger target groups, as they can be trendy, always available “friends” on social networks. This harbors the risk of inappropriate influence. The UWG prohibits aggressive commercial acts (Section 4a UWG), e.g. harassment, coercion or exploitation of coercive situations. A virtual influencer will rarely be openly coercive, but could build up pressure through constant presence or certain psychological mechanisms (“You must have this product, otherwise…”). AI in particular, which analyzes user behavior, could theoretically engage in very targeted behavioral manipulation – for example, personalized avatars that exploit individual weaknesses. Such methods could fall into the area of unfair manipulation. The planned AI Act (see next chapter) also prohibits certain manipulative practices.
Particularly vulnerable groups such as children must not be subjected to undue influence. An example: a cute virtual cartoon influencer advertises sweets on TikTok and many children follow it without understanding that this is advertising. That would be problematic; in such cases, youth protection law and competition law apply together. The EU Unfair Commercial Practices Directive (UCPD) and the UWG already contain provisions that take children’s credulity into account. Advertising aimed at children must not exploit their inexperience. An AI mascot designed like a game character would therefore have to be clearly recognizable as advertising so that children can classify it. In Germany, the state media authorities (youth media protection) also ensure that appropriate age limits and labels are observed on YouTube or TikTok, for example.
4. unfair competition due to infringements of rights:
If a company infringes third-party rights (trademark, copyright, personal rights) through its AI content, this can have consequences under competition law in addition to civil law claims by the rights holder, provided it occurs in a business context. According to Section 3a UWG, a commercial act is unfair if it violates a statutory provision that is also intended to regulate market conduct (a so-called market conduct rule). Data protection regulations and media law labeling obligations, for example, are now recognized as market conduct rules. If AI-based advertising measures deliberately violate data protection law (e.g. by using personal data unlawfully) or disregard consumer protection rules, competitors could also derive a UWG violation from this. This is a complex topic in detail – but in plain language: unlawful methods in AI marketing can lead to warning letters from competitors. Companies should therefore consider not only the direct risk (of being sued by the affected party), but also the indirect risk (of being taken to court by competitors for competition law infringements).
5. deception about the nature of the product:
Synthetic advertising must of course not be deceptive in terms of content. For example, if an AI influencer boldly advertises features that a product does not have (say, an AI-generated car salesperson claims that an electric car charges fully in 5 minutes, which is in fact false), this is classic deception about essential characteristics – strictly prohibited. In this respect, AI advertising is no different from normal advertising. However, the form of presentation could raise new questions: it is conceivable, for example, that a virtual shop assistant makes individualized promises to a customer (“I have a special offer for you, today only”). If these promises are unfounded or were never intended to be fulfilled, this is also unfair. Companies must therefore ensure that their AI-supported sales avatars make truthful and verified statements. It must not happen that an AI – based on trained sales phrases, for example – promises too much. This content should be strictly controlled and rule-based.
The EU AI Act and upcoming regulation in Europe
At European level, the AI Act (Regulation on Artificial Intelligence) is the first comprehensive set of rules specifically for AI systems. Even though this regulation primarily sets out technical and risk-related rules, it contains a number of provisions that are directly relevant to virtual employees, AI influencers and synthetic content – particularly in the area of transparency.
1. overview AI Act:
The AI Act (also known as the European AI Regulation) is in the final stages of the legislative process. The EU Commission presented a draft in April 2021; at the end of 2023, the Parliament and Council reached political agreement on a joint text. The regulation could be formally adopted in 2024 and would then enter into force with a transitional period of a few years (probably 2025/2026). The aim of the AI Regulation is to ensure that AI systems in the EU are safe, uphold fundamental rights and respect the values of the Union. It follows a risk-based approach: from prohibited applications (where the risk is unacceptable) to high-risk AI with strict requirements, to limited risk, where only transparency obligations apply, and low risk, where there are hardly any requirements.
2. virtual influencers in the risk concept:
In most cases, an AI influencer or virtual employee is unlikely to be a high-risk AI system within the meaning of the AI Act, as the high-risk category primarily concerns areas such as critical infrastructure, medical devices, personnel-related decisions (e.g. creditworthiness, job selection), justice, biometric surveillance, etc. Marketing avatars are not covered unless they are used for biometric identification or form part of a selection tool for job interviews, for example. However, the AI Act also contains prohibited practices (Art. 5), including the use of AI to influence people subliminally or to exploit their weaknesses in order to change their behavior in a way that causes harm to them or others. In theory, caution is required here: if an AI influencer were to manipulate consumers with targeted psychological tricks, this would at least be ethically problematic and, depending on the interpretation, could fall under such clauses. Realistically, however, advertising AI will not be treated as a prohibited practice as long as no serious harm is caused.
3. transparency obligation for AI interaction (Art. 52 para. 1 AI Act):
Particularly important for our topic: The AI Act will standardize specific transparency and disclosure obligations. According to the politically agreed text (as at the end of 2023), Article 52 will provide for the following:
- Art. 52 para. 1: AI systems that are intended to interact with natural persons must be designed in such a way that users are made aware that they are dealing with an AI. In other words: if someone is communicating via chat, voice or on a social network with a bot or avatar, they must not be led to believe it is a human – unless this is obvious from the circumstances. The “obvious” exception must be interpreted narrowly: an example might be a cartoonish bot on a website where it is clear to everyone that no real person sits behind the widget. But with a highly realistic virtual influencer on Instagram, the aim is precisely to appear real – it is not obvious to the average user that AI is acting here. Consequently, the labeling requirement applies. In practice, it would have to be clearly indicated in the profile or during the interaction that this is a virtual AI avatar. If the avatar replies to comments or responds to direct messages, it would in principle have to make clear that it is not human (e.g. through a note in the profile text or an automatic message such as “Hi, I am an AI-controlled virtual assistant…” at the start of every chat; a minimal implementation sketch follows after this list).
- This requirement is clearly aimed at chatbots and virtual agents, but would also include AI influencers who get in touch with their followers. The regulation speaks of “interaction”. Even an Instagram comment thread could fall under this, especially if the influencer reacts specifically to individual user responses (it is common for influencers to interact with the community). Public posts alone without direct interaction may not be “interaction” in the sense of the regulation, but as soon as the avatar responds to user input, it applies. And as mentioned above, it is usually not “obvious” that well-made virtual characters are artificial – their aim is precisely to look like real people.
- Art. 52 para. 3 (deepfake labeling): Another relevant clause concerns AI systems for generating manipulated or synthetic content that noticeably resembles real people, objects or events and can give a viewer the impression that it is real – colloquially known as deepfakes. The AI Act stipulates that anyone using such content must disclose that it is artificially created or manipulated content. For virtual influencers, this means that if the AI character imitates or represents a real person, this would undoubtedly be a deepfake and must be labeled accordingly. However, many virtual influencers are fictitious persons and therefore not directly “deepfakes of an existing person”. They do not fall under the narrow deepfake definition because no specific role model is copied 1:1. However, the environment or situation can also be similar to a deepfake – for example, when a virtual influencer uses AI to insert themselves into real video clips (e.g. posing in front of landmarks as if they had actually been there, or “attending” a real event digitally). This could be considered a manipulated representation of real places/events and could therefore be subject to labeling. A practical example: An AI model posts a photo of herself supposedly walking down the catwalk at Paris Fashion Week. In reality, this image was created synthetically. It looks real to the viewer – according to the AI Act, this would have to be declared as manipulated in future so as not to deceive anyone.
- The AI Act contains exceptions to deepfake labeling, e.g. for satirical, artistic or scientific purposes and subject to certain safeguards. However, these are unlikely to apply in the commercial marketing environment. Advertising can rarely be justified as artistic freedom, especially as the exception is framed restrictively (it requires respect for the rights of third parties and that the exception is necessary for the freedom concerned).
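In practice, the transparency notice for chat interactions can simply be wired into the conversation flow itself. The following Python sketch shows one conceivable way to guarantee that every session with a virtual avatar opens with an AI disclosure; the wording of the notice, the company name and the generate_reply hook are illustrative assumptions, not requirements prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Assumed wording; the AI Act does not prescribe an exact sentence.
AI_DISCLOSURE = (
    "Hi, I am an AI-controlled virtual assistant of Example GmbH. "
    "You are not chatting with a human."
)

@dataclass
class DisclosedChatSession:
    """Wraps any reply-generating function so every session opens with the AI notice."""
    generate_reply: Callable[[str], str]              # e.g. a call to the operator's language model
    transcript: List[Tuple[str, str]] = field(default_factory=list)

    def start(self) -> str:
        # The transparency notice is sent before any other content.
        self.transcript.append(("assistant", AI_DISCLOSURE))
        return AI_DISCLOSURE

    def respond(self, user_message: str) -> str:
        self.transcript.append(("user", user_message))
        reply = self.generate_reply(user_message)
        self.transcript.append(("assistant", reply))
        return reply

if __name__ == "__main__":
    session = DisclosedChatSession(generate_reply=lambda msg: f"Thanks for your message: {msg}")
    print(session.start())                      # disclosure is always the first message
    print(session.respond("Is this product vegan?"))
```

The design point is simply that the disclosure is enforced by the session wrapper rather than left to the individual prompt, so it cannot be forgotten when the underlying model changes.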
4. consequences of the AI Act for companies:
As soon as the AI Act applies, companies that use AI influencers will be obliged to include transparency notices. It is conceivable, for example, that social media platforms themselves will then stipulate policies or provide tools (similar to the way Instagram already marks paid partnerships). Perhaps a virtual influencer account will have to be officially labeled as “Virtual / AI”. Some unofficial profiles already state “Virtual Character” or “Digital Model” in their description text, for example. This could become mandatory in the future. In addition, providers of AI systems that control such avatars will have to meet certain compliance requirements, depending on the risk level (e.g. documentation and risk management in the case of high risk – but, as noted above, marketing AI is likely to be mostly non-high-risk, so that “only” the transparency rules are relevant).
Violations of the AI Act could result in severe fines – similar to the GDPR, there could be fines in the millions per violation, graded according to severity. Although the focus is more on AI manufacturers and distributors, users could also be targeted, especially in the case of deepfake labeling (this is aimed at “users of an AI system that generates or manipulates content…”). A company that publishes a deepfake ad without labeling would be liable as a user.
5. synergies with existing law:
The AI Act will not replace the aforementioned obligations (advertising labeling, legal notice, etc.), but rather supplement them. Both must therefore be observed: e.g. labeling an AI influencer as advertising and identifying it as AI. This can be done, for example, by writing “#advertising” and at the same time noting somewhere “#virtualinfluencer” or a formulation such as “AI-controlled figure”. Standardized icons or labels would also be conceivable in the future. All in all, the EU is promoting transparency towards consumers so that they can make informed decisions.
6. other EU initiatives:
In addition to the AI Act, there are other regulatory approaches: The Digital Services Act (DSA) requires large platforms to take measures against disinformation and manipulative content, for example. An obvious use case would be deepfake videos that are intended to influence elections, for example – platforms must flag or remove such content, otherwise they risk DSA sanctions. Although the DSA is aimed at intermediaries (platforms) and not directly at advertisers, the ecosystem for AI fakes is monitored more strictly as a result. The EU Commission has also drawn up codes of conduct against disinformation, where the labeling of AI content is recommended. Discussions are ongoing in copyright law as to whether AI-generated content should be labeled or provided with metadata (keyword: Protecting creators from AI copying). The field is in flux, but one thing is clear: the trend is towards more disclosure obligations and accountability when using AI in public.
This establishes the European framework. In the next step, we look beyond Europe and compare how other countries – above all China and the USA – deal with virtual persons and synthetic media.
International developments in comparison
The legal classification of virtual employees and AI-generated content is an issue worldwide, but approaches vary greatly. While the EU tends to regulate proactively and emphasize fundamental rights, China, for example, relies on strict control in line with state interests. The USA, on the other hand, relies more heavily on existing legal instruments and the courts, with individual states having passed special laws. In the following, we take a comparative look at China and the USA as two prominent examples beyond the EU.
China: AI avatars in the news sector and regulation of virtual persons
In several respects, China is a pioneer in the use of virtual characters, particularly AI avatars in the media and in e-government. At the same time, China is stepping in with specific laws to keep this technology under control.
1. use of AI presenters and virtual influencers in China:
A few years ago, China’s state news agency Xinhua caused a stir when it introduced an AI newsreader – an animated likeness of a real anchorman that reads out the news using AI technology. There are now several such virtual newsreaders that can present news around the clock in different languages without getting tired. AI avatars are also booming in the e-commerce sector: Chinese shopping platforms such as Taobao or JD.com use virtual livestreamers to present products as if they were human hosts. At 3 a.m., when real sellers are asleep, such an avatar can still host a sales show – a huge advantage for the 24/7 business. Virtual idols are also popular in the entertainment industry: computer-animated pop stars and influencers with millions of followers (similar to Vocaloid stars in Japan, but in China partly state-subsidized in order to have a malleable, scandal-free alternative to human celebrities).
Opportunity perspective: Chinese authorities and companies see such virtual figures as an innovation on the one hand, but also as a way of controlling content more closely on the other. An AI presenter sticks exactly to the script, does not get caught up in scandals and can be perfectly aligned with the party line. Companies benefit from controllable brand faces that never produce negative headlines or demand salary increases.
2. regulatory framework in China – state control and new laws:
China is responding to deepfake and AI development with a dense network of regulations, driven by the goal of maintaining social stability and control. Central to this are the “Administrative Provisions on Deep Synthesis of Internet-based Information Services”, which came into force in January 2023 – popularly known as the Deepfake Law. This set of rules requires:
- Clear labeling obligation: Any content generated using deep synthesis (image, audio, video, virtual scenes) must be clearly labeled as AI-generated. This can be done, for example, by watermarks, notices in the video or acoustic markers. Techniques such as voice imitation, face swapping etc. are specifically mentioned. The idea behind this: the recipient should be able to recognize immediately if something is not an authentic original. In China, this means, for example, that an AI newsreader must carry a notice at the edge of the screen, or that a visible marker is inserted into fake videos of politicians. For companies that advertise with AI, this means adding an “AI-generated” label to every AI advertising clip and every synthetic testimonial – otherwise they are in breach of the provisions (a minimal watermarking sketch follows after this list).
- Prohibition of deception and fake news: The rules expressly prohibit the use of deepfake technologies to spread rumors or false information. Providers of AI services must have mechanisms in place to recognize and prevent fake news. For example, if an app is offered that allows users to put their face on celebrity videos, the provider must ensure that this is not misused to produce fake news videos. If misuse is detected, the provider must delete the content and report it to the authorities.
- Authenticity verification and registration: Providers of deepfake tools and platforms with virtual avatars often have to undergo official registration and carry out real-name verification of their users (real-name policy). In China, it is common for users to only be able to use AI services or social media accounts if they provide their real name and ID. This is intended to reduce anonymity and increase traceability – so anyone operating a virtual influencer will be identifiable to the authorities.
- Restrictions for virtual idols: There is no explicit ban on setting up virtual persons as influencers (on the contrary, there are successful examples backed by tech companies), but the general censorship and content rules naturally apply. Virtual persons must be just as politically compliant as real people. There may be internal guidelines stating that an AI character may not express controversial opinions. Another relevant area is virtual avatars of the deceased: Chinese media recently reported on the trend of creating AI avatars of deceased persons so that relatives can continue to interact with a simulation of their loved ones. This raises ethical questions as well as data protection issues (China has recently introduced a data protection law, the PIPL). It is still unclear whether the state will regulate this, but it shows how far-reaching the issue of virtual persons is.
- Penalties: Violations of the deepfake regulations can lead to severe penalties in China, ranging from fines to closure of the service to criminal liability, especially when state interests are at stake. China sees such regulations as part of cyber security and public order.
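Technically, a visible label of the kind the deep synthesis provisions call for can be stamped into generated media on export. The following Python sketch, based on the Pillow imaging library, adds a generic “AI-generated” banner to a synthetic image; the label text, file paths and layout are illustrative assumptions and not wording taken from the Chinese rules or the AI Act.

```python
from PIL import Image, ImageDraw  # Pillow library

LABEL = "AI-generated / KI-generiert"  # assumed wording, not statutory text

def add_ai_label(src_path: str, dst_path: str) -> None:
    """Stamp a visible 'AI-generated' banner into the lower-left corner of an image."""
    img = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    margin = 10
    left, top, right, bottom = draw.textbbox((margin, 0), LABEL)
    banner_height = (bottom - top) + 2 * margin

    # Semi-transparent dark banner so the notice stays legible on any background.
    draw.rectangle(
        [0, img.height - banner_height, right + margin, img.height],
        fill=(0, 0, 0, 160),
    )
    draw.text((margin, img.height - banner_height + margin), LABEL, fill=(255, 255, 255, 255))

    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path)

# Example usage (paths are placeholders):
# add_ai_label("avatar_post.png", "avatar_post_labeled.jpg")
```

A visible banner of this kind is only one option; invisible watermarks or provenance metadata (as discussed for fingerprinting below) can complement it, but they do not replace a notice the viewer can actually see.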
3. virtual employees as company representatives:
In China, companies and public authorities also use virtual assistants – e.g. virtual customer service staff at banks or municipal service providers. The legal requirements are the same as above: transparency, no deception. In addition, China’s AI ethics guidelines (published in 2022) call for trustworthiness and the avoidance of bias. AI systems should act in accordance with socialist values. A virtual bank advisor, for example, must fulfill all compliance rules and is strictly trained not to make any prohibited statements. Regulation therefore leaves fewer loopholes: where Europe is still debating whether labeling is “necessary”, China has already issued administrative rules that require it. The legal latitude that exists in Western countries (freedom of expression theoretically also covers deepfake satire, for example) is severely restricted in China in favor of order.
4. summary China:
China welcomes AI innovation, uses virtual influencers aggressively in e-commerce and propaganda, but at the same time has imposed ironclad restrictions: Clear labeling requirements, state registration and content control. Virtual persons are not legally independent subjects – they are either avatars of real registered persons or companies. If, for example, a virtual influencer in China were to make statements critical of the regime (whether or not controlled by a foreign company), those responsible would probably be immediately blocked and prosecuted. Western companies that want to use AI content in China need to be aware of this special situation.
Interestingly, China is also pushing ahead with international standards: there have been reports that the country is seeking global cooperation to identify deepfake content using watermarks. Technology and law go hand in hand here – China is increasingly relying on technical solutions, such as fingerprinting AI media to identify their origin.
USA: Deepfakes, AI voices and synthetic actors in social media, advertising and film
The USA traditionally takes an approach that is open to technology but at the same time shaped by litigation and case law. There is not (yet) an all-encompassing AI law at federal level, but various areas of law interlock to form a framework: from strong protection under the right of publicity, copyright and trademark law to the first specific statutory bans on deepfakes in sensitive areas.
1. right to one’s own image and voice (right of publicity):
In the USA, there is no uniform right of personality as there is in Germany; instead, many states have the so-called right of publicity. This right protects the commercial value of a person’s name, image, voice and even signature or gestures. Celebrities often have very strong rights to this (partly by law, partly by case law). So if someone uses a person’s identity or characteristics for commercial purposes without permission, they can be sued for damages. This is extremely relevant for AI: for example, imitating a celebrity’s voice using AI for a commercial would violate the right of publicity. In fact, there have already been cases in which voice impersonators or deepfake videos have been legally challenged. Actress Scarlett Johansson, for example, has made it clear that any use of her digital likeness without consent will not be tolerated (she was a victim of deepfake pornography, which her lawyers fought against). Many states – California, for example – have explicit laws that also protect the deceased (in California, the right to one’s own likeness continues until 70 years after death, so that, for example, the digital resurrection of Marilyn Monroe or Bruce Lee in advertising is only permitted with the consent of the heirs).
For companies, this means that if you want to use a synthetic actor based on a real star in a film in the USA, for example, you need a contractual license. And this can be expensive. A well-known example: the Star Wars films used digital resurrections of Carrie Fisher and Peter Cushing; there were consents or agreements with the estates. In 2022, there were headlines that Bruce Willis had licensed his facial data for future advertising deepfakes – although this was later put into perspective (he had reportedly only given one-off permission to use his digital likeness for a specific Russian commercial). Such deals nevertheless show that a license-based market could emerge in the USA: stars selling the rights to the use of their AI doubles.
However, if a deepfake is created without permission (e.g. an advertising clip with the face of a celebrity who has never advertised the product), the celebrity can sue. The right of publicity is often the remedy, as well as the Lanham Act (federal law against false designations of origin – can apply if, for example, an advertisement suggests that the star endorses the product, which is a kind of “false endorsement”).
2. defamation and misrepresentation:
There is also protection for ordinary citizens in the USA: if a deepfake video disparages a person (e.g. shows them committing a crime that never happened), this can be classic defamation. However, the hurdles are quite high, especially for public figures (the First Amendment’s protection of free speech requires proof of “actual malice” for defamation claims by public figures). For private individuals, it is easier to take action against false statements of fact. So if someone plays a nasty prank on their neighbor and spreads a deepfake in which the neighbor appears to shout xenophobic slogans, this can severely damage the neighbor and could be actionable. In California, there has also been a law since 2019 that prohibits certain deepfakes shortly before elections (politicians can use it to take action against fake videos published in the 60 days before an election with the intention of influencing the result).
3. specific deepfake laws:
In recent years, individual states have enacted targeted deepfake bans, particularly in two areas: non-consensual pornography and election interference.
- Pornographic deepfakes: Virginia and Texas, for example, have laws that criminalize the creation or distribution of pornographic deepfakes without the consent of those depicted. Since 2019, California has allowed victims of such “sexually explicit deepfakes” to sue in civil court if the material is distributed without consent. These laws are a response to the appalling phenomenon of faces (usually of women – celebrities or ex-partners) being edited into porn using AI – a serious violation of privacy. Even though there is no federal law yet, there is broad consensus that this should be banned.
- Election deepfakes: As mentioned, California prohibits intentionally misleading manipulated videos in the run-up to elections. Texas also has a law against deepfake videos that portray a candidate in a false light and are created with the intent of influencing an election. These laws stand on somewhat shaky ground because they potentially collide with free speech (what about satire? – but they usually contain exceptions for satire/parody). So far, they have hardly been put to the test.
Nationwide, there is no stand-alone federal deepfake law yet. There have been initiatives such as the DEEP FAKES Accountability Act in Congress, which called for labeling requirements and digital watermarks, but these have not been passed. However, US regulators (e.g. the FTC – Federal Trade Commission) have made it clear that fraudulent or manipulative use of AI in a business context falls under existing laws. The FTC has warned that, for example, fake customer reviews or fake influencer endorsements created with AI can constitute deception – in other words, they would be sanctioned under the usual rules against misleading consumers.
4. social media and platforms – self-regulation:
It is notable that in the USA, the major social media platforms have issued their own guidelines against manipulated media. Twitter (now X) introduced a policy under which deceptive deepfakes are labeled or deleted if they are likely to cause harm. Facebook/Instagram remove deepfake content that is not recognizable as parody, especially if it misrepresents people. These measures are partly voluntary, partly pre-emptive compliance intended to keep lawmakers from stepping in. There are also initiatives such as the Content Authenticity Initiative, in which Adobe, Microsoft and others are developing metadata standards that indicate whether a medium has been manipulated.
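To make the idea of machine-readable provenance metadata a little more concrete, here is a minimal Python sketch of how a publisher might attach a manifest to a media file. It is purely illustrative – it does not implement the actual C2PA/Content Authenticity specification, and the field names and the hypothetical file name are assumptions, not taken from any standard.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def build_provenance_manifest(media_path: str, generator: str, ai_generated: bool) -> dict:
    """Build a simplified provenance record for a media file.

    Illustrative only: real standards (e.g. C2PA) define their own schema and
    use cryptographic signatures instead of a bare content hash.
    """
    data = Path(media_path).read_bytes()
    return {
        "asset": Path(media_path).name,
        "sha256": hashlib.sha256(data).hexdigest(),  # ties the record to this exact file
        "generator": generator,                      # e.g. the tool or model used (assumed field)
        "ai_generated": ai_generated,                # transparency flag for synthetic media
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }


def write_sidecar(media_path: str, manifest: dict) -> None:
    """Store the manifest as a JSON sidecar file next to the media file."""
    Path(media_path).with_suffix(".provenance.json").write_text(
        json.dumps(manifest, indent=2), encoding="utf-8"
    )


if __name__ == "__main__":
    # "campaign_clip.mp4" is a hypothetical file used only for illustration.
    manifest = build_provenance_manifest("campaign_clip.mp4", "hypothetical-avatar-studio", True)
    write_sidecar("campaign_clip.mp4", manifest)
```

A real deployment would sign such manifests and embed them in the file itself, but even this simple sidecar approach shows the basic idea: the disclosure travels with the asset rather than living only in a caption.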
For virtual influencers in the USA, this means that if they become popular, the platforms are likely to demand that the profile is authentic and does not imitate a real person. Profiles that pretend to be someone they are not would be in breach of terms of use. An original character like Lil Miquela is allowed (she has the Verified checkmark on Instagram despite her “Virtual” status). Her operators have disclosed that she is an art project. In this respect, the platforms tolerate virtual characters as long as there is no identity fraud.
5. AI in film and advertising – industry standards:
In Hollywood, AI is now a central topic in contract negotiations. In the 2023 collective bargaining negotiations, the actors’ union SAG-AFTRA explicitly demanded rules to control the use of AI and digital doubles. Actors do not want studios simply to reuse their previous recordings to create new performances without additional payment or consent. The outcome of the negotiations included, for example, that studios may not digitally mimic an actor’s voice or likeness without covering this in the contract. In other words, the industry norm is becoming that the use of AI must be regulated contractually. Although this does not have the force of law, it shows how the issue is being steered through private contracts. When a studio hires an unknown actor, it often secures the rights to use their likeness digitally at a later date – a development that some warn against, as it could devalue human actors in the long term.
For the advertising industry: the US consumer protection authority (FTC) has issued guidelines stating that testimonials and endorsements must be genuine or clearly identified if actors are used. Example: an advertisement shows enthusiastic “customers” – then the small print must state that these are re-enacted scenes with actors. If a completely synthetic “customer” is created, the same principle applies: the impression must not arise that this is a real satisfied customer when it is in fact a fabricated testimonial. The FTC would consider this misleading. Companies in the USA should therefore make clear, at least in a disclaimer or through obvious presentation, that the AI advertising character is an avatar whenever a seemingly independent person is being simulated. If, on the other hand, the AI character is introduced as a brand ambassador (i.e. it is known that this is the company’s virtual ambassador), it is treated like an animated advertising character (similar to Kellogg’s Tony the Tiger – everyone knows he is not real, which is fine).
6. generative AI and copyright USA:
Although copyright is not the focus here, it should be mentioned in passing: in the USA, the Copyright Office has decided that purely AI-generated works without human involvement are not eligible for copyright protection. This means that if an AI algorithm generates an image or text entirely on its own, the user cannot claim copyright in it. For virtual influencers, this could mean that the images and videos generated by an AI system for the avatar are in the public domain unless enough human creativity flows into them (e.g. through post-production, art direction, etc.). This harbors a risk: someone could theoretically take the images of an AI influencer and reuse them, because there is no copyright. In practice, this could be countered with trademark law (the look/name is protected) or by arguing that there was human involvement. In Europe, this question is still open, but copyright law is becoming increasingly relevant for AI marketing. So while personality rights and advertising are regulated, the IP protection of purely virtual creations is not yet clear – something business models should take into account.
Summary USA:
There is no central AI law in the USA, but a network of individual statutes and strong case law protects against the misuse of synthetic media. Deepfakes are subject to selective criminal or civil sanctions (especially in pornography and elections), the right of publicity protects individuals from unwanted commercialization of their identity, and deception in business dealings is sanctioned through existing advertising rules and fraud offenses. The pronounced litigation culture means that missteps quickly end up in court – which certainly has a disciplining effect. At the same time, freedom of expression is highly valued, which is why blanket bans are difficult; a satirical deepfake, for example, would probably be covered by the First Amendment as long as it is not defamatory. Companies in the US can use AI avatars, but should avoid imitating real people without permission, disclose when advertising is staged, and be prepared for the public and media to react critically to any allegation of deception.
Now that we have drawn international comparisons, we will turn our attention to specific areas of application in order to shed light on the opportunities and risks as well as any special features under company law.
Fields of application, opportunities and risks in practice
The abstract legal rules gain clarity when they are applied to specific application scenarios. We look at three fields that exemplify the range of virtual actors:
- AI influencers in social media – such as virtual brand ambassadors on TikTok or Instagram.
- AI-generated content on platforms such as OnlyFans – the use of synthetic people in an adult entertainment and exclusive content environment.
- Virtual business models and corporate law issues – such as the organization of a company around an AI influencer and the question of how such business models should be legally classified.
In each case, opportunities (such as new creative possibilities, efficiency gains) and risks (legal and ethical) are presented.
AI influencers on TikTok, Instagram & Co.
In recent years, virtual influencers have celebrated considerable success on platforms such as Instagram, TikTok and YouTube. Accounts such as Lil Miquela (USA), Imma (Japan) and Noonoouri (Germany) have hundreds of thousands to millions of followers, enter into advertising partnerships with major brands and interact with their audience in a similar way to human influencers – with the difference that the person in the photos and videos is not real.
Opportunities from the perspective of companies/agencies:
- An AI influencer is completely controllable. The appearance, the content, even the “private life” can be scripted. Scandals due to ill-considered real statements, breaches of contract or image damage caused by the personal behavior of an influencer (e.g. drug escapades, political gaffes) are practically impossible. The company behind the avatar pulls all the strings.
- The avatar can be active around the clock and theoretically act in different languages and on several platforms at the same time. With sufficient resources, content could be posted at high frequency without anyone actually having to be in front of the camera.
- There are no human limitations: The AI influencer does not age, does not fall ill, does not demand a fee (apart from the costs of the developers and designers). It can be visually adapted at any time to reflect current trends.
- Creatively speaking, the impossible can be made possible: The virtual influencer can appear in fantastic scenarios, change their appearance, take on different roles like a chameleon – things that would only be possible with real people via elaborate special effects.
- For risky advertising segments where real people would hesitate (e.g. sensitive political campaigns or product tests under dangerous conditions), a fictional character could be sent out in front instead – although the ethics of this must be carefully considered.
Risks and legal stumbling blocks:
- Transparency and credibility: As already explained, it must be clear that an AI influencer is involved (mandatory at the latest once the AI Act applies). If a company pretends the avatar is a real person and this is later discovered, there is a risk of backlash: the public could feel deceived. Even without a legal sanction, the loss of trust would be damaging for marketing. Example: imagine a supposedly authentic beauty influencer turning out to be CGI months later – many followers would be shocked and react negatively because they had built a personal bond with a non-existent person. It is therefore advisable to brand the virtual character as such from the outset. Many successful AI influencers openly play with their artificiality (they “out themselves”). This can even be part of the appeal – the mix of real and surreal.
- Liability for content: The AI influencer needs strict content control (even stricter than for human influencers, who can think for themselves about what they post). For example: if a product is advertised, all advertising claims must be true (avoidance of misleading statements, UWG). If the avatar shares lifestyle tips, these must be harmless. If an AI is used to generate posts automatically (e.g. using a language model), the risk of problematic statements must be contained. In practice, hardly any reputable company is currently likely to give the avatar full autonomous freedom of action – in most cases, people will create or at least approve the content in the background. However, if the avatar at some point reacts to current trends with the help of AI, it will have to follow the same compliance guidelines as a human employee.
- Interaction with followers: Many followers write comments or messages. Human influencers sometimes give personal replies or “likes”. An AI influencer could theoretically reply to every fan automatically. This brings opportunities (high engagement rate, fans feel noticed) but also risks: a careless word can be misunderstood, or an AI-generated reply could be inappropriate in content (for example, a fan shares an emotional confession and the bot replies flippantly, which is perceived as hurtful). It is advisable either to limit interactions or to script them carefully. From a legal perspective: if the AI influencer commits an infringement in a reply (e.g. an offensive or discriminatory statement, even unintentionally), the operator is liable. The company may therefore have to keep a team on standby around the clock to moderate or heavily filter interactions.
- Lack of human responsibility: The community may ask: who is behind this? A responsible person must be named for imprint purposes alone, but in terms of communication it should also be clear who speaks for the avatar (a kind of “supervisor” or the company itself). Otherwise you have an influential voice on the internet with no visible accountability, which breeds skepticism. There have been cases in the past where social bots or fake profiles caused scandals – the public demand was: “Who is responsible?” With AI influencers, the answer is usually known (the company behind them), but the more autonomous and popular the character becomes, the more visibly the operator has to take responsibility.
- Misinformation and fake news: If AI influencers go beyond lifestyle topics – e.g. a politically or scientifically oriented AI influencer (it is conceivable that a virtual political commentator will appear at some point) – the risk of misinformation increases. This is already a problem with real influencers (keyword “influencer spreads coronavirus fake news”). With AI, this can get out of hand more quickly if not controlled. The legal consequences would be similar: in extreme cases, investigation proceedings for incitement to hatred if extreme content is shared, or warnings if false statements are made about a product that are damaging to business. AI influencers should therefore focus on non-critical topics where possible or get expert input when it comes to sensitive areas.
Practical example: An AI influencer on Instagram posts a picture of itself taking a certain dietary supplement and writes: “Since I’ve been taking this powder, I feel energized every day!” – This is advertising and must be labeled. The operator must also ensure that the statement is true and not harmful to health. If it were, for example, a slimming powder with questionable ingredients, care would have to be taken not to make any promises of healing (Health Claims Regulation in the EU, etc.). When users ask “Is this safe?”, the avatar should only answer with verified information.
Thinking further: Suppose the AI influencer also makes jokes from time to time. An ill-considered “joke” could discriminate against someone or rely on a stereotype – the result would be an immediate shitstorm and possibly legal problems (anti-discrimination law is primarily a matter of employment and public law, but an insult based on gender or ethnicity causes enormous reputational damage). The operators should therefore give the avatar a set of value guidelines, similar to a corporate social media code.
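A minimal sketch of what such an internal pre-publication check could look like in practice – the rule lists, field names and example post are invented for illustration, and simple keyword matching is of course no substitute for review by humans or legal counsel:

```python
from dataclasses import dataclass, field

# Hypothetical rule sets a marketing team might maintain; real lists would be
# curated with legal/compliance input and go far beyond keyword matching.
PROHIBITED_HEALTH_CLAIMS = {"cures", "heals", "guaranteed weight loss"}
REQUIRED_AD_LABELS = {"#ad", "#advertising", "#anzeige"}


@dataclass
class DraftPost:
    text: str
    is_paid_partnership: bool
    flags: list[str] = field(default_factory=list)


def review_draft(post: DraftPost) -> DraftPost:
    """Flag obvious compliance issues before a post goes to human approval."""
    lowered = post.text.lower()
    if post.is_paid_partnership and not any(label in lowered for label in REQUIRED_AD_LABELS):
        post.flags.append("missing advertising disclosure")
    for claim in PROHIBITED_HEALTH_CLAIMS:
        if claim in lowered:
            post.flags.append(f"prohibited health claim: '{claim}'")
    return post


draft = DraftPost(
    text="Since I've been taking this powder, I feel energized every day!",
    is_paid_partnership=True,
)
print(review_draft(draft).flags)  # -> ['missing advertising disclosure']
```

The design point is simply that every avatar post passes through a documented gate before publication – the same logic a human social media manager applies, just made explicit and auditable.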
Conclusion on social media AI influencers:
They are a powerful marketing tool if used correctly. Companies can use them to reach new target groups and present themselves as technically innovative. Legally, however, they are not carte blanche – all the rules of influencer marketing must be meticulously observed (advertising labeling, no surreptitious advertising, duty of truth, fair business practices). In addition, there are AI-specific obligations such as labeling content as AI-generated and more intensive monitoring of communication. The success factor is maintaining authenticity despite artificiality – the followers must accept the concept. This usually works if it is creative and transparent. If trust is abused (for example, if the avatar “feigns” opinions or simulates human closeness that is not genuine just to sell), this can be ethically criticized as exploitative. So operators are also moving in ethical gray areas: how far is it permissible to manipulate emotions by creating a likeable AI persona that fans grow fond of? Younger users in particular could become very attached to a virtual idol, which raises the question of whether the company bears a particular social responsibility here.
AI-generated content on platforms such as OnlyFans
The use of AI-generated models and content in the erotic/adult sector, for example on subscription platforms such as OnlyFans or on relevant websites, is still a relatively new but much-discussed phenomenon. OnlyFans is known as a platform where content creators offer exclusive photos, videos or chats for a fee, often in the erotic sector. AI-generated models are now appearing – virtual creators who “show” revealing pictures or videos of themselves, even though the person does not actually exist. Even interactive chats could be taken over by bots.
Opportunities and benefits:
- From the perspective of the platform or entrepreneurs: no real person is exposed. This can have advantages in terms of protecting real people from exploitation. The sex industry is plagued by problems such as exploitation and coercion; AI generation offers the vision of creating erotic content without anyone sacrificing their actual privacy. An operator can produce hundreds of AI images without having to pay a model or put anyone’s safety at risk.
- Scalability and diversity: Theoretically, you could generate a “dream type” for every user – AI can create virtual models tailored to preferences (hair color, body type, setting as desired). This is a business model: personalized fantasy fulfillment via algorithm.
- Privacy of creators: It is often creators themselves who use AI to edit or perfect their images, or to replace faces, for example (perhaps someone puts their own face on a more perfect body or vice versa in order to remain anonymous). This allows people to offer content without revealing themselves completely.
- Cost savings: For producers of pornographic content, AI could reduce the need for human performers, which would cut logistics and costs. However, a great deal of technical expertise is required to produce high-quality videos synthetically – at present, it is often images that are used.
Legal risks and problems:
- Personal rights and consent: The biggest stumbling block arises when AI pornography is modeled on real people without their consent. This brings us back to deepfake porn, already touched on above. Rumors have surfaced that some OnlyFans users post AI images that look like celebrities. This is highly problematic: if someone instructs an AI to sexualize images of a certain Instagram model without her knowledge, this is a massive violation of her personal rights. In Germany, this would clearly be unlawful and would probably also be relevant under criminal law as a violation of the most personal sphere of life (Section 201a of the German Criminal Code – violation of intimate privacy through image recordings), even if the image is artificially created, as it shows the person in a sexualized context without consent. In some countries, such acts are already punished (see the USA; the UK also plans to ban deepfake porn). OnlyFans has guidelines prohibiting the upload of content that does not belong to the uploader or that depicts third parties without their consent – this would cover such cases.
- Protection of minors and criminal prohibitions: A huge risk area is the depiction of minors. Even if no real child is involved, depictions of a sexual nature that show realistic-looking minors are illegal in many jurisdictions (Germany, the EU, the USA). In Germany, Section 184b StGB punishes not only real depictions of child abuse, but also pornographic content that merely realistically depicts sexual acts performed on, in front of or by children – this would also cover computer-generated videos/images if they look realistic enough. So no one should believe that AI porn is a way to create criminal content “safely”; the authorities would crack down just as hard on AI-generated material of this kind. This is absolutely taboo and punishable by law. Providers such as OnlyFans must therefore take strict care to ensure that AI models clearly appear adult. A real 18-year-old model may appear in pornographic content, but an AI character deliberately designed to look childlike would be illegal.
- Deception of consumers / contractual issues: It is also worth noting that an OnlyFans subscriber who believes they are interacting with a real person (whether in chat or via personalized content) but actually receives a bot and generated images could feel deceived. Contractually, they have paid for “content from X”. If X is not a real person, is the contract voidable for mistake? OnlyFans’ Terms of Service will hardly have foreseen such cases. It is conceivable that a paying customer could argue: “I explicitly wanted personal interaction with a real person; that was part of the deal.” Legally, however, this is difficult in the B2C context – there is probably no claim as long as the subscriber has received the agreed content (pictures, videos, chats). There is no legal entitlement along the lines of “but I want it to be real”, especially if there is no explicit assurance that the person exists (OnlyFans presumably has no “all creators are real” clause). Nevertheless, this is an ethical problem: customers pay in the belief that they are maintaining some kind of interpersonal (albeit virtual) relationship. If this becomes a mass phenomenon, it could trigger regulatory debates – for example, whether such bots need to be labeled as such (this is where the AI Act comes into play again: in a one-on-one chat, an AI would in fact have to identify itself).
- Copyrights and content ownership: Content on OnlyFans is usually created by the creator, who holds the rights to it (photos and videos are generally protected by copyright or at least by related ancillary rights). If AI generates the images, it is unclear who the author is or whether any protection exists at all. For the customer, this potentially means receiving images that may not be subject to any rights control – so if they redistribute them (which violates the terms of use but happens), the creator can hardly claim copyright infringement if the images are not protectable. This is less of a problem for the platform, but an economic one for the creator: without traditional IP rights, third parties could copy and imitate the content.
- Data protection of subscribers: An AI bot may collect intimate data from users (through chats, preferences). GDPR and privacy compliance is something OnlyFans has to manage anyway and is not a specifically AI-related issue. What is important: if chats are fully AI-automated, the operator should disclose this (the AI Act’s transparency rules apply here again) – also out of respect for users, who might otherwise confide very personal things without realizing that no real person is reading them. Some may not mind (anonymity), others would find it disturbing. A minimal sketch after this list illustrates how such a disclosure could be built into an automated chat.
- Platform compliance: OnlyFans & Co. have acceptable use policies. At some point, they could prohibit or regulate the use of fully synthetic creators in order to maintain trust. The trend is still new, and it remains to be seen how the platforms react. They may adapt their verification processes (currently a creator has to prove their identity to generate revenue – what do you do with an AI creator? The operator would verify themselves, but the “star” is fictitious).
- Possible competition law infringements: If AI profiles pose as real people, it could be classified as an unfair commercial act, as a commercial service (erotic offer) is being marketed under false pretenses. There is still no clear line, but one should at least be careful not to invent false life stories that mislead users. Many OnlyFans models heavily personalize their profiles (“I’m 22, a student, love sports…”). If all this is made up, it would be deception, but OnlyFans fans do expect a certain level of fictionalization. It is difficult to say where the legal boundary lies, as this is a purchased fantasy.
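Taking up the transparency point from the list above: a minimal, purely illustrative sketch of how an operator could ensure that the first automated message in every chat session discloses that an AI is replying. The wording and structure are assumptions for illustration, not requirements quoted from the AI Act.

```python
AI_DISCLOSURE = (
    "Note: replies in this chat are generated automatically by an AI system, "
    "not written by a human."
)


class ChatSession:
    """Wraps any automated reply generator and guarantees that an AI disclosure
    is sent before the first generated reply of every session."""

    def __init__(self, generate_reply):
        self.generate_reply = generate_reply  # any callable: user message -> reply text
        self.disclosed = False

    def respond(self, user_message: str) -> list[str]:
        messages = []
        if not self.disclosed:
            messages.append(AI_DISCLOSURE)  # disclosure is sent exactly once, up front
            self.disclosed = True
        messages.append(self.generate_reply(user_message))
        return messages


# Usage with a stubbed generator standing in for a real language model:
session = ChatSession(lambda msg: f"Thanks for your message: {msg}")
print(session.respond("Hi, are you online right now?"))
```

The point of the wrapper pattern is that the disclosure cannot be forgotten or bypassed by whatever model generates the actual replies.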
Overall assessment:
AI in the adult content sector is a double-edged sword. Opportunities: protection of real people, creative new offerings, potentially fewer legal complications with actual performers. Risks: high potential for abuse (deepfake porn), strict criminal liability limits (depictions of minors are absolutely off-limits), trust issues. Start-ups or creators working with this type of content should act with extreme caution: always obtain explicit consent as soon as real people are used even as a basis; implement strict filter systems against unlawful content; and remain honest and transparent towards paying users in order to stay credible in the long term. And never slip into gray areas such as “youthful appearance” – that would be the certain end, legally and morally. If, however, it is possible to establish purely fictitious adult characters, this could actually be legally relieving: the “model” cannot assert any employment or personality claims, and no real person’s dignity is violated. The avatar is then treated like an animated character, which is certainly more legitimate than pressuring real people into indecent acts. Nevertheless, it remains to be seen how society and legislators take up this development – after all, these are completely new questions of consumer protection and sexual ethics in the digital space.
Virtual business models and corporate law issues
If virtual influencers or AI employees are not just gimmicks but the core of a business model, the question arises as to how such a company should be structured. Can AI personas themselves be bearers of rights and obligations? How do you form a company around an artificial character? Here we come up against questions of company law and general civil law.
1. legal capacity of virtual persons:
In principle, only natural persons and legal entities (such as a GmbH or AG) have legal capacity and can conclude contracts, take legal action, etc. An AI avatar is neither – it is a product or a persona, ultimately a collection of software and creative elements. This means that a virtual person cannot itself be a contractual partner. If, for example, “virtual influencer Anna” enters into an advertising contract with Adidas, that contract must be concluded with the operator, such as “XYZ Media GmbH, acting under the name of the avatar Anna”. Nor can the AI make legally effective declarations of intent of its own – legally, it is always the person or company behind it that acts. This also rules out an AI influencer becoming, for example, the managing director of a GmbH or a board member of an AG, as the law requires natural (or, for some roles, legal) persons with the appropriate legal capacity. In Germany, for example, a managing director of a GmbH must be a natural person with unlimited legal capacity. An AI does not fulfill this requirement.
There has already been a debate in the EU (initiated by the EU Parliament in 2017) about a possible “electronic person” for advanced AI systems that make decisions autonomously, for example. However, this idea was discussed very controversially and has not yet been implemented. The intention is to leave liability with manufacturers/operators rather than give an AI its own (limited) legal capacity. Accordingly, it is currently unthinkable that a virtual employee could be a rights holder itself – legally speaking, it is a tool and someone’s intellectual property.
2. corporate structure around AI influencers:
In practice, the procedure is often as follows: There is a company (GmbH, UG etc.) that holds the rights to the virtual character and runs the business. This company can conclude contracts (with advertising customers, service providers), generate income (e.g. sponsorship money, product sales) and have expenses (e.g. designers, programmers, marketing costs). Although the virtual influencer is presented as a person in external communication, internally they are managed like a brand or product. Perhaps comparable to a cartoon character: Mickey Mouse, for example, can be imagined as a person, but all business is of course conducted via Disney as the rights holder.
What is interesting from a corporate law perspective is that the shareholders of these companies are often the creatives/programmers, possibly together with investors. Theoretically, the avatar itself could be assigned a value (as an intangible asset) and contributed to the company’s capital – for example, if the founders declare: “Our AI character, including software and concept, is worth X”, and contribute it to the company. So far, however, this remains largely theoretical; in practice, money usually flows in as seed capital, which is then invested in development.
One point is the limitation of liability: it is wise to interpose a corporation so that the individuals behind the project are not fully personally liable. As we have seen, a number of liability risks lurk here (personality rights claims, competition law infringements, GDPR fines). The GmbH offers protection against external liability. Of course, in the case of legal violations such as personality rights infringements, the individuals involved can also be personally liable in tort – but responsibility can often be channeled to the company.
3. contract design with AI avatars:
If an agency places a virtual influencer with clients, the contract should clearly state that the services will be provided by the avatar (or better: by the operator via the avatar). For example, a clause: “The influencer campaign will be carried out with the virtual character XY, which is embodied by ABC GmbH. ABC GmbH ensures that…”. Contracts should also address what happens if the avatar fails (technical problems) or if the audience reacts negatively – just as morality clauses often exist for human testimonials (“if the influencer is involved in a scandal, the contract may be terminated”). Something analogous could be agreed here (“if the virtual figure meets with widespread disapproval or is prohibited by regulation…”). This sounds unusual, but it is conceivable, as it is a new type of risk.
4. protection of your own virtual business model:
From a business perspective, you want to protect your AI character from imitators. We mentioned trademark law, copyright (if available) and design protection as instruments. Know-how protection could also be important: The technology used to animate the avatar and the specific settings could be guarded as a trade secret (subject to the GeschGehG, Trade Secrets Act). Employees who have access must sign NDAs. If a virtual employee uses special algorithms (e.g. a proprietary AI for dialogs), these should be patented or otherwise secured, if possible.
5. AI influencers as legal entities in their own right – sci-fi or the future?
Although currently ruled out, one can speculate: should the law at some point grant rights to AI entities? For example, limited legal capacity so that they can pay for damages themselves (e.g. via a fund fed by the operators). So far, there is little need for this, as the detour via insurance and operator liability works. But the discussion could one day become real if AI agents act more and more autonomously. Foundation law offers an interesting thought experiment: a foundation has no owners, only a purpose and governing organs. One could theoretically set up a foundation “for the administration of virtual person X”, with the AI as its central asset and a human board of directors. But the AI itself would remain an object.
6. employee and labor law aspects:
If companies use virtual employees instead of human employees, this does not currently raise any direct questions under labor law (because the AI is not an employee). Indirectly, however, it could become relevant if, for example, an AI has decision-making power over human employees. For example, an AI manager that approves vacation requests or assigns shifts – works councils might have to have a say here, as the use of such software could be subject to co-determination under Section 87 of the Works Constitution Act (automation of monitoring or performance management). Data protection in the employment relationship also plays a role if AI processes employee data.
Another topic: social security and taxes. Of course, AI does not pay social security contributions. If jobs are replaced by AI, there are debates as to whether companies that do so should pay a “robot tax” to support the social security system. So far, this has not been implemented politically, but it shows that a large-scale replacement of human employees with virtual ones could lead to socio-political reactions. In our context – media/influencers – this is not (yet) a mass phenomenon, but rather individual cases.
7. ethical responsibility of the virtual company:
A business that is based entirely on a virtual identity must also consider the ethical programming of that identity. For example, if a company markets an AI fitness coach as an app, it effectively has a “virtual employee” with no will of its own, only what has been programmed. From a business ethics perspective, the company bears full responsibility for what this AI coach advises customers to do (much more directly than with human employees, who can make moral judgments themselves). One could argue that it is easier to keep an AI system correct and inclusive at all times because the company sets the parameters – but the developers are themselves humans with biases. This is also where the AI Act’s obligations tie in (risk assessment, bias testing for high-risk systems), which can also be carried out voluntarily for non-high-risk AI in order to avoid scandals.
8. example of a virtual company:
Suppose an agency founds “VirtualStar GmbH”, whose only “celebrity” is the virtual influencer VIKI. VIKI has millions of followers, does advertising, and perhaps there is even merchandise (T-shirts with VIKI’s avatar, virtual goods such as NFTs of her pictures). All of this business is handled by VirtualStar GmbH. VIKI may also be engaged as a long-term brand ambassador for a company. VirtualStar GmbH must then also fulfill ongoing obligations, e.g. tax returns and accounting. What if VIKI is a huge economic success? The company can distribute profits – to the human shareholders. VIKI itself, as a “character”, will not receive a salary; instead, the costs of her creation and operation appear as ordinary expenses or can be capitalized as an intangible asset and amortized over several years. This shows how the avatar is treated as an asset from a business perspective.
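As a rough, purely illustrative calculation of that accounting point (with invented figures): if the creation of the avatar were capitalized at, say, EUR 300,000 and amortized straight-line over five years, the annual charge and remaining book values follow directly:

```python
def straight_line_amortization(capitalized_cost: float, useful_life_years: int) -> list[float]:
    """Return the remaining book value at the end of each year (straight-line method)."""
    annual_charge = capitalized_cost / useful_life_years
    return [
        round(capitalized_cost - annual_charge * year, 2)
        for year in range(1, useful_life_years + 1)
    ]


# Hypothetical figures: EUR 300,000 development cost, 5-year useful life.
print(straight_line_amortization(300_000, 5))  # -> [240000.0, 180000.0, 120000.0, 60000.0, 0.0]
```

Whether capitalization is permissible, and over what useful life, is of course a question for the company’s accountants and the applicable accounting standards, not for this sketch.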
9. end of a virtual influencer:
Interesting question: when the hype is over or the company goes bankrupt, what happens to the avatar? It can be sold like a brand or a domain. The rights can be transferred to someone else. This leads to the curious possibility of a virtual influencer “changing agencies” or being taken over by another company without the audience necessarily noticing (apart from a change in style). There are parallels here with fictional characters such as James Bond – the rights passed from Ian Fleming to studios, but the character lives on. Legally this is unproblematic; morally, one could ask whether the character has a kind of “integrity” that should be preserved, or whether the new owner may completely reinvent the persona. As long as there are no legal restrictions (such as moral rights – which an AI character does not have, as it is neither an author nor human), the owner is completely free to redesign the character. This might irritate fans, but it is legally permissible – unless earlier promises were broken (e.g. if VIKI stood for certain values and then suddenly did the opposite – but promises made by a fictional character are rarely enforceable).
Conclusion on virtual business models:
In principle, the AI persona is treated like a product or brand, and the company is organized accordingly. It is important to ensure that no misleading external appearance arises – for example, of pseudo-self-employment or of an unclear contracting party: always conclude contracts in the name of the company, state who is responsible in the legal notice, etc. Internally, rights and obligations should be secured, and the company should prepare for future regulation (for example, if AI registers are introduced or special tax rules come into force).
Conclusion
Virtual employees, AI influencers and synthetic content are no longer science fiction, but are finding their way into business and marketing. They offer exciting opportunities: companies can develop new forms of customer contact, be present around the clock, creatively scale content and even rethink sensitive areas such as advertising or eroticism without putting real people at risk. Start-ups and media companies with limited resources in particular can benefit from AI-generated actors in order to present themselves professionally without having to employ large teams. Consumers can also benefit – for example, virtual service staff may be able to provide help more quickly, and creative AI content can be entertaining and innovative.
However, you should never lose sight of the legal framework. In the EU and especially in Germany, what is forbidden offline is also forbidden online – and AI does nothing to change this. If an avatar offends someone, the operator is liable as with any other defamation. If an AI advertising clip makes false promises, competition law and any official sanctions apply in the same way. At the same time, new rules are on the rise that are specifically aimed at AI, above all the AI Act with its transparency regulations. Companies that build in transparency and compliance at an early stage will be at an advantage when these laws come into force.
Particularly sensitive rights – privacy, data protection, intellectual property – must be meticulously observed. Unfortunately, the technologies also invite abuse, be it through deepfakes or unauthorized data processing. A clear warning applies here: the targeted impersonation of real people without permission is legally taboo. The limit is also reached where sensitive legal interests such as the protection of minors are affected. No marketing success justifies crossing such boundaries.
Internationally, we have seen that the approach is different: China has a tough line of regulation and control – anyone who wants to operate there must take labeling and registration very seriously. The USA gives more freedom, but that does not mean an absence of law – on the contrary, courts and individual laws ensure that red lines exist there too (for example, in the case of pornographic deepfakes and commercial exploitation of personas).
Ethical gray areas remain: Is it permissible to deceive people emotionally, even if it is legal? Example: An AI influencer who models an unattainable ideal of beauty for teenagers – this is already criticized with human influencers, but with computer-generated ones, the accusation of artificiality could increase the pressure on young people (“not even influencers are real”). Companies should be aware of this responsibility. In the long term, only those who build trust with their audience can achieve sustainable success. And trust is built through honesty and respect for the rights and interests of users.
In conclusion, it can be said that virtual employees and AI influencers are by no means operating in a legal vacuum under the current legal system. On the contrary, a wide range of existing regulations apply, from imprint obligations to the UWG, BGB, KUG and GDPR. These apply without restriction and must be an integral part of any project planning. In addition, it is important to look ahead – to future laws such as the AI Act or adjustments to copyright and liability law – in order to be compliant at an early stage.
For start-ups, media companies and agencies, this means in concrete terms:
- Get legal advice right from the conception of your AI projects. Legal support is not optional but a must if you want to avoid trouble.
- Document the development of your AI personas (training data, licenses, decisions) in order to be able to demonstrate that you have not violated any rights in the event of a dispute.
- Develop ethical guidelines for your AI characters: What are they allowed and not allowed to do? What values do they represent? This helps prevent the characters from slipping up.
- Be transparent with your target group: mark AI content openly and communicate who is behind it. Most people accept AI content as long as they are not taken for fools.
- Continue to monitor regulation: what happens in China can be a trendsetter; what has been decided in the EU (AI Act) will come; the dynamics require constant adaptation.
The virtual revolution in marketing and customer contact is full of opportunities, but also potential legal pitfalls. If you want to take advantage of the opportunities, you have to manage the risks. However, with a clear legal foundation and a sense of ethical boundaries, the use of synthetic employees and influencers can succeed – and perhaps help shape the future of the media and advertising landscape without being on the wrong side of the law or history.