The integration of artificial intelligence (AI) in video games – for example through AI-controlled NPCs (non-player characters) and procedurally generated quests, texts or dialogs – raises new legal questions. At the intersection of IT law, media law and, in particular, games law, developer studios and publishers must take a close look at how automatically generated content is to be classified under copyright law and what contractual precautions are necessary. This article examines the current legal situation regarding AI-generated content (copyright) and shows how contract drafting should be adapted in order to define clear responsibilities, rights and liability limits when using AI tools. The aim is to present workable best practices for drafting games contracts that involve AI-generated content.
Copyright classification of AI-generated content (Germany, EU, USA, Asia)
Germany/EU: Under German copyright law, a work only enjoys protection if it is a “personal intellectual creation” of a human being. This means that only a human creator can be the author. Texts, images or dialogs generated automatically by an AI do not usually meet this requirement, as the creative decisions were made by the algorithm and are not directly based on human creativity. Accordingly, purely AI-generated content is not protected by copyright in Germany. The German Copyright Act (Section 2 (2) UrhG) and the concept of a work under EU law presuppose human creativity – an AI has neither its own legal personality nor human creativity in order to be an author within the meaning of the law. European copyright law (harmonized by the case law of the European Court of Justice) also requires that the work must be the intellectual creation of a human author.
In practical terms, this means that if a developer has story dialogs or quest descriptions written entirely by a generative AI and these are incorporated 1:1 into the game, there is no copyright protection. There is therefore no human author to whom these texts could be attributed as a work. The consequences are far-reaching – on the one hand, no one can claim exclusive rights to this AI content, and on the other, third parties could theoretically copy and use such content without infringing copyright. This is tricky for the games industry: characters, dialogs or artwork from a game can normally be protected by copyright against unlicensed use (for example in fan articles, books or other games). If this protection is missing for AI content, competitors could, for example, adopt unique NPC dialogs or quest ideas without publishers or developers being able to take legal action. Anyone who uses AI-generated content in important game components should be aware of this risk.
However, there are gray areas. It is not uncommon for a minimum amount of human creativity to flow into the AI output – be it through the manual selection or processing of the results. In Germany, the so-called creator principle applies (Section 7 UrhG): the author is the creator of the work, i.e. the person who made the creative contributions. So if a developer subsequently edits and creatively modifies AI-generated texts so that the developer’s personal signature becomes recognizable, the final content can once again acquire the character of a protectable work. In other words: post-editing can turn a raw AI version into a copyright-protected work. Many studios therefore take the approach of only using AI as a tool for drafts and letting the human team refine important content. This ensures that a human author can be named at the end (such as a narrative designer) who is responsible for the final creative design. In this way, the content remains protected by copyright and can be used exclusively instead of ending up in the public domain. For incidental or generic NPC dialog, unfiltered AI text may suffice – for central story elements, human final control is recommended from a legal perspective.
USA: US law likewise requires a human author for copyright protection. The US legal system has already refused protection in cases without a human creator – for example in the “Monkey Selfie” case, in which a photo taken by a monkey was denied protection due to the lack of a human photographer. Applied to AI, this means that content created purely by AI is not covered by US copyright. The US Copyright Office clarified in 2022/2023 that works generated entirely by an AI are not registrable due to the lack of human authorship. In practice, this means that AI-generated graphics or texts, for example, do not enjoy copyright protection as long as there is no significant human creative contribution. However, hybrid works may be partially protected: if a human has selected, arranged or creatively modified the AI results (for example, in a comic book where AI images are integrated into a story conceived by the author), US copyright protects the human parts (story, selection, arrangement) but not the raw generated elements. The trend in the USA is therefore the same as in Europe – no copyright without a human touch. Developers and publishers there cannot rely on AI content being exclusive either, unless it has been significantly shaped by humans.
Asia: There is not yet a uniform approach in Asian legal systems, but there are interesting developments. China, for example, has attracted attention in recent years because Chinese courts have deemed some AI-generated content worthy of protection. One high-profile case was the so-called “Dreamwriter” case: a news article written by Tencent’s AI program was recognized by a Shenzhen court as a copyrightable work. The simplified reasoning was that the text generated by the AI system had the required originality of expression and form and that the use of the AI was controlled by humans. Although no human formulated the text directly, the court at least recognized the media company as holding certain rights to the AI article – a markedly different approach from that in Germany or the USA. Nevertheless, it remains unclear to whom such rights are ultimately assigned (the AI operator, the user of the AI?). In China, a more open attitude is thus cautiously emerging: AI results may be treated as works after all, provided there is a creative result. Other Asian countries are more cautious: Japan, for example, also requires a human author for copyright and is currently focusing its legal reforms more on the training phase of AI (keyword: permitted use of data to train AI models) than on the question of whether AI outputs qualify as works. Overall, there is still global disagreement – while Western core markets (EU, USA) currently do not grant protection for purely AI-generated content, individual jurisdictions could take different paths in the future. For internationally active developers, this means exercising caution and taking differing legal situations into account. If in doubt, take the most conservative position – i.e. assume that AI content does not enjoy automatic copyright protection – and take appropriate contractual and business measures.
Assignment of rights, responsibility and quality assurance for the use of AI in the contract
If AI-generated quests, texts or graphics are used in a game, the development or publishing contract must take this special content creation into account. Developer studios and publishers should clearly regulate who owns the rights to AI content, who is responsible for its function and legal compliance and how quality is ensured. In the absence of such specific regulations, there is a risk of uncertainty later on – for example, as to whether the publisher may exclusively exploit the AI content, or who is liable if an automatically generated quest text contains errors or legal violations.
Assignment of rights (ownership): In games contracts, it is common for the developer to grant the publisher comprehensive rights to all work results. In the case of AI-generated content, however, there is the particularity that there may be no copyright at all (see above). Nevertheless, it should be contractually stipulated that all content created in the course of development – whether created manually by employees or generated by AI – is transferred to the publisher for use. In practical terms, the developer will therefore ensure that the publisher receives the full rights of use to the delivered game content, including AI-based components. Even if, strictly speaking, there is no exclusive statutory right to purely AI-generated texts or graphics, such a contractual clause creates legal certainty and prevents disputes: the publisher may in any case use the content in the game and commercially, and the developer undertakes not to exploit it in any other way or make it available to third parties. This establishes a kind of quasi-exclusivity, at least contractually, even if no statutory copyright protection applies. In addition, it is advisable – especially if AI is used significantly – to stipulate contractually which party is considered the “author” or creator of the content. For example, it could be defined that all content contributions are deemed to originate from the developer, even if AI tools were involved. This makes it easier to enforce rights later (e.g. against copiers) and prevents ambiguities in the chain of rights. Important: if the developer provides content that is not completely newly generated, but possibly replicated from AI training data, the rights of third parties must of course be observed – more on this below under liability.
Technical responsibility when using AI: The use of AI tools does not release the developer from their responsibility to provide the contractual services. From the publisher’s point of view, the contract should clearly state that the developer bears the full risk for the tools they use. This includes, for example, the guarantee that AI-generated NPC behavior works just as reliably as traditionally scripted behavior. If the AI causes bugs, unforeseeable reactions or performance problems, the developer remains liable as if they had programmed the errors themselves. Developers should be aware of this and, if necessary, contractually limit the extent to which they are liable for unforeseeable AI errors. In practice, however, a publisher will rarely accept an “AI excuse” – they expect a functioning product. Therefore, a developer must ensure internally that AI-generated code, levels or dialogs are sufficiently tested and controllable. Contracts can stipulate that the developer is responsible for selecting and monitoring the AI tools and that any technical problems caused by the AI will be rectified just as promptly as the developer’s own errors. It is also advisable to agree whether the publisher must be informed about the use of AI or should approve certain tools. Large publishers in particular could specify which AI platforms may not be used for data protection or quality reasons (e.g. no cloud AI that uses confidential game data for further training). Such technical specifications and responsibilities can be set out in an annex to the contract. It should also be specified who is responsible if an AI service used fails or is changed – does the developer then have to find an alternative at their own expense, or does the publisher bear the risk of a delay? With clear agreements on technical responsibility, the project risk remains controllable and both sides know what they are getting into when AI is used.
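How such a contractually agreed tool approval could be enforced in practice is illustrated by the following minimal sketch. It is an assumption-laden illustration, not part of any real contract: the tool names and the structure of the (hypothetical) annex data are invented, and a real pipeline would read the list from the contract annex rather than hard-coding it.

```python
# Minimal sketch: enforce a contractually agreed allowlist of AI tools in the
# build pipeline. Tool names and the annex structure are hypothetical.
import sys

# Hypothetical content of the contract annex listing approved AI tools
CONTRACT_ANNEX = {
    "approved": {"local-llm-v2", "studio-dialog-gen"},
    "banned": {"cloud-gen-free"},  # e.g. rejected for data protection reasons
}

def check_tool_usage(tools_in_use: list[str]) -> bool:
    """Return True only if every AI tool in use is contractually approved."""
    violations = [t for t in tools_in_use if t not in CONTRACT_ANNEX["approved"]]
    for tool in violations:
        print(f"NOT APPROVED: {tool!r} is not on the contractual allowlist",
              file=sys.stderr)
    return not violations

if __name__ == "__main__":
    # Tools recorded by the content pipeline for the current build
    used = ["local-llm-v2", "cloud-gen-free"]
    if not check_tool_usage(used):
        sys.exit(1)  # fail the build so unapproved AI output never ships
```

Failing the build on an unapproved tool keeps the contractual obligation (only vetted AI platforms) technically enforceable rather than relying on memory.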
Quality assurance and testing obligations: AI-generated content should meet the same quality standards as content created by humans – the contract must reflect this. It is not enough to copy texts from an AI into the game unchecked; the developer should be contractually obliged to carry out appropriate quality assurance measures for AI content. Specifically, it can be agreed that the developer carefully checks all AI output before it is integrated into the game or delivered to the publisher. This inspection obligation covers both technical quality (e.g. correctness of AI-generated code, plausibility of automatically generated quest sequences) and content control. The latter is critical: an AI text could contain factual errors, inappropriate wording or even illegal content (e.g. discriminatory statements). The contract should therefore stipulate that the developer is responsible for manual review and rectification. If necessary, the publisher will introduce an acceptance procedure in which AI-generated content must be approved separately. This allows the publisher to filter again and send any problem areas back to the developer for correction. Both sides benefit from clearly defined processes: the publisher receives the promised quality and compliance, the developer avoids liability cases due to overlooked defects. Quality assurance also includes the requirement that AI outputs are consistent with the rest of the game in terms of style and content – ideally, the player should not notice any “break”. It can therefore make sense to stipulate in the contract that AI-generated texts are checked by the developer for consistency (language style, terminology, lore) and adapted if necessary; a simple automated pre-check supporting such a review is sketched below. Overall, AI is a tool, not a substitute for testing. The developer must plan for this and the contract should clearly stipulate this obligation.
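By way of illustration, the following minimal sketch shows what an automated pre-check feeding such a review process might look like. Everything in it is an assumption: the banned-term list, the lore terms and the length threshold are invented, and the check only flags output for the contractually required human review rather than replacing it.

```python
# Minimal sketch of an automated pre-check for AI-generated game text.
# It does not replace the contractually required human review; it only flags
# obvious problems so a reviewer must sign off before integration.
# Term lists and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

BANNED_TERMS = {"placeholder", "lorem ipsum"}   # assumed studio-specific list
LORE_TERMS = {"Eldoria": "Kingdom of Eldoria"}  # hypothetical canonical names

@dataclass
class ReviewResult:
    text: str
    issues: list[str] = field(default_factory=list)

    @property
    def needs_human_review(self) -> bool:
        return bool(self.issues)

def precheck(text: str) -> ReviewResult:
    result = ReviewResult(text)
    lowered = text.lower()
    for term in BANNED_TERMS:
        if term in lowered:
            result.issues.append(f"banned term found: {term!r}")
    for wrong, canonical in LORE_TERMS.items():
        if wrong in text and canonical not in text:
            result.issues.append(f"lore inconsistency: use {canonical!r}, not {wrong!r}")
    if len(text) < 20:
        result.issues.append("suspiciously short output")
    return result

# Usage: every AI-generated quest text passes through precheck();
# anything flagged is routed to a narrative designer before acceptance.
print(precheck("Travel to Eldoria and find the lost placeholder sword.").issues)
```

The design point is that the tool gates rather than decides: flagged content goes back to a human, which mirrors the contractual allocation of review responsibility described above.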
Best practices: Important contractual clauses for AI-generated content in games
A number of best practices have emerged in the drafting of contracts between developer studios and publishers in order to implement the aforementioned points in concrete terms. Listed below are key clause topics and exemplary formulations that should not be missing from AI-related developer and publishing contracts:
- Clear definition of AI content and transfer of rights: Specify in the contract that AI-generated content is also part of the work results covered by the contract. For example, a clause could read: “The developer grants the publisher the exclusive right of use, unlimited in time and space, to all content created under this contract, including content generated using AI systems.” This ensures that the publisher receives the same exploitation rights to AI output as to conventionally created content. In addition, it is possible to define what is meant by “AI-generated content” in order to avoid misunderstandings (e.g. texts, dialogs, graphics or code generated by automatic algorithms). This clarification of ownership closes the gap that arises because copyright does not automatically apply here. Contractually, a separate “right” is created, so to speak: at least a contractual right of use, even if no statutory copyright exists. It is also important that the developer assures that it has acquired or created all the rights to the AI results necessary to license them onward.
- Limits of responsibility and liability for AI-based content: A key aspect is the liability regime for content that originates (in part) from AI. The contract should explicitly state that the developer is liable for the AI content supplied to the same extent as for manually created content. For example, the following could be agreed: “The developer guarantees that AI-generated components of the contractual service are also free from material defects and defects of title. In particular, the contractually agreed warranty and liability provisions apply regardless of whether content was created with the help of AI software.” Such a clause makes it clear that AI is not a way out of responsibility – any errors or legal infringements trigger the same legal consequences (rectification, compensation, etc.). At the same time, it is advisable to include limitations of liability for risks that are difficult to calculate. A developer could insist on excluding or limiting liability for AI anomalies that are not their fault (e.g. limiting liability to cases of gross negligence). The following could be formulated: “The developer is only liable for damages caused by faulty AI-generated content if it could have recognized and prevented them by exercising the due care customary in the industry.” This clarifies that the developer is not liable for completely unforeseeable “peculiarities” of the AI as long as it has fulfilled its testing and control obligations. From a publisher’s perspective, such restrictions will be examined critically, but a fair middle ground may be to set specific maximum liability amounts or defined exceptions. It is important that both sides reach a common understanding of the limits of responsibility: the developer should not be liable without limit for every abstract residual risk, but should very much be liable for failures in implementation, quality assurance or rights clearance.
- Ensuring freedom from third-party rights and checking for infringements: AI models are trained on large amounts of data, often including copyrighted material. It must therefore be contractually regulated that the developer does not supply any infringing content – regardless of whether it originates from an employee or was “learned” by an AI. A best-practice clause would be: “The developer warrants that the content generated by the AI does not infringe any third-party rights. In particular, no texts, dialogs, graphics or other assets may be supplied that have been taken from protected source material without authorization.” In addition, the developer should ensure that the AI results are original and do not simply reproduce existing works. In practice, this can be achieved through spot checks, plagiarism checks or content reviews – these obligations can be anchored in the contract (a minimal technical sketch of such a spot check follows after this list). If a third party does make a claim (e.g. an author claims that an NPC monologue is a verbatim quote from a novel), the contract should provide for a procedure and an indemnity clause: the developer agrees to indemnify the publisher against all claims of copyright or trademark infringement arising from supplied content and to resolve any disputes in consultation. Conversely, an experienced developer can ensure that such indemnification obligations are limited where it has adhered to the specifications (see liability limits above). The developer’s review obligations should also be explicitly mentioned: e.g. “The developer will review AI-generated content for possible legal violations (in particular copyright, personality rights, protection of minors) before delivery and sort out or adapt any problematic content.” Such a clause creates confidence that no obviously unlawful results will enter the game unfiltered.
- Quality assurance clause for AI content: In addition to the legal review, general quality is also an issue. A contractual obligation similar to a guarantee that AI-generated game elements comply with the design specifications and quality standards is recommended here. For example: “The developer shall ensure through appropriate checks that AI-generated quests and dialogs meet the requirements described in the concept documentation, are consistent in language style and character representation and do not contain any errors that impair the gaming experience.” In addition, a reservation of acceptance could be formulated: “The publisher reserves the right to reject AI-based content submitted by the developer or to demand changes if it does not meet the agreed standards in terms of content or quality.” This makes it clear that the publisher is not at the mercy of the developer if the automated content is not convincing – it can demand improvements as with any other deliverable. Such clauses also motivate the developer to critically curate and, if necessary, manually optimize AI results before presenting them.
- Licensing of the AI tools used: One point that is often overlooked is the licensing situation of the AI tools themselves. Developers may use external AI platforms (e.g. a text generator accessed via API, a graphics AI or middleware). In this case, the contract should stipulate that the developer only uses AI software that may legally and permissibly be used for the project. One formulation could be: “The developer guarantees that all AI systems used for content creation are properly licensed and that their terms of use do not restrict the intended use of the generated content in the game.” This excludes, for example, the use of a free AI whose terms and conditions prohibit commercial use, or the use of an open-source AI under a copyleft license that could legally “infect” the game. If necessary, the developer should disclose to the publisher which AI services or libraries are being used. The publisher can then check for itself whether there are any risks under data protection or IP law (for example, if an AI provider grants itself rights of use to all generated content in its general terms and conditions – which would be problematic, as the publisher is seeking sole use). An agreement on updates and maintenance of the AI tools is also conceivable: should an AI service fail or change its licensing policy, the developer must proactively provide a replacement or inform the publisher. Overall, the license clause ensures that the contractual transfer of rights does not fail because of a defective upstream licensing chain – the publisher should ultimately be able to dispose freely of the game and its content, without hidden conditions.
- Involvement of the publisher & acceptance process: It has proven to be best practice to also give the publisher a role when AI is used in the game. Of course, the main responsibility lies with the developer, but certain cooperation duties of the client can be contractually agreed. For example, it could be stipulated that the publisher provides early feedback on AI-generated story drafts or that internal lore experts proofread the AI texts. A milestone can also be included in the contract at which AI content is previewed and approved. Why is this important? On the one hand, it distributes responsibility: if the publisher explicitly approves content, it can hardly claim later that the developer delivered “unsuitable” content on its own authority – after all, it had the opportunity to check it. On the other hand, it improves quality, because two pairs of eyes (developer and publisher) recognize potential problems. In the contract, this could look like this: “AI-generated content (e.g. procedurally generated quests, texts) must be expressly approved by the publisher as part of the acceptance process. If the publisher identifies discrepancies or potential risks, it will inform the developer so that adjustments can be made.” Such a clause promotes transparent collaboration and prevents AI content from lying dormant in the project unnoticed and causing surprises later on.
- Confidentiality and data protection when using AI: Finally, a contract for AI projects should also regulate the handling of sensitive data. Many AI tools (such as cloud-based services) send data to external servers. In the games sector, this could even be confidential information about the game world or unpublished characters that you do not want to pass on to third parties. It is therefore advisable to extend a confidentiality clause specifically to AI: “The developer shall ensure that no confidential information about the game is disclosed to third parties without authorization through the use of AI services. If necessary, input data to AI systems must be anonymized or generalized in such a way that conclusions about protected project content are excluded.” It could also be required that only AI services that are contractually obliged to maintain confidentiality are used. Breaches of such obligations should have clear consequences (contractual penalty, damages), as both trade secrets and copyrights may be affected. This clause ensures that the use of AI does not become a security leak – an aspect that is easily forgotten in the heat of the moment, but is of enormous importance for publishers.
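For the spot checks mentioned under rights clearance above, the following minimal sketch illustrates one conceivable technical approach: flagging AI output that reproduces long verbatim passages from known source texts. The reference corpus and the eight-word window are illustrative assumptions; serious plagiarism screening would rely on dedicated tooling or specialized services.

```python
# Minimal sketch of the spot check mentioned under rights clearance above:
# flag AI output that reproduces long verbatim passages from known source texts.
# The reference corpus and the 8-word window are illustrative assumptions;
# real plagiarism screening would use dedicated tooling or services.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(ai_text: str, corpus: list[str], n: int = 8) -> bool:
    """True if the AI text shares any n-word sequence with a corpus document."""
    ai_grams = ngrams(ai_text, n)
    return any(ai_grams & ngrams(doc, n) for doc in corpus)

# Hypothetical usage: corpus would hold licensed/known third-party texts.
corpus = ["it was the best of times it was the worst of times it was the age of wisdom"]
suspect = "The innkeeper sighed: it was the best of times it was the worst of times indeed."
print(verbatim_overlap(suspect, corpus))  # True -> escalate to manual legal review
```

A hit does not prove infringement; it merely triggers the manual legal review that the contract clauses above make the developer responsible for.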
By including the above-mentioned clauses, developers and publishers create a contractual framework that addresses the special features of AI-generated content. Both sides know where they stand: The developer has clear guidelines on what is expected legally and in terms of quality, and the publisher has assurances in place to protect their investment in the game. Such detailed regulations reflect the best practices of experienced games lawyers and help to prevent disputes before they arise.
Recommendations for developers: Limitation of liability and allocation of responsibilities in publishing contracts
From the perspective of developer studios, it is crucial to work towards clear regulations on AI-based content at an early stage when negotiating with publishers. Smaller studios in particular, which are increasingly relying on AI tools, should take care to limit their own liability and involve the publisher in decision-making processes in an appropriate manner. Here are some specific recommendations for developers on publishing contracts in the AI age:
- Communicate the use of AI openly: Make it clear from the outset that you want to use AI in content creation and how. Ideally, the contract should already state which areas (e.g. side quests, NPC dialogs) will be created procedurally or supported by generative AI. This transparency creates understanding with the publisher and avoids mistrust later on (“You secretly had the texts written by an AI!”). It also allows you to set sensible limits together – for example, that certain main story elements should be written by human authors. Open communication about the use of AI demonstrates professionalism and makes it possible to establish appropriate contractual guidelines (see above).
- Limit liability appropriately: Do not allow full liability for AI risks that you can barely control to be imposed on you unnoticed. Carefully review the warranty and liability clauses proposed by the publisher and negotiate exceptions or liability caps for AI issues. For example, you could agree that your liability for copyright infringements caused by AI-generated content is limited to cases in which you culpably breached your duty of care. The total amount of liability for such novel risks should also be capped if necessary (e.g. at a certain percentage of the order volume). Make sure that standard clauses such as “the developer guarantees the complete novelty and originality of all content created” are softened in the AI constellation – a realistic formulation would be, for example, that you do not infringe any third-party rights to the best of your knowledge and according to the state of the art. Such adjustments protect you from being held liable indefinitely later on for undiscovered peculiarities of the AI. If in doubt, consult an experienced games law attorney to find balanced formulations that do not threaten your studio’s existence and are still acceptable to the publisher.
- Agree clear responsibilities and involvement of the publisher: Don’t be afraid to get the publisher on board with AI content. It is also in your interest that the client keeps an eye on generated quests or dialogs and provides early feedback. Negotiate contractual clauses that oblige the publisher to participate actively in review and acceptance. For example, it could be agreed that the publisher will decide promptly whether a plot twist generated by AI is acceptable or whether adjustments are necessary. Such agreements protect you: if the publisher accepts AI content, it also assumes some responsibility for it. It is difficult for it to claim later that you delivered nonsensical content on your own authority if it previously gave its approval. Joint review loops also reduce the risk of problematic content ever reaching release. As a developer, you can push for acceptance protocols in which the publisher expressly approves the AI-generated content – this creates certainty of evidence in an emergency. In a nutshell: include in the contract that AI issues are managed jointly and are not your sole risk.
- Ensure documentation and quality internally: Even if it is not written directly into the contract – ensure complete internal documentation of your AI deployment. Record which tools were used with which settings, and keep AI output versions as well as any manual changes (a minimal sketch of such a provenance log follows after this list). In the event of a dispute, you can show that you proceeded methodically and fulfilled your due diligence obligations. If, for example, an allegation of plagiarism arises, you ideally have logs of which prompts were used and evidence that the result was checked. On the contract side, you can offer to give the publisher access to this documentation on request – this strengthens trust and demonstrates your willingness to comply. At the same time, you protect yourself: good documentation can also help internally to identify the AI’s weak points early on and make improvements before anything is delivered to the publisher.
- Plan for human creativity where necessary: As a developer, you face a balancing act between efficiency through AI and legal certainty through human creativity. Our recommendation from a legal perspective: consciously plan human creative work for all central content or content worth protecting. As much as AI can relieve you of routine work, you should avoid letting the heart of your game (story, important characters, unique dialogs) fall entirely into the public domain. On the one hand, you secure copyrights (and thus marketing opportunities such as merchandising) for yourself and the publisher; on the other hand, you reduce legal risks. This strategy can also be underpinned contractually: for example, by assuring that key creative elements are designed by your team and that AI is only used in a supporting role. Such an assurance may sound unusual, but it can create trust with sensitive publishers. At the same time, it prevents the publisher from being dissatisfied later because the game lacks a human touch after all. Best practice is therefore a hybrid approach: use AI where it is strong (mass content, variations), but retain human control and design at the points that are crucial for the identity and protectability of the game.
- Coordinate AI tool selection and licenses: In your own interest, clarify with the publisher which AI tools may be used and who bears their license costs. Nothing would be more annoying than using a fee-based AI platform on your own initiative only for the publisher to later refuse to cover the costs – or, conversely, the publisher demanding a specific enterprise AI tool that exceeds your budget. Address these points and, if relevant, include them in the contract or at least in a side agreement. Equally important: if the publisher has reservations about certain AI services (e.g. for data protection reasons), get this in writing so that you can prove, in case of doubt, that you complied with the requirements. The licensing of the AI should therefore also be considered on the developer side: make sure that you have all the necessary rights and communicate early on if the publisher needs to support or agree to this.
- Agree on room for improvement: Despite all due care, it can happen that concerns arise after the game launch or during a review (e.g. someone discovers an inappropriate sentence in an automatically generated text). As a developer, you don’t want to fall straight into a liability trap, but want the chance to rectify the problem. It therefore makes sense to agree on accommodating rectification mechanisms for AI content in the publishing contract. For example, it could be stipulated that, in the event of subsequently identified defects in AI texts, the developer may deliver free patches or updates within a certain period before the publisher resorts to legal remedies. This corresponds to the idea of subsequent performance in the law on contracts for work and services. Such a clause protects both sides: the publisher receives the promise that problems will also be rectified retrospectively, and the developer can take action before damages or public debates arise. At the same time, it signals that the developer takes its responsibility seriously and is prepared to ensure quality and legal certainty even after release.
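For the internal documentation recommended above, the following minimal sketch shows one conceivable form of an AI provenance log. All field names, the JSONL format and the tool name are assumptions for illustration; in practice, the log structure would be aligned with whatever documentation access the contract grants the publisher.

```python
# Minimal sketch of the internal AI provenance log recommended above:
# record which tool produced which output, under which settings, and what
# a human changed afterwards. Field names and the JSONL format are assumptions.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_provenance.jsonl"  # append-only log kept under version control

def log_generation(tool: str, settings: dict, prompt: str,
                   raw_output: str, edited_output: str, reviewer: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "settings": settings,
        "prompt": prompt,
        "raw_output_sha256": hashlib.sha256(raw_output.encode()).hexdigest(),
        "human_edited": raw_output != edited_output,
        "reviewer": reviewer,  # who performed the contractual review
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Hypothetical usage after generating and reviewing an NPC line:
log_generation(
    tool="studio-dialog-gen",                 # assumed internal tool name
    settings={"temperature": 0.7},
    prompt="Greeting line for the blacksmith NPC",
    raw_output="Well met, traveler.",
    edited_output="Well met, traveler. The forge is hot today.",
    reviewer="narrative_designer_01",
)
```

Hashing the raw output instead of storing it keeps the log compact while still allowing you to prove later which version the AI produced and that a named human reviewed and edited it.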
In summary, developers should be proactive when it comes to contracts for AI-supported game development: identify and address their own liability risks, sensitize the publisher to the topic and strive for contractual regulations that create fair rules for the use of AI. Those who heed the above recommendations demonstrate professionalism and protect their studio from unpleasant surprises. The result is a contract that offers legal certainty despite the new technology – a win-win situation for both parties.
Conclusion: AI-controlled NPCs and procedurally generated content open up enormous opportunities for game development in terms of efficiency and creativity. However, developers and publishers are moving into uncharted legal territory here. No copyright protection without a human author – this dogma still characterizes the legal situation in Germany, Europe and the USA. This makes it all the more important to take contractual precautions: Clear agreements on the transfer of rights, quality checks and liability for AI content are essential to avoid disputes. A well-drafted contract can significantly mitigate the risks of new technologies. Ultimately, it is clear that with legal expertise in IT and games law and foresighted contract drafting, the benefits of AI can be exploited without ending up in legal chaos. Developers and publishers who implement these tips are ideally equipped to shape the future of storytelling with AI in a legally compliant and successful manner.