Agencies, freelancers, external development studios and content service providers have long been part of the value chain for many companies. This applies to large corporate structures as well as start-ups, which often scale growth, marketing and product development with external partners. At the same time, the use of AI has become a matter of course: text drafts, design variants, code snippets, translations, research, image and video creation, automation in ticket systems, even AI-supported analysis of customer data. This is precisely where a typical compliance problem arises: AI is being used, but without clear guardrails. And as soon as external service providers are involved, the risk multiplies, because information, data and work results pass through additional systems, people and tool chains.
An AI guideline for external parties is not a “nice-to-have” but an operational management tool. It defines which systems may be used, which data may be fed into which tools, which transparency and documentation obligations apply, how rights to work results are secured, and how to remain capable of acting in an emergency (data protection incident, IP claim, reputational damage). Without such rules, companies are flying blind: the service provider uses whatever tool it likes, feeds content into open systems, works with subcontractors, and the company only finds out when it is too late – for example through a warning letter, a data protection notification, or because confidential information suddenly surfaces in places where it should not be.
1. Why AI policies for external parties differ from internal policies
Many companies now have internal AI guidelines, or at least instructions on tool use. The key difference: internal policies have only limited effect on external partners, who are not integrated into the organizational structure and often use their own systems, accounts and processes. Agencies in particular often work with standardized tool chains – from text and image generators to automation platforms and collaboration tools. If there is no binding regulation in place, the client quickly ends up in an awkward position in terms of evidence and control: the results arrive, but no one can say for sure which data was processed where, whether training took place, whether third parties were involved, or whether output is based on problematic sources.
In addition, AI usage is not binary (“AI yes/no”) but a matter of degree. It matters whether a service provider uses a closed system in a controlled environment or an open system in which inputs are potentially used for training or other purposes. It also makes a difference whether a service provider merely polishes text or processes sensitive information – such as product roadmaps, customer lists, financial data, internal strategy papers, unpublished campaigns or source code. An AI policy for external parties must therefore not only state rules, but also operationalize them: how approvals are granted, how transparency is created and which minimum standards must be met.
2. Tool selection, data flows and control mechanisms
In practice, most disputes are not decided by whether AI was used, but by how it was used. This is why a solid AI policy should include three components that are often missing: (1) tool classification, (2) approval process, (3) verification and documentation logic.
Tool classification means distinguishing between open and closed systems – and, above all, specifying which category is permitted under which conditions. An approach that often works in practice: open systems are off-limits for confidential information, or permitted only after explicit approval; closed systems are more readily acceptable once certain settings (e.g. training/logging options) and contractual foundations (e.g. a data processing agreement, a subcontractor list) have been clarified.
The approval process is the central lever for turning “we have a policy” into real control. A mere notification obligation helps little if the service provider informs the client but remains free to decide in practice. One rule has proven its worth: new or modified AI systems may only be used after prior approval in text form (email suffices). This is low-threshold, yet unambiguous in the event of a dispute. In addition, a tool list is useful: what has already been approved may continue to be used; changes must be notified; new tools require approval.
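To make the classification-plus-approval logic tangible, here is a minimal sketch in Python. It is an illustration only, not part of any actual policy: all names (SystemType, ToolEntry, may_use, the example tool) are hypothetical, and the two decision rules simply mirror the text above (prior approval in text form; no open systems for confidential data).

```python
from dataclasses import dataclass
from enum import Enum

class SystemType(Enum):
    OPEN = "open"      # inputs may flow into training or other purposes
    CLOSED = "closed"  # controlled environment, training/logging restricted

@dataclass
class ToolEntry:
    name: str
    provider: str
    system_type: SystemType
    approved: bool          # prior approval in text form given?
    approval_ref: str = ""  # e.g. reference to the approval email

# Hypothetical tool list, maintained jointly by client and service provider
TOOL_LIST: dict[str, ToolEntry] = {
    "ExampleLLM Team": ToolEntry(
        name="ExampleLLM Team",
        provider="ExampleCorp",
        system_type=SystemType.CLOSED,
        approved=True,
        approval_ref="approval email of 2025-03-01",
    ),
}

def may_use(tool_name: str, confidential_data: bool) -> bool:
    """Apply the two core rules: prior approval, and no open systems
    for confidential information."""
    entry = TOOL_LIST.get(tool_name)
    if entry is None or not entry.approved:
        return False  # unlisted or unapproved tools need approval first
    if confidential_data and entry.system_type is SystemType.OPEN:
        return False  # open systems are off-limits for confidential data
    return True

print(may_use("ExampleLLM Team", confidential_data=True))    # -> True
print(may_use("SomeNewImageTool", confidential_data=False))  # -> False
```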
The documentation logic must be simple enough that it is actually followed in practice. Nobody wants a 20-page log per campaign. But a short deployment log (tool/provider, deployment environment/account, open/closed, key settings, subcontractors) is an extremely effective compromise: it creates verifiability, facilitates audits and reduces the risk of standing there without facts in an emergency. Particularly for larger companies or regulated sectors, this is often the difference between “controllable” and “uncontrollable”.
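How lightweight such a log can be is easiest to show with an example. The following Python sketch writes one CSV row per AI deployment; the field names mirror the list above, while everything else (file name, function name, example values) is a hypothetical assumption:

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class DeploymentLogEntry:
    """One row per AI deployment, mirroring the short log described above."""
    project: str
    tool_and_provider: str    # e.g. "ExampleLLM Team / ExampleCorp"
    environment_account: str  # e.g. team account, self-hosted instance
    system_type: str          # "open" or "closed"
    key_settings: str         # e.g. "training opt-out active"
    subcontractors: str       # involved third parties, or "none"

def append_log(path: str, entry: DeploymentLogEntry) -> None:
    """Append a single entry to a simple CSV log file."""
    row = asdict(entry)
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.DictWriter(f, fieldnames=list(row)).writerow(row)

# Usage: one call per deployment, as part of the normal workflow
append_log("ai_deployments.csv", DeploymentLogEntry(
    project="Spring campaign",
    tool_and_provider="ExampleLLM Team / ExampleCorp",
    environment_account="agency team account",
    system_type="closed",
    key_settings="training opt-out active, logging disabled",
    subcontractors="none",
))
```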
3. Copyright, chain of rights and AI output
The second major risk area is the rights and IP question. Agencies supply logos, campaign visuals, texts, claims, videos, templates, code, music or UI elements. As soon as AI is involved, two typical questions arise: (1) Do transferable rights arise at all? (2) Can the service provider effectively grant those rights?
A sober legal view is needed here: rights can only be granted to the extent that they arise in the first place and to the extent that the grantor is authorized to dispose of them. This is precisely why AI guidelines and the accompanying contractual clauses should work with an “if and when” logic. This is not an end in itself but genuine risk reduction: it prevents the service provider from “guaranteeing” something it cannot actually guarantee, and it prevents the client from relying on a seemingly watertight rights clause that falls apart in the event of a dispute.
At the same time, an intact chain of rights is crucial: employees, freelancers, subcontractors, involved production studios – everyone must grant their rights in such a way that the result reaches the client cleanly. In traditional agency contracts, this is often handled with a blanket “the contractor warrants”. For AI output, this is not always enough. Not because AI is “automatically illegal”, but because the tool chain introduces additional uncertainties: What underlying data? What license conditions? What further use? Which third-party rights might be affected? A good guideline therefore ties rights commitments to specific mandatory mechanisms: tool approval, input prohibitions, review obligations, documentation. This is far more reliable than blanket assurances of output “free of third-party rights”, which are often too absolute to hold up in practice.
And another point that many overlook: even if an IP claim is rare, it is usually expensive when it occurs. Campaign stops, redesign, re-cut, redeployment, reputational damage – and suddenly the supposed cost benefits of using AI evaporate. A guideline is therefore not “legal bureaucracy” but an economic safeguard for the production chain.
4. Liability, data protection and compliance
When an external service provider uses AI, companies quickly find themselves at the intersection of data protection law, confidentiality and contractual liability. The core problem: many provisions are either too soft (“please be careful”) or too hard (“comprehensive liability, regardless of anything”). Both are impractical. Too soft is ineffective. Too hard either does not get signed or creates a false sense of security, because in practice work carries on “somehow” regardless.
A clear line works in practice: strict liability and indemnification where specific obligations are breached, not blanket liability for every conceivable tool risk. A good AI policy therefore defines precisely which obligations are “critical”: no open systems for confidential data, approval of new tools, compliance with transparency obligations, no unauthorized inputs, compliance with data protection requirements. If a breach of these obligations results in damage or third-party claims, liability bites hard. If the service provider sticks to the rules, the risk remains manageable.
The key question in data protection is: who processes which data, for what purpose, in whose system? Agencies in particular often handle customer data “on the side”: CRM exports, newsletter lists, lead data, support cases, user feedback. As soon as such data ends up in AI tools, questions of commissioned data processing, TOMs, subcontractors, storage locations and reporting channels regularly arise. An AI policy cannot (and should not) replace complete GDPR documentation – but it can ensure that there is a clear blocking rule (“certain data categories do not go into certain tool categories”) and an obligation to coordinate relevant uses in advance.
The AI Act is also playing an increasingly important role – not so much because every agency is suddenly subject to provider obligations, but because companies have an interest in obligations being clearly allocated along the chain: What lies with the provider? What lies with the deployer? What needs to be documented? A sensible guideline here does not claim “we ensure everything” (which is often objectively impossible), but rather “we fulfill the obligations that apply to us in our role and contribute to the evidence”. That is legally sound and operationally feasible.
5. Implementation in practice
An AI policy is only effective if it is incorporated in a binding way – typically as an annex to the service, agency or framework agreement. Three things are crucial here:
- Validity and hierarchy: a clear rule that the guideline is part of the contract, and how it ranks relative to other contract documents (e.g. within an MSA/SOW structure).
- Change mechanics: AI tool landscapes change constantly. It must be possible to update the policy without renegotiating the entire contract each time. This can be achieved via notification in text form, a reasonable deadline and a practicable conflict mechanism (objection/consultation).
- Operational connectivity: approvals must fit into everyday work. A process that only works with a compliance ticket and three signatures will be ignored. A process that runs via email and a tool list actually gets used.
It’s tempting for start-ups to play down the issue: “We’re too early, too small, it’ll be fine.” Yet this is exactly where the typical long-term damage occurs: contracts are concluded from standard templates, agencies work quickly and creatively, and nobody pays attention to what happens to product and customer data. When the start-up later grows, due diligence comes – and suddenly it is unclear whether IP was properly transferred, data properly processed and subcontractors properly involved. A lean, well-drafted AI policy costs little at the beginning but saves a great deal of time, money and discussion later on.
The opposite is true for larger companies: compliance structures are often already in place here, but they do not extend to operational agency work. This creates “parallel policy worlds”: internally strict, externally unclear. An external AI guideline closes precisely this gap.
Conclusion:
As soon as external service providers start working with AI, the question is no longer whether there are risks, but whether they are being managed. An AI guideline for external parties is one of the most efficient tools here: it creates clarity about tools, data, approvals, documentation, chain of rights and liability. It reduces the potential for disputes, improves verifiability and prevents companies from being left without facts and without contractual protection in the event of an emergency.
Anyone working with agencies, studios, freelancers or external tech teams should not leave the issue to chance. In many cases, a compact, practical guideline that is clearly linked to the contract and works on a day-to-day basis is sufficient. There is no “one-size-fits-all” template for this: the tool landscape, risk profile, data types and value creation vary from company to company.
Creating, adapting and contractually integrating such AI guidelines – especially in agency and service provider constellations (marketing, content, software, games, media) – typically requires a combination of operational knowledge of the tool chain and precise contract drafting. A tailor-made AI guideline, including approval processes, chain of rights and liability logic, can accordingly be developed in a structured manner at short notice, whether collaboration with external parties is being scaled up or ongoing projects need to be secured.