Sometimes it pays to think one step ahead legally. Not because the law has already settled everything conclusively. But because practice has long since moved past many doctrinal debates.
2026 is one such year. In SaaS systems, AI agents negotiate prices independently with supplier APIs. In e-commerce systems, autonomous systems dynamically adjust discount logic. In procurement processes, offers are automatically obtained, compared and – at least technically – accepted. Internally, companies are increasingly delegating operational decisions to AI-based agents that not only analyze but also act.
The question is obvious: if an AI agent says “yes” – who has spoken legally?
Is the AI agent a contractual partner? A representative? A mere tool? And above all: who is liable if the system makes the wrong decision?
Legally, this is not a science fiction issue. It is a problem of attribution. And therefore deeply rooted in classic civil law.
No separate legal entity: AI remains a legal tool
First of all, the sober starting point: an AI is not a legal entity. Neither German civil law nor European law recognizes autonomous systems as having their own legal personality. The AI agent is not an “electronic contractual partner”.
Legal transactions require a declaration of intent. According to the prevailing opinion, this is an expression of human will. An AI has neither legal capacity nor capacity to contract nor capacity in tort. It therefore cannot itself be the bearer of rights and obligations.
So the real question is not whether the AI acts – but to whom its actions are attributed.
And this is where the legal precision engineering begins.
The law of representation (§§ 164 ff. BGB) and AI systems
The law of representation provides the first doctrinal starting point. According to § 164 (1) BGB, a declaration of intent that someone makes on behalf of another person within the scope of their power of representation takes effect directly for and against the represented party.
Traditionally, the representative is a natural person. However, the law does not necessarily require the representative to have full capacity to contract (cf. § 165 BGB) – the decisive factor is that the representative acts on behalf of the represented party and that the declaration can be attributed.
In the case of automated systems, it is often argued that there is no representative, but merely a “messenger” or a technical tool. This view falls short when the system independently selects parameters, modifies prices or adjusts contract terms.
The stronger the autonomy, the less the image of the mere transmitter fits.
Doctrinally, the problem can be resolved as follows: the AI agent is not a representative in its own right, but part of the company’s organizational sphere. The declaration of intent is attributed to the company because the company set up the system, parameterized it and released it into legal transactions.
This is not an analogy, but a continuation of the case law on automated declarations – for example in vending machines or online stores. Anyone who places a system on the market must accept responsibility for its declarations.
Autonomy does not change this. It only increases the risk.
Apparent authority and authority by acquiescence for AI systems
Things get exciting when AI agents act beyond the originally intended limits.
What happens when an AI system grants discounts that were never intended? When it extends contract terms even though internal guidelines prohibit this? Or when it independently offers additional services?
This is where the principles of apparent authority (Anscheinsvollmacht) and authority by acquiescence (Duldungsvollmacht) come into play.
Authority by acquiescence exists if the represented party knows that someone is acting as their representative and tolerates it. Apparent authority applies if the represented party is not aware of this appearance, but could have recognized and prevented it with due care.
If these principles are applied to AI systems, a clear picture emerges: anyone who equips an autonomous system with far-reaching powers and deploys it in legal transactions on a permanent basis creates an appearance of authority.
As a rule, the contractual partner may rely on the system acting within the scope of the powers granted to it. Internal programming errors or poorly defined parameters do not exonerate the company.
Legally speaking: The autonomy of the system becomes an organizational risk.
Organizational fault and liability for wrong decisions
This leads directly to the next point: organizational fault.
Companies are obliged to organize their internal processes in such a way that legal violations are avoided as far as possible. Anyone using AI agents is adding a technical decision-making system to their own organizational structure.
Wrong decisions can have various causes:
– incorrect training data
– inadequate parameterization
– lack of control mechanisms
– inadequate monitoring
– unclear responsibilities
If damage occurs as a result of such deficits, the company is liable according to general principles. Exculpation with the argument “It was the AI” is legally irrelevant.
On the contrary: the more complex the system, the higher the requirements for monitoring and governance.
A clear definition of decision limits is particularly important for autonomous price adjustments or automated contract conclusions. Otherwise, a liability exposure arises that cannot be controlled in practice.
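What such decision limits can look like on the technical side is ultimately an engineering question – and it also addresses the control and monitoring deficits listed above. A minimal sketch in Python, assuming a hypothetical discount-gating step; all names and thresholds are illustrative, not a reference implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionLimits:
    """Company-defined boundaries for autonomous pricing decisions (illustrative)."""
    max_discount_pct: float        # hard ceiling from internal guidelines
    min_margin_pct: float          # margin floor after the discount
    human_review_above_pct: float  # escalation threshold below the hard ceiling

def gate_discount(proposed_discount_pct: float,
                  margin_after_discount_pct: float,
                  limits: DecisionLimits) -> str:
    """Check an agent's proposed discount before it becomes a binding declaration."""
    if proposed_discount_pct > limits.max_discount_pct:
        return "REJECT"    # never leaves the system; no declaration is made
    if margin_after_discount_pct < limits.min_margin_pct:
        return "REJECT"
    if proposed_discount_pct > limits.human_review_above_pct:
        return "ESCALATE"  # human oversight before the contract is concluded
    return "ACCEPT"        # within the delegated decision corridor

limits = DecisionLimits(max_discount_pct=15.0, min_margin_pct=10.0,
                        human_review_above_pct=10.0)
print(gate_discount(12.0, 14.0, limits))  # -> ESCALATE
```

The legal point of such a corridor: everything the gate lets through is, in effect, what the company has declared, and everything above the threshold reaches a human before it reaches the counterparty.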
Product liability and software errors
Another aspect concerns liability for faulty AI software. A distinction must be made here:
– Internal in-house development
– Use of external SaaS solutions
– Integration of third-party APIs
If an economic loss occurs due to a software error, the question of chains of recourse arises.
In the case of in-house development, the company is directly liable. In the case of third-party providers, contractual liability regulations, service level agreements and warranty rights apply. In complex AI ecosystems, however, these liability chains are often opaque.
The situation is particularly sensitive with AI agents as a service. Here, an external provider takes over the technical control, while the company using the service acts as a contractual partner to the customer.
In legal terms, the external relationship remains decisive. As a rule, the customer has only one party to hold liable: the company deploying the AI agent.
AI Act compliance as a new organizational obligation
The European AI Act has further shifted the discussion. The AI Act introduces a risk-based regime that places strict requirements on transparency, documentation and risk management, particularly for high-risk systems.
For autonomous AI agents in contractual or decision-making processes, a high-risk classification may become relevant depending on the area of application. In that case, among other things, the following obligations apply (a minimal logging sketch follows the list):
– Documentation requirements
– Risk analyses
– Human supervision
– Traceability of decisions
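How “traceability of decisions” is operationalized is not spelled out by the AI Act at this level of detail. A minimal sketch of an append-only audit record, with hypothetical field names chosen for this illustration:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One audit entry per autonomous decision (field names are illustrative)."""
    system_id: str        # which agent acted
    model_version: str    # which model/parameter version was in use
    inputs: dict          # the data the decision was based on
    decision: str         # the output that entered legal transactions
    limits_checked: bool  # was the decision gate applied?
    human_override: bool  # did a person intervene?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record as one JSON line: append-only and queryable later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    system_id="pricing-agent-01", model_version="2026-01",
    inputs={"customer": "K-4711", "requested_discount_pct": 12.0},
    decision="ESCALATE", limits_checked=True, human_override=False))
```

Whatever the concrete schema, the design goal is the same: for every declaration that entered legal transactions, it must be reconstructable which system, which model version and which inputs produced it, and whether a human intervened.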
Violations of these obligations may not only be subject to fines, but may also be considered evidence of organizational negligence in liability proceedings.
AI Act compliance is therefore not only a regulatory obligation, but also a safeguard under liability law.
Future or present?
Is this all a dream of the future? Partly. But only partly.
Technical development is progressing faster than legal classification. Autonomous negotiation systems, dynamic contract models and AI-supported purchasing agents are no longer theoretical constructs.
The law does not react to this with new legal constructs, but with traditional instruments:
– Attribution
– Organizational responsibility
– Protection of legitimate expectations
– Liability
This also means that there will be no “AI exception”. Companies remain responsible.
Contract design and risk limitation
Anyone using AI agents should not leave the legal framework to chance.
The following points are particularly relevant in the B2B context (a configuration sketch follows the list):
– Clear definition of the degree of automation
– Limitations of liability
– Transparency regarding the use of AI
– Logging of decisions
– Recourse rules for third-party providers
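One way to keep the contract and the system in sync is to make the agreed degree of automation machine-readable. A minimal sketch; the policy keys and values are assumptions for this illustration, not a standard schema – the legal text, not this file, remains authoritative:

```python
# Hypothetical automation policy: keys and values are assumptions for this
# sketch, not a standard schema. The contract text remains authoritative.
AUTOMATION_POLICY = {
    "agent": "procurement-agent-02",
    "may_conclude_contracts": True,         # agreed degree of automation
    "max_order_value_eur": 25_000,          # above this: human approval required
    "disclosed_to_counterparty": True,      # transparency regarding the use of AI
    "decision_log": "decisions.jsonl",      # logging of decisions (see above)
    "vendor_recourse_contract": "SaaS-AI-2026-07",  # recourse, third-party provider
}

def requires_human_approval(order_value_eur: float) -> bool:
    """Single source of truth for the escalation threshold agreed in the contract."""
    return (not AUTOMATION_POLICY["may_conclude_contracts"]
            or order_value_eur > AUTOMATION_POLICY["max_order_value_eur"])

print(requires_human_approval(30_000))  # -> True
```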
Clauses in general terms and conditions (GTC) should also be adapted. The question of whether contracts are concluded automatically can give rise to an obligation to provide information in individual cases. At the same time, limitations of liability must be carefully drafted to withstand scrutiny under GTC law.
The more autonomous the system, the more important the legal architecture in the background.
AI agents do not become contractual partners. They remain tools – albeit highly complex ones. From a legal perspective, it is not technical autonomy that is decisive, but organizational integration.
Companies that use autonomous systems expand their sphere of action. They create new decision-making bodies within their organization. And they bear the risk.
This may seem doctrinally unspectacular. But it is highly relevant in economic terms. Because with every step towards autonomous business processes, the importance of clean attribution, clear governance and well thought-out contract design increases.
Perhaps the issue has not yet been fully resolved. But it is by no means a pipe dream. It is a classic example of how old civil law meets new technologies – and why legal structuring is not a brake on innovation, but a prerequisite for it.