
Is the unpredictability of AI outcomes a legal time bomb?

Through two recent client matters and some conversations over the last few days, I became aware of a fascinating legal issue that could pose a significant challenge to courts in the future. This issue has the potential to profoundly change the drafting of contracts and T&Cs for AI providers. I therefore invite colleagues from the legal industry as well as AI experts to take up this topic and explore it further. It is essential that we deal with these possible changes in good time.

Can you be liable for an outcome that you cannot predict?

Most AI systems, especially those based on large language models (LLMs), produce results that are essentially based on mathematical probabilities. This means that the outcome is not always predictable, even when the AI is working correctly. In a world where precision and accuracy are often critical, this unpredictability can create significant challenges.
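To make this tangible for non-technical readers: the point above can be illustrated with a minimal Python sketch of how language models typically choose the next word. The token names and scores below are invented for illustration; real models work over vast vocabularies, but the mechanism is the same: the model *samples* from a probability distribution rather than deterministically picking one answer, so identical inputs can produce different outputs.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    A higher temperature flattens the distribution, i.e. more randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(tokens, logits, temperature=1.0):
    """Pick the next token by sampling from the distribution,
    not by always taking the single most likely token."""
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores a model might assign to three candidate next words
tokens = ["liable", "responsible", "exempt"]
logits = [2.0, 1.8, 0.5]

# Two runs with the exact same input can produce different words:
print(sample_next_token(tokens, logits))
print(sample_next_token(tokens, logits))
```

Even though every step of this program is working "correctly", its output is not fixed in advance; that is precisely the property that makes classic notions of a software "defect" hard to apply.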

This topic not only touches on the content of my article from yesterday, “Legal Aspects of Using AI in Marketing,” but also extends to other application areas. Think of using AI in investment decisions, analyzing images, detecting diseases on X-rays, or evaluating business information. In all these areas, the same question arises: when does technically inherent mathematical uncertainty become a legal defect? And to what extent must providers explicitly address this unpredictability and the associated risks in their T&Cs or contracts? It is a complex dilemma that requires both technical and legal consideration and has the potential to fundamentally change the way we think about AI and the law.

T&Cs in the AI world: A balancing act between protection and liability

Traditionally, software developers and SaaS providers are responsible for the results of their products. They can only exclude their liability to a limited extent. But in the era of artificial intelligence, we are entering new territory. In AI systems, especially those based on complex algorithms and machine learning, the results are inherently unpredictable. This challenges the traditional legal framework.

It is not only a question of liability, but also of transparency and education towards users. For example, if an AI system makes a decision based on probability rather than fixed rules, to what extent must the provider inform the user? And how detailed does this information need to be?

In addition, traditional T&Cs developed for standardized software products or services may no longer be sufficient for AI products. It may be necessary to add specific clauses or sections to address the specifics and potential risks of AI. This could also mean that vendors need to actively educate their customers about the limitations and nature of the AI technologies they use.

Overall, the legal industry faces the challenge of adapting traditional legal concepts to the dynamic and often elusive world of AI. It will be exciting to see how this discussion evolves in the coming years and what new regulations and practices will take hold.

The dilemma of liability: Who bears the responsibility?

If an AI system makes a mistake that leads to legal problems, who is responsible? The programmer who developed the system? The vendor that provides the AI? Or the end user who uses it? This dilemma is particularly acute when the basic logic of the system does not allow the outcome to be predicted.

However, the complexity of these legal issues increases further when considering how AI systems might be used in practice. Imagine that a company does not use its own AI directly, but accesses a third-party AI via an API, as is the case with ChatGPT, for example. In such a scenario, several parties could be involved: the developer of the original AI, the provider of the API, the service provider integrating the AI into its own platform, and finally the end user. In such a complex network, who is liable if something goes wrong?

The question of liability within such a contractual chain becomes a central concern. There could be situations where multiple parties are partially liable or where liability is passed from one party to the next. This could lead to complicated legal disputes, especially if cross-border aspects come into play.

Clearly, traditional legal frameworks and contractual structures need to be rethought and adapted to address the unique challenges and complexities created by the use of AI technologies. It will be crucial to create clear and understandable contractual terms that clearly define the rights and obligations of all parties involved.

Future prospects: an open topic worthy of discussion

It is clear that the issue of liability in AI is a complex and contentious topic that will be intensely debated in the coming years. While some argue that traditional liability rules are sufficient, others believe that a new legal framework is needed to address the unique challenges of AI.

In conclusion, the unpredictability of AI outcomes is an exciting and challenging topic for both lawyers and technology experts. It remains to be seen how courts and legislators will respond to this challenge. One thing is certain, however: the discussion has only just begun.

Although today is Saturday and many are enjoying their free time, I felt the need to put these thoughts on “paper” for once. Who says that weekends are only for relaxing 😉.

Marian Härtel


Marian Härtel is a lawyer and entrepreneur specializing in copyright law, competition law and IT/IP law, with a focus on games, esports, media and blockchain.


03322 5078053