Introduction
In my daily work as a technology and media law attorney, I regularly encounter artificial intelligence in its many forms. It is fascinating to see how rapidly this technology is gaining ground and how many different areas it is being used in. Whether it’s a text generator that drafts a template for a blog post or an AI-powered design tool that produces an engaging image or video for a marketing campaign, AI tools are everywhere and their applications are remarkably diverse.
However, despite the many benefits these technologies bring, there are also some significant issues and challenges. One of the most pressing questions is liability: who is responsible if an AI-generated work is faulty or does not meet expected quality standards? Is it the contractor who used the AI tool, or could it even be the AI software vendor?
Another important issue is the disclosure of the use of AI tools. If a contractor uses an AI tool for a task, do they have to tell the client? And if so, to what extent? These questions are relevant not only from a legal perspective, but also from an ethical one.
In this blog article, I will take a closer look at these issues and try to provide some answers. I will also discuss the role of lawmakers in this rapidly evolving field and reflect on whether it is time for regulation to keep pace. It will be exciting to explore these questions and consider the dynamics between technology, law, and ethics.
Who is liable for errors in AI-generated content?
According to the German Civil Code (BGB), the contractor is generally responsible for the quality of the service they provide. In practice, this means that if an AI generator – whether a text generator or a graphics or video creation tool – produces content with errors, liability usually lies with the contractor. It is the contractor’s fundamental responsibility to ensure that the services provided meet the standards set out in the contract.
As someone who values modern technologies, and software-as-a-service (SaaS) offerings in particular, I consider it critical that users of AI are aware of this reality. AI is a powerful tool capable of doing amazing things. But like any other technology, it is not infallible. It can and will make mistakes.
And in most cases, it is the contractor who will have to answer for those mistakes. This is an important point for anyone working with AI generators: recognize the risks and prepare for them in order to protect yourself from potential legal consequences.
The issue of disclosing the use of AI tools
In the current legal landscape, there are not yet clear rules or rulings that determine whether the use of AI tools in a contract relationship must be disclosed. However, contracts can and should be adapted to the evolving technology landscape.
Consider, for example, editorial teams or programming projects. It may well be appropriate to include a clause in contracts requiring disclosure of the use of AI tools. This would ensure a higher level of transparency and promote awareness of the possibilities and limitations of AI.
But this is also where a challenge lies: drafting such a clause could prove difficult. The use of AI tools is not problematic in itself and should not be prohibited outright. On the contrary, these tools can bring enormous benefits. The focus should therefore not be on banning them, but on clear communication about their use. As a client, you want clarity about how and when such tools will be used.
Another point I would like to raise is pricing. The use of AI tools should not automatically lead to lower prices. Crafting appropriate prompts for AI generators and then reviewing and editing the generated content takes time and expertise. The use of AI tools should therefore not be seen as a means of cutting costs, but as a tool that opens up new possibilities.
To end on a tongue-in-cheek note: no one has lowered their hourly rates just because they now write texts in Word with an automatic spell checker instead of with pen and paper. On the contrary, we have used these tools to improve our work and make it more efficient. And that’s exactly how we should look at the use of AI tools.
Recourse claims against AI providers
The possibility of recourse claims against providers of AI tools is another area of significant legal uncertainty. In a world where business relationships are increasingly globalized and cross-border, such claims may be difficult to enforce, especially if the provider is based in another jurisdiction. Different jurisdictions may apply different standards and rules to such cases, which complicates matters further.
In addition to these cross-border challenges, there may also be cases where AI SaaS providers exclude recourse claims in their terms and conditions. It is common practice in many industries to include such disclaimers in contract terms to minimize risk.
However, providers of SaaS products and other software tools should carefully review and, if necessary, update their terms and conditions to ensure that they address these issues. This is important not only to protect their own interests, but also to create a fair and transparent framework for their customers.
It is essential that all stakeholders – from contractors to clients to AI tool providers – are aware of the potential legal risks and take appropriate measures to address them. This can help avoid disputes and increase trust in AI tools and the technologies behind them.
The role of the legislator
The challenges and questions raised by the use of AI tools lead us to an important discussion: how and to what extent should legislators intervene in a regulatory way?
Currently, the legal environment around AI is characterized by uncertainty and ambiguity. Clearer and more uniform regulation specific to the use of AI tools could bring significant benefits to both users and providers of these technologies. For users, such regulation would provide more clarity about their rights and obligations, which in turn could increase trust in these tools. For providers, clearer legislation could minimize legal risks and help them align their business practices with the law.
However, legislators should take a measured approach. Overregulation could stifle innovation and unnecessarily complicate the use of AI tools; a lack of regulation, on the other hand, could invite abuse and leave consumers unprotected. It is therefore important to find a middle ground that takes the interests of both users and providers into account.
In addition, the development of such regulation should involve a broad range of stakeholders to ensure the most comprehensive and balanced view possible. Beyond AI tool providers and their users, this could include privacy advocates, ethics experts, and civil society representatives.
There is no doubt that the road to effective regulation for the use of AI tools will not be an easy one. But given the rapid development and proliferation of these technologies, it’s a discussion we absolutely need to have.
This debate is no longer purely theoretical: the European Parliament has now taken action and approved new transparency and risk management rules for AI systems. The Internal Market Committee and the Civil Liberties Committee, meeting in Strasbourg, adopted the draft negotiating mandate for the first rules on artificial intelligence with 84 votes in favor, 7 against, and 12 abstentions.
In their amendments to the Commission’s proposal, MEPs have emphasized the desire for AI systems to be supervised by humans, safe, transparent, accountable, non-discriminatory, and environmentally friendly. In addition, they are striving for a unified and technology-neutral AI definition that will apply to both current and future AI systems.
The proposed rules follow a risk-based approach and establish obligations for providers and users based on the level of risk that AI can create. Particular attention has been paid to AI systems that pose an unacceptable risk to human safety by using subliminal or intentionally manipulative techniques that exploit human vulnerabilities or are used for social scoring.
MEPs significantly expanded the list of prohibited AI practices to include intrusive and discriminatory uses of AI systems. These include real-time remote biometric identification systems in publicly accessible spaces; biometric categorization systems that use sensitive characteristics; predictive policing systems; and emotion recognition systems in law enforcement, the workplace, and educational institutions. They also banned the indiscriminate harvesting of biometric data from social media or video surveillance footage to create facial recognition databases.
The classification of high-risk areas was expanded to include health, safety, fundamental rights, and the environment. Also added were AI systems used to influence voters in political campaigns and recommendation systems used by social media platforms with more than 45 million users.
MEPs also included obligations for providers of foundation models, a new and rapidly evolving area of AI. These providers must ensure robust protection of fundamental rights, health and safety, as well as the environment, democracy, and the rule of law. They are required to assess and mitigate risks, comply with design, information, and environmental requirements, and register in the EU database. Generative foundation models such as GPT must meet additional transparency requirements: for example, they must disclose that content was generated by AI, be designed to prevent the generation of illegal content, and publish summaries of the copyrighted data used for training.
To protect innovation, MEPs included exemptions for research activities and for AI components provided under open-source licenses. In addition, the new law encourages regulatory sandboxes: controlled environments set up by public authorities in which AI can be tested before it is deployed.
Citizens’ rights are also to be strengthened. Citizens should have the right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their rights. The role of the EU Office for Artificial Intelligence will be redefined to ensure monitoring of the implementation of the AI framework.
Brando Benifei, co-rapporteur and member of the Socialist Group in the European Parliament, stressed the importance of this legislation:
“We are on the verge of launching groundbreaking legislation that must stand the test of time. It is crucial to increase citizens’ confidence in the development of AI, to define the European path for dealing with the extraordinary changes already taking place, and to guide the policy debate on AI at the global level. We are confident that our text strikes a balance between the protection of fundamental rights and the need to provide legal certainty for businesses and promote innovation in Europe.”
The next step is for the draft negotiating mandate to be approved by the full Parliament before negotiations with the Council on the final form of the law can begin; the vote is expected during the June 12-15 plenary session. This is a significant step toward a balanced approach to AI tools that takes into account the interests of both users and providers while building trust in these technologies.
Conclusion
To sum up this extensive discussion: a continuous exchange on these issues is crucial. In particular, reviewing and adapting contracts and general terms and conditions (GTCs) to the specific needs and challenges of artificial intelligence (AI) should be a priority given how rapidly these technologies are evolving.
There is no doubt that AI offers us enormous opportunities. From optimizing workflows to opening up new, unexplored areas of business, AI can enrich and simplify lives in many ways. However, at the same time, we need to be aware of the potential challenges that this technology brings. This applies in particular to aspects such as liability, data protection law and ethical issues arising from the use of AI.
With over 15 years of experience advising on IT contracts, I am well positioned to help you navigate this complex and ever-evolving landscape. Whether you are a provider or a user of AI tools, I can help you draft your contracts and T&Cs to comply with current legal requirements while addressing your specific needs and goals.
It is important to remember that the regulatory framework for AI must evolve and adapt to technological advances. Therefore, it is essential to always stay up to date and to review and adjust contracts on a regular basis. It is my firm belief that we can only realize the full potential of AI if we also address the legal challenges it brings.
I look forward to working with you to help you take advantage of the many opportunities AI offers while navigating the legal and ethical challenges it presents.