Legal aspects of the use of AI in marketing

In recent years, artificial intelligence (AI) has emerged as a transformative technology across numerous industries, and marketing is no exception. AI can refine marketing strategies, make personalized advertising campaigns more efficient, and answer customer service inquiries in real time via chatbots. These developments allow companies to understand and target their customers better. However, the growing integration of AI into marketing processes expands not only the spectrum of possibilities but also that of responsibilities. The legal frameworks that accompany the use of AI in marketing are complex and must be carefully considered to meet both ethical and legal standards.

Data protection and the GDPR

Data protection law is a key area of law in the context of AI in marketing. The EU’s General Data Protection Regulation (GDPR) sets out clear rules for processing personal data. It requires organizations:

  • To obtain unambiguous consent to data processing.
  • To use the data exclusively for the specified purpose.
  • To keep the data secure and protect it against unauthorized access.
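The three requirements above can be made concrete in software: before any processing step, a system should check that consent exists, has not been withdrawn, and covers the exact purpose at hand. The following is a minimal sketch, assuming a hypothetical consent record whose field names are purely illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical consent record; the field names are illustrative, not a standard.
@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # the specific purpose the user consented to
    granted_at: datetime
    withdrawn: bool = False

def may_process(record: ConsentRecord, requested_purpose: str) -> bool:
    """Allow processing only if consent was given, was not withdrawn,
    and matches the exact requested purpose (purpose limitation)."""
    if record.withdrawn:
        return False
    return record.purpose == requested_purpose

consent = ConsentRecord("user-42", "newsletter-personalization",
                        datetime.now(timezone.utc))
print(may_process(consent, "newsletter-personalization"))  # True
print(may_process(consent, "ad-retargeting"))              # False
```

The point of the sketch is that purpose limitation is enforceable as an explicit check, rather than an assumption buried in the pipeline.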

There is currently no AI-specific legal framework governing the development or use of AI systems, so the general legal provisions apply. Many AI systems are trained on personal data and must therefore comply with the GDPR.


Copyright

AI systems can generate content that may fall under copyright law. This raises complex legal questions that urgently need clarification. A key problem is determining who owns the copyright to such AI-created content. Is it the company that operates the AI? The developer who programmed it? Or could the AI itself theoretically be considered the author?

Resolving these issues is critical, as it has direct implications for the licensing, distribution, and monetization of this content. Companies that use AI systems to create content need to be aware of this legal gray area and take appropriate precautions.

Discrimination and bias

AI models reflect the data they are trained with. If these data contain biases or prejudices, the models may make discriminatory decisions in practice. This can have serious ethical and legal consequences, especially when such systems are used in critical areas such as healthcare, finance, or law enforcement.

Companies that deploy AI models have a responsibility to ensure that their systems make fair and unbiased decisions. This requires careful review and selection of training data to ensure that they are representative and free of systematic bias. It is also important to regularly monitor and validate AI models to ensure that they do not develop discriminatory patterns over time.
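One common way to make the monitoring described above concrete is a group fairness metric such as demographic parity: comparing the rate of positive decisions across groups. The following is a simple sketch of that idea; the data and group labels are invented for illustration:

```python
from collections import defaultdict

def approval_rates(outcomes):
    """Compute per-group approval rates from (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in outcomes:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def parity_gap(outcomes):
    """Largest difference in approval rate between any two groups.
    A large gap is a signal to investigate, not proof of discrimination."""
    rates = approval_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy decision log: group A is approved twice as often as group B.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(round(parity_gap(outcomes), 2))  # 0.33
```

Run regularly against production decision logs, a check like this can surface drifting, discriminatory patterns before they cause harm.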

In addition, companies should promote transparency about their AI models and their decision-making processes. This can be achieved through techniques such as “Explainable AI”, which allow the decision-making processes of AI models to be understood. Training and awareness of ethical issues related to AI are also essential to ensure that developers and other stakeholders recognize and address the potential risks and challenges.

Ultimately, it is critical that companies take a multidisciplinary approach by involving experts in ethics, social sciences, and law in the AI model development and review process. This ensures that diverse perspectives are considered and that models are developed and deployed in a manner that respects the rights and well-being of all stakeholders.

Transparency and explainability

According to the GDPR, individuals have the right to know how decisions affecting them are made. This right of explanation aims to ensure transparency and traceability of automated decisions. Therefore, it is necessary for companies to be able to clearly state the decision-making processes of their AI models. It is equally important that these explanations are made understandable and accessible. In the context of the GDPR, it is also essential that companies comply with data protection requirements, especially when AI models are trained with personal data. Regular reviews and audits of AI systems can help ensure compliance with data protection regulations and safeguard the rights of data subjects. Involving data protection experts in the AI development and implementation process can be helpful in identifying and addressing potential risks early on.
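A practical precondition for both explainability and audits is that every automated decision is recorded together with the inputs and model version that produced it. The sketch below shows one minimal way to structure such an audit log; the record fields are assumptions for illustration, not a legal requirement:

```python
from datetime import datetime, timezone

def log_decision(log: list, subject_id: str, decision: str,
                 features: dict, model_version: str) -> None:
    """Append an auditable record of an automated decision."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,        # pseudonymized ID, not raw personal data
        "decision": decision,
        "features": features,            # the inputs the model actually used
        "model_version": model_version,  # ties the decision to a reviewable model
    })

audit_log = []
log_decision(audit_log, "u-123", "show_ad_variant_b",
             {"segment": "returning", "ctr_estimate": 0.031}, "v2.4")
print(audit_log[0]["decision"])  # show_ad_variant_b
```

With records like this, a company can reconstruct after the fact which model, on which inputs, produced a decision affecting a data subject.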

Liability and programmatic marketing

When an AI makes a mistake, who is responsible? The company, the AI developer, or the AI itself? This is a legally unresolved issue, and companies need to be aware of the potential liability risks. This question becomes particularly relevant when considering the use of AI in programmatic marketing.

Programmatic marketing refers to the automated buying and selling of advertising space in real time. This uses algorithms and AI systems to automatically book, order, weight, stop or optimize marketing campaigns. This process is based on data analysis and target audience behavior to make advertising as efficient and targeted as possible.

But what happens when the AI in programmatic marketing makes a mistake? For example, when it stops an expensive advertising campaign that was actually performing well, or places a campaign in an inappropriate context that damages a brand’s image? The financial and reputational risks can be enormous.

A major problem is the complexity and opacity of AI systems. Many of these systems are designed to be self-learning, meaning they make decisions based on data they collect over time. This can make it difficult to trace the exact decision path and determine why a particular decision was made.

If an error occurs, the company could argue that it is not responsible for the AI’s actions because it could not have foreseen how the AI would decide. The AI developer could argue that it only provided the tool and is not responsible for its use. And, of course, the AI itself cannot assume legal responsibility because it is not a legal subject.

In the context of programmatic marketing, such errors could lead to significant financial losses. For example, if an AI mistakenly buys a large amount of advertising space that later turns out to be ineffective, this could result in significant costs for the company. Similarly, incorrect placement or weighting of an advertising campaign could damage a brand’s reputation and have long-term negative effects.

Another problem is the speed at which decisions are made in programmatic marketing. Because everything happens in real time, companies often have little time to identify and correct errors before they have an impact. This increases the risk of wrong decisions and their potential consequences.

Companies using AI in programmatic marketing need to be aware of these risks and develop strategies to minimize them. This could include conducting regular reviews and audits of AI decisions, implementing security measures to detect and correct errors, and conducting training for employees to ensure they understand how AI works and its limitations.
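One such safeguard is a simple guardrail between the optimizer and the campaign: high-impact suggestions (such as stopping a campaign) are only executed automatically when the system is confident and the campaign is clearly unprofitable; everything ambiguous goes to a human. The thresholds, field names, and ROAS metric below are assumptions chosen for illustration:

```python
def review_decision(action: str, confidence: float, roas: float,
                    min_confidence: float = 0.9, min_roas: float = 1.0) -> str:
    """Return 'execute' or 'escalate' for an optimizer suggestion.

    roas: return on ad spend; below 1.0 the campaign loses money.
    """
    if action != "stop":
        return "execute"                  # routine bid/weight adjustments pass through
    if confidence >= min_confidence and roas < min_roas:
        return "execute"                  # clearly unprofitable: safe to stop
    return "escalate"                     # ambiguous: require a human check

print(review_decision("stop", confidence=0.95, roas=0.4))   # execute
print(review_decision("stop", confidence=0.60, roas=0.4))   # escalate
print(review_decision("bid_up", confidence=0.50, roas=2.0)) # execute
```

A guardrail like this does not remove liability, but it creates a documented point of human oversight for exactly the error class described above.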

In addition to internal monitoring and controls, companies should also seek legal advice to ensure they are meeting all legal requirements and protecting themselves from potential liability risks. This could include reviewing contracts with AI developers to ensure liability issues are clearly addressed and developing legal strategies to be prepared in the event of errors or disputes.

In short, the use of AI in programmatic marketing brings both opportunities and challenges. Companies need to be aware of the potential risks and take proactive measures to minimize them and protect themselves from legal consequences.


Conclusion

The use of AI in marketing not only offers the opportunity to optimize processes and create personalized customer experiences, but also the potential to develop completely new business models and strategies. Integrating AI can lead to more efficient use of resources, better decision-making, and ultimately a competitive advantage.

However, there are legal and ethical issues associated with these benefits. The dynamic nature of AI technology and its ability to learn and make decisions on its own poses challenges to existing legal frameworks. It’s not just about whether companies are complying with current laws and regulations, but also how they are preparing for future legal changes that could come with the advancement of AI technology.

For companies that want to use AI in their marketing strategies, it is therefore essential to continuously learn about and comply with the legal framework. This includes not only compliance with data protection regulations and consumer protection laws, but also consideration of ethical aspects, such as avoiding discrimination or bias in AI models.

In addition, it is advisable to seek legal advice on a regular basis to ensure that all aspects are considered. This can help companies avoid potential legal pitfalls and ensure they are moving forward with their AI initiatives in a responsible and legally compliant manner.

In conclusion, the use of AI in marketing presents both risks and opportunities. However, a responsible and informed approach can help minimize these risks and take full advantage of this revolutionary technology.

Marian Härtel

Marian Härtel is a lawyer and entrepreneur specializing in copyright law, competition law and IT/IP law, with a focus on games, esports, media and blockchain.

