Artificial intelligence (AI) has become an integral part of our everyday lives. More and more companies are using AI systems to optimize processes, make decisions or analyze user behavior. But what does this mean for data protection? Do users have to be informed when AI evaluates their data, makes assessments or even initiates blocks? These questions are becoming increasingly important in view of the rapid development of AI technologies. After all, users have a right to know how their personal information is processed. At the same time, companies face the challenge of explaining complex AI processes in an understandable way without revealing their business secrets. It is therefore important to strike a balance between transparency and competitiveness. Data protection regulations are not the only consideration here: competition law and industry-specific rules such as the Network Enforcement Act (NetzDG) for social networks also play a role. The issue of transparency in the use of AI is therefore complex and requires a differentiated approach.
Transparency as a fundamental principle of data protection
Transparency is a central principle of the General Data Protection Regulation (GDPR). Data controllers must inform data subjects clearly and comprehensibly about what happens to their data. Only then can users decide for themselves whether to consent to data processing and exercise their rights. This follows from Art. 12 et seq. GDPR, according to which information must be provided in a “concise, transparent, intelligible and easily accessible form, using clear and plain language”. The specific purposes of the processing must also be stated (Art. 13 para. 1 lit. c GDPR). This applies in particular to the use of AI, because AI systems often make decisions that are incomprehensible to the individual, and there is a risk of discrimination and erroneous decisions. It is therefore all the more important that companies communicate openly when they use AI; only then can data subjects assess what consequences the processing may have for them. A general reference to the use of AI is not enough. Rather, the essential functionality and decision criteria of the AI systems must be explained, insofar as this is possible without disclosing business secrets (see Recital 58 GDPR).
Information obligations when using AI
The AI Regulation (Artificial Intelligence Act) adopted by the European Parliament in April 2024 provides for special transparency obligations for high-risk AI systems. These include AI applications that are decisive for access to education, employment, justice or law enforcement. Providers of such systems must, among other things, disclose that AI is being used and explain how it works. This is intended to ensure that those affected understand the basis on which decisions are made. Under Art. 6 in conjunction with Annex III of the AI Act, AI systems are considered high-risk if they are used as safety components of products subject to third-party conformity assessment or if they are deployed in certain sensitive areas such as employment, education, law enforcement or justice. Users must also be informed about their rights, e.g. the right not to be subject to purely automated decisions (Art. 22 GDPR). However, information obligations are also sensible and necessary for less risky AI systems, since any form of automated analysis encroaches on users’ rights. Companies should therefore state transparently in their privacy policies whether they use AI to analyze user behavior and create profiles, to evaluate or categorize users, or to make automated decisions such as blocking an account or refusing a service. They must state the purposes of AI-supported processing and explain what impact it may have on users. They should also indicate whether the AI systems were obtained from third parties or developed in-house. Reference can also be made to the ECJ ruling of 13.05.2014 (C-131/12 – Google Spain), according to which a search engine operator must provide information about the functioning of its ranking algorithm insofar as this is possible without disclosing business secrets.
Competition law claims and NetzDG
In addition to data protection obligations, competition law claims may arise if companies conceal the use of AI. A misleading privacy policy, for example, can constitute a violation of Sections 5 and 5a UWG. Anyone who misleads users about data processing acts unfairly and can be sued by competitors for injunctive relief. The BGH clarified this in its “Customer card bonus program” decision of 29.07.2021 (I ZR 40/20). Accordingly, information is misleading if material details of the data processing are concealed or glossed over. A non-transparent privacy policy can also be anti-competitive if it obscures the scope of the consent given (cf. OLG Frankfurt, judgment of 27.06.2019 – 6 U 6/19). Social networks are additionally subject to the provisions of the Network Enforcement Act (NetzDG). Accordingly, platforms must inform their users in their general terms and conditions and community standards about whether and how they review and remove content (Section 3b NetzDG). If they use AI for this, it must also be made transparent; otherwise the Federal Office of Justice may impose fines. Also relevant is the decision of the Higher Regional Court of Karlsruhe of 28.02.2022 (15 W 4/22), according to which Facebook must disclose its deletion practice. The court found that the community standards did not provide sufficient information about the criteria for removing posts, and that the use of AI to detect hate speech and other prohibited content was likewise not sufficiently explained.
The challenge of comprehensible information
The crux of the matter is that many users don’t read privacy policies at all because they are too long and complicated. Companies therefore face the challenge of communicating the necessary information on the use of AI as concisely and comprehensibly as possible. Moreover, AI systems are often perceived as a “black box” whose decision-making is difficult to understand even for experts. This makes it all the more important to highlight the key points and explain them in plain language. Doing so requires new, user-friendly approaches such as layered notices, icons and videos; interactive elements such as chatbots can also help answer questions. It is crucial that the core messages can be grasped quickly and that users can look up further details if needed. At the same time, the information must not be too superficial: it must cover all essential aspects. Creativity is required to present complex issues clearly without overwhelming users. The information should also be updated regularly when the use of AI changes. The Data Protection Conference’s guidance on AI and data protection of 06.05.2024 can serve as a guide: it outlines best practices for designing data protection notices when AI is used, such as pictograms and traffic-light colors to visualize the level of risk.
Conclusion
AI offers great opportunities, but also harbors risks for fundamental rights and data protection. That makes transparency all the more important. Companies that use AI should actively inform users about it – not only because the law requires it, but also because it builds trust. Only if people understand what happens to their data can they trust AI systems and benefit from their advantages; data protection is then not a brake on innovation but an enabler of AI geared towards the common good. At the same time, the information obligations must be implemented with a sense of proportion. It does no one any good if privacy policies become ever longer and more incomprehensible. Instead, we need a new “information culture” that reconciles transparency and usability. If this succeeds, all sides benefit: companies can exploit the potential of AI, users retain control over their data, and data protection becomes a quality feature of trustworthy AI applications. This requires a continuous dialog between business, science, politics and civil society to jointly develop standards for the ethically responsible and legally compliant use of AI. The EU’s AI Regulation can provide important impetus here, but it must be brought to life in practice.