The integration of artificial intelligence (AI) into business processes offers enormous opportunities for increasing efficiency and innovation. At the same time, the use of AI presents companies with new legal challenges and risks. This article highlights the most important legal aspects and provides advice on effective risk management when using AI in a corporate context.
Legal framework for AI in Germany and the EU
On August 1, 2024, the EU Artificial Intelligence Act (AI Act) entered into force, creating a comprehensive legal framework for AI. The Act becomes fully applicable on August 2, 2026, with certain provisions applying earlier and others later. In addition to the AI Act, various existing areas of law are also relevant to the use of AI:
- Data protection law: The GDPR must be observed in particular when processing personal data using AI systems.
- IT security law: The IT Security Act and the NIS Directive set requirements for the security of IT systems, including AI.
- Liability law: The Product Liability Directive and national liability rules are relevant for AI-supported products and services.
- Copyright: Copyright issues arise in particular with AI-generated works.
The EU AI Act follows a risk-based approach that classifies AI applications into different risk categories and sets out corresponding requirements. It complements existing legislation and creates a specific framework for the development and use of AI systems.
Data protection challenges
The use of AI raises specific data protection issues:

1. Lawfulness of data processing: The processing of personal data by AI systems must be based on one of the legal bases listed in Art. 6 GDPR.
2. Purpose limitation and data minimization: AI systems tend to process large amounts of data, which can conflict with the principles of purpose limitation (Art. 5 para. 1 lit. b GDPR) and data minimization (Art. 5 para. 1 lit. c GDPR).
3. Transparency and information obligations: The complexity of AI systems can make it difficult to fulfill the information obligations under Art. 13 and 14 GDPR.
4. Automated decisions: For AI-supported automated decisions, the requirements of Art. 22 GDPR must be observed, which grants data subjects the right not to be subject to a decision based solely on automated processing.

To overcome these challenges, it is advisable to carry out a data protection impact assessment in accordance with Art. 35 GDPR for AI projects and to implement privacy-by-design concepts.
Liability issues when using AI
The use of AI raises new liability issues, especially when AI systems make autonomous decisions:

1. Product liability: For AI-supported products, the question arises as to what extent the manufacturer is liable for damage caused by autonomous decisions made by the AI.
2. Fault-based liability: Determining fault can be difficult with AI systems that are self-learning and whose decision-making processes are not always comprehensible.
3. Distribution of the burden of proof: The complexity of AI systems can make it difficult for injured parties to prove a causal link between the use of the AI and any damage that has occurred.

Companies should define clear responsibilities for AI systems and consider specific insurance solutions where appropriate.
Copyright aspects of AI
AI systems can generate creative works such as texts, images or music. This raises copyright issues:

1. Protectability: Are AI-generated works protectable under copyright law? Under the current legal situation in Germany, a human act of creation is required for copyright protection.
2. Authorship: Who owns the rights to AI-generated works? The developer of the AI, the user or the AI itself?
3. Training material: The use of copyrighted works to train AI systems can raise copyright issues.

Companies should be careful when using AI-generated content and, if necessary, make contractual arrangements for the granting of rights.
Discrimination risks and ethics
AI systems can unintentionally make discriminatory decisions if they have been trained with biased data sets. This can lead to violations of the General Equal Treatment Act (AGG), particularly in the area of personnel selection or lending. Companies should:
1. Check AI systems for possible bias
2. Use diverse and representative training data
3. Conduct regular audits of AI decisions
4. Develop ethical guidelines for the use of AI
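The bias check in step 1 can be made concrete with a simple fairness metric. The sketch below computes the demographic parity gap, i.e. the largest difference in positive-outcome rates between groups defined by a protected attribute; the dataset, column names (`gender`, `approved`) and the review threshold are purely illustrative assumptions.

```python
# Minimal sketch of a bias check for an AI decision system.
# All field names and the sample data are hypothetical.

def demographic_parity_gap(records, protected_attr, outcome):
    """Return the largest difference in positive-outcome rates
    between groups defined by the protected attribute."""
    totals, positives = {}, {}
    for r in records:
        group = r[protected_attr]
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if r[outcome] else 0)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical sample of past decisions
decisions = [
    {"gender": "f", "approved": True},
    {"gender": "f", "approved": False},
    {"gender": "m", "approved": True},
    {"gender": "m", "approved": True},
]

gap = demographic_parity_gap(decisions, "gender", "approved")
print(round(gap, 2))  # a large gap would warrant closer review
```

In practice such a check would run on real decision logs and be combined with further metrics (e.g. equalized odds), but even this minimal measure can flag systems that treat protected groups very differently.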
Risk management and compliance
Effective risk management for the use of AI includes:
1. AI governance: Establishing clear structures and responsibilities for AI projects
2. Risk assessment: Conducting regular risk assessments for AI applications
3. Documentation: Comprehensive documentation of AI systems, including training data and decision logic
4. Training: Regular training for employees on the legally compliant use of AI
5. Monitoring: Continuous monitoring and evaluation of AI systems
6. Contingency plans: Development of contingency plans in the event of malfunctions or unexpected results of AI
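The documentation step above can be sketched as a minimal machine-readable record per AI system. The structure and every field name here are illustrative assumptions, not fields prescribed by the AI Act or any standard.

```python
# Illustrative sketch of a per-system documentation record.
# Field names are assumptions, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Minimal documentation record for an AI system."""
    name: str
    purpose: str
    risk_category: str                       # e.g. "minimal", "limited", "high"
    training_data_sources: list = field(default_factory=list)
    decision_logic_summary: str = ""
    last_risk_assessment: str = ""           # ISO date of the last review

# Hypothetical example entry
record = AISystemRecord(
    name="CreditScoringModel",
    purpose="Support lending decisions",
    risk_category="high",
    training_data_sources=["internal loan history 2015-2023"],
    decision_logic_summary="Gradient-boosted trees on applicant features",
    last_risk_assessment="2025-01-15",
)
print(record.name, record.risk_category)
```

Keeping such records in a central register makes the monitoring and risk-assessment steps repeatable and gives auditors a single point of reference.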
Practical tips for companies
Based on experience in IT law, the following practical tips for companies can be derived:

1. Legal due diligence: Carry out a comprehensive legal review before implementing AI systems.
2. Data protection by design: Integrate data protection requirements into the development and use of AI systems from the outset.
3. Transparency and explainability: Strive for AI systems that are as transparent and explainable as possible in order to meet legal and ethical requirements.
4. Contract design: When procuring AI systems or services, ensure that responsibilities and liability issues are clearly regulated in the contract.
5. Ethics guidelines: Develop internal ethics guidelines for the use of AI.
6. Interdisciplinary teams: For AI projects, rely on interdisciplinary teams that combine technical, legal and ethical expertise.

The use of AI in the corporate context offers enormous opportunities but requires careful management of the associated legal and ethical risks. A proactive approach to the legal framework and the implementation of robust risk management are crucial to fully exploiting the potential of AI while ensuring compliance. Given the complexity of the constantly evolving legal and technological landscape, it is advisable for companies to seek specialized legal and technical expertise when implementing AI systems.