Increasing digitalization and the growing use of artificial intelligence are leading to automated decision-making processes in numerous areas. While such processes enable efficiency gains and technological innovation, they also raise fundamental ethical questions and create uncertainty about liability. A precise analysis of the allocation of responsibility and of the necessary safety standards is therefore required, particularly in view of the provisions of the AI Act, the first of which have applied since February 2, 2025.
As an IT lawyer and self-confessed AI nerd, I follow developments in this area with great interest – not only from a technical perspective, but above all from a legal one. The fascination with the technology is evident not only in the use of simple generative AI for text creation, but also in the complex systems employed, for example, in the automated review logic of insurance companies, in assessing the truthfulness of statements, or in dynamic pricing in online retail. In these fields of application, different prices and even different return conditions can be set depending on user behavior or regional characteristics, which raises a variety of ethical and liability issues.
These developments once again bring into focus the tension between technological innovation and legal responsibility. The challenge is to ensure the transparency and traceability of decision-making processes without unnecessarily restricting the scope for entrepreneurial innovation. The AI Act, whose first provisions have applied since February 2, 2025, sets clear legal standards here, with requirements for risk management, conformity assessment and human oversight (see Art. 9 ff. AI Act).
As a practicing IT lawyer, it is a personal concern of mine not only to examine the legal implications of these developments in theory, but also to work through them in a practice-oriented manner. The aim is to combine technological progress with appropriate protection for those affected and a clear allocation of liability – a challenge that is demanding both technically and legally. This complex topic calls for a critical and differentiated approach in order to pave the way for future-proof regulation and the legally compliant use of AI systems.
Ethical issues
Automated systems make decisions that can have a direct impact on affected individuals and companies. Key ethical aspects include, in particular:
– Transparency and traceability:
The often high complexity of the underlying algorithms makes it difficult to fully trace how a decision was reached. It is therefore necessary to implement technical and organizational measures that ensure each automated decision is documented in a traceable way (see the sketch after this list).
– Fairness and non-discrimination:
Biases in the data used can lead to systematic discrimination. Compliance with the principle of equal treatment requires appropriate precautions to guarantee non-discriminatory decisions; a simple statistical check is sketched below.
– Responsibility and human control:
Despite increasing automation, the ultimate responsibility for decisions made must remain with the operators. Ensuring effective human supervision is essential in order to detect errors at an early stage and take corrective action.
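What traceable documentation can look like in practice is outlined in the following minimal sketch: every automated decision is appended to a log together with the input data actually used, the model version, a human-readable rationale and a field for a later human correction. The scoring rule, field names and threshold are purely illustrative assumptions and do not describe any particular system.

```python
# Minimal sketch of a traceable decision log; the scoring rule, field names
# and threshold are illustrative assumptions, not a real policy.
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str       # unique reference for later review or dispute
    timestamp: str         # when the decision was made (UTC, ISO 8601)
    model_version: str     # exact model or rule-set version used
    input_data: dict       # the facts the system actually relied on
    output: str            # the automated decision
    rationale: str         # human-readable reason, e.g. the decisive criterion
    human_override: str | None = None  # filled in if a person later corrects the result

def decide_and_log(applicant: dict, log_file: str = "decision_log.jsonl") -> DecisionRecord:
    """Apply an illustrative scoring rule and persist a traceable record."""
    approved = applicant["score"] >= 600  # placeholder threshold for demonstration only
    record = DecisionRecord(
        decision_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version="scoring-rules-1.0",
        input_data=applicant,
        output="approved" if approved else "rejected",
        rationale=f"score={applicant['score']} compared against threshold 600",
    )
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

if __name__ == "__main__":
    print(decide_and_log({"applicant_id": "A-123", "score": 587}))
```

Such an append-only record makes it possible to reconstruct, for each individual decision, which data and which version of the logic produced it – the precondition for any later review, correction or liability assessment.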
The principles formulated in the EU High-Level Expert Group's “Ethics Guidelines for Trustworthy AI” – in particular transparency, robustness and fairness – are increasingly being taken into account in the legal discussion surrounding the use of AI.
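Whether an automated system treats comparable groups comparably can be spot-checked with simple statistics, for example by comparing approval rates across groups (demographic parity). The group labels, sample data and the 0.8 warning threshold in the following sketch are illustrative assumptions, not a legal benchmark.

```python
# Illustrative fairness spot-check: compare approval rates across groups.
# Group labels and the 0.8 threshold are assumptions for demonstration only.
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """decisions: [{'group': 'region_north', 'approved': True}, ...]"""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def parity_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = (
        [{"group": "region_north", "approved": True}] * 80
        + [{"group": "region_north", "approved": False}] * 20
        + [{"group": "region_south", "approved": True}] * 55
        + [{"group": "region_south", "approved": False}] * 45
    )
    rates = approval_rates(sample)
    print(rates, "parity ratio:", round(parity_ratio(rates), 2))
    if parity_ratio(rates) < 0.8:
        print("Warning: approval rates diverge noticeably between groups.")
```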
Regulatory framework: The AI Act
The AI Act entered into force on August 1, 2024, and its first provisions on the use of artificial intelligence within the European Union have applied since February 2, 2025. The AI Act creates a uniform legal framework and differentiates AI systems according to risk categories. High-risk systems in particular are subject to strict requirements, which include the following aspects:
– Risk management and conformity assessment:
Providers are obliged to implement a comprehensive risk management system and provide evidence of the conformity of their systems (see Art. 9 ff. AI Act).
– Transparency and documentation obligations:
There is an obligation to maintain detailed technical documentation and to inform data subjects about the functioning of AI systems in order to ensure the traceability of automated decision-making processes.
– Human oversight:
The AI Act requires that high-risk AI systems be designed so that they can be effectively overseen by natural persons, allowing interventions and corrections (see Art. 14 AI Act); a simplified routing sketch follows this list.
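How such oversight can be wired into a decision pipeline is illustrated by the following simplified sketch: automated results that are uncertain or high-impact are routed to a human reviewer instead of being executed automatically. The confidence threshold and the notion of a high-impact decision are illustrative assumptions and are not prescribed by the AI Act itself.

```python
# Simplified sketch of a human-oversight gate: uncertain or high-impact
# results go to a human reviewer instead of being executed automatically.
# The threshold and the "high impact" flag are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelResult:
    decision: str      # proposed automated outcome
    confidence: float  # the system's own confidence estimate (0..1)
    high_impact: bool  # e.g. large claim value or contract termination

def route(result: ModelResult, confidence_threshold: float = 0.9) -> str:
    """Return 'auto' only when the system may decide without a human."""
    if result.high_impact or result.confidence < confidence_threshold:
        return "human_review"  # a natural person confirms, corrects or rejects
    return "auto"

if __name__ == "__main__":
    print(route(ModelResult("reject_claim", confidence=0.97, high_impact=True)))    # human_review
    print(route(ModelResult("approve_claim", confidence=0.95, high_impact=False)))  # auto
```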
These regulations are intended to help minimize the risk of incorrect or non-transparent decisions and to clarify the allocation of liability in the event of damage.
Liability risks with automated decisions
The opacity and complexity of AI systems make it difficult to clearly assign liability claims when decisions turn out to be wrong. From a liability law perspective, the following aspects are of particular importance:
– Attribution of responsibility:
General tort law, in particular Section 823 of the German Civil Code (BGB), provides a framework for claims for damages. However, proving an adequate causal link between the use of the AI system and the damage that occurred is often problematic.
– Product liability and operator liability:
The applicability of traditional product liability to AI systems is the subject of controversial debate in legal literature. The existing liability regulations, for example under the Product Liability Directive, often reach their limits in view of the special features of AI applications.
– Ethical dimension of liability:
Beyond the purely legal allocation of responsibility, it is debated to what extent ethical deficits – such as a lack of transparency or inadequate risk management – can justify extended liability.
Legal opinions and future developments
The legal discussion on liability issues in the context of automated decision-making processes is controversial. While some experts are in favor of adapting the existing liability regulations, others advocate a differentiated approach that does justice to the specific nature of AI systems. Central points of view include:
– The need to reform liability law in view of the increasing complexity and self-learning mechanisms of AI applications.
– The integration of technical security measures into a holistic risk management system that takes into account both technical and legal requirements.
– A gradual further development of case law that will enable a more precise allocation of liability in cases of algorithmically induced wrong decisions in the future.
Concluding remarks
Automated decision-making processes open up a wide range of opportunities, but come with considerable ethical and liability risks. The provisions of the AI Act, the first of which have applied since February 2, 2025, help to increase the transparency and traceability of these systems and at the same time clarify the allocation of responsibility in the event of damage. Nevertheless, the practical implementation of the requirements remains a challenge, meaning that continuous further development of the legal framework appears essential. The combination of ethical principles and liability law requirements is a central starting point for meeting the complex demands of an increasingly digitalized world.