The Artificial Intelligence Act is a proposed European law on artificial intelligence (AI) – the first comprehensive law of its kind. It aims to ensure that AI systems placed on the market and used in the EU are safe and comply with existing law on fundamental rights and the values of the Union.
The law assigns applications of AI to three risk categories. First, applications and systems that pose an unacceptable risk, such as government-run social scoring as used in China, will be banned. Second, high-risk applications, such as a resume scanning tool that ranks job applicants, are subject to specific legal requirements.
Finally, applications that are not explicitly prohibited or classified as high-risk remain largely unregulated.
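The three-tier scheme above can be read as a simple decision rule. The following sketch is purely illustrative: the tier names, example use cases, and the `classify` function are hypothetical simplifications, not legal definitions from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the Act's three risk categories."""
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted, subject to specific legal requirements"
    MINIMAL = "largely unregulated"

# Hypothetical example use cases per tier, drawn from the summary above.
BANNED_USES = {"government-run social scoring"}
HIGH_RISK_USES = {"resume scanning that ranks job applicants"}

def classify(use_case: str) -> RiskTier:
    """Toy classifier mapping an example use case to a risk tier."""
    if use_case in BANNED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    # Everything not explicitly prohibited or high-risk falls through.
    return RiskTier.MINIMAL

print(classify("resume scanning that ranks job applicants").name)  # HIGH
print(classify("spam filtering").name)                             # MINIMAL
```

The fall-through branch mirrors the Act's structure: obligations attach to the first two tiers, while everything else is left largely unregulated by default.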
The draft regulation presented by the Commission in April 2021 is a key element of the EU’s policy to promote the development and diffusion of safe and lawful AI throughout the single market, respecting fundamental rights.
The proposal follows a risk-based approach and establishes a single, horizontal legal framework for AI that aims to ensure legal certainty. It promotes investment and innovation in the field of AI, improves governance and effective enforcement of existing law related to fundamental rights and security, and facilitates the development of a single market for AI applications. It goes hand in hand with other initiatives, including the Coordinated Plan for Artificial Intelligence, which aims to accelerate investment in AI in Europe.
Definition of an AI system
To ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from simpler software systems, the Council text narrows the definition to systems developed using machine learning approaches and logic- and knowledge-based approaches.
Prohibited AI practices
Regarding prohibited AI practices, the text extends the prohibition on the use of AI for social scoring to private actors. In addition, the ban on the use of AI systems that exploit the vulnerabilities of a specific group of people now also covers people who are vulnerable due to their social or economic situation.
Regarding the prohibition on the use of remote biometric identification systems in publicly accessible spaces by law enforcement authorities, the text clarifies the objectives for which such use is strictly necessary for law enforcement purposes and for which those authorities should therefore, by way of exception, be allowed to use such systems.
Classification of AI systems as high-risk
Regarding the classification of AI systems as high-risk, the text adds a horizontal layer on top of the high-risk classification, to ensure that AI systems that are unlikely to cause serious fundamental rights violations or other significant risks are not captured.
Requirements for high-risk AI systems
Many of the requirements for high-risk AI systems have been clarified and adjusted to be more technically feasible and less burdensome for stakeholders to comply with, for example regarding data quality or the technical documentation that SMEs should draw up to demonstrate that their high-risk AI systems comply with the requirements.
As AI systems are developed and distributed through complex value chains, the text includes amendments that clarify the distribution of responsibilities and roles of the different actors in these chains, in particular the providers and users of AI systems. It also clarifies the relationship between responsibilities under the AI Act and responsibilities that already exist under other legislation, such as relevant Union data protection legislation or sectoral legislation, including in relation to the financial services sector.
AI systems for general purposes
New provisions have been added to address situations where AI systems can be used for many different purposes (general purpose AI) and where general purpose AI technology is subsequently integrated into another high-risk system.
The text specifies that certain requirements for high-risk AI systems would also apply to general purpose AI systems in such cases. However, instead of a direct application of these requirements, an implementing act would specify how they should be applied in relation to general purpose AI systems, based on consultation and a detailed impact assessment and taking into account the specific characteristics of these systems and the related value chain, technical feasibility, and market and technological developments.
It was specifically noted that national security, defense, and military purposes are excluded from the scope of the AI Act. Similarly, it was clarified that the AI Act should not apply to AI systems and their outputs used solely for research and development purposes, and that the obligations of persons using AI for non-professional purposes fall outside the scope of the AI Act, except for transparency obligations.
Taking into account the specifics of law enforcement agencies, several amendments were made to the provisions on the use of AI systems for law enforcement purposes. In particular, these changes are intended to reflect, subject to appropriate safeguards, the need to maintain the confidentiality of sensitive operational data related to their activities.
Compliance framework and AI board
In order to simplify the framework for compliance with the AI Act, the text includes several clarifications and simplifications of the provisions on conformity assessment procedures. The text also substantially amends the provisions on the AI Board, giving it more autonomy and strengthening its role in the governance architecture of the AI Act. To ensure stakeholder participation in all matters related to the implementation of the AI Act, including the development of implementing and delegated acts, a new requirement has been added for the Board to establish a standing subgroup to serve as a platform for a broad range of stakeholders.
As for sanctions for violations of the provisions of the AI Act, the text provides for more proportionate caps on fines for SMEs and startups.
Transparency and other provisions in favor of the parties concerned
The text includes several amendments that increase transparency regarding the use of high-risk AI systems. In particular, some provisions have been updated to require certain users of high-risk AI systems that are public entities to also register with the EU database of high-risk AI systems.
In addition, a newly added provision highlights the obligation of users of an emotion recognition system to inform natural persons when they are exposed to such a system.
The text also clarifies that a natural or legal person may lodge a complaint with the relevant market surveillance authority concerning non-compliance with the AI Act, and may expect that such a complaint will be handled in line with that authority's dedicated procedures.
Measures to support innovation
With the aim of creating a more innovation-friendly regulatory framework and promoting evidence-based regulatory learning, the provisions on innovation-enhancing measures in the text have been substantially amended.
In particular, it was clarified that AI sandboxes, which are intended to provide a controlled environment for the development, testing, and validation of innovative AI systems, should also allow for the testing of innovative AI systems under real-world conditions.
In addition, new provisions have been added to allow unsupervised testing of AI systems in the real world under certain conditions and safeguards. In order to reduce the administrative burden on smaller companies, the text provides a list of measures to be taken to support such operators and provides for some limited and clearly specified exemptions.
The next steps
The adoption of the general approach will allow the Council to enter into negotiations with the European Parliament (“trilogue”) once the latter has established its own position with a view to reaching agreement on the proposed Regulation.