The Council of the EU has adopted the AI Act, the world’s first comprehensive set of regulations for artificial intelligence (AI). The law aims to promote the development and uptake of safe and trustworthy AI systems across the EU single market by both private and public actors, while respecting the fundamental rights of EU citizens and stimulating investment and innovation in artificial intelligence in Europe.
Classification of AI systems as high-risk and prohibited AI practices
The new law categorizes AI systems according to the risk they pose. Systems that present only limited risk are subject to light transparency obligations, while high-risk AI systems are permitted but must meet a set of requirements and obligations before they can access the EU market. Practices whose risk is considered unacceptable, such as cognitive behavioural manipulation and social scoring, are banned in the EU. The law also prohibits the use of AI for predictive policing based on profiling, as well as systems that use biometric data to categorize people by characteristics such as race, religion or sexual orientation.
Next steps
Once signed by the Presidents of the European Parliament and the Council, the legal act will be published in the Official Journal of the EU in the coming days and will enter into force twenty days after publication. The new regulation will apply two years after its entry into force, with some exceptions: prohibitions take effect after six months, the governance rules and the obligations for general-purpose AI models apply after 12 months, and the rules for AI systems embedded in regulated products apply after 36 months. To ease the transition to the new legal framework, the Commission has launched the AI Pact, a voluntary initiative that supports future implementation and invites AI developers from Europe and beyond to comply with the key obligations of the AI Act ahead of time.
Background
The AI Act is a key element of EU policy to promote the development and use of safe and lawful AI that respects fundamental rights. The Commission submitted the proposal for the AI Act in April 2021, and the co-legislators reached a provisional agreement on December 8, 2023.

The AI Act aims to provide AI developers and users with clear requirements and obligations regarding certain uses of AI. At the same time, the regulation is intended to reduce the administrative and financial burden on companies, especially small and medium-sized enterprises (SMEs).

The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, including the AI Innovation Package and the Coordinated Plan for AI. Together, these measures will ensure the safety and fundamental rights of people and businesses in relation to AI, and will boost acceptance, investment and innovation in AI across the EU.

The AI Act is the world’s first comprehensive legal framework for AI. The new rules aim to promote trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety and ethical principles, and by addressing the risks posed by very powerful and impactful AI models.
Why do we need rules for AI?
The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited or no risk and can help solve many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes.

For example, it is often impossible to determine why an AI system made a particular decision or prediction and took a particular action. It can therefore be difficult to assess whether someone has been unfairly disadvantaged, for example in a hiring decision or when applying for a public benefit. Although existing legislation provides some protection, it is not sufficient to address the specific challenges that AI systems can bring.

The proposed rules will:
- address risks that arise specifically from AI applications;
- prohibit AI practices that pose unacceptable risks;
- draw up a list of high-risk applications;
- define clear requirements for AI systems used in high-risk applications;
- define specific obligations for users and providers of high-risk AI applications;
- require a conformity assessment before a given AI system is put into service or placed on the market;
- ensure enforcement after a given AI system has been placed on the market;
- establish a governance structure at European and national level.
The AI Act is an important step towards the responsible and ethical development and use of AI in Europe. It creates a framework that promotes innovation and competitiveness while protecting citizens’ fundamental rights and safety. The impact of the AI Act will need to be monitored carefully over the coming years to assess how effectively it promotes safe, ethical and innovative AI.