AI Literacy and the interpretation of Article 4 of the EU AI Act
The EU AI Act, which came into force on August 1, 2024, presents companies with new challenges, particularly regarding AI competence, or “AI literacy”. Article 4 of the Act, which applies from February 2, 2025, obliges companies to ensure a sufficient level of AI literacy among their employees. The exact interpretation of this article, however, is still the subject of debate.

For small start-ups, the question arises whether they actually need a dedicated AI officer. Experts interpret the requirements to mean that no specific officer is necessarily required; rather, a basic understanding of AI should be built across the entire team. This can be achieved through regular training, attendance at specialist conferences, or cooperation with external experts. The scope of the required measures should be based on the complexity and risk potential of the AI systems in use.

In recent months, there has been a veritable boom in providers selling expensive training courses and certificates for AI expertise. While basic training can be useful for topics such as data protection and the responsible use of AI, most high-priced certificates are unnecessary for the vast majority of companies, especially small start-ups and medium-sized businesses that use low-risk AI systems. Instead, it is advisable to carefully analyze the company’s actual need for AI expertise and develop tailor-made solutions: internal workshops, free online resources, or targeted collaboration with experts on specific issues. The AI Act requires a “sufficient level” of AI competence, which can vary greatly depending on the company and the AI system used. A look at the recitals of the AI Act provides further important guidance on the interpretation of Article 4.
Recital 30 emphasizes that the promotion of AI literacy aims to improve the understanding of the functioning, possibilities, and limitations of AI systems, enabling users and stakeholders to make informed decisions and use the technology responsibly. Recital 31 adds that measures to promote AI literacy should be proportionate and appropriate to the context. This underlines that no blanket, overly elaborate training programs are required, but rather skills development geared to specific needs and risks. For start-ups and small companies, this means they can focus on the aspects of AI technology that are relevant to them. A company using AI-powered chatbots, for example, should ensure that the relevant employees understand the functionality, potential sources of error, and ethical aspects of this specific application; comprehensive training on complex AI algorithms or neural networks would probably not be necessary in this case.
It is also important to note that the AI Act provides for the development of guidelines and support measures by the EU Commission and national authorities. Start-ups should take advantage of these resources as soon as they become available to build the necessary AI expertise in a cost-effective manner. In addition, industry associations and networks can be valuable platforms for exchanging knowledge and jointly developing best practices. Ultimately, building AI expertise is not about acquiring expensive certificates, but about creating a sound understanding of the technologies in use and how to apply them responsibly. Start-ups should see this as an opportunity to strengthen their innovative power while consolidating the trust of customers and stakeholders in their AI-based products and services.
Concrete obligations and their implementation in everyday startup life
In addition to AI literacy, the AI Act imposes other important obligations on companies. Central among these are the documentation and transparency of AI systems, compliance with ethical guidelines, and attention to data protection and data security. Particularly strict requirements apply to high-risk AI systems, including conformity assessments and registration in an EU database. Providers of high-risk AI systems must supply appropriate instructions for use, including information on the system’s level of accuracy and the relevant accuracy metrics. They must also design their systems to be sufficiently transparent for deployers to interpret and use the outputs appropriately. While many start-ups may not develop high-risk systems, careful review and classification of their own products is essential. Implementing these obligations can be challenging for small teams, but it is also an opportunity to establish robust processes and build customer trust. The AI Act further provides special measures to promote innovation, such as AI regulatory sandboxes in which providers can test their AI systems under regulatory supervision. This gives start-ups in particular the opportunity to develop their innovations further under real-world conditions.
Opportunities and competitive advantages through early compliance
Compliance with the AI Act offers start-ups and self-employed individuals not only challenges but also significant opportunities. Early compliance can yield a real competitive advantage: companies that demonstrably meet the requirements of the AI Act can position themselves as trustworthy and responsible players in the AI market. It can even be worthwhile to submit voluntarily to the requirements where this is not mandatory for lower risk classes, which strengthens trust and reputation and positions start-ups as pioneers in responsible AI development. The clear legal framework can also stimulate innovation and open up new business areas, for example in compliance tools or consulting services. In addition, compliance with EU rules facilitates access to the entire European market, which offers new growth opportunities for small companies in particular. It is important to emphasize that the AI Act is not intended to hinder innovation, but to create a trustworthy and secure framework for the development and use of AI technologies. Companies that implement its requirements proactively can turn this into an advantage by strengthening the trust of their customers and investors.
Proactive handling of the AI Act as the key to success
For start-ups and the self-employed, a proactive approach to the AI Act is the key to long-term success in the AI market. The new requirements should be seen not as a burden, but as an opportunity for differentiation and quality improvement. Investing in AI literacy and robust compliance processes can pay off in the long term by strengthening the trust of customers and partners and opening up new business opportunities.

As an experienced lawyer in the field of AI and technology law, I recommend that start-ups engage with the requirements of the AI Act at an early stage. The planned AI regulatory sandboxes offer an excellent opportunity to develop and test AI systems under supervision; small companies and start-ups in particular should take advantage of them to drive their innovations forward while ensuring compliance. In my consulting practice, I often advise companies to voluntarily submit to higher standards even if their AI systems fall into lower risk classes, as this can be a decisive competitive advantage and boost customer and investor confidence.

Implementing the necessary measures, especially for high-risk systems, can be time-consuming and resource-intensive. My experience shows that early and thorough preparation is crucial, and I support companies by developing customized compliance strategies and assisting with their implementation. The AI Act is a dynamic set of rules that will continue to evolve; as your legal advisor, I will keep you up to date on all relevant changes and help you adapt your strategies accordingly. This allows you to focus on developing innovative AI solutions while I ensure that you remain compliant. A proactive approach not only minimizes legal risks but also strengthens your position in the growing AI market. Let’s work together to establish your company as a responsible innovator in the European AI landscape.