- Artificial intelligence (AI) is changing recruiting, but is also becoming increasingly regulated and poses legal challenges.
- Fully automated decisions are generally not permitted under Art. 22 GDPR unless there is meaningful human control.
- Transparency regarding the use of AI in application processes is required by law and is important for building trust.
- The General Equal Treatment Act (AGG) protects against discrimination; AI can introduce unwanted bias into selection processes.
- Startups must comply with GDPR and AGG requirements, despite limited resources and expertise.
- Choose legally compliant AI tools and consider bias testing, human control and data protection impact assessments.
- Early planning and proactivity in legal matters minimize the liability risk and create trust.
As a lawyer specializing in advising start-ups, I have first-hand experience of the growing importance of artificial intelligence (AI) in recruitment. Many founders and HR managers are fascinated by the prospect of using AI to make application processes more efficient and objective – for example through automated CV screening or digital pre-interviews. At the same time, however, I also sense considerable uncertainty: What is legally permitted? Where are the pitfalls, especially in German data protection and employment law? And how do you make sure that, as a young company, you don’t unintentionally violate laws such as the GDPR or the AGG?
In this article, I provide a comprehensive guide from my legal practice. I highlight the legal framework for the use of AI in the application process in Germany – from the requirements of the General Data Protection Regulation (GDPR) for automated decision-making (keyword Art. 22 GDPR) to information obligations towards applicants, protection against discrimination under the General Equal Treatment Act (AGG) and data protection requirements (legal bases, purpose limitation, storage periods, technical and organizational measures). Specifically for start-ups, I discuss the additional challenges that arise and how these can be addressed pragmatically. Finally, I provide practical recommendations on how founders can select and implement legally compliant AI tools – from selecting the provider to audit trails and bias monitoring to the necessary documentation.
My aim is to give you – as the founder or HR manager of a startup – the tools to use AI in recruiting responsibly and in compliance with the law. When used correctly, AI can help you find the right talent and save valuable time. However, if used incorrectly, there is a risk of legal conflicts and reputational damage. So let’s go through the relevant legal issues step by step so that you can take advantage of the opportunities offered by AI without falling into legal traps.
Automated decisions in the application process (Art. 22 GDPR)
A central starting point is Art. 22 para. 1 GDPR: everyone has the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them. What does this mean for the application process? If an AI system independently decides to reject certain applicants – for example by creating a ranking in which all candidates below a certain score automatically receive a rejection – then the AI makes a decision with a significant impact (the applicant loses the chance of getting the job). Such fully automated rejections are generally not permitted under Art. 22 para. 1 GDPR.
Although there are exceptions in Art. 22 para. 2 GDPR, these hardly apply to recruitment: an exclusively automated decision would be permitted if it is necessary for the conclusion or performance of a contract with the data subject (lit. a), is legally permitted subject to appropriate safeguards (lit. b) or is expressly based on the consent of the data subject (lit. c). In the recruitment process, it can rarely be argued that a computer must necessarily decide without human involvement in order to conclude a contract – on the contrary, human review is always possible. There are no special legal permissions for automated recruitment decisions in Germany. And as a lawyer, I consider relying on applicants' consent risky: first, it is questionable whether consent can really be voluntary in the application process (given the pressure to get the job); second, consent can be withdrawn at any time, which undermines planning certainty.
The practical recommendation is therefore clear: avoid making decisions in the application process that are made solely by an AI. The HR department should always have the final say. Art. 22 GDPR forces HR to retain “decision-making sovereignty”.
It is important to understand how strictly this principle is interpreted in current case law. A landmark ruling by the European Court of Justice – the so-called SCHUFA ruling of 7 December 2023 (ECJ, Case C-634/21) – has underlined the scope of Art. 22 para. 1 GDPR. Although the case concerned credit scoring, its guiding principles can be applied to applicant selection: the ECJ ruled that even an automated step that significantly shapes the decision-making process can constitute an "exclusively automated" decision. This means that even if a human is still formally involved, there is a violation of Art. 22 GDPR if that human in fact merely rubber-stamps the result produced by the AI. Applied to recruiting: if you have an AI system pre-sort applications and only show the HR department the top candidates, while the remaining applications are sorted out unseen, then the AI has de facto decided the fate of these unseen applicants on its own. Such a procedure would be problematic because it automatically excludes applicants without a human ever having looked at their documents.
The consequence: ensure a final human decision. In practice, AI can certainly be used for support – for example, to scan and evaluate incoming applications – but the final selection of who is invited or rejected should not follow the algorithm blindly. For example, HR can consider all applicants classified as suitable by the AI, but also randomly review applications that the AI has rated poorly in order to discover possible "overlooked" candidates. It is important that a human has the option of deviating from the AI recommendation and actually uses this option, at least in individual cases. In this way, the decision is not "exclusively automated".
Applicants’ rights in this context also include transparency and participation: if – despite all caution – a permissible automated decision is made in an individual case, Art. 22 para. 3 GDPR requires certain protective measures. For example, the applicant would have the right to present their point of view, to challenge the decision and to obtain human intervention. Ideally, however, it should never get to the point where an applicant receives a fully automated rejection. If you take Art. 22 GDPR seriously, you should plan for a "human in the loop" from the outset, i.e. a person who supervises the AI selection. This not only drastically reduces legal risks, but often also improves the quality of the selection, as a human can take into account context and soft skills that an algorithm may not recognize.
Information obligations and transparency in the use of AI
Data protection thrives on transparency. Applicants have a right to know whether and in what form AI systems are used in the selection process. Companies must comply with the information obligations under Art. 13 GDPR as soon as the data is collected – typically when the applicant submits their documents. This privacy policy for applicants should explicitly mention that an AI-based system is used to support the application process. It is important that the purpose is clearly stated (e.g. “Implementation of the application process, including a partially automated evaluation of the application documents”) and that applicants are informed of the key points: What data will be processed? How long will it be stored? To whom will they be transmitted if necessary (e.g. to an external AI service provider)? And above all: How does the AI-supported evaluation work in principle?
Article 13(2)(f) GDPR requires that, in the case of automated decisions within the meaning of Art. 22 para. 1 GDPR, data subjects be informed about the logic involved as well as the significance and intended consequences of such processing. Specifically: if an automated decision were actually involved (such as an automatic rejection by the AI), the applicant would at least have to be given a basic explanation of the criteria the system uses to make the decision and what this means for them. In practice, however, we avoid such fully automated decisions, as recommended above. Nevertheless, you should proactively ensure transparency even when using AI for purely supportive purposes: applicants appreciate it when it is openly communicated that, for example, a software tool scans and evaluates their details, but the final decision is made by a human. This information can be included in the privacy policy – for example: "We use software for the initial screening of applications, which creates profiles based on the job requirements and suggests a ranking list. However, the final selection decision is the responsibility of our HR employees." A text like this informs the applicant honestly without scaring them off and at the same time makes clear that there is no blind "the computer decides, humans have no influence".
In addition to being fair to candidates, transparency also has a legal dimension: it not only fulfills legal obligations, but can also help to avoid disputes later on. If a rejected applicant feels discriminated against or asks why they were rejected, documented AI logic communicated in advance makes it easier to explain objectively that the decision was based on legitimate criteria. (Note: there is no general obligation to communicate reasons for rejection – many employers deliberately do not do so in order to avoid providing a target. However, under data protection law, the applicant could assert a right to information in accordance with Art. 15 GDPR in order to find out what data is stored about them and, if applicable, whether profiling has taken place. At that point, at the latest, you will have to lay your cards on the table and explain how the AI system worked.)
Another point is the purpose limitation: Applicant data may only be used for the process for which it was collected. If you use AI to gain insights from CVs, for example, this data must not suddenly be used for completely different purposes. In particular, it is not advisable to use application data unsolicited to further develop the AI model. Example: A start-up develops its own AI evaluation tool and wants to use every incoming application to train the algorithm at the same time. This would be a new purpose (“training the system”) that goes beyond the actual selection of applicants. In this case, the applicant would have to be clearly informed beforehand – presumably you would even need their consent, as it is no longer covered by the original purpose of the “application process”. Alternatively, the data would have to be anonymized so that there is no longer any personal reference before it is fed into the training.
Profiling is a keyword that is closely linked to AI. The GDPR defines “profiling” (Art. 4 No. 4) as any automated processing of personal data intended to evaluate personal aspects relating to an individual. So if the AI tool calculates an aptitude score or evaluates personality traits, for example, this is profiling. Profiling in itself is not prohibited, but requires special attention to the transparency rules mentioned. For example, the privacy policy may state: “We also use automated analysis methods (profiling) to assess your suitability for the position. This serves the sole purpose of matching your details with the job profile.” Important: If you actually carry out a fully automated decision in exceptional cases (which we want to avoid), you would also have to explicitly refer to Art. 22 GDPR and explain the rights mentioned (to human intervention, etc.).
To summarize: Clear and honest communication about the use of AI creates trust and fulfills legal obligations. Applicants should not have the feeling that they are being secretly weeded out by a “recruiting robot”. It is better to provide too much information than too little – this shows professionalism and a sense of responsibility. And always keep your data protection information up to date, especially when you introduce new tools or change the way you process data. This is the only way applicants can exercise their rights and you can meet your accountability obligations (Art. 5 para. 2 GDPR).
General Equal Treatment Act (AGG) and AI risks
In addition to data protection, anti-discrimination law is a particular focus in the application process. The General Equal Treatment Act (AGG) also expressly applies to applicants – a rejected candidate can therefore invoke it if they have been discriminated against on the basis of a protected characteristic. The protected characteristics are listed in § 1 AGG: race, ethnic origin, gender, religion or belief, disability, age or sexual identity. No one may be discriminated against on the basis of one of these characteristics, either in the selection process or during recruitment.
So how can AI lead to discrimination? It is often not deliberate discrimination (“the algorithm rejects all women” would be overt, direct discrimination under Section 3 (1) AGG, which is of course illegal). Hidden, indirect discrimination is much more likely (Section 3 (2) AGG). Indirect discrimination means that an apparently neutral rule or practice actually leads to a particular disadvantage for people with a protected characteristic. AI systems are susceptible to this. They learn from historical data and find statistical correlations. If the training data or the defined selection criteria are biased, the AI adopts this bias.
A practical example: An AI-supported screening tool was trained using data from previous application processes. In the past, more male applicants were hired – without it being openly stated. The AI may recognize patterns that are indirectly related to gender – such as certain formulations in the CV or hobbies – and classify applications from women lower on average. The company might not even notice this, but the result would be indirect gender discrimination: a seemingly neutral criterion (similarity to previous “successful” applications) disadvantages women as a group. A similar thing can happen with age (e.g. AI systems could rank older applicants lower because they interpret longer professional experience differently, or because certain current software skills are mainly found in younger applicants’ CVs). Or with ethnic origin: If, for example, the algorithm unintentionally rates applications with certain first names or places of residence lower (because these characteristics correlate with a lower hiring rate in the training data), there is indirect discrimination based on origin.
Section 7 AGG prohibits such discrimination in the employment relationship – including the application phase. The risk is not only theoretical: there have already been cases in the world of work in which automated systems have produced discriminatory results (keyword: algorithmic bias). Important to know: if a rejected applicant presents evidence that points to discrimination, Section 22 AGG applies – the so-called reversal of the burden of proof. The employer must then prove that there was no violation of the AGG. This can be extremely difficult with an opaque AI system. How are you going to prove in court that your algorithm did not have an impermissible bias if even its developers are often unable to explain exactly why it rejected someone? This is precisely where a considerable liability risk lurks.
If courts or the anti-discrimination office come into play, there is a risk of claims for compensation and damages under Section 15 AGG. Although a rejected applicant has no right to be hired, they can demand monetary compensation. Many an employer has had to pay several thousand euros because an applicant could credibly claim to have been discriminated against because of their age, for example – even if the discrimination was not intentional. In addition, the damage to an employer’s image is enormous: a press release saying “Startup XY systematically sorts out older applicants” can be devastating for a young company.
What can be done to minimize the AGG risk? First of all, design your AI-supported procedure so that it focuses on objective, job-related criteria. Avoid taking into account characteristics that could serve, even indirectly, as a proxy for protected grounds. For example, an AI does not need access to the applicant's date of birth or gender – such data should either not be fed into the system in the first place or be hidden during the evaluation. Some companies disregard photos and names in the first step in order to reduce bias (keyword: anonymized applications). This makes sense, but is unfortunately no guarantee: even seemingly neutral data such as the zip code (which can reveal something about regional origin) or club memberships (which can suggest a religion or ideology) can become a proxy. It is therefore important that the developers or providers of the AI tool systematically test for bias. For example, use test data: what happens if two profiles are identical except for one characteristic (such as gender or age) – does the AI rate them the same? Such tests provide indications as to whether the system may be indirectly evaluating a protected characteristic.
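To make such a paired-profile test concrete, here is a minimal sketch in Python. It assumes a hypothetical `score_candidate` function that wraps whatever scoring interface your AI tool exposes; the profile fields and the dummy scorer are purely illustrative, not a vendor API.

```python
from copy import deepcopy

def counterfactual_check(score_candidate, profile, attribute, values, tolerance=0.05):
    """Score otherwise identical profiles that differ only in one attribute.

    `score_candidate` is assumed to wrap the scoring call of your AI tool and
    to return a number (e.g. between 0 and 1). Everything here is illustrative.
    """
    scores = {}
    for value in values:
        variant = deepcopy(profile)
        variant[attribute] = value
        scores[value] = score_candidate(variant)

    spread = max(scores.values()) - min(scores.values())
    return scores, spread, spread > tolerance  # a large spread hints at a proxy effect

# Dummy scorer standing in for the real tool – replace with your actual scoring call
def score_candidate(profile):
    return 0.7 if profile.get("gender") == "male" else 0.55

base_profile = {"years_experience": 6, "skills": ["Python", "SQL"], "gender": "unspecified"}
scores, spread, flagged = counterfactual_check(
    score_candidate, base_profile, "gender", ["female", "male"]
)
print(scores, spread, flagged)  # flagged=True -> investigate before go-live
```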
Furthermore, AI results should be statistically analyzed: For example, look over a few months to see how high the proportion of certain groups in the top candidates is compared to their proportion of all applicants. If, for example, out of 100 applications, 50 came from women and 50 from men, but the AI only suggests 2 women in the top 10, alarm bells should ring – there could be an (unintentional) bias at play. Such monitoring measures are part of bias monitoring, which I will discuss in the practical section.
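One simple way to operationalize this monitoring is to compare each group's selection rate among the shortlisted candidates with the best-performing group's rate and flag large gaps. The sketch below uses plain Python with illustrative field names and an 80% threshold as a rough rule of thumb, not a legal standard.

```python
from collections import Counter

def selection_rates(applicants, shortlisted, group_key, threshold=0.8):
    """Compare selection rates per group and flag large disparities.

    `applicants` and `shortlisted` are lists of dicts containing `group_key`
    (e.g. "gender" or an age band). Field names are illustrative only.
    """
    total = Counter(a[group_key] for a in applicants)
    selected = Counter(a[group_key] for a in shortlisted)

    rates = {g: selected.get(g, 0) / n for g, n in total.items() if n > 0}
    best = max(rates.values()) if rates else 0.0
    flagged = {g: r for g, r in rates.items() if best and r / best < threshold}
    return rates, flagged

# Example from the text: 50/50 applications, but only 2 women in the top 10
applicants = [{"gender": "f"}] * 50 + [{"gender": "m"}] * 50
top_10 = [{"gender": "f"}] * 2 + [{"gender": "m"}] * 8
rates, flagged = selection_rates(applicants, top_10, "gender")
print(rates)    # {'f': 0.04, 'm': 0.16}
print(flagged)  # {'f': 0.04} -> selection rate well below 80% of the best group's rate
```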
In the end, it remains to be said: The responsibility for non-discriminatory procedures always lies with the company. You cannot excuse yourself by saying that the software has spit it out that way. If the AI discriminates, your company is legally “discriminating”. That’s why start-ups need to take the utmost care here. Ideally, you should also document why a candidate was rejected – e.g. “lack of XY expertise” – so that it can be proven in retrospect that the reasons had nothing to do with AGG characteristics. The more plausible and comprehensible your decision-making process, the better you can refute an AGG accusation. AI can be a tool, but it must be trained and used correctly so that it does not become a boomerang.
Data protection requirements in the application process
When handling applicant data, the general rules of the GDPR and supplementary German data protection law apply. First of all, the legal basis: in Germany, Section 26 (1) BDSG is generally decisive for data processing in the employment context (which also includes applicants). Accordingly, the processing of personal applicant data is permitted if it is necessary for the decision on the establishment of an employment relationship. This covers many typical actions in the application process – e.g. reading a CV, taking notes on interview results or the use of a suitable IT system, provided it really serves proper personnel selection. "Necessary" is understood as a proportionality test: is there a milder means than this data processing? Is it appropriate for achieving the objective (finding the right person)? An AI tool must therefore not be used "excessively".
Depending on the situation, the general legal bases of the GDPR may also apply: Art. 6 para. 1 lit. b GDPR (implementation of pre-contractual measures in the context of an application relationship) or lit. f (legitimate interest of the company in efficient recruitment). At the end of the day, however, all of the aforementioned bases require the same thing: careful consideration and proportionality. Applicant consent, on the other hand, is rarely suitable as a basis – apart from in special cases. As already mentioned, it is often not really voluntary (keyword “imbalance”). Only in certain cases, for example if you ask applicants whether you may keep their data in the talent pool, can consent be useful. In this case, however, it must be explicit, informed and voluntary and can be withdrawn at any time.
The data protection principles of Art. 5 GDPR also apply in other respects: purpose limitation, data minimization, storage limitation, integrity and confidentiality. For AI, this specifically means: do not collect more information than is really necessary for the selection (data minimization). Questions in the application form should be job-relevant – information on religion, ideology, health or family planning has no place there (unless the applicant voluntarily declares a severe disability, for example). Any additional data collection increases risks. If possible, use privacy by design and by default (Art. 25 GDPR): the system should be configured to work in a data-saving manner by default. For example, you can often set which fields the AI evaluates – it is important to weigh this up and, if in doubt, allow less.
Purpose limitation and retention periods: Applicant data may only be used for the ongoing process and may not be misused for other purposes. For example, you should not use unsolicited application documents to “train” your AI algorithm – unless you completely anonymize the data or obtain separate consent. Once an application process has been completed, the data of rejected candidates must be deleted as soon as it is no longer required. In practice, a maximum period of around 6 months has been established as permissible. Many companies keep the documents for this period in order to be able to respond to any legal claims (in particular under the AGG). After this period at the latest, the data should be permanently removed or anonymized. You should also inform applicants of this in your data protection notice (e.g. “Your data will be kept for up to 6 months after the end of the procedure and then deleted”). If you would like to keep someone on record for longer, you will need their consent for the talent pool, as mentioned above.
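If you want to automate this retention rule, a minimal sketch could look like the following. It assumes a hypothetical applicant record with `status` and `process_ended_on` fields and treats roughly 180 days as the six-month window – adapt both to your own tooling and to what your privacy notice actually promises.

```python
from datetime import date, timedelta

RETENTION = timedelta(days=180)  # roughly 6 months; align with your privacy notice

def records_due_for_deletion(records, today=None):
    """Return rejected-applicant records whose retention window has expired.

    Each record is assumed to be a dict with `status` and `process_ended_on`
    (a `date`). Field names are illustrative, not a specific ATS schema.
    """
    today = today or date.today()
    return [
        r for r in records
        if r["status"] == "rejected"
        and r["process_ended_on"] + RETENTION < today
    ]

# Example: run this in a scheduled job, then delete or anonymize the hits
records = [
    {"id": 1, "status": "rejected", "process_ended_on": date(2024, 1, 10)},
    {"id": 2, "status": "rejected", "process_ended_on": date.today()},
]
for r in records_due_for_deletion(records):
    print(f"Delete or anonymize applicant record {r['id']}")
```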
Technical and organizational measures (TOM): Ensure appropriate security of applicant data (Art. 32 GDPR). This means: control access to the data (only authorized HR personnel or contracted service providers), protect data during transmission and storage (e.g. encryption) and generally maintain confidentiality. If you use a cloud-based AI tool, conclude a data processing agreement with the provider in accordance with Art. 28 GDPR. This must state how the service provider protects the data and that it will only process it in accordance with your instructions. With international providers, pay attention to the data transfer requirements (keyword: Schrems II and standard contractual clauses if data flows to the USA). If in doubt, a European provider is easier to handle.
Last but not least: Check whether a data protection impact assessment (DPIA) is required (Art. 35 GDPR). In the case of AI systems for applicant assessment, there is usually a high risk to the rights of applicants – due to automation and potential effects on career advancement. Therefore, a DPIA will usually have to be carried out. So document in advance what risks the AI process poses (e.g. wrong decisions, potential for discrimination, data protection violations) and what countermeasures you are taking (e.g. human control, pseudonymization, strict access restrictions, bias tests, etc.). A DPIA of this kind gives you peace of mind and shows authorities in case of doubt that you have fulfilled your obligations.
Finally, applicants have data subject rights: for example, they can request information about what data has been stored about them (Art. 15 GDPR). You must be prepared for this – if in doubt, you must be able to explain that and how an AI system was used. Applicants may also be able to request rectification or erasure of their data or object to processing. In practice, the latter will mean that the application cannot be considered any further. It is important to build in processes from the outset to ensure that such rights are fulfilled in a timely manner. Then you won’t get into trouble here either.
Special requirements and challenges for start-ups
You might think that all these rules only apply to large corporations with huge numbers of applicants. But far from it: start-ups and small companies must also comply with the legal requirements. However, young companies in particular are often not even aware of what obligations they have – after all, the focus is initially on the product or business idea, not directly on compliance. As a lawyer, I see typical challenges here:
- Limited resources and expertise: A small start-up rarely has its own legal advisor or data protection expert. The topics of GDPR and AGG seem complex and are sometimes suppressed, according to the motto “we have more important things to do”. This can be dangerous, because ignorance is no defense against punishment. It is worth building up at least a basic level of expertise or seeking external advice before unleashing AI tools on applicants. It’s not about every founder becoming a lawyer – but you should know the most important dos and don’ts.
- No free pass for small companies: Contrary to what some people assume, start-ups are hardly privileged under data protection law. There are some simplifications, such as the fact that companies with fewer than 250 employees do not have to keep formal records of processing activities (except in the case of high-risk processing, which is quickly the case with AI) or that a data protection officer is only mandatory for 20 or more employees (Section 38 BDSG). However, the core obligations of the GDPR – from the data protection principles to the obligation to report data breaches – apply from the first customer or applicant. The AGG also has no de minimis threshold: as soon as you want to hire someone, you are obliged to make a non-discriminatory selection.
- Works council and co-determination: Many startups do not have a works council in the first few years, so you don’t have to worry about co-determination rights at first. However, if there is a works council (e.g. if your startup grows and employees set one up), remember that the introduction of AI systems may be subject to co-determination. Under the Works Constitution Act, the works council has a say in selection guidelines (Section 95 BetrVG) and must consent to technical equipment that is suitable for monitoring the behavior or performance of employees (Section 87 (1) No. 6 BetrVG). An applicant management system with AI could be considered a "selection guideline", and if it also evaluates online interviews or digital tests, for example, it falls under these rules. For a small startup, this may still be a long way off – but I mention it because founders often have no idea that such aspects could become important later on. So if you eventually grow and have a works council, get them on board before you use AI in HR to avoid formal conflicts.
- International tools and data transfer: Startups are keen to experiment and like to use new tools that are available globally. An AI recruiting tool from the USA may be technically great, but think about the legal implications – especially data transfers to third countries. Small companies in particular need to be careful here: A breach of the GDPR (e.g. unauthorized transfer to the USA) can theoretically be punished just as severely as for large companies. With a limited budget, a high fine can threaten a company’s existence. So it’s better to check in advance: Where is the provider based? Does it offer EU servers? Does it have standard contractual clauses (if necessary) and other guarantees for international transfers? You need to ask yourself questions like these, even if you only have 10 employees.
- Rapid growth and scaling: If successful, a start-up grows quickly. Processes that were still informal with 5 employees must be structured with 50 or 100 employees. What does this mean for recruiting? Perhaps you only receive applications sporadically at the beginning and can do a lot manually. Later on, when the number of applicants increases, AI solutions are used – but then the “mountain of data” is also larger and errors in the system scale with it. As a larger company, you are also more likely to be the target of audits by the data protection authorities. So my advice is to set the right course early on. If there is an awareness of data protection and equal treatment right from the start, processes can be scaled up with the company without having to reinvent everything. If, on the other hand, you have “grown wildly” and only realize when you have 50 employees that you have ignored the GDPR & AGG so far, the retrofitting will be all the more painful.
- External impact: Start-ups thrive on trust – from customers, investors and future employees. A faux pas on the subject of AI (such as a public accusation of discrimination with opaque software) can shake this trust. Small companies have fewer reserves to ride out such reputational damage than large corporations. Start-ups in particular should therefore take the issue seriously. Word also gets around in applicant circles when a company treats candidates fairly and transparently – or not. In the “war for talent”, this can make the difference between top talent applying to you or going to the competition.
To summarize: Startups may have to juggle being innovative with scarce resources and acting quickly, but compliance in AI-supported recruiting must not fall by the wayside. It is usually enough to clear the biggest stumbling blocks with common sense and a little advice – then the effort remains manageable and you can still recruit in a modern way.
Selection and implementation of legally compliant AI tools – practical guide
- Provider and tool check: Choose your AI system carefully. Check the provider’s reputation and compliance as early as the selection phase. Is the company based in the EU and therefore automatically subject to the GDPR? If not, what safeguards does it offer (e.g. EU data center, standard contractual clauses)? Read the tool’s data protection information: does it use applicant data for its own purposes (e.g. training other AI models)? If in doubt, ask. Reputable providers should be prepared to explain to you how their model works, what data it processes and what measures have been taken to prevent bias. A look at certifications can also help – are there perhaps already seals of approval or audits for the product?
- Bias monitoring right from the start: Before you fully implement the tool in your application process, carry out internal tests (if necessary with historically anonymized application data or constructed test profiles). The aim is to uncover any bias at an early stage. Analyze the results: Are there any tendencies, e.g. that a certain age group always performs worse?
- Ensure human control: Implement organizationally that the AI does not make any final decisions. This may sound obvious, but it is important in practice: for example, define that every automated pre-selection is always cross-checked by a recruiter – at least on a random basis or in borderline cases. Train your HR employees to critically scrutinize AI results. It can be useful to create an internal guideline that states: “AI is an auxiliary tool, the responsibility lies with people.” Such a guiding principle helps to shape the culture in dealing with the tool. Specifically, you could, for example, stipulate that all applicants who meet certain minimum criteria are screened manually at least once before being rejected, regardless of their AI score. This allows you to maintain Art. 22 GDPR-compliant conditions.
- Transparency towards applicants: As described in detail above, inform candidates about the use of AI. In practice, you should revise your privacy policy for applicants before the tool goes live. Mention the system, describe how it works in a few sentences and emphasize that no decision is made without human review. You can also consider briefly mentioning this in the job advertisement or on the careers website (“We use modern tools to evaluate applications efficiently and fairly”). This creates trust. It is also important to be able to name someone internally who can answer queries in case of doubt. If an applicant wants to know, for example, “What criteria does the AI actually use to evaluate me?”, your team should be prepared to give a comprehensible answer.
- Carry out a data protection impact assessment (DPIA): Document a DPIA before the tool goes live (if required – which, as mentioned, will usually be the case). Take a structured approach: Describe what the system does, what data it uses, who has access, how long data remains etc. Identify risks (e.g. unauthorized access, discrimination, misjudgements) and list countermeasures (e.g. pseudonymization of applicant data during analysis, regular checks, human review). Evaluate the residual risk. It doesn’t have to be a 50-page treatise – the important thing is that you think through the process. Should a supervisory authority or an investor ask, you can show: “Here, we have carefully examined this – and this is how we act.”
- Set up an audit trail and documentation: Don’t let the system run as a black box. Make sure that decisions remain traceable. Many AI tools offer, for example, the option to see afterwards what score a candidate had or which criteria led to rejection (sometimes in the form of explanations such as “lack of key qualification XY”). Activate such functions. Save the relevant logs – naturally in compliance with data protection retention periods. In the event of a dispute (e.g. an AGG procedure), you can then prove why the decision was made. Attention: applicants may have a right to information. Therefore, do not write any derogatory comments in the system, but limit yourself to factual information. An audit trail can also include that every recruiter who overrides an AI recommendation (e.g. invites an applicant who was rejected by the AI after all) briefly justifies this. This allows you to see later whether the AI criteria may need to be adjusted because good people were overlooked. A minimal sketch of such a log entry follows after this list.
- Contractual protection with the provider: Contractually ensure that the AI service provider offers you support if required – for example, in fulfilling requests for information or official audits. Also clarify liability issues in advance so that you are not left holding the bag in the event of a problem.
- Training and sensitizing your team: Train your HR managers in the use of the AI tool. There must be an understanding of how AI works, where its limits lie and how wrong decisions can be recognized. Raising awareness helps to critically question AI results instead of blindly trusting them.
- Contingency plan for incidents: Even with the utmost care, something can go wrong – such as a technical failure of the AI platform, a data leak or an indication of blatant bias. Develop a simple emergency plan: Who will be informed (data protection officer, management)? What happens if the tool fails – do we have a backup process (e.g. manual processing of all applications)? What do we do if we discover serious discrimination – do we stop the application immediately, do we inform affected applicants, do we correct decisions retrospectively? Going through these questions in advance helps to keep a cool head in an emergency and to act quickly and correctly. In particular, a data protection breach (e.g. a leak) must be reported to the authorities within 72 hours – responsibilities should be clear.
- Staying up to date: The world of AI and the laws surrounding it is in a state of flux. What is state of the art and legal today may change in a year’s time. So stay informed! Follow the implementation of the EU AI Act (the European AI Regulation). AI in recruiting is classified there as a high-risk application and will have to meet the corresponding requirements (e.g. technical documentation, risk management, certification if applicable). You should also keep an eye on national legislative initiatives (e.g. amendments to the AGG or the BDSG). Legal certainty is an ongoing process, not a state of affairs – those who update themselves regularly will not experience any nasty surprises and can take advantage of innovations at an early stage.
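Regarding the audit trail mentioned above: the following minimal sketch shows what a traceable log entry could capture – the AI score, the human decision and a factual, job-related reason. The structure and field names are a suggestion for illustration, not a prescribed format or a specific applicant tracking system.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningLogEntry:
    """One traceable decision step: AI suggestion, human decision, factual reason."""
    applicant_id: str          # pseudonymous ID, not the applicant's name
    ai_score: float            # score suggested by the tool
    ai_recommendation: str     # e.g. "reject" / "shortlist"
    human_decision: str        # final decision taken by HR
    reason: str                # factual, job-related reason only
    decided_by: str
    timestamp: str = ""

    def to_json(self) -> str:
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

# Example: HR overrides the AI recommendation and documents why
entry = ScreeningLogEntry(
    applicant_id="cand-0042",
    ai_score=0.31,
    ai_recommendation="reject",
    human_decision="invite",
    reason="Relevant project experience not captured by keyword matching",
    decided_by="hr-user-07",
)
print(entry.to_json())  # persist this line in your applicant tracking log
```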
Current developments: Case law and regulation
Finally, a look at current rulings, official notices and new regulations that are relevant for AI in recruiting:
- European case law (ECJ): The European Court of Justice has now clarified the issue of automated decisions. In addition to the SCHUFA ruling mentioned above (ECJ, 07.12.2023 – C-634/21), which clarified that an AI selection can fall under Art. 22 GDPR even if a human is only involved pro forma, there is, for example, the ruling of 30.03.2023 (ECJ, Case C-34/21). In this ruling, a national regulation (similar to Section 26 BDSG) was declared invalid because it conflicted with the GDPR. The bottom line: National special rules may not apply weaker standards than the GDPR. The German Federal Labor Court (BAG) has also clarified that a proportionality test must always be carried out (BAG, decision of 7.5.2019 – 1 ABR 53/17). So if you use AI, you must be able to demonstrate that the procedure was fair and necessary in an emergency.
- Official guidelines: The data protection supervisory authorities (e.g. in the DSK guidance on employee data protection) emphasize that special care must be taken with AI systems in human resources. In particular, attention should be paid to transparency, explainable criteria and – if automated assessments are carried out – strict compliance with Art. 22 GDPR. The European Data Protection Board (EDPB) demands “meaningful human intervention” in important decisions, i.e. genuine human control. In 2023, the Federal Anti-Discrimination Agency suggested in an expert opinion that the AGG should be adapted to AI challenges (e.g. through rights of association and extended rights to information as well as obligations for AI providers). These are still just proposals, but politicians are taking the issue seriously – future tightening of the law is possible.
- Opinions in the literature: There is a broad consensus among legal experts that fully automated decision-making systems without a human check in the application process are legally impermissible or at least highly risky. The tenor: AI tools should be used as a support, but must not completely replace humans. There are isolated discussions as to whether, for example, a pure ranking without a final decision falls under the ban – but in view of the ECJ case law, one should be careful here and, in case of doubt, always include an intermediate human step.
- EU AI Regulation (AI Act): The European AI Regulation is a real game changer. The AI Act was adopted in 2024 and entered into force on 1 August 2024; most obligations for high-risk systems apply from August 2026. It classifies AI systems in recruiting as high-risk. This entails additional obligations: providers of such systems must, among other things, operate a risk management system, check the training data for bias and create extensive technical documentation. However, deployers are also required to use and monitor their AI system properly. Infringements can result in drastic fines that are standardized across Europe – depending on the type of violation, up to EUR 35 million or 7% of annual global turnover. As a startup, you are therefore well advised to keep an eye on developments. The good news: in future, AI tools will probably come with more compliance "out of the box" (e.g. certifications), which will make things easier for deployers. Nevertheless, it remains your job to prevent discrimination and protect data privacy – even an EU seal of approval won’t change that.
Conclusion
In my experience, the use of AI in recruiting can be a real added value for start-ups: faster pre-selection, more objective decisions, less pressure on the team. But only if it is done responsibly and in a legally compliant manner. The legal guard rails – from Art. 22 GDPR to information obligations and the AGG – are not an end in themselves, but protect applicants from unfair treatment and their data from misuse. Especially in the sensitive situation of an application, a startup should show that it takes this responsibility seriously.
My advice as a lawyer: take a proactive approach to the issue. Don’t wait until the first incident or a warning letter arrives. Data protection and equal treatment should already be on the table when planning to integrate AI into recruiting. It sounds time-consuming, but it can be structured well with the tips given here. If necessary, seek external advice – a brief consultation can prevent costly mistakes.
The ethical dimension is also important: a start-up usually wants to be innovative and modern, but also inclusive and fair. A biased AI system does not fit this image. Conversely, a transparent, fair AI process can strengthen the image of a progressive and responsible employer. Applicants are increasingly paying attention to how they are treated.
Ultimately, investing in legal certainty pays off: you not only avoid fines or legal proceedings, but also create trust – among applicants, employees and business partners. If an authority should actually investigate or an applicant should follow up, you can show your confidence: We have this under control, we use AI, but the human being remains the master of the process.
Developments will continue (think of the AI Act or new rulings), but if you have internalized the basic principles – transparency, fairness, data economy and security – then you will also master future challenges. AI in the application process doesn’t have to be a minefield. With the right mindset and the right precautions, it becomes a powerful tool that gives your startup a head start without getting into legal trouble.