- VibeCoding is revolutionizing software development with AI systems and no-code platforms that replace manual programming.
- Lack of clarity regarding liability and rights to AI-generated code creates legal risks for tech startups.
- Tort and contract law provide the basis for liability for defective code.
- Copyright protection of AI code is problematic, as there are often no creative human contributions.
- Investors should carefully analyze IP risks and liability potentials for AI-generated products.
- Future EU regulations will further specify liability issues and copyright aspects.
- Startups need to adapt their compliance and contracts to avoid legal uncertainties.
VibeCoding describes a current trend in which software is no longer programmed manually but is developed almost exclusively with AI systems or no-code platforms. Instead of writing traditional source code, founders and developers simply describe in natural language what their software should do, or configure it via visual interfaces. Modern AI tools such as Codex, ChatGPT or specialized platforms then automatically translate these instructions into executable program code. As an experienced IT lawyer with a particular passion for AI topics, I am watching this development with fascination: VibeCoding enables considerable efficiency gains and drastically speeds up development, but automated code creation also raises entirely new legal questions.
For example, it is still largely unclear who is liable if AI-generated code contains errors or causes damage, and who ultimately owns the rights to such automated creations. In this article, I therefore focus in particular on the civil liability of tech startups when using VibeCoding and no-code tools, discuss the responsibility of platform providers and highlight the associated copyright challenges. I also explain how an uncertain intellectual property position in AI-generated code can cause difficulties in legal due diligence and what this means for future investor discussions. Finally, I look at relevant provisions such as Sections 823, 831 and 307 BGB, the German Product Liability Act (ProdHaftG) and the German Copyright Act (UrhG), as well as the latest developments in EU law (e.g. the AI Act), in order to provide practical guidance for founders, developers and investors.
Liability risks for startups with AI-generated code
Startups that largely use AI-generated code face the challenge of being legally responsible for the quality and safety of this code, even though it has not been fully tested or written by humans. In principle, AI systems have no legal capacity and therefore cannot be held liable themselves. Responsibility for AI output – be it program code, texts or images – lies with the person who uses the AI and makes use of the results. A company can therefore not claim that the AI is “responsible” – an exclusion of liability by merely referring to the AI’s origin is legally ineffective. This has several facets:
- Liability in tort (Section 823 BGB): If faulty AI code causes damage, the startup is liable under general tort principles. Section 823 (1) BGB establishes liability for damages if, for example, an absolutely protected legal interest (such as life, health or property) is violated by negligently inadequately checked code. This could be the case if software generated by the AI causes material damage to the user due to a programming error (e.g. data loss, or a system failure with consequential damage). The decisive factor is whether the startup breached its duty of care, i.e. failed to exercise the care required in the circumstances. In the absence of human control over AI code, the startup could be accused of negligence, especially if a review or tests would have been required according to the state of the art. However, it must also be taken into account that software errors can never be completely ruled out; liability therefore requires a breach of duty (e.g. a complete lack of quality assurance). In addition to Section 823 BGB, tortious producer liability can also come into play: according to established case law, anyone who places a product (here: software) on the market must exercise sufficient control to avoid risks – otherwise they are liable in the event of damage by way of producer liability based on Section 823 (1) BGB. It should be noted that an AI itself is not a “product” in the legal sense, but part of a software product or service offered by the startup.
- Contractual liability and warranty: The startup is contractually liable to customers for defects in its software. In the B2B context in particular, startups often try to limit their liability contractually (limitations of liability in general terms and conditions (GTCs) or individual contracts). In Germany, however, this runs up against legal limits: a complete exclusion for simple negligence is usually ineffective in GTCs if it covers essential contractual obligations or unreasonably disadvantages the contractual partner (Section 307 BGB). Cardinal obligations – central obligations whose breach jeopardizes the purpose of the contract – may not be completely excluded in GTCs. In individually negotiated contracts between businesses, liability can be limited further, but liability for intent can never be excluded (Section 276 (3) BGB), and strict standards apply to gross negligence and personal injury. In practical terms, this means: a startup can limit its liability for slight negligence towards business customers to a certain extent, but not for gross negligence or for injury to life, body or health. Standard clauses that exclude any liability for AI errors across the board would generally be ineffective. Even in a purely B2B relationship, a complete exclusion of liability for AI errors for which the startup is responsible would often not survive a content review under Section 307 BGB, because the customer would then bear the full risk. Furthermore, a startup cannot invoke such clauses against consumers at all (Section 309 No. 7 BGB prohibits exclusions of liability for bodily injury and gross negligence in consumer contracts).
- Liability for legal infringements (in particular copyright): Another risk is that AI-generated code infringes third-party intellectual property rights. AI models (e.g. code generators such as GitHub Copilot) have been trained on huge code datasets, which may also contain third-party copyrighted source code. It has been shown that AI outputs are sometimes very similar to the training material. As a result, there is a risk that newly generated code is, for example, subject to an open source license without the startup being aware of this. The AI could reproduce code from a GPL-licensed library; the startup would integrate this code into its proprietary product and thereby violate the license terms and possibly copyrights. Code segments protected by copyright may not simply be adopted, otherwise there is a risk of injunctive relief and claims for damages by the rights holder (Sections 97, 69c UrhG). The startup is liable here as the direct infringer or as interferer (Störer) in a copyright infringement, even if the adoption happened unknowingly via the AI, because the use of the result in its own product is attributed to it. Experts are already warning that AI-generated code could be the “next warning-letter trap” for developers. Example: if the AI output matches a third party’s code published on GitHub, that party can send warning letters. A resourceful “troll” could even deliberately post code online under a strict license and speculate that AI systems will feed this code into various projects. This means considerable risks for the startup – from compliance violations concerning open source software to claims for damages due to copyright infringement. Accordingly, a startup must thoroughly check all AI outputs (e.g. using code scanning tools that search for matches with known code) and ensure that it holds rights to all code components. Failing to carry out such checks could itself be considered negligence.
- Product liability: In rare cases, damage caused by faulty code can also fall under the Product Liability Act (ProdHaftG). However, product liability traditionally applies to physical injury to persons or damage to property caused by a product defect. Whether purely digital products such as software fall under the definition of a product has long been controversial. According to the current legal situation, software as such (without a physical data carrier) is not clearly defined as a “product” within the meaning of Section 2 ProdHaftG. This means that a pure software error, which leads to data loss, for example, does not currently regularly trigger product liability – in this case, tortious and contractual claims remain. The situation is different if AI-controlled software is integrated into a physical product (e.g. an AI controls a machine or a vehicle) and an accident occurs as a result: Then the entire system can be considered a defective product and the manufacturer is liable under ProdHaftG. It is important to note that product liability is independent of fault (the manufacturer is already liable for defects in the product without having to prove negligence). It is therefore particularly important in the case of personal injury whether the AI-based system can be qualified as a product. In cases of doubt, however, the injured party would in any case take parallel tortious action against the startup (e.g. for breach of a duty of care). The ProdHaftG liability cannot be excluded by contract: Section 14 ProdHaftG prohibits any advance agreement that excludes or limits the manufacturer’s liability towards the injured party; such agreements are null and void. A startup can therefore not waive its product liability through general terms and conditions – neither towards consumers nor towards business partners, as far as claims of injured parties are concerned. 
Recourse agreements are only permissible in the internal relationship (for example, between manufacturer and supplier). For AI startups this means: if their products fall under product liability, the strict liability rules are mandatory and cannot be contracted away. Given the previous uncertainty as to whether pure software is covered, this may not have been a major issue in practice; however, new EU rules (see the next section) are imminent and will clearly include software.
Interim conclusion: A startup that uses AI and no-code for programming is generally liable for errors and damages like any software manufacturer, even though the code creation is automated. The challenge is to maintain sufficient due diligence measures (quality tests, code reviews, license checks) despite a high level of automation. Automation does not remove responsibility – it only shifts the nature of the risks. A lack of human final control over the AI results can increase liability risks, as errors remain undetected. Startups should therefore clearly regulate contractually what they promise (without giving unrealistic guarantees), but also know that they cannot exculpate themselves across the board. Ultimately, the person who places AI code on the market is liable for its effects.
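The due diligence measures named above (quality tests, code reviews, license checks) can be operationalized technically. As a minimal sketch, a startup could enforce that no AI-generated file ships without a recorded human review. The markers and file names below are purely hypothetical illustrations, not an established standard:

```python
# Minimal sketch of a review gate for AI-generated code.
# AI_MARKER and REVIEW_MARKER are invented conventions for this example;
# a real team would define its own provenance metadata.

AI_MARKER = "# ai-generated"          # hypothetical provenance marker
REVIEW_MARKER = "# human-reviewed:"   # hypothetical sign-off marker

def review_gate(files: dict[str, str]) -> list[str]:
    """Return files that contain AI-generated code but no recorded
    human review - these should block a release."""
    blocked = []
    for name, source in files.items():
        if AI_MARKER in source and REVIEW_MARKER not in source:
            blocked.append(name)
    return sorted(blocked)

repo = {
    "billing.py": "# ai-generated\n# human-reviewed: a.mueller\nprint('ok')",
    "export.py":  "# ai-generated\nprint('unreviewed')",
    "util.py":    "print('hand-written')",
}
print(review_gate(repo))  # only 'export.py' lacks a review sign-off
```

Such a gate does not discharge the legal duty of care by itself, but it documents that a quality-assurance process exists, which matters when a court later asks whether the startup exercised the required care.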
Liability of operators of no-code platforms and AI coding tools
Not only the start-ups themselves, but also the providers of no-code platforms or AI coding tools could come under scrutiny if faulty or malicious software is generated via their systems. This raises the question of the extent to which platform operators are liable for results that their users generate with the tool.
Basically, a no-code or AI code generator is a tool. The operator provides the infrastructure and perhaps predefined building blocks, while the user (such as the startup) uses them to build the actual application. This is why the industry tries to contractually shift most of the liability to the user. Typically, the terms and conditions of such services contain clauses stating that the user is responsible for compliance with all laws and that the platform operator assumes no liability for the accuracy or suitability of the applications created. Exemptions from liability in B2B terms and conditions are permissible to a certain extent – in contrast to consumer transactions, Section 309 BGB (list of prohibited clauses) does not apply directly, but Section 307 BGB (content control) also applies between businesses. A complete indemnification of the provider for its own fault is likely to be ineffective in B2B terms as well if it unreasonably disadvantages the customer. However, a distinction must be made: the platform provider should typically not be liable for errors based solely on the user’s specific implementation – here it is comparable to a tool manufacturer, which does not have to answer for every misuse of its tool. The situation could be different if the platform itself has a defect that leads to damage (e.g. a software bug in the no-code engine that systematically causes incorrect calculations in all apps created with it). In such a case, the platform provider takes on the role of the manufacturer of a defective product. This is where liability for defects and the German Product Liability Act (ProdHaftG) come into play: the provider is contractually liable to its direct customer (the startup) for ensuring that its tool has the agreed quality and is functional. Under certain circumstances, the platform operator could also be liable in tort to third parties who are harmed by the error in the generated software, if it can be accused of fault (e.g. gross negligence in the programming of the platform).
Contractual limitations of liability of platform operators: In practice, platform providers protect themselves in their terms of use. Liability clauses state, for example, that the provider is not liable for indirect damages, loss of profit, etc., and is only liable up to a certain amount. In B2B contracts, such limitations (e.g. capping liability at the annual fee) can be effective, provided that no cardinal obligations are affected and intentional conduct is not excluded. Importantly, liability for the provider’s willful misconduct can never be excluded. A provider can, to a certain extent, exclude gross negligence in GTCs vis-à-vis a business customer, but this is tricky – German courts tend to regard clauses that also exempt gross negligence as unreasonably disadvantageous (especially if the provider effectively has sole influence on the source of the error). Furthermore, liability for personal injury may not be contractually excluded in B2B relationships either, as this violates fundamental legal principles (and would in any case be prohibited by Section 309 BGB in consumer constellations).
Product liability of tool providers: As mentioned above, it was unclear whether purely digital products fall under the ProdHaftG. Operators of no-code platforms could previously argue that software was not a product, so liability under the ProdHaftG did not appear to apply. However, this legal situation is about to change at EU level: in October 2024, a revised EU Product Liability Directive was adopted that explicitly classifies software (including AI systems) as a product. Article 4 of the new directive clearly defines software as a product, meaning that in future, once the directive has been transposed into national law, software manufacturers in the EU will also be liable under the German Product Liability Act (ProdHaftG). For platform operators this means: if their no-code tool itself is defective and causes personal injury, for example (e.g. a generated application malfunctions on a device and someone is injured), the injured party can bring product liability claims directly against the tool provider once the new directive has been implemented. A contractual exclusion of this liability is not possible – as already provided for under current German law (Section 14 ProdHaftG). The directive also tightens manufacturer liability: under Art. 11 (2), manufacturers are also liable if product defects are caused by a lack of software updates. This is relevant if the platform provider fails to fix known security vulnerabilities in its system by means of an update, thereby causing damage.
Planned EU AI liability rules: In addition to product liability, the EU is pursuing a comprehensive approach to regulating AI. The EU AI Regulation (AI Act) was adopted in 2024 as the first comprehensive legal framework for AI worldwide. Its provisions will largely apply from 2026 and oblige providers of AI systems to carry out risk assessments and implement transparency and safety precautions. However, the AI Regulation does not contain any direct liability provisions – it is market-access and supervisory law, not liability law. It creates no new bases for civil claims; rather, its obligations for the provision of AI are intended to reduce technical and organizational risks. In addition, the EU Commission proposed an AI Liability Directive in 2022 to make it easier for injured parties to obtain compensation. Among other things, it was intended to introduce simplified rules of evidence and presumptions in favor of injured parties – such as a right to information to identify the developer of an AI system, and a presumption of causality where a breach of an AI-related duty probably led to the damage. In particular, it was envisaged that breaches of the AI Regulation (e.g. non-compliance with prescribed safety measures) could give rise to claims under Section 823 (2) BGB, with the AI Regulation qualifying as a protective statute. This would have made AI violations directly sanctionable under civil law. However, the new EU Commission decided at the end of 2024 to withdraw this draft directive for the time being.
For operators of no-code platforms, it is important to note that they may already have to fulfill strict obligations under the AI Regulation as providers of high-risk AI systems (e.g. if their tool is used for safety-critical applications). However, they should not be lulled into a false sense of security under liability law – even if the special AI liability directive has been stopped, they remain vulnerable under general law. Disclaimer clauses only help to a limited extent: An AI cannot be interposed as a “responsible party”; ultimately, a human or legal actor is always liable. Practice will show whether injured parties will try to hold platform providers more liable, for example by arguing that the tool offered is similar to a product or that they were partly responsible. Providers should therefore clearly regulate contractually what the user has to do (e.g. testing obligations, notification obligations in the event of malfunctions) and, if necessary, provide for recourse options. Ultimately, however, a platform operator who provides an unsafe AI tool in gross breach of duty cannot expect to be completely exculpated by general terms and conditions – statutory guidelines such as Section 307 BGB, the German Product Liability Act and, in future, the EU liability regime set limits so that the risks are not unilaterally passed on to the user.
Copyright protectability of AI-generated code (VibeCoding)
A key question for start-ups that use AI code is: Is the AI-generated code protected by copyright at all? Only if a program source code is to be regarded as a personal intellectual creation of a person does it enjoy protection under the German Copyright Act (UrhG). Section 2 (2) of the German Copyright Act expressly requires that the work is a “personal intellectual creation”. Computer programs are protected by copyright in the same way as linguistic works (Section 2 (1) No. 1 in conjunction with Section 69a UrhG), but only if they reach the level of creation – i.e. if they exhibit a minimum degree of individuality through human design.
Fully AI-generated code lacks precisely this human act of creation. An AI does not “think” creatively in the copyright sense, but generates content based on probabilities and training data. According to the current understanding, an AI therefore cannot be an author. In its 2024 DABUS patent decision, the German Federal Court of Justice (BGH) clarified that an invention cannot have a non-human inventor. This assessment can be transferred to copyright law, where the creator principle likewise applies, according to which the author is always the human creator of the work. Works created autonomously by AI – i.e. works that come into being without any formative human influence – are undoubtedly not protected by copyright, because the “human substrate” is missing in the creation, as it has been put in legal terms. The European courts emphasize that only a human author can create originality in the legal sense. The European Court of Justice (ECJ) defines a protectable work as the result of the author’s own intellectual creation, which implicitly presupposes a human being. In decisions such as Infopaq, the ECJ has stated that the author’s individuality and creative decisions are essential – neither of which a machine can exhibit.
This means for VibeCoding code: If the code was generated without the creative contribution of a human being, it does not enjoy copyright protection. It would be virtually in the public domain and could not be monopolized by anyone as their own work. Any third party could copy and use such code without infringing the Copyright Act. This has serious consequences for a startup: the AI code could not be used exclusively; competitors would be able to take it over legally, which weakens the competitive position.
However, the reality is often more complex. Software is rarely created entirely without human intervention. There is usually a developer who prompts and directs the AI, selects the result and perhaps combines or reworks parts. The legal question is whether this human contribution is sufficient to speak of human co-authorship or authorship. Here, case law and literature differentiate according to the degree of human influence:
- Autonomous AI products: If the AI really creates autonomously and the human only gives a general order (“Write me a program that does X”), then there is no personal creation by the human. A simple prompting (“Develop code for function Y”) without specific content specifications will not give the user authorship. The work then originates intellectually from the AI, which is why no protection arises in the absence of a human creator.
- Computer-implemented works with AI support: In this case, the human provides essential specifications and the AI serves merely as a tool for implementation. In the DABUS patent case, the Federal Court of Justice indicated that significant human influence can enable attribution. In copyright law, this corresponds to the case where the user already provides the AI with a formulated idea or structure – such as their own draft source code or detailed instructions that the AI merely refines. In this case, it could be argued that the human provided the “intellectual creation” and the AI acted as an extension (perhaps comparable to an autocomplete, but controlled by the human). Copyright protection would then vest in the human, provided their contribution reaches the required level of creativity. An example: a developer conceptually designs the software architecture and core algorithms themselves (the creative achievement) and then uses AI to program routines or optimize code. In this case, the result bears the personal imprint of the developer, which manifests itself in the code – the code would be protected as a computer program, and the developer or their company would be the author or owner of the exclusive rights.
- Borderline cases: Constellations in which AI and humans work closely together are difficult. Has the human only sketched out rough ideas and the AI generates the creative code flow independently? Or did the human select many small suggestions from the AI and put them together (curator role)? This raises the question of the level of creativity of the human contribution. The current prevailing opinion is that mere selection or commissioning (“Make code for X”) is not sufficient for authorship. There would have to be a qualitative creative influence. If the AI has created the majority of the code independently and the human only makes minimal corrections, protection is unlikely to be assumed. In case of doubt, the code would remain in the public domain, as no sufficient human design is recognizable.
It becomes clear: copyright protection of AI code is possible, but only if the AI truly acts as a subordinate tool and the creative content ultimately comes from a human. In many VibeCoding scenarios, where developers have 95% of the code generated by the AI, this threshold is probably not reached – the majority of the creative programming work is done by the machine, not the human. Accordingly, the resulting code components will generally be unprotected.
This assessment is also reflected in expert recommendations: companies are advised to clarify contractually and organizationally how AI output is handled, as “AI output is generally not protected”. It is “not possible to assert any rights to it”, which is crucial for commercial exploitation. Startups should be aware of this: code generated by an AI system may not provide a copyright defense against imitators. At best, alternative protection mechanisms remain, such as trade secret protection (if the code is kept secret) or patents for underlying technical solutions (although AI-generated inventions again face the inventor dilemma – analogous to DABUS, a human inventor must be named who significantly directed the use of the AI).
In summary: no copyright protection without human creativity. AI-generated code therefore often falls into a protection gap. A startup must therefore check exactly what share human developers have in an AI-generated code and document these shares in order to be able to argue for protection in the event of a dispute. Otherwise, it risks its core software product being legally regarded as “freely available” – which significantly impairs investment and innovation protection strategies.
Effects on legal due diligence for venture investments
For investors and their legal advisors conducting legal due diligence (LDD), a startup’s handling of AI-generated code is a growing concern. In financing rounds and tech M&A, the IP position and liability risks of the target company are carefully examined. If a startup relies heavily on automated code generation (VibeCoding), this poses particular difficulties:
a) Lack of IP protection and impairment of value: As explained above, AI-generated code often cannot be protected as intellectual property. For investors in a software startup, however, the uniqueness and legal security of the code is an important value factor. If the startup has no patentable inventions and no copyrighted code (because much of the code comes from the AI), it lacks essential protection mechanisms against competitors. Any competitor could reuse the disclosed code without paying a license fee. This is identified as a weakness during due diligence – the IP portfolio looks thin. Investors typically ask specifically: “What copyrighted software components or patents does the startup own? Have all rights to the software been clarified?” If the answer is that the software is largely AI-generated and therefore effectively in the public domain, alarm bells start ringing. The startup then primarily owns know-how, brand or customer access, but no exclusive code. From an investor’s perspective, this can depress the valuation, as future competitive advantages are uncertain. In addition, investors will insist on comprehensive warranties in the investment agreement that no third party can assert rights to the code – which is difficult to warrant if there are in fact no rights of one’s own.
b) Unclear copyright and license chains: A due diligence team will check exactly who is considered the author of the software and whether all rights have been effectively transferred. In traditional start-ups, there are usually developers (employees or freelancers) who assign their copyrights to the client by contract (Section 69b UrhG for programs). In the case of AI-generated code, the question is: to whom should rights be assigned if the AI has no copyright status? Providers of AI coding tools often state in their terms of use that the user receives the rights to the output. However, such clauses have more of a declaratory legal effect – they cannot create copyright where there is none. At best, they act as a contractual promise that the tool provider will not make any claims of its own and will allow the user to use it. In due diligence, these terms of use would have to be examined to ensure that no catch is hidden (e.g. that the platform operator retains certain rights of use after all). Investors also check whether all human contributors (e.g. prompt engineers or employees who have curated AI results) have employment or service contracts with IP clauses to ensure that the company holds any rights that may arise. A risk would be, for example, if a freelance consultant created prompts, the AI generated code from them and this consultant later claims co-authorship because their contribution was creative. Due to the uncertainty, investors will demand that the startup has legally bound all parties involved (via IP assignment and confidentiality agreements).
c) Risks from open source and third-party code: A particularly sensitive point in DD is open source compliance and possible infringements of third-party rights. In the case of traditionally developed software, one examines which open source components were used and whether licenses (GPL, MIT, etc.) were complied with. This is more difficult with AI-generated code, as the startup may not even know if and which third-party code snippets have been incorporated. As Chan-jo Jun (IT lawyer) points out, AI code is often very similar to the training material, and depending on the license of the original material, this may mean that the new code is also subject to licensing. In due diligence, experienced auditors will therefore ask: “Do you use AI for code? If so, what measures have you taken to exclude licensing risks?” They may require that the code has been audited, for example by tools that compare source code with known repositories (Black Duck, Fossology, etc.). Chan-jo Jun expressly recommends that buyers check AI-generated code line by line for matches with existing software. If identical or very similar sections are found, it must be clarified whether these are harmless (e.g. trivial lines of code that do not enjoy protection) or whether there is a risk of license infringement. The result of such an examination can be a deal breaker: If it turns out that central parts of the code should actually be under GPL or belong to proprietary third parties, the investor either demands cleanup (re-implementation of the relevant parts without AI) or assesses the legal risk as too high to invest. Even in the early phase, a VC can have the term sheet guarantee that no significant part of the technology is based on problematic AI outputs – otherwise there is a risk of claims for damages against the founders under the contract.
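The line-by-line match check recommended above can be approximated with standard similarity tooling. The following is only a minimal sketch using Python's `difflib`; the snippet names and corpus are invented for illustration, and a real audit would compare against full repositories with dedicated scanners such as those mentioned above:

```python
# Sketch of an automated match check between AI output and known
# third-party code. The corpus entries below are invented examples.
import difflib

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] describing how similar two code snippets are."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def flag_matches(generated: str, corpus: dict[str, str],
                 threshold: float = 0.9) -> list[str]:
    """Names of known snippets the generated code closely resembles."""
    return [name for name, known in corpus.items()
            if similarity(generated, known) >= threshold]

# Hypothetical corpus of license-restricted reference snippets
known = {
    "gpl_lib/quicksort": "def qsort(xs):\n    return sorted(xs)",
}
ai_output = "def qsort(xs):\n    return sorted(xs)"
print(flag_matches(ai_output, known))  # the identical snippet is flagged
```

A flagged match is only a starting point: as the article notes, it must then be assessed legally whether the overlap concerns protectable code at all or merely trivial, unprotected lines.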
d) Reputational and liability risks: Investors also consider the liability risk and the associated financial exposure. If a startup sells software that was developed with AI support, the due diligence will also check whether there have already been or are likely to be liability cases. For example: Have there been customer complaints or cases of damage in connection with a software error? Are there contractual limitations of liability and are they effective? A startup that has negligently delivered unsafe AI code could be confronted with latent liability lawsuits – this deters investors. They will want to know if the startup has taken out insurance (e.g. product liability, tech E&O) to cover such risks. The future regulatory landscape also plays a role: an investor who finances a startup in 2025 must anticipate which compliance obligations and liability rules will apply in the coming years. The EU AI Regulation, for example, will have to be complied with from 2026, which may mean registration and documentation obligations for a company specializing in AI code. If the startup were ignorant here, it would be a red flag in due diligence. The same applies to the upcoming product liability reform: investors will price in the fact that claims could become more expensive once the new rules come into force because software manufacturers will then also be liable even if they are not at fault.
e) Recommendations and measures: From the investor’s point of view, the following findings and conditions typically arise in a due diligence where software development is highly automated:
- Transparency regarding the use of AI: The startup should disclose the extent to which AI was used and for which components. DD consultants often request a list of all AI tools and an estimate of the proportion of code that originates from them. This is the only way to estimate the scope of potentially unprotected code.
- IP policy and controls: Does the startup have internal guidelines on how to handle AI outputs? (For example: “No untested AI code may be transferred directly into the production system.”) A responsible startup implements quality controls to minimize risks. These include code reviews that also cover AI-generated code, the use of plagiarism scanners and documentation of all AI deployments (traceability). Such policies are noted positively in the legal due diligence (LDD), as they show that management has recognized the issue.
- Legal opinion on the copyright situation: If necessary, the startup obtains a legal opinion confirming its assessment of the intellectual property situation. Although the uncertainty remains, an investor would at least prefer to hear that it has been checked whether there is a sufficient human contribution to critical code. If not, alternative protection must be found: many startups then rely more heavily on secrecy (trade secrets). The due diligence process then checks, for example, whether the source code is kept non-public and whether protective measures have been taken in accordance with the German Trade Secrets Act (GeschGehG). Where copyright law does not apply, trade secret protection can at least prevent third parties from gaining unauthorized access to the code – provided that the startup protects it accordingly (access controls, NDAs with partners, etc.).
- Insurance and indemnification: Investors will demand that the startup has insurance cover in the event of IP disputes or product liability cases or that the founders are personally liable to a certain extent if they have concealed risks. Warranties are common in investment agreements, e.g. that all proprietary code is free from third-party claims. If founders know that they have used AI code, they must formulate this warranty very carefully – in the worst case, they must specify exceptions (disclosure schedules), which in turn discloses the problem to the investor.
- Future IP strategy: A startup that has so far relied on AI code should be able to explain during due diligence how it intends to strengthen its IP in the future. This could be: targeted in-house developments of particularly critical components, patent strategies for AI-developed inventions (with naming of human inventors to ensure patentability), or exclusive training data that others do not have. Investors want to see that the startup has a plan to create unique assets despite automation.
f) Specific complications for startups with a strong focus on AI: If a startup has created complex software in a short time with few staff on the basis of AI generation, this may be impressive from a business point of view – but in legal due diligence it invites skepticism. Typical risks are:
- Code quality and maintainability: Although not primarily a legal issue, tech due diligence and legal DD are intertwined here. AI-generated code can be difficult to maintain or understand, especially if no developer has fully worked through it. This can lead to delays in rectifying defects – which in turn becomes legally relevant if, for example, contractual service levels or warranty periods cannot be met.
- Dependence on third-party providers: If the startup uses third-party AI APIs (e.g. from OpenAI, Google), then there is a contractual dependency. The due diligence checks: Does the startup have stable license terms with these providers? What happens if the service is discontinued or the conditions change (prices, rights to use the output)? These questions go beyond classic IP, but concern operational risks that are taken into account in the investment.
- Regulatory environment: In the case of highly innovative AI start-ups, investors also look at future regulatory costs. For example: Does the product fall within the scope of the AI Regulation (possibly as generative AI, possibly with obligations for registration or conformity assessment)? Will the company be subject to certification obligations? For example, the due diligence may state: “The company must implement an AI compliance system within 2 years, costs approx. XYZ.” Such aspects are then taken into account in the valuation or recorded as conditions subsequent in the investment agreement.
Conclusion on due diligence: For founders, this means that heavy use of AI in software development accelerates development, but creates additional work and uncertainty later on in the investment process. Many of the advantages (lots of code generated quickly) must then be weighed against the disadvantages (a less clear IP situation, potential legal risks). From an investor’s point of view, a startup is most attractive if it exploits the efficiency of AI while also having proactively done its legal homework: i.e. introduced compliance guidelines, thought through the IP issues, adapted its contracts (general terms and conditions, license agreements) accordingly and made risk provisions. Such a company can demonstrate in the DD process that it is “investable” despite VibeCoding. In contrast, a startup that naively assumes “the AI code is ours and everything will be fine” is likely to encounter considerable difficulties during the audit. In the worst case, unresolved IP relationships or pending liability risks could lead to an investor walking away or only investing on significantly less favorable terms.
Conclusion
The use of AI and no-code platforms in tech start-ups is currently moving faster than the legal framework. Liability questions still have to be answered on the basis of general principles – anyone who uses AI tools is liable for their output as well as for their own actions. Neither start-ups nor platform operators can hide behind the “responsibility” of a machine. Under civil law, the established standards (Section 823 BGB, product liability, contractual obligations) apply, but these are flexible enough to be applied to AI constellations. The upcoming EU rules – above all the AI Regulation (EU) 2024/1689 and the new Product Liability Directive (EU) 2024/2853 – will further specify the framework without turning it on its head. AI systems will be regulated and software products will be explicitly included in manufacturer liability, which tightens rather than reduces responsibilities. The AI Liability Directive, originally planned but since withdrawn, shows that the legislator considered easing the burden of proof for injured parties but has left this to future development for the time being.
In terms of copyright, VibeCoding reveals a protection gap: as long as there is no human creative contribution, AI code remains unprotected. For start-ups, this means that their “USP” is more difficult to protect legally. Innovative solutions could be considered in the future – such as new categories of intellectual property rights or adjustments to copyright law – but these would be controversial (currently, protection is deliberately tied to human creativity). There are different approaches internationally (for example, British copyright law recognizes so-called computer-generated works with a short term of protection, whereas the US Copyright Office rejects AI works), but in Germany and the EU the line is likely to remain the same for the time being: no protection without a human author.
For legal due diligence when investing in AI-heavy start-ups, this means that diligence and transparency are essential. Ideally, startups should identify and address their legal risks before a financing round. This includes identifying AI-generated code components, reviewing them and – where necessary – replacing or securing them. They should also draft their contracts (with customers, suppliers, platforms) in such a way that the use of AI is regulated (e.g. disclaimers, where permissible, and information obligations towards customers if AI has been used). When dealing with no-code platforms, it is advisable to pay attention to contractual assurances from providers (e.g. that their tool does not infringe third-party rights and reflects the state of the art). Some AI platforms now offer liability assumptions or guarantees in order to appear more trustworthy – these should be used where possible to reduce the startup’s own risks.
Overall, liability and IP in vibe coding are complex but manageable if traditional legal principles are carefully applied to the new technology. The message for founders and developers is to involve legal expertise proactively instead of having to react in the event of a dispute. Investors, on the other hand, need to develop new valuation criteria to assess the value of a company whose products are technically innovative but may lack a traditional IP portfolio. Value may be shifting from static IP to the dynamic ability to develop quickly with AI – but as long as legal systems strongly link competitive advantage to legal exclusivity, a lack of IP protection will remain a hard factor.
Finally, it should be emphasized that technological development is rapid: startups should develop their compliance and contracts just as quickly. The planned EU regulations are expected to come into effect in 2026 and beyond – smart entrepreneurs will use the transition period to set up their AI-supported processes in such a way that they are compliant and secure by then. Then nothing will stand in the way of the full potential of VibeCoding – without any nasty legal surprises.