ITMediaLaw - Rechtsanwalt Marian Härtel

New EU Product Liability Directive 2023: Extended liability for software, AI and digital products

9. May 2025
in EU law
Key Facts
  • Revision of product liability: After 40 years, the EU is reforming product liability directives to better cover digital products.
  • Extended product definition: Software and AI systems count as products, including cloud services, with a few exceptions.
  • New liability subjects: Economic actors in the supply chain are now liable, including imported and modified products.
  • Extended damage concept: Includes psychological damage and data loss; deductibles do not apply.
  • Easing the burden of proof: Legal presumptions of defect and causation, and a partial reversal of the burden of proof, support injured parties.
  • Extended limitation periods: 10-25 years for product liability claims; period begins anew in the event of significant product changes.
  • Liability protection: Startups should implement insurance and legal protection measures to minimize risk.

After almost 40 years, the European Union has fundamentally revised its product liability regulations. The previous Directive 85/374/EEC from 1985 no longer reflected the technical developments of recent decades. The new EU Product Liability Directive of 2023 (formally 2024/2853) takes account of digitalization and explicitly extends strict liability (product liability) to software, AI systems and digital services in the product context. It must be transposed into national law by December 2026 at the latest. These changes are hugely important for tech start-ups – especially providers of SaaS, AI applications, apps, plugins and other digital tools. The following article highlights the most important changes that will apply in the future, compares them with the current legal situation and discusses the developments under discussion regarding a separate AI Liability Directive. In addition, practical examples, potential liability risks and protective measures (e.g. through general terms and conditions, quality processes, insurance) as well as effects on the drafting of contracts are presented.

Contents
1. Background: Current EU product liability and need for reform
2. Overview of the main new features of the Product Liability Directive 2023
3. Extended area of application: software and AI systems as products
4. New liability subjects: The concept of producer and responsible parties in the supply chain
5. The concept of error in the digital age: updates, AI learning and cybersecurity
6. Easier enforcement of claims: facilitation of evidence and disclosure obligations
7. Extension of compensable damages and limitation period
8. State of play: Separate AI Liability Directive and further developments
9. Practical consequences for AI start-ups, SaaS providers and digital products
9.1. Liability risks for AI start-ups and software providers
9.2. Preventive protective measures: General terms and conditions, quality assurance and insurance
9.3. Significance for contract and business model design
10. Conclusion

Background: Current EU product liability and need for reform

European product liability was previously based on Directive 85/374/EEC of 1985, which was implemented in Germany in the Product Liability Act (ProdHaftG). This legislation establishes strict liability on the part of the manufacturer if a defective product causes personal injury or certain property damage. Crucially, only physical (movable) products were covered. According to the traditional understanding, pure software or digital services were not covered by the product concept unless they were embodied in physical data carriers or were part of a physical product. For example, a CD with software was considered a product, but not a program provided via download.

In practice, this restriction has led to regulatory gaps. Modern products are often hybrid in nature – such as smart devices with integrated software or AI-supported services in the cloud. If, for example, a purely cloud-based SaaS tool failed and caused damage, the victim could not previously rely on product liability, but had to resort to tort claims (fault of the provider) or contractual liability. In addition, under the old directive, only property damage to privately used objects and bodily injury were eligible for compensation, with a deductible of €500 applying to property damage. Pure financial losses or loss of data were excluded. In view of the digital transformation and the increasing importance of AI, the EU recognized a clear need for reform in order to adequately protect consumers and users of modern technologies and at the same time create a uniform legal framework for manufacturers and developers.

Overview of the main new features of the Product Liability Directive 2023

The new version of the EU Product Liability Directive introduces a number of important changes that extend the liability regime in line with technical developments and make it more claimant-friendly. The most important changes are:

  • Expanded product definition: In future, software and AI systems will also be considered products, regardless of whether they are installed locally or provided as a cloud service. This means that not only physical goods but also digital products are subject to product liability. An exception applies to non-commercial open source software that is offered free of charge.
  • Liability of new actors in the supply chain: In addition to manufacturers, other economic actors such as importers, authorized representatives, fulfillment service providers (e.g. warehouse keepers, shipping) and, under certain conditions, even retailers or online marketplaces are now liable. Anyone who subsequently makes significant changes to a product (“remanufacturer”) is also considered a manufacturer.
  • Liability for software updates and cybersecurity: The concept of defect has been adapted to digital products. A product is also considered defective if it does not receive the necessary security updates or has inadequate IT security. Manufacturers must guarantee security throughout the entire product life cycle – even after the product has been placed on the market.
  • Expanded definition of damage: In addition to personal injury and property damage, compensation will in future expressly include medically recognized mental health damage and the destruction or loss of data (provided the data was not used for professional purposes). The previous deductible of €500 for property damage and any maximum liability limits no longer apply, meaning that manufacturers are liable for the entire damage.
  • Easier evidence for injured parties: The new directive makes it easier to enforce claims by reversing the burden of proof and providing information. In certain situations, a product defect or the causal link to the damage is legally presumed and manufacturers can be obliged to provide relevant information. This is intended to simplify the handling of complex AI systems in particular, where it could otherwise be difficult for those affected to prove a defect.
  • Extended limitation periods: The “long stop” period for product liability claims has been significantly extended in the case of hard-to-detect damage – from 10 to 25 years in cases of latent personal injury. In addition, the period begins anew for products that have been significantly changed by updates or modifications. This significantly increases the liability period for long-lasting products and those with regular updates.

These points are explained in more detail below and analyzed with regard to software, AI and start-ups.

Extended area of application: software and AI systems as products

One of the most groundbreaking innovations is the explicit inclusion of software in the scope of product liability. For the first time, not only “movable objects” are covered, but also digital products on an equal footing. The directive now defines products in such a way that operating systems, firmware, computer programs, mobile apps and AI systems are included – regardless of whether they are stored on a device or provided via the cloud.

This means that a cloud-based SaaS service or an AI-supported app is legally equivalent to a physical product if it is provided commercially. Example: A startup develops a medical diagnostic app (pure software) that is made available to users via a cloud platform. This app is now considered a product. If an error in the AI logic leads to an incorrect diagnosis and a user suffers damage to their health as a result, the provider can be liable for a defective product in the same way as a manufacturer.

Only software that is provided free of charge outside of a business activity, in particular open source projects, is excluded from the definition of a product. This is intended to prevent volunteer developers or the open source community from being exposed to incalculable liability risks. Note, however: if a startup integrates open source code into a commercial product, the startup is liable as the manufacturer for the entire product – including for errors in the open source component. The liability privilege only applies to the original free provider, not to the startup that uses the software commercially.

The digitalization of product liability also includes so-called digital production files. These are, for example, CAD files or 3D print files that are used to manufacture a product. In future, the provider of a faulty 3D print template would also be liable if the printed object becomes defective and causes damage.

For AI start-ups and software providers, the expanded definition of a product means that they are directly subject to product liability law for the first time. Previously, it was often assumed that purely digital services were relevant only under contract law or general tort principles – this is changing fundamentally. SaaS solutions, AI algorithms or plugins can be regarded as products, with all the consequences of strict manufacturer liability.

New liability subjects: The concept of producer and responsible parties in the supply chain

Traditionally, product liability is primarily directed against the manufacturer of the end product. However, the new directive significantly expands the circle of potentially liable parties to include all key economic actors in order to effectively protect victims of products that cause damage, even if the original producer cannot be found.

Manufacturer: The producer of the product is still the primary liable party. The manufacturer is the person who produces the end product, a basic material or a partial product (Section 3 (1) ProdHaftG as amended). In the case of software, this would be the developer or the company that creates the software or affixes its name to it. What is new is that the person who significantly modifies or overhauls a product that has already been placed on the market is also deemed to be the manufacturer. This clause is aimed at refurbishing companies and upgraders, for example: anyone who refurbishes and resells used devices, or an AI startup that significantly modifies and offers an existing algorithm, assumes manufacturer liability for the “new” product.

Importers and authorized representatives: If the manufacturer is not based in the EU, the importer and any designated authorized representative move into the liability position. A SaaS or AI provider outside the EU that distributes its product in the EU must therefore have an EU representative or importer – otherwise, for example, an EU importer or even the online marketplace through which the product is distributed may be liable as a quasi-manufacturer. This extension closes a loophole: consumers should not be left unprotected if the actual manufacturer is based in a third country and is difficult to pursue.

Fulfillment service providers and retailers: Operators of fulfillment services (such as warehousing, packaging, shipping) can now also be liable if no other responsible party based in the EU is available. Under strict conditions, even distributors and online marketplace operators are included. For example, an online marketplace such as Amazon can become liable if an AI product sold on its platform causes damage and neither the manufacturer nor an importer in the EU can be identified. However, retailers can avoid liability by naming the manufacturer, importer or supplier to the injured party within one month. This regulation motivates retailers to properly document their sources of supply and product information.

This has two implications for AI start-ups and SaaS providers: First, if they are based outside the EU, they must ensure that a reliable importer or authorized representative is in place – otherwise their European distribution partner or platform operator, for example, could be held liable and take recourse against them. Second, startups that act as suppliers of AI components should know that their contractual partners (such as an OEM) can take internal recourse against them if their sub-product is defective. This makes it even more important to draft clear contracts on the allocation of liability in the supply chain (e.g. indemnification agreements) (more on this below).

Example: A startup develops an AI module that is integrated into an autonomous vehicle system from a major manufacturer. If accidents occur later because the module was faulty, the car manufacturer is initially liable to the injured parties as the distributor of the overall product. However, the car manufacturer can demand recourse from the AI startup. In addition, the startup itself would be liable as the manufacturer of its module if it was placed on the market in a directly identifiable manner or the vehicle company cannot be identified. This example shows how important it is to agree an indemnification clause or limitation of liability in B2B contracts so that young companies do not bear the full burden alone in the event of an emergency.

The concept of error in the digital age: updates, AI learning and cybersecurity

The definition of when a product is considered “defective” has been adapted to modern technologies. In principle, a product is defective if it does not offer the safety that can reasonably be expected (Art. 6 of the Directive). New criteria have been added that are particularly relevant for connected and AI-based products:

  • Interoperability and combination risk: A product can now be classified as defective if it is not safe when interacting with other products. Legitimate safety expectations therefore also extend to interaction – e.g. if software triggers dangerous conflicts in a standard system environment. Example: A smart home control app interferes with the alarm system and deactivates it unintentionally – this lack of compatibility could be considered a defect, as users can expect common combinations of applications to be safe.
  • Cybersecurity deficiencies: If a product lacks sufficient cybersecurity, it will also be defective in future. Manufacturers must therefore take appropriate technical and organizational measures to protect their product from unauthorized access or manipulation. An IoT device or AI software that is easily hackable and thus causes damage (e.g. a hacked care robot that injures people) would be considered defective. This innovation emphasizes the importance of security by design and security by default. Start-ups should pay attention to high IT security standards early on in the development process, also because regulations such as the NIS2 directive demand a higher level of cybersecurity across the industry.
  • Dynamic and learning systems: Products with the ability to learn or acquire new functions after being placed on the market (typical for AI systems) pose particular challenges. The directive takes this into account insofar as it is not only based on the condition when the product is placed on the market. A product defect can also only manifest itself later as a result of an update or machine learning effect. Accordingly, it is clarified that the relevant point in time for the assessment of freedom from defects is not only the time of sale. If an originally safe AI system becomes dangerous as a result of a manufacturer update, the manufacturer is liable as if the product had been defective from the outset.
  • Omitted updates: Equally relevant is the reverse case – when necessary security updates are not provided. Art. 10 para. 2 c of the Directive stipulates that a product is also deemed defective if it becomes unsafe due to a lack of updates/upgrades. Manufacturers therefore have ongoing product monitoring and update obligations. A practical example is a known security vulnerability in a SaaS platform: if a security-critical patch is missing and damage occurs as a result (e.g. data theft or system failure with consequential damage), the company can be held liable.

In summary, the extended concept of defects requires developers and manufacturers to proactively ensure the ongoing safety of their software and AI products. Quality management must not be limited to the delivery of a “finished” product, but must continuously keep an eye on potential risks. For AI start-ups, this means establishing structures for software maintenance, security updates and monitoring at an early stage. It is advisable to set up clear processes for reporting and rectifying vulnerabilities (e.g. responsible disclosure policies). It is also important to document software changes and learning processes of an AI in order to be able to demonstrate how the system has changed and that you have reacted appropriately in the event of liability.

Easier enforcement of claims: facilitation of evidence and disclosure obligations

Because it can be difficult to provide evidence for complex products – especially in the case of opaque AI algorithms – the new directive introduces several mechanisms to make the process easier for injured parties. These changes strengthen the position of claimants and require manufacturers to weigh up the risks even more carefully.

Duty to produce evidence in court: At the request of the injured party, a court can order the defendant to disclose relevant evidence if the plaintiff has presented plausible evidence of a product defect and damage. In many EU countries, there was previously no US-style discovery obligation – companies could rely on trade secrets. Now, however, manufacturers must expect to have to disclose internal documents, test reports, log files or even source code if these are necessary to clarify the defect. Although courts are supposed to take the protection of trade secrets into account, the fundamental obligation to cooperate in proceedings is a clear shift in favor of injured parties. For AI start-ups, this could mean having to disclose the decision parameters of an ML model in an emergency – a potential conflict between transparency and IP protection. It is therefore worth considering in advance how much documentation can be released in the event of a claim and whether certain secrets can be protected, for example by confidentiality orders.

Reversal of the burden of proof through presumptions: The new directive defines several situations in which a product defect and/or causal link is legally presumed until the manufacturer proves otherwise. These rebuttable presumptions apply in particular if:

  • Non-compliance with disclosure orders: The manufacturer does not comply with a judicial disclosure obligation. If the provider refuses or delays the requested information, it is presumed that the product is defective and that the defect caused the damage. Such behavior therefore backfires in court – a strong incentive to cooperate.
  • Breach of safety regulations: If the plaintiff can prove that the product does not meet legal safety requirements (e.g. relevant CE standards, product safety regulations or the AI Act requirements for AI systems), a defect is presumed. An official product recall or a warning from the supervisory authorities due to safety deficiencies is not automatically an acknowledgement of a defect, but it is strong evidence of one. Startups should therefore strictly comply with all applicable standards – violations can have fatal consequences in litigation.
  • Obvious malfunction: If damage occurs due to an obvious malfunction of the product during normal use, this should also give rise to the presumption of a defect. Example: An AI-controlled robotic lawnmower runs over the user’s foot unpredictably – such a “dropout” during normal operation suggests a product defect, even if the exact programming error is not immediately proven.
  • Excessive difficulty of proof: Particularly relevant for high-tech products is the clause that lowers the standard of proof for technically or scientifically highly complex products. If it is excessively difficult for the plaintiff to prove the defect or causality, it is sufficient to make a plausible case that the product was probably defective and that it more likely than not caused the damage. Defect and causality are then presumed. This situation is likely to arise typically with AI systems as a “black box” or with innovative products whose mode of action even experts can hardly understand (the recitals mention novel medical devices, for example). In practice, this means that a patient harmed by an AI-supported diagnosis or therapy does not have to explain in detail how the error in the algorithm occurred – it is sufficient to show that the AI probably worked incorrectly and caused harm as a result. The burden of proving the product was not defective would then lie with the provider.

These changes are likely to significantly improve the litigation prospects for consumers. Manufacturers, on the other hand, face a stricter regime in which opacity and complexity no longer work in their favor but, in case of doubt, against them. For AI start-ups, this means paying attention to traceability right from the start. “Black box” models without explainability increase litigation risk. It can make sense to provide at least internal mechanisms for explaining AI decisions (explainable AI) in order to be able to offer counter-evidence in the event of a dispute. Thorough logging of development and test steps can also help to demonstrate in court that the company has worked to the state of the art in science and technology (keyword: development risk, see below).
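
The logging idea above can be sketched as a minimal audit trail for individual AI decisions. The function name, field names and hashing scheme below are illustrative assumptions on my part, not requirements of the Directive:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, log):
    """Append a tamper-evident record of one model decision (illustrative sketch)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the record can later corroborate what the
        # system saw, without storing personal data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    log.append(record)
    return record

audit_log = []
rec = log_decision(
    "diagnosis-model v1.4",            # hypothetical model identifier
    {"age": 54, "symptom": "fatigue"}, # hypothetical input features
    "refer to specialist",
    audit_log,
)
```

Persisting such records alongside versioned model artifacts makes it easier to show, years later, which model version produced a given output and that known issues were addressed.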

Relationship to German law: It is interesting to note that some of these presumptions conflict with German civil procedure law. Under German law, the burden of proving that a product was defective and that the defect caused the damage has traditionally rested with the plaintiff. The new directive forces adjustments here – in future, national courts will have to apply the aforementioned presumption rules. For companies, this means more uniformity across the EU, combined with a high level of protection for injured parties.

Extension of compensable damages and limitation period

The new directive also modernizes the rules on types of damage and liability periods. The main changes here are as follows:

Mental and data damage: For the first time, damage to health of a psychological nature is also explicitly included, provided it is medically recognized (e.g. a diagnosed trauma). This is relevant because, for example, accidents or dangerous malfunctions of robots can have not only physical but also psychological effects. In addition, damage to or loss of data is now recognized as compensable damage, at least in relation to consumers. The prerequisite is that the data was not used for professional purposes – i.e. it is personal or private data. Example: A cloud backup service (SaaS) irretrievably deletes a user’s private photos due to a software error. Previously, the user could not claim product liability for this, as there was no material damage to a physical object. In future, data loss will fall under the concept of damage and the provider will be liable to the private user for compensation (e.g. for the costs of data recovery or, if applicable, immaterial damage). Although this special rule does not apply to business data loss (e.g. customer data of a company), contractual claims or tortious claims remain possible under national law in such cases.

Abolition of deductibles and liability limits: The old directive allowed member states to provide for a deductible of up to 500 euros for property damage and to introduce certain maximum liability amounts for serial cases. These restrictions have now been abolished. As a result, there is no longer a “liability gap” for property damage below 500 euros. For start-ups, this means that even minor damage (e.g. minor damage to a device worth €100 caused by software) can be subject to compensation – such “trifles” were previously excluded from product liability. In addition, in the event of major damage (e.g. a widespread product defect with many injured parties), there is a risk of potentially unlimited liability sums. Mass damage caused by a software bug – such as a widespread smart home device that causes an expensive defect for all users – could therefore threaten the company’s existence. Only appropriate insurance (see below) provides financial protection here.

Limitation periods (maximum liability period): In addition to the normal limitation period (in Germany 3 years from knowledge of the damage and defect), there is an absolute limitation period in product liability. Under previous law, claims expired at the latest 10 years after the product was placed on the market, even if the damage was only discovered later. The new regulation modifies this in two ways:

  • Restart in the event of product changes: If a product is significantly changed by the manufacturer or provided with a safety-relevant update, the 10-year period starts again. Every major update or upgrade therefore “extends” the potential liability period from this point onwards. This is extremely relevant for software products with regular updates: The clock is reset with every security-relevant version jump.
  • Extension to up to 15 or 25 years: In cases where the damage remains latent and only becomes apparent much later, the period is extended to 15 years (property damage) or 25 years in the case of personal injury. This applies, for example, to damage to health caused by long-term effects that are only recognized after many years (typical in the case of pharmaceuticals or implants, but also conceivable in the case of cumulative effects caused by an AI medical device). This is intended to ensure that victims are still entitled to claims even if the damage occurs very late.
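
As a rough illustration of the restart-and-extension rules above, the long-stop expiry could be computed as follows. This is a simplified sketch, not legal advice: it covers only the 10-year default and the 25-year latent-personal-injury period (the 15-year latent property-damage period and national transposition details are omitted), and all names are assumptions:

```python
from datetime import date

def long_stop_expiry(placed_on_market, significant_updates=(), latent_personal_injury=False):
    # The clock restarts at the most recent significant modification, if any.
    start = max((placed_on_market, *significant_updates))
    # 10-year default long stop; 25 years for latent personal injury.
    years = 25 if latent_personal_injury else 10
    return start.replace(year=start.year + years)

# A product placed on the market in 2027 and significantly updated in 2031:
# the 10-year period runs from the 2031 update, not from 2027.
expiry = long_stop_expiry(date(2027, 3, 1), [date(2031, 9, 15)])
```

For software with regular security-relevant releases, this shows why the effective liability window can extend far beyond ten years from the original launch.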

Other basic principles remain unchanged: for example, the injured party's burden of proof for the extent of the damage, and the prohibition on contractually excluding or limiting product liability towards consumers (this is mandatory law). The development risk defense – i.e. the possibility for the manufacturer to exonerate itself if the defect was not yet recognizable according to the state of the art in science and technology when the product was placed on the market – also remains in principle. However, member states will be allowed to exclude this exonerating evidence in future. This means that countries such as Germany could decide that a manufacturer is also liable for unknown risks. In Germany, this “state of the art” objection was previously permissible (Section 1 (2) No. 5 ProdHaftG); it remains to be seen whether the national legislator will retain it. Excluding the development risk defense would be particularly tricky for AI start-ups, as AI technologies by their very nature break new ground. This makes it all the more important to constantly monitor scientific progress and quickly incorporate new findings into improvements in order to avoid entering the realm of unknown risks in the first place.

State of play: Separate AI Liability Directive and further developments

Parallel to the revision of the Product Liability Directive, a special AI Liability Directive was discussed in the EU. In September 2022, the European Commission presented a proposal for an “AI Liability Directive”, which was intended to create a civil liability regime for AI alongside product liability. The aim of this separate set of rules was to enable victims of AI-related damage to enforce their rights even where product liability does not apply – for example because there is no product in the narrow sense, or because the damage consists of pure financial losses or violations of fundamental rights caused by AI. In particular, the plan was to ease the burden of proof in fault-based liability cases (e.g. through presumptions of causality) and to give injured parties the possibility of demanding information from operators of high-risk AI systems.

In practice, however, the AI Liability Directive met with political resistance and ran into problems of overlap with the already adopted product liability reform. Many saw little need for separate AI liability rules once software and AI were included in product liability. After the dossier had stagnated for some time, the EU Commission decided in February 2025 to withdraw the proposal for the AI Liability Directive. The background was a lack of consensus and the desire for regulatory simplification in the digital sector. Some EU parliamentarians criticized the withdrawal and warned of a “liability Wild West” for AI, while others welcomed it.

There is therefore currently (as of May 2025) no prospect of a stand-alone AI liability directive. The EU is likely to focus initially on implementing the new Product Liability Directive and the AI Regulation (AI Act) adopted in parallel. The latter, however, is primarily of a public-law nature (product approval, conformity, CE marking, supervision) and does not regulate civil law claims. The Commission could nevertheless make a new attempt in future – for example, by proposing a broader software liability directive to close gaps outside of product liability. Startups should follow this development closely.

It is important to note that liability for AI systems already exists via the general legal instruments – product liability, tort law, contractual liability – even without a special directive. The new Product Liability Directive now covers a large proportion of typical AI risks, namely those associated with a product defect. For damage that does not fall under the narrow scope of product liability (e.g. pure financial losses caused by incorrect AI decisions without property damage or personal injury), the general liability rules of the member states continue to apply. In Germany, this would be §§ 823 ff. BGB (tortious liability in the event of fault) or, where applicable, contractual claims for damages. These, however, generally require proof of fault – precisely what the withdrawn AI Liability Directive would have eased. Despite the lack of special legislation, companies in the AI sector should therefore be prepared to be held liable for errors in algorithms or data output – unless this is contractually excluded with legal certainty, which is hardly possible in the consumer sector.

Practical consequences for AI start-ups, SaaS providers and digital products

The reforms presented are far-reaching and abstract. For start-ups in the AI and software sector, the questions are concrete: What specific liability risks arise? How can you protect yourself against them? And what do the new rules mean for contracts and business models? These points are highlighted below – supplemented by hypothetical examples to make the effects tangible.

Liability risks for AI start-ups and software providers

Strict liability regardless of fault: With the extension of product liability to software and AI, tech start-ups now face no-fault liability. The risk is that even an unintentional software error can lead to significant liability claims without the injured party having to prove any wrongdoing on the startup’s part. Experience shows that young companies in particular, which develop in an innovative and agile manner, often have to deal with bugs or unforeseen behavior of their systems. Any such error – if it affects safety – can now constitute a product defect and trigger claims in the event of damage.

Example – personal injury caused by AI software: A medical startup offers an AI-based symptom checker as SaaS that provides patients with diagnoses and treatment recommendations. Due to insufficient training data, the AI assesses certain serious symptoms as harmless. A patient is wrongly given the all-clear, but shortly afterwards suffers a serious health incident that could have been avoided if the assessment had been correct. This is a product defect in the software, as it does not offer the expected level of safety – users could at least expect correct warnings. The patient (or their heirs) can bring a product liability claim against the startup: it is a case of personal injury caused by a defective digital product. The startup cannot escape liability by showing that it exercised all due care; liability is independent of fault. Such scenarios, which were previously only conceivable via medical liability or, at best, tortious software liability, are now clearly assigned to product liability.

Example – Property damage due to software error: A PropTech startup sells a cloud-based building management system (SaaS) that controls heating, ventilation and security in office buildings. A bug in an update causes the system to fail: the heating runs uncontrollably at maximum level, which leads to a smouldering fire in a server room and triggers the sprinkler system. This causes considerable material damage to the building’s inventory. According to the new legal situation, the startup is liable as the manufacturer of the faulty software product to the damaged companies in the building (provided the damage is to their property and the injured parties are natural persons – e.g. freelancers or sole traders; purely legal entities may have to derive their claims from a contract). The fire damage caused to the property is property damage that is now eligible for compensation without a deductible. Even if exclusions of liability were contractually agreed, these do not apply to injured third parties.

Product liability vs. contractual liability in B2B: An important aspect for start-ups that mainly work B2B: Who can sue in the first place? The Product Liability Directive grants claims to injured natural persons – typically consumers or third parties who have been harmed by the product. So if a startup sells software to a company and only this company (a legal entity) suffers financial loss or material damage to company property, EU product liability does not formally apply. The company will then have to rely on contractual warranty or general liability (with fault). But be careful: as soon as a natural person is involved – be it a consumer or an employee who is injured – product liability can come into play. For example, an employee injured by a software error could sue the manufacturer startup directly. Similarly, if an AI product used by the company destroys data belonging to a private customer of the company, this customer could sue the startup on the grounds of product liability. For startups, this results in a fragmented risk: on the one hand, strict liability towards end users/consumers, and on the other hand, the need to continue to contractually regulate liability towards business customers.

Mass damages and collective actions: The EU also promotes class actions and collective redress (keyword: Representative Actions Directive). The new rules could give rise to collective claims in the event of widespread software errors. One example would be a popular AI fitness wristband whose firmware update has a bug that causes skin burns for thousands of users. Consumer associations could take collective action. For start-ups that scale quickly, the risk of a simultaneous failure at large scale is real – what may previously have ended in goodwill arrangements or recalls could lead to large-scale liability claims in the future.

Scope of liability: As there is no longer a cap, the amounts can be high: personal injury covers medical treatment costs, compensation for pain and suffering, loss of earnings and, in the event of death, survivors’ benefits – sums that can quickly reach millions. Material damage, including data loss, can be just as considerable (e.g. costs of data recovery, replacement of equipment). Even non-material damage such as psychological impairment could come into play (in Germany, damages for non-material harm are available in the case of personal injury, though not for pure financial loss). Startups must therefore take into account that a single serious product defect could result in claims that threaten their existence.

Preventive protective measures: General terms and conditions, quality assurance and insurance

In view of these risks, AI and software start-ups should develop strategies in good time to limit their liability and prevent damage. The following measures are recommended:

1. Contractual clauses and the drafting of general terms and conditions:

Contractual limitations of liability can reduce the financial risk, especially in B2B business. Common examples include:

  • Limitation of liability to certain amounts: Many SaaS contracts limit liability to a maximum amount (e.g. the annual fee or a fixed sum). Such caps are generally permissible between companies. In relation to consumers, however, such limitations do not apply to personal injury – there they are ineffective or prohibited by law (in Germany, for example, under Section 309 No. 7 BGB).
  • Exclusion of consequential damages: Liability for indirect damages, loss of profit, loss of data, etc. is often excluded in general terms and conditions. Under the new directive, however, loss of data could be considered directly as compensable damage, meaning that a GTC exclusion would also be ineffective vis-à-vis consumers. Nevertheless, in the B2B context, it may make sense to exclude liability for loss of profit or business interruption damages in order to increase calculability.
  • Restrictions on use and disclaimers: It is advisable to make it clear in the terms of use what the software is not intended for. For example, a note “This AI system must not be used for vital decisions” or “not medically validated”. Such notices can influence the user’s legitimate expectations. If a customer uses a product in a high-risk environment contrary to an explicit warning and damage occurs, the manufacturer could argue that the use was outside the expected scope (and therefore possibly not a defect in the sense of liability law because safety was not guaranteed for this special case). You cannot rely on this completely, but it is an additional defense. Important: The information must be clear and, if possible, known when the contract is concluded, not hidden in the small print.
  • Indemnification agreements: If the startup acts as a supplier (e.g. AI module for OEM), the business partner will usually contractually demand to be indemnified by the startup in case third parties make claims due to a product defect of the module. Such indemnity clauses should be carefully negotiated, e.g. to limit them to cases in which the error was caused exclusively by the module. Conversely, the startup can demand indemnities from its suppliers (e.g. a provider of an AI framework). The aim is that everyone in the chain is ultimately liable for the errors for which they are responsible. Contractual clauses must also take into account how defense costs are dealt with in the event of liability (who leads the process, who bears the costs).
  • Warranty vs. liability: It is also important to separate the contractual warranty (liability for defects) from product liability. The B2B warranty can be limited in terms of time and content (e.g. notification of defects within 14 days, liability for defects only through rectification/subsequent delivery). However, this only regulates the relationship with the direct contractual partner. Product liability remains unaffected. Nevertheless, a clever warranty provision can at least ensure that the B2B customer does not assert any additional contractual claims for damages due to a defect that run alongside product liability.

2. Quality and safety processes:

The best way to avoid liability cases is, of course, not to put defective products into circulation in the first place. Despite time and cost pressure, start-ups should establish rigorous QA (quality assurance), especially for safety-relevant functions. This includes:

  • Intensive test phases: Test all relevant use cases before releasing a software version, including the interaction with other systems (interoperability!). For AI models, this also includes validation for bias and misjudgments. In the event of a dispute, documentation of tests can show that testing was carried out according to the state of the art (which may be relevant for the development-risk defense).
  • Risk assessment: For AI systems, it makes sense to carry out a risk analysis in the same way as for safety-critical products. What potential damage could occur? Where are the dangers (e.g. incorrect recommendations, system failures)? Measures can be derived from this. For certain industries (medical, automotive), this is a regulatory requirement anyway.
  • Security by design: Security aspects (e.g. authentication, encryption, protection against injection attacks) must already be taken into account during development. According to the General Product Safety Act and, in future, the AI Act, a state of the art security level is expected. External penetration tests or code audits can be useful to find vulnerabilities before market launch.
  • Update management: A plan for rapid updates when security vulnerabilities are discovered is essential. The company should define internal responsibilities for responding to security incidents (incident response). Are customers actively informed? How quickly can a patch be rolled out? A DevOps approach that enables continuous updates pays off here. You should also not fail to clearly communicate the EOL (end of life) of products – if a product is no longer supported, the customer must know this, as otherwise a lack of support could be seen as a breach of the update obligation.
  • Conformity with standards: Where available, recognized standards and certifications should be used (e.g. ISO standards, IEC 61508 for functional safety, ISO/IEC 27001 for IT security, specific standards for medical device software, etc.). Compliance with such standards can later help to demonstrate that you have acted in accordance with the state of the art. In addition, violations of standards will serve as an indication of errors in the future – conversely, compliance with standards can be used as an indication of safety.
  • Pilot phases and limited rollout: Especially with AI that learns from interaction with users, it is advisable to test new systems in a controlled environment or with a small user group before rolling them out on a large scale. In this way, teething problems can be discovered without affecting thousands of people.

3. Insurance cover:

Product liability insurance should be considered at an early stage for any startup that offers a potentially liable product. While traditional hardware manufacturers have such policies anyway, it has been less common in the software sector. At best, many tech companies had professional indemnity/IT liability (which tends to cover financial losses due to poor performance). Now that software is legally equivalent to a product, insurers are increasingly offering special cover for software and AI product liability.

When choosing an insurance policy, pay attention to the following:

  • Scope of cover: Does the policy also cover personal injury and property damage caused purely by software errors? Many traditional policies contained exclusions for “pure financial losses” – make sure that data loss in particular is covered. Ideally, consequential psychological damage should also be covered if awarded by a court.
  • Sum insured: In view of the potential unlimited liability, sufficiently high sums (possibly several million euros) should be agreed, depending on the risk profile of the product. For AI in the medical or automotive sector, this is correspondingly more than for a pure office software tool.
  • Deductibles and cooperation: Insurers often require certain security standards to be met (e.g. regular updates, documentation). Failure to do so could jeopardize cover. Therefore, read the insurance conditions carefully so that you don’t go away empty-handed in the event of a claim due to breaches of obligations.
  • Recall costs: Although recall costs are not part of product liability itself, many insurers offer the option of covering them if the product has to be withdrawn from the market or patched. A recall can be very expensive, especially for physical products with software – but even a large-scale patch and incident response incurs costs.

In addition to insurance, reserves for liability cases are an issue: investors and founders should take into account that provisions may be necessary if a liability case becomes likely. In the worst-case scenario, a startup can become insolvent if the liability assets are insufficient – which is not desirable for either the founders or the injured parties. It is therefore better to invest in prevention than to pay afterwards.

Significance for contract and business model design

Business model: AI start-ups should now also consider their products in terms of liability management. One possible scenario is that business models change in order to reduce liability – e.g. focusing more on services instead of products. However, the directive also covers services if they are integrated into a product (e.g. a cloud-based service that is an essential part of the product function is considered part of the product). It is therefore not possible to completely avoid liability by offering “only services”.

Drafting contracts with customers: In the B2C sector, start-ups are well advised to use clear user agreements that oblige users to exercise at least a certain degree of care and that inform them of any remaining risks (without undermining legitimate expectations). For example, a contract can stipulate that the user must install regular updates and may not misuse the product. If the user does not comply (e.g. ignores security updates), contributory negligence could be argued in the event of a dispute. Even though consumer contracts are strictly regulated, this does not rule out imposing obligations on the user that serve security.

Drafting contracts with partners and suppliers: As mentioned above, transfers of liability and indemnification in the supply chain are key tools. Startups that purchase components (such as pre-trained AI models from third-party providers) should contractually ensure that this third-party provider is liable if the error was demonstrably in its component. Conversely, startups will have to provide assurances as suppliers. Close cooperation with insurers makes sense here: some major customers require a certain proof of insurance before they accept a startup as a supplier.

Jurisdiction and choice of law: In international transactions, it should be borne in mind that product liability rules will apply throughout the EEA (EU + Norway, Iceland, Liechtenstein). A choice of law that changes the applicable law has no effect vis-à-vis consumers (they can always claim the protection of their home country). In B2B, for example, you could try to opt for a law without strict product liability – but this is of little practical use, as every country in the EU has to implement it and a choice of law is difficult to enforce outside the EU if the damage occurs in the EU. In addition, you would expose yourself to the reputational risk of wanting to avoid liability. It is therefore advisable to make transparent liability arrangements instead of relying on legal tricks.

Documentation and contract annexes: In contracts with business customers, it can be helpful to include technical specifications, security features and instructions as part of the contract. Why? Because this makes clear what the contractually intended use is and what security measures have been taken. In legal proceedings, you can then show that the customer was informed exactly what the product is intended for and how it is operated safely. You can also contractually stipulate that the customer is responsible for the security of the operating environment (e.g. that they only use the software in supported environments, behind a firewall, etc.). Although this does not release the manufacturer from liability, it can lead to the customer being held partly responsible in the internal relationship if they act in gross violation of the specifications.

Monitoring and feedback: Finally, a modern start-up should rely on feedback loops: take customer complaints and near misses seriously, analyze them and improve the product where necessary. Service level agreements (SLAs) can be agreed with customers that also provide for responses to security incidents. This shows a proactive attitude and can be legally relevant in demonstrating that you are continuously striving for safety.

Conclusion

The new EU Product Liability Directive 2023 will bring substantial changes to the liability of manufacturers in the digital age. For AI start-ups, SaaS providers and developers of digital tools, this means greater responsibility: from 2026, their products – whether physically tangible or purely digital – will be subject to the strict standards of product liability. Software will thus become “hardware-like” in terms of liability. The inclusion of AI systems in particular ensures that innovation does not come at the expense of safety. Companies in these sectors need to prepare now by reviewing and adapting their processes and contracts.

A positive aspect from the end user’s perspective is that the legal framework covers modern types of damage (data loss, psychological damage) and reduces procedural hurdles (facilitation of evidence, right to information), which will lead to more effective enforcement of justified claims. For companies, on the other hand, the liability risk increases considerably – a single software error can lead to costly legal proceedings and high claims for damages.

AI start-ups should see this change not only as a risk, but also as an opportunity: Security and quality are becoming decisive competitive factors. Those who deliver reliable, well-secured products can gain the trust of customers. Taking liability law requirements into account during the development phase – for example through “security by design” and thorough testing – can therefore become part of the value proposition (“our AI service is certified and meets the highest security standards”). In addition, proactive risk management (e.g. early updates in the event of security vulnerabilities) can not only avoid liability, but also improve the product.

Ultimately, it is becoming apparent that the EU will establish a mandatory framework for safe AI with the combination of new product liability and AI regulation. Even if the specific AI liability directive is off the table for the time being, start-ups in the AI sector must prepare for an era in which liability and regulation go hand in hand. Those who know and implement the legal requirements can not only prevent liability disputes, but also stand out positively in the market. When drafting contracts, dealing with customers and developing technology, it is now important to master the balancing act between innovation and safety – in the spirit of “between start-up dynamics and liability”. The new EU Product Liability Directive provides the legal framework for this, which now needs to be brought to life before it comes into force in 2026.

Beliebte Beträge

ECJ confirms classification of TikTok as a “gatekeeper”

Lego brick still protected as a design patent
13. August 2024

The Chinese Bytedance Group, which operates the video portal TikTok, has failed with a lawsuit against its classification as a...

Read moreDetails

Influencer merchandise and the new EU product safety regulation

f76e6084d2f8ff77279f6149c9676597
4. July 2024

The influencer market is booming and more and more content creators are discovering the lucrative business with their own merchandise....

Read moreDetails

EU directive on the right to repair

Privacy policy
18. June 2024

On April 23, 2024, the EU Parliament adopted a groundbreaking directive to strengthen the right to repair in the European...

Read moreDetails

Advocate General at the ECJ on the admissibility of cheat software

Lego brick still protected as a design patent
14. June 2024

Advocate General at the ECJ on the admissibility of cheat software For many years, I had the opportunity to accompany...

Read moreDetails

Artificial Intelligence (AI) Act: Council gives final green light to first global rules for AI

Artificial Intelligence (AI) Act: Council gives final green light to first global rules for AI
22. May 2024

The European Council has adopted the AI Act, the world's first comprehensive set of regulations for artificial intelligence (AI). This...

Read moreDetails

Is the NetzDG permissible? ECJ with an exciting decision

Lego brick still protected as a design patent
15. November 2023

The ECJ has made an exciting decision that could also be relevant for the NetzDG, which applies to Instagram or...

Read moreDetails

What is the European Accessibility Act?

What is the European Accessibility Act?
7. November 2023

The European Accessibility Act (EAA) represents a transformative legislative initiative of the European Union that was launched with the ambitious...

Read moreDetails

EUGH: No second right of withdrawal if trial subscription turns into paid subscription!

Lego brick still protected as a design patent
12. October 2023

A consumer has a single right to cancel a subscription taken out at a distance, which is initially free and...

Read moreDetails

Geoblocking: A Turning Point for the Digital Single Market?

Lego brick still protected as a design patent
4. October 2023

Introduction Geoblocking is a complex but highly relevant issue that affects not only online stores, but also a wide range...

Read moreDetails

5.0 60 reviews

  • Avatar Lennart Korte ★★★★★ vor 2 Monaten
    Ich kann Herrn Härtel als Anwalt absolut weiterempfehlen! Sein Service ist erstklassig – schnelle Antwortzeiten, effiziente … Mehr Arbeit und dabei sehr kostengünstig, was für Startups besonders wichtig ist. Er hat für mein Startup einen Vertrag erstellt, und ich bin von seiner professionellen und zuverlässigen Arbeit überzeugt. Klare Empfehlung!
  • Avatar R.H. ★★★★★ vor 3 Monaten
    Ich kann Hr. Härtel nur empfehlen! Er hat mich bei einem Betrugsversuch einer Krypto Börse rechtlich vertreten. Ich bin sehr … Mehr zufrieden mit seiner engagierten Arbeit gewesen. Ich wurde von Anfang an kompetent, fair und absolut transparent beraten. Trotz eines zähen Verfahrens und einer großen Börse als Gegner, habe ich mich immer sicher und zuversichtlich gefühlt. Auch die Schnelligkeit und die sehr gute Erreichbarkeit möchte ich an der Stelle hoch loben und nochmal meinen herzlichsten Dank aussprechen! Daumen hoch mit 10 Sternen!
  • Avatar P! Galerie ★★★★★ vor 4 Monaten
    Herr Härtel hat uns äusserst kompetent in einen lästigen Fall mit META betreut. Er war effizient, beharrlich, aber auch mit … Mehr uns geduldig. Menschlich top, bis wir am Ende Dank ihm erfolgreich zum Ziel gekommen sind. Können wir wärmstens empfehlen. Und nochmals danke. P.H.
  • Avatar Philip Lucas ★★★★★ vor 8 Monaten
    Wir haben Herrn Härtel für unser Unternehmen konsultiert und sind äußerst zufrieden mit seiner Arbeit. Von Anfang an hat … Mehr er einen überaus kompetenten Eindruck gemacht und sich als ein sehr angenehmer Gesprächspartner erwiesen. Seine fachliche Expertise und seine verständliche und zugängliche Art im Umgang mit komplexen Themen haben uns überzeugt. Wir freuen uns auf eine langfristige und erfolgreiche Zusammenarbeit!
  • Avatar Doris H. ★★★★★ vor 10 Monaten
    Herr Härtel hat uns bezüglich eines Telefonvertrags beraten und vertreten. Wir waren mit seinem Service sehr zufrieden. Er … Mehr hat stets schnell auf unsere E-mails und Anrufe reagiert und den Sachverhalt einfach und verständlich erklärt. Wir würden Herrn Härtel jederzeit wieder beauftragen.Vielen Dank für die hervorragende Unterstützung
  • Avatar Mikael Hällgren ★★★★★ vor einem Monat
    I got fantastic support from Marian Härtel. He managed to get my wrongfully suspended Instagram account restored. He was … Mehr incredibly helpful the whole way until the positive outcome. Highly recommended!
  • Avatar Mosaic Mask Studio ★★★★★ vor 5 Monaten
    Die Kanzlei ist immer ein verlässlicher Partner bei der Sichtung und Bearbeitung von Verträgen in der IT Branche. Es ist … Mehr stets ein professioneller Austausch auf Augenhöhe.
    Die Ergebnisse sind auf hohem Niveau und haben die interessen unsers Unternehmens immer bestmöglich wiedergespiegelt.
    Vielen Dank für die sehr gute Zusammenarbeit.
  • Avatar Philipp Skaar ★★★★★ vor 9 Monaten
    Als kleines inhabergeführtes Hotel sehen wir uns ab und dann (bei sonst weit über dem Durchschnitt liegenden Bewertungen) … Mehr der Herausforderung von aus der Anonymität heraus agierenden "Netz-Querulanten" gegenüber gestellt. Herr Härtel versteht es außerordentlich spür- und feinsinnig, derartige - oftmals auf Rufschädigung ausgerichtete - Bewertungen bereits im Keim, also außergerichtlich, zu ersticken und somit unseren Betrieb vor weiteren Folgeschäden zu bewahren. Seine Umsetzungsgeschwindigkeit ist beeindruckend, seine bisherige Erfolgsquote = 100%.Ergo: Unsere erste Adresse zur Abwehr von geschäftsschädigenden Angriffen aus dem Web.
  • ●
  • ●
  • ●
  • ●

Video-Galerie

Legal pitfalls for influencers: identifiability and injunctive relief
Legal pitfalls for influencers: identifiability and injunctive relief
My most important principles
My most important principles
Management contracts for OnlyFans are important
Management contracts for OnlyFans are important
Performance protection law

Performance protection law

28. June 2023

Introduction The ancillary copyright is a term that is often mentioned in connection with copyright. This is a special right...

Read moreDetails
Joint venture

Weighted Average Ratchet

16. October 2024
Digital Markets Act (DMA)

Digital Markets Act (DMA)

16. October 2024
European Economic Interest Grouping (EEIG)

European Economic Interest Grouping (EEIG)

16. October 2024
Data Protection Conference (DSK)

Data Protection Conference (DSK)

16. October 2024

Podcast Folgen

Rechtssichere Influencer-Agentur-Verträge: Strategien zur Vermeidung unerwarteter Kündigungen

Rechtssichere Influencer-Agentur-Verträge: Strategien zur Vermeidung unerwarteter Kündigungen

19. April 2025

Anna und Max sprechen in dieser Episode über typische Fallstricke und Gestaltungsmöglichkeiten bei Verträgen zwischen Influencern und Agenturen. Im Mittelpunkt...

KI im Rechtssystem: Auf dem Weg in eine digitale Zukunft der Justiz

KI im Rechtssystem: Auf dem Weg in eine digitale Zukunft der Justiz

13. October 2024

In dieser faszinierenden Podcastfolge tauchen wir tief in die Welt der künstlichen Intelligenz (KI) und ihre Auswirkungen auf unser Rechtssystem...

Das Metaverse – Rechtliche Herausforderungen in virtuellen Welten

Das Metaverse – Rechtliche Herausforderungen in virtuellen Welten

25. September 2024

In dieser faszinierenden Episode tauchen wir tief in die rechtlichen Aspekte des Metaverse ein. Als Rechtsanwalt und Technik-Enthusiast beleuchte ich...


Copyright in the digital age

22. December 2024

In this insightful podcast episode of just under 20 minutes, by and with me, the complex topic of copyright in the digital age is examined...

  • Home
  • Imprint
  • Privacy policy
  • Terms
  • Agile and lean law firm
  • Ideal partner
  • Contact
  • Videos
Marian Härtel, Rathenaustr. 58a, 14612 Falkensee, info@itmedialaw.com

Marian Härtel - Lawyer for IT law, media law and start-ups, with a focus on innovative business models, games, AI and financing advice.

Free initial consultation