Rechtsanwalt Marian Härtel - ITMediaLaw


Startup without developers?

8. July 2025
in Gloss / Opinion
Reading Time: 22 mins read

It’s late in the evening, the coffee next to the laptop has long since gone cold, but I smile with satisfaction: in just a few hours, I have created an entire web application from scratch without typing a single line of code by hand. No expensive development team, no months of programming – just me, my idea and an AI coding tool to which I explained what I want in natural language. Welcome to the world of VibeCoding. Sounds like a start-up fairy tale? Maybe it does. But before you lay off your entire development team: it’s not quite that simple after all.


In this gloss, I take you on a journey through the new phenomenon of VibeCoding – programming by vibe, i.e. with AI support instead of classic hand-typed code. We take a look at how AI tools such as Cursor, GitHub Copilot or no-code platforms enable founders to build software and SaaS services at lightning speed. We ask: can start-ups really get off the ground faster and more flexibly now, without armies of developers and budgets in the millions? Does this change the rules of the game in the start-up scene? And – this is a legal blog – what legal risks hide behind the tempting AI turbo? Who is liable if the magical AI suddenly screws up? Who actually owns code that was never written by a human hand? And as a founder, what do you ultimately have to tell your users if an AI is pulling the strings in the background?

So let’s make our way through hype and reality, through euphoria and legal texts. A personal, trenchant stocktaking – full of enthusiasm, but also with the critical eye of an IT lawyer who asks himself every day: “What the hell’s next?”

From code servant to AI tamer: the VibeCoding revolution

Let’s start with the term itself: VibeCoding. In case you’re wondering what the heck that is – no, it has nothing to do with music or esoteric vibes. VibeCoding describes the current trend of no longer programming software manually line by line, but developing it almost exclusively with the help of AI systems or no-code platforms. Instead of laboriously cramming syntax and setting every comma in the code themselves, the founder or developer simply tells the AI what the software should do in normal German or English. The AI – whether it’s a specialized code assistant like Cursor or a generalist like ChatGPT – automatically translates these requests into executable code.

“I describe, the machine codes.” – This is how you could summarize the motto of VibeCoding. Visual no-code construction kits follow a similar principle: you click and configure on an interface, the code is created invisibly in the background. In both cases, traditional manual programming work is replaced by automation. A true democratization of software development: Suddenly, even non-computer scientists can create functional applications, and tech founders theoretically need fewer human resources to get their ideas up and running as a product.
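To make the principle tangible, here is a purely illustrative sketch of such an exchange – my own invented example, not the output of any particular tool: a one-sentence request in plain language, and the kind of small, self-contained function an AI assistant typically returns.

```python
# Prompt (plain language): "Write me a function that takes a net price and
# returns the gross price including 19% German VAT, rounded to full cents."
#
# The kind of code an AI assistant might answer with (illustrative only):

def gross_price(net: float, vat_rate: float = 0.19) -> float:
    """Return the gross price including VAT, rounded to two decimal places."""
    return round(net * (1 + vat_rate), 2)

print(gross_price(100.0))  # 119.0
```

The point is not the triviality of the example but the workflow: the specification is natural language, and the syntax is no longer written by a human.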

This is a small revolution. Let’s think back a few years: if a start-up wanted to build an app or a web service in 2015, it almost inevitably needed a team of developers – or at least a tech-savvy co-founder who would spend all night hitting the keys. These times are changing rapidly. Today, all it takes is a single motivated founder with a vision, a laptop and an AI tool to build something presentable in a few days. The temptation is great: finally you can realize your own ideas without being at the mercy of rare (and expensive) software developers.

Some startup founders report with shining eyes that they have “clicked together” – or rather “chatted together” – an entire SaaS service within a weekend. One example: an entrepreneur recently described how he used ChatGPT and a handful of cloud services to create a functioning web app in under five hours. User login, database connection, smart user interface, integration of third-party providers such as Stripe for payments – it’s all there, all written by the AI. Without having to program it himself! If he had hired a freelancer a few years ago, it might have cost him 50,000 euros and many weeks of work. Now an AI assistant does large parts of it in a fraction of the time, at a cost in the double-digit dollar range (a few prompting sessions and API calls). It makes you rub your eyes involuntarily.

I also find myself wondering whether I shouldn’t build my next small website at night with such an AI code editor, instead of prying a quote out of a web developer. The promise of efficiency is just too tempting: the AI becomes a “colleague” that never sleeps, doesn’t talk back and spits out code suggestions in seconds that would take a human hours. Tools such as Cursor – an AI-supported code editor that practically acts as a pair programmer – or GitHub Copilot have precisely this goal: they are designed to make developers at least twice as productive, automate routine work and even generate complete code on demand. All you have to do is roughly type in what a function should do, hit Tab, and the AI coder fills in the rest. Programming with “autocompletion on steroids”, you could say.

AI construction kits and code writing machines

The market is exploding with such tools. In addition to Copilot (Microsoft), there are Amazon CodeWhisperer, Tabnine, Replit Ghostwriter and Codeium – they all promise to think code through to the end, find bugs, search through dozens of files in a project and suggest suitable changes. Some tools go even further: they try to assemble entire applications from a single prompt. Take platforms such as Bolt.new, v0.dev or Lovable.dev: you describe in running text what the app should be able to do (“Build me a club membership management system with login, admin portal, diagrams for usage statistics, Stripe payment module and email dispatch via SendGrid…”) – and the AI quickly conjures up a scaffold of code, UI and database that actually works. Sounds like science fiction, but it works increasingly well. Such generated applications are often rudimentary or a little rough around the edges – the AI only builds what you tell it to, not necessarily what you actually meant – but they are perfectly good as a prototype or MVP.

Traditional no-code/low-code platforms are not sleeping either: services such as Bubble, Webflow, Adalo and Wix are integrating more and more AI assistants. Bubble, for example, is testing AI features that suggest formulas or workflows to the user. Webflow uses AI to generate design suggestions or texts. And then there are specialized offerings such as the Durable AI Website Builder, which promises to click together a complete small company website from a handful of keywords within 30 seconds – including suitable texts and images generated by AI. Create a website by entering a single line, as if you were ordering a coffee to go. This is already a reality for simple web applications.

All of this leads to an exciting question: does the start-up of today still need expensive developers at all? Or are programmers the new weavers of the industrial age, replaced by the automated looms of AI? Of course, this exaggeration is somewhat unfair – good developers will by no means become superfluous overnight. But their job profile is changing. Coders are becoming controllers and architects. Instead of typing every line themselves, they orchestrate the AI, check its output, make corrections here and there and take care of the overall structure. “It’s evolution, not extinction,” someone recently said with regard to AI vs. humans. The developer becomes the AI trainer who lets the lions jump, but holds the whip in the background – and makes sure that no one in the circus gets eaten.

Faster, cheaper, more competition?

For founders and small teams, these developments are a blessing – at least at first glance. More flexible? Faster? Cheaper? Yes, yes and yes! A one-person start-up can now dare to do things that were just wishful thinking a short time ago. The time-to-market is shrinking dramatically: an idea today, a clickable prototype tomorrow, online the day after tomorrow and collecting initial user feedback – what more could you want? Changes to the concept? Don’t panic: If large parts of the code have been generated, you can also pivot more quickly. Pivot in a week instead of six months, because the AI does most of the conversion work. That sounds like a start-up turbo, like real disruption, even within the start-up scene.

You could even say that the barriers to entry are falling. Anyone who previously failed to implement their software idea due to a lack of programming skills or money for developers now has a chance. The playing field is becoming more even; not only Ivy League graduates with tech co-founders can launch successful apps, but theoretically anyone with a good idea, some business acumen and the ability to use AI tools. This should make the start-up world more colorful and diverse.

But (and this is where the realist in me comes through): Easier accessibility also means that competition increases. If I can implement an idea with a minimal budget, then others can copy it – just as quickly and just as cheaply. Differentiation becomes more difficult. In the past, superior technology or elaborate development may have given you a head start, but today more than ever the actual idea, timing, distribution and access to the customer count. This is because pure implementation in code is hardly a limiting factor any more. If anyone can create a passable clone of my product with AI support, I either have to grow rapidly, establish my brand or offer something that cannot be easily replicated (e.g. special data, community, patented innovation – but more on that later).

Furthermore, just because it is easier to produce software does not automatically mean that each of these new applications will be successful. The quality and sustainability of the AI-fast-built products is another matter. We could see a boom in new SaaS tools and apps, but also a boom in half-baked solutions that disappear just as quickly as they appeared. “Fail fast, fail often” takes on a whole new meaning when projects can be launched (and fail) on the fly.

Hype vs. reality: what AI tools cannot (yet) replace

But before we all go into euphoria: a little reality check is in order. Is everything really as effortless with AI as it sounds? Experience shows: yes and no. The great stories about the weekend MVP and the 5-hour SaaS are true, but they often conceal the pitfalls in the details. The AI does a lot of routine work for you, yes – but you still have to think, analyze and, above all, plan properly yourself.

A developer friend of mine – sorry, now more of a prompt engineer – recently told me about his first foray into the realm of fully automated coding. He had described to ChatGPT (with GPT-4) in detail the web app he wanted to build, including the tech stack: Node.js in the backend, React/Tailwind in the frontend, connection to AWS for file storage, Google login, the whole nine yards. The result was quite impressive: the AI spit out code for the backend and frontend piece by piece and even explained to him how to set up the development environment. He dutifully copied everything together – and lo and behold, the basic functions actually worked straight away. But the joy was short-lived.

As soon as he wanted to expand or change the system, he realized that the AI quickly forgets details of its own code during the chat process. It suddenly produced contradictory changes, tearing down what it had built shortly before. When it tried to add a menu item, the AI assistant rewrote half the front end – with different buttons and ugly layouts because it didn’t “know” exactly what it should look like. Every new feature became a game of chance: sometimes accurate, sometimes a shot in the foot. In addition, ChatGPT naturally had no connection to the running development environment. If an error message appeared somewhere in the terminal (which often happened, e.g. due to an incorrect node version), my friend first had to laboriously copy the error text and explain to the AI what had happened. This “blind debugging from afar” routine cost nerves and time.

In short, he couldn’t do it without his own coding skills and manual intervention. In the end, he was exhausted – “even Dark Mode didn’t save my tired eyes,” he joked – and looked around in frustration for an alternative.

The alternative was a specialized AI app builder (the very Bolt.new I mentioned). The application actually delivered a basic framework for his idea in record time, with significantly less back and forth than the chatbot approach. Nevertheless, similar problems arose there too: when the AI was supposed to change something, it liked to rewrite large parts of the app unnecessarily, destroying working components and diligently burning expensive compute tokens in the background. Every complex feature became a casino session – you pulled the lever (“Please AI, add feature X”) and hoped that this time the right code symbols would line up without breaking anything else. Some $80 later, he had a running application (actually completed in under 5 hours), but with mixed feelings: “Basically, I’ve created a new job for myself: I’m now a babysitter for an AI. It’s been taught to code, but I have to keep cleaning up after it.” It could hardly be put more aptly.

What do we learn from this? AI tools are powerful, but not magical. They are great accelerators, but they are not error-free and certainly not foolproof. Anyone who leaves a complex application completely to AI without a programming background is acting in much the same way as someone who lets a high-tech autopilot system take the wheel and thinks they can lie down in the back seat to sleep. That may work 99 times, but the 100th time you end up in a ditch – or worse. Someone has to stay alert, take countermeasures and intervene if necessary. In our case, this means that without a basic understanding of software architecture, logic and quality assurance, things can get dicey. The AI does not put its own code through its paces. It delivers what probably sounds right. But is it actually robust, secure and efficient? Well, the “autopilot” lacks real judgment.

This blind trust can be fatal, especially for security-critical or business-critical applications. It may be okay to accept security vulnerabilities or inefficient queries for a quick prototype – but when it comes to going live with real user data, the fun stops. Errors that an AI introduces are no less dangerous or expensive just because an AI has built them. On the contrary: they can be more treacherous because developers may be inclined to trust the machine code (“It’ll be fine, the AI generated it that way”), even though they don’t understand it 100%. A kind of false sense of security that can come at a high price.
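One practical consequence: AI-generated code should always be wrapped in human-written tests that encode the behavior you expect, independently of what the model produced. A minimal sketch – the installment function below is a hypothetical example of AI output, not taken from any real tool:

```python
# Hypothetical AI-generated function: split principal plus simple interest
# into equal monthly installments (illustrative example only).
def monthly_installment(principal: float, annual_rate: float, months: int) -> float:
    interest = principal * annual_rate * (months / 12)
    return round((principal + interest) / months, 2)

# Human-written checks: these encode YOUR expectations, not the AI's guesses.
assert monthly_installment(1200, 0.0, 12) == 100.0   # zero interest: plain division
assert monthly_installment(1000, 0.12, 12) == 93.33  # 12% p.a. over one year
print("all checks passed")
```

If the AI later “helpfully” rewrites the function while adding a feature, such checks catch the regression before a customer does.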

To summarize: VibeCoding can make you incredibly productive and give small teams superpowers. But shortcuts come at a price. You save time and money in the beginning, but you might have to pay twice later – be it through extra debugging, through architecture refactoring, or in the worst case through problems with customers and investors. And that brings us to the next, less fun topic:

Legal pitfalls: who is liable, who owns the code, what does the user need to know?

I hope you’ll forgive me for putting on my legal glasses now. After all, this blog is called ITMediaLaw, not TechCrunch. Despite my fascination with vibe coding, as a lawyer I am naturally itching to ask: What does the law say about this wild goings-on? The answer: surprisingly little so far – but the risks are real and are often underestimated. Let’s take a look at the most important points to ensure that this cool AI experiment doesn’t lead to a rude awakening in court.

Liability: AI is no good as a scapegoat

Let’s imagine that your startup has used AI to put together an online service that is really popular with customers. Everything is great – until one day user data is lost due to a software error, or something worse happens (perhaps your AI-generated FinTech algorithm calculates incorrect interest rates and a customer suffers financial damage as a result). Who’s on the hook now? You’ve guessed it: not the AI. It is difficult to sue or to take recourse against; legally, it is simply a tool, nothing more. You are responsible, as the person who used the tool and brought the faulty product onto the market. Period.

There are clear principles for this under German law: if your product – in this case software – causes damage to a third party, you are generally liable if you have not exercised the necessary care. Section 823 of the German Civil Code (BGB), for example, which governs tort liability, obliges anyone who negligently injures the property, life, health, etc. of another person to pay compensation. And don’t think you can get out of this with “But the AI messed it up, not me!”. Such an excuse doesn’t work, any more than a hammer manufacturer can say “It’s not my problem if the user hits his thumb with my hammer”. If you use a tool and want to make a profit with it, you also have to take responsibility when things go wrong.

Now you might argue: “Well, software always has bugs. Can’t I simply exclude liability in the contract, along the lines of: use at your own risk?” Oh, how nice that would be – unfortunately, the law on general terms and conditions puts a spanner in the works. Liability can only be restricted to a very limited extent in standard terms and conditions toward customers (especially end consumers, but also in B2B settings). German law (e.g. Section 307 BGB) prohibits the complete exclusion of liability for simple negligence where material contractual obligations are concerned. And liability for gross negligence or intentional acts can never be waived at all. In other words, even if you were to write in your terms of use “I accept no liability for any errors in my AI software”, this would be invalid in almost all cases. Such exclusions are particularly impermissible toward consumers (and for personal injury or total failure they are prohibited anyway).

In other words: In case of doubt, you are also liable for simple, stupid errors in your software if these errors have serious consequences for the contractual partner and you had a duty to check the software sufficiently. Of course, not every small bug will immediately lead to compensation – a breach of the duty of care is required. But if, for example, there was no quality control at all and a fatal error went undetected as a result, things look bleak. The law expects you to check AI-generated code as well, at least as well as you reasonably can. There is no complete absolution just because “it was the AI”.

Incidentally, if your software error causes someone to suffer damage (person injured, thing broken), product liability would theoretically also come into play – but purely digital products without a physical component have not previously been covered by product liability law. However, the EU recently decided to consider software as a “product” in future. In the near future (as soon as the corresponding directive is transposed into German law), software manufacturers could therefore also be held liable under product liability, e.g. if an AI-controlled system in a machine causes an accident. This affects platform providers as well as start-ups. It would then be even more difficult to get out of the affair with contractual clauses, as there is no exemption from liability in product liability – in case of doubt, liability insurance will pay for this, if you have one.

The bottom line here: Liability risk remains liability risk, whether with or without AI. Anyone who operates software must bear responsibility. VibeCoding does not release you from careful testing, quality assurance and sensible error handling. It just suggests that you should perhaps do this even more intensively, because the code does not come from an experienced senior developer, but from a probabilistic text machine. To put it bluntly: the AI has no liability, but you do. So act accordingly.

Oh, and if you’re hoping to turn the tables and hold the AI tool vendor liable when their code generator screws up: good luck. Most providers have extensive disclaimers in their terms of use. At most, they assume responsibility for the technical availability of their service, but not for indirect damages, loss of profit, etc., and often only up to the amount of the fees you have paid. In addition, the general legal position is that a platform operator is not responsible for every mischief a user makes with their tool – just as a car manufacturer is not liable if you drive your car into a wall because you blindly followed the navigation system. Only if the platform itself has a fault (e.g. a systematic bug in the no-code engine that makes all generated apps unsafe) can the manufacturer be held liable. But even then, the providers try to protect themselves by all means. In short: if in doubt, you’ll be left standing in the rain on your own. No daddy Microsoft and no uncle OpenAI will come to bail you out if you find yourself in court or facing an angry customer because of an AI coding error.

Copyright & Co.: Is the intellectual owner a ghost?

Even more exciting – and many people are not even aware of it – is the question of who actually owns the code that the AI writes. More precisely: Does this code enjoy copyright protection, and if so, for whom? Or, to put it another way: can someone simply steal your AI code without any legal consequences?

The answer is a little unsettling: much of what an AI creates may not be protectable by copyright for lack of human creativity. Under German copyright law, a work must be a “personal intellectual creation” of a human being (Section 2(2) UrhG) to qualify for protection. In principle, the law treats code in the same way as text – software, too, can be protected by copyright if it is original enough. But the catch is that fully AI-generated code lacks the personal intellectual contribution of a human being. The AI does not “create” anything in a creative sense; it only combines probabilities and existing patterns from its training data.

So if I, as the founder, tell the AI “Write me function X” and it spits out 100 lines of code, I have not designed this code creatively myself. My input (the description) may well have come from me, but in most cases this is probably not enough to be considered a co-author of the specific code – after all, I have not usually dictated to the AI word for word what it should write, but only described the goal. The concrete implementation – the choice of words, algorithms, etc. – comes algorithmically from the depths of the model.

Consequence: this output could be considered “ownerless” under copyright law because there is no human author. Neither the AI (which cannot be an author under the law), nor the user (whose contribution was too abstract), nor the operator of the AI (who built the model but did not write the specific lines) would therefore hold a classic copyright in it. This is uncharted legal territory, but the trend is moving in this direction. There are parallels: in patent law, for example (DABUS case, 2024), the Federal Court of Justice made it clear that the inventor named in a patent must always be a human being; a machine cannot be an inventor. In copyright law, the creator principle has always applied – the author is the person who created the work. A fully autonomous AI creation falls through the cracks.

What does this mean in practical terms for a startup? First of all, it seems to be a relief: if your AI code is not protected, you can use it freely without asking anyone for permission – because you are not infringing anyone else’s copyright (there is none). However, this is only half the truth and has a nasty downside: if your code is unprotected, anyone else can use it. You then have no monopoly like an author who can say “I wrote this, don’t copy it!”. If a competitor gets hold of your source code (whether legally or illegally), they could copy it, modify it and use it commercially – and you would not be able to take legal action against them for copyright infringement because there is no copyright that could be infringed. Your software would be in the public domain, so to speak.

Imagine the puzzled faces in an investor meeting when the classic due diligence question “What IP do you own?” is answered with: “Um, actually none – most of our code is in the public domain because it was generated by an AI.” I’m exaggerating a little to make things clearer, but this is exactly the kind of constellation that venture capital lawyers are currently discussing. The IP valuation of a start-up changes considerably if no original copyrights can be asserted for the software. The “product” then consists more of a collection of ideas, business processes, perhaps trademark rights – but the code itself, normally a valuable asset, is difficult to protect legally. In such cases, founders fall back on trade secret protection: keeping the source code under lock and key so that nobody gets hold of it. Secrecy instead of copyright protection, so to speak. However, appropriate organizational measures must be taken (access restrictions, NDAs, etc.), otherwise the code does not count as a secret within the meaning of the German Trade Secrets Act (GeschGehG).

There are still a few creative approaches to solving the IP problem: For example, founders could specifically program critical parts themselves in order to at least have copyrights for them – a “human coat of paint” on the AI facade, so to speak. Or they can apply for patents (where possible) if a technical invention exists – but here a human inventor must be named, which can be tricky if the idea actually originated from AI suggestions. You can also try to build exclusive data sets or AI models that are not available to others in order to secure a competitive advantage that goes beyond pure code. In any case, investors will be watching very closely: Any unresolved copyright or licensing issue in the code is a potential dealbreaker or at least value detractor in the valuation. The nice speed advantage of VibeCoding can then be bought at a high price later on if the investor says: “Great software, but legally a shaky deal – we’d rather pay 20% less and demand that you do an IP scan and some reprogramming before signing the contract.”

Speaking of licensing issues: an even more controversial copyright aspect is the AI training data. AI coding models (such as GitHub Copilot or the generic models behind it) have “learned” from vast amounts of existing code on the Internet. This most likely includes code fragments that are protected by copyright and/or under open-source licenses. It has already happened that Copilot spat out code to users that matched a publicly available code snippet almost 1:1 – including its specific comments. With a bit of bad luck, such a snippet comes from a GPL-licensed library. If you then use these lines in your proprietary project, you are violating the license if you do not comply with its conditions (such as making your source code publicly available under the GPL). And suddenly you have one foot in copyright infringement. You cannot justify it by saying that you “didn’t know” or that the AI gave it to you – legally, this is irrelevant. You have used someone else’s code, that’s it. If the copyright holder notices (or a diligent warning-letter lawyer does), you could face injunctive relief and claims for damages.

To be on the safe side, experts are already sketching worst-case scenarios, for example in the form of a possible “Copilot troll”: someone could deliberately place their own code under a strict license, hope that AI systems pick it up during training and output it later, and then systematically send cease-and-desist letters to infringers who adopted this AI output without checking it. Whether this actually happens remains to be seen – but the possibility exists.

For start-ups, this means they have to be extremely careful about what the AI delivers. A thorough code review is mandatory not only from a quality perspective, but also from a compliance perspective. There are tools that check the source code against known open source repositories (code similarity scans) to find suspicious matches. You should use such tools, especially for larger blocks of code that do not appear completely generic. Sometimes you can also tell from stylistic details: suddenly a completely different comment scheme appears in the code, or very specific variable names that don’t match the rest of the structure – these could be clues that “copying” has taken place here (albeit by the AI as ghostwriter). In short: trust is good, control is better – otherwise the foreign code can become a time bomb.
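Professional scanners compare your code against huge open source corpora, but the basic idea behind a code similarity scan can be sketched in a few lines. The following toy Python check – the snippets, the fingerprint scheme and the 0.6 threshold are all invented for illustration, not a substitute for a real audit tool – flags AI output that closely matches a known snippet:

```python
import re

def fingerprint(code: str) -> set:
    """Strip comments, lower-case, and return the set of token
    3-grams -- a crude structural fingerprint of the code."""
    code = re.sub(r"#.*", "", code)
    tokens = re.findall(r"\w+", code.lower())
    return {" ".join(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of two fingerprints (0.0 to 1.0)."""
    fa, fb = fingerprint(a), fingerprint(b)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)

ai_output = """
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a
"""

# Hypothetical snippet from a GPL-licensed library -- same logic,
# only comments and formatting differ.
known_snippet = """
# classic Euclidean algorithm
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a
"""

score = similarity(ai_output, known_snippet)
if score > 0.6:  # threshold is an arbitrary example value
    print(f"Suspicious match ({score:.2f}) -- manual license review needed")
```

Because comments and whitespace are normalized away, cosmetic differences do not hide a copied block – which is exactly why a scan beats eyeballing the code.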

3 What do users need to know? – Transparency and data protection

One last legal point: do you actually have to inform your users that AI is working in the background of your website/SaaS, or even that AI wrote the code? Do you need a warning along the lines of “This software was created fully automatically, use at your own risk”? Generally no – at least not across the board. There is no general obligation to disclose how the software was created. The end user is primarily interested in (and concerned with) how the software works, not who or what built it. You also don’t have to tell the user which programming language you used or whether your developers drink coffee or mate – and just as little whether the AI helped.

However, there are scenarios in which transparency is indeed legally required: namely, whenever users themselves interact with an AI function or their data is processed by an AI in a way that has consequences for them. Example: your SaaS uses an AI internally to analyze user input or make automated decisions (e.g. an AI evaluates creditworthiness, filters content or makes personalized recommendations). Regulations such as the EU AI Act and existing data protection rules are likely to apply here in future. Depending on the context, you will then have to disclose that AI is involved. The GDPR already stipulates that certain information rights exist in the case of automated individual decisions with legal effect (Art. 22 GDPR) – the user can demand human intervention, etc. Even if your service does not involve such drastic automation, many users now expect a certain degree of honesty from a moral point of view: if they are talking to a chatbot that actually has GPT-4 in the background, you should at least not secretly pass it off as a human. Many companies therefore voluntarily use labels such as “AI-supported” or let the AI introduce itself as such so as not to gamble away trust.
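What the Art. 22 GDPR right to human intervention can look like in code terms can be sketched roughly as follows – the class names, the scoring threshold and the review queue are all invented for illustration and are not a compliance recipe:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    automated: bool = True       # fully automated -> Art. 22 GDPR in play
    review_requested: bool = False

@dataclass
class CreditCheck:
    human_review_queue: list = field(default_factory=list)

    def decide(self, score: int) -> Decision:
        # Toy scoring rule; the 600 threshold is an invented example.
        return Decision(approved=score >= 600)

    def request_human_review(self, applicant_id: str, decision: Decision) -> None:
        # Art. 22(3) GDPR: the data subject may demand human intervention
        # in an automated decision that has legal effect for them.
        decision.review_requested = True
        self.human_review_queue.append(applicant_id)

check = CreditCheck()
decision = check.decide(score=580)                  # automated rejection
check.request_human_review("applicant-42", decision)
```

The point of the sketch is architectural: if your product makes automated decisions, the escape hatch to a human reviewer should be designed in from the start, not bolted on after the first complaint.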

Data protection is another aspect: if you use external AI APIs (e.g. you send text or code to OpenAI to have something generated), users’ personal data may flow to third-party providers. This must be stated in the privacy policy. And you need a legal basis for this, of course – usually it will be the performance of the contract, but make sure that your agreements with the AI service are data protection-compliant (keywords: data processing agreement, third-country transfers to the USA, etc.). There could also be obligations to provide information here: for example, if you use analytical AI, you could explain in the privacy policy that an AI system evaluates user behavior to improve the offering, and so on. So please don’t forget such transparency issues in the heat of the moment.
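If prompts containing user input leave your system at all, it is worth stripping obvious personal data before they do. A minimal sketch, assuming a simple regex-based approach – real deployments need proper PII detection and a data processing agreement on top; names, for instance, slip straight through this filter:

```python
import re

# Redact obvious personal data (e-mail addresses, phone numbers)
# before a prompt is sent to an external AI API. The patterns are
# deliberately simple examples, not production-grade PII detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d /-]{7,}\d")

def redact(prompt: str) -> str:
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

text = "Contact Max Mustermann at max@example.com or +49 3322 5078053."
print(redact(text))
# Note: the name "Max Mustermann" survives -- catching names would
# require named-entity recognition, not regexes.
```

Even a crude filter like this reduces what a third-country provider ever sees – and what you have to account for in the privacy policy.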

Last but not least: if your website displays content generated by AI (e.g. blog posts, product descriptions, news articles), you do not have to label it explicitly as long as no one is misled and no legal requirements (such as copyright notices) are violated. Nevertheless, many choose to at least run internal quality controls and, where appropriate, indicate that content is AI-generated in order to build trust. It would be a topic of its own, but remember: AI likes to hallucinate facts. If your marketing page is written by an AI, have it proofread – you don’t want to inadvertently advertise with false promises or infringe licensing rights to generated images. Ultimately, this falls back on you too.

Conclusion: VibeCoding – Turbo for founders, but not a free ride

VibeCoding is awesome. I’ll happily commit to that. As a tech enthusiast, I find it absolutely fascinating what is possible today. The speed at which an idea can become a product has reached a new dimension. This marks the start of a new chapter for the start-up world: more experiments, more innovation, possibly also more competition – but overall an exciting dynamic. If you’re clever, you can play in the same league as the big players without a huge budget, at least as far as the development phase is concerned. The often-cited gap between those with money/developers and those with just ideas is closing somewhat. This is good for equal opportunities and brings a breath of fresh air.

But (and here comes the big but, you expected it): A startup is not just code. And success is not only measured by how quickly you launch something on the market, but also how sustainably it works, how much customers trust it and how well you have risks under control. In all these aspects, AI brings speed, but also uncertainty. The proliferation of features in a very short space of time should not obscure the fact that quality assurance, legal hygiene and strategic differentiation still require hard work. Perhaps even harder than before, because the bar is higher: When anyone can launch quickly, it is all the more important to be error-free and legally compliant in order to stand out from the crowd.

For developers, VibeCoding does not mean the end, but a change. Many developers will be able to work more productively and concentrate on more interesting problems, while the AI takes care of the monotonous part. But there may also be fewer developers needed for the same task. A single talented engineer with AI support could do the work of three average coders. This could lead to a shakeout: Mediocre coders will have to work harder or specialize, and true experts will become all the more valuable as mentors of AI and architects of complex systems. And new roles will emerge: Prompt designers, AI quality managers, data curators – professions we didn’t even know we needed until recently.

Investors will take a closer look: “Do they have their AI under control?” will become a standard question. Startups that use AI for coding have to do their homework: proper documentation, license checks, backup plans. If you are naïve, you are in for a rude awakening when questions such as “Can you prove that there is no protected third-party code in the product?” or “How do you ensure that your software is protected and unique if the AI generated it?” suddenly arise during due diligence. A novelty: you may have to explain to the investor that the code is not protectable by copyright – and still convince them that the business model is viable. A challenge, but feasible with good preparation. Transparency and proactive measures are the key here.

And what about the end users of our vibe-coded marvels? Ideally, they shouldn’t notice any of this, except that they get a functioning product. The best AI-supported apps can be recognized by the fact that they are simply good – not by an AI stamp emblazoned somewhere on them. Nevertheless, as manufacturers, we have a responsibility to handle this new power ethically. When AI makes decisions, no user should be discriminated against. When AI generates content, we should stick to the truth and not recklessly spread fake news. Regulation will take a closer look here in the future, but we can also act sensibly on our own initiative.

Final thought: Startups can fly faster today, but they have to be careful not to hurtle into the abyss without brakes. VibeCoding is like a jet engine on your scooter: you can go faster than ever – but without a helmet and good brakes, it’s a suicide mission. The flexibility and speed are fantastic, indeed, it will probably turn the entire industry upside down. In a few years’ time, we may look back with amusement on the days when founders had to spend months coding a prototype. But some things won’t change: if you want to build a company sustainably, you still need a plan, responsibility and an eye for the big picture. AI can do a lot, but it doesn’t relieve us of responsibility – neither technically, legally nor morally.

So, dear founders: take advantage of the new opportunities! Be brave, try VibeCoding, automate as much as you can and let your ideas fly. But stay vigilant. Test your AI code, do your legal homework, educate your users fairly where necessary. Then AI is not your enemy, but your ally.

And to my colleagues in the legal sector: get ready, we are needed. While the developers may gain some free time, our to-do lists could get longer – with new contracts, new liability issues, new consultancy projects relating to AI development. It certainly won’t be boring.

VibeCoding – the vibe is real, the code (almost) writes itself, but in the end: trust is good, control is better. With this in mind, happy prompting and stay on the safe side!

Author: Marian Härtel

Marian Härtel is a lawyer and certified specialist for IT law (Fachanwalt für IT-Recht) with over 25 years of experience as an entrepreneur and advisor in the fields of games, esports, blockchain, SaaS and artificial intelligence. In addition to IT law, his advisory focus includes copyright, media and competition law. He primarily advises start-ups, agencies and influencers, supporting them in strategic questions, complex contract matters and investment projects. His advice is characterized by an interdisciplinary approach that combines legal expertise with many years of entrepreneurial experience. The aim of his work is always to offer clients practice-oriented solutions and legally sound support in implementing innovative business models.
