Private accounts on ChatGPT & Co. for corporate purposes are a gateway to data protection breaches, leaks of secrets and labor law conflicts; if you want to use AI in your company, you need clear prohibitions or a properly set up “secure enablement” with technical, contractual and behavioral rules.
Why private AI accounts become a compliance risk in the corporate context
Many teams have long been working with AI assistants. Often not via company licenses, but with private accounts. This is where the liability issues begin:
a) Loss of control over data
Depending on the provider, it is impossible to fully control what happens to data entered into a prompt. Without a contractually secured opt-out for training purposes or clear deletion periods, neither the principle of purpose limitation (Art. 5 para. 1 lit. b GDPR) nor storage limitation (Art. 5 para. 1 lit. e GDPR) can be reliably demonstrated. Accountability in accordance with Art. 5 para. 2 GDPR fails in practice if entries are made via private accounts for which there are no logs, no binding guidelines and no data processing agreements (Art. 28 GDPR).
b) Unlawful international data transfers
Many AI providers process data outside the EU. Without a reliable transfer basis in accordance with Art. 44 et seq. GDPR, there is a risk of fines. Although the EU-US Data Privacy Framework is (once again) a viable adequacy decision, it only applies to certified US companies – and only if they are correctly integrated. Private use circumvents any transfer due diligence by the company.
c) Trade secrets falling through the cracks
Trade secrets are only protected if “appropriate confidentiality measures” have been taken (Section 2 No. 1 b GeschGehG). Tolerating private AI channels counteracts precisely these measures: There is no contractually secured confidentiality standard, no technical access control and no audit trail. In the event of a dispute, there is no protection – with considerable consequential claims.
d) Pitfalls under employment and works constitution law
As soon as the use of AI tools is controlled, monitored or evaluated, the works council is usually involved: Section 87 (1) no. 6 BetrVG (technical equipment for monitoring behavior/performance) regularly imposes a co-determination obligation here – regardless of the intention, because the objective suitability for monitoring is sufficient.
e) Liability for incorrect content and rights chains
Hallucinated facts, license ambiguities in generated code or images and unauthorized use of confidential information can trigger contractual and tortious liability. Without approval processes and source documentation, it is difficult to prove that work/services have been provided with due care.
Interim conclusion: Private AI accounts are convenient from an organizational point of view, but legally they amount to flying blind: no one knows what data goes where, who accesses it, how long it is stored – and whether its use is compatible with the GDPR, the GeschGehG or the company’s own confidentiality architecture.
Legal framework: GDPR, GeschGehG, employee data, co-determination
a) GDPR obligations of the controller
- Legal basis & purpose (Art. 5, 6 GDPR): Corporate data processing needs a viable legal basis and a clear purpose. Private accounts are not subject to this control.
- Special categories of data (Art. 9 GDPR): Even seemingly harmless prompts can contain health, trade union or biometric data. With private use there is no protection architecture, and a categorical exclusion of such content cannot be reliably ensured without technical controls.
- Processing on behalf of the controller (Art. 28 GDPR): If an external provider is used, a data processing agreement is mandatory – with minimum content (subject matter, duration, type of data, TOMs, etc.). With private accounts, there is no effective contract between the controller and the provider.
- Security (Art. 32 GDPR) and DPIA (Art. 35 GDPR): Depending on the process and risk, a data protection impact assessment is required; in any case, appropriate TOMs must be implemented – technically not possible if employees use private tools without control.
- International transfers (Art. 44 ff. GDPR): No transfer compliance package without corporate governance (DPF certification, SCCs, TIA).
b) Employee data protection
Specific requirements apply to employee data. Section 26 BDSG is in some respects interpreted narrowly following the ECJ ruling in Case C-34/21, so the general legal bases of the GDPR must often be applied. For private AI use, this means that consent is only voluntary to a limited extent in the employment relationship, and legitimate interest (Art. 6 para. 1 lit. f GDPR) requires careful balancing and technical protective measures.
c) Trade secrets
Protection under GeschGehG requires proactive measures: policies, training, access restrictions, technical barriers. Private AI channels undermine these elements. Anyone who tolerates private use considerably weakens their own claim position (Sections 2, 3 GeschGehG; willful violations may result in criminal prosecution, Section 23 GeschGehG).
d) Co-determination under the BetrVG
The introduction and use of AI tools, logging, proxy blocks or DLP rules is typically subject to co-determination (Section 87 (1) No. 1, 6 BetrVG). Without a works agreement, both bans and “permission with conditions” can be challenged.
e) EU AI Act (outlook)
The AI Act imposes obligations on providers, distributors and users (“deployers”) of AI systems, with the strictest requirements for high-risk AI. The first prohibitions have applied since February 2025; obligations for general-purpose AI models and further stages follow from August 2025 and August 2026. For companies, this means that processes for model labeling, risk assessment, logging and incident handling will become standard – improvised private use does not fit into this compliance framework.
Practical anchor: The EDPB ChatGPT task force emphasizes transparency, legal bases, data accuracy and minimization – precisely the fields that are structurally undermined in private use.
Typical risk scenarios – and how they arise
Scenario 1: “Just a quick check”
An account manager copies customer data into a prompt to obtain a tonality check. Problem: personal data, possibly special categories, no legal basis for the processing, unknown transfer paths. Result: violation of Art. 5, 6, 28, 32 GDPR; confidentiality at risk.
Scenario 2: Pitch concept with confidential figures
A creative director validates price sheets, margins and the product roadmap via a private AI account. This information regularly constitutes business secrets. Without appropriate measures (Section 2 No. 1 b GeschGehG), protection lapses – the company undermines its own claims.
Scenario 3: Code snippets and Git links
A developer has code explained via a private tool and attaches Git links for contextualization. In addition to possible license/copyright risks, the link itself can reveal secrets (repo structure, branch names, tickets). Depending on the provider, meta/access data may be sent to third countries.
Scenario 4: HR texts with employee data
HR generates employment references via a private account and feeds in internal performance data. Employee data is subject to strict rules; consent in the employment relationship is problematic, especially if it is not clear where the data ends up.
Scenario 5: Monitoring “by mistake”
IT tries to prevent private use, but activates proxy logging that records entries without a works agreement. This is a technical device within the meaning of Section 87 (1) No. 6 BetrVG – tricky without co-determination.
Scenario 6: False statements in the customer project
A privately used AI tool hallucinates technical content. Without a documented source/review obligation and without versioning, diligence cannot be proven; contractual liability risks escalate.
Prohibit or allow in a controlled manner? – A governance model that works
There are two viable options: (A) a clear ban with technical enforcement or (B) “secure enablement” via approved company accounts. Mixed forms create friction.
Clear ban on the private use of AI for business purposes
Objectives: Protection of personal data, protection of business secrets, compliance with co-determination and contractual chains.
Building blocks:
- Policy: General ban on the use of private AI accounts for company purposes; ban on entering personal data, customer data, source code, confidential documents and non-public roadmaps into external tools. Reference to Art. 5, 6, 28, 32, 44 et seq. GDPR and Section 2 No. 1 b GeschGehG.
- Technology: DNS/proxy blocks for known AI domains; DLP rules (copy-paste blocking for sensitive classes); secrets scanner; browser policies; mobile device management (see the sketch after this list).
- Organization: training with negative/positive examples, whistleblower interface for incidents; defined approval process for exceptions.
- Labor law: Enforcement via right of direction (§ 106 GewO) + contractual clauses; coordinated with works council (BV according to § 87 para. 1 no. 1, 6 BetrVG).
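How such technical enforcement might look is sketched below: a minimal Python example of an egress check, as could run in a proxy plugin or endpoint agent. The domain lists, the internal tenant name and the decision logic are illustrative assumptions, not statements about specific providers or products.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of public AI endpoints, maintained by IT/security.
BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

# Approved enterprise tenant that stays reachable under "secure enablement".
ALLOWED_ENTERPRISE_DOMAINS = {
    "ai.example-corp.internal",  # placeholder, not a real endpoint
}


def egress_decision(url: str) -> str:
    """Return 'allow', 'block' or 'inspect' for an outbound request."""
    host = (urlparse(url).hostname or "").lower()
    if host in ALLOWED_ENTERPRISE_DOMAINS:
        return "allow"
    if host in BLOCKED_AI_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_AI_DOMAINS):
        return "block"  # private AI endpoints are not reachable from managed devices
    return "inspect"  # everything else follows the normal proxy rules


if __name__ == "__main__":
    for url in ("https://chat.openai.com/c/abc", "https://ai.example-corp.internal/chat"):
        print(url, "->", egress_decision(url))
```

The point of the sketch is the policy shape, not the mechanism: an explicit allow list for the approved tenant, an explicit block list for public endpoints, and normal proxy handling for everything else.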
Pros & cons: A ban is legally secure and can be communicated quickly, but inhibits innovation and efficiency.
“Secure enablement” – controlled authorization, but the right way
Goals: Utilize productivity gains without sacrificing data protection and confidentiality.
Building blocks (minimum standard):
- Approved providers & licenses: Only enterprise contracts with DPAs in accordance with Art. 28 GDPR, documented TOMs (Art. 32 GDPR), opt-out from training, clear data residency, clear deletion periods and support SLAs. For US providers: DPF certification or SCCs + TIA (Art. 44 et seq. GDPR).
- Identities & access: SSO/MFA, role-based authorizations, tenant isolation, logging, key management; no private accounts.
- Use case catalog:
  Permitted: generic text optimization without personal reference, boilerplates, code explanations with synthetic examples.
  Prohibited: personal data, customer dossiers, confidential financial figures, unresolved IP assets, health data, company/trade secrets.
  Yellow zone (only with approval/DPIA): internal evaluations with pseudonymization, production-related prototypes.
- Prompt hygiene & output review: Mandatory instructions against sharing sensitive content; red flag list (see the sketch after this list); dual-control approval for external use; source references and versioning. EDPB guidelines (transparency, accuracy) are thus anchored in the organization.
- Company agreement: Rules on use, logging, purpose limitation, deletion periods, training, incident processes, co-determination; clear demarcation from performance/behavioral monitoring (no “micro-monitoring”).
- DPIA & risk register: Pre-assessment (Art. 35 GDPR) for each sensitive use case; assignment of responsibilities; annual re-certification of providers.
- AI Act readiness: “AI-supported” labeling, risk assessments, logging, data source transparency – tailored to the relevant obligations and transition periods.
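To illustrate the red flag list mentioned above, here is a minimal sketch of a client-side check that runs before a prompt leaves the device for the approved enterprise tenant. The regex patterns are illustrative assumptions only; a real deployment would rely on the company's DLP classifiers and the use case catalog.

```python
import re

# Illustrative red flag patterns; a real deployment would use the company's
# DLP classifiers and the use case catalog, not this short list.
RED_FLAG_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN-like string": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "phone number": re.compile(r"\+?\d[\d /().-]{8,}\d"),
    "secret keyword": re.compile(r"(?i)\b(confidential|vertraulich|password|api[_-]?key)\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the red flags found in a prompt; an empty list means it may be sent."""
    return [name for name, pattern in RED_FLAG_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    findings = check_prompt(
        "Please rewrite: contact max.mustermann@example.com, IBAN DE89370400440532013000"
    )
    if findings:
        print("Blocked, red flags found:", ", ".join(findings))
    else:
        print("No red flags, prompt may go to the approved tenant.")
```

Such a check does not replace the dual-control review of outputs; it only catches obvious red flags before sensitive content leaves the company environment.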
Pros & cons: High security with simultaneous usability, but implementation costs (technology, contracts, BV).
Model modules for guidelines, contracts and technology
Note: Formulations are intended as practical building blocks and must be adapted to the size of the company, sector, works council situation and existing policies.
Policy principle
- Scope and objective: This policy regulates the business use of AI systems. Employees’ private accounts may not be used to process company information or personal data. The aim is to ensure compliance with data protection law (in particular Art. 5, 6, 28, 32, 35, 44 et seq. GDPR) and the protection of business secrets (Section 2 No. 1 b GeschGehG).
- Categorization of information: Information is divided into the classes public, internal, confidential and strictly confidential. Entries into AI systems are only permitted for the “public” and “internal” classes, provided they contain no personal references. “Confidential” and “strictly confidential” content is generally excluded.
- Prohibited content: It is prohibited to enter personal data (including special categories within the meaning of Art. 9 GDPR), customer data, source code, passwords, access tokens, financial/price lists, roadmaps, internal legal documents or confidential third-party data into AI systems.
- Permitted use: Generic formulation, structuring and ideation aids without personal reference are allowed, using approved company licenses with an opt-out from training.
- Approval procedure: Use cases that are not covered require prior approval from data protection, information security and – where relevant – the works council (check whether a DPIA is required).
- Review and labeling: AI-generated content is always reviewed by experts; external use is labeled where required by law or agreed by contract.
Contract modules
Data processing agreement (Art. 28 GDPR) – minimum points vis-à-vis the AI provider:
- Subject matter/type/purpose of processing; categories of data/data subjects; duration.
- TOMs (including encryption at rest/transport, client separation, key management, role models, incident handling, sub-processor approval).
- Sub-processors: list, pre-approval procedure, information obligations in the event of changes.
- Data deletion/return: deadlines, formats, proof.
- Audit and information rights; support with data subject rights, DPIA, notifications.
- Third country transfers: DPF certification or SCCs + TIA, supplementary measures.
Tip: Many enterprise AI offerings provide a training opt-out, data residency and zero-retention modes. Without these options, confidential data must not be processed there.
Company agreement
- Purpose and validity: Efficiency gains through defined AI applications, no performance/behavior profiling.
- Permitted tools/use cases: Whitelist, change management.
- Data protection/TOMs: logging scope, pseudonymization, deletion concept, access only for defined roles.
- Transparency/information: informing the workforce, documentation, training.
- Monitoring/reporting: Aggregated usage reporting, no individual monitoring; procedure for violations; incident management.
- Evaluation: Review after 12 months or in the event of legislative changes (take AI Act Roadmap into account).
Technical protective measures
- Identities: SSO/MFA, conditional access, role-based approvals.
- Data flow control: DLP rules in the browser/end device, clipboard control for sensitive classes, secret scanner in IDEs/repos.
- Network: Proxy release only for whitelisted domains of the released providers; block for known public AI endpoints.
- Client protection: Separate tenants, key sovereignty; logging with data-saving pseudonymization (see the sketch after this list).
- Sandboxing: Internal “AI sandboxes” with synthetic/depersonalized data for experiments.
- Lifecycle: Version control for prompts/outputs, binding review checklists, archiving according to retention periods.
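As an illustration of data-saving pseudonymization for usage logs or sandbox data, the following minimal sketch replaces identifiers with stable HMAC-based pseudonyms. The key handling, field selection and names are assumptions and would have to follow the company's key management and TOMs.

```python
import hashlib
import hmac

# In production the key comes from the key management system, never from source code.
PSEUDONYMIZATION_KEY = b"replace-with-managed-secret"


def pseudonymize(value: str) -> str:
    """Return a short, stable pseudonym for a personal identifier."""
    digest = hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"), hashlib.sha256)
    return "pseud_" + digest.hexdigest()[:16]


def log_event(user_email: str, tool: str, use_case: str) -> dict:
    """Build a usage log record that contains no clear personal identifiers."""
    return {
        "user": pseudonymize(user_email),  # the same user always maps to the same pseudonym
        "tool": tool,
        "use_case": use_case,
    }


if __name__ == "__main__":
    print(log_event("jane.doe@example.com", "enterprise-llm", "text_optimization"))
```

Because the pseudonym is stable, aggregated usage reporting remains possible without storing clear names, in line with the “no individual monitoring” rule of the company agreement.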
Training & communication
- Case studies instead of walls of legal text: what is allowed in prompts – and what is not?
- “Red flags”: personal references, customer lists, pricing models, source code, secret agreements, health information.
- Alternative courses of action: Internal templates, pseudonymization, synthetic dummies, secure enterprise models.
- Reporting channels: low-threshold incident reporting (e.g. for a prompt sent in error), a blame-free error culture – combined with clearly regulated remedial action.
Conclusion
Allowing the private use of AI for corporate purposes creates a whole host of legal and security risks: missing data processing agreements, unclear third-country transfers, loss of confidentiality protection, conflicts under works constitution law and a lack of demonstrable due diligence. Two approaches are viable: a clear ban with technical enforcement, or controlled authorization (“secure enablement”) via approved company accounts, backed by data processing agreements, technical protective measures, a works agreement and training.