Issue 01 — March 2026
The European magazine on private AI
Regulation

AI-generated content must be labelled from 2 August 2026. What changes for businesses

The European Commission published the second draft Code of Practice on AI content labelling. From 2 August 2026, watermarks, metadata and icons become mandatory. Here's what businesses producing AI content need to do.

AI Act · Code of Practice · AI labelling · deepfake · watermark · European Commission

The second draft: what happened on 5 March 2026

On 5 March 2026, the European Commission published the second draft Code of Practice on marking and labelling AI-generated content.

This isn’t an academic draft: it’s the document that will define how businesses must mark text, images, audio and video produced with generative AI. The final version arrives in June 2026. The obligation kicks in on 2 August 2026.

What the AI Act says (Article 50)

Article 50 of the AI Act imposes two distinct obligations:

For providers of generative AI systems (Article 50(2)):

  • Generated content must be marked in a machine-readable format
  • Watermarks and secured metadata are required; fingerprinting is optional
  • The system must include marking by default

For deployers of generative AI systems (Article 50(4)):

  • Deepfakes (audio, video, images simulating real people) must be visibly labelled
  • AI-generated text on matters of public interest must carry a clear label

The Code of Practice: technical details

Section 1 — For providers

A two-layer marking approach:

Two layers, two functions:

  • Secured metadata: AI generation information embedded in the file, using open standards and non-removable metadata
  • Watermark: an invisible signal in the content itself, manipulation-resistant and verifiable

Plus: generation logging, verification protocols, and a proposed standardised EU icon for uniform labelling across the Union.
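To make "secured metadata" concrete, here is a minimal sketch of a signed provenance manifest for one generated file. The field names, the HMAC scheme and the key handling are illustrative assumptions, not what the Code of Practice prescribes; real deployments would follow an open provenance standard (C2PA is one widely cited candidate) rather than a hand-rolled format.

```python
import hmac, hashlib, json

# Illustrative sketch only: a signed metadata record asserting that a file
# is AI-generated. Field names and the HMAC scheme are assumptions, not
# the Code of Practice specification.

SECRET_KEY = b"provider-signing-key"  # hypothetical provider-held key

def make_manifest(file_sha256: str, model: str, timestamp: str) -> dict:
    """Build a metadata record declaring the file AI-generated, then sign it."""
    payload = {
        "ai_generated": True,
        "content_hash": file_sha256,
        "model": model,
        "generated_at": timestamp,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(manifest: dict) -> bool:
    """Recompute the signature so any tampering with the metadata is detectable."""
    claimed = manifest.get("signature", "")
    payload = {k: v for k, v in manifest.items() if k != "signature"}
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

m = make_manifest("3a7bd3e2...", "example-image-model", "2026-08-02T09:00:00Z")
print(verify_manifest(m))   # True for an untouched manifest
m["model"] = "edited"
print(verify_manifest(m))   # False once any field is altered
```

The point of the signature is the "non-removable" requirement: plain metadata can be stripped or rewritten silently, whereas signed metadata at least makes tampering detectable by anyone holding the verification key.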

Section 2 — For deployers

Obligations by content type:

  • Deepfakes (video, audio, images of people): mandatory visible label
  • AI text on matters of public interest: clear disclaimer
  • Artistic, creative, satirical, fictional works: simplified regime
  • Content under human editorial control: simplified regime
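The deployer-side decision above can be sketched as a simple mapping from the draft's categories to labelling regimes. The function, the labels and the precedence between overlapping categories (e.g. a satirical deepfake) are illustrative assumptions, not the Code's official logic.

```python
# Sketch of the Section 2 deployer decision, using the categories from the
# draft table. Labels and category precedence are assumptions, not official.

def deployer_obligation(is_deepfake: bool,
                        public_interest_text: bool,
                        artistic_or_satirical: bool,
                        human_editorial_control: bool) -> str:
    """Map a piece of AI-generated content to its labelling regime."""
    if is_deepfake:
        return "mandatory visible label"
    if artistic_or_satirical or human_editorial_control:
        return "simplified regime"
    if public_interest_text:
        return "clear disclaimer"
    return "no Article 50(4) obligation"

print(deployer_obligation(True, False, False, False))   # mandatory visible label
print(deployer_obligation(False, True, False, False))   # clear disclaimer
print(deployer_obligation(False, True, False, True))    # simplified regime
```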

Key dates

  • 5 March 2026: second draft published
  • 30 March 2026: feedback deadline
  • June 2026: final Code of Practice
  • 2 August 2026: Article 50 AI Act obligations take effect

What this means for businesses

If you produce content with generative AI

From 2 August 2026, if your company uses ChatGPT, Claude, Midjourney or any AI tool to produce externally published content, you must assess whether you fall under the obligations:

  • Website text, blog posts, press releases → disclaimers needed if they touch matters of public interest
  • AI-generated marketing images → must carry watermarks and metadata
  • Synthetic video or audio → mandatory labelling, especially when showing real people
  • Internal documents → not covered

The privacy angle

There’s an aspect many businesses overlook: to produce content with ChatGPT or other cloud tools, you send company data to the provider’s servers. Briefings, strategies, market data, client information: it all passes through external infrastructure.

With on-premise AI solutions like ORCA by HT-X, content is generated within the company infrastructure. No data leaves. And marking can be implemented directly in the internal workflow, with full control over metadata and watermarks.

How to prepare

  1. Inventory: identify all business processes using generative AI for external content
  2. Classification: determine which content falls under obligations
  3. Tools: verify your AI tools support watermarking and metadata
  4. Internal policy: define a company policy on AI use for content production
  5. Training: ensure content producers know the obligations
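Steps 1 and 2 above can be sketched as a small inventory-and-flagging pass. The record fields and the scope rules are illustrative assumptions drawn from the obligations described earlier in this article, not a legal classification tool.

```python
# Minimal sketch of steps 1-2: inventory AI-assisted outputs, then flag
# which fall under Article 50 obligations. Field names and scope rules
# are illustrative assumptions, not legal advice.

from dataclasses import dataclass

@dataclass
class AiContentUse:
    process: str           # e.g. "marketing blog"
    published: bool        # does the output leave the company?
    kind: str              # "text" | "image" | "audio" | "video"
    public_interest: bool  # text on matters of public interest?

def needs_review(u: AiContentUse) -> bool:
    """Internal-only content is out of scope; published text counts only if it
    touches matters of public interest; images, audio and video always do."""
    if not u.published:
        return False
    if u.kind == "text":
        return u.public_interest
    return True

inventory = [
    AiContentUse("internal meeting notes", False, "text", False),
    AiContentUse("press release drafting", True, "text", True),
    AiContentUse("ad visuals", True, "image", False),
]
flagged = [u.process for u in inventory if needs_review(u)]
print(flagged)  # ['press release drafting', 'ad visuals']
```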

The deadline is 2 August 2026. Five months isn’t long to adapt processes and tools.

Frequently asked questions

Does my company have to label AI-generated content?

It depends on context. If your company publishes AI-generated text on matters of public interest (press releases, reports, editorial content), it must be labelled from 2 August 2026 under Article 50(4) of the AI Act. Internal content, emails and working documents are not covered. AI-generated images, audio and video always require watermarks and metadata.

What is the Code of Practice on AI content labelling?

It's a voluntary code of conduct developed by the European Commission with independent experts to facilitate compliance with Article 50 of the AI Act. It defines technical standards for marking AI-generated content: watermarks, secured metadata, visible icons and verification protocols. The final version is expected in June 2026, with application from 2 August 2026.

What are the penalties for non-compliance?

The AI Act provides for fines of up to 15 million euros or 3% of annual global turnover for violations of the transparency obligations in Article 50. The Code of Practice is voluntary, but adhering to it helps demonstrate compliance and reduces sanction risk.

Looking for a private ChatGPT for your business?

ORCA is the on-premise AI platform by HT-X (Human Technology eXcellence): your data stays yours, GDPR and AI Act compliant.

Discover ORCA