The European Commission has launched work on a new code of practice for the marking and labelling of AI-generated content, laying the groundwork for how deepfakes and synthetic media will be disclosed under the EU’s landmark AI Act.
What the new code will do
Under the AI Act, content created by artificial intelligence (including text, images, video and audio) must be clearly marked as such to ensure users can distinguish it from human-made material. The upcoming code of practice will serve as a voluntary tool to help developers and deployers comply with these transparency obligations, aiming to reduce misinformation, fraud, impersonation and consumer deception while strengthening trust in the information ecosystem.
The code will include practical guidelines on how to label AI-generated content in machine-readable formats, allowing platforms and media outlets to detect synthetic material more easily. It will also address deepfakes, requiring deployers of such content, especially in contexts of public communication or political relevance, to disclose AI involvement explicitly.
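To make "machine-readable labelling" concrete, here is a minimal illustrative sketch of what such a marker could look like. The code of practice has not yet defined a format, so this is purely an assumption for illustration: it borrows the existing IPTC Digital Source Type vocabulary, which already defines a value for AI-generated media (`trainedAlgorithmicMedia`); whether the EU code will build on IPTC, C2PA provenance manifests, or another scheme remains open.

```python
import json

# Value from the real IPTC Digital Source Type vocabulary; its use here as
# the EU marker is an assumption, not something the code of practice specifies.
AI_GENERATED = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def label_asset(metadata: dict) -> dict:
    """Return a copy of an asset's metadata carrying a machine-readable AI marker."""
    labelled = dict(metadata)
    labelled["digitalSourceType"] = AI_GENERATED
    return labelled

def is_ai_generated(metadata: dict) -> bool:
    """Check whether an asset's metadata carries the AI-generation marker."""
    return metadata.get("digitalSourceType") == AI_GENERATED

# Hypothetical asset metadata for illustration.
asset = {"title": "Synthetic street scene", "creator": "image-model-v1"}
labelled = label_asset(asset)
print(json.dumps(labelled, indent=2))
print(is_ai_generated(labelled))  # True: marker present
print(is_ai_generated(asset))     # False: original metadata unchanged
```

A marker of this kind is what would let platforms detect synthetic material automatically, rather than relying on visible captions alone.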
How it will be developed
The process was formally launched today during a kick-off plenary meeting bringing together independent experts appointed by the European AI Office. Over the next seven months, these experts will lead an inclusive, stakeholder-driven drafting process, drawing on input from public consultations and contributors selected through an open call.
When the rules take effect
The transparency obligations for AI-generated content will become fully applicable in August 2026, complementing other provisions of the AI Act, such as those regulating high-risk systems and general-purpose AI models.
By promoting clear labelling and traceability of synthetic media, the European Commission hopes to ensure that Europe’s digital environment remains transparent and trustworthy as artificial intelligence becomes an integral part of information production and public discourse.