Mandatory Labelling of AI-Generated Content: A New Era of Digital Responsibility

Artificial intelligence has evolved from being an experimental tool to becoming a constant presence in the digital sphere. It is increasingly common for texts, images, videos or audio we consume on a daily basis to be generated entirely or partially by automated systems. In response to this reality, European lawmakers have introduced a new legal framework that imposes transparency obligations, among which the mandatory labelling of AI-generated content stands out.

We are witnessing a regulatory shift that redefines how digital information is produced and distributed, with a clear focus on user protection and the integrity of the online environment.

What the legal framework requires regarding AI labelling

The obligation to disclose the use of artificial intelligence in content creation is primarily governed by two key European regulations: the Artificial Intelligence Act, approved in 2024, and the Digital Services Act, fully applicable since February of the same year. Both texts are part of the European Union’s strategy to ensure the safe, transparent and ethical use of AI technologies.

The Artificial Intelligence Act sets out that users must be clearly informed when they are interacting with an AI system or when they are exposed to content generated or modified by such systems. This requirement goes beyond conversational tools and includes images, videos, audio, and text that could mislead users about their origin.

The Digital Services Act complements this by reinforcing transparency obligations in the context of digital platforms. Although broader in scope, it includes the need to inform users when content is presented through automated processes, particularly when such content may influence decision-making, as is the case with advertising or algorithm-driven content curation.

Together, these regulations require service providers, AI developers, content creators and online platforms to implement effective mechanisms that clearly inform users when content has been generated by machines rather than by humans.
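As a purely illustrative sketch of what such a mechanism might look like in practice, the snippet below attaches a machine-readable disclosure record to a content item before publication. The field names and schema are hypothetical assumptions for illustration; neither the AI Act nor the Digital Services Act prescribes a specific technical format for these labels.

```python
# Hypothetical sketch of an AI-disclosure label. The schema (field names,
# notice text) is illustrative only and not prescribed by EU regulation.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ContentItem:
    body: str
    ai_generated: bool
    generator: Optional[str] = None  # name of the AI system, if any


def build_disclosure(item: ContentItem) -> dict:
    """Return a metadata record disclosing whether content is AI-generated."""
    record = {
        "ai_generated": item.ai_generated,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    if item.ai_generated:
        # A clear, user-facing notice plus the originating system.
        record["notice"] = "This content was generated by an AI system."
        record["generator"] = item.generator or "unspecified"
    return record


# Example: an AI-written product description receives a visible notice,
# while human-authored content carries no such label.
label = build_disclosure(
    ContentItem(body="...", ai_generated=True, generator="example-llm")
)
```

A real implementation would also surface the notice in the user interface itself, since the regulations require disclosure that is clear and accessible to users, not only machine-readable metadata.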

When AI labelling is mandatory

The law does not impose a blanket obligation to label every piece of AI-generated content. Instead, it applies in specific cases where there is a significant risk of confusion, manipulation or infringement of users’ rights.

Labelling becomes mandatory especially when content simulates a real person, creates synthetic images or videos that may appear authentic, or offers recommendations in sensitive areas such as healthcare, justice, politics or employment. It is also required when users engage with automated systems that mimic human interaction, such as chatbots or virtual assistants, as well as with AI-generated written content. The same applies to political, institutional or commercial messages created by artificial means that could mislead users regarding their origin.

In all these situations, the regulations demand that the use of AI be disclosed in a way that is clear, accessible and understandable to any user, regardless of their technical knowledge or digital literacy.

Protecting users from AI-generated content

The purpose of these measures is not to restrict the use of artificial intelligence, but to ensure a digital environment in which fundamental rights are respected. European lawmakers have identified significant risks associated with identity impersonation, misinformation, public opinion manipulation and a general erosion of trust in digital content. To mitigate these risks, a general principle of transparency has been established, requiring that AI-generated content be properly identified.

This approach reinforces citizens’ right to accurate information, protects their ability to make informed decisions, and safeguards the integrity of public discourse. It also imposes specific responsibilities on platforms and content distributors, who must ensure that users can clearly recognise when content has been artificially produced.

The entry into force of the Artificial Intelligence Act and the Digital Services Act marks the beginning of a new era in the governance of digital ecosystems. It is no longer enough to produce quality content or to adopt cutting-edge tools; it is now essential to do so responsibly, ethically and in compliance with the law.
