AI-Generated Content: Should It Be Labeled?
AI-Generated Content on Social Media
The rise of Artificial Intelligence (AI) is transforming the social media landscape, opening up new possibilities for content creation, user interaction, and advertising. Many users, including influencers, turn to these tools for two reasons: they save time and help them reach a wider audience.
One of the most striking applications of AI in social media is the creation of virtual influencers. These digital avatars, like Shudu or Aitana López, interact with users and generate content that is sometimes indistinguishable from that created by humans.
However, these practices can lead to misinformation if not done transparently and responsibly.
The AI Regulation: Transparency Obligations for AI Systems That Generate Content
In this regard, on May 21 the Council of the European Union gave its final approval to the Artificial Intelligence Regulation, which sets out the rules for the placing on the market, putting into service, and use of AI systems in the European Union.
The primary objective of the regulation is to promote the development and use of AI in the EU while ensuring a high level of protection, especially in terms of safety and fundamental rights of individuals.
Among its provisions, we find the Transparency Obligations for certain AI systems.
In Article 52, the Regulation states that “Users of an AI system that generates or manipulates image, audio, or video content that appreciably resembles existing persons, objects, places, or other entities or events, and that could falsely appear to a person to be authentic or truthful (a ‘deep fake’), must disclose that the content has been artificially generated or manipulated.”
In other words, all AI-generated publications that could make us believe they are real must be clearly labeled or marked in such a way that users can distinguish them from content created by humans.
To whom does this obligation apply?
- To AI system providers, that is, companies or entities that develop or make available to third parties AI systems that generate content for social media.
- To users of AI systems, that is, individuals or entities that use AI systems to generate content for social media and then publish it.
The Case of Meta
Meta announced last April that it will label any image or audio content generated by artificial intelligence in order to combat misinformation.
According to Meta, this identification can happen in two ways:
- When Meta detects the use of AI. Any content that contains standard signals indicating it was generated by AI will carry the label “Created with AI”, including content created using Meta’s own AI tools.
- When users tag their own AI-generated content. Social media users can also mark their own content with the same “Created with AI” label when they publish content generated or modified by AI.
Meta clarifies that labeling will only be mandatory when the published content contains photorealistic video or realistic-sounding audio, in line with the criteria of the AI Regulation.
Some examples that require labeling are:
- A realistic-looking video of a group of people walking.
- An audio recording of two people talking.
- A song created with AI-generated voices.
- A reel with a realistic AI-generated voiceover.
Towards Responsible Use of AI on Social Media
The use of AI on social media has the potential to transform the way we communicate and interact with the world. However, it is crucial that this advancement is carried out responsibly and transparently. The Artificial Intelligence Regulation is an important step in this direction, but much remains to be done to ensure its use is ethical.
IP/IT Lawyer