
Regulation of deepfakes and disinformation
The digital age has revolutionized the way we consume content. From social media to video platforms, access to visual information is easier and faster than ever. These innovations, however, have also raised new challenges around veracity and accountability. One of the most worrying phenomena is the deepfake: visual content created with artificial intelligence that can alter images and videos so realistically that they are almost impossible to distinguish from the originals.
Visual content that looks real
Deepfakes are a form of manipulated content created with artificial intelligence and machine learning. Using neural networks and advanced algorithms, it is possible to generate extremely realistic images, audio, and video that mimic a person's appearance and voice. This content can be used to fabricate speeches, performances, or even historical moments that never happened.
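For readers curious about the mechanics, the classic face-swap technique trains a single shared encoder together with one decoder per identity; at inference time, swapping the decoders renders one person's face with another person's expression. The Python sketch below (assuming PyTorch is installed) is a deliberately minimal illustration of that architecture, with arbitrary layer sizes and placeholder inputs, not a description of any particular tool.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent code; one decoder is trained per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, two identity-specific decoders.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training reconstructs each person through their own decoder; the "swap"
# happens at inference: encode a frame of person A, decode with person B.
face_a = torch.rand(1, 3, 64, 64)     # placeholder for a preprocessed frame
swapped = decoder_b(encoder(face_a))  # A's expression rendered as B's face
```

The danger comes precisely from the shared encoder: because it must represent both faces in a single latent space, expressions and head poses transfer between identities almost for free once the model is trained.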
The most disturbing thing about deepfakes is that, due to their realism, they can deceive the audience, making it nearly impossible to distinguish between what is real and what is manipulated. This is especially dangerous when it comes to information that circulates quickly on social media, where users don’t always verify the veracity of content before sharing it. Deepfakes can be used to manipulate opinions, defame public figures, or even influence electoral processes.
Regarding regulation, Article 18 of the Spanish Constitution guarantees the right to honor, to personal and family privacy, and to one's own image. Any manipulation or distortion of a person's image through deepfakes without their consent could therefore be considered a violation of this right. Organic Law 1/1982 on the civil protection of the right to honor, personal and family privacy, and one's own image reinforces this, treating the alteration of an image without consent as an unlawful act.
Legal challenges to regulating disinformation
The LOPDGDD (Organic Law 3/2018 on the Protection of Personal Data and Guarantee of Digital Rights) establishes the protection of personal data, including images and recordings. If deepfakes are used to alter or create content that affects a person's integrity or manipulates their identity, the right to the protection of their personal data may be violated.
However, existing legislation on disinformation was not designed to address deepfakes specifically. Laws regulating fake news or media manipulation typically focus on written or traditional media content and do not directly address these new forms of visual manipulation. This leaves the regulation of deepfakes in uncertain territory, often without clear legal precedent.
This is where the new Artificial Intelligence (AI) Law comes into play. On March 11, 2025, the Government approved the draft AI governance law, with the aim of ensuring the ethical, inclusive, and beneficial use of this technology. The AI Law aims to provide a legal framework that addresses the risks associated with AI algorithms, such as those that generate deepfakes, and to ensure transparency, reliability, and fairness in their implementation.
One of the law's main concerns will be the classification of high-risk AI technologies, imposing stricter obligations on those that generate or distribute manipulative visual content such as deepfakes. In particular, special attention will be paid to protecting the integrity of personal data, which will help regulate more effectively the use of AI-based tools to manipulate images and videos. The AI Law also encourages the creation of mechanisms to identify and report this fake content, making technology platforms take greater responsibility for its distribution.
On the other hand, there is a jurisdictional issue. Deepfakes, like other digital content, can be created and distributed from anywhere in the world. This further complicates the task of enforcing local disinformation laws, since distribution platforms such as social media operate globally and are not always subject to the same regulations in different countries.
Likewise, the liability of digital platforms is also a topic of debate. Social media and online video services, such as YouTube and Facebook, play a key role in the distribution of content, including deepfakes. The LSSI (Law 34/2002 on Information Society Services and Electronic Commerce) establishes the liability of information society service providers for illegal content. If a platform facilitates the distribution of deepfakes, it could be held liable if it fails to take adequate measures to prevent their dissemination.
Detection and reporting mechanisms
To combat the misinformation generated by deepfakes, it is crucial to have effective detection and reporting mechanisms. Fortunately, technology has also advanced in this area. Researchers and technology companies are developing artificial intelligence tools capable of identifying deepfakes by analyzing inconsistencies in the image or sound. These tools look for patterns that are difficult to fake, such as eye movements, blinking, or micro-details in the skin that generators cannot yet reproduce accurately.
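As an illustration of the kind of signal such tools analyze, early deepfakes blinked far less than real people, so one classic heuristic tracks the eye aspect ratio (EAR) across video frames and flags clips whose blink rate falls outside human norms. The Python sketch below implements only that single cue; the per-frame eye landmarks are assumed to come from an external face-landmark detector, and the thresholds are illustrative, not validated.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (Soukupova & Cech): eye height over eye width.

    `eye` is a (6, 2) array of 2D landmarks around one eye, ordered
    [left corner, top-left, top-right, right corner, bottom-right, bottom-left].
    """
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def blink_rate(ear_series, fps, blink_threshold=0.21):
    """Count blinks as dips of the EAR below a threshold; return blinks per minute."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < blink_threshold and not below:
            blinks, below = blinks + 1, True
        elif ear >= blink_threshold:
            below = False
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(ear_series, fps, lo=5.0, hi=40.0):
    """Resting humans blink roughly 15-20 times per minute; a clip far
    outside that band is merely suspicious, never proof of a fake."""
    return not (lo <= blink_rate(ear_series, fps) <= hi)
```

A rate outside the normal band is only a weak signal: newer generators blink far more naturally, which is why production detectors combine many such cues with learned classifiers.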
In conclusion, regulating deepfakes and disinformation is a significant challenge that requires joint action by governments, institutions, technology platforms, and users. While the technology has advanced impressively, solutions to counter the risks associated with deepfakes are still in their infancy. With appropriate regulation, the deployment of detection technologies, and the promotion of digital education, however, we can mitigate the negative effects of deepfakes and protect the integrity of the information circulating in digital society.
At Letslaw, we are expert lawyers in digital law, so we can advise you on anything you need.

Carmen Araolaza is a lawyer specializing in digital law, intellectual property, and data protection.
She holds a Law degree with an ICT specialization from the Universidad de Deusto and completed her training with a master's degree in legal practice and another in industrial property, competition, and new technologies at ISDE Law & Business School and PONS. She advises on e-commerce, digital marketing, and competition law, with a dynamic approach oriented toward the technology sector.