Responsibility of digital platforms in the fight against illegal content on social media
Digital platforms have transformed the way we interact, share information and communicate. However, their global reach and the freedom they offer users have created a significant challenge: the proliferation of illicit content.
Role of platforms in detecting illegal content
Digital platforms play an essential role in combating illegal content, such as hate speech, material inciting terrorism, child sexual exploitation and copyright infringement. Although these companies are not the direct creators of the content posted on their sites, their infrastructure allows for mass dissemination, which places them at the centre of the debate on responsibility.
To address this issue, many platforms have implemented automated detection tools based on AI and machine learning. These systems analyze large volumes of data, identifying patterns or terms associated with illicit activities. For example, image-analysis algorithms can identify and block explicit material that infringes the law.
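By way of illustration only, and not as a description of any particular platform's system, the sketch below shows one simplified form of automated detection: matching uploaded images against a blocklist of hashes of previously identified illegal material. The function names and the use of SHA-256 are assumptions made for the example; production systems typically rely on perceptual hashing and machine-learning classifiers rather than exact hashes.

```python
import hashlib

# Hypothetical blocklist of hashes of previously identified illegal images.
# Real systems generally use perceptual hashes (robust to re-encoding and
# cropping) and keep the list server-side; SHA-256 is used here only to
# keep the example self-contained.
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_illegal(image_bytes: bytes) -> bool:
    """Return True if the image matches a hash on the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_ILLEGAL_HASHES

def handle_upload(image_bytes: bytes) -> str:
    """Block matches outright; pass everything else on for publication
    or further automated and human review."""
    return "blocked" if is_known_illegal(image_bytes) else "published"
```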
In addition, platforms often establish dedicated content moderation teams. These units, made up of human experts, work in conjunction with automated systems to assess posts flagged by users or identified by algorithms. A prime example is Facebook, which has specialized teams to address issues such as terrorism and child abuse.
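To make this combined workflow more concrete, here is a minimal, hypothetical sketch of a review queue in which items flagged by users or by an automated classifier are prioritised for human moderators. All class and field names are invented for the example and do not reflect any real platform's tooling.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Flag:
    post_id: str
    source: str      # "user_report" or "classifier"
    reason: str      # e.g. "terrorism", "child_abuse"
    score: float     # classifier confidence; user reports set to 1.0

@dataclass
class ReviewQueue:
    pending: List[Flag] = field(default_factory=list)

    def add(self, flag: Flag) -> None:
        self.pending.append(flag)

    def next_for_review(self) -> Optional[Flag]:
        # Human moderators see the highest-risk items first.
        if not self.pending:
            return None
        self.pending.sort(key=lambda f: f.score, reverse=True)
        return self.pending.pop(0)

queue = ReviewQueue()
queue.add(Flag("post-123", "user_report", "terrorism", 1.0))
queue.add(Flag("post-456", "classifier", "child_abuse", 0.92))
print(queue.next_for_review())  # the user report is reviewed first
```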
However, the role of platforms is not limited to passive moderation. In many cases, these companies have an obligation to cooperate with judicial and law enforcement authorities, providing information about suspicious users or removing content in accordance with legal mandates. The Digital Services Act (DSA) in the European Union is a regulatory framework that underlines this obligation, requiring large platforms to exercise proactive diligence in combating illegal content.
Challenges in the fight against illegal content
Despite the measures implemented, the detection and removal of illegal content presents technical, legal and ethical challenges. One of the main problems is the definition and delimitation of illegal content, which varies significantly between jurisdictions.
Another important challenge is the balance between freedom of expression and censorship. Platforms must ensure that their efforts to combat illicit content do not result in the unjustified removal of legitimate content.
On the technical side, AI-based detection systems are not foolproof. These algorithms can generate false positives, flagging legitimate content as illegal, as well as false negatives that let harmful material through. Human moderation, while essential, faces limitations of scale and speed, especially on platforms with millions of daily active users.
Furthermore, the use of end-to-end encryption in messaging apps makes it difficult to identify illicit content. While encryption protects users’ privacy, it can also be exploited by malicious actors to distribute illegal material, leaving platforms with few tools to intervene.
Criminal and civil liability
The issue of liability of digital platforms for the dissemination of illegal content is a matter of intense legal debate. Traditionally, many jurisdictions have adopted the approach that platforms are intermediaries and therefore not responsible for the content posted by their users, unless they have actual knowledge of its illegality and do not act to remove it.
In criminal terms, platforms’ liability can arise when it is proven that they acted negligently or deliberately allowed the dissemination of illegal content. For example, if a platform repeatedly ignores reports of criminal activity on its network, it could face sanctions. However, such criminal liability is rare and, in most cases, national laws focus on punishing the authors of illegal content rather than the platforms.
In civil matters, platforms can be held liable for damages caused by illegal content if it is proven that they did not take the necessary measures to prevent its dissemination. This includes cases related to copyright violations, defamation or the publication of false information.
A notable example is the European Union, where the legal framework has evolved towards a stricter approach with the entry into force of the DSA. This regulation establishes clear obligations for platforms regarding the removal of illegal content, imposing significant fines for non-compliance.
Digital platforms face a fundamental and complex role in the fight against illicit content. While they have made significant progress in implementing technologies and policies to address this issue, challenges persist in technical, legal and ethical terms.
Ultimately, ensuring a safe digital environment requires a joint effort by platforms, governments and users, seeking a balance between freedom of expression and preventing abuse on social media. The responsibility lies not only with the platforms, but also with collaboration between all actors in the digital ecosystem.
Since starting her degree, Carmen Araolaza has been immersed in technology law, having studied Law with an ICT Specialisation at the Universidad de Deusto.
She is passionate about digital law, in particular e-commerce, intellectual property, data protection, competition and digital marketing.