
Legal Notice: Artificial Intelligence


New Transparency Requirements for the Use of Artificial Intelligence: What Must Be Included in Your Terms and Conditions and Privacy Policy

Introduction and Purpose

The purpose of this document is to inform operators of digital platforms and controllers of personal data about the new transparency obligations arising from Regulation (EU) 2024/1689 of June 13, 2024, establishing harmonized rules on artificial intelligence (hereinafter the “AI Act”), as well as its interaction with current regulations on personal data protection, in particular Regulation (EU) 2016/679 of April 27, 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (hereinafter the “GDPR”) and Organic Law 3/2018 of December 5 on the Protection of Personal Data and the Guarantee of Digital Rights (hereinafter the “LOPDGDD”).

The AI Act establishes a phased framework of obligations, which unfolds over different time periods:

  • February 2, 2025: entry into force of the prohibitions regarding AI practices considered to pose an unacceptable risk.
  • August 2, 2025: application of the rules regarding general-purpose AI models and the penalty regime.
  • August 2, 2026: full implementation of the transparency obligations established in Article 50 of the AI Act, which are in addition to the information requirements set forth by the GDPR and the LOPDGDD and are applicable in a complementary manner.

 

In particular, when AI systems involve the processing of personal data, it must be ensured that the information provided to users is clear, accessible, understandable, distinct, and sufficient, without this obligation replacing the technical transparency mechanisms required by the AI Act.

This document is intended to serve as a reference for adapting the Terms and Conditions and Privacy Policy of platforms, with special attention to AI systems, their functionality, automated decisions, and associated risks, thereby ensuring compliance with applicable regulations and the protection of users’ rights.

Legal Analysis

A) Preliminary Considerations

The AI Act introduces a risk-based regulatory approach that directly impacts the transparency obligations of operators who develop, integrate, or use artificial intelligence systems. This approach does not replace, but rather complements, the existing framework for the protection of personal data.

In this regard, it should be noted that transparency obligations in the field of artificial intelligence have a dual dimension:

  • On the one hand, the informational transparency required by the GDPR, which mandates providing the data subject with information about the processing of their personal data in a concise, transparent, intelligible, and easily accessible manner.
  • On the other hand, the technical and functional transparency introduced by the AI Act, which requires informing the user about interactions with AI systems, the nature of the content generated or manipulated, and certain characteristics of the system’s operation.

 

Both dimensions must be interpreted systematically and complementarily, such that compliance with one does not exempt compliance with the other. Consequently, operators must establish information mechanisms that integrate both requirements, avoiding duplication while ensuring a sufficient, consistent level of information adapted to the context of use.

Likewise, these obligations must be framed within the principle of proactive accountability set forth in Article 5.2 of the GDPR, which requires not only substantive compliance with the regulations but also the ability to demonstrate such compliance through internal policies, records of processing activities, traceability of automated decisions, and mechanisms for controlling and supervising AI systems.

Finally, compliance with transparency requirements cannot be addressed in isolation but must be considered in conjunction with the risk-based approach that underpins the AI Act. In this regard, information obligations must be tailored to the nature, purpose, and potential impact of the AI system; transparency alone is insufficient where the system qualifies as high-risk, in which case additional obligations regarding risk management, assessment, and mitigation apply.

It should be noted that supervisory authorities, in particular the European Data Protection Board (EDPB), have emphasized the need to avoid generic or abstract privacy policies, promoting an approach based on the granularity of information and the appropriateness of processing to the risk, especially in environments involving AI technologies.

B) Covered Entities and Scope of Application

The scope of application of the AI Act is broad and covers various operators within the value chain of an artificial intelligence system. In particular, the following are covered:

  • AI system providers, defined as natural or legal persons who develop or place AI systems on the market or put them into service under their own name or brand.
  • Deployers, who use AI systems in the course of their professional or business activities.
  • Importers and distributors, to the extent that they place AI systems on the European Union market.

 

In the specific context of digital platforms, the role of the deployer is particularly relevant, to the extent that they integrate AI systems (their own or third-party) into their services, interacting directly with end users.

From a substantive perspective, the transparency obligations under Article 50 of the AI Act apply, among other scenarios, when:

  • Users interact with AI systems without this fact being evident.
  • Content is generated or manipulated using AI in a way that may mislead users as to its artificial nature (including deepfakes).
  • Emotion recognition systems or biometric categorization of individuals are used.

 

However, these obligations must be interpreted within the framework of the AI Act’s risk-based approach, distinguishing between systems subject only to transparency obligations (limited-risk AI systems) and those classified as high-risk, which are subject to additional requirements regarding governance, documentation, conformity assessment, and supervision.

Furthermore, when the use of AI systems involves the processing of personal data, the GDPR is fully applicable, which requires:

  • The proper identification of the roles of data controller and data processor.
  • The formalization of data processing agreements, in accordance with Article 28 of the GDPR.
  • Ensuring compliance with the principles of lawfulness, fairness, transparency, data minimization, and purpose limitation.

 

Regarding this last point, the use of personal data for the training or improvement of AI models is of particular relevance, as it may constitute a purpose distinct from the provision of the main service, requiring a separate legal basis and a specific compatibility assessment.

It should be emphasized that the use of third-party AI models does not exempt the operator from its legal obligations; it is necessary to evaluate the safeguards offered by such providers, especially regarding the processing of personal data, international transfers, and compliance with the AI Act itself.

C) Active Transparency in Interactions with AI Systems

One of the fundamental pillars of the AI Act is the obligation to ensure active transparency in interactions between artificial intelligence systems and users, which must be integrated into the user experience itself and not limited to inclusion in legal texts.

To this end, the AI Act stipulates that users must be informed, clearly and in a timely manner, when they are interacting with an AI system, unless this circumstance is evident from the context. This obligation is particularly relevant in the case of chatbots, virtual assistants, recommendation systems, or any other interface that simulates or replaces human interaction.

Active transparency involves implementing an information system structured in complementary layers, which includes:

  • A first layer of information within the interface, through visible notices, labels, or indicators at the moment of interaction (just-in-time notice).
  • A second layer in legal texts, particularly in the Terms and Conditions and the Privacy Policy, detailing the system’s characteristics, purposes, and operation.
  • A technical layer, through the use of metadata, watermarks, or other machine-readable mechanisms, especially regarding content generated or manipulated by AI.

 

Regarding the latter, the AI Act requires that certain content generated or altered by AI—particularly content that could be misleading regarding its authenticity—be clearly identifiable as such, thereby preventing user confusion.
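By way of illustration only, the machine-readable layer described above can be as simple as attaching provenance metadata to each AI-generated item before it is delivered. The following Python sketch uses hypothetical field names (“ai_generated”, “provenance”); the AI Act does not prescribe a schema, and industry standards such as C2PA define far richer mechanisms:

```python
import json
from datetime import datetime, timezone

def wrap_ai_output(text: str, model_name: str) -> dict:
    """Attach a machine-readable AI-provenance record to generated content.

    Field names here are illustrative, not mandated by the AI Act;
    standards such as C2PA define richer, signed provenance schemes.
    """
    return {
        "content": text,
        "ai_generated": True,            # explicit flag for downstream consumers
        "provenance": {
            "generator": model_name,     # which system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = wrap_ai_output("Draft reply suggested by the assistant.", "example-model")
print(json.dumps(record["ai_generated"]))  # → true
```

A structure of this kind allows downstream services (and the visible labels in the interface) to be driven from a single source of truth, keeping the technical layer consistent with the notices shown to users.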

From the perspective of the GDPR, this obligation is reinforced when interaction with AI involves the processing of personal data, in which case additional information must be provided regarding:

  • The purposes of the processing and its legal basis.
  • The existence of automated decision-making, including profiling, where applicable.
  • Meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

 

In this regard, active transparency must be understood as an operational principle that requires the controller not only to inform, but also to make the functioning and impact of AI on the user experience understandable, avoiding deceptive or ambiguous practices, such as the use of ambiguous language, excessively complex structures, or interface designs that hinder understanding or access to relevant information (dark patterns).

Finally, in cases where AI systems may have a significant impact on individuals’ rights and freedoms, these measures must be supplemented with impact assessments and, where appropriate, with additional risk mitigation measures, in line with the preventive approach of the AI Act.

Implications for the Platform’s Terms and Conditions

The Terms and Conditions (hereinafter, the “T&Cs”) constitute the ideal legal instrument for regulating the contractual relationship with the user and establishing legitimate expectations regarding the use of artificial intelligence systems within the platform.

To this end, it is not sufficient to include generic references to the use of AI; rather, the T&Cs must incorporate specific clauses that clearly, comprehensibly, and consistently reflect the functionalities, limitations, and responsibilities associated with such systems.

In particular, it is recommended to include, at a minimum, the following elements:

A) Information on Interaction with AI Systems

An explicit clause must be included informing the user that certain features of the service involve interaction with automated systems, such as chatbots, virtual assistants, recommendation systems, or content-generation tools.

Likewise, the degree of AI involvement must be specified in each case, distinguishing between systems that:

  • Are limited to suggesting or recommending content.
  • Prioritize or personalize information.
  • Execute actions or generate results autonomously.

 

This information must be consistent with the transparency mechanisms implemented in the interface.
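As a minimal sketch of how such consistency can be achieved in practice, a just-in-time notice can be prepended to the opening message of any automated interaction. The wording and placement below are illustrative assumptions; the AI Act requires that users be informed they are interacting with an AI system unless this is obvious from context, but it does not prescribe any particular text:

```python
def first_chatbot_message(greeting: str) -> str:
    """Prepend a just-in-time AI disclosure to the opening chatbot message.

    The disclosure wording is a hypothetical example, not a legally
    mandated formula.
    """
    disclosure = "You are chatting with an automated AI assistant."
    return f"{disclosure}\n{greeting}"

print(first_chatbot_message("How can I help you today?"))
```

Centralizing the notice in one helper, rather than hard-coding it per feature, makes it easier to keep the interface aligned with what the T&Cs state.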

B) System Limitations and Disclaimer of Warranties

The T&Cs must expressly state the limitations inherent in AI systems, including:

  • The possibility of errors, biases, or inaccurate results.
  • The absence of guarantees regarding the reliability, completeness, or suitability of the generated results.
  • A warning that, in certain fields (e.g., legal, medical, or financial), the results do not replace qualified professional advice.

 

Likewise, a liability limitation clause may be included, always within applicable legal limits, regarding the use of AI-generated content.

C) Integrity of Transparency Mechanisms

In line with AI Act obligations, the T&Cs must prohibit:

  • Removing, altering, or concealing labels, watermarks, metadata, or other elements that identify content generated or manipulated by AI.
  • Using the platform’s outputs in a manner that could mislead regarding their artificial origin when there is an obligation to identify them.

 

This provision is essential to ensure the effectiveness of the transparency measures implemented by the provider.

D) Intellectual Property and Use of Generated Content

The regime applicable to AI-generated content must be expressly regulated, including:

  • Ownership or, where applicable, the licensing regime for the generated outputs.
  • The terms of use by the user, including the prohibition on infringing third-party rights.
  • The user’s liability for the use, dissemination, or exploitation of such content, particularly in public or professional contexts.

 

E) Responsible Use and Regulatory Compliance

It is recommended to include a clause requiring users to use AI features responsibly, in compliance with applicable regulations, including the AI Act, the GDPR, and any other sector-specific provisions.

In particular, the terms should prohibit the use of the platform for:

  • Generating illegal or misleading content.
  • Violating the fundamental rights of third parties.
  • Circumventing obligations regarding transparency or the identification of AI-generated content.

 

F) Updating AI Features

Given the evolving nature of AI systems, the T&Cs must provide for the possibility of modifying, updating, or removing AI-based features, establishing appropriate mechanisms for informing users when such changes have a significant impact on the service or their rights.

Implications for the Privacy Policy

The Privacy Policy must address the processing of personal data specifically, granularly, and in a manner adapted to the use of AI systems, moving beyond generic approaches and incorporating a dedicated section on the use of artificial intelligence on the platform.

For this purpose, it is recommended to structure the information as follows:

A) Use of AI on the Platform: Functionalities and Purposes

A specific section must be included that clearly identifies:

  • The functionalities that use AI systems (automated support, recommendations, personalization, moderation, fraud prevention, content generation, etc.).
  • The specific purposes of each processing activity, avoiding generic descriptions.
  • The legal basis applicable to each purpose, in accordance with Article 6 of the GDPR.

 

When data is used for training or improving AI models, this purpose must be expressly distinguished from the provision of the main service.

B) Categories of Data Processed

The Privacy Policy must detail the categories of data processed in the context of AI, including:

  • Identifying and registration data.
  • User-provided content (messages, prompts, documents, images, audio).
  • Data derived from interaction (generated results, usage histories).
  • Metadata, technical identifiers, and activity logs.

 

Likewise, it is recommended to distinguish between:

  • Data actively provided by the user.
  • Data collected automatically or inferred from the use of the platform.

 

C) Automated Decision-Making and Profiling

If the platform uses AI systems to make automated decisions that have legal or similarly significant effects, it must explicitly provide information regarding:

  • The existence of such decisions.
  • The logic involved and the main criteria applied.
  • The significance and anticipated consequences for the user.
  • The safeguards available, including the right to obtain human intervention, express one’s point of view, and challenge the decision.

 

D) Roles, Recipients, and AI Providers

The following must be clearly identified:

  • Who acts as the data controller.
  • Which third parties act as data processors, including technology and AI model providers.
  • The possible existence of sub-processors and other recipients of the data.

 

Additionally, information must be provided regarding the safeguards required of such providers, particularly in relation to compliance with the GDPR and the AI Act.

E) International Data Transfers

In the event that personal data is transferred outside the European Economic Area, the following must be indicated:

  • The existence of such transfers.
  • The legal mechanism underpinning them (adequacy decisions, standard contractual clauses, binding corporate rules, etc.).
  • The applicable safeguards and how to obtain further information regarding them.

 

F) Retention Periods

The retention periods applicable to the following must be specified:

  • Conversations and content generated by AI.
  • Prompts and data entered by the user.
  • Security, audit, and system control logs.

 

Furthermore, the duration of these periods must be justified based on the purposes of the processing.
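One workable way to keep retention periods both specified and justified is to maintain them as a single structured schedule that every deletion routine consults. The categories, durations, and justifications below are hypothetical examples for illustration, not legal requirements:

```python
# Illustrative retention schedule mapping AI-related data categories to
# periods and the purpose that justifies each one. All values here are
# hypothetical examples, not legally mandated periods.
RETENTION_SCHEDULE = {
    "conversations": {"days": 365, "justification": "service continuity and quality review"},
    "prompts":       {"days": 90,  "justification": "abuse detection and model safety"},
    "security_logs": {"days": 730, "justification": "security auditing and incident response"},
}

def retention_days(category: str) -> int:
    """Return the retention period for a data category, raising if undefined,
    so that every category is forced to have an explicit, justified period."""
    entry = RETENTION_SCHEDULE.get(category)
    if entry is None:
        raise KeyError(f"No retention period defined for category: {category}")
    return entry["days"]

print(retention_days("prompts"))  # → 90
```

Storing the justification alongside each period also supports the accountability principle, since the rationale can be produced on request together with the period itself.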

G) Technical Transparency and Labeling

The Privacy Policy must describe the mechanisms used to identify content generated or manipulated by AI, such as:

  • Visible labels on the interface.
  • Metadata or technical identifiers.
  • Watermarks or other labeling systems.

 

Furthermore, the purpose of these mechanisms and their relationship to the AI Act’s transparency obligations must be explained.

H) High-Risk Processing and Special Categories of Data

When the platform uses AI systems involving:

  • Emotion recognition.
  • Biometric categorization.
  • Processing of special categories of personal data.

 

Express information must be provided regarding such processing, as well as the enhanced safeguards applied, including the conduct of impact assessments and risk mitigation measures.

I) Data Subjects’ Rights

Finally, users must be reminded of their ability to exercise their data protection rights (access, rectification, erasure, objection, restriction, and portability), as well as the right not to be subject to automated decision-making under the terms set forth in the GDPR.

Conclusion

The progressive implementation of the AI Act, in coordination with the GDPR and the LOPDGDD, represents a substantial change in the transparency requirements applicable to companies that develop or integrate artificial intelligence systems into their products and services.

In particular, it requires organizations to systematically review not only their legal texts but also the way they inform and interact with users in digital environments.

In this context, the Terms and Conditions and the Privacy Policy cease to be static or merely formal documents, becoming essential tools for regulatory compliance that must faithfully, consistently, and comprehensibly reflect the actual use of AI systems, their purposes, limitations, and associated risks.

Consequently, it is highly recommended that organizations conduct comprehensive reviews and updates of their legal texts, within the framework of privacy and AI system audits, to enable them to:

  • Identify the AI functionalities deployed and their impact on users’ rights.
  • Verify the adequacy of the legal bases for processing, especially regarding the use of data for model training.
  • Assess compliance with transparency obligations, both in documentation and in the user interface.
  • Review relationships with technology providers and third-party models, as well as potential international data transfers.
  • Detect risks associated with automated decisions or high-impact processing and adopt the corresponding mitigation measures.

 

The absence of a proactive review not only increases the risk of regulatory non-compliance and exposure to the penalty regime of the AI Act and the GDPR, but may also give rise to contractual and reputational liabilities stemming from a lack of effective transparency toward users.

In short, adapting to this new regulatory framework requires a comprehensive, preventive, and dynamic approach, in which updating the Terms and Conditions and the Privacy Policy, along with conducting periodic audits, serves as a key tool for ensuring regulatory compliance, strengthening user trust, and minimizing legal risks in the use of artificial intelligence.
