
What are the misconceptions about artificial intelligence according to the AEPD?


Generative Artificial Intelligence (GAI) represents one of the most disruptive technological developments of our time, with profound implications for how data is processed, reused, and generated. Because developing GAI applications and services depends on large-scale data use, it raises fundamental questions under the General Data Protection Regulation (GDPR). As Recital 7 of the GDPR itself warns, technological progress requires ‘a stronger and more coherent data protection framework in the Union’ in order to foster and maintain public trust.

In this context, it is essential to examine the challenges posed by GAI from the standpoint of personal data protection. The Information Commissioner’s Office (ICO), the United Kingdom’s data protection authority, conducted a public consultation on specific aspects of the development and use of GAI, subsequently publishing a series of clarifications addressing common misconceptions identified during the process. These clarifications aim to guide developers towards compliance with data protection obligations, given the growing importance of these systems in the digital sphere.

AEPD’s commentary on the ICO’s consultation

The Spanish Data Protection Agency (AEPD), for its part, has sought to comment on and emphasize these clarifications, analyzing them in light of its prior 2022 joint publication with the European Data Protection Supervisor (EDPS), which had already identified common misunderstandings related to machine learning.

Ultimately, the AEPD underscores that these clarifications are intended to steer developers toward full compliance with data protection rules, particularly in light of the increasing relevance of these technologies in digital environments.

Misconceptions regarding generative AI

The following are among the key misconceptions identified:

1. The ‘incidental’ or ‘agnostic’ processing of personal data still constitutes personal data processing and is therefore subject to data protection law.

GAI developers must conduct a prior and precise assessment to determine whether their models process personal data and, if so, must ensure compliance with applicable legislation.

That is to say, the assertion that personal data is processed accidentally or unintentionally does not exempt a developer from the requirements of the GDPR. Any processing of personal data, even incidental, is subject to data protection rules.

2. Common practice does not equate to meeting individuals’ reasonable expectations.

The principle of transparency under the GDPR requires controllers to inform data subjects in a concise, transparent, intelligible, and easily accessible manner, using clear and plain language. This principle applies equally to the training of models and to any secondary use of personal data for purposes not originally disclosed.

Hence, using personal data for AI model training without adequately informing data subjects beforehand breaches the transparency principle. Even when such data is obtained via web scraping or web crawling, it is essential to clearly and accessibly inform individuals about the intended use of their personal data.

3. ‘Personally identifiable information’ (PII) is not equivalent to ‘personal data’.

Lawful processing under the GDPR turns on the concept of “personal data”, a broader and legally defined term encompassing any information relating to an identified or identifiable natural person. Developers accustomed to the narrower notion of PII must therefore not assume that data falling outside that category escapes the GDPR.

4. Case law concerning search engines does not directly apply to GAI.

Some developers have attempted to rely on Court of Justice of the European Union (CJEU) rulings concerning search engines to justify certain GAI-related practices. However, this analogy is legally flawed.

Whereas search engines index and retrieve existing content, generative AI synthesizes and creates new outputs based on large volumes of data, thus introducing additional risks.

Moreover, mechanisms for exercising data subject rights—such as the right to erasure—are well established in the search engine context but remain underdeveloped in GAI systems. This necessitates a more rigorous and context-specific legal analysis to uphold data subject rights effectively.

5. AI models can retain and disclose personal data.

A common defence is that AI models do not “store” personal data but merely process it during training.

This position is untenable when models are capable of reproducing—either verbatim or approximately—segments of personal data used during training. This risk, which has been technically documented in various studies, engages the data minimization principle and mandates the implementation of safeguards to prevent the unintended disclosure of sensitive or identifiable information.

6. Data protection is not a tool for assessing legality under other legal regimes.

Although GDPR compliance may intersect with other legal frameworks (such as intellectual property, employment law, or AI regulation), data protection authorities are not competent to interpret or enforce those regimes.

The GDPR is exclusively concerned with the processing of personal data and cannot be used to determine the broader legality of a technology’s use. While controllers must undertake a cross-cutting legal assessment, the jurisdiction of data protection authorities is clearly delimited.

7. There is no ‘exemption’ for GAI under data protection law.

Organisations must be fully aware that there are no general exemptions or derogations for generative AI. If personal data is being processed in any context, the entire data protection framework applies.

Moreover, Article 25 GDPR imposes a clear obligation to implement ‘data protection by design and by default’. In the GAI context, this entails defining limits from the development phase, conducting risk assessments, and establishing mechanisms for oversight, control, and transparency.

Data protection as a pillar of responsible AI

The misconceptions identified by the ICO and echoed by the AEPD are not mere technicalities. They reveal a significant misalignment between technological innovation and the existing legal framework.

Generative AI does not operate in a legal vacuum—it is bound by clear rules grounded in fundamental principles such as transparency, proactive accountability, and the effective safeguarding of data subject rights.

At a time when innovation is rapidly accelerating, it is incumbent upon data controllers and developers to embed data protection as a core component of technological design. Only through such integration can a legally compliant and democratically anchored digital transformation be achieved.
