logo

AEPD Guidance on the Use of Agentic AI

LetsLaw / Digital Law  / AEPD Guidance on the Use of Agentic AI
Abogados IA agéntica

AEPD Guidance on the Use of Agentic AI

On 18 February 2026, the Spanish Data Protection Agency (AEPD) published its “Guidance on Agentic Artificial Intelligence from a Data Protection Perspective”, aimed at helping controllers and processors identify the risks that arise when AI agents are used in personal data processing operations. The underlying message is clear: agentic AI requires a reassessment of how processing operations are governed and how individuals’ rights are protected.

The AEPD makes clear that it is not seeking to resolve any particular case, but rather to provide a framework for managing the specific features introduced by this technology. It is not guidance on prompts, but on how processing changes when the system plans, consults tools, accesses memory and performs actions with varying degrees of autonomy.

What the AEPD Means by Agentic AI

The AEPD defines agentic AI as systems capable of acting autonomously in order to achieve objectives. Unlike a reactive model, an agent can break down tasks, use tools, consult memory and perform actions in several steps.

Put differently, agentic AI is not just “another chatbot”. There are at least four elements that explain why the AEPD has devoted specific guidance to it:

  • It works towards objectives, not just isolated instructions. The agent does not merely respond: it can plan subtasks and chain steps until it reaches an outcome, with different systems and trust levels involved.
  • It can use tools and connect to the outside world. These systems can interact with multiple services and external sources, which broadens the exposure surface and explains why the analysis cannot be limited to the LLM in isolation.
  • It can incorporate memory. Memory makes it possible to contextualise future actions, but it may also carry forward personal data and bias if there are no clear retention and deletion rules.
  • It can operate with different levels of autonomy. The AEPD distinguishes between agents that merely propose and agents that execute. The greater the autonomy, the greater the need for supervision, minimisation, explainability and reversibility.

 

How the Data Protection Approach Changes

Introducing agentic AI into a processing operation changes its nature: it may reduce existing risks, but it may also create new ones. For that reason, the AEPD requires the risk analysis and management process to be reopened.

The autonomy of the agent shifts the analysis to the system’s overall behaviour: what it consults, remembers, shares, infers and executes. As a result, the parties involved, the data, the flows, the retention periods, the transparency and the purposes may all change.

The guidance also clarifies that agentic AI does not automatically require a DPIA in every case, but it may require one — or require the review of an existing one — where it alters the risk initially assessed.

The Risks That Most Concern the AEPD

The guidance identifies specific risks and does not stop at a generic warning about “using AI carefully”.

  • Opacity and a false sense of control. Users and developers may not fully understand how the agent makes decisions. The combination of distributed inferences, external tools and memory may create an appearance of reliability while making explainability, auditing and human oversight more difficult.
  • Data excess, persistent memory and profiling. Retaining too much context or reusing memory across cases may carry forward irrelevant data and enable the profiling of system users if records are not limited, pseudonymised and subject to retention periods.
  • Excessive access to information and breach of the minimisation principle. Agents with autonomous access to multiple sources may engage in mass scraping or forward more data than is necessary.
  • Prompt injection and indirect threats. Malicious instructions may be embedded in a website, an email or a document consulted by the agent.
  • Shadow leaks or silent leakages. The AEPD also warns about shadow leaks: partial outputs that make it possible to reconstruct confidential information.

 

What Measures the AEPD Recommends to Organisations

The AEPD’s response is structural: governance and design, rather than mere notices or final-stage validations.

First, it requires agentic AI to be integrated into the governance of the processing operation and the DPO to be involved from the design stage.

Second, it insists on data protection by design: processing only the data that is necessary and maintaining traceability and explainability.

Third, it calls for specific technical measures: granular minimisation, filtering between components and the removal of unnecessary metadata.

It also calls for limiting memory and logs, pseudonymising records and setting retention periods.

Lastly, the controller must define and document the agent’s level of autonomy according to the context and the risk. The “rule of 2” warns against combining uncontrolled input, sensitive data and automatic action.

Contact Us

    By clicking on "Send" you accept our Privacy Policy - + Info

    I agree to receive outlined commercial communications from LETSLAW, S.L. in accordance with the provisions of our Privacy Policy - + Info