The Spanish Data Protection Agency (AEPD) has published detailed technical guidance on agentic artificial intelligence from a data protection perspective. It is one of the first supervisory authorities to focus specifically on “agentic” systems: AI that can plan, sequence and execute tasks autonomously across tools and environments.
The AEPD’s message is clear. Agentic AI is neither something to be adopted uncritically nor rejected out of hand. If designed with data protection by design and by default, agentic AI can actually strengthen compliance, including by acting as a privacy-enhancing technology and helping organisations monitor how data and services are used in practice.
Below we set out the main themes and what they mean for controllers.
1. Why agentic AI is different
Agentic AI goes beyond traditional generative AI. Instead of generating a single output, it follows a “reasoning chain”: breaking down goals into smaller steps, calling different tools and services, and storing information in memory so it can act over time.
From a data protection perspective, this matters in two ways:
Traceability opportunity: if controllers implement robust logging and data catalogues, reasoning chains can provide rich “data lineage” across the lifecycle of personal data: source, use, transformation, storage, legal basis and retention. This traceability is not automatic. It depends on deliberate design.
“BYOAgentic” risk: easy-to-use agent platforms can tempt staff to build their own workflows outside governance structures. The AEPD warns that “bring-your-own-agent” initiatives can underestimate legal and technical complexity and lead to uncontrolled processing.
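The "data lineage" the AEPD describes is not prescribed in any particular format. As a rough sketch, each tool call in a reasoning chain could append a structured record covering source, purpose, legal basis and retention; the field names below are illustrative assumptions, not part of the guidance:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    # Illustrative fields only; the AEPD does not mandate a schema.
    step: int            # position in the reasoning chain
    tool: str            # service or API the agent called
    data_source: str     # where the personal data came from
    purpose: str         # why this step processed the data
    legal_basis: str     # e.g. "contract", "legitimate interests"
    retention_days: int  # retention period for this step's data

def log_step(record: LineageRecord, log: list) -> None:
    # Timestamp each entry so the chain can be reconstructed later.
    entry = asdict(record)
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()
    log.append(entry)

lineage_log: list = []
log_step(
    LineageRecord(1, "crm_lookup", "internal CRM",
                  "retrieve contact details", "contract", 30),
    lineage_log,
)
```

A log of this shape, kept for every step, is what would let a controller answer "which data, from where, on what basis, kept how long" for an entire agentic workflow.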
2. Where the main risks lie
Environmental interaction and data flows
Agentic systems interact constantly with their environment. To complete tasks, they call external application programming interfaces (APIs), databases and websites, locally and remotely. Each call is, in effect, a partial data export that may be invisible to users and even to the controller. Without tight access controls aligned to internal policies, this can lead to excessive or unjustified processing, use of outdated data and complex data flows with different retention periods and contractual terms.
Memory as both vulnerability and control point
Memory is another double-edged feature. Agents typically use operational memory (to keep context for tasks) and management memory (to log their activity). Poorly managed memory can accumulate excessive, irrelevant or obsolete personal data, and broad access rights can result in unauthorised disclosures. Well-designed memory, however, can support compartmentalisation, “no log” zones, automatic cleaning of long‑term memory and disabling memory entirely for high-risk subtasks. Where memory contains personal data, the system must from the outset support the full set of data subject rights, including access and erasure.
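The memory controls mentioned above can be sketched as a simple per-subtask policy table. This is a minimal illustration under assumed policy names and retention periods, not an implementation from the guidance:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-subtask policies mirroring the controls the AEPD mentions:
# "no log" zones, automatic cleaning, and disabling memory for high-risk work.
POLICIES = {
    "general":   {"store": True,  "ttl": timedelta(days=30)},
    "no_log":    {"store": False, "ttl": None},  # "no log" zone
    "high_risk": {"store": False, "ttl": None},  # memory disabled entirely
}

memory: list = []

def remember(subtask: str, content: str) -> None:
    policy = POLICIES.get(subtask, POLICIES["general"])
    if not policy["store"]:
        return  # nothing is ever written for no-log / high-risk subtasks
    memory.append({
        "subtask": subtask,
        "content": content,
        "expires": datetime.now(timezone.utc) + policy["ttl"],
    })

def clean() -> None:
    # Automatic cleaning of long-term memory: drop expired entries.
    now = datetime.now(timezone.utc)
    memory[:] = [m for m in memory if m["expires"] > now]

remember("general", "meeting scheduled with J. Doe")
remember("high_risk", "health-related query")  # never stored
clean()
```

The point of the sketch is the design choice: retention and erasure are decided per subtask at write time, rather than cleaned up after the fact.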
Autonomy
Autonomy is the third key feature. Controllers decide how much freedom to grant: for example, systems where the agent proposes actions, but a human executes them, versus systems where the agent executes actions directly and humans merely monitor outcomes. As autonomy increases, so does the risk that controllers cannot reliably predict outputs, particularly in complex environments. The AEPD also warns about the “illusion of reliability”: an agent that looks efficient and consistent, but whose behaviour is not properly understood or evidenced.
3. Key compliance building blocks
Role allocation and data flows
The AEPD underlines that controllers remain responsible for ensuring compliance and for assessing whether the impacts of agentic processing are proportionate.
The role of each third-party service depends on how it is used. Where an agent only retrieves non-personal data and the service cannot identify any user, that service falls outside data protection law. Where a service processes personal data to provide agentic functionality, it will typically act as a processor. Where an agent accesses external registers held by other entities, the interaction may be a controller-to-controller communication. The guidance emphasises case-by-case analysis, backed by mapping of data flows and clear documentation of roles.
Transparency and updates to documentation
Transparency obligations remain central. If agentic AI introduces new recipients of personal data, changes retention periods, triggers new automated decisions or involves new international transfers, individuals must be informed. The same applies where agents are used for further processing for a new purpose. In many cases, controllers will need to update their privacy notices and their records of processing activities to reflect how agentic AI is used, which data it processes, which recipients and third countries are involved and which security measures apply.
Data subject rights and logs
Agentic AI should not make it harder to exercise rights. Controllers need a clear view of where personal data sits in the agentic architecture, including in memories and logs, and which external processor services may also hold logs or memory containing personal data. The AEPD notes that prompts themselves can constitute personal data and may therefore be covered by access or other rights requests.
Automated decision‑making and the “rule of 2”
Introducing agentic AI does not automatically bring a processing operation within Article 22 GDPR. The key question remains whether there are decisions based solely on automated processing that produce legal effects, or similarly significant effects, for individuals.
To help structure risk analysis, the AEPD adopts a simple “rule of 2”, originally developed in a cybersecurity context. It looks at three elements: whether the system processes uncontrolled information automatically, whether it accesses sensitive information and whether it performs automatic actions. A configuration that combines all three, without corresponding safeguards, is considered unacceptable.
At least one constraint should therefore always apply: for example, human approval before automatic actions where information is uncontrolled, or strong integrity and security guarantees where sensitive data is involved. On top of this, the familiar data protection principles of minimisation, quality and fairness still apply.
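The "rule of 2" lends itself to a simple structured check. The function below is our own illustrative encoding of the three factors, not code from the guidance:

```python
def rule_of_two(uncontrolled_input: bool,
                sensitive_data: bool,
                automatic_actions: bool) -> str:
    """Illustrative encoding of the AEPD's 'rule of 2': a configuration
    combining all three risk factors, without safeguards, is unacceptable."""
    factors = sum([uncontrolled_input, sensitive_data, automatic_actions])
    if factors == 3:
        return "unacceptable: constrain at least one factor"
    if factors == 2:
        return "high risk: apply safeguards (e.g. human approval)"
    return "lower risk: standard data protection principles still apply"
```

For instance, an agent that browses the open web (uncontrolled input), reads health records (sensitive data) and sends emails unsupervised (automatic actions) scores three out of three and would need at least one factor removed or gated.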
4. Threat landscape
The guidance distinguishes between threats arising from authorised processing and those from attacks or misuse.
On the authorised side, the central concern is failure to integrate agentic AI into the organisation’s broader information governance. Without this, there may be no clear view of who processes personal data, for what purpose, on what legal basis and with which safeguards. The AEPD also points to complex, non‑reproducible behaviours; feedback loops that can entrench biases or create “bubble effects”; agent misalignment with organisational goals; and automation bias, where humans over-trust system outputs.
On the unauthorised side, agentic systems can be vulnerable to a range of attacks: prompt injection through external data sources; data exfiltration hidden in URL parameters; session hijacking and lateral movement across services; attacks on long reasoning chains, where malicious input is triggered later in the process; and attacks on memory, such as poisoning or illicit access to logs. Each new interface or modality (email, web, documents, images and so on) opens a fresh potential attack vector, and combinations of attacks can significantly compound risk.
5. Protective measures and governance
Accountability
The AEPD sets out an extensive catalogue of safeguards. The common thread is “proactive accountability”: controllers should not treat agentic AI as a one-off project, but as an ongoing lifecycle that demands planning, monitoring and adjustment.
Governance and DPO involvement
Agentic AI should be brought within an information governance framework that involves functional leaders, information technology teams, quality managers and the data protection officer. The DPO should be involved early and must understand both the regulatory framework and the technical options for data protection by design and by default.
Designing for failure and ongoing evaluation
Systems should be designed on the assumption that failures and unforeseen impacts will occur. This means defining performance metrics, using structured testing, reviewing provider contracts, building in explainability and ensuring that meaningful human intervention is always possible where required.
Access control, compartmentalisation and memory
Access control, compartmentalisation and memory management are central. Controllers need clear rules on which services and repositories an agent can access for each use case, coupled with a “need to know” policy, catalogues of data sources, filtering of data in transit and robust management of both operational and management memory, including retention rules and “no log” zones where appropriate.
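A "need to know" policy of this kind can be sketched as a per-use-case allow-list, with denial by default. The use cases and source names below are hypothetical:

```python
# Hypothetical "need to know" policy: each agent use case is mapped to the
# only data sources it may query; anything not listed is denied by default.
ALLOWED_SOURCES = {
    "travel_booking": {"calendar_api", "travel_portal"},
    "hr_onboarding": {"hr_database"},
}

def can_access(use_case: str, source: str) -> bool:
    # Deny by default: unknown use cases get an empty allow-list.
    return source in ALLOWED_SOURCES.get(use_case, set())
```

Under this sketch, a travel-booking agent that tries to query the HR database is simply refused, keeping each workflow within its documented scope.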
Reasoning chains and autonomy limits
Reasoning chains and autonomy limits require conscious design. Chains should be validated; in some contexts, it may be safer to hardcode certain steps. Autonomy levels should be set per use case, with documented justification and clear points where human approval is mandatory, particularly before high-impact or irreversible actions. Monitoring and escalation routes, including “four-eyes” checks in high-impact scenarios, can be valuable.
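One way to hard-wire a mandatory human approval point is a gate in front of high-impact actions. The action names and callbacks here are assumptions for illustration only:

```python
from typing import Callable

# Hypothetical list of actions treated as high-impact or irreversible.
HIGH_IMPACT = {"delete_records", "send_external_email", "initiate_payment"}

def execute(action: str,
            run: Callable[[], str],
            approve: Callable[[str], bool]) -> str:
    # High-impact actions never execute without explicit human approval.
    if action in HIGH_IMPACT and not approve(action):
        return f"blocked: '{action}' awaiting human approval"
    return run()

result = execute("delete_records",
                 run=lambda: "records deleted",
                 approve=lambda a: False)  # the human declines
```

The same hook is a natural place to attach "four-eyes" checks: `approve` could require sign-off from two reviewers before returning `True`.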
Building agentic AI literacy
The AEPD also emphasises agentic AI literacy. Executives need enough understanding to take informed decisions about whether and how to use agentic systems. Technical teams must be able to identify data protection implications and implement appropriate safeguards. End-users should understand the capabilities and limitations of the tools they use. DPOs and data protection advisers have a dual role: they must build their own understanding of these technologies and use it to advise controllers, processors and staff.
Looking ahead
The AEPD’s guidance is a significant step in shaping how regulators expect agentic AI to be deployed. It is technically demanding and assumes close collaboration between legal, technical and operational teams.
For controllers, the main message is that agentic AI should be deployed deliberately, not experimentally at the margins. With mapped data flows, clear roles, robust governance and thoughtful design, agentic systems can support stronger privacy and more accountable use of data than many traditional processes. Without that foundation, they can just as easily magnify existing weaknesses.
If you would like to discuss what this guidance means for your organisation or need support assessing your agentic AI use cases, please contact your usual Linklaters contact or any member of our team.
