
“Cuidado con lo que le confIAs”, the new AEPD ten-point guide

On 27 January 2026, the eve of International Data Protection Day (28 January), the Spanish Data Protection Agency (AEPD) published the “Cuidado con lo que le confIAs” ten-point guide (a play on “confías”, “what you entrust”, and “IA”, the Spanish acronym for AI), with practical recommendations to reduce privacy risks when we interact with AI systems.

Objectives of the AEPD ten-point guide

Aware of the growing use, and the already tangible potential, of Artificial Intelligence systems, the Agency considers it important to provide a set of tips that help people understand and prevent the privacy risks arising from the improper use of these tools.

In the Agency’s own words, this ten-point guide “aims to offer the public key pointers to promote a safe, responsible and informed use of artificial intelligence and to foster a digital environment that respects people’s fundamental rights.”

In addition, this initiative follows the direction set out by the AEPD in its 2025-2030 Strategic Plan on responsible innovation and the defence of dignity in the digital era, in which it reaffirmed its commitment to promoting a culture of privacy and data protection among citizens and organisations alike, and to supporting technological innovation with safeguards.

Responsible use of artificial intelligence

Talking about “responsible use” is not only an ethical matter; it is also a practical one. In day-to-day use of generative AI, there are four ideas worth keeping in mind:

1. Your prompt is not always “just text”

When you write a query, it is not only the content of the message that travels. In many services, use may involve technical and contextual data (browsing data, identifiers, metadata, etc.). In other words, even if your question is harmless, the surrounding ecosystem might not be.

2. Privacy is not breached only by sharing your name and surname

Some data may not look personal at first, but can become personal through accumulation: habits, frequent locations, routines, concerns, or preferences. With enough repetition, small clues add up to a profile.

3. AI doesn’t “understand” like a professional

These tools can sound convincing even when they are wrong. And in sensitive matters (health, legal advice, psychological support), the risk is not only privacy-related: it can also lead to poorly informed decisions.

4. It’s not only your privacy: you are also responsible for other people’s data

A common mistake is to think “this isn’t mine” and let your guard down: a client’s data, a candidate’s details, a supplier’s information, something about a colleague or a minor, a screenshot with names, a forwarded email… If you feed these into an AI tool, you are processing personal data and may be exposing third-party information without a legal basis, without necessity, and without control.

The good practices recommended by the AEPD

The value of the ten-point guide lies precisely in the fact that it does not stop at generalities: it proposes concrete habits. These are the 10 recommendations set out by the Agency:

1. Don’t upload your personal information to AI

Avoid including information that directly identifies you (e.g., contact details, documents, personal images). If you need to describe a case, anonymise it or use a fictional scenario.
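
By way of illustration only (our sketch, not part of the AEPD guide), a few lines of Python show what “anonymise it” can look like in practice: stripping obvious identifiers from a case note before pasting it into an AI tool. The patterns and the sample note are invented, and a simple regex pass will miss plenty (note that the name survives below), so treat this as a starting point rather than real anonymisation.

    import re

    # Illustrative patterns only: emails, Spanish-style phone numbers and
    # DNI-like IDs. Real anonymisation needs human review; regexes miss things.
    PATTERNS = {
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "[PHONE]": re.compile(r"(?:\+34[ -]?)?\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
        "[ID]": re.compile(r"\b\d{8}[A-Za-z]\b"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with placeholder tags before prompting."""
        for placeholder, pattern in PATTERNS.items():
            text = pattern.sub(placeholder, text)
        return text

    note = "Ana Pérez (ana.perez@example.com, 612 345 678, DNI 12345678Z) asks whether..."
    print(redact(note))
    # -> "Ana Pérez ([EMAIL], [PHONE], DNI [ID]) asks whether..." (the name gets through)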

2. Be especially careful not to upload sensitive or delicate information

Some categories are best kept out by default: health data, financial information, contractual matters, locations or stays. These are high-impact data if exposed.

3. Respect the privacy of third parties

If your query involves other people, remove any element that could identify them. And as a rule of thumb: don’t upload images of third parties to generate new content, especially when minors are involved.

4. Don’t include professional information

If you use AI in a professional context, apply the “as if you were going to paste it into a public channel” standard (because, in practice, the risk of exposure exists). No contracts, reports, strategies, client data, or employee information.

5. Review the AI service’s terms before using it and choose the safest options

Before using a tool, check what happens to your information (retention, use for improvement, privacy settings, permissions). Prioritise solutions that collect only what is strictly necessary and provide clear controls.

6. If you need specialised professional advice, emotional support or psychological help, go to a professional rather than AI

If you need a diagnosis, clinical guidance, legal advice, or psychological support, don’t replace it with a conversation with AI. You can use AI as support, but not as “the professional”.

7. Don’t believe everything an AI says: keep a critical stance towards its answers

Maintain a critical mindset. Don’t delegate important decisions without verification, and cross-check against reliable sources (especially for matters with legal, financial, or personal impact).

8. Advise and guide the minors in your care

Explain what risks exist, what types of data should not be shared, and encourage critical thinking. Here, prevention means practical digital education.

9. Use different accounts and delete your history

If you are “testing” tools, avoid mixing them with your personal or professional email. Use separate accounts, review deletion options, and remove conversations regularly when the service allows it.

10. Your questions can define you

You don’t need to type “my ID number” to leave a trail. Repeated questions about habits, fears, likes or routines can build a very precise profile. Practise the “minimum necessary” principle in what you ask as well.
