Data Protection and its connection to Artificial Intelligence


In today’s digital age, the convergence of data protection and artificial intelligence (AI) has given rise to a number of challenges and opportunities. The rapid development of this technology and the increasing complexity of AI applications require close attention to how data is collected, stored and used, to ensure that the benefits of this technology do not come at the expense of the privacy of the individuals who use it.

How does data protection affect Artificial Intelligence?

Artificial intelligence relies on data: the richer and more diverse the dataset used to train AI tools, the more efficient and accurate the model will be. However, this reliance on data inevitably clashes with the need to protect the privacy of individuals.

Various privacy regulations, such as the General Data Protection Regulation (GDPR) in the European Union, require informed consent to collect and process personal data.

In the context of AI, this means that companies developing this technology must be transparent about the origin of the data used, obtain consent from data subjects, indicate the purpose of the processing, and ensure that measures are in place to avoid using data for which consent has not been obtained.

The truth is that, given the need for a large volume of data for AI training, AI models are being trained from public information by different companies (such as OpenAI or Google).

It should be noted that not only privacy is being questioned, but also the possible infringement of intellectual property and copyright of both the information with which the AI is trained and the results obtained by the different tools.

The results of the use of AI tools would, a priori, belong to the users, but this could lead to possible infringements of third-party rights, as third-party content is used without consent in the training of language models.

Therefore, caution is essential when using results derived from AI tools, especially generative ones, given that there could be an author with rights over the content in question. It should also be noted that, in the opinion of the European Parliament and the Intellectual Property Registry in Spain, works generated autonomously by artificial intelligence would not, a priori, be protected by copyright.

Key principles and implications

The fundamental principles of data protection that are at risk of being breached are the data minimisation principle and the purpose limitation principle.

The data minimisation principle

The data minimisation principle, set out in Article 5(1)(c) of the GDPR, implies that only the data necessary to fulfil a specific purpose should be collected, which helps to reduce the risk of processing unnecessary and potentially sensitive data.

The reality is that the development of AI models currently requires a large volume of data to be effective, which raises the question: how much data is minimally necessary to train these tools? The application of the data minimisation principle to AI training is an issue on which Data Protection Authorities and Supervisory Agencies have yet to take a clear position.

The purpose limitation principle

The purpose limitation principle requires that data be processed for one or more specified, explicit and legitimate purposes, and prohibits data collected for those purposes from being further processed in a way incompatible with them.

Training AI on data available on the internet, for example, or from other sources where such training was not among the stated purposes of processing, may therefore be incompatible with this principle.

Returning to the case of Google as a developer of tools that incorporate Artificial Intelligence, on 3 July 2023, it proceeded to update its Privacy Policy to include the processing of public data for the specific purpose of training its AI models.

This could be understood as an extension of its previous purpose of "improvement of services", this time in compliance with the purpose limitation principle.

Consequences of breaching Data Protection

Failure to comply with data protection regulations can have significant consequences for organisations. Fines for violations can be substantial, especially under the GDPR, under which penalties for the most serious infringements can reach up to €20 million or 4% of a company’s annual global turnover, whichever is higher.
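To make that ceiling concrete, the cap under Article 83(5) GDPR is the higher of the two figures, not simply a flat 4%. A minimal sketch (the function name is illustrative, and real fines depend on factors such as gravity, duration and cooperation, assessed case by case by the supervisory authority):

```python
# Illustrative sketch only: Article 83(5) GDPR caps fines for the most
# serious infringements at the HIGHER of EUR 20 million or 4% of total
# worldwide annual turnover. This computes that statutory ceiling, not
# an actual fine.

def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Return the Article 83(5) fine ceiling for a given turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# A company with EUR 1 billion in turnover faces a EUR 40 million ceiling;
# a company with EUR 100 million in turnover is still exposed to the
# EUR 20 million floor, since 4% of its turnover would be lower.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
print(max_gdpr_fine(100_000_000))    # 20000000.0
```

The "whichever is higher" rule matters in practice: for smaller companies, the €20 million figure, not the 4% percentage, defines their maximum exposure.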

In addition to the financial implications, reputational damage and loss of customer trust are critical factors to consider. Public exposure of a data breach can have lasting consequences on a company’s reputation and customer relationships.

In conclusion, the use of Artificial Intelligence to streamline internal processes and boost efficiency and productivity is the order of the day, but it is crucial to comply with data protection regulations to ensure that technological advancement does not come at the expense of privacy and data security.

From a user perspective, the privacy and confidentiality risks of using such tools need to be borne in mind.

Companies should understand the consequences of non-compliance with data protection regulations and work diligently to comply with them by promoting employee training. The golden rule is not to share information that is not already in the public domain.

At Letslaw by RSM we are specialists in personal data protection, and we can help you make your international transfers safely.
