GDPR & Artificial Intelligence

Cole Pruden | Artificial Intelligence, GDPR, GDPR Compliance, PII

The EDPB’s Opinion Piece on GDPR and AI by PII Tools

A Breakdown of the EDPB’s Official 28/2024 Opinion Piece

Rapid increases in technology demand rapid updates to personal data regulations and PII protection guidelines. Let’s dive into how the GDPR’s official governing body views the safe deployment of AI and what potential new rules we can expect going forward.

EDPB Opinion on AI Models

On 18 December 2024 in Brussels, the European Data Protection Board adopted an opinion on “the use of personal data for the development and deployment of AI models”.

This opinion is a response to the accelerated rise of artificial intelligence (AI) models and addresses how their use of personal data for training and interaction is viewed under the GDPR. But don’t worry, you don’t have to read the entire 35-page Opinion of the Board, because this article provides a simplified overview of its three main points.

1. AI Anonymity

The first topic the EDPB opinion tackles is AI anonymity: whether an AI model should be considered anonymous is to be decided through a case-by-case assessment based on the specific Data Processing Agreement (DPA; a legally binding document between the data controller and the processor).

According to the opinion, a model is considered anonymous when it’s deemed to “be very unlikely (1) to directly or indirectly identify individuals whose data was used to create the model, and (2) to extract such personal data from the model through queries”.

This stance falls right in line with what we’ve come to expect from the GDPR. In other words, data isn’t considered PII if it can’t be used to directly or indirectly identify any individuals. And this exact same principle should apply to the data that AI models store and learn from.

GDPR & Artificial Intelligence legitimate interest infographic by PII Tools

Source: PII Tools

2. Legitimate Interest

The second issue addressed by the EDPB Opinion on AI Models is dubbed “legitimate interest”. As the name makes pretty clear, an AI model should only be permitted access to personal data if it’s required for its development and deployment.

Now, you might be thinking that legitimate interest is a bit too subjective, and many people might try to claim their AI model needs access to all personal data available in a given organization. To solve this issue, the EDPB’s opinion piece also provides a 3-step test to help assess the use of legitimate interest as a legal basis.

The test is accompanied by another 37-page set of guidelines, but, luckily for us, all of this information can be broken down into three simple questions:

Establishing “Legitimate Interest” Questionnaire

  • Is there a legitimate interest by the controller or a third party?
      1. “As a general rule, the interest pursued by an organisation or a third party should be related to their actual activities and should not be contrary to EU or member state law.”
  • Is the processing really necessary for the legitimate interest?
      1. “The organisation should examine if the legitimate interest pursued can be achieved by other means less restrictive of the fundamental rights and freedoms of individuals.”
  • Is the legitimate interest overridden by the interests or fundamental rights and freedoms of individuals?
      1. “The legitimate interest in question must not be overridden by the interests or fundamental rights and freedoms of individuals, including, for example, financial interests, social interests, or personal interests.”
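The three questions above amount to a checklist where every answer has to come out in favor of the controller before legitimate interest can serve as a legal basis. As a minimal illustration, here is how that logic could be sketched in Python; the `LegitimateInterestAssessment` fields and the `assess_legitimate_interest` helper are purely hypothetical names for this example, not part of any official EDPB tooling:

```python
from dataclasses import dataclass


@dataclass
class LegitimateInterestAssessment:
    """Outcome of the EDPB's three-step legitimate interest test (illustrative)."""
    interest_is_lawful: bool       # Step 1: tied to actual activities, not contrary to EU/member state law
    processing_is_necessary: bool  # Step 2: no less restrictive means would achieve the same interest
    balancing_test_passed: bool    # Step 3: not overridden by individuals' rights and freedoms


def assess_legitimate_interest(a: LegitimateInterestAssessment) -> bool:
    """Legitimate interest holds as a legal basis only if all three steps pass."""
    return (a.interest_is_lawful
            and a.processing_is_necessary
            and a.balancing_test_passed)


# Example: the interest is lawful and the balancing test passes, but a less
# intrusive alternative exists (step 2 fails), so the test fails overall.
example = LegitimateInterestAssessment(
    interest_is_lawful=True,
    processing_is_necessary=False,
    balancing_test_passed=True,
)
print(assess_legitimate_interest(example))  # False
```

The point of the sketch is simply that the test is conjunctive: a single “no” on any of the three questions means legitimate interest cannot be relied upon for that processing.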

GDPR & Artificial Intelligence processing infographic by PII Tools

Source: PII Tools

3. Unlawfully Processed Personal Data

The third and final main point addressed by this opinion answers the question: “What happens if an AI model is developed using personal data that was processed unlawfully?” In other words, what do you do if the PII cat is already out of the bag?

Well, when an AI model has been developed with unlawfully processed personal data, this could impact the lawfulness of its deployment, unless the model has been duly anonymized. And the EDPB explains how such an assessment is reached:

“The Opinion generally recalls that SAs (Supervisory Authorities) enjoy discretionary powers to assess the possible infringement(s) and choose appropriate, necessary, and proportionate measures, taking into account the circumstances of each individual case.”

GDPR & Artificial Intelligence fundamental interests infographic by PII Tools

Source: PII Tools

Regulatory Compliance and Data Protection

Of course, this has only been an overview of the many complexities discussed by the EDPB via its opinion piece on the GDPR and AI. No official updates or laws have been added at this time, but we can expect some changes to come in the future as AI plays a larger and larger role in our lives.

But the general basis remains the same: When collecting, storing, and using personal data, it’s our responsibility to do everything within our power to protect it and follow the rules set out by the GDPR (if it pertains to our data subjects).

And this sentiment rings just as true for AI models and how they’re trained today. Sensitive data protection should always remain our #1 priority.

Discover Sensitive Data Using PII Tools and Reach GDPR Compliance!