In September of this year, the Irish Data Protection Commission ("DPC") used its power under Article 64(2) of the GDPR to request an EDPB opinion on the processing of personal data in connection with AI models. This was the first time the DPC had used this power, and it potentially heralds a shift in how the DPC will approach novel questions about the application of the GDPR going forward. It isn't clear whether the DPC deliberately timed its request to provide an early Christmas present to data protection enthusiasts, but we have one, nonetheless.
Scope of the request
The DPC asked the EDPB to consider four questions, which can be briefly summarised as:
What the Opinion doesn't cover
The Opinion notes that it has been drafted to cover the specific questions raised by the DPC, and it therefore does not address certain other key principles under the GDPR that must be considered when developing or deploying AI models. In particular, it is noteworthy that the Opinion does not address:
Some interesting points
Whilst the Opinion as a whole is worth reading, we set out below some of the more interesting points that are considered by the EDPB.
Anonymous AI Models
Unsurprisingly, the EDPB notes that whether an AI model should be considered to continue to process personal data once trained will need to be assessed on a case-by-case basis. In undertaking this assessment, the EDPB distinguishes between models designed to provide personal data regarding individuals whose personal data was used to train the model (e.g. an LLM designed to respond to a query such as "who has scored the most tries for Ireland?"), and those not designed to provide access to the personal data used to train them. The first type of model clearly involves the processing of personal data. For the second, the EDPB sets out some useful considerations for assessing whether personal data can be extracted from the model with "means reasonably likely to be used".
Processing based on legitimate interests
The Opinion helpfully acknowledges that legitimate interests can potentially operate as the legal basis for processing personal data at all stages of the development and subsequent deployment of AI models. In order to rely on legitimate interests, the EDPB reiterates that three cumulative conditions must be met: (i) the pursuit of a legitimate interest by the controller or by a third party, (ii) the processing is necessary to pursue that legitimate interest, and (iii) the legitimate interest is not overridden by the interests or fundamental rights and freedoms of the data subjects. Taking these in turn:
Using an AI model that has been unlawfully trained
Where an AI model has been unlawfully trained, the Opinion distinguishes between AI models that retain personal data and those that have been effectively anonymised. It also distinguishes between AI models used by the controller that developed the model and AI models deployed by separate controllers.
Conclusion
Given the short timeframe in which the Opinion was produced, and the range and complexity of AI systems, there remains a great deal of uncertainty as to the exact impact of data protection obligations on the training and use of AI systems. It is likely to be at least a year, if not longer, before we start to see decisions from supervisory authorities in this area. In the meantime, there is much in the EDPB's Opinion for the developers of AI systems, and the organisations that deploy them, to digest.