The UK Information Commissioner's Office (ICO) was reportedly set to sound a note of caution at Politico's Global Tech Day recently, regarding the potential privacy risks that can arise in the context of generative artificial intelligence (AI).

Privacy risks of generative AI

While acknowledging the potentially significant advantages and benefits that generative AI can bring, both to organisations and to society more generally, the ICO's Executive Director of Regulatory Risk, Stephen Almond, was expected to reiterate to businesses the need to consider the potential data protection issues around generative AI, noting that the compliance of such technologies with applicable data protection laws needs to be robustly scrutinised.

Mr Almond was also expected to highlight that the ICO will be checking whether businesses have tackled privacy risks before utilising generative AI systems, and will act where there is a risk of harm to individuals through inappropriate processing of their personal data.  The ICO's view is that organisations should be able to demonstrate how they have tackled the risks that arise in their particular environments.

Data protection by design and by default

Earlier this year, the ICO published a set of eight questions that organisations should address when developing or using generative AI that involves the processing of personal data, in order to fulfil their obligations in respect of data protection by design and by default.

Questions that organisations should consider include:

  • What is your lawful basis for processing personal data (for example, can the legitimate interests basis be relied upon, or has consent been sought from the relevant data subjects)?
  • Are you a controller, joint controller or processor (this question will impact upon the nature and extent of organisations’ data protection obligations around generative AI technologies)?
  • Have you prepared a data protection impact assessment (DPIA)?  (A DPIA should be carried out before any personal data is processed in the context of a generative AI system, and kept updated.)
  • How will you ensure transparency?  (The preparation and making available of appropriate privacy notices must be considered.)
  • How will you mitigate security risks?  The ICO highlighted not only the risk of personal data leakage, but also other risks such as model inversion attacks (where attackers who already hold some personal data of certain data subjects infer additional personal information about those individuals by monitoring the inputs and outputs of machine learning (ML) models), membership inference attacks (where attackers try to establish whether a particular individual's personal data was used in the training data of an ML model), data poisoning and other types of adversarial attack.
  • How will you limit unnecessary processing?  Only data that is adequate, relevant and limited to what is necessary for the identified purposes should be collected.
  • How will you comply with individual rights requests (for example, requests in respect of data subject access and the “right to be forgotten” (erasure of personal data))?
  • Will you use generative AI to make solely automated decisions?  Data subjects have additional rights under the UK GDPR in respect of automated decision-making, including profiling, that has legal or similarly significant effects, and these rights should be considered in the context of generative AI systems.

Comment

It appears that ensuring the responsible development and use of generative AI systems will remain a hot topic for the foreseeable future, although approaches to exactly how to achieve this continue to differ between jurisdictions.  For example, the European Union is getting closer to agreeing certain AI-specific legislative proposals, with the European Parliament approving the text of the draft proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (the AI Act) earlier this week.

The UK has favoured a more flexible approach built around a broad set of principles (as set out in the UK Government's white paper, “A pro-innovation approach to AI regulation”, published in March 2023), although there have recently been signs that this approach is evolving somewhat.

Notwithstanding the emergence of these different regulatory approaches to AI, it is clear that businesses and organisations developing or using generative AI systems should already be considering their existing data protection compliance obligations in respect of such technologies from the outset, in order to address any relevant privacy risks appropriately and to avoid regulatory and enforcement headaches under existing data protection rules.

Ropes & Gray is closely following AI’s rapid technological advances, as well as the accompanying regulatory and legal challenges. Visit our AI practice page for more information on those developments.