Data is often described as AI's "lifeblood", but there is widespread concern about personal data being unlawfully exploited or processed using AI tools. While the future approach to regulation of AI is still being heavily debated, existing data protection legislation, such as the GDPR and its UK equivalent, is likely to play an influential role – not least because regulators already have powers which they can use to oversee the new technology. In this briefing, we look at how big a role data protection can play in regulating AI and what lessons it may hold for other areas. We also discuss how regulators in the UK and EU are responding, and what businesses need to do to comply if they are considering using AI tools to handle personal data.

1. How does data protection fit into the bigger picture on AI regulation?

In our previous briefing, we wrote about the UK Government's White Paper published in March 2023, "A pro-innovation approach to AI regulation", and how the UK's flexible approach, which relies on existing regulators and regulatory frameworks, differed from the more prescriptive, legislative approach of the EU in its draft AI Act.

Since then, the White Paper has received some criticism. A House of Commons report, published on 31 August 2023, stated that the White Paper's approach "risked the Government's good intentions being left behind by other legislation – like the EU AI Act". The report called for a "tightly-focussed" AI Bill to be announced in the King's Speech in November 2023. A swift shift in the regulatory approach within that timeframe looks unlikely, particularly as the UK is about to host the AI Safety Summit at Bletchley Park that month. The summit will bring together representatives from key countries, leading technology organisations and experts to discuss how best to manage the risks from the most recent advances in AI, focussing mainly on "frontier AI" (general purpose AI - the large language models like ChatGPT), but also covering narrow AI with dangerous capabilities.

If the UK sticks to its plan of relying on existing regulators, the role played by the Information Commissioner's Office (ICO) is a good example of how the White Paper's approach could be made to work well. It harnesses the ICO's subject matter expertise and understanding of context-specific risks (the ICO's guidance, described further below, is detailed and practical) and combines it with the requisite level of coordination with other regulators (the ICO has experience of cooperating with Ofcom, the Competition and Markets Authority and the Financial Conduct Authority, as part of the Digital Regulation Cooperation Forum, to develop joint approaches to AI and digital regulation). The ICO has also been working for years on principles such as fairness and bias, transparency and explainability, and security and safety, which are common to data protection law and proposed AI regulation, or are at least very similar in those two contexts.

But the concern with the White Paper's approach is that not all regulators have the ICO's level of expertise and experience, nor are they as used to cooperating with fellow regulators. The jury is also out on whether the White Paper's proposed central government monitoring function would be effective in plugging the gaps.

AI and automated decision-making – UK data protection law reform

UK data protection law reforms laid out in the Data Protection and Digital Information (No. 2) Bill (DPDI No. 2 Bill) are in the pipeline, including proposed changes to the profiling and automated decision-making (ADM) provisions of the UK GDPR. The rules on ADM, i.e. decisions made without any human involvement, are highly relevant to AI, and the proposed changes reflect the UK Government's more flexible approach to AI, even though they are more modest than those originally mooted.

Article 22 of the UK GDPR currently provides that data subjects have the right not to be subject to ADM, including profiling, that produces legal effects concerning them or similarly significantly affects them. If the DPDI No. 2 Bill becomes law, ADM will only be subject to a general prohibition (in the absence of explicit consent, contractual necessity, or legal obligation as a basis for processing) where special category data are involved. Other significant, solely automated decisions will be permitted, provided certain safeguards are put in place, including the ability for data subjects to contest the decision (i.e. after the fact) and require human intervention.

There may therefore be greater scope for UK businesses to rely on the more flexible "legitimate interests" condition as their lawful basis when processing personal data for ADM (except where special category personal data are used). This approach would not work, however, for ADM subject to EU GDPR.

For a more general overview of the DPDI No. 2 Bill, please see our briefing here.

2. Is AI fundamentally incompatible with the protection of personal data?

GDPR principles and responsible AI principles are very similar and, except for the potential conflict between the minimisation and fairness principles outlined below, are largely compatible with each other. Nevertheless, the nature of AI means that these principles are often difficult to meet in practice. For example, AI presents particular challenges when it comes to:

  • transparency and explainability principles, which require data controllers to provide meaningful information about the logic involved in an AI system and the potential impact on individuals affected by its output. "Black box" AI models, whose inner workings and rationale are inaccessible to human understanding, will require supplementary interpretability tools to help explain the model (and "black box" models will not be appropriate in some contexts).

  • compatibility of purpose. AI users need to consider carefully whether inputting personal data into an AI solution, as well as any subsequent processing of those data, is compatible with the purpose for which those data were originally collected.

  • fairness, which is about processing data only in a manner which individuals would reasonably expect, and not using data in a way that would cause unjustified adverse effects on individuals. It is well known that AI models can perpetuate bias that stems from their training data or programming. Controllers need to ensure (throughout the system's lifecycle) that the system is sufficiently statistically accurate to avoid discrimination, and that training data are representative and free from error and bias.

  • the practicalities of fulfilling the requests of data subjects exercising their rights, such as the rights of access, rectification, erasure and the right to object. For some generative AI in particular, personal data input into the AI can be difficult to trace or extract. This needs to be factored into design decisions - either the tool needs to be designed to facilitate data subject rights, or built so that personal data are not integrated into the system.

As we mentioned above, there are some tensions between GDPR principles and AI fairness principles. For example, the GDPR requires that only the minimum amount of personal data necessary for the purpose is processed, and that data are retained only for as long as necessary. Indiscriminately training models on vast quantities of personal data would certainly be inconsistent with the minimisation principle. But even responsible AI use relies on large volumes of diverse data for training, to ensure that the data are representative and bias is avoided, as well as on data being retained for traceability, audit and monitoring purposes. Businesses will need to balance these potentially competing principles (the ICO calls them "trade-offs") and, where necessary, there needs to be a way to mediate between regulators and prioritise where conflicts arise. The White Paper proposes that this mediation role should fall to a central government monitoring function.

3. What's the UK's regulator doing in response to AI?

AI is a priority area for the ICO: it has identified several areas of particular focus, including fairness, dark patterns, AI-as-a-service, AI and recommender systems, biometric data and technologies, and privacy and confidentiality in explainable AI. The ICO has already made substantial progress towards delivering detailed, practical guidance to help businesses develop and use AI in a way that is compliant with data protection laws, and has published a range of AI-specific guidance in recent years.

4. AI and Data Protection Impact Assessments

The ICO's guidance provides a helpful steer on Data Protection Impact Assessments (DPIAs) in the context of AI. In most cases, the use of AI will trigger the legal requirement to undertake a DPIA.

When is a DPIA required for AI?

A DPIA is required under the UK GDPR where processing is likely to result in a high risk to the rights and freedoms of individuals. A DPIA is also required if the use of AI involves:

  • systematic and extensive evaluation of personal aspects based on automated processing, including profiling, on which decisions are made that produce legal or similarly significant effects;

  • large-scale processing of special category data; or

  • systematic monitoring of publicly accessible areas on a large scale.

The ICO has also published a list of processing operations that require a DPIA.

Even where a DPIA is not strictly required, it is still a useful tool for documenting how the organisation is complying with the rules. The DPIA needs to describe the nature, scope, context and purposes of any processing of personal data, and how and why you are going to use AI. The ICO's updated guidance makes it clear that you need to show that you have considered less risky alternatives that would achieve the same purpose as the AI, and explain why those alternatives were not chosen.
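
Purely by way of illustration, the trigger criteria above lend themselves to a simple internal screening checklist. The sketch below is our own hedged example: the class, field and function names are assumptions for illustration, not terminology taken from the UK GDPR or the ICO's guidance, and any real screening would need to record its reasoning and be reviewed by those responsible for data protection compliance.

```python
from dataclasses import dataclass


@dataclass
class AIProcessingProfile:
    """Answers to a DPIA screening questionnaire for a proposed AI use case."""
    automated_decisions_with_significant_effects: bool  # systematic and extensive evaluation, including profiling
    large_scale_special_category_data: bool             # e.g. health data or biometric data used for identification
    systematic_monitoring_public_areas: bool             # e.g. live facial recognition in publicly accessible areas
    other_likely_high_risk: bool                          # e.g. processing on the ICO's published high-risk list


def dpia_required(profile: AIProcessingProfile) -> bool:
    """Return True if any of the trigger criteria listed above appear to be met."""
    return any([
        profile.automated_decisions_with_significant_effects,
        profile.large_scale_special_category_data,
        profile.systematic_monitoring_public_areas,
        profile.other_likely_high_risk,
    ])


# Hypothetical example: an AI tool making automated shortlisting decisions about job applicants
candidate_screening = AIProcessingProfile(
    automated_decisions_with_significant_effects=True,
    large_scale_special_category_data=False,
    systematic_monitoring_public_areas=False,
    other_likely_high_risk=False,
)
print(dpia_required(candidate_screening))  # True - a DPIA should be carried out
```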

5. AI and lawful basis for processing

Whenever you are processing personal data, you must have an appropriate lawful basis for doing so. This means breaking down each processing operation and ensuring that there is a lawful basis for each type of processing – for example, there may be different lawful bases for training the AI and for its subsequent use.

Relying on "legitimate interests" as a lawful basis for processing is often seen as one of the more flexible legal bases for processing but it requires an assessment of the impact of your processing on individuals and for you to demonstrate that there is a compelling benefit to the processing (which you should document as part of a "legitimate interests assessment"). It will not be an appropriate legal basis in all instances.

Certainly, if AI is to be used to process special category data (e.g. data about someone's health, race, religious beliefs, politics, sex life or sexual orientation, or biometric data used for identification purposes) or data about criminal offences, then you will need to identify an additional condition for processing under Article 9 (and, for criminal offence data, Article 10) of the UK GDPR, as well as under the DPA 2018. Whilst these additional conditions potentially include explicit consent, explicit consent under the GDPR is a high bar to meet and is not appropriate in some contexts (e.g. within an employment relationship). Other conditions are often preferred, such as "substantial public interest" (for example, the prevention or detection of crime) for AI used in surveillance technology, but considerations of necessity and proportionality often form part of these additional processing conditions too, so their applicability will depend heavily on the context in which the AI is to be used.

Inferences and special category data

The ICO has clarified that where AI is used to guess or predict information about individuals or groups of individuals, whether this counts as special category data and triggers Article 9 depends on how certain that inference is and whether you are deliberately drawing it (see the illustrative sketch after this list). So the inference is likely to be special category data if your use of AI means you:

  • can (or intend to) infer relevant information about an individual with a reasonable degree of certainty; or

  • intend to treat someone differently on the basis of the inference (even if it's not with a reasonable degree of certainty).
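
Purely as an illustration of the two-limb test described above (either limb is enough, and the second applies even where the inference is not drawn with a reasonable degree of certainty), the following is a minimal sketch; the function and parameter names are our own, not the ICO's.

```python
def inference_is_special_category(
    can_or_intends_to_infer_with_certainty: bool,
    intends_to_treat_differently_based_on_inference: bool,
) -> bool:
    """Rough expression of the two-limb test described above: either limb alone is enough."""
    return (
        can_or_intends_to_infer_with_certainty
        or intends_to_treat_differently_based_on_inference
    )


# Hypothetical example: a retailer infers likely health conditions from purchase history
# and tailors offers accordingly - the second limb is engaged even if the inference is uncertain.
print(inference_is_special_category(False, True))  # True - Article 9 is likely triggered
```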

6. Divergence from the EU?

So far, the rules in the UK and the EU on the data protection aspects of AI, regulators' enforcement action against AI users in these jurisdictions, and the respective guidance issued by the ICO and the European Data Protection Board in relation to AI have been broadly consistent. However, in future we can expect to see the rules themselves diverge and/or different interpretations and applications of the rules by courts and regulators in the UK and the EU. Take, for example, AI used in live facial recognition technology.

A case study – Live Facial Recognition Technology (LFRT)

In the high-profile case of R (Bridges) v South Wales Police in 2020, the Court of Appeal found that the use of LFRT, which screened against "watchlists" of wanted persons in police databases, was unlawful. The Court decided that South Wales Police had breached the right to privacy under Article 8 of the European Convention on Human Rights (the right to respect for private and family life), that their data protection impact assessment was inadequate, and that the police had also breached the public sector equality duty because they had taken no steps to satisfy themselves that the underlying software did not contain bias.

The ICO has also taken a robust approach to enforcement in relation to Clearview AI, a company which indiscriminately collected billions of images of people from the web and social media to train and develop its LFRT solution for sale to governments and law enforcement authorities. As discussed in our briefing here, the ICO fined Clearview £7.5m for breaches of fairness and transparency and ordered it to delete all data relating to UK residents. Other European jurisdictions have taken similar enforcement action against Clearview.

However, in recent months, the ICO has appeared to take a slightly more permissive approach to LFRT. In March 2023, the ICO decided not to take enforcement action following its investigation into the security firm Facewatch, even though it identified several breaches of the UK GDPR. Facewatch's LFRT solution is used to screen for known offenders, predominantly by Facewatch's customers in the retail sector to combat shoplifting. The ICO decided that Facewatch's product had a legitimate purpose in the prevention and detection of crime, and that Facewatch had taken significant steps towards compliance, although it emphasised that its decision to close the investigation should not be treated as a blanket endorsement of Facewatch's product.

In future, the UK and the EU's respective positions on LFRT look likely to diverge still further. The European Parliament has proposed banning LFRT in public spaces outright under the draft AI Act.

7. Starting out on the right path to AI compliance

While future regulatory frameworks in respect of AI will stretch beyond privacy issues, those organisations that have embedded robust GDPR compliance programmes, such that they are well versed in risk-assessing and balancing business interests against the rights of the individual, are likely to be better equipped to face AI regulatory demands to come in the UK and beyond. That said, they should also be prepared for some differences between the legal and regulatory approaches of the UK and the EU to privacy issues in an AI context, as well as for the rules themselves in those jurisdictions to begin to diverge.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.