The Court of Appeal decided that South Wales Police had not done enough to make sure that the technology it was using was not biased in this way. The Court said:

'The fact remains, however, that SWP has never sought to satisfy themselves, either directly or by way of independent verification, that the software program in this case does not have an unacceptable bias on grounds of race or sex. There is evidence, in particular from Dr Jain, that programs for AFR can sometimes have such a bias.'

Addressing the interference with people's privacy, the Court also said that the fact that the new technology was only being trialled was no defence to the requirement to act in accordance with the law.

So, developers and purchasers of AI in the public sector will need to find ways of demonstrating that they have investigated whether the technology may be biased against different groups of people, and ensure that measures are put in place to mitigate the risk of bias or discrimination. It is important to understand that the Court of Appeal does not go so far as to say that the technology must be completely free from bias. Rather, the organisations using it must have investigated whether there is any bias and taken this into account when deciding whether to adopt the technology. Given that the relevant data protection, privacy and equalities duties are ongoing, rather than satisfied by a one-off assessment, organisations should also be able to audit their use of such technology in order to evaluate their compliance with these duties.
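By way of illustration only (neither the judgment nor the regulatory guidance prescribes any particular method), a technical team supporting such an investigation might compare a system's error rates across demographic groups on a labelled evaluation dataset. The minimal Python sketch below assumes the pandas library and hypothetical column names ("group", "matched", "is_true_match"); a marked disparity in false match rates between groups would be one indicator of potential bias warranting further scrutiny.

import pandas as pd

def false_match_rate_by_group(results: pd.DataFrame) -> pd.Series:
    """Illustrative bias check: for each demographic group, the share of
    genuine non-matches that the system nonetheless flagged as matches.
    Column names here are hypothetical placeholders, not a real API."""
    non_matches = results[~results["is_true_match"]]
    return non_matches.groupby("group")["matched"].mean()

if __name__ == "__main__":
    # Toy evaluation data for demonstration purposes only.
    evaluation = pd.DataFrame({
        "group": ["A", "A", "B", "B", "B", "A"],
        "matched": [False, True, False, False, True, False],
        "is_true_match": [False, False, False, False, False, True],
    })
    print(false_match_rate_by_group(evaluation))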

The ruling adds legal weight to existing guidance issued by the ICO, which strongly emphasises the need to address the risks of bias and discrimination when using AI systems to process personal data (guidance on AI and data protection, July 2020). The guidance, together with the ICO's broader framework for auditing AI compliance, covers the relevant data protection requirements and their links to equalities duties, in particular the need to process personal data fairly, lawfully and transparently. It also sets out suggested steps to mitigate and audit the risk of bias and discrimination.

While this case arose in the context of a controversial use of AFR by the police, the implications above will apply to the use of AI systems throughout the public sector and beyond.

Read the full judgment. The discussion on equalities appears at paragraph 163 onwards.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.