Researchers at the Information Commissioner's Office (ICO) have started a series of blogs discussing the ICO's work in developing a framework for auditing artificial intelligence (AI). The first blog in the series focuses on the degree and quality of human review in AI systems, and specifically on the circumstances in which human involvement is truly "meaningful", such that a system is not treated as making solely automated decisions.

Risks inherent in complex AI systems

The ICO and the European Data Protection Board (EDPB) have both published guidance on automated individual decision-making and profiling. The main takeaway is that human reviewers must actively check a system's recommendation: they should consider all available input data, weigh up and interpret the recommendation, take any additional factors into account, and use their authority and competence to challenge the recommendation where necessary.

Human reviewers should also be alert to additional risk factors that may cause a system to be regarded as solely automated under the GDPR. These risks arise most often in complex AI systems and fall into two categories: (1) automation bias and (2) a lack of interpretability.

  • Automation bias arises because human users tend to trust computer-generated output as an objective, and therefore accurate, product of mathematics and data-crunching. When reviewers stop exercising their own judgment, or stop questioning whether the AI's result might be wrong, the system risks becoming solely automated.

    How to address this concern: Design requirements that reduce automation bias and support meaningful human review must be developed during the design and build phase of AI systems. Organisations (in particular, front-end interface developers) need to consider how human reviewers think and behave, so that they have a genuine opportunity to intervene. It may also help to consult reviewers and test design options with them early on.
  • A lack of interpretability arises when human reviewers, again, stop judging or challenging a system's recommendation, but this time because the recommendation is intrinsically difficult to interpret. This undermines any effort to review the output meaningfully, so the decision may again become 'solely automated'.

    How to address this concern: Interpretability should likewise be considered from the initial design phase. Organisations should define, and be able to explain, how interpretability is measured in the specific context of their AI system. This could include, for example, explaining a specific output rather than the model in general, or attaching a confidence score to each output to signal to the reviewer that more involvement is needed before a final decision is made (a brief sketch of this approach follows the list).
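
The ICO blog does not prescribe any particular implementation, but the confidence-score idea above could work along the lines of the following minimal Python sketch. Everything here is hypothetical: the threshold, field names and routing labels are illustrative assumptions, not part of the guidance.

```python
from dataclasses import dataclass

# Hypothetical escalation threshold; a real value would be set and justified
# by the organisation in the context of its own AI system.
REVIEW_THRESHOLD = 0.80

@dataclass
class Recommendation:
    subject_id: str
    outcome: str        # the model's suggested decision
    confidence: float   # model-reported confidence in [0, 1]
    explanation: str    # output-specific explanation shown to the reviewer

def route_for_review(rec: Recommendation) -> str:
    """Return a routing label indicating how much human involvement is flagged.

    This does not remove the human from the loop: every recommendation is
    still presented to a reviewer, but low-confidence outputs are explicitly
    flagged as needing deeper scrutiny before a final decision is made.
    """
    if rec.confidence < REVIEW_THRESHOLD:
        return "detailed-review"   # reviewer examines inputs and explanation closely
    return "standard-review"       # reviewer still checks and can override the output

# Example usage with a hypothetical recommendation
rec = Recommendation("case-042", "decline", 0.63, "Key factors: X, Y")
print(route_for_review(rec))  # -> "detailed-review"
```

The design choice illustrated is that the score changes how much scrutiny is signalled, never whether a human is involved at all, which is what keeps the decision non-solely automated.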

Comment – solely vs non-solely automated AI systems

The ICO recommends that an organisation decide at the outset of the design phase whether its AI application is intended (i) to enhance human decision-making or (ii) to make solely automated decisions. That choice requires management or board members to fully understand the risk implications of each option, and to ensure that accountability and effective risk-management policies are in place from the outset.

Other key recommendations include: training human reviewers so that they understand the mechanisms and limitations of AI systems and how their own expertise enhances them; and monitoring how often reviewers accept or reject the AI's output, and analysing the reasons behind those decisions.
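
As an illustration of the monitoring point above (nothing here is prescribed by the ICO), a minimal Python sketch that tracks per-reviewer acceptance rates of the AI's output might look like this; the log structure and field names are assumptions made for the example.

```python
from collections import Counter

# Hypothetical review log: each entry records whether the reviewer accepted
# or overrode the AI's recommendation. Field names are illustrative only.
review_log = [
    {"reviewer": "alice", "decision": "accept"},
    {"reviewer": "alice", "decision": "accept"},
    {"reviewer": "bob",   "decision": "override"},
    {"reviewer": "alice", "decision": "accept"},
]

def acceptance_rates(log):
    """Compute per-reviewer acceptance rates of the AI's output.

    A rate very close to 1.0 may indicate automation bias (rubber-stamping);
    the reasons behind overrides can then be analysed separately.
    """
    totals, accepts = Counter(), Counter()
    for entry in log:
        totals[entry["reviewer"]] += 1
        if entry["decision"] == "accept":
            accepts[entry["reviewer"]] += 1
    return {r: accepts[r] / totals[r] for r in totals}

print(acceptance_rates(review_log))  # e.g. {'alice': 1.0, 'bob': 0.0}
```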

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.