Even the CEOs of the big AI firms, like Sam Altman of OpenAI (the developer of ChatGPT), say that AI needs regulating, but there is very little consensus about how to go about it. Governments across the globe are grappling with how to balance promoting innovation and economic growth with protecting citizens' privacy, safety and other human rights. This briefing explores the UK's approach to the regulation of AI, highlights its key differences from the EU's proposed AI Act and looks at steps organisations can consider taking now, amidst the regulatory uncertainty.

  1. The UK's approach to regulating AI
  2. What's happening with the EU's AI Act?
  3. Amidst the regulatory uncertainty, what can businesses do to prepare?

The UK's approach to regulating AI

The UK Government's approach to regulating AI is outlined in "A pro-innovation approach to AI regulation", a white paper published in March 2023 (White Paper). The White Paper followed hot on the heels of Sir Patrick Vallance's report to the Government, the "Pro-innovation Regulation of Technologies Review - Digital Technologies". The Government's objective is to drive growth, ensuring that the UK remains attractive to investors, by making responsible innovation easier and avoiding unnecessary burdens for businesses and regulators. It therefore intends to legislate only where it absolutely needs to, but it recognises that trust is a critical driver of AI adoption, so it needs to address risks, albeit in a proportionate way.

As the White Paper puts it: "A heavy-handed and rigid approach can stifle innovation and slow AI adoption. That is why we set out a proportionate and pro-innovation regulatory framework. Rather than target specific technologies, it focuses on the context in which AI is deployed. This enables us to take a balanced approach to weighing up the benefits versus the potential risks."

Five core principles

In the White Paper, the Government identifies five overarching principles to guide its regulatory framework:

  • Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed.
  • Transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system's decision-making process at a level of detail appropriate to the risks posed by the use of the AI.
  • Fairness: AI should be used in compliance with the UK's existing laws (for example, equality and data protection laws) and should not discriminate against individuals or create unfair or anticompetitive commercial outcomes.
  • Accountability and governance: there must be appropriate oversight of the way AI is being used and clear accountability for the outcomes.
  • Contestability and redress: people should be provided with clear routes to dispute harmful outcomes or decisions generated by AI.

Existing regulators and regulatory framework

One of the key differences between the UK and EU approaches to regulating AI is that the UK is not proposing to introduce any broadly applicable AI-specific regulations. Instead, the UK government plans to rely on existing regulators and regulatory structures, arguing that those regulators are best placed and have the expertise to apply the overarching principles to AI use cases that fall within their sector(s). Existing regulators will be encouraged to issue relevant guidance to explain how the principles link with existing legislation and what good compliance looks like.

Regulatory coordination in practice?

If the co-ordination and co-operation aspects of the Government's plans in the White Paper don't come together, the result is likely to be inconsistency and uncertainty.

The UK's approach relies on regulators having sufficient expertise and funding and co-operating with each other. Some regulators already co-ordinate (the Digital Regulation Cooperation Forum between the Information Commissioner's Office (ICO), the Competition and Markets Authority, the Financial Conduct Authority and Ofcom, for example). While regulators such as these may well be equipped for the task, it's possible that other regulators won't have the requisite resources or expertise to carry out their roles effectively.

This approach also relies upon the Government providing substantial central oversight and coordination – the White Paper refers to a central monitoring and evaluation framework, a cross-sectoral risk function and risk register, and a multi-regulator AI sandbox (following another recommendation from the Vallance report). Centralised oversight is required to:

  • identify new AI risks (particularly those that cut across the sectors);
  • broker agreement on which regulator addresses which risks and prioritise between potentially conflicting principles (e.g. it will be difficult to assess an algorithm's fairness without access to special category data about the subjects of the processing, requiring fairness principles to be weighed against privacy principles); and
  • identify measures to plug gaps where an AI use case falls between regulators' respective remits.

Context-specific

The UK government's approach seeks to regulate the use of AI rather than the technology itself, dubbing this approach "context-specific". It has some key advantages over the more rigid risk-level categories proposed in the EU's AI Act. Take, for example, an AI-powered chatbot: used in a retail, customer-satisfaction context, the technology poses a much lower risk than the same technology used in a medical diagnostic context, and the latter use case arguably merits tighter regulation than the former. Moreover, this approach can be adapted more quickly to new developments in AI and removes the need to constantly update AI regulations as the technology advances.

Suck it and see...

The Government's approach is iterative: it will see what works (and what doesn't) before intervening further.

There is a concern amongst UK regulators that if the framework is not placed on some form of statutory footing it will not be enforceable. The Government is therefore considering imposing a statutory duty on regulators to have "due regard" to the principles, but only after an initial implementation period and only if it still considers this necessary.

The Government also says that it's "too soon" to make decisions about the liability regime for AI "as it is a complex, rapidly evolving issue which must be handled properly to ensure the success of [the] wider AI ecosystem" and so does not propose to make changes at this stage. Nonetheless, the White Paper recognises that there are areas where the lack of clarity around liability may prove to be an issue - it provides a case study on automated healthcare triage systems, noting that there is "unclear liability" if such a system provides incorrect medical advice, which may affect the patient's ability to seek redress. Some might say that this is a case of the Government "kicking the can down the road" and leaving the important task of allocating responsibility between actors in the supply chain to regulators. Once again, this contrasts with the EU's proposed new AI Liability Directive, which addresses liability for harms that may arise from the use of AI and sets out a rebuttable presumption of causality between a failed duty of care and the harm caused by the AI system.

There's also nervousness that the Government's iterative approach may take too long, at a time when the capability of the technology is advancing at a terrific pace.

Foundation Models

The Government has set up a £100m AI Foundation Model Taskforce charged with leading AI safety research and developing responsible standards and governance to underpin the White Paper framework and the UK's approach to regulating foundation models. The taskforce is modelled on the Vaccine Taskforce launched at the start of the Covid-19 pandemic.

The European Parliament, on the other hand, has chosen to add specific requirements on generative AI systems to the proposed AI Act, such as obligations to disclose that content was generated by AI, to design the system so that it is prevented from generating illegal content, and to publish summaries of the copyrighted material used for training.

A change in the political rhetoric since the White Paper?

During Rishi Sunak's visit to the US in June 2023, he seemed to be contemplating a more hands-on regulatory approach, whilst also seeking to carve out a role for the UK as an international hub for AI regulation. He pitched for the UK to become home to any future international regulatory body, floating the idea of a "CERN for AI", modelled on the international particle physics research organisation, or an international regulator modelled on the International Atomic Energy Agency. He also announced that the UK will host a global summit on AI regulation later this year.

Meanwhile, the Labour Party prefers the idea of a licensing framework for AI, regulated like medicines or nuclear power - a far cry from the light-touch regulatory approach in the White Paper. A potential drawback of a licensing model is that it creates a barrier to entry that could block innovation from smaller AI firms and entrench the strategic position of those who are already dominant in the market.

A House of Lords report published on 18 July 2023, "Artificial intelligence: Developments, risks and regulation" (which relies heavily on an earlier report by Sir Tony Blair and Lord Hague of Richmond, "A new national purpose: AI Promises a World-Leading Future of Britain"), advocates creating a national AI laboratory called Sentinel to research and test safe AI. The House of Lords report recommends:

  • diverging from the EU but ensuring that the UK's regulatory system allows for UK companies to voluntarily align with EU standards;
  • in the near term, broadly aligning with US regulatory standards, while building a coalition of countries through Sentinel, with a view to diverging from the US approach as other international approaches mature;
  • in the medium term, establishing an AI regulator in the UK to work in tandem with Sentinel.

What's happening with the EU's AI Act?

In essence, the EU's approach is much more detailed and prescriptive than the UK's approach. The EU's proposed AI Act takes a risk-based, product-safety-type approach to regulating the technology through a four-tiered model:

  • The top tier comprises AI systems that pose an unacceptable risk and are banned outright, such as systems for social scoring or harmful behavioural manipulation, real-time biometric identification systems in public spaces, predictive policing, emotion recognition systems used in law enforcement, border management, the workplace and educational institutions, and the scraping of biometric data from social media or CCTV footage to create facial recognition databases.
  • The second tier is for "high risk" systems that have a negative impact on safety or fundamental rights, for example, systems that are used in, or are themselves, products subject to EU product safety legislation, systems used in the management of critical infrastructure or in employment and HR management, or systems that influence access to essential services, education or training. These systems are subject to the most onerous obligations, including conformity assessments, a prior registration regime, risk management systems and ensuring appropriate human oversight.
  • The third tier covers limited-risk systems, such as chatbots, which are mainly subject to transparency requirements.
  • The final tier, minimal-risk AI systems such as AI used in gaming or spam filters, is not subject to any restrictions.

The changes to the AI Act introduced by the European Parliament

In June 2023, the European Parliament approved its version of the AI Act. It has made significant changes. For example, it has:

  • introduced additional obligations on providers of foundation models;
  • changed some key definitions and established a set of six high-level core principles that are applicable to all AI systems regulated by the Act;
  • extended the list of prohibited and high-risk systems and changed some of the obligations relating to them, including requiring certain deployers of high-risk systems to undertake a Fundamental Rights Impact Assessment before using those systems. It has, however, limited the classification of high-risk systems to those posing a "significant risk" to people's health, safety or fundamental rights; and
  • increased the (already eye-watering) maximum fine to the higher of €40m or 7% of worldwide turnover.

There's still some way to go before the AI Act becomes law – next comes a "trilogue" negotiation stage between the European Commission, Parliament and Council of Ministers. These discussions are expected to be complex because there's still disagreement over some fundamentals, like the definition of AI and the lists of prohibited and high-risk AI. Even if the AI Act becomes law later this year/early 2024, a 24-month transition period is currently contemplated before organisations are required to comply with it.

Amidst the regulatory uncertainty, what can businesses do to prepare?

It's not a regulatory desert out there. The EU GDPR and the UK GDPR are highly relevant to how organisations use and develop AI where personal data is processed. The first priority should therefore be to comply with existing requirements (and the ICO has published a series of detailed guidance documents on AI to help with this).

The Government has said that it intends for the UK regulatory framework to be compatible with regulatory frameworks in other jurisdictions, including the EU's proposed AI Act. Indeed, in terms of core principles, there is a lot of common ground between those different frameworks - even if the regulatory approach is different – so shaping governance around these principles should help with compliance down the line.

Of course, UK organisations that also do business in the EU will need to comply with the EU's more detailed rules once those apply and, from this perspective, the UK's regulatory approach is likely to play second fiddle to the EU's AI Act as the "gold standard" (just as many businesses subject to both the EU GDPR and the UK GDPR may choose to continue with existing GDPR-compliant processes rather than flexing them in the ways contemplated by UK data protection reforms to be introduced by the Data Protection and Digital Information (No.2) Bill). While the EU's AI Act is still a moving feast, it at least contains more detail to plan towards. Businesses that operate in the EU may wish to look now at how the AI they use or develop, or plan to use or develop, is likely to map to the risk tiers and consider how they'd address the associated requirements.

Building AI risk considerations into processes

Organisations could consider embedding the following into their processes (a simple illustrative sketch of an AI register entry follows this list):

  • AI registers to map where and how AI is being used in their business;
  • AI risk assessments (similar to Data Protection Impact Assessments) to help explore the potential risks associated with AI, such as biases and discrimination, and implement measures to mitigate those risks;
  • appropriate record keeping so there's an effective audit trail regarding the data input into AI tools, the purposes for which AI is being used and any decisions taken on the basis of the output;
  • appropriate human oversight when using AI tools;
  • transparency requirements so that people know when AI is being used (and not just in a data protection context);
  • employee and supply chain policies that make clear what is (and isn't) acceptable use of AI.
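For organisations that want to keep such a register in a structured, machine-readable form, the sketch below shows one possible shape for a register entry. It is purely illustrative: the field names, risk categories and example values are assumptions made for the purposes of this briefing, not a prescribed or regulator-endorsed format.

```python
# Purely illustrative sketch of an internal AI register entry.
# Field names and risk categories are assumptions, not a prescribed format.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class AIRegisterEntry:
    system_name: str
    business_owner: str
    purpose: str                # the context in which the AI is deployed
    personal_data_used: bool    # flags whether UK GDPR obligations are engaged
    risk_level: RiskLevel
    human_oversight: str        # description of the human oversight arrangements
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)


# Example: recording a customer-service chatbot and its mitigations.
entry = AIRegisterEntry(
    system_name="Retail support chatbot",
    business_owner="Customer Services",
    purpose="First-line responses to customer queries",
    personal_data_used=True,
    risk_level=RiskLevel.LIMITED,
    human_oversight="Escalation to a human agent on request or low confidence",
    mitigations=["transparency notice shown to users", "periodic review of outputs"],
)
print(entry)
```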

Before using third party AI tools, businesses will also want to know, for example:

  • the source of the original data fed into the tool, to understand whether the tool's outputs can be trusted and whether (and how) it has been tested for bias;
  • whether there have been any independent evaluations of the tool's reliability and accuracy; and
  • whether there will be any human review of the tool's outputs.

We can expect to see more detailed protections reflecting these issues in contracts, along with an appropriate allocation of liability for AI's outputs.

We also look at the interaction between AI and intellectual property law in our briefing here.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.