I'm increasingly asked the following question: who is responsible for AI within my organisation? The short answer is that it shouldn't be one individual. Rather, your AI governance framework needs to bring together stakeholders from, at a minimum, the compliance, technology, product, HR and legal teams. And within the legal department, voices from privacy, IP, antitrust and commercial all need to be heard.

With that in mind, you may have seen that Lord Holmes of Richmond last week introduced the "Artificial Intelligence (Regulation) Bill" to the UK House of Lords. The Bill contains a number of proposals for AI regulation in the UK that fall into the category of Big, If True. The proposals include:

  • Requiring any business that develops, deploys or uses AI to have a designated "AI Officer" who ensures that (i) the organisation's use of AI is safe, ethical, unbiased and non-discriminatory, and (ii) the data it uses for AI purposes is unbiased.
  • Requiring any business involved in training AI to obtain informed consent to use all third-party data and IP, and comply with all applicable IP and copyright obligations.
  • Requiring any business supplying a product or service involving AI to give customers clear and unambiguous "health warnings, labelling and opportunities to give or withhold informed consent in advance".
  • Creating an "AI Authority" to regulate AI by, among other things, (i) ensuring that relevant regulators take account of and are aligned in their approach to AI, (ii) coordinating a review of product safety, privacy and consumer protection legislation to ensure its suitability for AI, (iii) accrediting independent AI auditors, and (iv) promoting interoperability with international regulatory frameworks.

On its face, the AI Officer requirement is new and significant. But it's important to understand that the Bill will almost certainly not become law, for two (inter-related) reasons.

  1. The UK Parliamentary system allows parliamentarians who aren't government ministers (i.e., backbench MPs and peers) to table Bills that are independent of the executive's legislative and policy agenda, and that's what Lord Holmes did last week. The vast majority of Private Members' Bills don't become law (the success rate in a given year is usually less than one in 20), and it's likely that the same fate will befall the AI Regulation Bill.
  2. That's especially so because the UK Government confirmed earlier this month that it isn't going to regulate AI "in the short term". In other words, it won't propose AI-specific legislation, nor will it create a new regulator to oversee the technology. (That position reaffirms the Government's White Paper on AI, released in March 2023, in which it espoused a principles-based approach to AI regulation.) So unless the Government performs a drastic U-turn, a Bill that seeks to do both of those things isn't going to receive its support.

That being said, Private Members' Bills are often introduced to stimulate debate and bring public focus to a particular issue, so it will be interesting to see whether the Government decides to address or otherwise engage with the spirit and/or content of Lord Holmes's Bill.

*****

From a proposed law that probably won't make the statute book to one that probably will: the Data Protection and Digital Information (No. 2) Bill, which is being debated in the House of Commons on Wednesday (29 November 2023). The Bill, which aims to "update and simplify" the UK's post-Brexit data protection framework, contains a number of substantive changes to the GDPR framework that, with one exception, are beyond the scope of this article.

If the Bill becomes law, organisations will no longer be required to designate a data protection officer. Instead, they will have to appoint a "senior responsible individual" if they carry out processing that is likely to result in a high risk to individuals' rights and freedoms. This proposal was received negatively by most respondents to the Government's consultation on a previous iteration of the Bill, as the Government itself has acknowledged, but it remains in the (No. 2) Bill and so is likely to feature in the final text of the legislation.

The DPO and SRI roles differ in a number of important respects. For example, the DPO can't hold a position that involves determining the purposes and means of processing personal data, and must report to the highest level of senior management, whereas the SRI must be a member of senior management and thus will be involved in making decisions about the processing of personal data. And unlike a DPO, whose duties can be outsourced, the SRI can't be an external appointment.

If the (No. 2) Bill becomes law, things will get complicated for organisations in the UK that are subject to, and required to appoint a DPO under, the EU GDPR. If they appoint an SRI who delegates to the DPO, the reduction in red tape that the Government claims the Bill will help to effect isn't realised; if anything, the red tape increases. Businesses won't be able to promote the DPO to senior management and have them take on a dual DPO-SRI role, given the inherent conflicts of interest this would create. And the many excellent DPOs operating in the UK will see disruption to their roles, responsibilities and reporting lines.

I raise this because the present moment feels very similar to the months leading up to May 2018, when even the likes of the Wall Street Journal wrote about the talent war for DPOs. Those same articles are now being written, and conversations had, about AI. So it almost doesn't matter that the AI Officer probably won't become a legal obligation in the UK (yet, at least...). At organisations across Britain, somebody is, or somebodies are, being asked to take responsibility for their employer's AI strategy. It's a good time to be thinking about what that means for you.
