Welcome to the fifth issue of the HLK Newsletter tracking the AI legal landscape. In this issue, we take a look at copyleft licences and their role in AI, reflect on the comings and goings at OpenAI, and update you on discussions regarding the AI Act, as well as a recent judgement that might be good news for those looking to protect AI innovation in the UK. 

Copyleft Licences and AI

We've explored in past editions how copyright might affect AI and its users, but what about copyleft? It could mean that, after finishing your big coding project with a little AI help, you find out that your code must be made available for others to use and develop because you inadvertently incorporated some copyleft code.

Software made available under a copyleft licence (for example, the GNU General Public License (GPL)) may be used and modified, but the resulting derivative works must be made available for others to use and modify in the same way. The way that copyleft licences spread to any derivative works, and then to any derivatives of those derivatives, and so on, has earned them the name "viral licences".

This all seems straightforward in the simple case of modifying some copyleft software. But sometimes it's not quite so easy – when exactly is one piece of code a derivative work of another? The picture is muddied further when we look at AI.

For example, consider GitHub Copilot, an AI tool that autocompletes code. Similar to how predictive text on your phone can suggest the next word of your message, or how Microsoft Word suggests the next word or phrase as I type this sentence, GitHub Copilot suggests snippets or lines of code based on the code a user has already typed. According to GitHub's website, Copilot "has been trained on source code from publicly available sources, including code in public repositories on GitHub". This includes code which carries with it copyleft licensing obligations, e.g. "GPL code" which has been made available under a GPL licence.

What if Copilot autocompletes a snippet of code using GPL code? In that case, it might be argued that the resulting software inherits the licence of the GPL source code and should thus be made available in the same way, and that, if it isn't, there is a violation of the licence terms.

An important factor in whether software generated with the help of GitHub Copilot inherits any GPL obligations is whether Copilot can "copy and paste" code that it has been trained on. GitHub answers this on their website with a resounding no:

"No, GitHub Copilot generates suggestions using probabilistic reasoning... The AI models that create Copilot's suggestions may be trained on public code, but do not contain any code. When they generate a suggestion, they are not "copying and pasting" from any codebase."

However, even if GitHub Copilot doesn't actually copy and paste code, there is still the risk that the suggestion Copilot makes may happen to be GPL code. GitHub acknowledges that Copilot's suggestions may sometimes (less than 1% according to their own research) match examples of code used to train Copilot, and offers an optional filter to detect and suppress such suggestions. When it comes to GPL code, GitHub claims to be previewing a feature which would assist users in finding and reviewing GPL-style licences which might be relevant to a Copilot suggestion.

Might GitHub's new licence-detection feature be motivated by the possibility of a Copilot suggestion "infecting" the resulting code with a copyleft licensing obligation? Perhaps it was spurred by a class-action lawsuit filed in the US Federal Court in November 2022 which alleged, among other claims, that GitHub Copilot violates the terms of GPL licences protecting code used as training data through its suggestions of the same code (you can read more about the lawsuit and its motivation here). Interestingly, GitHub and Microsoft (which acquired GitHub in 2018) claim to offer indemnity for customers hit with copyright litigation. As you might expect, however, the conditions under which this indemnity is offered are murky and you might well be left on your own (you can read some analysis about this here).

We will continue to monitor this topic and the related lawsuit.

Legislation watch

Although not strictly a legislation update, the spotlight this week has been on the tumultuous board wranglings at OpenAI – the company behind ChatGPT. It was virtually impossible to miss the news about the sudden and unexpected firing of CEO and Silicon Valley darling, Sam Altman, last Friday. The story goes that OpenAI's board had lost confidence in Sam, with the line being that he wasn't being "candid in his communications with the board". We suspect the board is now regretting its hasty actions. Chaos followed the dismissal: the appointment of interim CEOs, the announcement that Sam would be leading a new AI research team at Microsoft, and threats of mass employee resignations in protest at Sam's treatment. By the following Tuesday, clearly bowing to mass investor pressure, it was announced to the world's press that Sam had been reinstated as OpenAI's CEO and that a significant number of the board were being replaced. It will be worth keeping an eye on this story to see how things unfold in the coming weeks and months.

In other news, prospects of the EU AI Act reaching an agreed form by the end of this year are now looking a little more promising. This was not the case earlier in the month, when a digital policy advisor for a German MEP predicted a less than 10% chance of the draft being agreed this year following a difficult trilogue meeting between the EU Council, Parliament and Commission. It was reported that the most recent trilogue meeting was forced to finish earlier than scheduled after France and Germany called for significant changes to the proposed regulatory approach for foundation models, arguing that homegrown generative AI companies would suffer under the current proposed form of the AI Act. This is not the only issue. Another key sticking point remains law enforcement's use of remote biometric identification in public spaces – currently considered an unacceptable risk in most use cases. If agreement is to be reached this year, then 6 December is a key date, as this will be the last trilogue meeting of the year and the last before EU parliamentary elections and changes in the Commission. If these outstanding issues cannot be resolved during the 6 December meeting, then further delays may become inevitable as we head into 2024.

Finally, for those interested in the patentability of AI systems, a potentially significant judgement has been handed down by the England and Wales High Court, overturning a rejection by the UKIPO of a patent application relating to an Artificial Neural Network (ANN). The judgement addresses the statutory exception to patentability of computer programs "as such", and the extent to which an ANN is caught by this exception. The findings will make for encouraging reading for those hoping for a more favourable approach to patentability of AI subject matter in the UK.

AI application of the week

It's clear to all of us now that AI is going to become part of daily life, so it's interesting to see how the more ubiquitous tech companies are folding it into their products. Take YouTube, for example. Over the last two weeks, it has announced an AI tool that clones pop stars' voices – with their consent, of course – embracing the creative possibilities of AI. On the other hand, as a mature internet presence, it's not all fun and games: YouTube creators will have to disclose their use of generative AI in the near future or risk being banned.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.