Artificial Intelligence (AI) is big news at the moment. It's difficult to avoid talk of AI chatbots, AI deepfakes, and even AI's existential threat to humanity. It's no surprise then that governments worldwide are starting to tackle the social, ethical, and economic questions that AI raises.

The European Union (EU) is working on one of the world's first legal frameworks for AI, which aims to regulate the development and use of AI. In April 2021, the European Commission put forward the first draft of the AI Act, under which AI systems are to be analysed and classified according to the risk they ultimately pose to end-users. Throughout an extensive discussion and revision process over the last two years, this risk-based classification system has remained at the heart of the proposed regulation, and the final form of the law is expected to be agreed by the end of this year.

What is the EU's AI Act?

The EU's AI Act aims to "strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use"1.

At the core of the EU's AI Act is a classification system, which categorises AI technologies based on the level of risk the technology could pose to the health and safety or fundamental rights of an end-user. The classification system includes four risk tiers: minimal, limited, high, and unacceptable.


At one end of the risk spectrum, the EU is firm in its position that systems deemed to pose an unacceptable risk are to be prohibited. Systems in this category include those involving cognitive behavioural manipulation of people or vulnerable groups, social scoring, and real-time and remote biometric identification systems in public places.

High-risk AI systems are to be permitted, but developers must adhere to regulations requiring documented data quality, rigorous testing, and an accountability framework that includes human oversight. AI deemed high risk includes autonomous vehicles, medical devices, and critical infrastructure machinery.

At the other end of the risk spectrum, AI systems with minimal or limited risk – such as video games and spam filters – are to be allowed with few requirements other than transparency obligations and observance of a code of conduct.

The proposed legislation also outlines regulations for general purpose AI that can be used for different purposes with varying degrees of risk. These technologies include large language models (LLMs) and generative AI systems such as OpenAI's ChatGPT.

In June 2023, EU legislators agreed on changes to the draft AI Act, which now includes a ban on the use of AI in biometric surveillance and requires users and developers of generative systems such as Midjourney and ChatGPT to disclose when content has been generated by AI. The changes also require generative AI companies to provide summaries of the copyrighted material scraped and used in the training of each system.

In a recent open letter signed by more than 150 executives, European companies including Renault, Siemens, Airbus, and Heineken warned of the impact the draft legislation could have on business2. They argue that the proposed laws would heavily regulate generative AI and "could lead to highly innovative companies moving their activities abroad". The companies are calling for the formation of a dedicated regulatory body of experts at the EU level that can monitor how the new laws are applied and take account of new technological advances.

Of course, the final details of the EU's AI Act are not yet known, and requirements may change during the coming closed-door negotiations, known as trilogues, needed to finalise the act.

How is this likely to affect innovation outside the EU?

Like the General Data Protection Regulation (GDPR), the EU's AI Act is to have expansive territorial jurisdiction. The AI Act will govern not just deployers of AI in the EU, but also providers who place AI systems on the market or into service in the EU. So, despite the concerns of the signatories of the recent open letter, companies based outside the EU will still be required to comply with the requirements of the AI Act if they intend to make their products available within the Union.

The extensive lobbying by US tech giants in relation to the EU's AI Act is a clear indicator of the important role this upcoming regulation will play in the global development of AI regulation3. A post-Brexit UK will also be heavily affected, as the EU's AI Act is likely to be crucial to the UK AI industry's aspirations as an exporter to the EU or a provider of 'AI-as-a-Service'.

The EU is clearly hoping once again to leverage the "Brussels Effect" and to see the AI Act gradually adopted as a global benchmark; it has recently been reported that the EU is lobbying Asian countries to follow its lead on AI regulation4.

Against this noisy EU backdrop, other jurisdictions are advancing with their own regulatory AI proposals, including the UK, the US, and China.

Over the last few years, the UK government has consistently voiced its intention to promote data-related innovation and to curb the administrative burdens on business arising under the existing UK data protection regime, which derives from the EU GDPR.

This rhetoric is reflected in the government's wider digital policy. For example, the UK's National Data Strategy, published in 2020, aims in part to better enable UK businesses to use data to innovate – by enabling greater access to data and by addressing barriers to data sharing. The sentiment is also reflected in the plans the UK government recently set out in its AI Regulation white paper for the future regulation of AI use in the UK. The UK government describes its plans as a new "pro-innovation framework" and says they will "bring clarity and coherence to the AI regulatory landscape".

In July 2023, the Cyberspace Administration of China released its Interim Measures for the Management of Generative Artificial Intelligence Services, setting out rules to regulate those who provide generative AI capabilities to the public in China. Many sections focus on traditional AI safety measures, addressing matters such as IP, transparency, and discrimination. Other sections, however, are particular to China, including a requirement for the provision and use of generative AI to "uphold the core socialist values".

The US released a Blueprint for an AI Bill of Rights in late 2022, which provides a framework for how government and technology companies can work together to ensure automated systems "protect civil rights, civil liberties, and privacy". The US has taken a comparatively hands-off approach to AI regulation so far; it remains to be seen whether the US Congress and federal agencies will pass binding legislation or issue binding regulations based on the guidance set out in the Blueprint.

How will the EU's AI Act and other upcoming regulation affect patenting?

While navigating the law regarding patenting AI innovations can be tricky, securing patent rights for AI innovations is certainly not impossible, and AI patenting activity has been rising steadily around the world. Patent applications in the field of AI must be drafted carefully to ensure that sufficient details of key aspects of the AI innovation are included.

Rather than patenting, many innovators in the AI space currently rely on trade secret provisions to protect their AI innovations, because AI innovations in commercial use can often remain undetectable by others. This strategy carries risks of its own – in evidencing those innovations, in employee mobility between competing companies, and in the impact of others' patents on freedom to operate – and it is likely that categorisation of an AI innovation as limited or high risk under the EU's AI Act will soon require some level of disclosure about how the innovation works. Trade secret protection may – in some cases – no longer even be a possibility.

A keystone of patent legislation worldwide is the notion that an innovation must be new – that is, not known to the public before the filing date of a patent application. As soon as key details of an innovation have been disclosed to the world, therefore, starting the patenting process is no longer an option. Innovators in the AI space must now cope with the possibility that they may be required to disclose key aspects of their AI creations.

Getting the right advice early on will mean that AI innovators can comply with any disclosure obligations their AI innovations might face under the EU's AI Act and – where commercially sensible – also protect their AI products with patents.

As AI innovation advances at pace, patents and trade secrets will continue to play an important role in protecting commercial advantages. For innovators in this space, wrestling with the implications of the new and evolving regulatory landscape for AI simply adds another factor to consider in any overarching AI IP strategy.

Footnotes

1. European Commission – Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence (https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682)

2. Open letter to the representatives of the European Commission, the European Council and the European Parliament (https://drive.google.com/file/d/1wrtxfvcD9FwfNfWGDL37Q6Nd8wBKXCkn/view)

3. TechCrunch – Report details how Big Tech is leaning on EU not to regulate general purpose AIs (https://techcrunch.com/2023/02/23/eu-ai-act-lobbying-report/)

4. Reuters – Exclusive: EU's AI lobbying blitz gets lukewarm response in Asia (https://www.reuters.com/technology/eus-ai-lobbying-blitz-gets-lukewarm-response-asia-officials-2023-07-17/)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.