EU legislators reached agreement on the AI Act on 9 December last year, paving the way for the EU to become the first region in the world to regulate the technology. It is expected to have a 'Brussels effect', whereby other countries around the world will be encouraged to follow suit.

The AI Act has yet to be formally adopted: a plenary vote of the European Parliament is expected in April, after which the text will be published in the EU's Official Journal. Once that is done, the AI Act will be introduced in phases over the next two to three years. A variety of 'actors' fall within the scope of the Act: Providers, Deployers, Authorised Representatives, Importers, Distributors, Operators and the 'Affected Person'.

The AI Act has extra-territorial effect, meaning it applies to developers based outside the EU if the output of their AI systems is used within the EU.

There are some exclusions to the AI Act – for example, where AI is used for defence or national security purposes, or where AI models are developed for scientific research.

RISK-BASED APPROACH TO CLASSIFICATION

There are different sets of risk-based classification rules for AI systems:

  • Prohibited AI systems – systems considered to violate fundamental rights in the EU, such as social scoring, exploitation of vulnerabilities and real-time facial recognition in publicly accessible spaces.
  • High-risk AI systems – AI systems that are a product, or a safety component of a product, covered by EU sectoral product legislation and required to undergo a third-party conformity assessment; or AI systems that pose a significant risk of harm to the health, safety or fundamental rights of natural persons, which must undergo a fundamental rights impact assessment.
  • General purpose AI (GPAI) models and generative AI – subject to varying degrees of compliance obligations depending on whether the GPAI model poses a 'systemic risk' in the EU.
  • Low-risk AI systems – AI systems subject to specific transparency requirements, such as chatbots or deepfakes.
  • Unregulated AI systems – such as video games and spam filters.

Providers of high-risk AI systems will be subject to obligations including conformity assessment, registration, risk management, human oversight, governance, technical documentation and record-keeping.

Generative AI is the category of AI systems specifically intended to generate content such as complex text, images, audio or video. Generative AI is subject to a number of transparency obligations.

General purpose AI models will follow a two-tiered approach, whereby more onerous obligations apply to models posing systemic risk than to those that do not.

ENFORCEMENT

  • Penalties could include fines of up to 15 million euros or 3% of an entity's total worldwide annual turnover (whichever is higher) for failure to comply with the obligations for high-risk AI systems.
  • Some enforcement will be carried out at the EU level, and some by national regulators.
  • The European Commission is establishing an AI Office, which is still in its early stages.
  • Whilst Spain has already established its national AI regulatory body, the remaining EU member states have yet to do so.

PRACTICAL NEXT STEPS

  • Assess your organisation's needs and potential use of AI.
  • Consider where AI sits within your governance structure.
  • Put in place a compliance programme, including a risk assessment of your AI tools and systems.

This article is based on the first in a series of webinars on AI that Fieldfisher is hosting throughout the year.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.