The AI Act aims to guarantee that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. By taking a risk-based approach, it seeks to strike a balance that fosters consumer trust as well as investment and innovation in the field of AI within Europe.

The AI Act has an extraterritorial reach: it applies to providers of AI systems regardless of their location, to users of AI systems located within the EU, and to providers and users outside the EU where the output produced by the system is used in the EU.

In summary, the AI Act will, if enacted:

  • Prohibit certain AI practices, such as cognitive behavioral manipulation, emotion recognition in the workplace, and social scoring by governments or companies;
  • Impose strict requirements on high-risk AI systems, including risk-mitigation systems, high-quality data sets, activity logs, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity;
  • Impose transparency obligations on other AI systems. For instance, companies employing AI systems such as chatbots or "deep fakes" should inform users that they are interacting with a machine; and
  • Introduce specific rules for general-purpose AI models and foundation models to ensure transparency. General-purpose AI models will have to comply with requirements on technical documentation, EU copyright law, and disclosure of the content used for training. Powerful models that could pose systemic risks will have to comply with additional obligations, including managing risks, monitoring serious incidents, performing model evaluations, and conducting adversarial testing.

The AI Act would establish a governance structure, with an AI Office within the European Commission tasked with overseeing general-purpose AI models, contributing to the development of standards and testing practices, and enforcing the common rules across EU Member States. National competent authorities would oversee AI systems and convene in the AI Board, which would act as a coordination platform and advisory body to the European Commission.

Fines for violations of the AI Act could reach up to €35 million or 7% of the company's global annual turnover, whichever is higher.

The political agreement will now have to be formally approved by EU legislators. The AI Act does not require national implementing measures and will apply directly after a two-year transition period. However, the prohibitions on certain AI systems will apply after only six months, and the rules on general-purpose AI after 12 months. A number of "harmonized standards" translating the legal requirements of the AI Act into specific technical requirements are also likely to be developed ahead of the date of application.

Following the formal adoption of the AI Act, the European Commission will launch an AI Pact aimed at bringing together AI developers from Europe and across the globe who voluntarily commit to key obligations of the AI Act before the legal deadlines.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.