This morning, Luca Bertuzzi of Euractiv made the final text of the AI Act available.

As noted by Bertuzzi, the final text was circulated to Member States yesterday evening, 21 January 2024. It is rapidly progressing towards a vote by COREPER on 2 February, after a discussion in the Telecom Working Party. With limited time for review, national delegates will focus on key aspects. Bertuzzi points out that France is seeking to delay the vote or obtain concessions, so far without success in assembling a blocking minority. If it is unsuccessful now, Bertuzzi suggests, France intends to influence the AI law's implementation, particularly through secondary legislation, reflecting the file's status as a national priority.

We have gone through the final text and set out our initial thoughts below, identifying some of the big changes and the points that will be most important for our clients.

Prohibited AI Systems

Based on the final text of the AI Act, the following AI practices will be prohibited:

  • AI systems using subliminal techniques or manipulative or deceptive methods to distort behaviour and impair informed decision-making, leading to significant harm.
  • AI systems exploiting vulnerabilities due to age, disability, or social or economic situations, causing significant harm.
  • Biometric categorisation systems inferring race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation, except for lawful labelling or filtering in law enforcement.
  • AI systems evaluating or classifying individuals or groups based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment in unrelated contexts, or treatment that is unjustified or disproportionate to their social behaviour.
  • 'Real-time' remote biometric identification in public spaces for law enforcement, except for specific necessary objectives like searching for victims or missing persons, preventing threats to safety, or identifying suspects in serious crimes.
  • AI systems assessing the risk of individuals committing criminal offences based solely on profiling or personality traits, except when supporting human assessments based on objective, verifiable facts linked to criminal activity.
  • AI systems creating facial recognition databases through untargeted scraping from the internet or CCTV footage.
  • AI systems inferring emotions in workplaces or educational institutions, except for medical or safety reasons.

High Risk AI Systems

A substantial change in the final text of the AI Act is that AI systems will not be considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. This will be the case if one or more of the following criteria are fulfilled:

  • the AI system is intended to perform a narrow procedural task;
  • the AI system is intended to improve the result of a previously completed human activity;
  • the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or
  • the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

However, an AI system shall always be considered high-risk if the AI system performs profiling of natural persons. Profiling means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements.
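
For organisations building internal triage tooling, the derogation logic above can be captured in a few lines of code. The sketch below is purely illustrative: the field names are our own simplified labels for the four derogation criteria and the profiling override, not terms taken from the Act, and any real assessment will require legal judgement rather than a boolean checklist.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIIAssessment:
    """Simplified, illustrative record of the Annex III derogation criteria."""
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_replacing_human_review: bool
    preparatory_task_only: bool
    performs_profiling_of_natural_persons: bool

def treated_as_high_risk(a: AnnexIIIAssessment) -> bool:
    """Return True if an Annex III system should still be treated as high-risk."""
    if a.performs_profiling_of_natural_persons:
        return True  # profiling of natural persons always keeps the system high-risk
    derogation_applies = any([
        a.narrow_procedural_task,
        a.improves_completed_human_activity,
        a.detects_patterns_without_replacing_human_review,
        a.preparatory_task_only,
    ])
    return not derogation_applies

# Example: a system that only performs a preparatory task but profiles
# individuals remains high-risk.
print(treated_as_high_risk(AnnexIIIAssessment(False, False, False, True, True)))  # True
```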

A provider of an AI system who considers that an AI system referred to in Annex III (listing high-risk systems) is not high-risk will have to document its assessment before that system is placed on the market or put into service. This means that any AI system being put on the EU market will probably need to undergo a documented AI impact assessment to ascertain whether it is high-risk and, consequently, which obligations under the Act apply.

Under the registration requirement in the Act, providers who have determined that an AI system is not high-risk must, before placing it on the market or putting it into service, register themselves and the AI system in the EU database specified in Article 60. Additionally, providers must furnish the documentation of their risk assessment if national authorities request it.

In evaluating whether an AI system presents a risk to health and safety or fundamental rights equivalent to or greater than that posed by the high-risk AI systems listed in Annex III, the Commission will consider several criteria, including:

  • the AI system's intended purpose;
  • the extent of its use;
  • the nature and volume of data it processes, especially sensitive personal data;
  • its level of autonomy and the feasibility of human intervention in its decisions;
  • past instances of harm or fundamental rights impact, supported by reports or allegations;
  • the potential severity and scope of such harm or impact, particularly its intensity and its potential to affect many people or specific vulnerable groups;
  • any power imbalance or vulnerability of those impacted by the AI system, considering factors such as status, knowledge or socio-economic circumstances;
  • the reversibility of the AI system's outcomes, with a focus on outcomes impacting health, safety or fundamental rights;
  • the potential benefits of the AI system to individuals, groups or society, including safety improvements; and
  • the extent to which existing EU legislation offers effective redress measures or risk mitigation for the risks posed by AI systems.

High-risk AI systems that continue to learn after being placed on the market or put into service must be developed in such a way as to eliminate or reduce, as far as possible, the risk of possibly biased outputs influencing inputs for future operations ('feedback loops'), and to ensure that any such feedback loops are addressed with appropriate mitigation measures.

Interestingly, the final text of the AI Act now provides that the Act applies not only to providers placing AI systems on the market, but also to providers of general-purpose AI models. It applies to deployers of AI systems who have their place of establishment in, or are located in, the EU. It also applies to providers and deployers of AI systems who are not based in the EU, but where the output produced by the system is used in the EU.

General Purpose AI

General purpose AI models (GPAI models) are a new addition to the final text of the AI Act, with an entire new chapter of the Act dedicated to them. A GPAI model is defined as an AI model, including where trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

Many companies have been concerned that the AI Act might apply to models still in development or being used for research. However, the AI Act does not cover AI models that are used for research, development and prototyping activities before they are released on the market.

General-purpose AI systems are AI systems based on a GPAI model which have the capability to serve a variety of purposes, both for direct use and for integration into other AI systems.

A substantial change is that the AI Act will not apply to AI systems and models, including their output, that are specifically developed and put into service for the sole purpose of scientific research and development.

Importantly for organisations using open-source AI systems, the AI Act only applies to open-source AI systems if they are prohibited or high-risk AI systems.

The Act addresses the classification and obligations of providers of GPAI models, particularly those with systemic risk. A GPAI model is classified as having systemic risk if it has high impact capabilities or is identified as such by the European Commission, particularly if its training involves a significant amount of computational power (measured in floating point operations).

A GPAI model is considered to have high impact capabilities if the total computational power used for its training, measured in floating point operations (FLOPs), exceeds 10^25.
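
To put that threshold in context, the sketch below estimates training compute using the common "6 × parameters × training tokens" heuristic from the scaling-law literature. The heuristic and the model figures are our own assumptions for illustration only; the Act itself specifies only the 10^25 FLOP figure.

```python
# Back-of-the-envelope check against the 10^25 FLOP systemic-risk threshold.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate (~6 FLOPs per parameter per training token)."""
    return 6 * parameters * training_tokens

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(100e9, 10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # 6.00e+24
print("Presumed high impact capabilities:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```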

Providers must notify the European Commission if their model meets these criteria; the notification may include arguments that the model, despite meeting the criteria, does not present systemic risks. The European Commission can designate a model as having systemic risk based on specific criteria and can reassess this designation upon request from the provider.

Providers of GPAI models are subject to certain obligations, such as maintaining technical documentation and providing information to AI system providers who use these models. These obligations, except in specific cases, do not apply to AI models made available under a free and open licence. Providers must also cooperate with the European Commission and national authorities, make publicly available summaries of the content used to train their models, and adhere to EU copyright law.

For models with systemic risk, providers must perform standardised model evaluations, assess and mitigate potential systemic risks, track and report serious incidents, and ensure adequate cybersecurity protection. Compliance can be demonstrated through codes of practice or European harmonised standards, with alternative compliance methods requiring European Commission approval. Confidentiality of information and documentation is mandated.

Furthermore, providers outside the EU must appoint an authorised representative in the European Union for compliance with these regulations. This representative is responsible for various tasks, including verifying technical documentation and cooperating with the AI Office and national authorities. The representative must terminate their mandate if the provider is non-compliant and inform the AI Office. This obligation does not apply to models available under a free and open-source licence unless they present systemic risks.

Deep Fakes

Something which will be especially important in a year of elections is the concept of deep fakes, defined as AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated. This obligation does not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations are limited to disclosing the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

Watermarking is covered in the final text of the AI Act. Providers of AI systems, including those that create synthetic audio, image, video or text content, are required to mark the outputs of their AI systems in a machine-readable format to clearly indicate that the content is artificially generated or manipulated. Providers must ensure that their technical solutions for this marking are effective, interoperable, robust and reliable, as far as technically feasible. This requirement takes into account the various types of content, implementation costs and the current state of the art, potentially reflected in relevant technical standards.

However, this obligation does not apply in certain cases. It is exempted when the AI systems are used for standard editing assistance or when they do not significantly alter the input data or its meaning as provided by the deployer. Additionally, the requirement is waived in situations where the law authorises the use of these systems for detecting, preventing, investigating, and prosecuting criminal offences.
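
The Act does not prescribe a particular technical solution for this marking; in practice, providers are likely to look to emerging approaches such as content provenance credentials or embedded watermarks. As a purely illustrative sketch, the snippet below generates a minimal machine-readable provenance record for a piece of generated content; the field names are our own assumptions, not requirements of the Act.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, generator: str, model_version: str) -> str:
    """Build a minimal machine-readable record flagging content as AI-generated.

    Illustrative only: real deployments would normally rely on an established
    standard (e.g. embedded watermarks or content credentials) rather than a
    bare JSON sidecar file.
    """
    record = {
        "ai_generated": True,
        "generator": generator,
        "model_version": model_version,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    synthetic_image = b"<generated image bytes>"  # placeholder for real output
    print(provenance_record(synthetic_image, "ExampleImageGen", "v1.2"))
```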

Human Oversight

When it comes to human oversight, the oversight measures must be commensurate with the risks, level of autonomy and context of use of the AI system. To the extent that deployers exercise control over the high-risk AI system, they must ensure that the natural persons assigned to ensure human oversight of high-risk AI systems have the necessary competence, training and authority, as well as the necessary support.

Employer Obligations

Importantly for organisations planning to deploy AI in the workplace, before putting into service or using a high-risk AI system at the workplace, deployers who are employers must inform workers' representatives and the affected workers that they will be subject to the system. This information must be provided, where applicable, in accordance with the rules and procedures laid down in European Union and national law and practice on the information of workers and their representatives.

Codes of Practice

The AI Office plays a key role in developing codes of practice to support the proper application of the AI Act. These codes should cover obligations in specific articles, focusing on issues such as keeping information up to date, identifying systemic risks at the European Union level, and managing these risks proportionately and effectively.

The AI Office can involve AI model providers, national authorities, and other stakeholders like civil society, industry, academia, and independent experts in creating these codes. These codes should have clear objectives and commitments, including key performance indicators, and consider the needs and interests of all relevant parties, including those affected by AI at the Union level.

All AI model providers are invited to participate in the codes of practice, with some limitations for providers of non-systemic risk models unless they express interest in full participation. Participants are expected to report regularly on their implementation efforts and outcomes, considering the varying capacities of different participants.

The AI Office and the AI Board will regularly monitor and evaluate the effectiveness of these codes in achieving their objectives and contributing to the Regulation's proper application. They will publish assessments on the adequacy of these codes, and the Commission may grant general validity to approved codes within the Union through implementing acts.

The AI Office is also tasked with encouraging the review and adaptation of codes, especially in response to emerging standards, and will assist in assessing available standards. If a Code of Practice is not finalised or deemed inadequate, the Commission may provide common rules for implementing obligations in the Regulation, including the issues outlined in the Article.

Testing AI Systems

The Act outlines the conditions and procedures for testing high-risk AI systems in real-world conditions outside AI regulatory sandboxes. Providers or prospective providers of high-risk AI systems can conduct these tests following a comprehensive plan approved by the market surveillance authority. The testing should not exceed six months unless extended for valid reasons, and should adhere to ethical and legal guidelines, including the protection of vulnerable groups and informed consent from test subjects.

The testing plan must be approved by the relevant market surveillance authority, and certain conditions must be met, including registration in the EU database, establishment of the provider in the Union or appointment of a legal representative within the Union, adherence to data transfer safeguards, and ensuring the reversibility of the AI system's outcomes. Providers must also inform and instruct any cooperating deployers about the test.

Participants can withdraw from the testing at any time, and providers are required to report serious incidents to the national market surveillance authority. They must also notify authorities of any suspension or termination of the test and its final outcomes. Providers are liable for any damages caused during the testing.

Informed consent is a crucial aspect of testing, ensuring participants are fully aware of the testing's nature, objectives, duration, their rights, and the means to reverse or disregard the AI system's decisions. This consent must be documented, and a copy provided to the subject or their legal representative.

SMEs and start-ups are given priority access to regulatory sandboxes.

Third Party Agreements

In a move that reflects GDPR obligations pertaining to data processing, providers of high-risk AI systems must have a written agreement with third parties supplying AI systems, tools, services, components, or processes used in high-risk AI systems. This agreement must clearly define the information, capabilities, technical access, and support needed, based on current industry standards, to ensure the high-risk AI system provider can fully meet the obligations of this regulation. However, this requirement does not extend to third parties who make tools, services, processes, or AI components (other than general-purpose AI models) available to the public under a free and open licence.

The AI Office has the authority to create and suggest optional standard contract terms for agreements between providers of high-risk AI systems and third parties supplying tools, services, components, or processes used in these systems. In formulating these standard terms, the AI Office will consider any sector-specific or business-specific contractual needs. Once developed, these model contract terms will be made publicly accessible, free of charge, in a user-friendly electronic format.

Technical Documentation

When it comes to technical documentation, there is some respite for SMEs and start-ups, many of which have been concerned about the costs of compliance with the AI Act. SMEs, including start-ups, are allowed to submit the technical documentation required in Annex IV in a simplified manner. To facilitate this, the European Commission will create a simplified form specifically designed for small and micro enterprises. If an SME or start-up chooses to present the information in this simpler format, it must use this designated form. Notified bodies will accept this simplified form for the purposes of assessing conformity.

For providers of high-risk AI systems, maintaining documentation is crucial. Providers are required to keep the following documents available for national authorities for 10 years after the AI system has been placed on the market or put into service: (a) the technical documentation; (b) details about the quality management system; (c) records of any changes approved by notified bodies, if relevant; (d) any decisions or documents issued by notified bodies, where applicable; and (e) the EU declaration of conformity.

AI Definition

We now have a final definition of an AI system, which is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as content, predictions, recommendations, or decisions that can influence physical or virtual environments.

Emotion Recognition

There is a very broad definition of emotion recognition systems, which could be problematic, particularly for any organisations relying on biometric data for sentiment analysis. An "emotion recognition system" is defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.

Facial Recognition

There are many new additions to the AI Act in relation to real-time facial recognition used by law enforcement, which will only be authorised in limited circumstances, likely on the basis of a warrant, and subject to other stipulations.

Data Governance

In terms of data governance, organisations must now also consider data collection processes and the origin of the data and, in the case of personal data, the original purpose of collection.

Standards

When it comes to standardisation, the European Commission is tasked with promptly issuing standardisation requests that encompass all the requirements of a specific section of the AI Act. These requests will also include guidelines on reporting and documentation processes aimed at enhancing the resource efficiency of high-risk AI systems throughout their lifecycle. This includes reducing energy and other resource consumption, as well as focusing on the energy-efficient development of GPAI models.

When preparing these standardisation requests, the European Commission is required to consult with the AI Board, the Advisory Forum, and other relevant stakeholders. The European Commission must ensure that the requested standards are consistent with existing and future sector-specific standards for products under current EU safety legislation. The objective is to make these standards clear and effective in ensuring that AI systems or models released or used in the EU comply with the regulation's relevant requirements.

Fines

Non-compliance with the prohibited AI systems obligations is subject to administrative fines of up to €35,000,000 or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

Non-compliance with the requirements of high-risk systems will be subject to administrative fines of up to €15,000,000 or, if the offender is a company, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request will be subject to administrative fines of up to €7,500,000 or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher. In the case of SMEs, including start-ups, each fine referred to in the relevant articles of the AI Act will be capped at the lower of the stated percentage and the stated amount.
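
As a simple illustration of how the "whichever is higher" mechanism (and the "whichever is lower" cap for SMEs) works in practice, consider the sketch below; the turnover figures are hypothetical.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float,
             is_sme: bool = False) -> float:
    """Maximum administrative fine: higher of the two limits, or lower of the two for SMEs."""
    turnover_cap = turnover_eur * turnover_pct
    return min(fixed_cap_eur, turnover_cap) if is_sme else max(fixed_cap_eur, turnover_cap)

# Breach of a prohibition by a company with EUR 2bn worldwide annual turnover:
print(fine_cap(2_000_000_000, 35_000_000, 0.07))             # 140000000.0 (7% of turnover > EUR 35m)
# The same breach by an SME with EUR 50m worldwide annual turnover:
print(fine_cap(50_000_000, 35_000_000, 0.07, is_sme=True))   # 3500000.0 (7% of turnover < EUR 35m)
```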

Timelines

Once the AI Act enters into force, the prohibitions will apply from six months after that date, the GPAI obligations from 12 months, and the high-risk obligations from 24 months for the high-risk systems listed in Annex III (e.g. AI used in education, law enforcement, etc.) and from 36 months where the AI system is a safety component of regulated products.

Codes of practice must be ready at the latest nine months after the AI Act enters into force.
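
To help with planning, the sketch below maps these milestones onto concrete dates from a hypothetical entry-into-force date of 1 June 2024, chosen purely for illustration (the actual date will depend on publication in the Official Journal).

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months to a date (day-of-month clamping kept crude for this sketch)."""
    year = d.year + (d.month - 1 + months) // 12
    month = (d.month - 1 + months) % 12 + 1
    return date(year, month, min(d.day, 28))

entry_into_force = date(2024, 6, 1)  # hypothetical, for illustration only

milestones = {
    "Prohibitions apply": 6,
    "Codes of practice ready (at the latest)": 9,
    "GPAI obligations apply": 12,
    "High-risk obligations apply (Annex III systems)": 24,
    "High-risk obligations apply (safety components of regulated products)": 36,
}

for label, months in milestones.items():
    print(f"{add_months(entry_into_force, months).isoformat()}  {label}")
```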

Next Steps for your Organisation

Organisations should adopt a proactive approach to align their AI systems with these ethical and legal standards.

To adapt to these changes, organisations should consider if they need to:

  1. Conduct Rigorous Evaluations: Assess AI systems to ensure they comply with prohibitions on manipulative or exploitative practices and determine their risk classification under the high-risk criteria.
  2. Develop Robust Documentation: Establish thorough documentation and risk assessment processes, especially for general-purpose AI models, to demonstrate compliance and manage potential risks.
  3. Implement Disclosure Measures: Be aware of obligations regarding AI-generated content, such as deep fakes, and adopt measures, like watermarking, to ensure transparency and build trust.
  4. Invest in Human Oversight: Enhance training and expertise for effective oversight of AI deployments, which is crucial for both compliance and building reliable AI offerings.
  5. Ensure Transparent Communication: Communicate transparently with employees about AI deployment in the workplace, adhering to European Union and national laws to foster trust and mitigate the risk of legal disputes.
  6. Manage Data Governance: Pay meticulous attention to data collection processes and data origin, focusing on maintaining data integrity and trust in AI systems.
  7. Prepare for Compliance Costs: Be aware of the potential financial risks due to non-compliance, including substantial fines, and develop comprehensive compliance strategies.
  8. Stay Informed and Adapt: Keep abreast of evolving standards and be prepared to adapt AI strategies and operations in response to regulatory changes.
  9. Strategic Planning for Timelines: Begin compliance efforts in advance, considering the different timelines for prohibited systems, general-purpose AI models, and high-risk AI systems.

In summary, the AI Act requires organisations to carefully evaluate and adapt their AI strategies to meet new regulatory demands. By taking these steps, organisations can ensure compliance, maintain ethical standards, and position their AI offerings for success in a rapidly evolving legal landscape.

Conclusion

In conclusion, the AI Act marks a significant shift in the regulatory environment for organisations involved in AI and IP law, particularly within the Irish and EU contexts. The AI Act necessitates a proactive approach from organisations to ensure ethical compliance, especially in light of prohibitions on certain AI practices like manipulative methods and exploiting vulnerabilities. The careful assessment of high-risk AI systems and general-purpose AI models is essential, not only for compliance but also for shaping product development strategies and managing systemic risks.

The advent of sophisticated deep fakes and the requirement for transparency in AI-generated content bring new challenges, particularly for the media and entertainment sectors. Human oversight becomes increasingly crucial in ensuring accountability and reliability of high-risk AI systems, requiring a significant investment in training and expertise. Additionally, organisations need to maintain transparent communication about AI deployment in workplaces, adhering to both European Union and national laws to build trust and avert legal disputes.

Simplified technical documentation for SMEs and start-ups does not lessen the importance of accurate compliance, where advisory support can be highly beneficial. Robust data governance practices are imperative in maintaining data integrity and trust in AI systems. Organisations must also be vigilant about the substantial fines for non-compliance and stay updated on evolving standards to develop comprehensive compliance strategies.

Preparing for the AI Act's timelines is critical for strategic planning, with early efforts in compliance recommended, especially regarding prohibited systems and general-purpose AI models. Overall, the AI Act presents multifaceted challenges and opportunities, demanding careful navigation and strategic positioning for organisations to thrive in this new, AI-driven regulatory landscape.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.