Today, the European Commission ("Commission") released a set of documents on Europe's digital future. Alongside the Commission's communications on "Shaping Europe's digital future" and "A European strategy for data," the package also includes the much-anticipated "White Paper on Artificial Intelligence ("AI") – A European approach to excellence and trust"1 ("Paper").
So what is the Paper about—or, rather, not about? First, and as anticipated, the Commission dropped—at least for now—the idea of a temporary ban on the use of facial recognition technologies in public spaces. This follows the concerns expressed by various stakeholders about such a prohibitive approach immediately after the leak, in January this year, of a draft version2 of the Paper that mentioned the idea of a ban.
The Commission proposes to limit its legislative action(s) to high-risk AI applications (while at the same time confirming that non-high-risk AI applications will remain entirely subject to existing European Union rules). These high-risk applications would be identified on the basis of two criteria: (1) the sector where the AI application would be deployed and (2) its uses (with some exceptions for AI applications that are considered high-risk per se, e.g., those used for remote biometric identification).
For high-risk AI applications, specific obligations relating to, inter alia, training data, data and recordkeeping, robustness and accuracy, and human oversight would be introduced in the European Union ("EU"). The Commission aims to distribute those obligations in the way that best reflects the position of the relevant actors in the AI system lifecycle. (For example, while developers of AI may be best placed to address risks in the development phase, they may have little or no ability to control risks during the use phase—aspects that the Commission is willing to factor in.) In addition, the Paper suggests that the upcoming legislative proposal will have some extraterritorial reach, as it will apply to all AI-enabled products or services in the EU, regardless of whether the operators providing them are established in the EU. The Commission also considers implementing mandatory conformity assessment mechanisms, such as testing, inspection or certification, for all economic operators addressed by the requirements, irrespective of their place of establishment.
After the GDPR and some aspects of the EU Cybersecurity Act, today's development paves the way for further attempts at (data) imperialism by the EU.
The proposals set out in the Paper are subject to public consultation until May 19, 2020; input can be provided on the Commission's website. The legislative proposal is expected to be released in the last quarter of 2020.3
Visit us at mayerbrown.com
Mayer Brown is a global legal services provider comprising legal practices that are separate entities (the "Mayer Brown Practices"). The Mayer Brown Practices are: Mayer Brown LLP and Mayer Brown Europe – Brussels LLP, both limited liability partnerships established in Illinois USA; Mayer Brown International LLP, a limited liability partnership incorporated in England and Wales (authorized and regulated by the Solicitors Regulation Authority and registered in England and Wales number OC 303359); Mayer Brown, a SELAS established in France; Mayer Brown JSM, a Hong Kong partnership and its associated entities in Asia; and Tauil & Chequer Advogados, a Brazilian law partnership with which Mayer Brown is associated. "Mayer Brown" and the Mayer Brown logo are the trademarks of the Mayer Brown Practices in their respective jurisdictions.
© Copyright 2020. The Mayer Brown Practices. All rights reserved.
This Mayer Brown article provides information and comments on legal issues and developments of interest. The foregoing is not a comprehensive treatment of the subject matter covered and is not intended to provide legal advice. Readers should seek specific legal advice before taking any action with respect to the matters discussed herein.