• Governments across the world are taking increasingly divergent approaches to regulating AI in the workplace.
  • In March 2024, the European Parliament approved the European Union's Artificial Intelligence Act, the world's first comprehensive legal framework on AI.
  • In the United States, although comprehensive congressional AI legislation seems unlikely, regulatory activity is growing at the state and local levels.
  • Federal agencies remain committed to regulating workplace AI.

On March 21, 2024, the United Nations (UN) adopted a landmark resolution on the promotion of "safe, secure and trustworthy" artificial intelligence (AI) systems. The resolution sets out a "comprehensive vision" for how countries can deploy and use AI tools, and addresses how countries should respond to AI's benefits and challenges. Significantly, the resolution arrives at a time when approaches to regulating AI are increasingly divergent across the globe. The European approach is often described as more rigorous. In contrast, the United States has so far adopted a light-handed and more decentralized approach to regulating AI in employment decisions. In practical terms, this means that U.S. AI regulation is increasingly occurring at the state and local level.

The European Approach: The European Union's Artificial Intelligence Act

On March 13, 2024, the European Parliament approved the European Union's (EU) Artificial Intelligence Act, more commonly known as the EU AI Act. The EU AI Act is the world's first comprehensive legal framework on AI and will apply uniformly across the EU's 27 Member States, creating a harmonized approach to the regulation of AI. The Act is expected to have a significant impact on companies across the globe and creates detailed obligations for those engaged with AI.

The stated aim of the EU AI Act is to improve the functioning of the EU internal market and to promote human-centric and trustworthy AI, while ensuring a high level of protection of fundamental rights against the harmful effects of AI systems and supporting innovation. Almost any organization involved in the AI lifecycle will be affected by the EU AI Act, whether it develops, uses, imports, or distributes AI.

The EU AI Act takes a risk-based approach to AI, creating risk-level classifications for AI applications. Put simply, the greater the potential risk an AI application poses, the greater the compliance obligations. The Act sets out AI practices that pose an "unacceptable risk" and are therefore prohibited. Although the list includes practices that would more traditionally be seen as intrusive (such as untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases), it also includes the use of AI to infer the emotions of individuals in the workplace, which could potentially capture software that employers are already using. Most of the compliance requirements in the EU AI Act attach to "high-risk" uses of AI, with most of the burden falling on those that develop the AI systems, although significant compliance requirements also fall on those that use high-risk AI systems. Of most significance to employers, almost any use of AI systems to select or distinguish among employees in the workplace, whether for recruitment, promotion, or termination, will be considered high-risk. For more detail on which AI systems would be deemed "high-risk" and the obligations for such systems, please refer to our earlier Littler Insight.
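As a purely illustrative sketch of this tiering logic, the following Python snippet triages hypothetical workplace AI tools against the Act's risk tiers. The tier names track the Act as described above, but the keyword-matching rules are a simplified assumption for illustration only, not legal guidance.

```python
# Illustrative only: a simplified triage of workplace AI tools against the
# EU AI Act's risk tiers described above. The tier names track the Act, but
# the keyword-matching rules are a rough assumption, not legal guidance.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"           # e.g., workplace emotion inference
    HIGH = "detailed compliance obligations"       # e.g., recruitment/promotion/termination tools
    LIMITED_OR_MINIMAL = "lighter or no obligations"

def triage(tool_purpose: str) -> RiskTier:
    """Roughly classify a tool by its stated purpose (illustrative heuristic)."""
    purpose = tool_purpose.lower()
    if "emotion" in purpose or "facial image scraping" in purpose:
        return RiskTier.UNACCEPTABLE
    if any(k in purpose for k in ("recruitment", "promotion", "termination", "screening")):
        return RiskTier.HIGH
    return RiskTier.LIMITED_OR_MINIMAL

for tool in ("resume screening for recruitment",
             "emotion inference during interviews",
             "meeting-room scheduling"):
    tier = triage(tool)
    print(f"{tool!r} -> {tier.name} ({tier.value})")
```

In practice, of course, classification turns on the Act's detailed annexes and definitions rather than a keyword match; the point of the sketch is only that obligations scale with the assigned tier.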

The EU AI Act will have an international impact for several reasons. First, as with the EU's General Data Protection Regulation (GDPR), the EU AI Act has the potential to apply extraterritorially to companies whether or not they are based in the EU. The Act expressly applies to companies placing AI systems or models on the market in the EU, regardless of where those companies are based. It also applies to developers and users of AI systems based outside the EU where the output produced by the AI system is "used in the Union." Second, the EU is a large market for many U.S.-based international companies; if these companies plan to sell or use AI within the EU, they must comply with the Act's requirements. Third, the penalties for non-compliance are significant and demonstrate the weight being given to the EU AI Act. The maximum penalties are up to the higher of EUR 35 million (approximately $38 million) or 7% of the company's global annual turnover in the previous financial year. By way of comparison, this is almost double the maximum penalty for a breach of the GDPR, which was considered ground-breaking when it took effect in 2018.
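As a purely illustrative check on the arithmetic, the sketch below compares the two fine caps for a hypothetical company. The AI Act figures come from the Act as described above; the GDPR figures (the higher of EUR 20 million or 4% of global annual turnover) are supplied for comparison from the GDPR itself, and the turnover figure is invented.

```python
# Illustrative arithmetic only. EU AI Act cap: the higher of EUR 35M or 7% of
# prior-year global turnover (per the Act, as described above). The GDPR cap
# (the higher of EUR 20M or 4%) is added for comparison from the GDPR itself;
# the turnover figure is a hypothetical invented for illustration.
def max_fine_eur(turnover_eur: float, flat_cap_eur: float, pct_cap: float) -> float:
    """Return the higher of the flat cap and the percentage-of-turnover cap."""
    return max(flat_cap_eur, pct_cap * turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2B global annual turnover
ai_act_cap = max_fine_eur(turnover, 35_000_000, 0.07)  # EUR 140M
gdpr_cap = max_fine_eur(turnover, 20_000_000, 0.04)    # EUR 80M
print(f"EU AI Act cap: EUR {ai_act_cap:,.0f}")
print(f"GDPR cap:      EUR {gdpr_cap:,.0f}")
print(f"Ratio: {ai_act_cap / gdpr_cap:.2f}x")  # 1.75x -> 'almost double'
```

For any company large enough that the percentage cap governs, the 7%-versus-4% ratio (1.75x) is what drives the "almost double" comparison above.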

The EU AI Act is in the final stages of the legislative process and is expected to receive formal sign-off in the next couple of months. The majority of its provisions will come into effect two years after the Act enters into force, although the prohibition on unacceptable-risk AI systems will take effect just six months after entry into force.

The U.S. Approach

Federal Level

At the federal level, numerous executive orders and congressional bills aimed at broadly regulating or guiding AI use and development have been proposed, including the Algorithmic Accountability Act (discussed below). Overall, the consensus is that there is no clear path to comprehensive AI legislation in the U.S. Congress.

In October 2022, the White House issued a "Blueprint for an AI Bill of Rights" ("AI Blueprint") to guide the design, use, and deployment of AI systems. The AI Blueprint identified five key principles for protection when it comes to AI systems:

  1. Safe and Effective Systems: AI systems should be developed in consultation with experts and include pre-deployment testing (including identifying risks and mitigation efforts) and ongoing monitoring.
  2. Algorithmic Discrimination Protections: Developers of AI systems should include equity assessments as part of the system's design, and strive to utilize diverse and representative data to train models.
  3. Data Privacy: To the greatest extent possible, people should have agency regarding the collection, use, access, transfer, and deletion of their data in AI systems.
  4. Notice and Explanation: People should be notified when an AI system is being utilized and provided an explanation of its function, and who is responsible for the AI system.
  5. Human Alternatives, Consideration, and Fallback: People should be able to opt out of AI systems and have access to a human alternative.

Consistent with the United States' reputation for a "light-handed" approach to regulating AI, the AI Blueprint's recommendations are not currently federally mandated.

One year later, in October 2023, President Biden issued a sweeping executive order to address growing concerns surrounding the use of AI. The executive order, in relevant part, directs federal agencies to develop standards, raise awareness, and increase regulation of AI uses, and also establishes the White House AI Council, comprising various members of President Biden's Cabinet, to coordinate agency efforts.

In addition, federal agencies have become increasingly involved in regulating AI within their jurisdictional mandates. For instance, in 2021, the U.S. Equal Employment Opportunity Commission launched an initiative to ensure that AI tools and other emerging technologies used in employment decision-making comply with the federal anti-discrimination laws the agency enforces. Other agencies are in various stages of developing their own AI frameworks. For example, in 2022, the National Labor Relations Board's general counsel released a memorandum warning employers that using electronic surveillance and automated management technologies presumptively violates employee rights under the National Labor Relations Act. Congress has also introduced the Algorithmic Accountability Act of 2023, which aims to protect individuals subject to AI-based decision-making in areas such as housing, education, and credit. The legislation would require the Federal Trade Commission (FTC) to create regulations giving companies that operate these systems structured guidelines for AI assessment and reporting. The FTC would also be required to publish a publicly accessible annual aggregate report on trends.

State and Local AI Laws

Some states (Illinois and Maryland) and local jurisdictions (Portland, Oregon, and New York City) have enacted laws regulating AI, focusing on its use in the hiring process. Notably, Illinois and Maryland have enacted laws that directly regulate employers' use of AI when interviewing candidates.

State laws are significant for other reasons. Perhaps most notably, Silicon Valley is in California, a state regarded as not just a national but a global leader in innovation and AI technology. After all, the Golden State is home to 35 of the world's top 50 AI firms and holds the majority of the world's AI-related patents, corporate ventures, and expertise on the subject. It is fair to say that California is in the process of setting the "gold standard" for AI innovation and advancement, despite the EU's fast-paced regulatory efforts. California has an executive order in place aimed at shaping ethical and transparent AI development and use, and the state's legislature is considering over two dozen bills on topics such as intellectual property, digital content disclosures, political advertisements, and algorithmic discrimination in employment. California is intent on becoming the world's AI leader.

To date, some local jurisdictions have passed laws regulating AI in the hiring process. New York City enacted one of the first broad AI employment laws, which took effect in 2023 and regulates employers' use of AI tools for hiring and promotion decisions. The law requires employers that use AI to screen candidates applying for a job or promotion for a role in New York City to conduct an annual bias audit, publish a summary of the audit results, inform candidates that AI is being used, and give them the option of requesting an accommodation or an alternative selection process.

Conclusion

Multinational employers and businesses that develop AI would be well-advised to monitor developments across the globe, including the EU AI Act. Given the global nature of AI and the international economy, regulations in this space will likely influence how other countries approach future AI regulation. Indeed, the rapidly evolving legal landscape requires employers and their compliance counsel to remain especially attentive to current and developing legal authority on the use of AI in the workplace.
