Federal and state regulators are increasingly focusing their attention on artificial intelligence ("AI") tools, including the use of automated decision-making tools in employment. This White Paper explores current uses of AI in the workplace, focusing on the use of automated decision-making tools by employers during the recruiting and hiring process; examines the legal and regulatory risks associated with increased use of AI in employment; discusses employment policy considerations associated with employee use of AI-powered chatbots; and offers tangible solutions for employers seeking to reduce litigation risk, comply with existing laws, and stay ahead of emerging legislation.

The growing use of AI to make employment decisions has drawn the attention of lawmakers and regulators, who are concerned about privacy, the possibility of algorithmic bias, and the impacts of automation. On October 28, 2021, the Equal Employment Opportunity Commission ("EEOC") announced the launch of a new initiative to "ensure that [AI] and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces."1 In May 2022, the Department of Justice ("DOJ") Civil Rights Division and the EEOC each issued a technical assistance document regarding AI and the potential for disability discrimination in the employment context.2

In April 2023, officials from the EEOC and other agencies that enforce employment, fair lending, and fair housing laws issued a joint statement pledging "to use [their] enforcement authorities to ensure AI does not become a high-tech pathway to discrimination." On May 18, 2023, the EEOC released a technical assistance document that explains the EEOC's views about the application of Title VII of the Civil Rights Act ("Title VII") to an employer's use of automated systems, including those that incorporate AI.3 Several bills proposing comprehensive AI-related legislation, including the Algorithmic Accountability Act, have been introduced in Congress.4 And some lawmakers have called for the creation of an expert federal agency focused on regulating the development and use of AI.5

Federal officials are not alone in voicing concerns regarding AI. On July 5, 2023, New York City began enforcing a law that governs employers' use of AI to make hiring and promotion decisions. In California, a new agency, the California Privacy Protection Agency, is preparing rules to address the use and potential abuse of automated decision-making technology. Other states and localities are considering similar legislation and regulations. The New York City law and several proposed state laws require employers to disclose how they are using AI and identify any disparate impact on the basis of race, gender, or other protected categories. By drawing attention to the use of AI tools, these required disclosures could spur litigation asserting discrimination under Title VII, the Americans with Disabilities Act ("ADA"), the Age Discrimination in Employment Act ("ADEA"), and other employment laws.

ARTIFICIAL INTELLIGENCE IN THE WORKPLACE

Now more than ever, employers are relying upon AI. It is used in nearly every stage of the employment process, including recruiting, hiring, training, retention, promotion, compensation, and firing. In December 2021, EEOC Chair Charlotte Burrows reported that "83% of employers" and "90% of Fortune 500 companies" rely on AI during hiring.6 Within hiring and recruiting, employers use AI tools to target job postings to specific groups, screen applicants to move forward in the hiring process, administer automated interviews, and analyze candidate responses.

Two common types of AI are predictive algorithms (models trained on labeled datasets to classify data or predict outcomes) and natural language processing (which helps machines process and understand human language). Both technologies may be used in a single AI tool. For example, an AI tool that screens applicant resumes may use natural language processing to scan resumes for key words and use predictive algorithms to select candidates for interviews. Such a tool might be "trained" using resumes from current employees who are high performers so that the tool, without human intervention, can decide what factors predict an applicant's success at the company. The AI tool's output is a shortlist of prescreened resumes that, in theory, reflects candidates whose attributes resemble those of successful employees. In short, the tool is making decisions that would previously have been made by humans.
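As a purely illustrative sketch (not any vendor's actual product), the Python snippet below shows how these two techniques can be combined: TF-IDF keyword features stand in for the natural language processing step, and a logistic regression classifier trained on hypothetical resumes labeled by employee performance stands in for the predictive step. All resumes, labels, and the shortlisting threshold are invented for this example.

```python
# Illustrative only: a toy resume-screening pipeline. All data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: resumes of current employees, labeled 1 for "high performer."
training_resumes = [
    "software engineer, led cloud migration, mentored junior staff",
    "account manager, grew regional sales, exceeded quota two years running",
    "data analyst, built forecasting models, presented findings to leadership",
    "office coordinator, scheduled meetings, maintained filing system",
]
high_performer = [1, 1, 1, 0]

# NLP step: convert free-text resumes into numeric keyword (TF-IDF) features.
vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(training_resumes)

# Predictive step: the model infers, without human input, which resume
# features correlate with the "high performer" label in the training data.
model = LogisticRegression().fit(X_train, high_performer)

# Screening step: new applicants are scored and shortlisted automatically.
applicants = [
    "engineer with cloud infrastructure and mentorship experience",
    "receptionist, answered phones and ordered supplies",
]
scores = model.predict_proba(vectorizer.transform(applicants))[:, 1]
shortlist = [a for a, s in zip(applicants, scores) if s >= 0.5]
print(shortlist)
```

In commercial products, the model and its training data are typically proprietary, which is one reason, as discussed below, that employers may lack visibility into the selection criteria a tool actually applies.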

In addition to screening resumes, employers are using AI tools to evaluate candidates through video interviews. Live or recorded video interviews can be run through software utilizing a combination of machine learning, computer vision, and natural language processing to evaluate candidates based on their facial expressions and speech patterns, and then provide a score or assessment of the applicants' attributes or fitness for a job. Other applications evaluate applicants' personalities, aptitudes, cognitive skills, or "cultural fit."

The efficiencies obtained by applying AI to human resources functions can be profound. By one measure, 85% of HR professionals reported that AI tools save them time and/or increase their efficiency.7 Nearly 50% said that such tools improve their ability to identify top candidates.8 Because AI streamlines repetitive tasks like resume screening, recruiters have more time to provide a personalized experience to candidates and increase their competitive edge.

LEGAL RISKS OF USING ARTIFICIAL INTELLIGENCE IN HIRING AND RECRUITMENT

AI vendors often promise that their products will reduce or eliminate unconscious bias in recruiting and hiring decisions. Critics, however, express concern that AI tools perpetuate, and can even exacerbate, biases embedded in their training data. An often-cited example is Amazon's attempt to build an AI recruitment tool, which was abandoned in 2018 when engineers found that the algorithm discriminated against female candidates.9 The company's AI-driven model reportedly downgraded resumes containing the word "women's" and filtered out resumes with terms related to women, including those of candidates who had attended women-only colleges. This reportedly occurred because the tool was trained primarily on resumes submitted to the company over the preceding 10 years, the majority of which came from male candidates.

More recently, Workday's AI-powered screening tools are being challenged in a class action lawsuit filed in a California federal court in February 2023. The plaintiff alleges that these AI tools disqualify Black, over-forty, and disabled applicants.10 He alleges he has been rejected from 80 to 100 positions at companies that purportedly use Workday to screen applicants. Workday's AI-dependent tools, he argues, "allow its customers to use discriminatory and subjective judgements in reviewing and evaluating employees for hire" and "caused disparate impact and disparate treatment" against African-Americans, individuals with disabilities, and individuals over the age of 40 in violation of Title VII, the ADA, and the ADEA.

This section considers legal risks tied to AI adoption, including potential claims under discrimination laws, privacy laws, and newly adopted state and local legislation.

Disparate Treatment and Disparate Impact Claims

While AI technology is relatively new, the use of selection procedures or tools in making employment decisions is not. Well before the AI-era, employers used a variety of selection tools and procedures, including written aptitude tests, strength tests, and personality assessments. Those types of selection procedures have been repeatedly challenged in court under Title VII and other antidiscrimination laws. As early as 1978, the EEOC adopted the Uniform Guidelines on Employee Selection Procedures (the "Guidelines"), which provide guidance on how to assess bias in selection procedures. More recently, the EEOC has stated that these Guidelines apply squarely to AI tools that are used to make employment decisions.11

Selection tools are typically challenged through a disparate impact theory of discrimination. Unlike disparate treatment claims, disparate impact claims do not require proof of intentional discrimination. Rather, they require proof that a facially neutral employment policy or practice caused a disparate impact on a protected group without relevant justification. Employers who use biased AI tools could face liability under this theory even if they neither know nor intend that the tool disadvantages a protected group. To demonstrate how this might occur, we first explain how courts analyze disparate impact claims, and then compare how this analysis typically applies outside the context of AI to how it might apply to an AI-powered selection tool.

Disparate impact claims arising under Title VII generally proceed in three parts.12 First, the plaintiff must identify a facially neutral employment practice or policy that caused a disparate impact on the basis of race, color, religion, sex, national origin, age, disability, or other protected category.13 Second, the employer can defend against a showing of disparate impact by demonstrating that the practice or policy is both "job-related and consistent with business necessity."14 And third, the plaintiff can rebut the employer's "job-relatedness" defense by establishing that the employer failed to adopt a less discriminatory practice that would have equally met the employer's legitimate need.15

Statistical analysis plays an important role in litigating Title VII disparate impact claims. One statistical approach is to compare the selection rate of a particular protected group (e.g., White, Black, Latino, Asian, Native Hawaiian or Pacific Islander, Native American or Alaska Native) to the selection rate of another protected group. Because differences in selection rates can be caused by chance, it is important to measure whether the difference is significant enough to rule out chance. In the Guidelines, the EEOC uses a "four-fifths rule" as a rule of thumb for screening out matters it is less likely to pursue. This metric measures whether the selection rate for one protected group is less than 80% of the selection rate for the protected group with the highest selection rate.16
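To make the arithmetic concrete, the short calculation below applies the four-fifths rule of thumb to hypothetical selection data; the figures are invented solely for illustration and do not reflect any actual matter.

```python
# Hypothetical numbers, illustrating the four-fifths (80%) rule of thumb only.
applicants = {"Group A": 200, "Group B": 100}  # pool that reached this selection step
selected = {"Group A": 60, "Group B": 18}      # number selected from each group

rates = {g: selected[g] / applicants[g] for g in applicants}  # A: 0.30, B: 0.18
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "below the four-fifths threshold" if impact_ratio < 0.8 else "within the threshold"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")

# Group B's impact ratio is 0.18 / 0.30 = 0.60, which falls below 0.8, so the
# Guidelines' rule of thumb would flag the disparity for closer scrutiny.
```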

Courts have expressed skepticism toward the four-fifths rule, however, noting that it can be unreliable, especially when applied to small sample sizes.17 Even the EEOC has noted that "the four-fifths rule may be inappropriate under certain circumstances."18 Another approach, articulated by the U.S. Supreme Court in Hazelwood School District v. United States,19 utilizes standard deviations. There, the Supreme Court noted that a disparity of more than two or three standard deviations "would undercut the hypothesis that decisions were being made randomly with respect to [the protected group]." Given the specialized nature of these inquiries, it is important that statistical analyses be supported by relevant and reliable expert analyses.
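The standard-deviation approach can likewise be illustrated with hypothetical figures. Under a simple binomial model, the analysis asks how far the observed number of selections from a protected group departs from the number expected if selections were made without regard to group membership. The sketch below uses invented numbers solely to show the mechanics.

```python
# Hypothetical numbers, illustrating the standard-deviation analysis discussed in Hazelwood.
import math

total_selections = 400   # total selections made during the relevant period
expected_share = 0.25    # protected group's share of the relevant applicant pool
observed = 70            # selections actually drawn from the protected group

expected = total_selections * expected_share                                   # 100
std_dev = math.sqrt(total_selections * expected_share * (1 - expected_share))  # ~8.66
disparity = (expected - observed) / std_dev                                    # ~3.46

print(f"Expected {expected:.0f}, observed {observed}: "
      f"a disparity of about {disparity:.1f} standard deviations")

# A disparity of more than two or three standard deviations, as in this example,
# would "undercut the hypothesis" that selections were made randomly with
# respect to the protected group.
```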

A proper statistical analysis must first identify the pool from which to assess adverse impact—in other words, the denominator of the selection rate formula. The pool should be aligned with the challenged employment decision. For example, if a plaintiff alleges that a written exam administered to applicants had a disparate impact on women, the pool of similarly situated applicants consists of all applicants who took that examination during the relevant time frame. If the pool is too broad (for example, if it includes applicants who took a different test or who were screened out at a different stage of the hiring process based on other criteria), the analysis will not be meaningful.

Determining the proper pool becomes difficult with certain AI tools. Take, for example, an AI tool that administers a pre-employment examination that changes based on dynamic data sets. These algorithms utilize machine learning to "learn" or "improve" over time. Thus, in our example, the examination an employer deploys in Week 1 may be different from the examination administered in Week 4, as the tool "learns" that certain questions are less likely to produce a desired result. It may be difficult to produce a meaningful statistical analysis without knowing how the AI tool works, e.g., how frequently the algorithm changes or whether the algorithm differs for different applicant pools. For similar reasons, in cases involving AI tools, it may be difficult to establish the causation prong—i.e., that the employment practice "caused" a disparate impact.

Certain AI tools could also present challenges for employers seeking to formulate and advance a "job-relatedness" defense. As noted above, an employer can defend against a disparate impact claim by showing that the selection criteria used by the AI tool are "job-related for the position in question and consistent with business necessity,"20 but the employer may not have complete visibility into the criteria the tool actually uses. Employers may be able to strengthen a job-relatedness defense with proof that the algorithm is programmed to utilize job-related criteria and a demonstration of how the algorithm applies those criteria.

Impermissible Reliance on Regulated Data Sources

Employers' reliance on AI tools also implicates state privacy laws. Many AI tools derive their efficacy and efficiency, in part, by relying on extremely large data sets. Generally speaking, the larger the data set, the more accurate an AI tool's predictions and/or recommendations will be. However, not all data is fair game for employers to use in connection with AI tools. Some data—such as criminal history, salary history, and biometric data—are subject to regulation in certain jurisdictions when used in the employment context. Further, employers who rely on certain data about candidates as part of the hiring process must comply with federal and state background check laws.

AI Tools and Protected Data Sets. While antidiscrimination laws forbid employers from making hiring decisions based upon race, sex, age, and other protected categories, employers also must be cautious when considering other types of protected data, including criminal history, salary history, and biometric data, in the hiring process. Some jurisdictions prohibit employers from using such data outright, while others require that employers satisfy specific disclosure and other requirements before doing so.

Criminal and Salary History. Federal law does not ban employers from considering an applicant's criminal history, although EEOC guidance asserts that excluding all applicants with an arrest record will run afoul of Title VII if doing so results in discrimination based on race or another protected characteristic.21 Many state and local laws, however, explicitly restrict employers from considering applicants' or current employees' criminal history, either completely or unless certain conditions are met. For example, laws in California and New York limit the types of criminal records that may be considered and prohibit inquiry into an applicant's criminal history until after a conditional offer has been made.


Footnotes

1. EEOC Press Release, "EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness" (Oct. 28, 2021).

2. DOJ Civil Rights Division, Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring (May 12, 2022); EEOC, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022). At least one EEOC Commissioner has publicly voiced concern that the EEOC's May 12, 2022, technical assistance document "was not voted on by the full Commission, and did not go through the administrative law process involving notice and comment." Keith E. Sonderling, Bradford J. Kelley & Lance Casimir, The Promise and The Peril: Artificial Intelligence and Employment Discrimination, 77 U. Miami L. Rev. 1, 42 (2022).

3. EEOC Press Release, "EEOC Releases New Resource on Artificial Intelligence and Title VII" (May 18, 2023); EEOC, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023).

4. Keith E. Sonderling & Bradford J. Kelley, Filling the Void: Artificial Intelligence and Private Initiatives, North Carolina Journal of Law & Technology (Vol. 24, Issue 4: May 2023).

5. Digital Platform Commission Act, S. 1671, 118th Congress (2023).

6. Workforce Matters, EEOC working to stop artificial intelligence from perpetuating bias in hiring (Dec. 5, 2021).

7. Society for Human Resource Management, Fresh SHRM Research Explores Use of Automation and AI in HR (April 13, 2022).

8. Id.

9. Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, Reuters (Oct. 10, 2018).

10. Mobley v. Workday, Inc., No. 3:23-CV-00770 (N.D. Cal. Feb. 21, 2023).

11. EEOC, Select Issues, supra note 3.

12. The disparate impact proof structure under Title VII, in part due to congressional amendments of that statute in 1991, differs in important ways from the proof structure under other federal antidiscrimination provisions that codify disparate impact.

13. 42 U.S.C. § 2000e-2(k)(1)(A).

14. Id.

15. Id. § 2000e-2(k)(1)(A)(ii).

16. EEOC's Uniform Guidelines on Employee Selection Procedures, codified at 29 C.F.R. §§ 1607.1, 1607.4(D).

17. See, e.g., Watson v. Fort Worth Bank & Trust, 487 U.S. 977, 995 (1988) (noting that the rule "has been criticized on technical grounds . . . and has not provided more than a rule of thumb for courts"); Stagi v. Nat'l R.R. Passenger Corp., 391 Fed.Appx. 133, 138 (3d Cir. 2010) (unpublished) ("[T]he 'four-fifths rule' has come under substantial criticism, and has not been particularly persuasive.").

18. See EEOC, Select Issues, supra note 3.

19. 433 U.S. 299 (1977).

20. 42 U.S.C. § 2000e-2(k)(1)(A). See also Griggs v. Duke Power Co., 401 U.S. 424 (1971).

21. EEOC, Enforcement Guidance on the Consideration of Arrest and Conviction Records in Employment Decisions under Title VII of the Civil Rights Act (April 25, 2012).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.