Highlights:

  • Employers must consider disparate impact discrimination and ensure job-relatedness when using AI selection tools.
  • Employers have the responsibility to audit and validate AI results in employment decisions.
  • The four-fifths rule and statistical analysis play a significant role in assessing disparate impact.
  • Federal agencies prioritize monitoring the use of AI in employment decisions.
  • Employers using AI in selection procedures should conduct thorough reviews, validations and ongoing audits.

Employers are increasingly using artificial intelligence (AI) in employment decisions to save time and effort, increase objectivity, decrease bias, and optimize employee performance. In May 2023, the Equal Employment Opportunity Commission (EEOC) issued Technical Assistance Guidance (Guidance) regarding the use of software, algorithms, and AI in employment selection. The Guidance is part of the EEOC's ongoing efforts to ensure that employers' use of new technologies in employment decisions complies with federal anti-discrimination laws.

The Guidance reflects the EEOC's standards for evaluating employers' use of algorithmic decision-making tools and offers suggested best practices for employers, but it is not binding and does not have the force of law. The Guidance is limited to assessing whether an employer's selection procedures have a disproportionately negative impact on categories protected by Title VII of the Civil Rights Act of 1964 (Title VII), such as race and gender. It does not apply to other uses of AI or address other federal or state laws.

According to the EEOC, "[i]n the employment context, using AI has typically meant that the developer relies partly on the computer's own analysis of data to determine which criteria to use when making decisions." This analysis often relies on machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems. Examples of common AI tools highlighted by the Guidance include resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.

The EEOC Warns Employers to Be Wary of Disparate Impact Discrimination Resulting from the Use of AI in Employment Selection

In the employee selection context, disparate impact discrimination can occur when a facially neutral employment test or selection tool has the effect of disproportionately excluding individuals based on race, color, religion, sex, or national origin, and the test or selection procedure is not job-related for the position in question and consistent with business necessity.

According to the Guidance, employers can show that a neutral employment test or selection tool is job-related and consistent with business necessity by ensuring the selection procedure measures the skills needed for the specific position, as opposed to a general measurement of applicants' or employees' skills. Employers may therefore need to use different algorithmic decision-making tools, with different assessment criteria, for different positions based on the skills each role requires.

The Guidance goes on to state that even where an employer shows the selection procedure is job-related and consistent with business necessity, it must still determine whether protected classes are adversely impacted and, if so, consider whether a less discriminatory alternative exists. In many cases, the less discriminatory alternative might mean having a human perform the selection function instead of the automated decision-making tool.

The Guidance Highlights the Need for Employers to Validate Results Generated by AI

The Guidance emphasizes that employers (not AI vendors) are responsible for any disparate impact caused by the use of AI tools in employment selection and suggests that employers should continually audit the selection results those tools generate. The EEOC directs employers to its Uniform Guidelines on Employee Selection Procedures (UGESP) to evaluate whether the use of AI results in a selection rate for individuals in a protected group "that is 'substantially' less than the selection rate for individuals in another group."

The Guidance acknowledges that the EEOC and employers have historically relied on the "four-fifths rule" to determine whether an employer's (i.e., the AI tool's) selection process disparately impacts a protected class. Under the four-fifths rule, adverse impact is presumed if the selection rate for a protected group is less than four-fifths (i.e., 80%) of the selection rate for the group with the highest selection rate. However, the Guidance cautions that while the four-fifths rule is a helpful rule of thumb for employers, a more detailed analysis focused on statistical significance may be required in many circumstances.
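To make the arithmetic concrete, the sketch below works through the four-fifths calculation and one common statistical-significance check (a two-proportion z-test) using invented applicant counts. The group labels, numbers, and thresholds are illustrative assumptions only and are not drawn from the Guidance.

```python
# Hypothetical four-fifths rule and significance check.
# All counts below are invented for illustration.
from math import sqrt

group_a_applicants, group_a_selected = 200, 100  # selection rate 50%
group_b_applicants, group_b_selected = 100, 30   # selection rate 30%

rate_a = group_a_selected / group_a_applicants
rate_b = group_b_selected / group_b_applicants

# Four-fifths rule: compare the lower selection rate to the higher one.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.60 -- below the 0.80 threshold

# Two-proportion z-test, one common statistical-significance analysis.
pooled = (group_a_selected + group_b_selected) / (group_a_applicants + group_b_applicants)
se = sqrt(pooled * (1 - pooled) * (1 / group_a_applicants + 1 / group_b_applicants))
z = (rate_a - rate_b) / se
print(f"z-statistic: {z:.2f}")  # about 3.30; |z| > 1.96 suggests significance at the 5% level
```

In this hypothetical, the tool fails both checks: the impact ratio (0.60) falls below the four-fifths threshold, and the difference in selection rates is statistically significant, illustrating why the Guidance treats the four-fifths rule as a rule of thumb rather than a safe harbor.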

What Does This Mean for Employers?

The EEOC's Guidance serves as a helpful framework for anticipating how the EEOC will assess employers' use of algorithmic decision-making tools. It also highlights the EEOC's and other federal agencies' continued focus on employers' use of AI in employment selection decisions, a focus we predict will only intensify as AI usage proliferates. Indeed, many other federal agencies, including the Department of Justice, National Labor Relations Board, Consumer Financial Protection Bureau, Federal Trade Commission, and National Institute of Standards and Technology, have recently rolled out AI-related guidance.

Additionally, many states and localities, such as Illinois and New York City, have enacted laws regulating the use of AI in employment. Employers who intend to use AI to make employment selection decisions should consider taking the following steps:

  • Review current selection procedures (including those designed or administered by third-party vendors) to determine the extent to which algorithmic decision-making tools are being used.
  • If using AI in employment selection procedures, ensure the selection criteria focus on the skills needed for the particular position, as opposed to a general measurement of applicants' or employees' skills.
  • Ensure in-house and third-party vendor-provided algorithmic selection tools are validated in accordance with the UGESP, and use the UGESP to determine whether selection procedures have an adverse impact.
  • Vet any third-party provider of algorithmic decision-making tools, including asking whether the tool has been validated and understanding the provider's process for guarding against discriminatory results.
  • Audit selection results to identify any potential or actual adverse impact on protected classes, both before deploying any algorithmic decision-making tool and on an ongoing basis (a simple illustrative screen appears after this list). Consider the extent to which these audits should be conducted under attorney-client privilege.
  • If a third-party vendor is designing or administering AI tools for employment selection procedures, consider negotiating indemnification provisions in the agreement with the vendor whereby the vendor assumes liability for flawed or unlawful results generated by the AI tool.
  • Keep apprised of legal developments in the space, and consult with knowledgeable legal counsel as new legislation, case law, or regulations are implemented.
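As a complement to the audit step above, the following is a minimal, hypothetical sketch of what a recurring adverse-impact screen might look like. The group labels, counts, and the 0.80 flag threshold are assumptions for illustration only; an actual audit program should be designed with legal counsel and validated in accordance with the UGESP.

```python
# Minimal sketch of a recurring adverse-impact screen.
# Group labels and counts are hypothetical placeholders.
from typing import Dict, Tuple

def impact_ratios(results: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {group: selected / applicants
             for group, (applicants, selected) in results.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical quarterly results from an AI screening tool: (applicants, selected)
quarterly_results = {"Group A": (180, 90), "Group B": (120, 42), "Group C": (60, 27)}

for group, ratio in impact_ratios(quarterly_results).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

Here, Group B's ratio (0.70) falls below the 0.80 threshold and is flagged for review; flagged results would then warrant the more detailed statistical analysis the Guidance contemplates.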

Originally published by HR.com.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Morrison & Foerster LLP. All rights reserved.