On October 17, 2016, NYU School of Law's Center on Labor & Employment Law empaneled Littler's Global Director of Data Analytics Dr. Zev J. Eigen, U.S. Equal Employment Opportunity Commission Chair Jenny R. Yang, and Professor Pauline Kim of Washington University School of Law to discuss the implications of using predictive algorithms to inform employment decisions. This conference followed the EEOC's October 13, 2016 discussion with a group of Big Data experts, including Littler Shareholder Marko Mrkonich, on the use of data analytics in hiring, performance management, retention and other employment decisions.

Chair Yang opened the meeting by summarizing the Commission's views on the use of analytics in rendering employment decisions, citing testimony offered by panelists from the October 13 event. The EEOC is evaluating the ways in which employers use data-science applications in employment, and exploring how the Commission should respond. Chair Yang expressed concern about the validation of methods and the opacity of machine-learning processes, while also recognizing the tremendous potential value of analytic systems in HR. She noted, "AI has the potential to open doors to opportunity by reducing bias and by increasing the efficiency of our economy as a whole by getting the best people in the right jobs at the right time, and by helping employers make better decisions." This could be done by broadening the talent pool and expanding opportunities for job seekers who might have been overlooked by employers in the past. She went on to say, "one way big data can improve employment decision-making is by reducing reliance on a frequent source of discrimination—human bias, particularly in subjective practices."

This idea dovetailed with Dr. Eigen's point that, in evaluating AI systems, it is critical for government agencies, lawmakers, and employers to ask, "compared to what?" That is, it is not productive to evaluate whether an algorithm produces "biased" results without comparing it to a realistic alternative method of evaluation, rather than to some hypothetical ideal of zero impact. AI-based analysis of objective data should be considered an improvement if it reduces the aggregate negative impact on a protected class relative to the aggregate negative impact that results from subjective human decision-making or alternative systems. AI systems should not be banned simply because they may produce some degree of adverse impact.
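To make the "compared to what?" framing concrete, the short Python sketch below is purely illustrative; the selection counts, group labels, and function names are hypothetical and were not discussed at the conference. It compares the adverse impact of a hypothetical algorithmic screen against the incumbent human process using the familiar four-fifths rule of thumb, rather than judging either method against a standard of zero impact.

```python
# Purely illustrative: hypothetical selection counts, not data from the panel or the EEOC.
# The four-fifths (80%) rule of thumb compares the selection rate of the
# lowest-selected group to that of the highest-selected group; a ratio below 0.8
# flags potential adverse impact. The point is the comparison *between* methods,
# not either number in isolation.

def selection_rate(selected, applicants):
    return selected / applicants

def impact_ratio(rates_by_group):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates_by_group.values()) / max(rates_by_group.values())

# Hypothetical outcomes for the same applicant pool under two evaluation methods.
incumbent_human_process = {"group_a": selection_rate(60, 200),
                           "group_b": selection_rate(30, 200)}
algorithmic_screen = {"group_a": selection_rate(55, 200),
                      "group_b": selection_rate(42, 200)}

print(f"Human process impact ratio:      {impact_ratio(incumbent_human_process):.2f}")  # 0.50
print(f"Algorithmic screen impact ratio: {impact_ratio(algorithmic_screen):.2f}")       # 0.76
# Neither clears the 0.8 threshold, but the algorithm reduces aggregate adverse
# impact relative to the realistic alternative -- the comparison Dr. Eigen urges.
```

In this hypothetical, banning the algorithm because its ratio falls short of 0.8 would leave the workforce with the human process, whose adverse impact is worse.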

Dr. Eigen explained that employers currently use AI as a supplement to, not a replacement for, human HR decision-making; no systems now in place delegate the entire employment decision-making process to an AI system. As such, there is no meaningful difference between a human rendering a decision based on statistics or data that have been available to employers for decades and a human rendering an HR decision based on the "advice" of AI. Rather, AI is a tool that can reduce bias by helping decision-makers focus on objective data instead of potentially discriminatory subjective factors.

Dr. Eigen made three other points. First, he noted that it is important for employers and regulators to become better-informed consumers of AI and machine-learning applications in HR. One issue that has come up repeatedly is the identification of simple correlations in data as a potential source of discriminatory impact. For instance, if an employer selects programmers who frequent a website dominated by men because frequenting that website correlates with better programming performance, the potential impact on female applicants could be problematic. Dr. Eigen argued that he was aware of no data-science application that would recommend using a simple correlation like this as a method for hiring; indeed, the identification of simple correlations like this one has been a problem since early disparate impact case law (such as the correlation between graduating high school and race in the seminal case of Griggs v. Duke Power Co.), and it is not how AI or machine-learning systems operate. Second, Dr. Eigen explained that it is important for regulators to evaluate not only the data generated by employers, but also the software and systems used to generate those data; disproportionate attention has been paid to the former and insufficient attention to the latter. Lastly, echoing Chair Yang's point about the positive value of analytics in HR, Dr. Eigen recommended that employers seek out systems that enable the identification of new talent pools to improve diversity and reduce bias broadly.

Professor Kim expressed concern that jurors or fact-finders would regard algorithms as more legitimate and objective, and that this perception could enable bad-actor employers to avoid liability or otherwise obfuscate discriminatory intent. She also noted that AI as applied to HR is fundamentally different from AI as applied in marketing, sales, or related consumer-facing applications: if a customer is erroneously offered a product in which she has no interest, the consequences are far smaller than if an applicant is passed over for a job. She worried that some applicants might become discouraged if there were too much uniformity across AI selection systems, and that the nature of employment selection makes experimental validation difficult, reducing the efficiency of the models and perhaps leading to homogenization of the workforce. She illustrated this point with a comparison to credit card fraud detection. Credit card issuers use AI to detect fraud; when fraud is detected erroneously (a type 1 error, or false positive), the end user corrects the error and the system learns from the correction. Professor Kim was concerned that an employment selection AI system would lack this kind of type 1 error correction, because applicants who are erroneously passed over are never hired, so the system never learns of its mistakes. Dr. Eigen and Chair Yang noted the need for employers to be mindful of these issues.

Employers should take care not to implement systems that simply replicate the demographics of their existing workforce. Instead, employers should implement new AI systems experimentally, so that precisely this type of error correction and system learning can take place.
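To illustrate the feedback gap Professor Kim described, the minimal Python sketch below uses hypothetical data, scores, and thresholds; it is not a system described by any panelist. It shows why a hiring model never observes outcomes for the applicants it rejects, and how a small randomized "experimental" slice of hires can surface erroneous rejections so they can feed back into the next version of the model.

```python
import random

random.seed(0)

# Hypothetical candidate pool with a latent "quality" the employer cannot observe
# until someone is actually hired and performs on the job.
candidates = [{"id": i, "quality": random.random()} for i in range(1000)]

# Stand-in for a trained screening model: deliberately noisy, so it makes mistakes.
for c in candidates:
    c["score"] = 0.5 * c["quality"] + 0.5 * random.random()

hired_by_model = [c for c in candidates if c["score"] > 0.6]

# A small randomized "experimental" slice hired from among those the model rejected.
experimental = random.sample([c for c in candidates if c["score"] <= 0.6], 50)

# Job performance is observable only for people actually hired, so the model's
# erroneous rejections stay invisible -- unless the experimental slice surfaces
# them and those outcomes are fed back into the next version of the model.
surfaced_errors = [c for c in experimental if c["quality"] > 0.7]
print(f"Hired by model: {len(hired_by_model)}; experimental hires: {len(experimental)}")
print(f"Strong candidates the model would have screened out: {len(surfaced_errors)}")
```

Without the experimental slice, the model's mistakes about rejected applicants would never appear in any data the employer collects, which is the error-correction gap distinguishing hiring from fraud detection.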

Lastly, the panel discussed the tension between the need for an employer to carefully evaluate the extent to which "algorithms discriminate," on one hand, and the fear of incurring liability for observing relationships between protected categories and outcomes, on the other. Dr. Eigen noted that employers are wise to be mindful of these risks, saying, "employers should conduct attorney-client privileged audits to assess risk and reduce it before it materializes into costly litigation." Chair Yang raised the concept of a "safe harbor" that would allow employers to explore data-scientific applications and reduce risk without the fear that uncovering problems would expose them to costly litigation.

Littler will continue to monitor and report on the ongoing dialogue among regulators, government agencies, practitioners and experts in the rapidly evolving space of AI and HR.
