The House Financial Services Committee considered testimony on measures to reduce bias in artificial intelligence ("AI") algorithms used by financial institutions and regulators.

Testimony

University of Massachusetts Amherst Assistant Professor Philip Thomas urged regulators to provide a definition of "fairness" in order to regulate the use of machine learning effectively. He encouraged regulators to consult with machine learning researchers to assess (i) whether a chosen definition of fairness could be legally enforceable, (ii) the potential for unexpected behavior resulting from a particular definition and (iii) the impact a definition may have on profitability. Dr. Thomas also recommended creating algorithms that are "better behaved" by (i) explicitly constraining a model's output behavior and (ii) avoiding definitions of fairness that could impose unintended harm on the very groups they were designed to protect (e.g., algorithms that prohibit the use of race in lending decisions in order to deter racial discrimination).
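
Dr. Thomas's first recommendation, explicitly constraining a model's output behavior, can be illustrated with a short sketch in the spirit of his "Seldonian" framework, in which a trained candidate model is returned only if it passes a behavioral test on held-out data. The metric, threshold and function names below are hypothetical illustrations, not drawn from the testimony:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def demographic_parity_gap(model, X, group):
        # Absolute difference in positive-prediction rates between two groups.
        preds = model.predict(X)
        return abs(preds[group == 0].mean() - preds[group == 1].mean())

    def train_with_behavioral_constraint(X_train, y_train,
                                         X_safety, group_safety, max_gap=0.05):
        # Fit a candidate model, then accept it only if its measured disparity
        # on held-out "safety" data satisfies the explicit constraint.
        candidate = LogisticRegression().fit(X_train, y_train)
        if demographic_parity_gap(candidate, X_safety, group_safety) <= max_gap:
            return candidate
        return None  # decline to return any model rather than risk unfair behavior

The design choice of returning no model at all when the constraint cannot be met, rather than the most accurate model regardless of its behavior, is what makes the constraint explicit rather than aspirational.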

Brookings Institution Fellow Makada Henry-Nickie called on regulators, financial institutions and technologists to assess both the benefits and the risks of artificial intelligence in order to implement the safeguards necessary to prevent harm. To protect consumers, Dr. Henry-Nickie urged researchers to (i) identify the source and transmission mechanism of biased outcomes and (ii) recognize that machine learning bias is "fluid," explaining that flexible safeguards must be implemented so that artificial intelligence can fully benefit consumers.

In addition, she advised Congress to:

  • strengthen the resiliency of the federal consumer oversight framework;
  • support the Consumer Financial Protection Bureau ("CFPB") in developing a consumer-focused model governance framework that keeps pace with advances in algorithmic decision-making and market tools; and
  • monitor the Department of Housing and Urban Development's progress regarding its proposal to address the disparate impact standard.

University of Pennsylvania Professor and National Center Chair Michael Kearns warned of the "dangers and harms of machine learning" and of the unintended consequences of the scientific principles embedded in machine learning systems.

Emerging Tech AI and Privacy Advisor Bari A. Williams identified several ways to avoid fraud while improving the deployment of AI within the financial services sector. She recommended (i) auditing systems deliberately and proactively looking for biases, (ii) leveraging statistical techniques to reassess data and reduce bias and (iii) implementing a "fairness regularizer" (i.e., a mathematical constraint incorporated into the algorithmic model to enforce fairness). In addition, Ms. Williams encouraged the use of mathematical methods, in separate products, that explain the causes of disparity and of fairness. She also called on Congress to increase parity and transparency.
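
A fairness regularizer of the kind described is commonly implemented as a penalty term added to a model's training loss, so that the optimizer trades predictive accuracy against measured disparity. A minimal sketch, assuming a logistic-regression objective and a demographic-parity-style penalty (the weight lam and all other names are illustrative assumptions, not from the testimony):

    import numpy as np

    def fair_logistic_loss(w, X, y, group, lam=1.0):
        # Standard logistic (cross-entropy) loss.
        p = 1.0 / (1.0 + np.exp(-X @ w))
        p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
        log_loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
        # Fairness penalty: squared gap in mean predicted score between groups.
        gap = p[group == 0].mean() - p[group == 1].mean()
        return log_loss + lam * gap ** 2

Minimizing this objective with any standard optimizer (e.g., scipy.optimize.minimize) yields models whose group disparity shrinks as lam grows, typically at some cost in accuracy; that tradeoff is taken up in the commentary below.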

Carnegie Mellon University Heinz College of Information Systems and Public Policy Professor Rayid Ghani stated that policymakers must expand current regulatory frameworks to make them more adaptable to AI-assisted decision-making. He urged Congress to:

  • provide educational resources for agencies as they increase their involvement with AI;
  • enhance agency technology capabilities and compliance; and
  • include key requirements within requests for proposals for AI systems, such as (i) an initial project phase during which applicants can gather the requirements necessary to ensure equitable outcomes, (ii) a detailed plan and methodology and (iii) a "continuous improvement plan" to ensure desired outcomes.

Commentary

Steven Lofchie

Professor Michael Kearns raises important moral, philosophical and economic dilemmas to be considered when designing AI systems to achieve "equitable" outcomes:

Stakeholders must decide what is the right accuracy-fairness balance. We must also be cognizant of the fact that different notions of fairness may be in competition with each other as well. For example, it is entirely possible that by asking for more fairness by race, we must suffer less fairness by gender. These are painful but unavoidable scientific truths.
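
One common way to formalize the balance Professor Kearns describes is as a constrained optimization problem; the notation below is an illustrative convention, not drawn from his remarks:

    \min_{\theta} \; \mathcal{L}(\theta)
    \quad \text{subject to} \quad
    \left| \Delta_{\text{race}}(\theta) \right| \le \varepsilon_{1},
    \qquad
    \left| \Delta_{\text{gender}}(\theta) \right| \le \varepsilon_{2}

Here \mathcal{L} is the model's predictive loss and \Delta_{g} measures its disparity across a protected attribute g. Tightening either tolerance shrinks the set of admissible models, so achievable accuracy can only stay the same or worsen, and for small enough tolerances no model may satisfy both constraints at once. That is the competition between notions of fairness described in the quotation above.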
