On April 10, 2019, U.S. lawmakers introduced the Algorithmic Accountability Act (the AAA). The bill is sponsored by U.S. Senators Ron Wyden (D-OR) and Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced an equivalent bill in the House of Representatives. The AAA empowers the Federal Trade Commission (FTC) to promulgate regulations requiring covered entities to conduct impact assessments of algorithmic "automated decision systems" (including machine learning and artificial intelligence) to evaluate their "accuracy, fairness, bias, discrimination, privacy and security."

Automated decision systems of covered entities

The AAA defines "covered entities" as those that generate more than $50 million per year in revenue, possess or control the personal information of at least one million consumers or devices, or act as data brokers as a primary business function. The bill empowers the FTC to establish a framework for evaluating the bias or discrimination against consumers that might result from covered entities' use of automated systems, broadly defined as "computational process[es], including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making."
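For illustration only, the coverage test can be read as a simple disjunction of the three criteria above. The sketch below is a minimal, hypothetical rendering of that logic, assuming the thresholds as summarized in this alert; the names and fields are illustrative and it is not a compliance tool.

```python
# Hypothetical sketch of the AAA's "covered entity" test as summarized above.
# The class and function names are illustrative, not drawn from the bill text.
from dataclasses import dataclass


@dataclass
class Entity:
    annual_revenue_usd: float          # yearly revenue
    consumers_or_devices_held: int     # personal-information records possessed or controlled
    is_primarily_data_broker: bool     # data brokering as a primary business function


def is_covered_entity(e: Entity) -> bool:
    """An entity is covered if it meets ANY one of the three criteria."""
    return (
        e.annual_revenue_usd > 50_000_000
        or e.consumers_or_devices_held >= 1_000_000
        or e.is_primarily_data_broker
    )
```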

Bias/discrimination impact assessment

Under the bill, covered entities are required to audit their processes for bias and discrimination and to correct identified issues in a timely manner. There have been growing concerns regarding racial, gender-based or political biases that can result from automated decision-making and the potential negative impacts from the use (or misuse) of artificial intelligence. Senator Wyden highlighted these concerns in the press release announcing the introduction of the bill, noting that "instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color" in a number of significant decisions that impact consumers, including home ownership, creditworthiness, employment and even incarceration. The bill provides that in evaluating automated systems for bias, a review of, among other things, a system's "training data" must take place to determine whether or how the system's biases manifest. Stated another way, the bill rests on the premise that an algorithm is only as good as the data that informs it.
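To make the "training data" point concrete: one common first step in a bias audit is to compare favorable-outcome rates across demographic groups in the data used to train a model. The sketch below computes a basic demographic-parity gap; the metric is a widely used fairness measure assumed here for illustration, not a method the bill prescribes.

```python
# Illustrative sketch: a simple demographic-parity check on training data.
# The metric choice is an assumption; the bill does not mandate any
# particular fairness measure.
from collections import defaultdict


def outcome_rates(records, group_key, outcome_key):
    """Rate of favorable outcomes per group, e.g., loan approvals by group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += 1 if r[outcome_key] else 0
    return {g: favorable[g] / totals[g] for g in totals}


def demographic_parity_gap(records, group_key, outcome_key):
    """Difference between the highest and lowest group outcome rates."""
    rates = outcome_rates(records, group_key, outcome_key)
    return max(rates.values()) - min(rates.values())


# Example: hypothetical historical lending data with an approval flag.
training_data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
gap = demographic_parity_gap(training_data, "group", "approved")
print(f"demographic parity gap: {gap:.2f}")  # 0.50 here: a flag for review
```

A large gap in training data does not by itself establish discrimination, but under the bill's logic it is precisely the kind of pattern an impact assessment would surface for review.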

Data protection impact assessment

Although the public presentation of the AAA has focused more on the risks associated with bias and discrimination from automated decision-making, the AAA also imposes privacy and data security requirements. Many automated decision-making systems require large amounts of data (often in the form of personal information) to maximize the efficacy of their results and to learn and improve. The collection, retention, use and disposal of that information also carry risks to privacy and security that other countries have recognized and that the AAA seeks to address by requiring covered entities to audit "the extent to which an information system protects the privacy and security of personal information the system processes."

Given the sensitivity of the information often collected by automated decision-making systems (including race, gender, biometric, and genetic information), any promulgated regulations that address what qualifies as appropriate impact levels may take cues from existing guidance that requires heightened levels of care for such sensitive data points.

Implications

The legislation is illustrative of a growing trend requiring companies to analyze whether the use of arguably objective algorithms in making certain decisions, such as employment and lending decisions, produces inadvertent discriminatory outcomes. Moreover, the bill is evocative of a significant trend and strategy to regulate technology and the use of personal data while reinforcing regulatory power at the federal level and promoting industry self-regulation.

In the same week that the AAA was proposed, Senators Mark Warner (D-VA) and Deb Fischer (R-NE) introduced the "Deceptive Experiences To Online Users Reduction (DETOUR) Act," which prohibits coercive practices that manipulate or pressure consumers into consenting to the collection and use of their personal data without making an informed choice. A bipartisan proposal, the "Commercial Facial Recognition Privacy Act," which prohibits the collection or sharing of facial recognition data without explicit consent, was introduced in March 2019. All three of the newly proposed bills empower the FTC with regulatory enforcement authority. Two of them (the Commercial Facial Recognition Privacy Act and the AAA) also provide state attorneys general with the ability to bring enforcement actions.

If any of this legislation passes, it will further reinforce the FTC's regulatory power and, by extension, will likely be informed by the FTC's recent forays into privacy and data security practices as elements of antitrust and consumer protection enforcement priorities. The FTC demonstrated its focus on consumer protection and antitrust issues that arise from harvesting data through artificial intelligence at its "Competition and Consumer Protection in the 21st Century" hearings in November 2018, at which panelists discussed new legislation and debated whether the FTC's current powers are sufficient to address new and developing data practices.

The proposed AAA is at least some indication that Congress has determined the FTC needs more specific authority to adequately regulate in this space. Companies seeking to apply new technological methods, or to innovate in how they use the data they collect or plan to acquire, will have to carefully consider the controls they implement in order to comply.

Client Alert 2019-098

This article is presented for informational purposes only and is not intended to constitute legal advice.