Today’s guest post is by Corinne Fierro, Mildred Segura, and Farah Tabibkhoei, all of Reed Smith. All three are part of the firm’s left-coast, techno side of the product liability practice, and they bring our readers a recent appellate decision that addresses the intersection of 21st Century high technology and 20th Century product liability concepts. As always, our guest bloggers deserve all the credit (and any blame) for what follows.

**********

Is an Artificial Intelligence ("AI") algorithm subject to strict liability principles? For years, courts have left this question unanswered. Now, it would seem that the courts' long-awaited discussion of this topic has begun as T.S. Eliot predicted it all would end: "not with a bang but a whimper."

On March 5, 2020, the Court of Appeals for the Third Circuit held in Rodgers v. Christie that an algorithmic pretrial risk assessment, which uses a "multifactor risk estimation model" to assess whether a criminal defendant should be released pending trial, was not a "product" under the New Jersey Products Liability Act ("NJPLA" or "the Act"). 2020 WL 1079233, at *2 (3d Cir. 2020). This is not a life sciences case, but it is important as an indication of how the product liability framework may be applied to AI applications, including, but not limited to, medical devices and pharmaceuticals. You may recall this Blog’s past post, which theorized about how an AI product liability case could play out. Although the Third Circuit's Rodgers decision is not binding precedent, it is the start of what we believe will be a growing body of case law on AI in the product liability context.

Let's dive in.

In Rodgers, Plaintiff brought a product liability action under the NJPLA after her son was murdered, allegedly by a man who had been granted pretrial release from jail just days earlier, after the state court used the Public Safety Assessment ("the PSA"). Id. at **1-2. The PSA is an algorithm developed by the Laura and John Arnold Foundation (the "Foundation") to estimate a criminal defendant's risk of fleeing or endangering the community. Id. at *3. Plaintiff sued the Foundation and several individuals not involved in this appeal.

The Third Circuit affirmed the district court's dismissal of the complaint, finding that the PSA is not a "product" under the NJPLA. The district court applied the NJPLA, which imposes strict liability on manufacturers or sellers of defective "products." Id. at *2. Because the NJPLA does not define the term "product," the court turned to the Restatement (Third) of Torts. Under the Restatement, a "product" is "tangible personal property distributed commercially for use or consumption" or any "[o]ther item[]" whose "context of . . . distribution and use is sufficiently analogous to [that] of tangible personal property." Id.

Applying this definition, the district court held that the PSA does not qualify as a "product" under the Act and therefore could not be subject to strict liability. The Third Circuit affirmed this decision for two reasons: (1) the PSA is not distributed commercially; and (2) the PSA is neither "tangible personal property" nor "remotely analogous to it." Id. at *3 (internal quotation marks omitted).

As to the first point, the Third Circuit agreed with the district court that the PSA was not commercially distributed but rather designed as an "objective, standardized and . . . empirical risk assessment instrument" to be used in pretrial services programs. Id. (internal quotation marks omitted).

Second, the Third Circuit endorsed the district court's reasoning that the PSA could not qualify as a "product" because "information, guidance, ideas, and recommendations" cannot be products under the Third Restatement. Id. (internal quotation marks omitted).

In addition to this definitional exclusion, the district court was also hesitant to impose strict liability because "extending strict liability to the distribution of ideas would raise serious First Amendment concerns." Id. This reasoning, which has previously been applied in a variety of product liability cases involving books and media, could apply to AI more generally, given that AI algorithms are now most commonly used to render recommendations and guidance. In healthcare, AI algorithms are used to diagnose patients with diseases such as diabetic retinopathy and cancer, and they help patients monitor their health via "smart" insulin pumps and phone applications. The role AI plays in these scenarios, however, is suggestive: the AI proposes an idea, in the form of a diagnosis or an application alert, and human beings act on that information. The First Amendment defense would, by extension, likely apply to these algorithms.

Plaintiff attempted to sidestep the "product" definition and argued that the PSA's defects "undermine[] New Jersey's pretrial release system, making it not reasonably fit, suitable or safe for its intended use." Id. at *4 (internal quotation marks omitted). The Third Circuit affirmed the lower court's dismissal of this argument, noting that the NJPLA applies only to defective products, "not to anything that causes harm or fails to achieve its purpose." Id.

So what does this case mean for defendants facing AI product liability claims? First, if the Third Circuit's decision is any indication, courts are likely to apply traditional product liability principles to AI and to find that AI is not a "product" within the meaning of the Restatement (Third) of Torts. Second, courts following Rodgers are likely to hold that AI is not subject to strict liability claims. Third, we expect that the holding in Rodgers will not open the floodgates to AI litigation, at least for now, because plaintiffs will likely face an uphill battle in establishing that strict liability applies. And finally, a First Amendment challenge stands in the way of plaintiffs seeking to extend strict liability to algorithms.

While this case is not binding precedent, it is hopefully the start of broader court engagement on the topic. We're hoping for a bang.

This article is presented for informational purposes only and is not intended to constitute legal advice.