If the promise of adaptive artificial intelligence (sometimes called "Machine Learning") is to be achieved in the medical area, FDA's regulation of medical devices is going to have to graduate from geometry to calculus.  By its nature, machine learning changes the details of its output constantly.  The rigid regulatory model requiring FDA pre-approval of all "major changes" (defined as anything that could affect product safety or effectiveness) that has served FDA, and the public health, well for many years cannot handle adaptive artificial intelligence.  The fundamental insight of the calculus was that it allowed measurement of rates of change rather than change itself.  FDA is going to have to come up with something analogous to evaluate, not every particular change created by machine learning, but rather rates and directions of change.  And FDA is trying.  This guest post by Reed Smith attorneys Mildred Segura, Maryanne Woo, and Christopher Butler takes a deep dive into FDA's self-described "first step" at creating a regulatory regime for this potentially transformative technology.

In advance, please pardon all the acronyms; they're unavoidable in this area.

**********

FDA (this acronym doesn't count) Commissioner Scott Gottlieb was not around very long, but in one of his last statements as Commissioner, he described the recent FDA discussion paper we are examining today as a "first step toward developing a novel and tailored approach" to how the agency regulates Artificial Intelligence/Machine Learning ("AI/ML") (acronym #1).  It is one small step in a series of steps FDA intends to take before its ultimate giant leap to a Total Product Lifecycle ("TPLC") (acronym #2) regulatory framework for AI/ML-based medical devices.  FDA has been discussing TPLC regulation generally for a couple of years now as part of its Digital Health Innovation Action Plan, but this is different.  It is the agency's first try at tackling AI/ML.  This is the hard nut.  It represents an important aspect, perhaps the most important aspect, of FDA's ultimate goal of TPLC oversight for digital health.

FDA has already cleared or approved several examples of AI/ML-based Software as a Medical Device ("SaMD") (acronym #3).  For instance, last year, FDA authorized an AI-based device for detecting diabetic retinopathy, an eye disease that can cause vision loss.  FDA also authorized a device using artificial intelligence to alert providers to a potential stroke in patients.  The AI/ML used in these devices, however, is like a moon rock, "locked" in shape.  The algorithms do not change, and any alteration would likely require additional FDA premarket review.  In contrast, the excitement in this space stems from AI/ML algorithms that are "adaptive" and able to "learn" from real-world experience.  Adaptive AI/ML is more like a nebula, a non-solid body that changes shape depending upon whatever nearby material is exerting gravitational pull.  These types of algorithms will change over time and may provide different outputs, or may even drift from the intended use FDA originally authorized.
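For readers who like to see the distinction in concrete terms, below is a minimal, hypothetical sketch, ours alone and not drawn from the discussion paper or any cleared device, of the difference between a "locked" algorithm and an "adaptive" one.  The model, data, and library choices are illustrative assumptions only.

```python
# Hypothetical sketch: "locked" vs. "adaptive" algorithms.
# A locked model's parameters are frozen after authorization; an adaptive model
# keeps updating itself from real-world data it encounters after deployment.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))          # stand-in for premarket training data
y_train = (X_train[:, 0] > 0).astype(int)    # stand-in labels

# "Locked": trained once, then used only for prediction.
locked_model = SGDClassifier(loss="log_loss", random_state=0).fit(X_train, y_train)

# "Adaptive": the same starting model, but it continues to learn from
# post-market data via incremental updates, so its behavior can drift.
adaptive_model = SGDClassifier(loss="log_loss", random_state=0).fit(X_train, y_train)

X_field = rng.normal(loc=0.3, size=(100, 4))   # post-market, real-world inputs
y_field = (X_field[:, 0] > 0.3).astype(int)    # confirmed outcomes fed back in
adaptive_model.partial_fit(X_field, y_field)   # the locked model never does this

x_new = rng.normal(size=(1, 4))
print("locked prediction:  ", locked_model.predict(x_new))
print("adaptive prediction:", adaptive_model.predict(x_new))
```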

While the authors scoff at those who are not convinced the moon landing was real, despite the photographic and hard (the moon rocks again) evidence, we do agree that FDA cannot rely on a single snapshot of an adaptive AI/ML algorithm at the premarket approval stage to guarantee continued safety and effectiveness throughout its many iterations.  At the same time, however, FDA does not want to stifle innovation in technology, which Dr. Gottlieb recognized has "the potential to fundamentally transform the delivery of health care."

So, what to do?

FDA has built some of the framework on which to base its launch into the new (final?) frontier of AI/ML-based SaMD regulation.  The most significant piece is its Software Precertification (Pre-Cert) Pilot Program.  The big departure this voluntary test program makes from traditional medical device evaluation is that FDA scrutiny focuses more on the developer than on the device itself.  Pre-Cert will be given only to those "manufacturers who have demonstrated a robust culture of quality and organizational excellence, and who are committed to monitoring real-world performance of their products once they reach the U.S. market."  That amounts to a method of addressing the direction of device-related change.

Building on the precertification working model, FDA's discussion paper ventures into regulation of AI/ML-based SaMD modifications after FDA's initial review.  Electronicovigilance, anyone?  The paper suggests that FDA's level of regulatory action would depend on the type and effect of the modification.  For example, a change in the data inputted into the device might lead to better precision in the analysis.  That type of change would likely not merit premarket approval.  On the flip side, the AI/ML may have modified itself to such a significant degree that a device originally intended only to aid diagnosis could now be relied upon to provide a definitive diagnosis.  For example, an app that was used to flag skin moles as potentially cancerous (and recommend a visit to a doctor) could morph into a program that could definitively diagnose melanoma (without a trip to the doctor).  Such a change in the possible use for a life-threatening condition would necessitate premarket approval ("PMA") (acronym #4).

And then there is the mushy middle (or should that be "muddle"?).  The AI/ML may have learned enough to evolve to accept new types of data inputs that could improve, but not change, its original function.  For example, a device that relied upon heart rate data to diagnose types of atrial fibrillation might self-modify to gain the capacity to use oxygen saturation as well as heart rate to make that diagnosis.  For this middle ground, FDA suggested a "focused review," which unfortunately remains as unfocused as the Hubble telescope before its corrective optics.  FDA is aware of this vacuum and is looking for input, but it is clear FDA wants manufacturers to reach out affirmatively and report modifications of any type, before any adverse safety event or any indication of an issue with efficacy.
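To make the mushy middle a bit more concrete, here is another hypothetical sketch, again ours rather than FDA's: the device's output categories (its intended use) stay the same, while the inputs it accepts expand.  The diagnostic labels, features, and model are invented for illustration.

```python
# Hypothetical sketch of the middle ground: same intended use (same output
# categories), but the algorithm now accepts a new type of input data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
labels = np.array(["no_afib", "paroxysmal_afib", "persistent_afib"])
y = rng.choice(labels, size=600)             # stand-in diagnoses (intended use)

# Original device: heart-rate-derived features only.
X_hr = rng.normal(size=(600, 3))
original = LogisticRegression(max_iter=500).fit(X_hr, y)

# Modified device: oxygen saturation added as an input; the diagnostic
# categories, and therefore the intended use, are unchanged.
X_spo2 = rng.normal(size=(600, 1))
expanded = LogisticRegression(max_iter=500).fit(np.hstack([X_hr, X_spo2]), y)

print("original inputs:", original.n_features_in_, "classes:", list(original.classes_))
print("expanded inputs:", expanded.n_features_in_, "classes:", list(expanded.classes_))
```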

FDA has clearly signaled it is looking for help – even asking whether these modifications are the type typically encountered with AI/ML-based SaMD.  As with all discussion papers, there are more questions than answers, but with this one, it seems as if FDA is sending out a SETI (extra-bonus acronym #5) regulatory signal hoping that as many people as possible will respond.  For example: should FDA require submissions to characterize the process of AI/ML-based SaMD self-modification, and if so, how?

Another aspect of the new frontier raised by FDA's discussion paper is the proposed framework for transparency and performance monitoring during the total product lifecycle.  FDA believes that applying a TPLC approach to the regulation of software products is particularly important for AI/ML-based SaMD, given such software's ability to adapt and improve from real-world use, and is necessary to permit AI/ML-based devices to enter the healthcare space safely.

FDA suggests that companies be required to monitor continuously the accuracy and performance of their devices and any software changes.  Transparency could include regular updates to FDA, to the manufacturer's device-company collaborators, and to the public, including clinicians, patients, and general users.  Monitoring could include additions to the device's file or annual report, Case for Quality activities, or real-world performance analytics via the Pre-Cert Program.
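What that monitoring might look like in practice is anyone's guess at this stage.  The sketch below is purely our own illustration of one plausible approach, with the window size and accuracy threshold invented for the example; nothing in the discussion paper prescribes this mechanism.

```python
# Hypothetical sketch of ongoing real-world performance monitoring: track
# accuracy over a rolling window of confirmed cases and flag when performance
# drops below a pre-specified floor (e.g., a trigger for a report to FDA).
from collections import deque

class PerformanceMonitor:
    def __init__(self, window_size=500, accuracy_floor=0.90):
        self.window = deque(maxlen=window_size)   # most recent adjudicated cases
        self.accuracy_floor = accuracy_floor      # threshold set in advance

    def record(self, predicted, confirmed):
        """Log one case once the clinical outcome is confirmed."""
        self.window.append(predicted == confirmed)

    def rolling_accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def needs_report(self):
        """True when rolling accuracy has drifted below the floor."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.accuracy_floor

monitor = PerformanceMonitor(window_size=200, accuracy_floor=0.92)
monitor.record(predicted=1, confirmed=1)
monitor.record(predicted=1, confirmed=0)
print(monitor.rolling_accuracy(), monitor.needs_report())
```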

Right now, these details are as dark and featureless as a black hole, and a lot needs to be done to get from these concepts to real regulation.  As product liability litigators, concerns about the proposed framework's effects on preemption and the duty to warn/learned intermediary doctrine are on our radar screens.  For example, will regulation of SaMD self-modification be "rigorous" enough to support preemption?  Will warning causation come to depend on whether the information in question would have altered the output of the AI/ML-based SaMD, rather than the decision of the learned intermediary physician?

In addition, adaptive AI/ML modifications may be occurring at such a high rate as to render this framework untenable, especially as to updating the public.  We anticipate a robust and lengthy discussion before FDA issues any draft guidance, and we will be actively watching as these outlines become clearer.

This article is presented for informational purposes only and is not intended to constitute legal advice.