As private enterprise and government alike continue to explore the promise and challenges of the artificial intelligence (AI)-driven products proliferating in the marketplace, regulation has, not surprisingly, emerged as a top concern.

What's becoming increasingly clear is that such products will require not only new regulations but a fundamentally different approach to regulatory approval: one that complements standard pre-market approval with a comprehensive total product lifecycle monitoring process.

The U.S. Food and Drug Administration (FDA) recently issued a press release and discussion paper proposing just such a framework for AI-based medical devices. In particular, the agency wants to explore continuously learning or adaptive algorithms, rather than algorithms that are locked at the time of approval.

The FDA wants to consider permitting what is called "software as a medical device" (SaMD) that incorporates AI or machine learning to continue developing its capabilities after initial regulatory approval. Approval would not end the regulatory process: the agency would monitor the software's real-world performance to ensure that it continues to perform in accordance with a predetermined change control plan, thereby maintaining safety.
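To make the idea of a predetermined change control plan concrete, the following is a minimal sketch, in Python, of how a pre-agreed performance envelope might be checked in software. All names, metrics and thresholds here are hypothetical illustrations for discussion; they are not requirements drawn from the FDA paper.

```python
# Minimal sketch of monitoring against a "predetermined change control plan".
# Every field name and threshold below is a hypothetical illustration,
# not an FDA-specified requirement.

from dataclasses import dataclass


@dataclass
class ChangeControlPlan:
    """Pre-specified performance envelope an updated algorithm must stay within."""
    min_sensitivity: float  # e.g. agreed with the regulator at pre-market review
    min_specificity: float
    max_drift: float        # allowed shift in the input-data distribution


def within_plan(plan: ChangeControlPlan,
                sensitivity: float,
                specificity: float,
                drift: float) -> bool:
    """Return True if post-market metrics stay inside the pre-approved envelope."""
    return (sensitivity >= plan.min_sensitivity
            and specificity >= plan.min_specificity
            and drift <= plan.max_drift)


# An updated algorithm remains acceptable only while its real-world metrics
# stay inside the envelope; falling outside would trigger a fresh review.
plan = ChangeControlPlan(min_sensitivity=0.92, min_specificity=0.90, max_drift=0.05)
print(within_plan(plan, sensitivity=0.94, specificity=0.91, drift=0.02))  # True
print(within_plan(plan, sensitivity=0.89, specificity=0.91, drift=0.02))  # False
```

The point of the sketch is that the acceptance criteria are fixed in advance of any update, so the regulator reviews the envelope once rather than re-approving every retrained model.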

Of particular interest is the 20-page discussion paper that accompanies the press release. Recognizing that an evolving piece of software cannot be authorized just once in advance of deployment, the paper is predicated on the idea that pre-market approval alone no longer works and that a "total product lifecycle" regulatory approach is needed to allow for improvements while protecting safety. New statutory authority may be needed to implement it. The paper categorizes the types of modifications that could be expected from AI or machine learning over the lifecycle of a product.

Such a regulatory approach would be based on:

  1. clear expectations on quality systems and good machine learning practices;
  2. pre-market review to demonstrate safety and effectiveness and to establish clear expectations for managing patient risk throughout the product lifecycle;
  3. manufacturer monitoring and risk management; and
  4. transparency to users and to the FDA, supported by post-market real-world performance reporting, to assure continued safety and effectiveness.

Detailed discussion of these principles is included, along with flow charts illustrating how the evaluation and continuous monitoring could work. Appendices to the paper include hypothetical examples of products, recommendations for how they could be evaluated, and proposed content for an algorithm change protocol. That proposed content could include a data management plan, protocols for retraining and optimizing the algorithm, performance evaluation protocols, and procedures describing how updated medical device algorithms will be tested, distributed and communicated when released.
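As a rough illustration of how those protocol components fit together, here is a machine-readable outline of the kind of content described above. The structure and every field name are assumptions made for illustration; the paper itself prescribes no particular format.

```python
# Illustrative outline only: one possible machine-readable shape for the
# "algorithm change protocol" content the discussion paper describes.
# All keys, values and procedure names below are hypothetical placeholders.

algorithm_change_protocol = {
    "data_management_plan": {
        "sources": ["hypothetical_clinical_site_a", "hypothetical_clinical_site_b"],
        "labeling": "procedure documented in a placeholder SOP",
    },
    "retraining_protocol": {
        "trigger": "scheduled interval, or monitored drift exceeding the agreed bound",
        "method": "retrain on accumulated real-world data; model architecture frozen",
    },
    "performance_evaluation": {
        "metrics": ["sensitivity", "specificity"],
        "acceptance": "meet or exceed the pre-market baseline on a held-out test set",
    },
    "update_procedures": {
        "testing": "verification and validation before release",
        "distribution": "staged rollout to deployed devices",
        "communication": "release notes to users and a report to the regulator",
    },
}

# Print the top-level sections and their fields.
for section, contents in algorithm_change_protocol.items():
    print(section, "->", list(contents))
```

Expressing the protocol as structured data, rather than free text, is one way a manufacturer could make each pre-committed step auditable against later updates.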

The paper concludes with a series of questions on which the FDA invites submissions.

This proposed regulatory approach and the components of the proposed analysis comprise a well-thought-through and detailed set of steps and considerations. Moreover, such an approach could work in other regulatory fields, not just medical software. Indeed, a total lifecycle approach to continuous regulation of a software product after pre-market approval, based on a principled set of evaluation criteria, might well be applicable across a number of industries. This could be seen as a model approach to AI regulation.

In Canada, the 2019 federal budget made only tepid reference to a regulatory sandbox in which Health Canada could explore regulations for AI medical devices. Unfortunately, nothing as robust and thoughtful as the FDA approach has yet appeared in this Canadian regulatory sector.

