On January 7, 2020, the Director of the US Office of Management and Budget (OMB) issued a Draft Memorandum (the Memorandum) to all federal "implementing agencies" regarding the development of regulatory and non-regulatory approaches to reducing barriers to the development and adoption of artificial intelligence (AI) technologies. Implementing agencies are agencies that conduct foundational research, develop and deploy AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies, as determined by the co-chairs of the National Science and Technology Council (NSTC) Select Committee. To our knowledge, the NSTC has not yet determined which agencies are "implementing agencies" for purposes of the Memorandum.

Submission of Agency Plan to OMB

The "implementing agencies" have 180 days to submit to OMB their plans for addressing the Memorandum.

An agency's plan must: (1) identify any statutory authorities specifically governing the agency's regulation of AI applications, as well as collections of AI-related information from regulated entities; and (2) report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that fall within the agency's regulatory authorities. OMB also requests, but does not require, that agencies list and describe any planned or considered regulatory actions on AI.

Principles for the Stewardship of AI Applications

The Memorandum outlines the following principles and considerations that agencies should address in determining regulatory or non-regulatory approaches to AI:

  1. Public trust in AI. Regulatory and non-regulatory approaches should promote reliable, robust and trustworthy AI applications, which in turn foster public trust.
  2. Public participation. The public should have the opportunity to take part in the rule-making process.
  3. Scientific integrity and information quality. The government should use scientific and technical information and processes when developing a stance on AI.
  4. Risk assessment and management. A risk assessment should be conducted before determining regulatory or non-regulatory approaches.
  5. Benefits and costs. Agencies need to consider the societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. Fairness and nondiscrimination in outcomes need to be considered in both regulatory and non-regulatory approaches.
  8. Disclosure and transparency. Agencies should be transparent. Transparency can serve to improve public trust in AI.
  9. Safety and security. Agencies should ensure that proper controls are in place to guarantee the confidentiality, integrity and availability of data used by AI.
  10. Interagency coordination. Agencies need to work together to ensure consistency and predictability of AI-related policies.

Applying these principles may lead an agency to conclude that a new regulation is not warranted, either because an adequate regulation is already in place or because a more feasible and less burdensome non-regulatory approach is available. Indeed, the Memorandum states that agencies should consider new regulations only if the application of these principles leads to the conclusion that federal regulation is necessary.

Non-Regulatory Approach

An agency may determine either that existing regulations are sufficient for a particular AI solution, or that the benefits of a new regulation do not justify its costs, whether currently or in the foreseeable future. In such cases, the agency may consider taking no action or, instead, identifying non-regulatory approaches that may be appropriate to address the risks posed by certain AI applications. Examples of non-regulatory action include:

  • Using Existing Sector-Specific Policy Guidance or Frameworks: Agencies should consider using any existing statutory authority to issue non-regulatory policy statements, guidance, or testing and deployment frameworks as a means of encouraging AI innovation in that sector.
  • Pilot Programs and Experiments: Agencies should consider using any authority under currently existing law or regulation to grant waivers and exemptions from regulations or to allow pilot programs that provide safe harbors for specific AI applications.
  • Voluntary Consensus Standards: The private sector and other stakeholders may develop voluntary consensus standards that provide non-regulatory means of managing the risks associated with AI applications, and that are potentially more adaptable to the demands of a rapidly evolving technology than regulation. Agencies should give preference to voluntary consensus standards, but may also avail themselves of independent standards-setting organizations, considering the robustness of those organizations' standards when evaluating the need for, or developing, related regulations.

FDA Regulation of AI

The US Food and Drug Administration (FDA), which regulates medical devices, seems a likely agency to be treated as an implementing agency for purposes of the Memorandum. In response to the 21st Century Cures Act, the FDA has already been quite active in providing guidance and articulating its plans concerning FDA regulation of AI technology and solutions. Notably, in April 2019, the FDA issued a white paper, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device," announcing steps to consider a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. FDA defines AI, and specifically ML, as "techniques used to design and train software algorithms to learn from and act on data."

Among other things, FDA proposed an approach that would allow post-market modifications to approved or cleared algorithms to be made without requiring developers to submit a new premarket application, provided the modifications do not present significant new risks or significant changes to core functions. The agency proposes that developers could provide periodic reports and updates using real-world learning and data, allowing ongoing assessments of safety and performance as the AI evolves and adapts in real-world settings. This approach accommodates the unique, iterative nature of AI products while allowing FDA to apply safety and effectiveness standards throughout the product lifecycle. The white paper represents the FDA's preliminary thinking regarding an appropriate framework for regulating AI as a medical device, and its recognition that the current FDA medical device regulatory scheme and enforcement approach will not accommodate the regulatory flexibility and speed needed to support and promote AI discovery and commercialization.

The FDA has not yet indicated how it would approach developing a plan if it is treated as an implementing agency under the OMB Memorandum.

The OMB Memorandum will likely fuel and accelerate focused development of the federal government's approach to promoting and regulating AI technologies. All stakeholders with a meaningful interest in AI technologies, from whatever perspective, should closely monitor and carefully plan for further developments in this area.