In an increasingly interconnected world, the convergence of cutting-edge technologies and emerging security threats poses unique challenges that demand proactive solutions. Chemical, biological, radiological, and nuclear (CBRN) weapons have long been a cause for concern in the realm of national security. However, as technology evolves, so too does the potential for these threats to be amplified and exploited. President Biden's artificial intelligence (AI) executive order (EO) addresses these issues head-on, taking a significant step toward mitigating the risks at the intersection of AI and CBRN threats. The directive not only calls for a comprehensive evaluation of the potential misuse of AI for CBRN development but also champions responsible AI development and use. Furthermore, as the healthcare sector increasingly incorporates AI into its operations, federal agencies are showing a heightened focus on ensuring the responsible integration of AI in this critical field. Organizations involved in delivering healthcare products and services are urged to remain vigilant as these changes unfold.

Chemical, Biological, Radiological, or Nuclear Weapons

Section 4.4 of the EO addresses methods to reduce risks at the "Intersection of AI and CBRN Threats." In furtherance of this goal, the EO directs the Secretary of Homeland Security to evaluate the potential for AI to be misused to enable the development of CBRN threats and to submit a report on these efforts within 180 days of the date of the EO. Similarly, the Secretary of Defense is directed to commission a study assessing potential national security risks related to biosecurity and synthetic biology, including recommendations on how to coordinate data and resources. The EO focuses particularly on the use and potential misuse of synthetic nucleic acids and requires that, within 180 days of the EO, life sciences organizations that receive federal funding for research certify, as a condition of funding, that synthetic nucleic acid procurement is conducted in compliance with a framework established by the Secretary of Commerce. Organizations that use synthetic biology or synthetic nucleic acids in their products should stay aware of these changes and monitor how funding agencies propose to implement these obligations through inclusion in applicable agreements.

Responsible AI Development and Use

Section 5.2(e) of the EO directs the Secretary of Health and Human Services (HHS Secretary) to prioritize grantmaking and other awards to support responsible AI development and use. Specifically, the EO directs the HHS Secretary to: (1) collaborate with appropriate private sector actors through HHS programs that may support the advancement of AI-enabled tools that develop personalized immune-response profiles for patients; (2) prioritize the allocation of 2024 Leading Edge Acceleration Project cooperative agreement awards to initiatives that explore ways to improve healthcare-data quality in support of the responsible development of AI tools for clinical care, real-world evidence programs, population health, public health, and related research; and (3) accelerate grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program and showcase current AIM-AHEAD activities in underserved communities.

To help ensure the safe and responsible use of AI in healthcare, section 8(b) of the EO directs the HHS Secretary to take the following actions:

  • Establish an AI Task Force and Strategic Plan on Responsible Use of AI in the Health Sector: The EO directs the HHS Secretary, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to establish an HHS AI Task Force charged with developing a strategic plan on the responsible deployment and use of AI and AI-enabled technologies in the health and human services sector. The EO specifies that this encompasses areas such as research and discovery, drug and device safety, healthcare delivery and financing, and public health. The plan would include policies and frameworks, as well as regulatory action where appropriate. The EO lists several areas to be addressed in the AI Task Force plan, including the development and use of predictive and generative AI-enabled technologies in healthcare delivery and financing; long-term safety and real-world performance monitoring of AI-enabled technologies used in the health sector; incorporation of equity principles in AI-enabled technologies used in the health sector; and incorporation of privacy and security standards into the software development lifecycle to protect personally identifiable information.
  • Ensure Quality of AI Technologies in the Health Sector: The EO directs the HHS Secretary, in consultation with relevant agencies, to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality. The EO provides that this work shall include development of an AI assurance policy and infrastructure needs for enabling premarket assessment and postmarket oversight of AI-enabled healthcare technology algorithm system performance against real-world data.
  • Advance Compliance With Nondiscrimination Laws: The EO requires the HHS Secretary, in consultation with relevant agencies, to consider appropriate actions to advance prompt understanding of, and compliance with, federal nondiscrimination laws by health and human services providers that receive federal financial assistance, including how those laws relate to AI. Such actions may include providing technical assistance to providers and payers about their obligations under federal nondiscrimination and privacy laws as they relate to AI and the potential consequences of noncompliance, and issuing guidance or taking other action in response to complaints or reports of noncompliance with federal nondiscrimination and privacy laws as they relate to AI.
  • Establish an AI Safety Program: The EO directs the HHS Secretary, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to establish an AI safety program in partnership with voluntary federally listed patient safety organizations. The AI safety program would establish a common framework for approaches to identify and capture clinical errors resulting from AI deployed in healthcare settings, along with specifications for a central tracking repository for associated incidents that cause harm, and would analyze the captured data to generate evidence for recommendations or guidelines aimed at avoiding those harms.
  • Develop a Strategy for Regulation of AI in Drug Development: The EO directs the HHS Secretary to develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes. At a minimum, the strategy would define principles required for appropriate regulation throughout each phase of drug development, identify areas where rulemaking, guidance, or additional statutory authority may be necessary to implement such a regulatory system, and identify existing budgetary and other resources for such a regulatory system.

The administration's focus on AI in the healthcare sector in the EO is also reflected in recent activity from federal agencies. For example:

  • In 2021, the Food and Drug Administration (FDA) issued an action plan titled "Artificial Intelligence and Machine Learning in Software as a Medical Device." This followed the agency's publication in April 2019 of a discussion paper titled "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)." The 2019 discussion paper described the FDA's foundation for a potential approach to premarket review for artificial intelligence and machine learning-driven software modifications. The 2021 action plan incorporates feedback from interested parties on the 2019 discussion paper and confirms the FDA's view that AI/ML technologies have the potential to significantly enhance the quality of healthcare by utilizing vast amounts of data generated through healthcare delivery, reflecting real-world use and experience. FDA stated its optimism that "with appropriately tailored total product lifecycle-based regulatory oversight, AI/ML-based SaMD will deliver safe and effective software functionality that improves the quality of care that patients receive."
  • In August 2022, the Department of Health and Human Services (HHS) issued a proposed rule focused on health equity and designed to reduce disparities in healthcare. The proposed rule, titled "Nondiscrimination in Health Programs and Activities," includes provisions prohibiting the use of clinical algorithms to make decisions that discriminate against any individual on the basis of race, color, national origin, sex, age, or disability. In announcing the proposed rule, HHS made clear that its intent is not to hinder the use of clinical algorithms, which have proven useful in healthcare decision-making, but rather to help ensure their use is free of bias.

Organizations that manufacture medical products or are involved in healthcare services delivery should stay aware of these changes as they are implemented.