Under the non-binding Voluntary AI Commitments on "Ensuring Safe, Secure, and Trustworthy AI," the companies agreed to a set of eight commitments focused on ensuring that AI products are safe before they are introduced to the public, building systems that put security first, and strengthening the public's trust in these products. Specifically, the companies committed to:

  1. Conducting internal and external security testing of their AI systems before release. This testing, carried out in part by independent experts, is intended to guard against significant AI risks, such as those relating to biosecurity and cybersecurity.
  2. Sharing information on managing AI risks across the industry and with governments, civil society, and academia, such as by identifying best practices.
  3. Investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights, which are an essential part of AI systems. The companies agreed that model weights should be released only when intended and after security risks have been evaluated.
  4. Facilitating third-party discovery and reporting of vulnerabilities in the companies' AI systems. This commitment focuses on establishing robust reporting mechanisms so that issues can be identified and corrected promptly.
  5. Developing robust technical mechanisms, such as watermarking, to ensure that users know when content is AI-generated. This is intended to promote public trust in AI by reducing the risks of fraud and deception.
  6. Publicly reporting the capabilities, limitations, and areas of appropriate and inappropriate use of the companies' AI systems. These reports will address both security and societal risks.
  7. Prioritizing research on the societal risks posed by AI systems, including avoiding harmful bias and discrimination and protecting privacy.
  8. Developing and deploying advanced AI systems to help address society's greatest challenges, such as cancer prevention and climate change mitigation.

The announcement described the commitments as "intend[ed]... to remain in effect until regulations covering substantially the same issues come into force." The White House also announced that it is working on an executive order and pursuing bipartisan legislation to further regulate AI. Companies should closely monitor these developments, as the Biden Administration has signaled that AI regulation is a key priority.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.