BACKGROUND

On 26 January 2023, the United States National Institute of Standards and Technology (NIST)1 released the AI Risk Management Framework (AI RMF 1.0).2 The NIST AI RMF 1.0 provides a comprehensive framework for managing risks associated with AI technologies. It emphasises the need to balance harnessing AI's potential benefits against mitigating its inherent risks.

AI RMF 1.0 was created in response to a directive from Congress to develop a framework to manage AI risks and to promote trustworthy and responsible development of AI systems.3 As a framework,4 AI RMF 1.0 could function as a form of lex informatica5 and develop into universally applicable international digital practice.6

KEY REQUIREMENTS

  1. Risk Management: AI RMF 1.0 offers a structured approach to assessing, documenting, and managing AI-related risks and impacts.7
  2. Trustworthiness: The framework identifies key characteristics of trustworthy AI, including validity, reliability, safety, security, accountability, transparency, explainability, privacy, and fairness.8
  3. Core Functions: It delineates four core functions for risk management: Govern, Map, Measure, and Manage, each broken down into specific actions and outcomes (an illustrative sketch follows this list).9
  4. Audience and Engagement: The framework targets diverse AI actors across the AI lifecycle, encouraging inclusive and multidisciplinary perspectives.10
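
By way of illustration only, the following minimal Python sketch shows one way an organisation might structure its internal documentation of a single AI risk item around the four core functions and the trustworthiness characteristics identified in the framework. The data structure, field names, and example values are purely hypothetical assumptions for this sketch and are not prescribed by AI RMF 1.0.

```python
# Illustrative only: a hypothetical internal record structure loosely aligned
# with the AI RMF 1.0 core functions (Govern, Map, Measure, Manage).
# Nothing here is prescribed by NIST; names and values are assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class AIRiskRecord:
    system_name: str                      # the AI system under review
    context: str                          # Map: intended use and deployment context
    trustworthiness_concerns: List[str]   # e.g. "privacy", "fairness", "explainability"
    metrics: Dict[str, float]             # Measure: quantitative indicators tracked
    mitigations: List[str]                # Manage: actions taken to treat the risk
    governance_owner: str = "unassigned"  # Govern: accountable role or committee


# Hypothetical usage: documenting one risk item for a credit-scoring model.
record = AIRiskRecord(
    system_name="credit-scoring-model",
    context="consumer lending decisions in one regional market",
    trustworthiness_concerns=["fairness", "explainability"],
    metrics={"demographic_parity_gap": 0.04},
    mitigations=["periodic bias audit", "human review of adverse decisions"],
    governance_owner="AI risk committee",
)
print(record)
```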

IMPLICATION

Organisations employing AI technologies must adopt a holistic risk management approach that addresses technical, societal, legal, and ethical considerations. Implementing AI RMF 1.0 will require significant organisational commitment, including adjustments to governance, transparency, and accountability practices.

CONSIDER

  1. AI technologies' dynamic and evolving nature necessitates a flexible and continuous risk management approach.
  2. Organisations should integrate the AI RMF into their broader enterprise risk management strategies, aligning AI risk management with organisational principles and policies.
  3. Collaboration and engagement with various stakeholders, including policymakers, AI developers, and civil society, are essential for effective implementation.

CONCLUSION

The NIST AI RMF 1.0 represents a critical step towards responsible AI usage, offering a structured approach to managing AI risks and enhancing trustworthiness. Its comprehensive and flexible nature allows organisations across various sectors to adapt and implement its guidelines, fostering a safer and more ethical AI ecosystem.

* Setyawati Fitrianggraeni holds the position of Managing Partner at Anggraeni and Partners in Indonesia. She also serves as an Assistant Professor at the Faculty of Law, University of Indonesia, and is currently pursuing a PhD at the World Maritime University in Malmö, Sweden. This article is co-authored by Sri Purnama, Junior Legal Researcher, and Jericho Xafier Ralf, Trainee Associate Analyst, at Anggraeni and Partners.

Footnotes

1. The National Institute of Standards and Technology (NIST) is one of the oldest physical science laboratories in the United States. It was founded in 1901 and is now part of the U.S. Department of Commerce. Congress established NIST to address a major challenge to U.S. industrial competitiveness at the time. Today, NIST's mission is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve quality of life.

2. The reference in this document pertains to the version dated 6 December 2023, as obtained from the National Institute of Standards and Technology (NIST) website, accessible at Artificial Intelligence Risk Management Framework (AI RMF 1.0) (nist.gov), AI Risk Management Framework | NIST, and About NIST | NIST. Please note that subsequent amendments or updates to the reference may have occurred after this date. Readers are advised to consult the latest version of the document for the most current information.

3. Simon Hodgett, Sam IP, and Sam Dobbin, "National Institute of Standards and Technology launches version 1.0 of AI Risk Management Framework", Lexology, https://www.lexology.com/library/detail.aspx?g=46040915-562d-447e-b840-89d9a412596f, accessed on 9 December 2023.

4. In general, a framework is a real or conceptual structure intended to serve as a support or guide for the building of something that expands the structure into something useful. See, Definition of Framework, https://www.techtarget.com/whatis/definition/framework#:~:text=In%20general%2C%20a%20framework%20is,the%20structure%20into%20something%20useful, accessed on 9 December 2023.

5. In cyber law, lex informatica refers to the rules for information imposed by technology and communication networks. Christos Dimitriou, "Lex Informatica and Legal Regime: Their Relationship", 2015, p. 3, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2626941, accessed on 9 December 2023.

6. Therefore, if AI RMF 1.0 as a guideline gives rise to practices that are carried out repeatedly, it could become binding as customary international law. Customary international law consists, in short, of international obligations arising from recognised state practice followed out of a sense of legal obligation. See, Thomas Larsson, "Customary International Law", Master's Thesis in Public International Law, Uppsala Universitet: 2014, p. 9.

7. National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)" (nist.gov), pp. 4-9, accessed on 6 December 2023. This part, on Framing Risk, discusses understanding and addressing risks, impacts, and harms, as well as challenges for AI risk management (risk measurement, risk tolerance, risk prioritisation, and organisational integration and management of risk).

8. Ibid., pp. 12-18.

9. Ibid., p. 20. Governance is designed to be a cross-cutting function that informs, and is infused throughout, the other three functions.

10. Ibid., pp. 9-11.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.