
NIST Rolls Out AI Risk Management Framework
More than 240 organizations submitted comments over 18 months to get the framework to where it is today; reactions range from praise to skepticism
By John P. Desmond, Editor, AI in Business

The AI risk management framework recently published by NIST is being met with cautious acceptance and some degree of skepticism by the community engaged with the use of AI in business and government.
Announced in a press release from the National Institute of Standards and Technology in January, the AI Risk Management Framework (AI RMF 1.0) provides guidance for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the “many risks” of AI technologies.
Congress directed NIST to develop the RMF, an effort that started 18 months ago and reflects some 400 sets of formal comments NIST received from over 240 organizations on the draft framework. NIST released statements from some organizations that have committed to promote or use the framework. The agency also released a companion voluntary AI RMF Playbook, which suggests ways to navigate and use the framework.
Here are some excerpts from selected comments:
“We commend NIST’s efforts to align the AI Risk Management Framework with its Cybersecurity and Privacy Frameworks to further enable organizations to build upon existing frameworks.”
Natasha Crampton, Chief Responsible AI Officer – Office for Responsible AI, Microsoft
"The National Fair Housing Alliance (NFHA) … is pleased to see the framework addresses discrimination risks and stresses the importance of ensuring systems are fair and managed for bias. …. Technology is the new civil rights frontier.“
Lisa Rice, President & CEO, NFHA
“The AI RMF has the potential to help organizations of all sizes to work towards trustworthy AI systems that benefit all.”
David Danks, Professor, Halıcıoğlu Data Science Institute (HDSI) and Department of Philosophy, UC San Diego
“The Framework creates a common approach for how companies can map, measure, manage, and govern AI risk. In doing so, it promises to accelerate the adoption of trustworthy AI, build bridges with partners around the world, especially in Europe.”
Sayan Chakraborty, Executive Vice President, Product & Technology, Workday
Some are skeptical. While allowing that the framework is a good starting point, Bradley Merrill Thompson, an attorney focused on AI regulation at law firm Epstein Becker & Green, stated in an email to VentureBeat that “in practical terms, it doesn’t mean very much.”
“It is so high-level and generic that it really only serves as a starting point for even thinking about a risk management framework to be applied to a specific product,” he stated. “This is the problem with trying to quasi-regulate all of AI. The applications are so vastly different, with vastly different risks.”
Framework Presented in Two Parts
The NIST AI RMF is divided into two parts. The first discusses how organizations can frame the risks related to AI and outlines the characteristics of trustworthy AI systems. The second part, the core of the framework, describes four specific functions — govern, map, measure and manage — to help organizations address the risks of AI systems in practice.
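To make the core concrete, one might imagine it as a checklist of activities grouped under the four functions. The sketch below is a minimal, hypothetical model: the function names come from the framework, but the activity wording and the data structure are illustrative assumptions, not NIST guidance.

```python
# Hypothetical sketch: tracking AI risk activities under the four AI RMF
# core functions (govern, map, measure, manage). Only the function names
# come from the framework; the activities and structure are assumptions.

from dataclasses import dataclass, field

@dataclass
class Activity:
    description: str
    completed: bool = False

@dataclass
class RmfChecklist:
    functions: dict[str, list[Activity]] = field(default_factory=dict)

    def open_items(self) -> dict[str, list[str]]:
        """Group the not-yet-completed activities by core function."""
        return {
            name: [a.description for a in acts if not a.completed]
            for name, acts in self.functions.items()
        }

checklist = RmfChecklist(functions={
    "govern": [Activity("assign accountability for AI risk")],
    "map": [Activity("document the system's context and intended use")],
    "measure": [Activity("test the model for bias and drift", completed=True)],
    "manage": [Activity("define a response plan for identified risks")],
})

print(checklist.open_items())  # every activity still open except the measure item
```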
In a statement to VentureBeat, Courtney Lang, senior director of policy, trust, data and technology at the Information Technology Industry Council, said that the AI RMF offers a “holistic way to think about and approach AI risk management, and the associated Playbook consolidates in one place informative references, which will help users operationalize key trustworthiness tenets.”
The consensus seemed to be that having guidance from NIST on how to manage the risks of AI systems is better than having none.
Andrew Burt, managing partner at law firm BNH.AI, stated to VentureBeat, “When it comes to AI risk management, practitioners feel, all too often, like they are operating in the Wild West. NIST is uniquely positioned to fill that void, and the AI Risk Management Framework consists of clear, effective guidance on how organizations can flexibly but effectively manage AI risks. I expect the RMF to set the standard for how organizations manage AI risks going forward, not just in the US, but globally as well.”
The framework is part of NIST’s larger effort to cultivate trust in AI technologies — necessary if the technology is to be accepted widely by society, according to Under Secretary for Standards and Technology and NIST Director Laurie E. Locascio.
“The AI Risk Management Framework can help companies and other organizations in any sector and any size to jump-start or enhance their AI risk management approaches,” Locascio stated in the NIST press release. “It offers a new way to integrate responsible practices and actionable guidance to operationalize trustworthy and responsible AI. We expect the AI RMF to help drive development of best practices and standards.”
Speaking at a discussion of the AI RMF hosted by George Washington University recently and reported on by Nextgov, one NIST researcher said upcoming guidance is likely to offer more tailored recommendations on AI and machine learning algorithms for different industries.
Elham Tabassi, the chief of staff in the Information Technology Laboratory at NIST, stated that while the RMF was designed to be broadly applicable to a variety of use cases as a general, high-level framework, she believes AI risk management techniques should vary between different sectors and implementation is best conducted with the help of groups with domain expertise.
“AI is all about context and use case, and how either of these trustworthy characteristics…will manifest themselves in different use cases in [the] financial sector, versus hiring, versus face recognition…and they may have different priorities,” she stated in the Nextgov account. “There is a need for tailored guidance to that specific technology or use case.”
A number of law firms weighed in on the AI RMF, including Epstein Becker & Green in its Workforce Bulletin.
The firm noted that part one of the AI RMF outlines seven characteristics of trustworthy AI systems. They are:
Valid and reliable – AI systems can be assessed by ongoing testing or monitoring to confirm that the system is performing as intended;
Safe – AI systems should not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered;
Secure and resilient – AI systems and their ecosystems are resilient when they are able to withstand unexpected adverse events or changes in their environment;
Accountable and transparent – Information about an AI system and its outputs increases confidence in the system and enables organizational practices and governing structures for harm reduction;
Explainable and interpretable – The representation of the mechanism underlying an AI system’s operation (explainability), and the meaning of an AI system’s output (interpretability), can assist those operating and overseeing AI systems;
Privacy-enhanced – Anonymity, confidentiality, and control generally should guide choices for AI system design, development and deployment;
Fair with harmful bias managed – NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive bias.
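For teams that want to turn this list into something checkable, a minimal self-assessment sketch follows. The characteristic names mirror the list above; the 1-to-5 scoring scheme and the threshold are invented assumptions, not anything NIST prescribes.

```python
# Hypothetical sketch: self-assessing a system against the seven
# trustworthiness characteristics listed above. The names mirror the
# framework; the 1-5 scoring and the threshold of 3 are assumptions.

TRUSTWORTHY_CHARACTERISTICS = (
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
)

def flag_weak_areas(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the characteristics scored below the threshold."""
    missing = set(TRUSTWORTHY_CHARACTERISTICS) - set(scores)
    if missing:
        raise ValueError(f"unscored characteristics: {sorted(missing)}")
    return [c for c in TRUSTWORTHY_CHARACTERISTICS if scores[c] < threshold]

# Example: a system rated strong everywhere except privacy.
scores = {c: 4 for c in TRUSTWORTHY_CHARACTERISTICS}
scores["privacy_enhanced"] = 2
print(flag_weak_areas(scores))  # ['privacy_enhanced']
```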
Part two of the AI RMF includes suggestions for organizations to prepare AI RMF Profiles, which can take the form of use case profiles, temporal profiles and cross-sectoral profiles.
“Although the AI RMF does not include model templates, organizations should consider preparing AI RMF Profiles to streamline the process of operationalizing and documenting compliance with AI RMF guidance,” state the authors, led by Adam S. Forman, leader of the firm’s AI practice group.
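Since, as the authors note, the AI RMF supplies no model templates, the shape of a profile is left to implementers. As a purely illustrative sketch, a use case profile might be captured as structured data along the following lines; every field name here is an assumption, not an NIST template.

```python
# Hypothetical sketch of an AI RMF use-case profile as structured data.
# NIST provides no model template; every field name below is an
# illustrative assumption for a hiring use case.

hiring_profile = {
    "profile_type": "use-case",  # could also be temporal or cross-sectoral
    "system": "resume-screening model",
    "context": "employment decisions",
    "priority_characteristics": [
        "fair_with_harmful_bias_managed",
        "accountable_and_transparent",
    ],
    "functions": {
        "govern": "HR and legal jointly own model sign-off",
        "map": "document intended use and affected groups before deployment",
        "measure": "quarterly adverse-impact testing",
        "manage": "human review of all automated rejections",
    },
}

print(hiring_profile["priority_characteristics"])
```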
Read the source articles and information in a press release from the National Institute of Standards and Technology, statements NIST released from organizations committed to the work, the AI RMF Playbook from NIST, in VentureBeat, from Nextgov and from Workforce Bulletin.
(Write to the editor here.)