Navigating the NIST AI Risk Management Framework

The Ultimate Guide to AI Risk

Introduction

In the ever-evolving landscape of cybersecurity, rapid advancements in artificial intelligence (AI) have brought both tremendous opportunities and significant risks. To manage these risks effectively, cybersecurity professionals and IT managers are evaluating existing frameworks and adapting their processes to this new landscape. The National Institute of Standards and Technology (NIST) has released the NIST AI Risk Management Framework (AI RMF), which aims to help organizations manage AI risk.

In this guide, we will provide a comprehensive overview of AI risk, the challenges you might run into when managing it, and the NIST AI RMF and its importance in safeguarding AI systems.


Why does AI risk matter?

AI is found everywhere we look: in apps, software programs, chatbots, the Internet of Things, and so much more, with widely varying inputs and outputs. ChatGPT, a generative AI chatbot from OpenAI, has seen wide adoption across industries, for example. Even security and compliance tools now feature built-in AI.

But along with these AI systems come some perplexing problems. From ethics and safety to simply whether an AI system is secure and private, AI risks are new, evolving territory.

As organizations increasingly adopt AI technologies, it is crucial to address the potential risks associated with these systems. But, as the National Institute of Standards and Technology outlines in the NIST AI risk management framework, risk is not just limited to the users of AI. This framework establishes that the “design, development, use, and evaluation of AI products, services, and systems” are all affected by AI risk.

AI risk can be broadly classified into three categories: harm to people, harm to an organization, and harm to an ecosystem. NIST breaks each of these categories into three sub-types.

AI Risk Categories

Harm to People

Individual: Individual harm is defined by NIST as “harm to a person’s civil liberties, rights, physical or psychological safety, or economic opportunity.” An example would be an AI system stealing a person’s content and claiming it as its own.

Group/Community: NIST defines group/community harm as “harm to a group, such as discrimination against a population sub-group.” An example would be an algorithm that disqualifies candidates based on criteria such as name, race, or ethnicity.

Societal: NIST defines societal harm as “harm to democratic participation or educational access.” An example would be the interruption of a democratic election due to AI risk.

Harm to an Organization

According to NIST, harm to an organization consists of three things:

01

Harm to an organization’s business operations

One example of this would be a disruption to a manufacturing location.

02

Harm to an organization from security breaches or monetary loss

One example of this would be sustaining a $1M loss due to a data breach.

03

Harm to an organization’s reputation

One example of this would be a business experiencing a damaged reputation due to a breach or other system failure.

Harm to an Ecosystem

Likewise, harm to an ecosystem consists of the following:

01

Harm to interconnected and interdependent elements and resources

02

Harm to the global financial system, supply chain, or interrelated systems

03

Harm to natural resources, the environment, and the planet

Regulatory Changes

On October 30, 2023, President Biden issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The executive order outlines various focus areas, including new standards for AI safety and security; protecting Americans’ privacy; advancing equity and civil rights; standing up for consumers, patients, and students; supporting workers; promoting innovation and competition; advancing American leadership abroad; and ensuring responsible and effective government use of AI.

This executive order demonstrates the American government’s commitment to governing the use of AI in a way that promotes safe and secure usage, as well as the ethical development of the AI systems themselves.

New Standards for AI Safety and Security

To establish new standards for AI safety and security, the executive order will:

  1. Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
  2. Develop standards, tools, and tests to ensure that AI systems are safe, secure, and trustworthy.
  3. Protect against the risks of using AI to engineer dangerous biological materials.
  4. Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
  5. Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
  6. Order the development of a National Security Memorandum that directs further actions on AI and security.

The second item is where the NIST AI Risk Management Framework comes into play, with the executive order calling out NIST by name. Within 270 days of the executive order’s publication, NIST will be responsible for “developing a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI.”

The Challenges of AI Risk Management

With the landscape of AI risk changing nearly every day, managing it presents several challenges. In the official documentation accompanying the NIST AI risk management framework, NIST highlights four of them: risk measurement, risk tolerance, risk prioritization, and risk integration and management.

Risk measurement

One of the greatest challenges when it comes to measuring AI risk is its volatility. As AI risk is not necessarily well-defined or adequately understood to start with, it can be hard to measure either quantitatively or qualitatively. As NIST outlines in the framework documentation, the main risk measurement challenges are as follows:

Risks related to third-party software, hardware, and data

Unfortunately, in the current state of AI systems, there may be misalignment between the risk metrics and methodologies used by developers and those used by the organization “deploying or operating the system.” There may also be a lack of transparency on the part of the developers, who may or may not disclose their risk metrics or methodologies. The fact that these are third-party vendors adds yet another layer of complexity. How organizations use third-party data with AI systems, products, or services is also complicated — especially without “sufficient internal governance” and “technical safeguards.”

Tracking emergent risks

By tracking emergent risks and creating methods to measure them, you will enhance your overall risk management program. Conducting an AI system impact assessment can help you and other AI actors “understand potential impacts or harms within specific contexts.”

Availability of reliable metrics

One of the main challenges in artificial intelligence risk management is the lack of reliable metrics. There is no consensus on how to measure the risk and trustworthiness of AI use and systems. Developing your own measures is more of a stopgap until institutional efforts produce such metrics. It’s vital to understand that context matters and that the harms of AI risk may impact various groups differently. NIST also warns organizations to consider the communities or sub-groups that can be harmed without being “direct users of a system.”

Risk at different stages of the AI lifecycle

Measuring risk in the early stages of the AI lifecycle will most likely differ from measuring risks at a later stage. These complications may increase as AI systems continue to adapt and evolve beyond their current state. Plus, different stakeholders will have varying perspectives on what constitutes a risk, depending on how they are using AI or are involved in the AI lifecycle.

Risk in real-world settings

Measuring risks in controlled environments will offer insights pre-deployment, but these measures will likely differ when AI systems are deployed and used within real-world settings.

Inscrutability

Many AI systems operate as a “black box,” and this inscrutability further complicates risk measurement. That is why transparency and documentation are vital for trustworthy AI systems.

Human baseline

Humans and AI systems carry out tasks and make decisions in different ways, so a human baseline metric is needed for comparison when AI systems augment or replace human activity. Those same differences make establishing such a baseline difficult.

Risk tolerance

Per NIST, “the AI RMF can be used to prioritize risk” but “it does not prescribe risk tolerance.” Risk tolerance in this context refers to the “organization’s or AI actor’s readiness to bear the risk in order to achieve its objectives.”

Risk tolerance is highly contextual and influenced by a variety of factors, including but not limited to legal and regulatory requirements and “policies and norms established by AI system owners, organizations, industries, communities, or policy makers.” Risk tolerance will also change over time as these entities continue to evolve. Identifying your AI risk tolerance is a nebulous task, but methods and technologies that help define risk tolerance continue to emerge from businesses, governments, academia, and civil society.


“The Framework is intended to be flexible and to augment existing risk practices, which should align with applicable laws, regulations, and norms. Organizations should follow existing regulations and guidelines for risk criteria, tolerance, and response established by organizational, domain, discipline, sector, or professional requirements.”

– NIST AI Risk Management Framework (AI RMF 1.0)

Risk prioritization

Developing a culture of risk management aids the organization and helps stakeholders and employees understand that AI risks vary widely. Some AI systems may pose critical risks and may need to cease operations for those risks to be mitigated, whereas lower-priority risks may be deemed tolerable by the organization.

For example, AI systems that are trained with sensitive or protected data – such as the use of PII, or personally identifiable information – may call for higher prioritization. On the other hand, those trained with non-sensitive data sets may call for lower prioritization.

Another variable is whether the AI system was designed to interact directly with humans or with other systems. AI systems that interact with humans may pose greater risks and warrant higher prioritization. That said, non-human-facing AI systems may still pose a threat, such as “downstream safety or social implications.”

When developers define the residual risk of an AI system, they can more “fully consider” the risks – and thus better “inform end users” about the potential risks of using the system.
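To make the prioritization idea concrete, here is a minimal sketch of one way such a heuristic could be expressed in code. It is purely illustrative: the AIRiskItem fields, scoring weights, and priority tiers are assumptions for this example and are not prescribed by the NIST AI RMF.

```python
from dataclasses import dataclass

@dataclass
class AIRiskItem:
    """Hypothetical record describing one AI system under review."""
    name: str
    uses_sensitive_data: bool   # e.g., trained on PII or other protected data
    human_facing: bool          # interacts directly with people
    downstream_impact: bool     # could carry downstream safety or social implications

def prioritize(item: AIRiskItem) -> str:
    """Assign a rough priority tier using the heuristics described above.

    The weights and cut-offs are illustrative; an organization would
    calibrate them against its own risk tolerance (see the Govern Function).
    """
    score = 0
    score += 3 if item.uses_sensitive_data else 0
    score += 2 if item.human_facing else 0
    score += 1 if item.downstream_impact else 0
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# Example: a resume-screening model trained on applicant PII
screener = AIRiskItem("resume-screener", uses_sensitive_data=True,
                      human_facing=True, downstream_impact=True)
print(prioritize(screener))  # -> "high"
```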

Risk integration and management

NIST outlines that “AI risks should not be considered in isolation,” given the different actors and their roles within the AI lifecycle. One example NIST gives is an organization that develops an AI system without information on how the system will ultimately be used.

AI risk management should be an integral part of your overall risk management strategies. By treating AI risks alongside your cybersecurity and privacy risks, you will accomplish a more integrated risk management program that more fully encompasses all risks facing your organization.

By doing so, you may recognize that some AI risks are inherently linked with other types of risks. NIST provides the following examples:

  • Privacy concerns related to the use of underlying data to train AI systems
  • The energy and environmental implications associated with resource-heavy computing demands
  • Security concerns related to the confidentiality, integrity, and availability of the system and its training and output data
  • General security of the underlying software and hardware for AI systems
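To illustrate what treating AI risk alongside other risks can look like in practice, the sketch below models a single risk-register entry that links an AI risk to related privacy, security, and regulatory domains. The RegisterEntry structure, field names, and categories are hypothetical choices made for this example; the AI RMF does not prescribe a register format.

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """One entry in a hypothetical enterprise risk register."""
    risk_id: str
    description: str
    ai_lifecycle_stage: str                       # e.g., "data collection", "deployment"
    linked_domains: list[str] = field(default_factory=list)
    owner: str = "unassigned"

# An AI risk recorded alongside, not apart from, other enterprise risks
entry = RegisterEntry(
    risk_id="AI-0042",
    description="Training data includes customer records without documented consent",
    ai_lifecycle_stage="data collection",
    linked_domains=["privacy", "security (confidentiality)", "regulatory"],
    owner="data-governance-team",
)

# Keeping AI risks in the same register lets existing cybersecurity and
# privacy workflows (review, escalation, reporting) cover them as well.
for domain in entry.linked_domains:
    print(f"{entry.risk_id} also feeds the {domain} risk program")
```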

Audience

When it comes to the usage of AI systems, there is a broad cast of actors associated with each system. From its developers to its end-users – or even those whose data is being used — there are multiple perspectives to take into account.

The primary AI RMF audience consists of the actors who “perform or manage the design, development, deployment, evaluation, and use“ of AI systems. These activities are represented by the Application Context, Data and Input, AI Model, and Task and Output dimensions of the AI lifecycle. In the figure below, the People and Planet circle represents “human rights” and “the broader well-being of society and the planet.” This center circle stands for the separate AI RMF audience that informs the primary audience.

Developed by the OECD, this framework helps classify the AI lifecycle into five key “socio-technical” dimensions. This figure has been modified by NIST for the purposes of the NIST AI RMF.

NIST defines these People and Planet actors as including “trade associations, standards developing organizations, researchers, advocacy groups, environmental groups, civil society organizations, end users, and potentially impacted individuals and communities.” Each of these entities forms the audience that informs the primary audience.

These actors are also responsible for:

  • Assisting in providing context and understanding potential and actual impacts
  • Being a source of formal or quasi-formal norms and guidance for AI risk management
  • Designating boundaries for AI operation
  • Promoting discussion of the tradeoffs needed to balance societal values and priorities related to civil liberties and rights, equity, the environment and the planet, and the economy

AI Risks and Trustworthiness

The trustworthiness of AI systems matters because trust is the foundation of any relationship between a system and its users. Developers of these AI systems want to provide end users with a smooth, repeatable experience that enhances day-to-day workflows and encourages further adoption. But because AI system trustworthiness cannot be guaranteed, it is important to apply human judgment when making decisions and when setting any thresholds or values for AI trustworthiness.

Tradeoffs (as in other aspects of risk management) may be part of your AI risk management strategy, and they may vary greatly depending on the type of AI system and the data it interacts with. To simplify this process, NIST has defined the characteristics of trustworthy AI as the following:

NIST trustworthy AI characteristics
Valid and reliable

To understand these characteristics, NIST has provided definitions drawn from other international standards.

To define “valid,” NIST relies on ISO 9000:2015 for an explanation:

“Validation can be defined as confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.” — ISO 9000:2015

To define “reliable,” NIST uses the ISO/IEC TS 5723:2022 definition:

“The ability of an item to perform as required, without failure, for a given time interval, under given conditions.” — ISO/IEC TS 5723:2022

What does that mean in relation to AI systems? If a system does not meet the expectations or standards set by its developers and users, then it is not valid. As NIST outlines, “deployment of AI systems which are inaccurate, unreliable, or poorly generalized to data and settings beyond their training creates and increases negative AI risks and reduces trustworthiness.”

When it comes to reliability, NIST says it is a “goal for overall correctness” when an AI system is operated under expected conditions and “over a given period of time.” An AI system must perform correctly and consistently to be considered reliable. Accuracy, robustness, and generalizability also belong to the mix of variables responsible for validity and reliability.

Safe

AI systems should be safe to use. Per ISO/IEC TS 5723:2022, these systems should not “lead to a state in which human life, health, property, or the environment is endangered.”

The safety of an AI system can be improved through:

  • Responsible design, development, and deployment practices
  • Clear information to deployers on responsible use of the system
  • Responsible decision-making by deployers and end-users
  • Explanations and documentation of risks based on empirical evidence of incidents

Safety overall should align with that of other institutions — such as transportation and healthcare — and other existing “sector- or application-specific guidelines or standards.”

Secure and resilient

Resilience, as defined by NIST, means that an AI system “can withstand unexpected adverse events or unexpected changes in their environment or use.” Secure AI systems “can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use.”

Accountable and transparent

Transparency “reflects the extent to which information about an AI system and its outputs is available to the individuals interacting with such a system – regardless of whether they are even aware that they are doing so,” per NIST.

Explainable and interpretable

Explainability “refers to a representation of the mechanisms underlying AI systems’ operation.”

Interpretability “refers to the meaning of AI systems’ output in the context of their designed functional purposes.”

Together, these two traits help developers and users better understand the “functionality” and “trustworthiness” of an AI system. Without explainability and interpretability, AI risks may increase, and systems become harder to accurately document, debug, and monitor.

Privacy-enhanced

Privacy “refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals’ agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation).”

As AI systems are designed, developed, and deployed, privacy values – including anonymity, confidentiality, and control – should help guide decisions.

Fair – with harmful bias managed

Fairness “in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination.”

Mitigating “harmful biases” does not guarantee that a system is “necessarily fair.” NIST identifies three different kinds of AI bias for consideration and management: systemic, computational and statistical, and human-cognitive.

NIST selected these because each “can occur in the absence of prejudice, partiality, or discriminatory intent.”

The NIST AI Risk Management Framework

On January 26, 2023, the National Institute of Standards and Technology released the AI Risk Management Framework (AI RMF 1.0) alongside companion documents: the NIST AI RMF Playbook, AI RMF Explainer Video, an AI RMF Roadmap, AI RMF Crosswalk, and various Perspectives.

The framework was developed in “collaboration with the private and public sectors” in order to “better manage risks to individuals, organizations, and society associated with artificial intelligence (AI).” To develop the framework, NIST engaged in a highly collaborative process including a “request for information, several draft versions for public comments, multiple workshops, and other opportunities to provide input.” Further, in March 2023, NIST released the Trustworthy and Responsible AI Resource Center, to support “the implementation of, and international alignment with, the AI RMF.”

Effectiveness of the AI RMF

The framework’s effectiveness is continually assessed by the organizations using it, in collaboration with NIST. These evaluations will remain part of future NIST activities, performed in conjunction with members of the AI community.

According to NIST, framework users are expected to benefit from:

  • Enhanced processes for governing, mapping, measuring, and managing AI risk, and clearly documenting outcomes
  • Improved awareness of the relationships and tradeoffs among trustworthiness characteristics, socio-technical approaches, and AI risks
  • Explicit processes for making go/no-go system commissioning and deployment decisions 
  • Established policies, processes, practices, and procedures for improving organizational accountability efforts related to AI system risks
  • Enhanced organizational culture which prioritizes the identification and management of AI system risks and potential impacts to individuals, communities, organizations, and society
  • Better information sharing within and across organizations about risks, decision-making processes, responsibilities, common pitfalls, TEVV practices, and approaches for continuous improvement
  • Greater contextual knowledge for increased awareness of downstream risks
  • Strengthened engagement with interested parties and relevant AI actors
  • Augmented capacity for TEVV of AI systems and associated risks

AI RMF Core

The premise of the framework’s core is to provide actionable steps that “enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems.”

The Core is composed of four different Functions: Govern, Map, Measure, and Manage. Each of these functions is broken into categories and subcategories, which are then subdivided into actions and outcomes.
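One way to picture that hierarchy is as nested data. The sketch below is an assumed, minimal representation of a Function broken into categories and subcategories with their outcome statements; the identifiers and wording shown are paraphrased for illustration, and the authoritative text lives in the AI RMF itself.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    identifier: str   # e.g., "GOVERN 1.1" in the AI RMF numbering
    outcome: str      # the outcome statement (paraphrased here)

@dataclass
class Category:
    name: str
    subcategories: list[Subcategory] = field(default_factory=list)

@dataclass
class Function:
    name: str         # Govern, Map, Measure, or Manage
    categories: list[Category] = field(default_factory=list)

# Illustrative fragment only; not the official category text
govern = Function(
    name="Govern",
    categories=[
        Category(
            name="Policies, processes, procedures, and practices",
            subcategories=[
                Subcategory("GOVERN 1.1",
                            "Legal and regulatory requirements involving AI are "
                            "understood, managed, and documented"),
            ],
        ),
    ],
)

print(f"{govern.name}: {len(govern.categories)} category shown (of many)")
```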

NIST warns readers that these actions are not a checklist, nor an ordered set of steps, which is important to understand going into each of the Functions. The most vital point of the AI RMF is that risk management should be a continuous process performed throughout the AI lifecycle, and that the framework should be used in a way that brings in multiple perspectives, with NIST suggesting that external AI actors bring their views into the organization.

A culture of openness and idea exchange is another NIST suggestion, with conversations centering on the “purposes and functions of the technology being designed, developed, deployed, or evaluated.” As you begin to explore the NIST AI risk management framework, the NIST AI RMF Playbook is a good place to start. Its contents are meant as suggestions and, like the AI RMF itself, are completely voluntary. The Playbook is also part of the NIST Trustworthy and Responsible AI Resource Center. No matter which function you start with, it’s important to understand that the process is iterative and ongoing, much like the NIST RMF.

Govern

“A culture of risk management is cultivated and present.”

– NIST AI Risk Management Framework (AI RMF 1.0)

According to NIST, the Govern Function:

  • Cultivates and implements a culture of risk management within organizations designing, developing, deploying, evaluating, or acquiring AI systems
  • Outlines processes, documents, and organization schemes that anticipate, identify, and manage the risks a system can pose, including to users and others across society — and procedures to achieve those outcomes
  • Incorporates processes to assess potential impacts

The Govern function permeates each of the remaining AI RMF functions. It is integral to the success of the framework. Governance is a vital part of the NIST AI risk management framework and is a “continual and intrinsic requirement for effective AI risk management over an AI system’s lifespan and the organization’s hierarchy.”

This comes as no surprise, as governance is an essential part of the GRC puzzle. The most important part of the Govern Function is that it becomes part of the organization’s culture. The categories and subcategories of the Govern Function can be found on page 22 of the official documentation for the NIST AI Risk Management Framework.

Map

“Context is recognized and risks related to context are identified.”

– NIST AI Risk Management Framework (AI RMF 1.0)

The Map Function establishes the context in which risks related to an AI system are identified and framed.

Gathering information from a wide array of actors and stakeholders helps organizations “prevent negative risks” and “develop more trustworthy AI systems” by:

  • Improving their capacity for understanding contexts
  • Checking their assumptions about context of use
  • Enabling recognition of when systems are not functional within or out of their intended context
  • Identifying positive and beneficial uses of their existing AI systems
  • Improving understanding of limitations in AI and ML processes
  • Identifying constraints in real-world applications that may lead to negative impacts
  • Identifying known and foreseeable negative impacts related to intended use of AI systems
  • Anticipating risks of the use of AI systems beyond intended use

Upon accomplishing the Map Function, framework users should be able to decide “whether to design, develop, or deploy an AI system.” Based on multiple perspectives – including those of external collaborators and end-users – you will have a more complete picture of the risks the AI system poses to its audience.

If the decision is made to proceed, you should then move on to the Measure and Manage Functions — in addition to the policies and procedures put into place by the Govern Function. The categories and subcategories of the Map Function can be found on page 26 of the official documentation for the NIST AI Risk Management Framework.

Measure

“Identified risks are assessed, analyzed, or tracked.”

– NIST AI Risk Management Framework (AI RMF 1.0)

The Measure Function “employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.” 

Measuring AI risks consists of tracking metrics for trustworthiness, social impact, and human-AI configurations. Processes included in the Measure Function include testing software and assessing the performance of the system in relation to “measures of uncertainty, comparisons to performance benchmarks, and formalized reporting and documentation of results.”

By measuring your AI risks, you can better understand them. According to NIST, “measurement provides a traceable basis to inform management decisions.” These decisions include recalibrating the AI system, mitigating its impacts, or removing the system from development or use. From there, additional controls may also apply, such as compensating, detective, deterrent, directive, and recovery controls.
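As a simple illustration of the quantitative side of this Function, the sketch below compares repeated evaluation runs of one metric against a benchmark, records an uncertainty estimate, and produces a small report record that could feed the management decisions described above. The metric name, thresholds, and report fields are assumptions for this example; the AI RMF does not prescribe specific metrics.

```python
import statistics
from dataclasses import dataclass

@dataclass
class MeasurementReport:
    """Minimal record for formalized reporting of one measurement run."""
    metric: str
    observed: float
    uncertainty: float      # here: sample standard deviation across runs
    benchmark: float
    meets_benchmark: bool

def measure(metric: str, run_scores: list[float], benchmark: float) -> MeasurementReport:
    """Summarize repeated evaluation runs against a performance benchmark."""
    observed = statistics.mean(run_scores)
    uncertainty = statistics.stdev(run_scores) if len(run_scores) > 1 else 0.0
    return MeasurementReport(
        metric=metric,
        observed=observed,
        uncertainty=uncertainty,
        benchmark=benchmark,
        meets_benchmark=(observed - uncertainty) >= benchmark,
    )

# Example: accuracy measured over five evaluation runs against a 0.90 benchmark
report = measure("accuracy", [0.93, 0.91, 0.92, 0.90, 0.94], benchmark=0.90)
print(report)
# A failing report would inform decisions such as recalibration, impact
# mitigation, or removal of the system from development or use.
```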

The outcomes of your measurement efforts will come into play during the Manage Function, where risk is monitored and mitigation efforts are put into place. The categories and subcategories of the Measure Function can be found on page 29 of the official documentation for the NIST AI Risk Management Framework.

Manage

“Risks are prioritized and acted upon based on a projected impact.”

– NIST AI Risk Management Framework (AI RMF 1.0)

The Manage function consists of “allocating risk resources” to your “mapped and measured risks” on a “regular basis” as defined in the Govern Function. This is where risk treatment plans come into play, so you are able to “respond to, recover from, and communicate about incidents or events.”

The use of outside resources from experts — as well as “input from relevant AI actors” — helps you and your stakeholders more comprehensively understand the entire picture. These perspectives help bolster your strategy and can increase transparency and accountability of the AI systems. They also “decrease the likelihood of system failures and negative impacts.”

It is important to note that the Manage Function does not mark an end point; the entire framework is an iterative process. Risk management for artificial intelligence should be ongoing, much like the NIST RMF. The categories and subcategories of the Manage Function can be found on page 31 of the official documentation for the NIST AI Risk Management Framework.

AI RMF Profiles


There are multiple profiles included in the NIST AI Risk Management Framework; profiles help “illustrate and offer insights into how risk can be managed at various stages of the AI lifecycle.” Additionally, they help organizations understand how they “might best manage AI risk” in a way that “is well-aligned with their goals, considers legal/regulatory requirements and best practices, and reflects risk management priorities.”

However, this framework “does not prescribe profile templates,” thus allowing for maximum flexibility with implementation.

Use-case profiles

Use-case profiles are “implementations of the AI RMF functions, categories, and subcategories for a specific setting or application based on the requirements, risk tolerance, and resources of the Framework user.”

Temporal profiles

Temporal profiles describe the current state or the desired (target) state of “specific AI risk management activities within a given sector, industry, organization, or application context.”


“An AI RMF Current Profile indicates how AI is currently being managed and the related risks in terms of current outcomes. A Target Profile indicates the outcomes needed to achieve the desired or target AI risk management goals.”

– NIST AI Risk Management Framework (AI RMF 1.0)

The objective of temporal profiles is to allow users of the NIST AI RMF to assess and “reveal gaps” that must be addressed to reach their AI risk management goals. From there, action plans can be developed to close those gaps, driven by “the user’s needs and risk management processes.”

Using this risk-based approach “also enables Framework users to compare their approaches with other approaches and to gauge the resources needed . . . to achieve AI risk management goals in a cost-effective, prioritized manner.”
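A gap analysis between a Current Profile and a Target Profile can be sketched as a simple comparison of outcome states, as below. The subcategory identifiers and status values are hypothetical placeholders; since the AI RMF does not prescribe profile templates, organizations define their own structure.

```python
# Hypothetical outcome status per AI RMF subcategory: current vs. target state
current_profile = {
    "GOVERN 1.1": "partially implemented",
    "MAP 1.1": "not implemented",
    "MEASURE 2.1": "implemented",
}

target_profile = {
    "GOVERN 1.1": "implemented",
    "MAP 1.1": "implemented",
    "MEASURE 2.1": "implemented",
}

# Reveal gaps: any subcategory whose current state falls short of the target
gaps = {
    subcategory: (current_profile.get(subcategory, "not implemented"), target)
    for subcategory, target in target_profile.items()
    if current_profile.get(subcategory, "not implemented") != target
}

for subcategory, (current, target) in gaps.items():
    print(f"{subcategory}: {current} -> {target}  (add to action plan)")
```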

Cross-sectoral profiles

Cross-sectoral profiles cover risks and models used across use cases or sectors. They can also “cover how to govern, map, measure, and manage risks for activities or business processes” that are “common across sectors” — such as “large language models (LLM), cloud-based services, or acquisition.”

How Hyperproof can help with the NIST AI Risk Management Framework

The NIST AI RMF is a strategy for managing AI and its related risks. Using the framework, you can begin to build a culture of compliance and a continuous cycle of AI risk management. With software platforms like Hyperproof, you can implement frameworks and controls — including tying them directly to your risks.

As your organization begins incorporating more AI functionality into day-to-day operations, it’s important to track these measures and their related risks. With a risk management platform like Hyperproof, you can manage day-to-day compliance operations, eliminating tedious, repetitive tasks like manual evidence collection and control assessments. Hyperproof automates evidence collection and continuously monitors your controls to ensure your organization is protected against AI risk.

Schedule a demo today to learn more about how Hyperproof can help you manage the risks associated with artificial intelligence.

