Explore the NIST AI Risk Management Framework
Learning Objectives
After completing this unit, you’ll be able to:
- Identify the key functions of the NIST AI Risk Management Framework (AI RMF).
- Explain how the traditional Risk Management Framework and the AI RMF can complement each other.
- Describe the advanced security measures specific to AI systems.
Traditional enterprise cybersecurity tools, like Security Information and Event Management (SIEM) systems, vulnerability scanners, and intrusion detection systems, are typically secured and maintained through a combination of technical and operational controls, like patching and security audits. However, when these protective tools are powered by artificial intelligence, they require a different level of security and oversight to handle their unique risks.
The NIST AI RMF
If you currently use antivirus software, anti-malware tools, or two-factor or biometric authentication on your devices, you're already taking excellent risk management steps to protect your personal and professional data.
However, if you use AI applications to create images, write text, predict behavior, or perform other complex tasks, the landscape of potential risks becomes more complex. These applications can expose vulnerabilities that traditional security measures may not fully address and that threat actors can exploit.
This is where the NIST AI Risk Management Framework becomes essential, offering a structured and adaptable plan to proactively identify, assess, and mitigate the risks inherent in the development, deployment, and use of AI technologies.
The NIST AI Risk Management Framework (NIST AI 100-1) is a voluntary framework designed to help organizations manage AI risks. The framework is structured around four core functions: govern, map, measure, and manage.
- Govern: Establishes policies, procedures, and processes for managing AI risks.
- Map: Focuses on identifying and categorizing risks.
- Measure: Evaluates risks using metrics and indicators.
- Manage: Implements risk mitigation strategies for continuous improvement.
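To see how these functions can shape day-to-day work, here's a minimal sketch of an AI risk register organized by the four core functions. Only the function names come from the AI RMF; the entries, fields, and owners are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One tracked AI risk; these fields are illustrative, not prescribed by the AI RMF."""
    description: str
    owner: str
    status: str = "open"

# Hypothetical register keyed by the four AI RMF core functions.
ai_risk_register = {
    "govern": [RiskEntry("No approval policy for new AI tools", "AI governance lead")],
    "map": [RiskEntry("Chatbot could expose customer data", "Risk management specialist")],
    "measure": [RiskEntry("No metric for harmful-response rate", "QA specialist")],
    "manage": [RiskEntry("No rollback plan for a misbehaving model", "Incident response team")],
}

# Walk the register function by function and surface open risks.
for function, entries in ai_risk_register.items():
    for entry in entries:
        if entry.status == "open":
            print(f"[{function.upper()}] {entry.description} -> owner: {entry.owner}")
```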
The AI RMF is designed to be flexible and adaptable to different situations. It's less about prescribing specific, explicit controls and more about giving organizations a structured way to manage AI risks. This is a different approach from traditional frameworks like the NIST Risk Management Framework (RMF).
The NIST RMF, alongside frameworks like the Center for Internet Security (CIS) Controls and ISO 27001, provides a solid foundation for managing general cybersecurity risks across traditional hardware and software systems, including those involving AI. The AI RMF builds on this foundation.
NIST RMF and AI RMF in Action: Developing a Chatbot for a Website
Let's review a use case where the NIST RMF and AI RMF each contribute to securing the hardware and software that support a website chatbot.
A company decides to build an AI-enabled chatbot to better engage with customers and provide a more personalized experience. Developers and cybersecurity professionals work together in the following ways.
NIST RMF
For the traditional hardware and software involved in the chatbot’s development, developers use a threat model to design and implement a secure technical infrastructure. Cybersecurity professionals using the NIST RMF secure customer data by enforcing administrative controls, which include policies for handling sensitive information. Enforcement involves implementing and monitoring encryption and firewalls to safeguard against cyberthreats.
AI RMF
For the AI-enabled parts of the system, developers focus on creating the chatbot's AI functionality, making sure it securely and ethically learns and adapts based on customer interactions. Cybersecurity professionals, guided by the AI RMF, proactively manage AI-specific risks, such as preventing the chatbot from generating harmful responses or being manipulated through social engineering attacks. While developers include safety mechanisms in the AI design, cybersecurity experts ensure continuous testing, monitoring, and human oversight to minimize potential risk.
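As one concrete illustration of those AI RMF safeguards, here's a minimal sketch of a chatbot output gate with human-in-the-loop escalation. The blocked terms, confidence threshold, and `escalate_to_human` flow are hypothetical placeholders; a production system would use trained moderation models and a real review queue.

```python
# Minimal sketch of a chatbot output gate with human-in-the-loop escalation.
# The blocked terms and confidence threshold are hypothetical placeholders.

BLOCKED_TERMS = {"password", "social security", "credit card"}
CONFIDENCE_THRESHOLD = 0.8

def escalate_to_human(response: str, reason: str) -> str:
    """Stub: queue the response for human review instead of sending it."""
    print(f"Escalated for review ({reason}): {response!r}")
    return "Let me connect you with a human agent for help with this request."

def guard_response(response: str, model_confidence: float) -> str:
    """Release a chatbot response only if it passes basic safety checks."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return escalate_to_human(response, reason="sensitive content detected")
    if model_confidence < CONFIDENCE_THRESHOLD:
        return escalate_to_human(response, reason="low model confidence")
    return response

# A sensitive or low-confidence answer is routed to a person, not the customer.
print(guard_response("Please confirm your credit card number.", model_confidence=0.95))
```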
Organizations leveraging AI in cybersecurity should use both frameworks: the NIST RMF (or a similar framework) for overall security hygiene and the AI RMF for AI-specific challenges.
Why Can’t Organizations Rely on Existing Controls?
As discussed in the previous unit, the unique architecture of AI systems makes the sole use of traditional controls insufficient for AI-driven systems. Let's explore why.
First, let's examine how some NIST RMF security controls are currently applied to protect security tools built with traditional software. While AI systems can benefit from some of these traditional controls, the controls are not as effective on their own. Consider these examples:
| Traditional Security Control | Limitations |
| --- | --- |
| Secure coding practices | Rely on strict code reviews and static analysis to prevent vulnerabilities. AI models depend more on data quality and continuous model training than on static code, making traditional secure coding less effective. |
| Encryption | Requires stable, static data in order to be effective. While AI models can be encrypted, traditional methods can limit learning and growth, and create exposure points due to AI's frequent need to process unencrypted data. |
| Patch management | Involves regular, predictable updates to fix vulnerabilities. AI models evolve based on constantly changing data inputs, creating new vulnerabilities faster than traditional patching can address. |
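The patch-management row is worth making concrete: instead of a fixed patch cycle, AI systems lean on ongoing checks that live data still resembles the training data. Here's a minimal drift-check sketch; the statistics and 20% threshold are simplified assumptions, and real pipelines use stronger tests (for example, population stability index or Kolmogorov–Smirnov).

```python
import statistics

# Simplified data-drift check: does today's input distribution still resemble
# the training baseline? The 20%-of-spread threshold is an illustrative choice.
DRIFT_THRESHOLD = 0.20

def drifted(baseline: list[float], live: list[float]) -> bool:
    """Flag drift when the live mean shifts beyond a fraction of baseline spread."""
    shift = abs(statistics.mean(live) - statistics.mean(baseline))
    return shift > DRIFT_THRESHOLD * statistics.stdev(baseline)

training_inputs = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]  # baseline feature values
todays_inputs = [12.4, 12.1, 12.6, 12.3, 12.5, 12.2]  # live feature values

if drifted(training_inputs, todays_inputs):
    print("Input drift detected: review or retrain now, not at the next patch cycle.")
```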
To address these challenges, we need to enhance traditional controls and introduce new ones specifically designed for AI systems. The following is a curated list of security measures categorized into “control families,” similar to the NIST Security and Privacy Control families. A control family consists of distinct but related controls designed to protect a specific area of security. These AI-specific control families address both traditional and AI security concerns.
- AI model protection: Focuses on protecting AI models from unauthorized access, tampering, and adversarial attacks (e.g., data poisoning) through measures like use of the AI Secure Software Development Life Cycle (SDLC), homomorphic encryption, integrity checks, differential privacy, and adversarial training. (A basic integrity-check sketch follows this list.)
- AI data integrity and protection: Ensures the confidentiality, integrity, and availability of data used to train and operate the AI model. This includes measures like model watermarking, secure data storage, access control, and anonymization techniques.
- AI transparency: Promotes clear understanding and interpretation of the AI's decision-making process, helping to identify potential biases, vulnerabilities, and unintended consequences. Includes measures like explainability tools and algorithm audits.
- AI continuous monitoring: Involves regular observation and analysis of the AI tool's behavior and performance to identify anomalies or deviations from expected behavior. Measures include behavioral analytics and real-time threat detection systems.
- Human oversight and decision-making: Embeds human judgment into the AI's operation, especially for critical security actions. Protective measures include human-in-the-loop reviews and role-based access controls.
- Bias mitigation: Proactively addresses bias that AI can inherit from training data. Measures include use of diverse datasets and fairness-aware algorithms.
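Here's that integrity-check sketch: hashing a model artifact at release time and re-verifying the hash before the model is loaded. The file name and workflow are hypothetical; real pipelines typically add signed artifacts and strict access controls on top of this.

```python
import hashlib
from pathlib import Path

# Hash-based integrity check for a model artifact (illustrative).
# "model.bin" is a hypothetical placeholder for a serialized model file.

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file without loading it all at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(model_path: Path, expected_digest: str) -> bool:
    """Refuse to load a model whose bytes no longer match the release digest."""
    return sha256_of(model_path) == expected_digest

model_file = Path("model.bin")
if model_file.exists():
    release_digest = sha256_of(model_file)  # recorded when the model shipped
    if verify_model(model_file, release_digest):
        print("Model artifact integrity verified; safe to load.")
    else:
        print("Digest mismatch: possible tampering, do not load.")
```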
A Practical View of the AI RMF
The table outlines how each function of the AI RMF could be applied to assess and manage AI-powered security tools. It includes key questions, examples of AI protections or controls, suggested control types, and potential cybersecurity and AI roles involved in the process.
| AI RMF Function | Key Questions | AI Control Examples | Suggested Control Types | Potential Roles Involved |
| --- | --- | --- | --- | --- |
| Govern | Have we established clear policies and procedures for AI use? Are there guidelines for human oversight and decision-making? | Human oversight and decision-making; transparency controls | Administrative controls (for example, policies, guidelines, training) | AI governance lead; chief information security officer (CISO); data privacy officer; legal and compliance teams |
| Map | What specific risks does the AI tool introduce? Have we identified potential vulnerabilities and data misuse risks? | AI model protection; AI data integrity | Operational controls (for example, risk assessment, access management, simulations) | Risk management specialist; AI risk analyst; data scientist; security architect |
| Measure | How will we measure the tool's performance and detect bias or errors? Are there metrics to evaluate data quality and model reliability? | Continuous monitoring; data integrity checks | Technical controls (for example, monitoring tools, anomaly detection systems) | AI quality assurance (QA) specialist; data engineer; security operations center (SOC) analyst |
| Manage | What mitigation strategies are in place for AI-specific risks? Do we have a response plan for unexpected AI behaviors or security incidents? | Adversarial training; real-time threat detection systems | Operational and technical controls (for example, incident response, continuous improvement) | Incident response team; security engineer; AI ethics officer; DevOps engineer |
The information in this table helps connect the broader guidance of the AI RMF to practical security tasks and relevant roles, providing a useful framework for decision-making during strategic risk management discussions. This table is just a starting point—you can modify and adapt it to fit your organization's specific risk appetite and management plan.
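As one example of turning the Measure row into a working metric, here's a minimal demographic parity check, a simple fairness measure a QA specialist might track. The groups, decisions, and 10% tolerance are hypothetical.

```python
# Minimal demographic parity check (illustrative data and threshold).
# A gap in positive-decision rates between groups can signal model bias.

def positive_rate(decisions: list[int]) -> float:
    """Fraction of positive (1) decisions the model made for a group."""
    return sum(decisions) / len(decisions)

group_a = [1, 0, 1, 1, 0, 1, 1, 0]  # hypothetical model decisions, group A
group_b = [1, 0, 0, 0, 0, 1, 0, 0]  # hypothetical model decisions, group B

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance
    print("Gap exceeds tolerance: investigate training data and model behavior.")
```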
Sum It Up
In this unit, we examined different types of AI protective measures and explored the NIST AI Risk Management Framework to understand how it can help address the unique security challenges of AI systems.
In the next unit, we apply the AI RMF to a real-world business scenario, demonstrating how a cybersecurity leader can make this framework practical and effective.
Resources
- Trailhead: Artificial Intelligence and Cybersecurity
- Trailhead: Responsible Creation of Artificial Intelligence
- External Site: Securiti: An Overview of Emerging Global AI Regulations
- External Site: Modern Diplomacy: The Global Landscape of AI Security
- External Site: Forbes: Human-In-The-Loop AI: A Collaborative Teammate In Operations And Incident Management
- External Site: Fairlearn: Improve fairness of AI systems
- External Site: ISO: ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
- External Site: OECD: Organisation for Economic Co-operation and Development (OECD) Principles for trustworthy AI
- External Site: Artificial Intelligence Act: EU Artificial Intelligence Act
- External Site: OWASP: OWASP AI Security and Privacy Guide
- External Site: RAI: Department of Defense’s Responsible Artificial Intelligence (AI) Toolkit