Apply the NIST AI Risk Management Framework

Learning Objectives

After completing this unit, you’ll be able to:

  • Explain the AI RMF process.
  • Describe how an organization can use the AI RMF to manage the risks to AI cybersecurity tools.

Let’s explore how the NIST AI RMF can be applied to a hypothetical AI-powered endpoint detection and response (AI EDR) security tool. This tool represents a major advancement in an organization’s security by continuously learning the typical behavior patterns of organizational devices. Once it understands what’s typical, the tool can detect even small changes that might signal an attack, whether it’s something known or something new. The AI EDR performs real-time analysis, helping to quickly detect and respond to threats, sometimes before an attack can even begin. It can also automatically take actions, like isolating an infected machine or blocking malicious communications.
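To make this concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn's IsolationForest) of the core idea: learn a baseline of typical device behavior, score new activity against it, and trigger an automated containment action when something looks anomalous. The feature set, thresholds, and response helpers are illustrative assumptions, not the workings of any specific product.

    # Hypothetical sketch: baseline learning plus anomaly-driven response in an AI EDR.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Assumed per-device features: [process_count, outbound_traffic_mb, failed_logins]
    baseline_activity = np.array([
        [42, 1.2, 0],
        [38, 0.9, 1],
        [45, 1.5, 0],
        [40, 1.1, 0],
    ])

    # Learn what "typical" looks like for the device fleet.
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(baseline_activity)

    def handle_event(device_id, features):
        """Score new activity; take an automated action if it deviates from the baseline."""
        score = model.decision_function([features])[0]  # lower means more anomalous
        if model.predict([features])[0] == -1:
            print(f"[ALERT] {device_id}: anomaly score {score:.3f}; isolating device")
            # isolate_device(device_id)          # hypothetical response hooks
            # block_malicious_traffic(device_id)
        else:
            print(f"[OK] {device_id}: behavior within the learned baseline")

    handle_event("laptop-017", [41, 1.3, 0])     # typical activity
    handle_event("laptop-017", [250, 80.0, 12])  # suspicious spike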

One company exploring such advancements is SecureBank, a fictitious mid-size financial institution dealing with growing cyber challenges like phishing and insider threats.

A person pointing to a poster displaying an endpoint detection and response process with key activities organized in blue circles and the following labels: data collection; analysis and forensics; incident investigation; intelligence and automated response. White circles are labeled to represent aspects of the NIST AI RMF and are placed where they will integrate with the AI EDR process.

Scenario: Acquiring an AI-Powered Endpoint Security Solution

SecureBank faces increasing cyberthreats, including phishing attempts and insider threats. To strengthen their security, the company considers purchasing a new AI EDR tool. The tool promises real-time threat detection, automated incident response, and continuous learning to protect against new threats. While the security team is excited about the potential benefits, Emma, the chief information security officer (CISO), understands the need to manage both the benefits and risks of this new technology. To do so, she decides to apply the NIST AI Risk Management Framework.

Step 1: Establish Governance (Govern)

Scenario context: Emma recognizes that governance is key to managing both external threats and the inherent risks posed by AI’s ability to learn and evolve. She wants to ensure responsible and secure deployment of the new AI EDR tool.

Actionable Steps

  • Introduce the NIST AI RMF: Emma presents the framework to the board, explaining how it will help them create a governance structure for managing the AI EDR tool. She also highlights the importance of establishing policies and procedures to guide its use and oversight, emphasizing the need to address the challenges of working with a third-party solution.
  • Obtain approval and resources: The board, acknowledging the importance of responsible AI adoption, agrees to provide the funding and other resources needed to develop, implement, and manage policies. This includes resources for vendor management and ongoing communication.
  • Form a governance committee: Emma sets up a cross-functional AI governance committee with representatives from IT, compliance, legal, risk management, and each of SecureBank’s business lines. The committee’s responsibilities are to:
    • Develop policies covering the use, monitoring, and ethical considerations of AI tools.
    • Oversee the tool’s use and performance, ensuring any risks are quickly detected and addressed.
    • Regularly review and update policies to align with SecureBank's goals.
    • Review and negotiate service level agreements (SLAs) with the vendor to ensure they cover performance expectations, risk mitigation, and processes for handling unexpected AI behavior or necessary adjustments.

Step 2: Define the Operational Context (Map)

Scenario context: The AI governance committee defines the operational context and potential risks for the AI EDR tool, targeting external threats like phishing and insider attacks while monitoring how the tool itself might change and create new risks. They recognize the need for operational controls, such as AI model protection and data integrity measures, to handle both external and internal risks.

Actionable Steps

  • Identify threats and risks: Conduct a risk assessment to identify external threats (like phishing) and inherent AI risks (like model drift).
  • Define data sources and use cases: Clearly define the data sources (for example, network traffic, user behavior logs) and use cases for the AI tool, ensuring they align with SecureBank's risk profile and needs.
  • Configure tool settings and monitoring parameters: During the trial phase, configure the tool to look for the threats and risks previously identified. Also put in place monitoring systems to track how well the tool is performing and to flag any actions that seem out of the ordinary. (A minimal configuration sketch follows this list.)
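The mapping decisions above can be captured as explicit, reviewable configuration. Here is a minimal sketch, assuming illustrative source names, risk labels, and thresholds rather than real vendor settings.

    # Hypothetical trial-phase configuration capturing the Map step's decisions.
    AI_EDR_TRIAL_CONFIG = {
        "data_sources": ["network_traffic", "user_behavior_logs", "endpoint_process_events"],
        "use_cases": ["phishing_detection", "insider_threat_detection"],
        "identified_risks": {
            "external": ["phishing", "malware", "credential_theft"],
            "inherent_ai": ["model_drift", "excessive_false_positives", "data_poisoning"],
        },
        "monitoring_parameters": {
            "max_false_positive_rate": 0.05,  # assumed risk-tolerance threshold
            "alert_review_sla_hours": 4,      # assumed internal review SLA
            "flag_unusual_automated_actions": True,
        },
    }

Keeping these decisions in one place makes it easier for the governance committee to review changes alongside policy updates.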

Step 3: Evaluate and Monitor (Measure)

Scenario context: The AI governance committee oversees continuous monitoring and evaluation of the AI tool’s performance to ensure it aligns with SecureBank’s risk tolerance.

Actionable Steps

  • Performance evaluations and risk assessments: The team leverages technical controls, like anomaly detection systems, to regularly assess the AI tool’s performance and identify potential vulnerabilities (see the sketch after this list).
  • Monitor for unintended consequences: Actively watch for any unintended effects or changes in AI behavior that could impact SecureBank’s operations or security posture. Promptly communicate any observations to the vendor.
  • Adjust and adapt: Based on evaluation results, collaborate with the vendor to adjust the tool’s settings, training data, or models to ensure it remains effective and aligned with changing security needs. This may involve requesting specific modifications or feature enhancements from the vendor.
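One hypothetical way to run these evaluations is to compare the tool's alerts against analyst-confirmed outcomes and to watch for shifts in its anomaly scores. The metric and tolerance values below are illustrative assumptions tied to SecureBank's fictional risk appetite, not prescribed by the AI RMF.

    # Hypothetical periodic evaluation: detection quality plus a simple drift check.
    import statistics

    def evaluate_detection(alerts, max_false_positive_rate=0.05):
        """Each alert is a dict with 'predicted_malicious' and 'analyst_confirmed' booleans."""
        true_pos = sum(1 for a in alerts if a["predicted_malicious"] and a["analyst_confirmed"])
        false_pos = sum(1 for a in alerts if a["predicted_malicious"] and not a["analyst_confirmed"])
        fp_rate = false_pos / max(1, true_pos + false_pos)
        return {"false_positive_rate": fp_rate,
                "within_tolerance": fp_rate <= max_false_positive_rate}

    def drift_suspected(baseline_scores, recent_scores, tolerance=0.15):
        """Flag possible model drift when mean anomaly scores shift beyond an assumed tolerance."""
        return abs(statistics.mean(recent_scores) - statistics.mean(baseline_scores)) > tolerance

    report = evaluate_detection([
        {"predicted_malicious": True, "analyst_confirmed": True},
        {"predicted_malicious": True, "analyst_confirmed": False},
    ])
    print(report)
    print(drift_suspected([0.10, 0.12, 0.11], [0.30, 0.28, 0.33]))

Results that fall outside tolerance, or a suspected drift, become the trigger for the vendor conversations described above.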

Step 4: Implement Ongoing Risk Management (Manage)

Scenario context: The AI governance committee manages ongoing risks to ensure the AI tool remains a valuable asset. They rely on both operational and technical controls to maintain the tool’s effectiveness and adapt to new challenges.

Actionable Steps

  • Continuous monitoring and improvement: Continuously monitor the AI tool’s performance, make necessary adjustments, and seek feedback to address emerging risks.
  • Training and preparedness: Regularly train the security team on AI management and stay updated on AI developments and best practices.
  • Incident response and adaptation: Update incident response protocols to address AI-related incidents, establish clear communication channels, and adapt plans as the AI tool evolves (see the sketch after this list).
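As one illustration of how this might be operationalized, the sketch below turns the Measure-step outputs into Manage-step actions, escalating to the vendor and the governance committee when results fall outside tolerance. The action strings and hooks are hypothetical placeholders.

    # Hypothetical ongoing-management cycle: review evaluation results, then escalate or adapt.
    def manage_cycle(evaluation, drift_flag):
        """Translate Measure-step outputs into Manage-step actions (illustrative only)."""
        actions = []
        if not evaluation["within_tolerance"]:
            actions.append("open vendor ticket: tune detection thresholds")
            actions.append("notify the AI governance committee of elevated false positives")
        if drift_flag:
            actions.append("request model retraining or rollback from the vendor")
            actions.append("invoke the AI-specific incident response playbook")
        if not actions:
            actions.append("log healthy status; continue routine monitoring")
        return actions

    print(manage_cycle({"within_tolerance": False}, drift_flag=True))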

Sum It Up

By applying the four core functions of the AI RMF (Govern, Map, Measure, Manage), SecureBank creates a repeatable process to integrate AI technology securely and responsibly. This framework helps the organization proactively identify and manage risks, maintain strong governance, and continuously adapt to the dynamic AI landscape. The NIST AI Risk Management Playbook for generative AI can provide additional guidance, further enhancing SecureBank's ability to navigate the complexities of AI technology.

Organizations can follow a similar approach by tailoring these controls and practices to their unique needs, risk appetite, and strategic objectives, ensuring that AI technologies are managed securely and effectively.
