Establish AI Governance

Learning Objectives

After completing this unit, you’ll be able to:

  • List some of the risks associated with AI.
  • Explain what AI governance is.
  • Inventory your organization’s current AI usage.
  • Evaluate your organization’s AI risks.
  • List examples of AI risk mitigation strategies.

The Risks of AI

It’s almost impossible to talk about AI without acknowledging the risks that come along with the technology. We’ve all heard about security threats, concerns about inaccuracy and bias, the possibility of data breaches, fears of reputational harm, and more.

To make things even more complicated, the technology is advancing at a rapid rate. It’s no wonder that, according to the Salesforce Trends in AI for CRM report, 68% of people believe that advances in AI make a company’s trustworthiness more important.

But in many cases, the AI revolution is outpacing the ability of organizations to adapt to the changes. And that’s one of the things that worries Alex, our AI champion and executive at Coral Cloud Resorts. How can Coral Cloud make sure it’s adopting AI in a trustworthy way?

The Need for Governance

As you probably guessed, trust in AI is fundamentally tied to its governance. Let’s learn what AI governance is.

AI governance is a set of policies, processes, and best practices that help organizations ensure that AI systems are developed, used, and managed in a responsible and scalable way to maximize benefits and mitigate risks.

In this unit, we follow along with Alex as he works with the Coral Cloud AI leadership team to establish AI governance. Your organization might already have one or more groups dedicated to overseeing governance. If you’re not planning to centralize AI governance with an AI committee, make sure any existing internal governing bodies have the AI expertise needed to enhance existing practices and address any gaps.

Develop Principles for Responsible AI

Before Coral Cloud starts creating their governance program, Alex encourages the leadership team to take a step back and think about how to stay focused on their commitment to responsibly developing AI. What’s their north star?

Many organizations adopting AI find it helpful to establish responsible AI principles. With a set of AI principles, businesses can clarify their position on AI and consider the technology’s impact on employees, customers, and society at large. This north star creates a common understanding among employees so that your AI principles can be applied at every level of the business.

[Image: Salesforce’s five trusted AI principles, shown as blue line icons: responsible, accountable, transparent, empowering, and inclusive.]

For inspiration, take a look at Salesforce’s Trusted AI principles. But keep in mind that your organization’s AI principles should align with your corporate mission and values. It’s normal to have a little overlap with other organizations, but don’t skip the work of developing your own set of principles, getting stakeholder buy-in, and publicly committing to those values.

Note

Principles are great, but how do you turn them into a responsible AI practice? Most organizations start out governing AI in an ad hoc way, and then gradually their efforts become more formalized. Check out Salesforce’s AI Ethics Maturity Model to learn more.

Survey the Regulatory Landscape

Coral Cloud’s leadership team is ready to dive into governance, but there are a couple of questions on everyone’s mind: What about AI regulations? What’s actually required by law?

Right now, regulations around AI are a patchwork of emerging guidelines and policies that vary by region and industry. Governments and regulatory bodies are playing catch-up with the rapid advancements in technology, making it a challenge to predict exactly what the rulebook will look like even a few years down the line.

Despite the uncertainty, there are some proactive steps you can take. Follow the best practices in this unit, and stay informed about AI regulatory trends. By reviewing updates from regulatory bodies and industry groups, you can gain early insight into potential legislative changes and find support and resources when new requirements arise.

Inventory Your Organization’s AI Usage

To push Coral Cloud’s governance efforts forward, Alex recommends that they catalog how the organization is currently using AI.

It’s hard to properly assess risks until you know where they’re coming from. So it’s a good idea to inventory all AI tools to understand the extent of their integration into business processes (see the sketch after this list).

  • Identify AI technologies: List all the AI technologies currently in use, including everything from simple automation tools to complex machine learning models. Don’t overlook AI integrated into third-party services and software.
  • Document use cases: For each AI technology, document its specific use cases. Understanding what each AI solution does and why it’s used helps you to evaluate its impact and importance.
  • Map data flows: Track how data flows to and from each AI application. This includes sources of input data, what the AI modifies or analyzes, and where it sends output data.
  • Establish ownership: Assign ownership for each AI tool to specific individuals or teams. Knowing who is responsible for each tool ensures accountability and simplifies future audits and assessments.
  • Update regularly: Make the AI inventory a living document that is updated to reflect new AI deployments or changes to existing ones. This keeps the inventory relevant and useful for ongoing compliance.
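
To make this concrete, here’s a minimal sketch of what a structured inventory record might look like in Python. The fields mirror the steps above; the field names, the example entry, and the 90-day review window are illustrative assumptions, not part of any Salesforce product or standard.

```python
# Hypothetical sketch of an AI inventory kept as a living document.
# Field names and the example entry are illustrative only.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIToolRecord:
    name: str             # AI technology in use
    use_cases: list[str]  # what it does and why it's used
    data_in: list[str]    # sources of input data
    data_out: list[str]   # where output data is sent
    owner: str            # individual or team accountable for the tool
    last_reviewed: date   # supports the "update regularly" step

inventory = [
    AIToolRecord(
        name="Guest-messaging chatbot (third-party)",
        use_cases=["Answer booking and amenity questions"],
        data_in=["Reservation system", "FAQ content"],
        data_out=["Guest chat transcripts"],
        owner="Guest Services team",
        last_reviewed=date(2024, 1, 15),
    ),
]

def stale_entries(records, max_age_days=90):
    """Flag records that haven't been reviewed within the window."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [r for r in records if r.last_reviewed < cutoff]

for record in stale_entries(inventory):
    print(f"Review needed: {record.name} (owner: {record.owner})")
```

Running a check like this on a schedule is one simple way to keep the inventory a living document rather than a one-time audit.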

Coral Cloud’s leadership team feels confident that their inventory can uncover all the official ways the organization is currently using AI. But what about unauthorized usage? Also called shadow AI, unapproved AI tools present significant risk to the business. Check out the CIO Magazine article about how to prevent disaster from shadow AI.

Note

If your business is already using a decent number of AI tools, the inventory can take some time to complete. But the inventory process shouldn’t stand in the way of your organization exploring AI use cases and experiments. Just make sure you conduct a risk assessment for any AI pilot projects, and continue working on your company-wide AI audit in parallel.

Evaluate AI Risks

Now that Coral Cloud has its AI inventory, the team can start assessing the organization’s risks from AI. It’s a critical step for establishing governance, but it can also help you prepare for regulatory requirements. Some policies, such as the EU AI Act, take a risk-based approach to governing the technology. So if you implement a risk assessment process early on, you’re in a better position to comply with regulations.

Here’s how Alex and the AI leadership team evaluate the organization's AI risks.

Identify and Categorize Risk Factors

Review the AI inventory. For every use case, brainstorm potential risks. Engage stakeholders from different parts of the organization because they might see risks you hadn’t considered. Once you have a list, categorize the risks into logical groups such as technical, ethical, operational, reputational, regulatory, and so on. Use categories that make sense for your business.

Assess the Impact and Likelihood

For each risk, assess the potential impact on your business if the risk were to happen. Then determine the likelihood of each risk occurring. These factors can be rated as low, medium, or high. Historical data, industry benchmarks, and expert opinions are valuable in making these assessments.

Prioritize Risks

Use the impact and likelihood to prioritize the risks. A common method is to use a risk matrix that plots the likelihood against the impact, helping you to focus on high-impact, high-probability risks.
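
As a rough illustration, here’s a small Python sketch of how a risk matrix can turn low/medium/high ratings into a priority ranking. The example risks, the categories, and the multiplicative scoring are hypothetical choices; many teams use a lookup grid or weighted scores instead.

```python
# Hypothetical sketch of risk prioritization with a simple risk matrix.
# Low/medium/high ratings map to 1-3; score = impact x likelihood.
RATING = {"low": 1, "medium": 2, "high": 3}

risks = [
    # (description, category, impact, likelihood) -- illustrative only
    ("Chatbot exposes guest personal data", "data and privacy", "high", "medium"),
    ("Biased offer recommendations", "ethical", "high", "low"),
    ("Model drift degrades booking forecasts", "technical", "medium", "medium"),
]

def score(impact, likelihood):
    """Plot likelihood against impact as a single priority score."""
    return RATING[impact] * RATING[likelihood]

# High-impact, high-probability risks come first.
for desc, category, impact, likelihood in sorted(
    risks, key=lambda r: score(r[2], r[3]), reverse=True
):
    print(f"{score(impact, likelihood)}: {desc} [{category}]")
```

Sorting by the product of impact and likelihood surfaces the high-impact, high-probability risks first, mirroring the matrix described above.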

If you’re not sure how to get started, download the AI Risk Management Framework from the National Institute of Standards and Technology (NIST) AI Resource Center site, or do some research to find other examples online. Keep in mind that every organization has to develop a framework that’s relevant to their specific context, which can include geographic region, industry, use cases, and regulatory requirements.

Develop Risk Mitigation Strategies

Now that Coral Cloud’s AI committee has completed its assessment, the team is ready to implement strategies to help mitigate those risks. Here are some common risk mitigation strategies for each type of risk; keep in mind that this isn’t an exhaustive list.

Technical and Security

  • Security policies and protocols
  • Anomaly detection systems and fallback options
  • Secure AI infrastructure, tenancy, and hosting
  • Cybersecurity red-teaming

Data and Privacy

  • Access controls
  • Data anonymization and encryption techniques
  • Regular data audits
  • Data misuse policies
  • Data quality standards
  • Data cleaning and validation processes

Ethical and Safety

  • Responsible AI principles
  • Acceptable use policies
  • Ethical red-teaming
  • Bias assessment and mitigation tooling
  • Model benchmarking
  • Model transparency, such as explainability and citations
  • Watermarks for AI-generated content
  • Feedback mechanisms
  • Audit logs

Operational

  • Risk assessments
  • Incident response plans
  • Change management
  • Documentation and company-wide education
  • Metrics and monitoring
  • Internal ethics reviews for new AI products and features

Compliance and Legal

  • Compliance protocols and training
  • Legal consultations and contracts

Alex and the rest of Coral Cloud’s AI leadership team know that it’s impossible to avoid all AI-related risks. Instead, their goal is to develop processes and tools that give their organization confidence that risks can be effectively identified and managed. If you want to find out how to implement AI governance in your organization, check out the World Economic Forum’s implementation and self-assessment guide.

Improve Your Governance Practices

Coral Cloud’s AI committee knows that governance is an ongoing process. Here’s what the organization can do to improve its capacity to manage risk and better adapt to the changing regulatory landscape.

  • Training and education: Implement AI compliance training and measure the success of the education program. Foster an ethical AI culture and encourage teams to consider the broader impact of their work.
  • Monitoring and review: Regularly monitor the effectiveness of implemented risk management strategies and make adjustments as needed. This is crucial as new risks emerge and existing strategies need refinement.
  • Documentation and reporting: Keep detailed records of all risk mitigation activities. This documentation can be vital for regulatory compliance and useful for internal audits. Develop metrics for governance initiatives, and report findings to stakeholders.

Next, it’s time to dive into one of the most exciting elements of AI strategy: identifying and prioritizing AI use cases.
