
Discover How Salesforce Builds Trusted Agentic AI

Learning Objectives

After completing this unit, you’ll be able to:

  • Define what trusted agentic AI means.
  • Describe the key risks associated with agents.
  • Explain the guiding principles for responsible agentic AI.

Before You Start

We recommend that you review these badges to get a better understanding of the governance and trust strategies for AI solutions.

What’s Trusted Agentic AI?

At Salesforce, trust is our number one value. And that applies to our agentic AI. Salesforce agentic AI is built on Salesforce's guidelines for responsible AI, the foundation for addressing the emerging challenges associated with the rapid growth of AI agents on our platform.

While values and guiding principles are crucial, it’s equally important to back them up with concrete actions. Salesforce has done just that with the implementation of the Einstein Trust Layer, Agentforce guardrails, and trust patterns. Additionally, we employ ethical red-teaming and an AI Acceptable Use Policy (AUP) to ensure that AI systems operate within safe and ethical parameters. These measures not only reinforce the company's values but also provide a solid foundation for building and maintaining trust in AI technologies.

Before we talk about how Salesforce can help you build a trusted Agentforce, let’s make sure you understand some terminology.

Agentforce

Agentforce is the brand for the agentic layer of the Salesforce Platform, and it encompasses customer and employee-facing agents.

Agents

Agents are autonomous, goal-oriented, and perform tasks and business interactions with minimal human input. They can begin and complete a task or a sequence of tasks, handle natural language conversations, and securely provide relevant answers drawn from business data. Agents can be used to support and collaborate with a Salesforce user in the flow of work. They can also act on behalf of a user or customer. They can be made available in your Salesforce interface or in your customer channels.

Agentic AI

Agentic AI is an AI system that enables AI agents to operate autonomously, make decisions, and adapt to change. It fosters collaboration between AI agents and humans by providing tools and services that facilitate learning and adaptation.

Guiding Principles for Responsible Agentic AI

Salesforce is committed to developing and using agents responsibly. Here are our key principles.

Accuracy

Our reasoning engine, the brain behind Agentforce, uses topic classification to map a user's request to a specific topic. Each topic includes a set of instructions, business policies, and actions that an agent can take. This keeps the agent focused on the tasks it's intended to do.

Through the grounding process, which adds relevant context from your Salesforce org to the prompt, agents base their responses on your organization's data, improving accuracy and relevance.
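Here's a minimal sketch of what grounding a prompt can look like. This is not the Einstein Trust Layer itself; the case record and prompt template are hypothetical, and a real system would retrieve records securely from the org.

```python
# Illustrative sketch: merge fields from a retrieved CRM record into the
# instruction sent to the model, so the response is based on org data.
def ground_prompt(template: str, record: dict) -> str:
    """Fill a prompt template with fields retrieved from the org."""
    return template.format(**record)


# Hypothetical case record retrieved from the org.
case = {
    "contact_name": "Ada",
    "product": "Solar Panel X200",
    "issue": "inverter fault code E42",
}

template = (
    "You are a service agent. Using only the data below, draft a reply.\n"
    "Customer: {contact_name}\nProduct: {product}\nIssue: {issue}"
)

prompt = ground_prompt(template, case)
```

The "using only the data below" instruction illustrates why grounding improves accuracy: the model is steered toward the org's records rather than its general training data.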

Safety

Our agents have built-in safeguards to prevent unintended consequences and ensure safe responses.

  • We have system policies to limit the scope of agent responses, ensuring that they stay on topic and respond in a safe and ethical manner. Check out the prompt defense section in the Einstein Trust Layer help topic.
  • The Einstein Trust Layer detects harmful content in an agent's response and logs it in the audit trail so you can monitor and respond accordingly.
  • We have a zero data retention policy with third-party large language model (LLM) providers so that your data isn't retained outside the Salesforce trust boundary. This policy, together with our contractual commitments with LLM providers, also ensures your data isn't used to train third-party LLMs.
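The detect-and-log pattern from the second safeguard can be sketched as follows. This is not Salesforce's detector: the blocklist stands in for a real toxicity model, and the audit record fields are hypothetical.

```python
# Illustrative sketch: flag harmful terms in a generated response and
# record the result in an audit trail an admin can monitor.
import datetime

BLOCKLIST = {"harmful_term"}  # stand-in for a real toxicity model
audit_trail = []


def scan_response(response: str) -> bool:
    """Flag the response if it contains blocked content, and log the scan."""
    flagged = any(term in response.lower() for term in BLOCKLIST)
    audit_trail.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "flagged": flagged,
        "response_preview": response[:80],
    })
    return flagged
```

Note that every response is logged, not just flagged ones; that's what makes the trail useful for monitoring trends as well as responding to incidents.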

Honesty

We respect the source of data and get consent to use it. We're transparent about AI-generated content: we include standard disclosures whenever an AI generates a response, so users always know when they're interacting with AI.

Empowerment

We focus on the partnership between humans and AI. AI must support humans, especially in tasks that require human judgment. Some tasks can be fully automated, but others need human oversight. We empower people to make high-risk decisions, while automating routine tasks, ensuring that humans and AI work together effectively.

Sustainability

We aim to create efficient AI models to reduce environmental impact. Smaller, well-trained models can often outperform larger ones. We also use efficient hardware and low-carbon data centers. Agents use optimized models like xLAM and xGen-Sales, which are tailored to specific tasks, ensuring high performance with minimal environmental impact.

Icons representing Accuracy, Safety, Honesty, Empowerment, and Sustainability

By following these principles, we design agents that are reliable, safe, transparent, empowering, and sustainable. At Salesforce, we're dedicated to using AI to enhance human capabilities while upholding our core values.

Key Risks and Concerns

As AI systems become more autonomous, the potential for misuse and unintended consequences increases. Ensuring that these systems operate ethically and transparently is crucial to maintaining user trust and safeguarding against harm. Here are some of the key risks to consider.

Unintended Consequences

The autonomous actions of AI agents can lead to unexpected and potentially harmful outcomes. These can include generating biased or offensive content, making incorrect decisions, or interacting with users in ways that aren't aligned with Salesforce's ethical guidelines. Interactions between an AI's programming and its learned patterns can produce actions that were neither anticipated nor desired, eroding trust and creating safety concerns.

Security and Privacy

Security and privacy are paramount, especially because agents handle sensitive data. Agents designed without appropriate security considerations can inadvertently leak that data, compromising user trust.

Ethical and Legal Considerations

Agents must adhere to policies and legal requirements. Ensuring that agents act ethically and comply with laws is crucial to avoiding legal issues and maintaining trust.

Loss of Human Control

As agents become more autonomous, it can be challenging for humans to maintain oversight. This can lead to errors, ethical breaches, and harm to users and the platform's reputation.

Automation Bias

Users can over-trust AI outputs, assuming they're always accurate and reliable. Retrieval-augmented generation (RAG) can enhance this bias by making AI outputs seem highly authoritative and credible even when they're erroneous. This overreliance can lead to mistakes.

Increased User Misuse

As more users interact with generative AI, the chance of misuse rises. Users might exploit AI for harmful purposes or misunderstand its proper use, leading to issues like generating inappropriate content, or violating privacy.

We use a range of mitigation strategies: we build guardrails into our platform and products, conduct adversarial testing through red-teaming, and maintain an AI Acceptable Use Policy that helps protect you. We also give you the ability to customize the guardrails in the product so they reflect your organization's values and compliance requirements. We'll talk about this in more detail in the next two units.

In this unit, you learned about autonomous agents and the associated risks. You also explored the fundamental principles and practices for developing AI agents and how guardrails are designed at Salesforce. In the next unit, you learn how Agentforce guardrails and trust patterns are used to implement trusted agentic AI.
