Meet the Einstein Trust Layer

Learning Objectives

After completing this unit, you’ll be able to:

  • Discuss the #1 Salesforce value, Trust.
  • Describe concerns related to trust and generative AI.
  • Explain why Salesforce created the Einstein Trust Layer.
Note

This module addresses some future functionality of the Einstein Trust Layer. Any unreleased services or features referenced here are not currently available and may not be delivered on time or at all. Customers should make their purchase decisions based upon features that are currently available.

Before You Begin

We know you’re eager to learn how Salesforce protects your company and customer data as we roll out new generative artificial intelligence (AI) tools. But before you begin, be sure to earn these badges: Generative AI Basics, Prompt Fundamentals, and Large Language Models. They introduce the terms this module builds on, like LLMs, prompts, grounding, hallucinations, and toxic language. You can find links to each of these badges in the Resources section, along with a glossary of AI terms used at Salesforce.

Generative AI, Salesforce, and Trust

Everyone is excited about generative AI because it unleashes creativity in whole new ways. Working with generative AI can be really fun—for instance, using Midjourney to create images of your pets as superheroes or using ChatGPT to write a poem in the voice of a pirate. Companies are excited about generative AI because of the productivity gains. According to Salesforce research, employees estimate that generative AI will save them an average of 5 hours per week. For full-time employees, that adds up to more than a month of working time per year!

But for all this excitement, you probably have questions, like:

  • How do I take advantage of generative AI tools and keep my data and my customers’ data safe?
  • How do I know what data different generative AI providers collect, and how they use it?
  • How can I be sure that I’m not accidentally handing over personal or company data to train AI models?
  • How do I verify that AI-generated responses are accurate, unbiased, and trustworthy?

Salesforce and Trust

At Salesforce, we’ve been asking the same questions about artificial intelligence and security. In fact, we’ve been all-in on AI for almost a decade. In 2016, we launched the Einstein platform, bringing predictive AI to our clouds. Shortly thereafter, in 2018, we started investing in large language models (LLMs).

We’ve been hard at work building generative AI solutions that help customers use their data more effectively and make companies, employees, and customers more productive. And because Trust is our #1 value, we believe it’s not enough to deliver only the technological capabilities of generative AI. We believe we have a duty to be responsible, accountable, transparent, empowering, and inclusive. That’s why, as Salesforce builds generative AI tools, we carry our value of Trust right along with us.

Enter the Einstein Trust Layer. We built the Trust Layer to help you and your colleagues use generative AI at your organization safely and securely. Let’s take a look at what Salesforce is doing to make its generative AI the most secure in the industry.

What Is the Einstein Trust Layer?

The Einstein Trust Layer elevates the security of generative AI through data and privacy controls that are seamlessly integrated into the end-user experience. These controls enable Einstein to deliver AI that’s securely grounded in your customer and company data, without introducing potential security risks. In its simplest form, the Trust Layer is a sequence of gateways and retrieval mechanisms that together enable trusted and open generative AI.

[Image: The Einstein Trust Layer process.]

The Einstein Trust Layer lets customers get the benefits of generative AI without compromising their data security and privacy controls. It includes a toolbox of features that protect your data—like secure data retrieval, dynamic grounding, data masking, and zero data retention—so you don’t have to worry about where your data might end up. Toxic language detection scans prompts and responses and flags content that’s harmful or inappropriate. And for additional accountability, an audit trail tracks a prompt through each step of its journey. You learn more about each of these features in the next units.
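To make that journey concrete, here’s a minimal sketch in Python of how a pipeline like this could work. To be clear, this is not the Einstein Trust Layer’s actual implementation: the function names, the regex-based masking, and the word-list toxicity check are all simplified, hypothetical stand-ins for the production features described above.

    import re

    # Conceptual sketch only. All names and rules here are illustrative
    # stand-ins, not the Einstein Trust Layer's real API.

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    BLOCKLIST = {"hateful", "offensive"}  # toy stand-in for a toxicity model

    def mask_pii(text):
        # Data masking: swap personal data for placeholder tokens so the
        # real values never leave the trust boundary.
        token_map = {}
        def replace(match):
            token = "<PII_{}>".format(len(token_map))
            token_map[token] = match.group(0)
            return token
        return EMAIL.sub(replace, text), token_map

    def unmask(text, token_map):
        # Demasking: restore the original values for the end user.
        for token, value in token_map.items():
            text = text.replace(token, value)
        return text

    def is_toxic(text):
        # Toy toxicity check; a real system uses a trained classifier.
        return any(word in text.lower() for word in BLOCKLIST)

    def handle_prompt(prompt, call_llm, audit_log):
        # Grounding and secure data retrieval are assumed to have
        # happened upstream; this sketch starts from a grounded prompt.
        masked, token_map = mask_pii(prompt)
        # Zero data retention is a contractual control with the model
        # provider, so it has no direct equivalent in code.
        response = call_llm(masked)
        if is_toxic(response):
            response = "[response withheld: flagged as inappropriate]"
        # Audit trail: record each step of the prompt's journey.
        audit_log.append((prompt, masked, response))
        return unmask(response, token_map)

    # Example with an echo "LLM": the model only ever sees the token,
    # never the email address itself.
    log = []
    print(handle_prompt(
        "Draft a follow-up email to jane@example.com about her renewal.",
        call_llm=lambda p: "Sure! Here's a draft: " + p,
        audit_log=log,
    ))

Notice the ordering: masking happens before the prompt leaves, and demasking happens after the response returns, so sensitive values are only ever visible inside the trust boundary.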

We designed our open model ecosystem so you have secure access to many large language models (LLMs), both inside and outside of Salesforce. The Trust Layer sits between an LLM and your employees and customers to keep your data safe while you use generative AI for all your business use cases, including sales emails, work summaries, and service replies in your contact center.
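As a rough illustration of that idea, the hypothetical gateway below (reusing handle_prompt from the sketch above) routes requests to different models, internal or external, while forcing every request through the same protections. The class and model names are invented for this example.

    class TrustGateway:
        # Hypothetical gateway: one trust boundary in front of many models.
        def __init__(self, models):
            self.models = models  # model name -> callable that invokes an LLM
            self.audit_log = []   # shared audit trail across all models

        def generate(self, model_name, prompt):
            # Whichever model is chosen, the request passes through the
            # same masking, toxicity, and auditing steps.
            return handle_prompt(prompt, self.models[model_name], self.audit_log)

    gateway = TrustGateway({
        "salesforce-hosted": lambda p: "[hosted model] " + p,
        "external-partner": lambda p: "[partner model] " + p,
    })
    print(gateway.generate("external-partner", "Summarize this support case."))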

In the next units, dive into the prompt and response journeys to learn how the Einstein Trust Layer protects your data.

Resources

  • Trailhead: Generative AI Basics
  • Trailhead: Prompt Fundamentals
  • Trailhead: Large Language Models
  • Glossary of AI Terms Used at Salesforce