Learn How Salesforce Uses Large Language Models
Learning Objectives
After completing this unit, you’ll be able to:
- Explain how Salesforce brings trust to LLMs.
- Choose the correct LLM option for your business.
- Describe the limitations of the available LLM options.
Leading with Trust
Trust is the number one value at Salesforce. So it makes sense that Salesforce uses Large Language Models (LLMs) in a secure and trusted way. The key to maintaining this trust is the Einstein Trust Layer. The Einstein Trust Layer ensures generative AI is secure by using data and privacy controls that are seamlessly integrated into the Salesforce end-user experience. These controls let Einstein deliver AI that securely uses retrieval augmented generation (RAG) to ground responses with your customer and company data, without introducing potential security risks. In its simplest form, the Einstein Trust Layer is a sequence of gateways and retrieval mechanisms that work together to enable trusted and open generative AI.
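To make the grounding idea concrete, here is a minimal, generic sketch of the RAG pattern in Python. It is not the Einstein Trust Layer implementation; the retrieval helper and record fields are hypothetical stand-ins for a real search over your customer and company data.

```python
# Conceptual sketch of retrieval augmented generation (RAG) grounding.
# This is NOT the Einstein Trust Layer implementation. The retrieve_records
# helper and record fields below are hypothetical, for illustration only.

def retrieve_records(query: str) -> list[dict]:
    """Hypothetical retrieval step: look up CRM records relevant to the query."""
    # In a real system this would be a semantic or keyword search over
    # customer and company data.
    return [
        {"name": "Acme Corp", "open_case": "Shipment delayed at customs"},
    ]

def build_grounded_prompt(user_request: str) -> str:
    """Combine the user's request with retrieved data so the LLM answers
    from trusted context instead of guessing."""
    records = retrieve_records(user_request)
    context = "\n".join(f"- {r['name']}: {r['open_case']}" for r in records)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Request: {user_request}"
    )

print(build_grounded_prompt("Summarize open issues for Acme Corp"))
```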
Trusted Salesforce Agents
Agentforce agents use leading LLMs through the Einstein Trust Layer, applying RAG to build secure prompts with Salesforce and Data Cloud data. This creates a rich and secure environment for AI agents that support employees and customers. These agents don't just offer suggestions; they can complete tasks independently. For example, they can handle customer inquiries, troubleshoot issues, and even make sales recommendations without human intervention, all while the Trust Layer secures the data and helps them deliver trusted responses.
Choose the Best Large Language Model
All Agentforce reasoning engine calls use OpenAI GPT-4o, and in some cases Azure OpenAI GPT-4o, as the default model. However, you can choose other models that support your business needs. Choosing the right model for the right task helps you get started with generative AI faster and achieve the results you expect. Salesforce provides deployment capabilities for many different LLMs while also helping companies meet their data privacy, security, residency, and compliance goals.
Your business may choose to use more than one LLM to handle different types of use cases, like coding, sentiment analysis, or content generation. When choosing a model for a use case, keep the model capabilities, cost, response quality, and speed in mind. You can also choose models that are geo-aware. These models automatically route LLM requests to a nearby data center based on where Data Cloud is provisioned for your org. This gives you greater control over data residency and reduces latency.
Use Salesforce-Managed LLMs
Salesforce-managed LLMs are a great way to access LLMs across the internet and get started using generative AI quickly. You can customize your AI implementation with different Salesforce-managed models using the Models API or Prompt Builder. Salesforce offers a variety of models that are enabled by default to help speed up the configuration process.
For a list of current Salesforce-managed models, visit the Large Language Model Support help documentation.
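For example, one way to reach a Salesforce-managed model programmatically is through the Models REST API mentioned above. The sketch below is a hedged illustration: the endpoint path, model name, and authentication details are assumptions based on the Models API pattern, so confirm them against the Models API documentation for your org before relying on them.

```python
import requests

# Hedged sketch: calling a Salesforce-managed model through the Models REST API.
# The endpoint path, model name, and headers below are illustrative assumptions;
# confirm them against the current Models API documentation for your org.

ACCESS_TOKEN = "<OAuth access token from your connected app>"
MODEL_NAME = "sfdc_ai__DefaultOpenAIGPT4Omni"  # assumed example model name

response = requests.post(
    f"https://api.salesforce.com/einstein/platform/v1/models/{MODEL_NAME}/generations",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"prompt": "Draft a short follow-up email about a delayed shipment."},
)
response.raise_for_status()
print(response.json())
```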
Use Salesforce-Hosted Third-Party LLMs
You can also host models inside of Salesforce. As part of Salesforce’s commitment to an open ecosystem, Einstein is designed to host LLMs from Amazon, Anthropic, Cohere, and others entirely within the Salesforce infrastructure, so customer prompts and responses stay inside that infrastructure. In addition, Salesforce and OpenAI have established a shared trust partnership to securely deliver content through the Einstein Trust Layer.
Bring Your Own Large Language Model (BYOLLM)
If you’re already investing in your own LLM, you can connect it to Salesforce and use it within custom Prompt Builder templates. You can benefit from Einstein even if you’ve trained your own domain-specific models outside of Salesforce and store data on your own infrastructure. When you execute a prompt through a connected external LLM, it works just like a Salesforce-managed model: the request is routed through the LLM Gateway and the Einstein Trust Layer before the content is shared with your users.
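Because that routing is transparent, executing a custom Prompt Builder template backed by your own connected model can look like any other prompt execution. The sketch below assumes a Connect REST endpoint for prompt template generations and a hypothetical template named Follow_Up_Email; the exact path, API version, input parameter names, and request body shape should be verified against the current Prompt Builder and Connect API documentation.

```python
import requests

# Hedged sketch: executing a custom Prompt Builder template whose underlying
# model is one you connected yourself (BYOLLM). The endpoint path, API version,
# template name, and request body are assumptions; verify them against the
# current Einstein prompt template Connect REST API documentation.

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # your org's My Domain
ACCESS_TOKEN = "<OAuth access token>"
TEMPLATE_API_NAME = "Follow_Up_Email"  # hypothetical template API name

response = requests.post(
    f"{INSTANCE_URL}/services/data/v61.0/einstein/prompt-templates/"
    f"{TEMPLATE_API_NAME}/generations",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    # Input parameter names depend on how the template is defined in Prompt Builder.
    json={
        "inputParams": {
            "valueMap": {
                "Input:Contact": {"value": {"id": "003xx0000000001AAA"}}
            }
        },
        "isPreview": False,
    },
)
response.raise_for_status()
print(response.json())
```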
The BYOLLM options are evolving quickly, so keep an eye on the resources below for updates.
Resources
- Trailhead: The Einstein Trust Layer
- Trailhead: Large Language Model Data Masking in the Einstein Trust Layer
- Salesforce Help: Einstein Trust Layer: Designed for Trust
- Salesforce Help: Bring Your Own Model
- Trailhead: Prepare Your Data for AI
- Salesforce: What are LLMs (Large Language Models)?
- News and Insights: Salesforce Launches BYOM to Make It Easy for Businesses to Use Proprietary Data to Build and Deploy AI Models
- Developers’ Blog: Bring Your Own AI Models to Data Cloud