Learn the Basics of the Models API

Learning Objectives

After completing this unit, you’ll be able to:

  • Describe what you can do with the Models API.
  • Explain when to use the Models API.

Before We Begin

In this module, we cover functionality that uses the Models API. As a reminder, the Models API is a pilot or beta service that is subject to the Beta Services Terms at Agreements | Salesforce.com or a written Unified Pilot Agreement if executed by Customer, and applicable terms in the Product Terms Directory. Use of this pilot or beta service is at the Customer's sole discretion.

Introduction to the Models API

The Models API provides Apex classes and REST endpoints that connect your application to large language models (LLMs) from Salesforce partners, including Anthropic, Google, and OpenAI. You can use any Salesforce-enabled model that can be configured in Einstein Studio.

There are four Models API capabilities available through both REST endpoints and Apex methods. Let’s take a closer look.

Models API Key Capabilities

  • Generate text: The Models API can generate text from a single prompt instead of a whole chat conversation. This capability is useful for simple, nonconversational tasks and for testing the capabilities of a model. (See the sketch after this list.)
  • Generate embeddings: An embedding is a numerical representation of a chunk of content, sometimes called an embedding vector. To measure the semantic similarity between two chunks of content, you can use mathematical operations on their embedding vectors, such as cosine similarity, Euclidean distance, or dot product. Embeddings are commonly used for retrieval-augmented generation (RAG) and semantic search features.
  • Generate chat: The Models API can generate a message for a chat conversation. This lets you prompt the model with a list of messages rather than being limited to one prompt. Each message in the list represents a part of the conversation history.
  • Submit feedback: You can provide feedback on any generated text created by the Models API. This data is stored in Data Cloud, and you can use it to review the quality of the responses and then update your requests or your model configurations.
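To make the first capability concrete, here is a minimal Apex sketch of a single-prompt text generation call. It assumes the aiplatform Models API Apex classes are available in your org and that a Salesforce-enabled model such as sfdc_ai__DefaultGPT4Omni is configured in Einstein Studio; verify the exact class, field, and model names against the Models API Apex reference before relying on them.

// Sketch only: class, field, and model names are assumptions to verify
// against the Models API Apex reference for your org and API version.
aiplatform.ModelsAPI.createGenerations_Request request =
    new aiplatform.ModelsAPI.createGenerations_Request();
request.modelName = 'sfdc_ai__DefaultGPT4Omni'; // example model name

aiplatform.ModelsAPI_GenerationRequest body = new aiplatform.ModelsAPI_GenerationRequest();
body.prompt = 'Write a one-sentence summary of current housing market trends.';
request.body = body;

// The call is routed through the Einstein Trust Layer before reaching the model.
aiplatform.ModelsAPI.createGenerations_Response response =
    new aiplatform.ModelsAPI().createGenerations(request);
System.debug(response.Code200.generation.generatedText);

The other capabilities follow the same request/response pattern, each with its own request body class.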

How to Use the Models API

At first glance, the Models API appears similar to Einstein Generative AI Prompt Builder. Connecting an AI model to a Salesforce org and grounding the model in Salesforce data is easily handled with Prompt Templates.

The Models API is designed with flexibility and extensibility in mind, and it complements existing Salesforce AI offerings for developers. Prompt Builder and the Prompt Template Connect API are effective for fast prompt management, and Agentforce provides an interactive chat experience. The Models API adds capabilities like embeddings and chat generations with history, which enable developers to architect custom AI applications.

Now, let’s take a look at a real-world example of the Models API.

DreamHouse Realty needs a way to keep its employees informed on local and national housing market conditions. Home buyers respond better to more targeted communications than DreamHouse Realty has the resources to provide. Younger home buyers are especially sensitive to broader housing market conditions and are doing research to find the best fit. If DreamHouse Realty can crystallize market research using generative AI, the firm can increase buyer and seller confidence.

Maria Garza is a developer at DreamHouse Realty. She is building an internal dashboard that uses the Models API to analyze data from an external Housing Market API and summarize it for DreamHouse Realty employees. Eventually, DreamHouse Realty plans for this dashboard to be a fully functional AI-powered enablement tool. Not only will the dashboard help brokers understand market conditions, but it will also better connect them to their clients through organization data.

For now, Maria is focused on the very first steps: setting up her environment and creating a simple dashboard that leverages the Models API chatGenerations endpoint.
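As a rough illustration of what Maria's first Apex might look like, here is a minimal chat generations sketch. The MarketChatService class name, the prompt, and the housingMarketJson parameter are hypothetical, the aiplatform class and field names are assumptions to verify against the Models API Apex reference, and the model name is just an example of a Salesforce-enabled model configured in Einstein Studio.

public with sharing class MarketChatService {
    // Sketch only: summarizes external housing market data with a chat
    // generations call. Class and field names from the aiplatform namespace
    // are assumptions to verify against the Models API Apex reference.
    public static String summarizeMarket(String housingMarketJson) {
        aiplatform.ModelsAPI.createChatGenerations_Request request =
            new aiplatform.ModelsAPI.createChatGenerations_Request();
        request.modelName = 'sfdc_ai__DefaultGPT4Omni'; // example model name

        // Build the message list; each entry represents part of the conversation history.
        aiplatform.ModelsAPI_ChatMessageRequest userMessage =
            new aiplatform.ModelsAPI_ChatMessageRequest();
        userMessage.role = 'user';
        userMessage.content =
            'Summarize these housing market conditions for a broker: ' + housingMarketJson;

        aiplatform.ModelsAPI_ChatGenerationsRequest body =
            new aiplatform.ModelsAPI_ChatGenerationsRequest();
        body.messages = new List<aiplatform.ModelsAPI_ChatMessageRequest>{ userMessage };
        request.body = body;

        aiplatform.ModelsAPI.createChatGenerations_Response response =
            new aiplatform.ModelsAPI().createChatGenerations(request);

        // Return the first generated chat message.
        return response.Code200.generationDetails.generations[0].content;
    }
}

A Lightning Web Component can then call a method like this through @AuraEnabled Apex to display the summary on the dashboard.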

Maintaining Trust

Trust is Salesforce’s #1 value, so it is crucial that user data stays protected when interfacing with LLMs. Salesforce Einstein generative AI solutions are designed, developed, and delivered based on our five principles for trusted generative AI.

  • Accuracy
  • Safety
  • Transparency
  • Empowerment
  • Sustainability

Salesforce also has agreements with LLM providers, such as OpenAI. These agreements include commitments for zero data retention, letting you use generative AI without concern for your private data being stored by third-party LLM providers.

All Models API calls go through the Einstein Trust Layer. The Einstein Trust Layer is a secure AI architecture, built into the Salesforce Platform. It’s a set of agreements, security technology, and data and privacy controls used to keep your company safe while you explore generative AI solutions.

Generations calls to the Models API automatically perform data masking and toxicity scoring. The API passes back a flag indicating whether toxicity was detected, along with score information, and this information is also stored in Data Cloud. Within Data Cloud, you can view details about toxicity scores, data masking, and feedback data.
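Continuing the earlier text generation sketch, here is one way that flag might be read from the response. The contentQuality and scanToxicity field names are assumptions to verify against the Models API reference.

// Sketch only: 'response' is the createGenerations_Response from the earlier
// text generation sketch. Field names below are assumptions to verify.
Boolean toxicityDetected =
    response.Code200.generation.contentQuality.scanToxicity.isDetected;
if (toxicityDetected) {
    // Don't surface the output; flag it for human review instead.
    System.debug(LoggingLevel.WARN, 'Toxicity detected in the generated text.');
} else {
    System.debug(response.Code200.generation.generatedText);
}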

The Einstein Trust Layer isn’t a replacement for human judgment. If you plan to share generative AI outputs with customers, it’s important to review all responses for accuracy, bias, and toxicity.

Supported Models

The Models API supports large language models (LLMs) from multiple providers, such as Amazon Bedrock, Azure OpenAI, OpenAI, and Vertex AI from Google.

The Models API supports Einstein Studio’s bring your own LLM (BYOLLM) feature. With BYOLLM, you can add a foundation model from a supported provider, configure your own instance of the model, and connect to the model using your own credentials. Although inference is handled by the customer’s model, the request is still routed through the Models API, and Trust Layer features are fully supported.

For more information on the models available directly through the Models API and supported with Einstein Studio’s BYOLLM feature, refer to the Supported Models page in the resources section of this unit.

Now that you know what the Models API is and what it is capable of, it’s time to lay the groundwork for your very own Models API Lightning Web Component!

Resources
