Get to Know Fine-Tuning

Learning Objectives

After completing this unit, you’ll be able to:

  • Describe fine-tuning.
  • Describe the mechanics of fine-tuning.

Before You Start

This badge contains terms and ideas that are covered in the Natural Language Processing Basics, Large Language Models, and Prompt Fundamentals badges. We recommend that you earn those badges first.

A Quick Refresher

Large language models (LLMs), like OpenAI’s GPT series of models, are massive neural networks trained to understand and generate human-like text. They’re trained on vast amounts of data, so they have a broad, general knowledge base.

What Is Fine-Tuning?

Fine-tuning is the process of taking a vast, broad, general pretrained language model and further training (or “tuning”) it on a smaller, task-specific dataset. For LLMs, this means transforming a general-purpose base model into a specialized model for a particular use case. Fine-tuning doesn’t start from scratch: it builds on the pretrained model and adjusts its weights so the model performs better on that task.

In other words... Say you have a digital assistant that can cook just about any dish pretty well. It can make a basic version of anything, but you’re looking for an amazing Italian dish, just like the one you remember from a trip to Venice, one that captures the nuances of Italian cuisine. To get there, you’d expose the assistant to more Italian recipes and techniques, refining the skills it already has. That’s similar to what happens in fine-tuning.

Few-Shot Learning

Few-shot learning adapts a model to a task by including a small number of task-specific examples directly in the prompt, which helps the model perform better without changing its weights. You can already do this with prompt design and the base LLM: you include instructions, and sometimes several examples, in the prompt. In a sense, you’re prefeeding the prompt with a small dataset that’s relevant to the task.

Fine-tuning improves on few-shot learning by training on a much larger set of examples than can fit in the prompt. This extended training can result in better performance on specific tasks. After a model has been fine-tuned, you won’t need to provide as many examples in the prompt. This saves costs and enables faster requests and responses.
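
To make this concrete, here’s a small, hypothetical Python sketch of a few-shot prompt for sentiment classification. The reviews and labels are invented for illustration; the key idea is that the examples live in the prompt itself, not in the model’s weights.

```python
# A few-shot prompt: task instructions plus a handful of labeled examples,
# all packed into the prompt that's sent to a base LLM.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The pasta was perfectly cooked and the service was wonderful."
Sentiment: Positive

Review: "We waited an hour and the food arrived cold."
Sentiment: Negative

Review: "The tiramisu alone is worth the trip."
Sentiment:"""

# A model fine-tuned on many labeled reviews could handle the same request
# with a much shorter prompt, saving tokens on every call.
fine_tuned_prompt = 'Review: "The tiramisu alone is worth the trip."\nSentiment:'
```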

Mechanics of Fine-Tuning

Let’s go over some of the necessary steps to fine-tune an LLM.

Select the Specialized Dataset

The first step is to choose a dataset that is representative of the specific task you’re interested in. This dataset is usually much smaller than the one used for initial training. Focus on these key areas.

  • The selected dataset should align with the specific task or domain you’re targeting. For instance, if you’re tuning a model for medical diagnoses based on patient notes, your dataset should consist of relevant clinical notes and their corresponding diagnoses.
  • Data quality, as always, is important for the specialized data. Fine-tuning often uses a smaller, more focused dataset, but it’s essential to have enough data to capture the nuances of the specific task. Noisy data, filled with errors or irrelevant information, can hamper the fine-tuning process, so it’s crucial to preprocess and clean the data (see the sketch after this list).
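
To illustrate that preparation step, here’s a minimal Python sketch that filters out noisy records and writes the remaining examples to a JSONL file, a format many fine-tuning pipelines accept. The field names, the example notes, and the cleaning rules are assumptions for illustration, not any specific provider’s required schema.

```python
import json

# Hypothetical task-specific examples (prompt/completion pairs).
raw_examples = [
    {"prompt": "Patient reports persistent cough and fever for 5 days.",
     "completion": "Possible respiratory infection; recommend chest X-ray."},
    {"prompt": "  ", "completion": "Incomplete note."},  # noisy record to be filtered out
]

def is_clean(example):
    """Basic quality checks: non-empty fields and a minimum prompt length."""
    prompt = example["prompt"].strip()
    completion = example["completion"].strip()
    return len(prompt) > 10 and len(completion) > 0

# Keep only well-formed examples and write them as JSON Lines.
with open("fine_tune_data.jsonl", "w") as f:
    for example in raw_examples:
        if is_clean(example):
            f.write(json.dumps(example) + "\n")
```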

Adjust the Model

While the core architecture of the model being fine-tuned remains the same, certain hyperparameters (like learning rate) might be adjusted to suit the nuances of the new dataset.
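
As an example, if you fine-tune with the Hugging Face Transformers library (an assumed toolkit; this unit doesn’t prescribe one), hyperparameters like the learning rate live in a training configuration rather than in the model architecture itself. The values below are illustrative, not recommendations.

```python
from transformers import TrainingArguments

# A sketch of fine-tuning hyperparameters; in practice you'd tune these
# for your dataset and task.
training_args = TrainingArguments(
    output_dir="./fine_tuned_model",
    learning_rate=2e-5,              # usually much smaller than for pretraining
    num_train_epochs=3,              # a few passes over the small dataset
    per_device_train_batch_size=8,
    warmup_steps=100,
    weight_decay=0.01,               # also a regularization technique (covered below)
)
```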

Continue Training

Instead of starting the training from scratch, you continue training the pretrained model on the new dataset. Since the model has already learned a lot of general knowledge, it can quickly pick up the specifics from the new dataset.
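
Continuing the Hugging Face sketch (again, an assumed toolkit, with a tiny hypothetical record standing in for your specialized dataset), the important part is that training starts from pretrained weights instead of from scratch.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer

# Load a model that already carries broad, general knowledge from pretraining.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# A stand-in for your specialized dataset.
texts = ["Patient reports persistent cough and fever. Diagnosis: possible respiratory infection."]

def tokenize(batch):
    # For causal language modeling, the labels mirror the input tokens.
    tokens = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)
    tokens["labels"] = [ids.copy() for ids in tokens["input_ids"]]
    return tokens

train_dataset = Dataset.from_dict({"text": texts}).map(tokenize, batched=True)

# Continue training from the pretrained weights, using the hyperparameters
# defined in the previous sketch.
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```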

Apply Regularization Techniques

To prevent the model from becoming too adapted to the new dataset (a phenomenon called overfitting), techniques like dropout or weight decay might be employed.
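
Here’s what those two techniques look like in plain PyTorch (an assumption for illustration; they’re usually exposed as settings in whatever fine-tuning stack you use).

```python
import torch
from torch import nn

# Dropout: during training, randomly zero a fraction of activations so the
# model can't rely too heavily on any single feature of the small dataset.
layer = nn.Sequential(
    nn.Linear(768, 768),
    nn.Dropout(p=0.1),   # drop 10% of activations at training time
)

# Weight decay: penalize large weights so the fine-tuned model favors a
# simpler solution, which helps prevent overfitting.
optimizer = torch.optim.AdamW(layer.parameters(), lr=2e-5, weight_decay=0.01)
```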

Sum It Up

Fine-tuning is a powerful tool to adapt large, general models to specific tasks. However, like any tool, its success depends on the techniques you use and the considerations you take into account while applying it. The next unit covers why you might want to fine-tune your LLM.
