
Prepare for Fine-Tuning

Learning Objectives

After completing this unit, you’ll be able to:

  • Explain dataset preparation for fine-tuning.
  • Give a high-level account of the fine-tuning process.

Prepare Your Dataset

The first step is preparing the task-specific dataset for fine-tuning. This may include data cleaning, text normalization, and converting the data into a format that matches the LLM’s input requirements, such as labeling each input with its expected output. It is essential that the data is representative of the task and domain, and that it covers the range of scenarios the model is expected to encounter in production. Here’s how you can get your dataset ready for fine-tuning.
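
For example, many fine-tuning pipelines expect labeled examples in a JSON Lines (JSONL) format, one example per line. A minimal sketch in plain Python; the prompt/completion field names here are illustrative, not a specific provider’s schema:

```python
import json

# Hypothetical labeled examples in a prompt/completion format.
# Field names vary by provider; these are illustrative only.
examples = [
    {"prompt": "Classify the sentiment: 'Great product!'", "completion": "positive"},
    {"prompt": "Classify the sentiment: 'Arrived broken.'", "completion": "negative"},
]

# Serialize one JSON object per line (JSONL), a common fine-tuning input format.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl)
```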

Data Collection

Accumulate relevant data for the specific domain or task. This might involve gathering user interactions or using domain-specific data.

Data Cleaning

Remove irrelevant data, correct errors, and, where necessary, anonymize sensitive information.
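
A minimal cleaning pass might normalize whitespace and case, and mask personally identifiable details. The sketch below (plain Python, with illustrative rules) masks email addresses as one simple form of anonymization:

```python
import re

def clean_record(text: str) -> str:
    """Minimal cleaning sketch: normalize whitespace and case, then
    mask email addresses (one simple form of anonymization)."""
    text = text.strip().lower()
    text = re.sub(r"\s+", " ", text)  # collapse runs of whitespace
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)  # mask emails
    return text

cleaned = clean_record("  Contact me at Jane.Doe@example.com   please! ")
print(cleaned)  # "contact me at [EMAIL] please!"
```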

Dataset Splitting

Partition your data into training, validation, and test sets. The model trains on the training set, hyperparameters are tuned using the validation set, and performance is evaluated on the test set.
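
The split itself can be as simple as a seeded shuffle followed by slicing. A sketch in plain Python, assuming an 80/10/10 split:

```python
import random

def split_dataset(records, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle and partition records into train, validation, and test sets.
    The remaining fraction (here 0.1) becomes the test set."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```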

Configure Your Model

Selecting the appropriate base model and fine-tuning method depends on the specific task and data available. There are various LLM architectures to choose from, including GPT-3.5 Turbo, BERT, and RoBERTa, each with its own strengths and weaknesses. The fine-tuning method can also vary based on the task and data, such as transfer learning, sequential fine-tuning, or task-specific fine-tuning.

Model Selection

Consider the following when choosing your base model.

  • Whether the model fits your specific task
  • Input and output size of the model
  • Your dataset size
  • Whether your technical infrastructure can supply the computing power that fine-tuning requires

Architecture Selection

Adjust certain components depending on the task, such as the final layer for classification tasks. Note that the core model architecture will remain the same.

Hyperparameter Choices

Determine values for learning rate, batch size, number of epochs, and regularization parameters. Sometimes, a smaller learning rate is preferred as aggressive updates might make the model forget its pretrained knowledge.

Fine-Tuning Your Model

After the LLM and fine-tuning method have been selected, the pretrained model needs to be loaded into memory. This step initializes the model’s weights based on the pretrained values, which speeds up the fine-tuning process and ensures that the model has already learned general language understanding.

Initialize with Pretrained Weights

Start with the weights from the pretrained model. This is the essence of transfer learning, leveraging knowledge from previous training.

Adaptive Learning

In some advanced scenarios, you might employ techniques that adapt the learning rate for different layers. For instance, earlier layers (which capture general features) might be updated with smaller learning rates compared to the later layers.
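
One common form of this is layer-wise learning-rate decay. The sketch below (with an illustrative base rate and decay factor) gives the final layer the full rate and scales each earlier layer’s rate down:

```python
def layerwise_learning_rates(num_layers, base_lr=2e-5, decay=0.9):
    """Layer-wise learning-rate decay sketch: the final (most task-specific)
    layer trains at the full base rate, and each earlier layer's rate is
    multiplied by `decay` once more, so general-purpose early layers
    change the least."""
    return [base_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]

rates = layerwise_learning_rates(4)
# rates[0] (earliest layer) is the smallest; rates[-1] equals base_lr
```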

Regularization

Techniques like dropout, weight decay, or layer normalization can be crucial to prevent overfitting, especially when the fine-tuning dataset is relatively small.
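
As one example, dropout randomly zeroes activations during training. A minimal sketch of inverted dropout in plain Python:

```python
import random

def dropout(activations, p=0.5, seed=0):
    """Inverted-dropout sketch: zero each activation with probability p
    and scale survivors by 1/(1 - p), so the layer's expected output is
    unchanged. Applied only during training, never at inference time."""
    rng = random.Random(seed)  # seeded here only to keep the sketch reproducible
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]

out = dropout([1.0, 2.0, 3.0, 4.0], p=0.5)
# each element is either 0.0 (dropped) or double its input (kept, scaled by 1/0.5)
```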

Monitor and Evaluate Your Model

This step involves training the pretrained LLM on the task-specific dataset, optimizing the model’s weights and parameters to minimize the loss function and improve its performance on the task. Fine-tuning may involve several rounds of training on the training set, validation on the validation set, and hyperparameter tuning to optimize the model’s performance.

Track Loss and Metrics

Continuously monitor the loss on your training and validation sets during training. This helps in detecting overfitting or issues in training.

Early Stopping

Halt training if performance on the validation set starts degrading even as performance on the training set keeps improving; this divergence is a sign of overfitting. Stopping early prevents the model from fitting too closely to the training data.
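
The logic can be sketched in a few lines: track the best validation loss seen so far and stop after `patience` epochs without improvement (the loss values below are made up for illustration):

```python
def train_with_early_stopping(val_losses, patience=2):
    """Early-stopping sketch: stop once validation loss fails to improve
    for `patience` consecutive epochs; return the epoch of the best loss."""
    best_loss, best_epoch, bad_epochs = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # validation loss is degrading: stop training
    return best_epoch

# Validation loss improves for three epochs, then rises: stop early.
best = train_with_early_stopping([0.9, 0.7, 0.6, 0.65, 0.7, 0.8])
print(best)  # 2
```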

Evaluation Metrics

Use appropriate metrics (like accuracy, F1 score, or BLEU score) to gauge the model’s performance on the test set. The right metrics depend on the task being performed, such as classification, regression, or generation.
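
For a binary classification task, accuracy and F1 can be computed directly. A plain-Python sketch with made-up labels:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Binary F1: the harmonic mean of precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
print(accuracy(y_true, y_pred))  # 0.6
```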

Make Post Fine-Tuning Adjustments

After the fine-tuning process is complete, the model’s performance needs to be evaluated on the test set. This step helps to ensure that the model is generalizing well to new data and is performing well on the specific task. Common metrics used for evaluation include accuracy, precision, and recall.

Calibration

Adjust the model’s outputs to better reflect true probabilities. Sometimes a fine-tuned model is overconfident or underconfident in its predictions.
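
A common calibration technique is temperature scaling: divide the logits by a temperature T (typically fitted on the validation set) before applying softmax. A sketch with illustrative logits:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Temperature-scaling sketch: dividing logits by T > 1 softens an
    overconfident model's probabilities; T < 1 sharpens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.5]
confident = softmax_with_temperature(logits, temperature=1.0)
calibrated = softmax_with_temperature(logits, temperature=2.0)
# with T > 1 the top-class probability drops, but the predicted class is unchanged
```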

Feedback Loop

Set up a system where end-users can provide feedback on model outputs. This feedback can be used for further rounds of fine-tuning, leading to continuous improvement.

Deploy Your Model

After the fine-tuned model is evaluated, it can be deployed to production environments. The deployment process may involve integrating the model into a larger system, setting up the necessary infrastructure, and monitoring the model’s performance in real-world scenarios.

Model Size

Consider model distillation or pruning after fine-tuning to reduce the model size without significantly compromising performance. The right approach depends on where your model is deployed, such as edge devices or web servers.
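
Magnitude pruning, for instance, zeroes out the smallest weights. A toy sketch on a flat list of weights (real pruning operates on full weight tensors and is usually followed by a brief re-tuning pass):

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Magnitude-pruning sketch: zero out roughly the smallest-magnitude
    `sparsity` fraction of weights, shrinking the effective model."""
    k = int(len(weights) * sparsity)  # number of weights to drop
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = prune_by_magnitude([0.01, -0.8, 0.03, 0.5, -0.02, 0.9], sparsity=0.5)
print(pruned)  # [0.0, -0.8, 0.0, 0.5, 0.0, 0.9]
```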

Sum It Up

While the concept of fine-tuning might sound straightforward, in practice, it involves a series of carefully considered steps and decisions. Each stage, from data preparation to deployment, can significantly impact the model's effectiveness and efficiency in the target domain or task.
