
Learn About Factors Affecting Scale

Learning Objectives

After completing this unit, you’ll be able to:

  • Explain the challenges that negatively impact scaling.
  • Recall three ways to address a lack of proper bulkification.
  • Recognize the effects of a poor sharing model.
  • Discuss the four decisions necessary to assess scale.

When designing solutions, it’s important to design for the long term. If you can support one customer today, think about how you’d support millions tomorrow. Have you ever heard the phrase “Details matter”? Well, in the case of scaling your org and maximizing its performance, the details are exactly what you need to focus on. If your org lacks proper bulkification and governance, has locking issues, or relies on heavy Apex post-processing, its ability to scale suffers directly. In turn, this becomes critical to your customers’ success: if your org can’t receive data at volume, the products that depend on it could become useless.

Let’s review these pitfalls and figure out how they can be addressed to better support your own architectures.

Lack of Proper Bulkification

Bulkifying transactions improves response times, reduces resource consumption, and helps you avoid hitting limits. On the other hand, a lack of proper bulkification, especially in trigger code, hinders scale. It’s imperative to have good developers on your team who:

  1. Make a habit of optimizing their trigger code (see the sketch after this list).
  2. Follow Apex best practices.
  3. Use optimal batch sizes to improve bulkification if they’re using Bulk V1.
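
For reference, here’s a minimal sketch of what bulkified trigger code looks like. It collects IDs across the whole batch, queries once, and keeps SOQL and DML out of loops; the object and field choices are illustrative, not taken from this module.

trigger OpportunityTrigger on Opportunity (before insert) {
    // Collect the parent Account IDs across the entire incoming batch,
    // instead of querying inside a loop (one query per record burns limits).
    Set<Id> accountIds = new Set<Id>();
    for (Opportunity opp : Trigger.new) {
        if (opp.AccountId != null) {
            accountIds.add(opp.AccountId);
        }
    }

    // One SOQL query covers the whole batch of up to 200 records.
    Map<Id, Account> accounts = new Map<Id, Account>(
        [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
    );

    // Work against the in-memory map; no SOQL or DML inside the loop.
    for (Opportunity opp : Trigger.new) {
        Account acct = accounts.get(opp.AccountId);
        if (acct != null) {
            opp.Description = 'Account industry: ' + acct.Industry;
        }
    }
}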

Consolidate multiple API calls into a single call using Composite APIs. When using Bulk API, optimize your batch size to avoid retries and timeouts. Batch size is a top reason why batch jobs fail: if the batch size is too high, the system can’t process the batch within the allotted 10 minutes, and the batch is suspended.
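
To make the consolidation idea concrete, here’s a hedged sketch of an Apex callout that creates an account and a related contact in one Composite API request instead of two separate calls. The Named Credential My_Org, the API version, and the record values are assumptions for the example.

// One Composite API call replaces two separate round trips.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_Org/services/data/v59.0/composite/');
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
req.setBody(JSON.serialize(new Map<String, Object>{
    'allOrNone' => true,
    'compositeRequest' => new List<Object>{
        new Map<String, Object>{
            'method' => 'POST',
            'url' => '/services/data/v59.0/sobjects/Account',
            'referenceId' => 'newAccount',
            'body' => new Map<String, Object>{ 'Name' => 'Cloud Kicks' }
        },
        new Map<String, Object>{
            'method' => 'POST',
            'url' => '/services/data/v59.0/sobjects/Contact',
            'referenceId' => 'newContact',
            // Reference the account created earlier in the same call.
            'body' => new Map<String, Object>{
                'LastName' => 'Ortiz',
                'AccountId' => '@{newAccount.id}'
            }
        }
    }
}));
HttpResponse res = new Http().send(req);
System.debug(res.getBody());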

Salesforce times out batches that run for more than 10 minutes. If your org uses Bulk API 2.0, batching is handled automatically, which resolves batch size issues for you. With V1, customers typically start with a higher batch size (10,000) and then reduce it if they run into timeouts.

Note

If you’re running Bulk V1 and hitting timeouts, it’s appropriate to move the batch size down to 7,000, then to 5,000, and so on, until batches stop timing out. A smaller batch size reduces throughput, but it also eliminates locking contention and minimizes retry attempts. If timeouts are continuously a problem, transition to Bulk API 2.0.

Lack of Governance

When it comes to governance, there are two avenues that affect scalability.

  1. Running multiple conflicting integrations
  2. A poor data/sharing model

It’s important to avoid running multiple conflicting integrations in parallel. For instance, one integration inserts cases for Account 1 while a second integration simultaneously inserts contacts for the same account. Because inserting child records locks the parent account record, the two integrations contend for the same lock. This lack of governance is common when multiple functional and technical teams work on the same application without an overall picture of the integrations that make up the application.

A poor sharing model also affects scalability. The data and sharing model is the foundation of your application. If it’s designed poorly, everything built on it, including integrations, reporting, and single sign-on (SSO), is destined to fail.

Having a running dialogue with your customer helps you understand all of their needs, both now and in the future. Deciding whether data transactions need to happen in real time or near real time is key to avoiding governor limits: work that doesn’t need an immediate response can be deferred, offloading many queries, updates, and inserts to asynchronous processing. Knowing this also simplifies the process of selecting an integration. If the data doesn’t need to live on the platform, techniques like Salesforce Connect become viable.

Locking

Customers often upload large amounts of data while also maintaining integrations with other systems that update their data. If you have multiple integrations trying to update the same record at the same time, it can lead to record locking. 

Record-level database locking is in place to preserve data integrity while updates are happening. These locks don’t carry the same performance risks as org-wide locks, but they can still cause updates to fail. It’s important that customers avoid running conflicting updates to the same records in different threads.
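
One way to serialize conflicting updates within Apex is to take an explicit record lock with SELECT ... FOR UPDATE, so a competing transaction waits for the lock instead of failing with UNABLE_TO_LOCK_ROW. A minimal sketch, with an illustrative account Id:

// Lock the parent account before touching its children.
Id accountId = '001000000000001AAA'; // illustrative Id
try {
    // FOR UPDATE holds a record lock for the rest of this transaction.
    Account acct = [SELECT Id FROM Account WHERE Id = :accountId FOR UPDATE];

    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acct.Id];
    for (Contact c : contacts) {
        c.Description = 'Updated under a parent lock';
    }
    update contacts;
} catch (QueryException e) {
    // The lock wasn't granted within the wait window; retry or defer the work.
    System.debug('Lock contention: ' + e.getMessage());
}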

Let’s look at an example. Cloud Kicks, a custom shoe manufacturing company, has 150,000 contacts that were generated from marketing campaigns for a new sneaker brand release, but those contacts aren’t associated with any business account. Because contacts are required to be associated with accounts, it may seem simple to assign all of the new contacts to a single dummy account that has no real relationship to them. Doing so creates data skew: the optimal maximum is about 10,000 child records under a single parent account. If Cloud Kicks assigns 150,000 contacts to one account, record locking and sharing recalculation cause performance problems when it’s time to do maintenance.
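
A common mitigation, sketched below, is to spread the contacts round-robin across enough placeholder accounts to stay under roughly 10,000 children each. This sketch assumes the placeholder accounts already exist; the account naming, queries, and chunk size are illustrative.

// Fetch the pre-created placeholder accounts (15 would cover 150,000 contacts).
List<Account> placeholders = [
    SELECT Id FROM Account WHERE Name LIKE 'Marketing Placeholder %'
];
// Process orphaned contacts in chunks that respect the 10,000-row DML limit.
List<Contact> orphans = [
    SELECT Id, AccountId FROM Contact WHERE AccountId = null LIMIT 10000
];

Integer i = 0;
for (Contact c : orphans) {
    // Round-robin assignment keeps each parent well under 10,000 children.
    c.AccountId = placeholders[Math.mod(i, placeholders.size())].Id;
    i++;
}
update orphans;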

Heavy Apex Post-Processing

We’ve all heard how important the order of execution is. You may have even witnessed developers write 12 to 15 triggers on the same object. Because Salesforce doesn’t guarantee the order in which multiple triggers on the same object fire, that approach leads to heavy recursion and data integrations that take more time purely due to bad design.
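
A common fix is the one-trigger-per-object pattern: a single trigger that delegates to a handler class, replacing a pile of competing triggers. A minimal sketch follows (the trigger and the class live in separate files; names are illustrative).

trigger CaseTrigger on Case (before insert, before update, after insert) {
    // The single trigger only routes; all logic lives in the handler.
    if (Trigger.isBefore && Trigger.isInsert) {
        CaseTriggerHandler.beforeInsert(Trigger.new);
    } else if (Trigger.isBefore && Trigger.isUpdate) {
        CaseTriggerHandler.beforeUpdate(Trigger.new, Trigger.oldMap);
    } else if (Trigger.isAfter && Trigger.isInsert) {
        CaseTriggerHandler.afterInsert(Trigger.new);
    }
}

public class CaseTriggerHandler {
    // A simple static guard keeps re-entrant updates from recursing.
    private static Boolean hasRun = false;

    public static void beforeInsert(List<Case> newCases) { /* bulkified logic */ }
    public static void beforeUpdate(List<Case> newCases, Map<Id, Case> oldMap) { /* ... */ }

    public static void afterInsert(List<Case> newCases) {
        if (hasRun) return;
        hasRun = true;
        // Bulkified post-processing goes here.
    }
}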

Have you ever tried various patterns but still didn’t get your desired throughput? Your first thought may be that the problem lies with the pattern you chose. However, Apex post-processing could be the culprit affecting your throughput. Take a second to look at your Apex post-processing, process flows, and triggers and make sure they’re optimal.

Verify that Apex post-processing is minimized; this is a key factor when using Bulk API. A great way to identify whether a customer is hogging resources is to find out how much time each batch takes. If your batches consistently take longer than usual during a bulk load, the large increase in latency likely comes from heavy Apex post-processing.

Assessing the Integration You Need

Four decisions need to be made to accurately assess how to scale. These decisions also help you define the service level agreement (SLA).

Source and Target

It’s important to define what your source and target are. As the driver for creating the requirements, you need to know which system initiates the integration and which systems are integrated with the source.

Type

This characteristic focuses on which type of integration you’ll use. Is the integration at the presentation, business process, or data layer?

Data Volume

What is the projected amount of data and transactions across the systems? When you think about data volume, don’t consider just the number of rows; also keep in mind the size of the data, because you need to understand how much throughput has to be handled.

Timing

Timing is equally important because it dictates the pattern you select. Do you want the source to wait for a real-time response, as with synchronous timing? Or does the source initiate the request and then continue processing, as with asynchronous timing?
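
As a rough illustration of asynchronous timing in Apex, the caller can enqueue the work and continue processing instead of waiting on a response. The class name and the work it performs are assumptions for this sketch.

// Queueable work runs later, outside the caller's transaction.
public class SyncContactsJob implements Queueable, Database.AllowsCallouts {
    private List<Id> contactIds;

    public SyncContactsJob(List<Id> contactIds) {
        this.contactIds = contactIds;
    }

    public void execute(QueueableContext ctx) {
        // Push the contacts to the external system here; the caller's
        // transaction returned long before this code runs.
    }
}

// Caller: fire and forget, then keep going.
System.enqueueJob(new SyncContactsJob(new List<Id>{ /* ids */ }));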

As you progress through this module, keep these four decisions in mind. Then take the time to answer these additional questions.

  1. Is it necessary to move the data?
  2. Is it a view-only requirement?
  3. How often does the data need to be moved?
  4. Which object will be used?
  5. What is the volume of data per hour and at its peak?
  6. What is the total anticipated volume?

Understanding the source and target, the type of integration, the data volume, and the timing helps you, as a developer or architect, decide how to move forward.

Up Next

The key to scaling is locating the bottlenecks long before your users do. Often the culprit is how your API calls are constructed and, in turn, how the data is post-processed. Keeping these details in mind as you maximize your API’s performance puts you on the right track for customer success.

In the next few units, we look at two scenarios that intertwine the factors above and affect how your org scales for the future. As you read through the scenarios, remember that it’s important to fully understand each concept before applying a strategy to the business problem.
