
Understand Scalability at Salesforce

Learning Objectives

After completing this unit, you’ll be able to:

  • Assess how scale impacts a company’s performance on the Salesforce Platform.
  • Identify factors that influence a company’s scale.

Before You Begin

We recommend that you complete the Large Data Volumes module on Trailhead before starting this badge.

What Is Scalability?

Salesforce is a powerful tool for managing data and users within your org. But when the number of users in your org reaches the hundreds of thousands, you need to accommodate the increased traffic and related data flow. You need to adjust your org to a new scale.

An org has good “scalability” when it performs effectively and remains stable regardless of the demands on it. This means the org can handle changes, upgrades, overhauls, and resource reduction without freezing, crashing, or lagging in the UI. In other words, scalability addresses the unique problems that stem from thousands of users using your org at once, all viewing, adding, and deleting data.

Scalability in Everyday Life

Imagine your org is a supermarket. A cashier can check out one shopper per minute. But if two shoppers join the line every minute, the queue grows faster than the cashier can clear it: the second shopper waits two minutes, the third waits three, and so on. The cashier is still completing one checkout per minute, but the line can get so long that customers wait 20 minutes and leave the store. When shoppers join the line faster than the cashier can check them out, you have a scalability issue, and you need additional cashiers to serve the unexpected number of customers. Scalability makes sure that the customer experience remains the same regardless of how many shoppers are in line.

Scalability isn’t a measure of how hard the “cashiers” in your org are working. Even if each one is doing stellar work, what you care about is how long the average customer waits. Scalability means the customer’s experience stays quick and consistent, no matter how many people are in the store.

Customers queued at the cashier to check out.

Why Should I Care About Scalability?

You should care about your org’s scalability now so that you don’t have to worry about it later. Scalability issues that adversely affect platform performance eventually impact trust. 

Incorporating scalability during the design phase maximizes resiliency. Factor scale into your data management plan so that you’re ready when your org has a workload spike. 

Workload is anything that contributes to the demand on your org’s resources. This could be simultaneous users, increased data, or complex transactions. Heavy workloads push the system past its original capacity.

Earlier you learned that an org has good scalability when it can handle changes, upgrades, overhauls, and resource reduction without crashing. But this is more of an extreme case. More precisely, an org is scalable if you don’t need to redesign it to maintain performance during or after a steep increase in workload.

If you prioritize scalability as part of the system design, you ensure a better user experience, lower costs, and higher agility in the long term. Scalability isn’t a “bonus feature”—it determines the lifespan of your org.

Factors Impacting Scalability

Scalability is definitely about workload, but it’s also about how your data is structured and how it’s accessed by the people in your company. It’s not just the amount of data that matters; it’s also important to understand how your data will be used. The following summary covers several key workload and data management factors, along with design considerations to mitigate them. Each of these factors impacts your org and the overall customer experience. Addressing these considerations in your data management plan builds scalability into your org.

Factor: Large data volume (LDV)

Description: Any excessively high number of records on a single object is considered a large data volume. Multiple factors influence data volume, including data skew, overly complex data structures, and access rules.

Impact: Can result in slower queries, searches, list views, and reports, and can cause more timeouts.

Design considerations: Use data modeling and governance to create an archival strategy. Also consider using custom indexes, skinny tables, and data virtualization with Salesforce Connect. Avoid data skew. Disable triggers during large data loads if possible.
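
One common way to put an archival strategy into practice is a Batch Apex job that copies old records to an archive object and then deletes the originals. The sketch below is a minimal, hypothetical example, not part of this module: it assumes a custom Case_Archive__c object with a Subject__c text field exists in your org.

```
// Hypothetical Batch Apex sketch: archive closed cases older than two years.
// Assumes a custom object Case_Archive__c with a Subject__c text field.
global class CaseArchiveBatch implements Database.Batchable<SObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Selective filters (IsClosed, CreatedDate) keep this query efficient
        // even when the Case object holds a large data volume.
        return Database.getQueryLocator(
            'SELECT Id, Subject FROM Case ' +
            'WHERE IsClosed = true AND CreatedDate < LAST_N_YEARS:2'
        );
    }

    global void execute(Database.BatchableContext bc, List<Case> scope) {
        List<Case_Archive__c> archives = new List<Case_Archive__c>();
        for (Case c : scope) {
            archives.add(new Case_Archive__c(Subject__c = c.Subject));
        }
        insert archives; // copy the data into the archive object
        delete scope;    // then remove the originals to shrink the live object
    }

    global void finish(Database.BatchableContext bc) {
        // Optionally notify an admin or chain another batch here.
    }
}
```

You’d kick this off with Database.executeBatch(new CaseArchiveBatch(), 200), which processes the records in chunks of up to 200.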

Factor: User registration rate

Description: A large number of on-demand new user signups.

Impact: Can result in more timeouts.

Design considerations: Wherever applicable, create the user accounts in advance.

Factor: Complexity of transactions

Description: A combination of triggers, workflows, UI transactions, and asynchronous events.

Impact: Running concurrent triggers, workflows, async batch jobs, and data sync jobs can lead to locking scenarios.

Design considerations: Design transactions using a combination of low-code and pro-code approaches, focusing on low-code solutions to meet UX requirements and pro-code solutions to leverage automation, as detailed in the Architect Decision Guides. Run non-business-critical batch jobs and data sync operations at off-peak hours.
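
To illustrate the off-peak recommendation, batch work can be shifted to a quiet window with the Schedulable interface. This is a minimal sketch that reuses the hypothetical archival batch from the earlier example; the 2:00 AM run time is just an illustration.

```
// Hypothetical sketch: run the archival batch nightly at 2:00 AM, off-peak.
global class NightlyArchiveScheduler implements Schedulable {
    global void execute(SchedulableContext sc) {
        // CaseArchiveBatch is the Database.Batchable class sketched earlier.
        Database.executeBatch(new CaseArchiveBatch(), 200);
    }
}

// One-time setup, for example from Anonymous Apex
// (cron fields: seconds minutes hours day-of-month month day-of-week):
// System.schedule('Nightly case archive', '0 0 2 * * ?', new NightlyArchiveScheduler());
```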

Factor: Complexity of org

Description: A large number of custom components, large numbers of users in a variety of roles, sharing rules, and multiple packages.

Impact: Cyclical and recursive triggers can result in locking scenarios and slower transactions, especially with a deeper role hierarchy.

Design considerations: Avoid cyclical or recursive triggers.

Factor: Complexity of role hierarchy

Description: A large number of users with a large number of different roles.

Impact: Performance and scale issues arise as role depth increases, and data management becomes more complex.

Design considerations: Keep the role hierarchy to fewer than 10 levels of branches, and don’t mirror your org chart in the role hierarchy. Group roles based on the data access each role needs, consolidating different job titles into a single role wherever possible.

Factor: Complexity of sharing rules

Description: Redundant sharing rules and excessive sharing rule calculations.

Impact: Slower transactions that can disrupt business operations, especially when the role hierarchy is changed during business hours.

Design considerations: Plan and design your sharing rules to limit the number of ownership-based sharing rules to 100 per object and the number of criteria-based sharing rules to 50 per object. Consider deferring sharing rule calculations during data loads.

Factor: Number of triggers

Description: An excessive number of triggers on your objects.

Impact: Excessive use of triggers is difficult to maintain in a large enterprise implementation.

Design considerations: Only use triggers when necessary, and disable them during large data loads if possible. Keep trigger logic simple: use only one trigger per object, with a consistent naming convention that includes the object name. Bulkify “helper” classes and methods, as well as triggers, so that each call can process up to 200 records. Use collections in DML statements and in SOQL WHERE clauses.
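
As an illustration of the one-trigger-per-object and bulkification patterns, here’s a minimal sketch of a Contact trigger that delegates to a handler class. The handler name and the field it populates are assumptions for this example.

```
// ContactTrigger.trigger: one trigger per object, named after the object,
// delegating all logic to a handler class.
trigger ContactTrigger on Contact (before insert, before update) {
    ContactTriggerHandler.setAccountIndustry(Trigger.new);
}

// ContactTriggerHandler.cls: bulkified logic that handles up to 200 records per call.
public class ContactTriggerHandler {
    public static void setAccountIndustry(List<Contact> contacts) {
        // Collect parent Ids once instead of querying inside the loop.
        Set<Id> accountIds = new Set<Id>();
        for (Contact c : contacts) {
            if (c.AccountId != null) {
                accountIds.add(c.AccountId);
            }
        }
        if (accountIds.isEmpty()) {
            return;
        }
        // One SOQL query for the whole batch, with a collection in the WHERE clause.
        Map<Id, Account> parents = new Map<Id, Account>(
            [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
        );
        // One pass over up to 200 records; in a before trigger, field changes
        // are saved automatically, so no extra DML is needed.
        for (Contact c : contacts) {
            Account parent = parents.get(c.AccountId);
            if (parent != null) {
                c.Description = 'Account industry: ' + parent.Industry;
            }
        }
    }
}
```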

Factor: Data skew

Description: Occurs when more than 10,000 child records are associated with the same parent record within an org.

Impact: Slows queries, searches, list views, reports, dashboards, and sandbox refreshes.

Design considerations: Plan your data model to keep the number of child records per parent below 10,000, and distribute new child records across multiple parent records as they’re created.
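
For example, if many Cases have to roll up to a handful of generic parent Accounts, new records can be spread round-robin across a pool of parents instead of piling onto one. The sketch below is hypothetical; the “Holding Account” naming convention is an assumption for this example.

```
// Hypothetical sketch: spread new Cases across a pool of parent Accounts
// so no single parent accumulates more than ~10,000 child records.
public class CaseParentBalancer {
    public static void assignParents(List<Case> newCases) {
        // Pool of generic parent accounts reserved for skew avoidance.
        List<Account> pool = [
            SELECT Id
            FROM Account
            WHERE Name LIKE 'Holding Account%'
            ORDER BY Name
        ];
        if (pool.isEmpty()) {
            return;
        }
        // Round-robin assignment: neighboring records go to different parents.
        Integer i = 0;
        for (Case c : newCases) {
            c.AccountId = pool[Math.mod(i, pool.size())].Id;
            i++;
        }
    }
}
```

A before-insert trigger on Case could call CaseParentBalancer.assignParents(Trigger.new) to apply this as records come in.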

Factor: Ownership skew

Description: A single owner owns more than 10,000 records of a single object.

Impact: Changing user roles or group membership affects a very large number of entries in the sharing tables, resulting in lengthy access-rights recalculations.

Design considerations: Distribute ownership of records across a greater number of users.

Putting the Super in Your Supermarket Experience

Let’s return to our cashier example and think about preparing to serve customers on Black Friday. Customers will expect the same quality of service, so your org must be prepared to handle a rapid spike in demand, process larger-than-normal purchases, and locate additional inventory.

Your org has to plan for these things well before the shopping day begins in order to ensure a positive experience for customers. Building scalability into your system provides the resiliency you need for higher-than-normal workloads. It’s extremely important to get this right: a good customer experience during this time can greatly increase customer loyalty, while a poor experience will likely lose those customers permanently.

If you manage these factors effectively, your org can provide every user an amazing experience regardless of demand or volume.

Next, you look at what it takes to manage your data at scale using the Salesforce Platform.
