
Explore B2C Commerce Replication

Learning Objectives

After completing this unit, you’ll be able to:

  • List the three instances a replication impacts.
  • List the three ways you can run a replication.
  • List three types of data replication that automatically trigger a cache refresh.
  • Describe a replication rollback.
Note

Commerce Cloud is now Agentforce Commerce, and B2C Commerce is now Agentforce Commerce for B2C. You may see references to Commerce Cloud and B2C Commerce in our applications and documentation.

B2C Commerce admins and developers regularly replicate data and code from the staging instance to the production and development instances.

For example, a developer just created a faster checkout process and added a bonus product discount feature that required changes to Business Manager configuration settings. Also, the B2C admin imported product data from an external system to staging for a new spring line. The process to move code, configuration settings, and data between B2C Commerce instances is replication.

A replication process is a collection of tasks that push defined data or code from a source instance to a target instance. The source instance holds the code or data you want to move. The target instance is where it ends up. In replication, staging is the source and development and production are the targets.

The source and target instances

Distinguish PIGs and SIGs

B2C Commerce sites typically have these instances.

  • Three instances on the primary instance group (PIG):
    • Staging
    • Development for testing
    • Production for deployment
  • One demo instance
  • On-demand sandbox instances for code development, site staging, testing, and deployment

You run replication only on a PIG—for both data and code.

Data replication copies data, metadata, and files from staging to either the development or production instance. It works at two levels.

  • Global replication includes configuration information and data for the entire organization.
  • Site replication includes data for one or more specific sites (such as product and catalog data, XML-based content, and image files).

Code replication transfers code versions from staging to a development or production instance and activates them.

A developer uploads code from their local machine to a sandbox. Code can also be deployed directly from the local machine to staging.

Typically, developers are responsible for moving code from a SIG to a PIG. When developers finish coding on their local machine, they upload their code to a sandbox or to staging. They use Visual Studio Code to upload it, or they use a code repository (such as Git) to synchronize the work between multiple developers. The code repository is then the source for a release to a staging instance. This type of development environment has its own build process with an automated upload.
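The build-and-upload step can be sketched as follows. This is a minimal illustration, not the real build tooling: the cartridge name, file layout, and version label are made up, and the actual upload to the staging instance is indicated only as a comment.

```python
import tempfile
import zipfile
from pathlib import Path

def package_code_version(cartridge_dir: Path, version: str, out_dir: Path) -> Path:
    """Zip a cartridge folder under a code-version directory, ready for upload."""
    archive = out_dir / f"{version}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for file in sorted(cartridge_dir.rglob("*")):
            if file.is_file():
                arcname = Path(version) / cartridge_dir.name / file.relative_to(cartridge_dir)
                zf.write(file, arcname)
    return archive

# Hypothetical cartridge layout created just for this example.
work = Path(tempfile.mkdtemp())
cartridge = work / "app_storefront_custom"
(cartridge / "cartridge" / "scripts").mkdir(parents=True)
(cartridge / "cartridge" / "scripts" / "checkout.js").write_text("// faster checkout")

archive = package_code_version(cartridge, "version_2024_04", work)
# A real pipeline would now upload this archive to the staging instance
# (for example over WebDAV) and activate the code version in Business Manager.
uploaded_names = zipfile.ZipFile(archive).namelist()
```

The key point is that the archive nests everything under a code-version folder, so activating that version on the instance is a single switch.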

Replication pushes data and code from a staging instance to production and development instances.

The Replication Process

A replication involves these steps.

  • Run a replication from staging to development to test it.
  • Test the development instance and view the logs.
  • Verify that search works and that the data is correct on the development system.
  • Make any changes on the staging instance.
  • Replicate from staging to development and test again until everything works.
  • Replicate from staging to production.
  • Test everything on production, even if it replicated properly to development.
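The steps above form a loop: replicate to development, test, fix on staging, and repeat until everything works before touching production. Here's a minimal sketch of that workflow, where `replicate`, `test`, and `fix` are placeholder callables, not real platform APIs.

```python
def run_replication_workflow(replicate, test, fix, max_attempts=5):
    """Replicate staging -> development and test until everything works,
    then replicate staging -> production and test again.
    The three callables stand in for real replication, tests, and fixes."""
    for attempt in range(1, max_attempts + 1):
        replicate("staging", "development")
        if test("development"):
            break
        fix("staging")  # make corrections on staging, never on the target
    else:
        raise RuntimeError("development never passed; don't touch production")
    replicate("staging", "production")
    if not test("production"):
        raise RuntimeError("retest on production failed")
    return attempt

# Simulated run: the first development test fails, the fix makes it pass.
log = []
healthy = {"value": False}
attempts = run_replication_workflow(
    replicate=lambda src, dst: log.append((src, dst)),
    test=lambda instance: healthy["value"],
    fix=lambda instance: healthy.update(value=True),
)
```

Note that production is tested even after a clean run on development, matching the last step in the list.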

When you configure a data or code replication process, you choose to run it immediately, schedule it to run later, or assign it to a job.

A data replication is a long-running process with two phases.

  • Copy: The data transfers to the target system but is not yet visible there.
  • Publish: This quick step makes all changes available at the same time.

You configure data replication processes to recur daily, weekly, or monthly at a specific time. A scheduled process replicates the system state at run time—not at the time you created the process.
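The copy/publish split can be modeled as writing into a hidden pending area and then swapping it in atomically. This is only an illustrative sketch of the idea, not how the platform implements it.

```python
import shutil
import tempfile
from pathlib import Path

class TwoPhaseReplication:
    """Model of the copy/publish split: data lands in a hidden pending area
    first (copy), then becomes visible all at once (publish)."""

    def __init__(self, target: Path):
        self.target = target
        self.pending = target.with_name(target.name + ".pending")
        self.pending.mkdir(parents=True, exist_ok=True)

    def copy(self, name: str, content: str):
        # Long-running phase: transfer the data, still invisible on the target.
        (self.pending / name).write_text(content)

    def publish(self):
        # Quick phase: swap the pending area in so all changes appear together.
        if self.target.exists():
            shutil.rmtree(self.target)
        self.pending.rename(self.target)

work = Path(tempfile.mkdtemp())
rep = TwoPhaseReplication(work / "production")
rep.copy("catalog.xml", "<catalog/>")
visible_during_copy = (work / "production" / "catalog.xml").exists()
rep.publish()
visible_after_publish = (work / "production" / "catalog.xml").exists()
```

The design choice this models: because the publish step is a single quick swap, shoppers never see a half-transferred mix of old and new data.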

A code replication transfers code versions from a staging instance to a development or production instance and then activates them.

Avoid Conflicts During Replication

During a replication, avoid other updates. Making manual edits in Business Manager on either the source or target instance while a replication is running can result in critical errors and data inconsistency.

Make sure no jobs are running on the target instance during replication, and avoid data replications during a B2C Commerce standard maintenance window. If a recurring data replication process fails, it stops recurring.

Evaluate Page Cache Impact

A page cache invalidation removes or updates stale data in a page cache. When you replicate data or code, B2C Commerce initiates a cache invalidation automatically. When you initiate a cache invalidation manually, the system begins invalidating cached data. To reduce performance impact and minimize disruptions, the system processes cache eviction over a period of 15 minutes (the "Clear Period").
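The effect of the Clear Period can be modeled as assigning each cached page an eviction time spread across the 15-minute window, rather than evicting everything at once. This is an illustrative model only; the platform handles the scheduling internally.

```python
import random

CLEAR_PERIOD_SECONDS = 15 * 60  # the 15-minute Clear Period

def schedule_evictions(cache_keys, start_time, rng=None):
    """Spread cache evictions across the Clear Period instead of dropping
    everything at once, which smooths the load on the app servers."""
    rng = rng or random.Random(0)
    return {key: start_time + rng.uniform(0, CLEAR_PERIOD_SECONDS)
            for key in cache_keys}

deadlines = schedule_evictions((f"page-{i}" for i in range(1000)), start_time=0.0)
```

Every page is regenerated at most 15 minutes after the invalidation begins, but the regeneration load is distributed instead of spiking.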

Note

In this module, we assume that you’re a B2C Commerce admin with the proper permissions to perform these tasks. If you’re not a B2C Commerce admin, that’s OK. Read along to learn how your admin takes these steps. Don’t try to follow the steps in your Trailhead Playground. This functionality isn’t available in the Trailhead Playground.

To access and invalidate the cache:

  1. In Business Manager, click App Launcher, and select Administration | Sites | Manage Sites.
  2. Select your site name.
  3. Click the Cache tab.
  4. Select the staging instance.
  5. In the Cache Invalidation section, click Invalidate for either the Static Content Cache and Entire Page Cache for Site or just the Entire Page Cache for Site.

Business Manager Cache Invalidation

Clearing the page cache creates a heavy load on app servers. Clear the page cache manually only when necessary, and avoid clearing it during high traffic times. For example, for minor updates, wait for a scheduled cache clearing at night rather than clearing immediately.

A clear cache command takes up to 15 seconds to reach the web server. The update does not appear immediately. To support successful replications, evaluate the change scope and keep changes as small as possible.

For both automatic and manual cache refreshes, B2C Commerce delays refreshing all pages in a production instance for 15 minutes to distribute load across app servers.

Summarize Data Replication Cache Behavior

By default, the final step of a data replication process also automatically invalidates and refreshes the cache. While you can configure the process to skip this step, do so with caution: skipping it can cause inconsistent data on your storefront that is difficult to troubleshoot.

Here’s an example. B2C Commerce caches product description pages for 24 hours. You schedule cache clearing for the product page for the next night and then notice that several product prices are incorrect on the production instance. You ask a merchandiser to correct the prices in Business Manager on the staging instance, and then you replicate the changes to production by using a process that skips page cache clearing because you already scheduled it.

Doing it this way keeps the price data in sync on staging and production and makes sure that the correct prices appear in the basket (which isn’t cached). The storefront product description pages show the old, incorrect prices until the scheduled page cache clearing occurs.

Note

Basket is the term used within B2C Commerce APIs for the shopping cart.

By skipping the automatic cache refresh, you risk showing incorrect prices on product description pages. You accept the trade-off to avoid the performance impact of a production cache refresh. What is most important is that baskets on the production instance reflect correct prices in sync with the staging instance.

Here are some other page caching considerations.

  • Site-specific data: Clears the page cache for the affected site, unless you’re only replicating coupons, source codes, Open Commerce API settings, or active data feeds.
  • Global data: Clears the page cache for all affected sites, unless you’re only replicating geolocations or customer lists.
  • Catalogs, sites, or price books: Automatically clears the cache.
  • Promotions or static content: Doesn’t automatically clear the cache.
  • Catalogs: Selectively clears the page cache of affected sites, by using the rules described next.

Catalogs are a special case, as follows.

  • All catalogs for all sites of an organization: B2C Commerce clears the cache of all sites of the organization.
  • A single catalog that you assign to one or more sites: B2C Commerce clears the cache of the sites to which the catalog is assigned.
  • A product catalog that isn’t directly assigned to a site but serves as a product repository for one or more site or navigation catalogs: B2C Commerce clears the cache of the sites, determined programmatically, that offer products from the product catalog.

Apply Best Practices for Focused Replication

To make sure your targeted replications run smoothly and don't introduce errors, follow these best practices for focused, granular data transfers:

Identify Your Dependencies

Dependencies are the most common source of replication failure. Before you select a task for a granular replication, identify and make plans to resolve any dependencies.

For instance, if you have campaigns that use source codes and coupons, replicate the source code and coupon data before, or together with, the campaign data. Replicating the campaigns before the source codes and coupons they depend on can result in data corruption on the target instance.
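Dependency ordering like this is a topological sort. Here's a minimal sketch using Python's standard-library sorter, with hypothetical task names matching the example above.

```python
from graphlib import TopologicalSorter

# Hypothetical replication tasks; campaigns depend on the source codes
# and coupons they reference, so those must replicate first (or together).
dependencies = {
    "campaigns": {"source-codes", "coupons"},
    "source-codes": set(),
    "coupons": set(),
}

replication_order = list(TopologicalSorter(dependencies).static_order())
```

`static_order()` guarantees every task appears after all of its dependencies, so running the tasks in that order avoids the corruption described above.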

Avoid Concurrent Processes

Don’t run multiple replication processes at the same time. Replication is a resource-intensive process, and running processes concurrently can lead to performance issues and unpredictable results.

Replicate Dependencies First

If you modify a system object type (for example, adding a custom attribute to the Product object), replicate the global system object type definition before replicating any objects of that type.

Always Test Before Production

Before replicating any changes to your live production instance, run the same replication process from your staging instance to a development instance first. This practice tests the changes and verifies that everything is correct, including storefront functions like search, without any risk to your production environment.

Run During Low Traffic Times

Replication processes require significant system resources. Schedule any major replication to run during low-traffic times, such as late night or early morning hours. Scheduling replication for low-traffic times minimizes the performance impact on your storefront and gives your shoppers the best experience.

Examine the Logs

When replicating to a production instance, first transfer the data or code to verify the transfer process. Examine the logs before publishing the data or activating the code.

Rebuild Search Indexes

Always rebuild your search indexes and make sure that the process is complete before replicating them. When replicating search indexes, turn off incremental and scheduled indexing, and stop other jobs. Replication fails when you run it during the rebuild or modification of an index.
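These preconditions can be captured as a simple guard check before starting an index replication. The status fields here are made up for illustration; they aren't a real platform API.

```python
def safe_to_replicate_index(status):
    """Guard before replicating a search index. Replication fails if it runs
    while an index is being rebuilt or modified, so check the (made-up)
    status fields first."""
    return (status["rebuild_complete"]
            and not status["incremental_indexing"]
            and not status["scheduled_indexing"]
            and not status["other_jobs_running"])

ready = safe_to_replicate_index({
    "rebuild_complete": True,
    "incremental_indexing": False,
    "scheduled_indexing": False,
    "other_jobs_running": False,
})
blocked = safe_to_replicate_index({
    "rebuild_complete": False,
    "incremental_indexing": False,
    "scheduled_indexing": False,
    "other_jobs_running": False,
})
```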

Limit User Access

Limit user privileges to perform data replication, and split the responsibility between multiple user roles. For example:

  • A product manager defines replication processes, but doesn’t have permission to schedule or run them.
  • A replication manager manages and runs replication processes.

Avoid Replicating Static Content

Avoid modifying or moving static content files whenever possible. These changes take a long time to replicate.

Use Existing Processes

Copy an existing replication process when possible, instead of creating a process from scratch.

Roll Back a Replication

To roll back a replication and restore the target instance to its previous state, run another replication process with the replication type set to Undo. You can only roll back the most recent data or code replication.

Data and code replications do not affect each other's rollbacks. For example, if you run a data replication and then a code replication, you can still undo each independently.
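These two rules — only the most recent replication of each kind is undoable, and data and code rollbacks are independent — can be sketched as a simplified model:

```python
class ReplicationHistory:
    """Simplified model of rollback: only the most recent replication of each
    kind (data or code) can be undone, and the two kinds don't affect each
    other's rollbacks."""

    def __init__(self):
        self.state = {}      # kind -> current state on the target instance
        self.undoable = {}   # kind -> state before the most recent replication

    def replicate(self, kind, new_state):
        self.undoable[kind] = self.state.get(kind)
        self.state[kind] = new_state

    def undo(self, kind):
        if kind not in self.undoable:
            raise RuntimeError(f"no {kind} replication left to undo")
        self.state[kind] = self.undoable.pop(kind)  # one level of undo only

history = ReplicationHistory()
history.replicate("data", "spring-catalog")
history.replicate("code", "version_2")
history.undo("data")  # rolling back data leaves the code replication intact
```

Because each kind keeps only one level of undo history, attempting a second undo of the same kind fails, mirroring the platform's "most recent replication only" restriction.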

Note

The undo action is unavailable for data replication processes that ran before a General Availability (GA) release. For example, a process on version 24.7 cannot be undone after the system upgrades to version 24.8. This does not apply to GA Updates—for example, 24.8.1 to 24.8.2.

Next Steps

In this unit, you learned which instances to use for code and data replication. You learned about page cache implications, best practices, and how to roll back a replication. Next, learn how to configure and run a data replication.
