
Automate Import and Upload During Integration

Learning Objectives

After completing this unit, you’ll be able to:

  • Explain how to automate data import.
  • List three ways to upload code and metadata.
  • Explain the importance of a robust data upload process to staging.
  • Describe three metadata update approaches.

Move Things to the Right Place

An application’s latest code, data, and metadata must end up in the right place. That means ensuring they are imported, uploaded, and running where they need to be, preferably via automation.

In this unit, we explore how to automate the following tasks as part of the integration process.

  • Import the latest data to various instances.
  • Upload code and metadata to various instances.
  • Upload data to the staging instance.
  • Update and integrate metadata.

Import Data

Admins typically update instances using the standard site import process. They can populate different instances with the same data to quickly create sandboxes. A best practice is to store site import files in a common area such as Git, so everyone on the team can synchronize data changes. Specify a different site import folder for each build environment, as shown in this table.

Folder

Description

common/site_template

  • Can be uploaded safely to sandbox, development, or staging instances
  • Includes metadata, such as system-object-definitions

testdata/site_template

  • Should be uploaded only to sandbox or development environments
  • Includes data used for automated integration and acceptance tests

Create a Node.js script (dataUpload.js) that selects the correct site import folder for your build process. Here’s an example.

// dataUpload.js
// Select the correct site import folder for the target environment.
const environment = process.env.TARGET_ENVIRONMENT;
let folder;
if (environment === 'common') {
    folder = 'common/site_template';
} else if (environment === 'test') {
    folder = 'testdata/site_template';
}
...
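
From there, the script can hand the selected folder to the upload step. Here’s a minimal sketch that zips the folder with the system zip utility and pushes it using the sfcc-ci command line; the archive name site_template.zip is just a placeholder.

// dataUpload.js (continued) - a sketch, not a complete implementation
const { execSync } = require('child_process');

if (!folder) {
    throw new Error(`Unknown TARGET_ENVIRONMENT: ${environment}`);
}
// site_template.zip is a placeholder archive name
execSync(`zip -r site_template.zip ${folder}`, { stdio: 'inherit' });
execSync('sfcc-ci instance:upload site_template.zip', { stdio: 'inherit' });
execSync('sfcc-ci instance:import site_template.zip', { stdio: 'inherit' });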

Upload Code and Metadata

A great way to upload code and metadata is with the Open Commerce API (OCAPI). The open source SFCC-CI (command-line interface) tool provides a wrapper around OCAPI that makes this easy.

Install the SFCC-CI tool by entering this in a terminal or command prompt.

 npm install sfcc-ci

Take a look at the README file (credentials required) in the SFCC-CI GitHub repository to learn how to set up and use SFCC-CI.

Execute SFCC-CI commands using the command line or the Node.js API, depending on your configuration. The Node.js API provides better response handling, while the command-line approach is faster to set up.

Here are the commands.

SFCC-CI commands

Details

client:auth

Lets you authenticate using Account Manager credentials with clientId and clientSecret. Store clientSecret as a secured environment variable in your build system, such as in GitHub Secrets.

code:upload
code:activate

Provides a mechanism to upload a zipped code version to B2C Commerce WebDAV and activate the code.

instance:upload
instance:import

Uploads zipped site import data to B2C Commerce instances and runs the import job.

job:run

Runs a job in B2C Commerce to process data on the instance.

Here are some examples of how to upload and activate a code version file.

Code Upload: Command-Line Approach

$> sfcc-ci client:auth $client_ID $client_SECRET

package.json
{
  "name": "your application",
  "version": "1.0.0",
  ...
  "scripts": {
    ...
    "auth:unattended": "sfcc-ci client:auth $client_ID $client_SECRET",
    "code:upload": "sfcc-ci code:deploy code_version.zip",
    "code:activate": "sfcc-ci code:activate code_version",
    "code:deploy": "npm run code:upload && npm run code:activate"
  },
  "dependencies": {
    "sfra": "https://github.com/SalesforceCommerceCloud/storefront-reference-architecture"
  },
  ...
}
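
With these scripts in place, the pipeline authenticates once with npm run auth:unattended, and then a single npm run code:deploy uploads and activates the code version.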

Code Upload: Node.js API Approach

// Assumes a class whose constructor sets this.clientId, this.clientSecret,
// and this.instance, with: const sfcc = require('sfcc-ci');

/**
 * Upload a code version to WebDAV and activate it using SFCC-CI
 * @param {string} codeVersionToUpload - path of the code version
 * @param {string} codeVersionName - name of the code version
 */
uploadCode(codeVersionToUpload, codeVersionName) {
    return new Promise((resolve, reject) => {
        console.info('Start code upload');
        sfcc.auth.auth(this.clientId, this.clientSecret, (err, token) => {
            if (err) {
                reject(err);
                return;
            }
            sfcc.code.deploy(this.instance, `${codeVersionToUpload}.zip`, token, {}, err => {
                if (err) {
                    console.error('Code deploy error: %s', err);
                    reject(err);
                    return;
                }
                console.info('Finished code deploy');
                // Activate the uploaded code version
                sfcc.code.activate(this.instance, codeVersionName, token, err => {
                    if (err) {
                        reject(err);
                        return;
                    }
                    resolve();
                });
            });
        });
    });
}
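
A deployment script might call this method like the following; the Deployer class is a hypothetical wrapper holding the clientId, clientSecret, and instance used above.

// deploy.js - hypothetical usage of the class containing uploadCode()
const deployer = new Deployer({
    clientId: process.env.CLIENT_ID,
    clientSecret: process.env.CLIENT_SECRET,
    instance: 'your-instance.demandware.net'
});

deployer.uploadCode('dist/code_version', 'code_version')
    .then(() => console.info('Code version deployed and activated'))
    .catch(err => {
        console.error('Deployment failed: %s', err);
        process.exit(1);
    });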

Upload Metadata

Depending on the kind of data you upload, sometimes you have to trigger additional steps to make the data fully functional. For example, assigning products to a category might involve triggering site-import processes.

This example shows how to upload, process, and post-process metadata.

# Upload the metadata file to B2C Commerce
> sfcc-ci instance:upload yourmetadata.zip
# Import the uploaded metadata file
> sfcc-ci instance:import yourmetadata.zip

/**
 * Upload a site import file to WebDAV and import it using SFCC-CI
 * @param {string} zipPath - path of the zip file
 * @param {string} zipName - name of the zip file
 */
uploadAndImportMetaData(zipPath, zipName) {
    return new Promise((resolve, reject) => {
        sfcc.auth.auth(this.clientId, this.clientSecret, (err, token) => {
            if (err) {
                reject(err);
                return;
            }
            // Upload the archive to the instance
            sfcc.instance.upload(this.instance, zipPath, token, {}, err => {
                if (err) {
                    reject(err);
                    return;
                }
                // Run the site import job for the uploaded archive
                sfcc.instance.import(this.instance, zipName, token, err => {
                    if (err) {
                        reject(err);
                        return;
                    }
                    resolve();
                });
            });
        });
    });
}
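
Wrapping the callback-style SFCC-CI calls in a promise keeps the deployment script linear: each step runs only after the previous one succeeds, and any error rejects the promise so the build can fail fast.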

After the data is uploaded, a job runs that triggers a search index rebuild. This job is part of the SFRA data bundle, which you must configure on the target instance before you can use it. In a typical CI/CD process, for example, jobs like this clear the cache and rebuild the search indexes for inventory, price books, and catalogs.

# Reindex is the name of the job on B2C Commerce
> sfcc-ci job:run Reindex

/**
 * Trigger a job on B2C Commerce
 * @param {string} jobId - ID of the job to run
 * @param {Object} jobParams - parameters to pass to the job
 */
runJob(jobId, jobParams) {
    return new Promise((resolve, reject) => {
        sfcc.auth.auth(this.clientId, this.clientSecret, (err, token) => {
            if (err) {
                reject(err);
                return;
            }
            sfcc.job.run(this.instance, jobId, jobParams, token, err => {
                if (err) {
                    reject(err);
                    return;
                }
                resolve();
            });
        });
    });
}
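
Putting the pieces together, a deployment step might chain the import and the job; the deployer object is the same hypothetical wrapper used earlier.

// Hypothetical usage: import the metadata, then rebuild the search index
deployer.uploadAndImportMetaData('dist/yourmetadata.zip', 'yourmetadata.zip')
    .then(() => deployer.runJob('Reindex', {}))
    .then(() => console.info('Import finished and Reindex triggered'))
    .catch(err => {
        console.error('Deployment step failed: %s', err);
        process.exit(1);
    });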

Upload Data on Staging

You also need a robust process for uploading data that doesn’t overwrite work on the instance where people are actively making changes, which is typically staging. Let’s say a developer is building a new content slider for the homepage. They want to be able to change the images in the slider without having to touch the code, and they want the image links to be interchangeable. They decide to add a new custom attribute to the content asset, which lets merchandisers provide the links.

The developer must push two kinds of data with the deployment to the staging instance: system-object-extensions and the library that includes the content. If the library remains in the version control system, it overwrites the content on the staging instance with each deployment. To avoid having to add the library manually to each instance, create a one-time upload. You can also upload data to staging via an integration.

One-Time Upload and Data Seeding

Custom code can recognize whether a certain set of data has already been pushed to an instance. The structure in the repository might look like this.

###
# Shows how to organize data for data seeding inside a repository
###
data_{TimeStamp1}
|
|--- site_template
     |
     |--- sites
          |
          |--- {SiteID}
               |
               |--- library.xml

###
# Another folder
###
data_{TimeStamp2}
|
|--- site_template
     |
     |--- sites
          |
          |--- {SiteID}
               |
               |--- library.xml

During deployment, the CI process reads the latest upload timestamp from a cloud or local database. By comparing that timestamp with the folder names, the logic within the CI process decides whether the data still needs to be added to the instance.

This approach works well if you store import data in a separate repository. Over time, however, with lots of one-time uploads, build performance starts to degrade because the system has to pull all these one-time changes as well.
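
Here’s a minimal sketch of that comparison, assuming numeric timestamps in the folder names and a hypothetical lastUpload.json marker file standing in for the cloud or local database.

// seedData.js - a sketch of timestamp-based data seeding; the folder
// layout follows the data_{TimeStamp} convention shown above.
const fs = require('fs');
const path = require('path');

/**
 * Return the data_{TimeStamp} folders that haven't been seeded yet.
 * @param {string} repoRoot - root of the data repository
 * @param {string} markerFile - hypothetical lastUpload.json marker file
 */
function foldersToSeed(repoRoot, markerFile) {
    const last = fs.existsSync(markerFile)
        ? JSON.parse(fs.readFileSync(markerFile, 'utf8')).lastUpload
        : 0;
    return fs.readdirSync(repoRoot)
        .filter(name => name.startsWith('data_'))
        .filter(name => Number(name.slice('data_'.length)) > last)
        .sort()
        .map(name => path.join(repoRoot, name));
}

Each returned folder can then be zipped and pushed with instance:upload and instance:import as shown earlier; after a successful run, the CI job writes the newest timestamp back to the marker.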

Update Metadata

Don’t forget about metadata! You need to integrate it along with the code and data. You also need to consider data that’s under the merchant’s control. Depending on the abilities of the merchant and your development team, one of the following approaches might help.

It’s too easy to overwrite existing metadata with a metadata import, because the system metadata import functionality replaces the entire definition. Say, for example, a system object definition has three attribute groups. To add a fourth, you must specify all four attribute group definitions in the import file, because the import replaces them all.

Frequent Merges

Merge code changes often. Encourage developers to add changes somewhere in the middle of a file, where version control can merge them cleanly or flag a conflict to resolve at commit time. Changes appended at the end of a file are more likely to collide and cause conflicts.

Breaking Metadata Changes

Sometimes new business requirements differ so much from the original ones that they demand drastic changes to metadata and code. Often the old code only works with the old metadata, and the new code only works with the new metadata. Because code and metadata are deployed separately, deployment is challenging until both are completely updated.

During the transition, we recommend that you make the old code work with the new metadata, or if it’s easier, make the new code work with the old metadata. This decouples their deployment, provided you always have a compatible version of the two.

Version Control the Data

Sometimes features require specific data objects present in the database for the code to use. Developers are usually eager to put this data into version control, so that feature deployment includes all the changes. However, merchants can access these data objects, and might change them. The next deployment might overwrite their changes.

Here are some ways to address this.

  • Make sure merchants use version control.
  • Give the code fallback logic for when the data objects don’t exist, and don’t deploy that data from version control (see the sketch after this list).
  • Write more complex deployment logic that uses the Commerce API and Data API to deploy a data object only if it doesn’t already exist.
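
For the homepage slider scenario above, the fallback might look like this in storefront script code. This is a sketch; the content asset ID homepage-slider and the custom attribute imageLinks are hypothetical names.

// Storefront script sketch (B2C Commerce server-side JavaScript).
var ContentMgr = require('dw/content/ContentMgr');

/**
 * Read merchandiser-provided slider links, falling back to an empty
 * list when the content asset or custom attribute doesn't exist yet.
 */
function getSliderLinks() {
    var asset = ContentMgr.getContent('homepage-slider'); // hypothetical ID
    if (!asset || !asset.custom || !asset.custom.imageLinks) {
        return [];
    }
    return asset.custom.imageLinks;
}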

Let’s Wrap It Up

In this unit, you explored ways to automate how you move code, data, and metadata to instances during the integration process. Now take the final quiz to earn an awesome badge.

