
Create Alexa Skills and Lambda Functions with the ASK CLI

What You'll Learn

In this step, you'll learn how to:

  • Create a skill and Lambda function using the ASK CLI.
  • Allow testing on your skill.
  • Update your Lambda function to only respond to your skill.
  • Allow your Lambda function to call DynamoDB.
  • Run a basic test using the ASK CLI simulator.

Create a Skill and Lambda Function using the ASK CLI

  1. Clone or download the following GitHub repository: https://github.com/alexa/skill-sample-nodejs-salesforce
  2. Go to the directory where you downloaded the files and open a command prompt in the skill-sample-nodejs-salesforce/lambda/custom/ directory. Enter the following command to download the dependencies that deploy with your Lambda function:

    $ npm install
  3. With your command prompt, return to skill-sample-nodejs-salesforce/ and run the following command to create the skill and Lambda function in one step. Note: Many different calls happen behind the scenes here; a fast connection helps you avoid timeouts.

    $ cd ../../
    $ ask deploy
    -------------------- Create Skill Project --------------------
    ask profile for the deployment: default
    Skill Id: amzn1.ask.skill.<Skill ID>
    Skill deployment finished.
    Model deployment finished.
    Lambda deployment finished
  4. Make sure to write down the skill ID returned in the previous output. You'll use it often in future steps.

Update Your Lambda Function to Only Respond to Your Skill

  1. Modify the following line in skill-sample-nodejs-salesforce/lambda/custom/constants.js by adding your Skill ID that was created earlier:

    appId : '';

  2. Modify the following line by adding your domain name that you stored earlier, including https:// at the front:

    INSTANCE_URL : '';
    Your final setting should look something like this, using your domain name from Step 2: INSTANCE_URL : "https://brave-moose-406615-dev-ed.my.salesforce.com"; (a combined sketch of both edited values appears after this list).
  3. After modifying the constants.js file, save the file and exit.
  4. Use the following command to update your Lambda function to include these latest changes:

    $ ask deploy
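
For reference, here is a hypothetical sketch of the two edited entries in lambda/custom/constants.js, assuming the file exports a single constants object (the repo's file defines more entries than shown, and both values below are placeholders):

    // Hypothetical excerpt of constants.js after the edits above.
    // Both values are placeholders; substitute your own skill ID and domain.
    module.exports = {
        appId : 'amzn1.ask.skill.<Skill ID>',                  // skill ID returned by "ask deploy"
        INSTANCE_URL : 'https://yourdomain.my.salesforce.com'  // your domain, including https://
        // ...other constants from the sample are omitted here
    };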

Allow Your Lambda Function to Call DynamoDB

The standard execution role used for Lambda doesn't include permission to access DynamoDB. To fix that, you need to attach a policy to the role that runs the Lambda function, either through the console steps below or with the CLI command sketched after the list.

  1. In Step 3, you created an AWS account. Use this account to log in to the AWS Console.
  2. Type IAM in the AWS services search box at the top of the page and select IAM.
  3. Click Roles.
  4. Find the role that was created in Step 4. It should be called ask-lambda-Salesforce-Demo. Click it.
  5. In the Permissions tab, click Attach policy.
  6. In the search box, enter AmazonDynamoDBFullAccess and then check the box next to the policy that shows up.
  7. Click Attach policy at the bottom right.
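
If you'd rather script this and have the AWS CLI configured for the same account, the console steps above should be equivalent to a single attach-role-policy call (the role name is taken from step 4 above):

    $ aws iam attach-role-policy \
        --role-name ask-lambda-Salesforce-Demo \
        --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess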

Allow Testing on Your Skill

  1. In order to test the skill before publishing, you need to enable testing on your skill. To do this, log in to the Amazon Developer Console.
  2. Jump directly to the page by substituting your Skill ID into the following URL (insert your ID in place of <Skill ID>): https://developer.amazon.com/edw/home.html#/skill/<Skill ID>/en_US/testing
    Interaction Model tab showing a status of “Disabled.”
  3. Click the slider next to Disabled for testing. It should now say Enabled.

Run a Basic Test Using the ASK CLI Simulator

  1. Run the following command to send a request to your skill.

    $ ask simulate -l en-US -t "alexa, open salesforce demo"
    ✓ Simulation created for simulation id: 0c857923-0753-43a5-b44c-ee2fca137aab
    ◜ Waiting for simulation response
    {
     "status": "SUCCESSFUL",
     "result": {
    ...
  2. Check the outputSpeech in the response to see what Alexa would have said.

    ...
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak> A Salesforce account is required to use this skill. I've placed more
    information on a card in your Alexa app. </speak>"
    },
    ...

If you see an error message that says “Request to skill endpoint resulted in an error.”, try the simulate command again. This error can occur the first time a skill that uses DynamoDB is invoked, because a database table is being created behind the scenes.
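
That behind-the-scenes table creation matches how the alexa-sdk package (which this sample is built on) persists session attributes: setting dynamoDBTableName on the handler tells the SDK to store this.attributes in DynamoDB and to create the table the first time it's needed. Here is a minimal sketch of that wiring with placeholder names; the sample's own entry point sets its real values:

    // Minimal sketch of DynamoDB-backed attribute persistence with the alexa-sdk package.
    // The skill ID and table name are placeholders; the sample's index.js defines its own.
    var Alexa = require('alexa-sdk');

    exports.handler = function (event, context) {
        var alexa = Alexa.handler(event, context);
        alexa.appId = 'amzn1.ask.skill.<Skill ID>';
        alexa.dynamoDBTableName = 'SalesforceDemoTable'; // SDK creates this table on first invocation
        alexa.registerHandlers({ /* state handlers are registered here in the real sample */ });
        alexa.execute();
    };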

To learn more about how the skill works and what the Lambda function is doing, keep reading.

How Is This a Private Skill?

The private skill definition is within the file skill-sample-nodejs-salesforce/skill.json. This configuration file contains the skill name, description, sample utterances, image links for the skill, publishing information, and more. You can see all of these details in the Amazon Developer Console, too. You can leave the default values in here for now, since we will be deploying this skill as a private skill that won’t be accessible by others. One line in this file also indicates that this is a private skill. Don’t remove this!


"distributionMode": "PRIVATE",
...
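
For orientation, this flag typically sits in the manifest's publishing information; a heavily trimmed sketch is below (treat the repo's skill.json as the authoritative layout):

    {
      "manifest": {
        "publishingInformation": {
          "distributionMode": "PRIVATE"
        }
      }
    }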

What Is the Lambda Function Doing?

While the deployment process is working, you can review the Lambda function code to see some of the different functions that have been built for this demo.

Let's quickly review a simple state diagram to understand how the skill authenticates a user when the skill is invoked; a sketch of how these states might be declared as constants follows the list below.

State diagram for the skill handling logic.

States

  • START: This is the standard state for a first-time user invoking the skill. The default behavior is to enter the CODE state.
  • CODE: The CODE state handles creating a first-time voice code or validating an existing one. On success, the skill enters the SECURE state.
  • SECURE: This represents an authenticated user who can now interact with Salesforce org data. Each intent still checks that the voice code has been validated recently; if 5 minutes have passed since the code was spoken, the skill reverts to the CODE state. If the user wants to change their voice code, the skill enters the CHANGE_CODE state.
  • CHANGE_CODE: This state makes the user reverify their current voice code before allowing them to set a new one. The handling flow and messaging are different from the normal voice code flow. On successful validation of the current code, the skill enters the NEW_CODE state.
  • NEW_CODE: Once the current voice code has been validated, this state handles the creation of a new code. On success, the user returns to the SECURE state.
  • HELP: The HELP state can be entered from any other state when the user asks for help.
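
Each of these states maps to a named constant that the handlers are registered against. As a purely hypothetical illustration (the actual names and string values live in lambda/custom/constants.js), the enumeration might look something like this:

    // Hypothetical illustration of the state constants the handlers key off.
    // The real definitions are in lambda/custom/constants.js.
    var STATES = {
        START : '',             // fresh session; routes the user into CODE
        CODE : '_CODE',         // create or validate the voice code
        SECURE : '_SECURE',     // code confirmed; Salesforce intents allowed
        CHANGE_CODE : '_CHANGE_CODE',
        NEW_CODE : '_NEW_CODE',
        HELP : '_HELP'
    };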

Diving deeper, open the file skill-sample-nodejs-salesforce/lambda/custom/voiceCodeHandlers.js to view the Lambda function source code around our voice code handling process. All Alexa state handlers are defined here except for the SECURE state handler.

...
    codeStateHandlers : Alexa.CreateStateHandler(constants.STATES.CODE, {

    changeCodeHandlers : Alexa.CreateStateHandler(constants.STATES.CHANGE_CODE, {

    newCodeHandlers : Alexa.CreateStateHandler(constants.STATES.NEW_CODE, {

    helpStateHandlers : Alexa.CreateStateHandler(constants.STATES.HELP, {
...

These sections define the handling of intents for a given state. For example, to understand how the NEW_CODE state handles the spoken voice code number, you look at the functions defined within the newCodeHandlers variable.

    // Hash the code - you can choose a different # of rounds, 10 is default
    // See https://www.npmjs.com/package/bcryptjs for more details.
    var salt = bcrypt.genSaltSync(10);
    var hash = bcrypt.hashSync(code.toString(), salt);

    // Add the code to Salesforce
    if (this.handler.state == constants.STATES.CODE) {
        sf.createVoiceCode(hash, salesforceUserId, accessToken, handleCreateCode, this);
    } else if (this.handler.state == constants.STATES.NEW_CODE) {
        sf.updateVoiceCode(hash, salesforceUserId, accessToken, handleUpdateCode, this);
    }

Bcrypt is a widely recommended hashing algorithm for storing sensitive values. Here, we are using the bcryptjs package to salt and hash the user’s voice code before storing it as a protected custom setting, using helper methods that we’ve defined to interact with Salesforce.

            // User has a code, check the hashes
            var hashed_code = resp.records[0]._fields.code__c;

            // Check to see if the code provided matches the hash
            if (!bcrypt.compareSync(code.toString(), hashed_code)) {

Here, we extract the existing hash of the user’s code from a Salesforce query, then use the bcryptjs compareSync method to validate the user’s spoken code against that hash.
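
To see that round trip in isolation, here is a minimal, self-contained sketch of hashing a voice code and later checking a spoken code against the stored hash with bcryptjs (the four-digit code is just an example value):

    // Standalone sketch of the hash-and-compare round trip with bcryptjs.
    var bcrypt = require('bcryptjs');

    // When the user first sets a code: salt and hash it before storing.
    var code = 1234;                                   // example voice code
    var salt = bcrypt.genSaltSync(10);                 // 10 rounds, as in the sample
    var hash = bcrypt.hashSync(code.toString(), salt); // this hash is what gets stored

    // Later, when the user speaks a code: compare it to the stored hash.
    var spoken = 1234;
    var matches = bcrypt.compareSync(spoken.toString(), hash);
    console.log(matches ? 'Code accepted' : 'Code rejected');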

Open the file skill-sample-nodejs-salesforce/lambda/custom/secureHandlers.js to view the Lambda function source code for the methods that can be called once the user has confirmed their voice code. Let’s quickly review how the skill verifies that the proper user is the one making the request.

    'RecentLead': function() {

        if (!codeCheck(this.attributes)){
            this.handler.state = constants.STATES.CODE;
            this.emitWithState("PromptForCode");
        } else {

            var accessToken = this.event.session.user.accessToken;

            if (accessToken) {
                var currentUserId = this.attributes[constants.SALESFORCE_USER_ID];
                if (currentUserId) {
                    sf.query("Select Name,Company From Lead ORDER BY CreatedDate DESC LIMIT 1", accessToken, handleLeads, this);
                }
            }
        }
    },

First, the code uses the codeCheck method to make sure the user has said their voice code within the past 5 minutes. This timeout is adjustable in skill-sample-nodejs-salesforce/lambda/custom/constants.js. If the code needs to be revalidated, the state is reset to CODE and we invoke the PromptForCode function in the voiceCodeHandlers.js file.
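
The real helper is defined in the repo, but conceptually it only needs the timestamp recorded when the code was last confirmed. A hypothetical version might look like this (the attribute key and timeout constant are illustrative names, not necessarily the ones in the sample):

    // Hypothetical sketch of the voice-code recency check.
    // CODE_VALIDATED_TIME and the timeout value are illustrative; the sample keeps
    // its own attribute keys and timeout in lambda/custom/constants.js.
    function codeCheck(attributes) {
        var validatedAt = attributes['CODE_VALIDATED_TIME']; // set when the code was last confirmed
        if (!validatedAt) {
            return false;                                    // no code confirmed in this session
        }
        var CODE_TIMEOUT_MS = 5 * 60 * 1000;                 // 5 minutes, adjustable in constants.js
        return (Date.now() - validatedAt) < CODE_TIMEOUT_MS;
    }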

If the voice code is still valid, the code calls a Salesforce helper function that runs a SOQL query to get the most recent lead. It uses a callback method, handleLeads, to handle the response from Salesforce.
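
The real handleLeads callback lives in secureHandlers.js; a simplified sketch of what such a callback might do with the query response is below (the speech text is illustrative, and the record fields are read the same way the voice-code handler reads them):

    // Hypothetical sketch of a callback that turns the SOQL response into speech.
    // Because sf.query binds the callback to the handler context, "this" is the
    // Alexa handler here, so this.emit() works as it does in the other handlers.
    function handleLeads(err, resp) {
        if (err || !resp.records || resp.records.length === 0) {
            this.emit(':tell', 'Sorry, I could not find a recent lead.');
            return;
        }
        var lead = resp.records[0]._fields;
        this.emit(':tell', 'Your most recent lead is ' + lead.name + ' from ' + lead.company + '.');
    }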

Open the file skill-sample-nodejs-salesforce/lambda/custom/salesforce.js to view helper code to interact with Salesforce using nforce. Let’s highlight a few key areas of the code.

function getOauthObject(accessToken) {
    // Construct our OAuth token based on the access token we were provided from Alexa
    var oauth = {};
    oauth.access_token = accessToken;
    oauth.instance_url = constants.INSTANCE_URL;
    return oauth;
}

Since Alexa provides an access token in the skill request, it’s not necessary to use the nforce authenticate method. Instead, we build an OAuth object with the provided access token and the domain name that you stored in Step 2 of this project.

    query : function(query, accessToken, callback, origContext) {
        var q = query;
        org.query({oauth: getOauthObject(accessToken), query: q}, callback.bind(origContext));
    },

This is a sample wrapper function that exposes the nforce query method, which accepts a SOQL string. The callback.bind(origContext) call lets us use the this context in our callback handler functions.

    createVoiceCode : function(code, userId, accessToken, callback, origContext ) {
        var new_code = nforce.createSObject(constants.VOICE_CODE_OBJECT_NAME);
        new_code.set("SetupOwnerId", userId);
        new_code.set(constants.VOICE_CODE_FIELD_NAME, code);
        org.insert({ sobject: new_code, oauth: getOauthObject(accessToken) }, callback.bind(origContext));
    }

This function creates a new sObject representing the voice code for a user who doesn’t have one yet.

Keep in mind, the functions here are for illustrative purposes. Take a look at the nforce package description for more ideas of how you can interact with your data. 


We won't check any of your setup. Click Verify Step to proceed to the next step in the project.
