Review Engagement Scoring for Mobile

Learning Objectives

After completing this unit, you’ll be able to:

  • Navigate the Engagement Scoring mobile app dashboard.
  • Identify audience health ratings and model confidence.
  • Adjust engagement score thresholds.

Einstein Engagement Scoring for Mobile Apps

You’ve seen what engagement scoring looks like for emails—now let’s take a look at the mobile app dashboard. In Marketing Cloud Engagement, from the Einstein tab click Einstein Engagement Scoring.

Mobile Engagement Scoring dashboard with callouts for each section.

Select the Mobile tab, then choose the mobile app you want to view (1). Once you’ve selected your app, you can review each tile.

(2) Predicted App Engagement: A blended metric that combines predicted app sessions, push message direct opens, inferred opens (the subscriber opens the app but doesn’t tap the message), and time in-app. A sketch of how such a blend could be computed follows these tile descriptions.

(3) Predicted App Sessions: How many sessions your audience is expected to generate on average.

(4) Predicted Push Message Direct Opens: How often, on average, your audience is expected to tap a push notification directly to open the app.

(5) Predicted Push Message Inferred Opens: How often, on average, your audience is expected to open the app within 24 hours of receiving a push message without tapping the message directly.

(6) Predicted Time in App: How much time, on average, your audience is expected to spend in the app.
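
To make the blend concrete, here’s a minimal sketch of how the four component predictions could be combined into a single engagement score. The weights, field names, and the simple weighted average are illustrative assumptions; Einstein’s actual model isn’t published in this unit.

```python
from dataclasses import dataclass

@dataclass
class MobilePredictions:
    """Per-subscriber predictions, each normalized to a 0-1 likelihood (assumed)."""
    app_sessions: float    # predicted app sessions likelihood
    direct_opens: float    # predicted push direct-open likelihood
    inferred_opens: float  # predicted push inferred-open likelihood
    time_in_app: float     # predicted time-in-app likelihood

# Illustrative weights only -- Einstein's real blend isn't documented here.
WEIGHTS = {
    "app_sessions": 0.3,
    "direct_opens": 0.3,
    "inferred_opens": 0.2,
    "time_in_app": 0.2,
}

def blended_app_engagement(p: MobilePredictions) -> float:
    """Combine the four component predictions into one 0-1 engagement score."""
    return (
        WEIGHTS["app_sessions"] * p.app_sessions
        + WEIGHTS["direct_opens"] * p.direct_opens
        + WEIGHTS["inferred_opens"] * p.inferred_opens
        + WEIGHTS["time_in_app"] * p.time_in_app
    )

print(blended_app_engagement(MobilePredictions(0.8, 0.6, 0.4, 0.7)))  # 0.64
```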

Tile Features

Now that you’ve explored the Einstein Engagement Scoring dashboards, let’s take a closer look at the information provided in a tile.

Email click predictions with model confidence and percent change since last week circled.

Each tile shows a percent change (1) from the previous week, helping you determine whether scores are improving or you need to make changes. Each tile, whether for email or mobile, also has an Audience Health (2) indicator. This indicator shows whether the average grade for that score across all your subscribers is excellent, good, fair, or poor. The grades correspond to the likelihoods assigned to each subscriber, for example, the likelihood that a subscriber clicks a link in your email. Here’s how they correlate.

  • Excellent = Most likely
  • Good = More likely
  • Fair = Less likely
  • Poor = Least likely

You may also wonder: Are blue dots a good thing? Yep. Model confidence (3), or model accuracy, is shown for each prediction—and the more dots, the better. Note that fair or poor model confidence can indicate that there isn’t enough subscriber data to make reliable predictions.
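
To see how the pieces of a tile fit together, here’s a minimal sketch that maps per-subscriber likelihoods to grades, rolls them up into an Audience Health rating, and computes the week-over-week percent change. The grade cutoffs and the averaging approach are assumptions for illustration, not Einstein’s published logic.

```python
from statistics import mean

def grade(likelihood: float) -> str:
    """Map a 0-1 likelihood to a grade. Cutoffs are illustrative assumptions."""
    if likelihood >= 0.75:
        return "Excellent"  # most likely
    if likelihood >= 0.5:
        return "Good"       # more likely
    if likelihood >= 0.25:
        return "Fair"       # less likely
    return "Poor"           # least likely

def audience_health(likelihoods: list[float]) -> str:
    """Audience Health reflects the average grade across all subscribers."""
    return grade(mean(likelihoods))

def percent_change(this_week: float, last_week: float) -> float:
    """Week-over-week change shown on each tile, as a percentage."""
    return (this_week - last_week) / last_week * 100

scores = [0.9, 0.62, 0.4, 0.81]
print(audience_health(scores))               # Good (average is about 0.68)
print(round(percent_change(0.68, 0.64), 1))  # 6.2
```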

Threshold Settings

So how does Einstein calculate this information? It relies on Einstein Engagement Scoring threshold settings. Depending on your channel and account settings, Einstein uses one of the following defaults. Here are your options.

Default
  • Email: Local or business-unit-based historical performance averages.
  • Mobile: A specific app’s own historical performance averages.

Alternative Default
  • If your account is opted in to the global models feature, you can use global model data as your default threshold setting and draw on a wider pool of data.

Custom
  • You or your admin can edit and customize engagement score thresholds from the Engagement Scoring section in Setup.
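
To tie these options together, here’s a hedged sketch of how a threshold source could be resolved for a channel. The function, parameter names, and precedence order are hypothetical, not a Marketing Cloud API; actual changes are made in the Engagement Scoring section of Setup.

```python
from typing import Optional

def resolve_threshold_source(channel: str,
                             custom_threshold: Optional[float] = None,
                             global_models_opted_in: bool = False) -> str:
    """Decide which threshold source applies, assuming the precedence:
    custom value > global model data (if opted in) > channel default.
    This is a hypothetical helper for illustration only."""
    if custom_threshold is not None:
        return f"custom threshold set in Setup ({custom_threshold})"
    if global_models_opted_in:
        return "global model data (a wider pool of data)"
    if channel == "email":
        return "local or business-unit-based historical performance averages"
    if channel == "mobile":
        return "the app's own historical performance averages"
    raise ValueError(f"unknown channel: {channel}")

print(resolve_threshold_source("mobile"))
print(resolve_threshold_source("email", global_models_opted_in=True))
```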

Note: Visit the help page to learn more about Einstein’s global modeling feature.

Next Up: Use Engagement Scoring

Now that you’re familiar with both dashboards and know how to adjust thresholds, the next unit covers how Engagement Scoring data is used to create personas for use in Journey Builder.

