Integration Patterns and Performance Optimization
Learning Objectives
After completing this unit, you’ll be able to:
- Identify the four core integration patterns supported by Heroku AppLink.
- Choose the right pattern for specific business scenarios.
- Understand how AppLink optimizes Salesforce performance.
The Four Core Integration Patterns
Based on real-world customer implementations, Heroku AppLink supports four proven integration patterns. Each pattern addresses specific business needs and technical challenges.
Pattern 1: Salesforce API Access
This pattern enables Heroku web applications to integrate seamlessly with Salesforce APIs and Data Cloud APIs, providing data exchange and automation across multiple connected Salesforce orgs. It's ideal for scenarios where you need to extend existing applications with Salesforce connectivity and real-time customer insights.
Consider a consulting company serving multiple clients, each with their own Salesforce org, needing a unified customer portal that can access data across all client orgs. Traditional approaches face significant challenges, including:
- Managing authentication across multiple Salesforce orgs
- Handling different API versions and configurations
- Efficiently processing large data volumes with Bulk API
- Maintaining security and user permissions across orgs
- Providing a unified user experience
Shown below is a sample application we provide that illustrates a basic Node.js-powered web page using AppLink to retrieve accounts from multiple orgs. It uses the AppLink SDK to authenticate to each org, runs SOQL queries to gather the information, and presents it in a single view. The sample also demonstrates how the Salesforce Bulk API, exposed via the AppLink SDK, can be used to bulk-create records.
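The "single view" merge step in that sample can be sketched as follows. This is not the sample's actual code: the org aliases and record shapes are made up, and we assume each org's SOQL query has already returned its account records via the AppLink SDK.

```javascript
// Sketch of the multi-org "unified view" step. Assumes each org's SOQL
// query has already returned an array of account records; the org
// aliases ("acme", "globex") and fields are illustrative only.
function unifyAccounts(resultsByOrg) {
  // Flatten per-org results into one list, tagging each record with
  // the org it came from so the portal can show its source.
  return Object.entries(resultsByOrg)
    .flatMap(([orgAlias, records]) =>
      records.map((rec) => ({ ...rec, sourceOrg: orgAlias }))
    )
    .sort((a, b) => a.Name.localeCompare(b.Name));
}

const unified = unifyAccounts({
  acme: [{ Id: "001A", Name: "Beta Corp" }],
  globex: [{ Id: "001B", Name: "Alpha Ltd" }],
});
```

Tagging each record with its source org keeps the unified list traceable back to the owning org, which matters when edits must be written back through the correct connection.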
We discuss this mode of integration further in the architecture section. For now, the key benefits of this pattern are:
- Multi-Org Access: Seamlessly connect to multiple Salesforce orgs from a single Heroku app
- Data Cloud Integration: Access real-time customer insights and unified profiles through Data Cloud APIs
- Automatic Authentication: AppLink handles OAuth flows and token management across all orgs and Data Cloud
- Bulk API Integration: Efficiently handle large data volumes with built-in Bulk API support
- Permission Inheritance: Respect user permissions and sharing rules across all connected orgs
Pattern 2: Extending Apex, Flow, and Agentforce
This pattern addresses scenarios where Salesforce's native processing power isn't sufficient for complex calculations and data transformations. It enables you to use Heroku’s computational capabilities while maintaining seamless integration with Salesforce workflows. Heroku applications published into Salesforce appear directly under the Setup menu.
Extending Flow
Consider an airline company that needs to see the carbon footprint of their flights directly on Flight record detail pages. The calculation requires complex environmental data processing from multiple APIs, route optimization algorithms, and real-time emissions calculations that would exceed Flow's computational capabilities.
The process starts when Flow logic calls Java code (1) through the AppLink Add-on (2) to invoke a Heroku application (3) that performs complex carbon footprint calculations. The app queries Flight and Passenger records via SOQL (4), processes route data, aircraft specifications, and passenger counts using environmental APIs and emission factors. The calculated results are returned to the Flow (5), which displays the carbon footprint assessment directly on the Flight detail page, showing total CO2 emissions, per-passenger emissions, and per-kilometer metrics based on DEFRA 2023 emission factors and ICAO Aviation Emissions Guidelines.
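The core arithmetic behind step (4)–(5) can be illustrated with a simplified version of the per-flight calculation. The emission factor below is a placeholder, not an actual DEFRA 2023 value, and the real app combines several external data sources.

```javascript
// Simplified per-flight carbon calculation. factorKgPerPaxKm is a
// placeholder emission factor (kg CO2 per passenger-kilometer), not a
// real DEFRA figure.
function carbonFootprint({ distanceKm, passengers, factorKgPerPaxKm }) {
  const totalKgCO2 = distanceKm * passengers * factorKgPerPaxKm;
  return {
    totalKgCO2,
    perPassengerKgCO2: totalKgCO2 / passengers,
    perKmKgCO2: totalKgCO2 / distanceKm,
  };
}

const flight = carbonFootprint({
  distanceKm: 1000,
  passengers: 150,
  factorKgPerPaxKm: 0.15, // placeholder factor
});
```

The three returned metrics map directly onto the totals the Flow displays on the Flight detail page: total, per-passenger, and per-kilometer emissions.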
Extending Apex
Marketing teams use Apex to automate the creation of marketing campaigns, but they need to extend this automation to generate professional social media cards that help promote campaigns across different platforms. This requires image processing, template rendering, and design capabilities that go beyond Apex's native functionality.
The process begins when Apex code calls Java code through the AppLink Add-on (1→2), which routes the request to a Heroku application (3) that uses specialized libraries for image processing and template rendering. The Heroku app generates professional social media cards and returns the content data back to Salesforce via DML operations (4), automatically creating or updating Campaign records with the generated social media assets ready for marketing team use.
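The template-rendering step can be sketched with a plain string template; real image generation would use a graphics library on the Heroku side, but the data flow is the same. The field names below are illustrative, not the actual Campaign schema used by the sample.

```javascript
// Minimal template renderer: substitute {placeholders} with campaign
// field values. Field names are illustrative only.
function renderCardText(template, campaign) {
  return template.replace(/\{(\w+)\}/g, (_, key) => campaign[key] ?? "");
}

const cardText = renderCardText(
  "{name} starts {startDate} - join us!",
  { name: "Spring Sale", startDate: "March 1" }
);
```

In the real pattern, the rendered asset (or a URL to it) is what the Heroku app writes back onto the Campaign record via DML.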
Implementation Benefits
- Image Processing: Use specialized libraries for professional graphic design and template rendering
- No Governor Limits: Process image generation without Apex heap size or CPU timeout concerns
- Rich Media Creation: Generate high-quality social media assets with consistent branding
- Scalable Automation: Handle bulk campaign asset creation across multiple marketing channels
Extending Agentforce
Consider the Koa Cars dealership, where customers need instant, competitive finance estimates for vehicle purchases. The agent must perform complex financial calculations involving loan terms, interest rates, competitive market analysis, and real-time vehicle pricing that exceed the computational capabilities of standard agent actions.
The process begins when a customer requests a finance agreement through the Koa Cars Agent (1). The Agent Action calls Java code through the AppLink Add-on (2) to invoke a Heroku application (3) that performs sophisticated financial calculations. The app queries Vehicle records via SOQL (4) to get pricing and specifications, then integrates with external services for competitor comparisons (5). The calculated finance agreement data is returned to the agent (6), which presents a comprehensive response including total financing cost, monthly payments, loan terms, final car price, and adjusted interest rates based on competitive market analysis.
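A simplified version of the monthly-payment math behind the finance estimate uses the standard loan amortization formula. The rates and terms below are examples only; the actual app also adjusts rates using competitor data from step (5).

```javascript
// Standard amortization formula: payment = P*r / (1 - (1+r)^-n),
// where r is the monthly rate and n the number of monthly payments.
function monthlyPayment(principal, annualRatePct, months) {
  const r = annualRatePct / 100 / 12; // monthly interest rate
  if (r === 0) return principal / months; // zero-interest promotional case
  return (principal * r) / (1 - Math.pow(1 + r, -months));
}

// Example: $30,000 financed over 5 years at 6% APR.
const payment = monthlyPayment(30000, 6, 60);
```

From this one function the agent response can derive the remaining figures the scenario mentions: total financing cost is `payment * months`, and the interest portion is that total minus the principal.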
Pattern 3: Scaling Batch Jobs
This pattern addresses scenarios where Salesforce batch jobs are too slow, hit governor limits, or need to process data from multiple sources simultaneously. It uses Heroku's scalable worker architecture to handle large-scale data processing efficiently.
Consider a large multinational business generating complex quotes from opportunities. Quote calculations become intensive with complex pricing rules based on region, products, and discount thresholds. Rather than overwhelming Salesforce with these calculations, the system delegates quote generation to scalable Heroku worker processes. Traditional Batch Apex faces significant limitations:
- 2,000-record limit per batch execution
- 5 concurrent batch jobs maximum
- Complex error handling for partial failures
- No easy way to process external data
- Limited monitoring and retry capabilities
The architecture shows Opportunity updates triggering a scalable Heroku worker queue system that processes quote generation in parallel across multiple dynos. The worker processes perform complex pricing calculations using external data sources and business rules, then efficiently update Salesforce records via the Bulk API. This distributed processing approach eliminates the constraints of traditional Batch Apex while providing sophisticated monitoring, error handling, and retry capabilities for enterprise-scale data processing.
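The worker-side fan-out described above can be sketched as chunking the records and processing chunks in parallel, retrying failed chunks before giving up. Here `processChunk` is a stand-in for the real quote-generation and Bulk API update logic, and the chunk size and retry count are arbitrary examples.

```javascript
// Sketch of parallel chunked processing with per-chunk retry.
// processChunk stands in for the real quote calculation + Bulk API
// write-back; chunkSize and maxRetries are illustrative defaults.
async function processAll(records, chunkSize, processChunk, maxRetries = 3) {
  const chunks = [];
  for (let i = 0; i < records.length; i += chunkSize) {
    chunks.push(records.slice(i, i + chunkSize));
  }
  return Promise.all(
    chunks.map(async (chunk) => {
      for (let attempt = 1; ; attempt++) {
        try {
          return await processChunk(chunk); // one worker unit of work
        } catch (err) {
          if (attempt >= maxRetries) throw err; // surface after retries
        }
      }
    })
  );
}

// Example: each "chunk result" is just the chunk's record count.
const resultsPromise = processAll([1, 2, 3, 4, 5], 2, async (c) => c.length);
```

In production this loop would also record per-chunk progress, which is what enables the rich monitoring and partial-failure handling that Batch Apex lacks.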
Performance Improvements
- ~6x Faster Execution: Opportunity-to-Quote processing took ~24 seconds with Heroku vs. ~150 seconds with Batch Apex
- Intelligent Retry: Sophisticated error handling and automatic retries
- Unlimited Scale: Process millions of records without Salesforce limits
- Near-Instant Queuing: No queue wait times compared to Batch Apex dequeue delays
- Rich Monitoring: Real-time progress tracking and detailed logging
Pattern 4: Real-Time Eventing
This pattern enables immediate responses to data changes, real-time synchronization, and event-driven architectures. It's perfect for scenarios requiring instant processing and feedback loops between Salesforce and external systems.
Building on Pattern 3's batch processing approach, this pattern demonstrates event-driven quote generation. When Opportunity data is updated in Salesforce, Change Data Capture events automatically trigger real-time quote processing on Heroku, with results flowing back through custom notifications and Platform Events to continue processing in Salesforce Flow.
The architecture demonstrates a complete event-driven loop where Opportunity updates trigger Change Data Capture events that are consumed by Heroku eventing services. The system processes quote generation in real-time using worker processes, then publishes results back to Salesforce through Platform Events. This creates seamless integration between Salesforce Flow processes and external computational services, enabling immediate user notifications and continued workflow processing while maintaining transactional integrity through event buffering and reliable message delivery.
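The transaction-buffering step mentioned above can be sketched as grouping incoming change events by their transaction key, so one quote run handles every change from the same Salesforce transaction. The event shape below is simplified from the real CDC payload.

```javascript
// Group CDC change events by transaction key so related changes are
// processed together. Event shape is simplified from a real CDC payload.
function bufferByTransaction(events) {
  const buffers = new Map();
  for (const evt of events) {
    if (!buffers.has(evt.transactionKey)) {
      buffers.set(evt.transactionKey, []);
    }
    buffers.get(evt.transactionKey).push(evt);
  }
  return buffers; // one worker invocation per transaction key
}

const buffers = bufferByTransaction([
  { transactionKey: "tx-1", recordId: "006A" },
  { transactionKey: "tx-2", recordId: "006B" },
  { transactionKey: "tx-1", recordId: "006C" },
]);
```

Grouping by transaction key is what lets the worker publish a single Platform Event per transaction back to Salesforce, rather than one per changed record.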
Real-Time Benefits
- Immediate Processing: Automatic quote generation triggered by Opportunity updates
- Event-Driven Flow: Seamless transition from Heroku processing back to Salesforce Flow
- User Notifications: Custom notifications sent to desktop and mobile devices
- Streaming vs Batch: Process work as needed rather than in scheduled batches
- Transaction Buffering: Groups related CDC events by transaction key for efficient processing
Choose the Right Pattern
Use this decision framework to select the optimal integration pattern.
Choose API Access Pattern
- To build web, mobile, or API experiences on Heroku that need Salesforce data
- To connect to multiple Salesforce orgs at once
- For complex data transformation between different data sources to display to the user
Choose Extension Pattern
- To avoid Salesforce governor limits (CPU, memory, API calls)
- For specialized libraries (Python ML, R analytics, and more)
- For complex algorithms that benefit from general-purpose languages
- To reuse logic across multiple Salesforce contexts
Choose Batch Scaling Pattern
- To process large datasets (>100k records)
- For parallel processing across multiple data sources
- For sophisticated error handling and monitoring requirements
- For batch jobs that are taking too long or reaching limits
Choose Eventing Pattern
- For real-time responses to data changes
- To build event-driven architectures
- To synchronize data across multiple systems
- To decouple Salesforce from downstream processing