
Explore the Role of API Proxies

Learning Objectives

After completing this unit, you’ll be able to:

  • Explain how API proxies act as intermediaries between consumers and backend services.
  • Describe the benefits of using API proxies for security, monitoring, and traffic control.
  • Identify when and why to use a proxy instead of directly exposing an API implementation.

Learn About API Proxies

At Mule United Airport, the team just finished building their new flight data API. It connects to airline systems, updates schedules in real time, and powers a public dashboard travelers can access from their phones. But before they make the API public, they need to manage how it’s accessed and ensure it runs securely and reliably.

An API proxy helps make that possible. Instead of exposing the backend service directly, the team sets up a proxy that routes requests to it. This gives them control over how the API is used and by whom.

With a proxy in place, they can:

  • Enforce rules like rate limits or client ID requirements.
  • Monitor traffic and performance.
  • Swap out or update backend services without changing the public interface.

This structure helps teams manage APIs more safely and flexibly, especially when working across environments or serving external consumers. Now that you understand how an API proxy enables control and security, the next question is how to apply this layer of management within Anypoint Platform.
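The role of the proxy can be pictured as a small, framework-free sketch. The backend here is a plain callable and the client IDs are invented for illustration; a real gateway such as Mule Gateway implements the same pattern at the HTTP layer.

```python
APPROVED_CLIENT_IDS = {"partner-app-1", "partner-app-2"}  # hypothetical IDs

def backend_flight_data(path):
    """Stand-in for the real backend service."""
    return {"status": 200, "body": f"flight data for {path}"}

def proxy(request, backend=backend_flight_data):
    """Route a request to the backend only if it satisfies the proxy's rules."""
    # Rule: the caller must present an approved client ID.
    if request.get("client_id") not in APPROVED_CLIENT_IDS:
        return {"status": 401, "body": "unknown client"}
    # Forward to the backend; the caller never sees its address.
    return backend(request["path"])

print(proxy({"client_id": "partner-app-1", "path": "/flights/AA100"}))
print(proxy({"client_id": "rogue-app", "path": "/flights/AA100"}))
```

Because clients only ever talk to `proxy`, the team can swap `backend_flight_data` for a different implementation without changing the public interface.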

[Image: a client on the left and an API on the right, connected by a thin line labeled "API Proxy."]

API Proxy vs API Autodiscovery

In Anypoint API Manager, there are two common ways to manage and apply policies to an API running on Mule Gateway.

| Scenario | Use API Autodiscovery? | Use an API Proxy? |
| --- | --- | --- |
| You built the API in Mule and own the code. | Yes. The API runs as a Mule app, and you can configure it for API gateway autodiscovery. | No. It is not needed in this case. |
| You want the simplest way to apply policies. | Yes. Policies are applied directly to the Mule app without deploying another application. | No. It adds extra setup that is usually unnecessary. |
| The API already exists outside Mule. | No. Autodiscovery cannot be used because the API is not a Mule application. | Yes. A proxy lets you manage a non-Mule API through API Manager. |
| The backend API cannot be modified. | No. Autodiscovery requires adding an API ID to the application code. | Yes. A proxy applies policies without touching the backend. |
| The API is owned by another team or vendor. | No. Autodiscovery requires access to update the Mule application. | Yes. A proxy acts as a controlled front door for an external API. |

API Autodiscovery is typically the simplest path when an API already runs on Mule and its configuration can be updated. API proxies are a better fit when you need to manage APIs you can’t—or prefer not to—change directly, or when the backend isn’t running on Mule at all.

How Proxies Work in Anypoint Platform

In Anypoint Platform, an API proxy receives requests from clients, evaluates those requests based on configured policies, and then forwards them to the backend service. This backend service is often a Mule application deployed to CloudHub or another supported runtime.

Here’s how the process works.

  1. A client sends a request to the API's public endpoint.
  2. The request is routed to the proxy managed in API Manager.
  3. The proxy checks for conditions such as authentication, rate limits, or header requirements.
  4. If the request meets the conditions, it is forwarded to the backend implementation.
  5. The backend processes the request and returns a response.
  6. The proxy can apply additional policies to the response before sending it back to the client.

This approach separates the API’s interface from its implementation. Developers can update or move the backend without changing how consumers interact with the API. API Manager handles policy enforcement, monitoring, and version control, all through the proxy.
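The six steps above can be sketched as a single function. The policy shown (a simple per-client request counter) and all names are illustrative, not Anypoint APIs.

```python
RATE_LIMIT = 5                 # assumed max requests per client per window
_request_counts = {}           # client_id -> requests seen so far

def handle(request, backend):
    # Steps 1-2: the client's request arrives at the proxy's public endpoint.
    client = request.get("client_id")
    # Step 3: evaluate configured conditions (here: a simple counter).
    _request_counts[client] = _request_counts.get(client, 0) + 1
    if _request_counts[client] > RATE_LIMIT:
        return {"status": 429, "body": "rate limit exceeded"}
    # Steps 4-5: forward to the backend and collect its response.
    response = backend(request)
    # Step 6: apply a response-side policy before replying to the client.
    response.setdefault("headers", {})["X-Proxied-By"] = "api-proxy"
    return response

echo_backend = lambda req: {"status": 200, "body": req["path"]}
print(handle({"client_id": "app-1", "path": "/flights"}, echo_backend))
```

Note that the backend is passed in as a parameter: the interface (`handle`) stays fixed while the implementation behind it can change.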

Create a Proxy in API Manager

In Anypoint API Manager, creating a proxy involves defining how your API should be exposed and managed. You begin with an API specification from Anypoint Exchange, then configure settings that control how the proxy operates.

During setup, you choose:

  • Where to deploy the proxy, such as CloudHub or Runtime Fabric
  • The internal URI where the backend service is hosted
  • Which policies to apply, like client ID enforcement or throttling

Once created, the proxy becomes the public-facing entry point for your API. From API Manager, you can monitor usage, update policies, and publish new versions as the API evolves. This allows you to manage access and enforce governance without changing the underlying implementation.
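The three setup choices above can be pictured as a configuration record. The field names and URI below are illustrative assumptions, not the actual API Manager schema.

```python
from dataclasses import dataclass, field

@dataclass
class ProxyConfig:
    deployment_target: str                 # e.g. "CloudHub" or "Runtime Fabric"
    implementation_uri: str                # internal URI of the backend service
    policies: list = field(default_factory=list)  # policies to apply

# Hypothetical configuration for the flight data API's proxy.
cfg = ProxyConfig(
    deployment_target="CloudHub",
    implementation_uri="http://internal.example.com/flights",
    policies=["client-id-enforcement", "rate-limiting"],
)
print(cfg)
```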

Secure and Manage APIs Through the Proxy

Once a proxy is in place, you can apply security and management policies using Anypoint API Manager. These policies help protect your API and ensure it performs reliably in production.

Common policy areas include:

  • Authentication: Require client applications to identify themselves using credentials like client ID and secret.
  • Rate limiting and throttling: Control how many requests are allowed over a given time period to prevent misuse or system overload.
  • Header and IP filtering: Accept or block traffic based on custom request headers or IP addresses.
  • Compliance checks: Add policies for logging, auditing, or validating requests against defined rules.

These policies are applied at the proxy level. You can change them at any time without modifying the backend implementation.
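One way to picture proxy-level policies is as a chain of checks that runs before the backend is ever called: editing the chain changes enforcement without touching the backend. The policy names and the blocked IP below are made up for illustration.

```python
def require_client_id(request):
    """Authentication sketch: reject callers with no client ID."""
    if "client_id" not in request:
        return {"status": 401, "body": "missing client ID"}

def block_denied_ips(request, denied=frozenset({"10.0.0.99"})):
    """IP-filtering sketch: reject traffic from a deny list."""
    if request.get("ip") in denied:
        return {"status": 403, "body": "IP blocked"}

# The active policy chain; editable at any time, backend untouched.
POLICIES = [require_client_id, block_denied_ips]

def apply_policies(request):
    for policy in POLICIES:
        rejection = policy(request)
        if rejection:                      # a policy produced an error response
            return rejection
    return {"status": 200, "body": "forwarded to backend"}

print(apply_policies({"client_id": "app-1", "ip": "10.0.0.7"}))
```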

Monitoring is also built into API Manager. Dashboards show metrics like usage trends, response times, and error rates. This visibility helps you identify issues early, optimize performance, and plan for growth.

Together, policy enforcement and monitoring give you the control you need to secure and scale your APIs, without changing the code behind them.

How It Works at MUA: Securing the Flight Data API

At Mule United Airport, the development team is preparing to launch the new flight data API that powers the public arrival and departure board. Before making it publicly accessible, they need to control who can access it, how often it can be called, and how it performs under load.

Using Anypoint API Manager, the team creates a proxy for the flight data API. They apply a client ID enforcement policy so only approved partner apps can call the API. They also set rate limiting rules to prevent overuse during peak travel hours.

With the proxy in place, they monitor usage in real time, review error rates, and track request volume across endpoints. If any issues arise, they can adjust policies or version the API—without touching the backend application that connects to airline systems.

By managing the API through a proxy, MUA ensures secure access, clear visibility, and the flexibility to evolve their API as the airport’s needs grow.

Now that you’ve explored how proxies help manage and protect APIs, it’s time to examine the policies themselves. In the next unit, you learn how to use specific access control policies to secure your APIs and define who gets to use them.
