
Manage Access with API Policies

Learning Objectives

After completing this unit, you’ll be able to:

  • Describe the purpose of access control policies in API management.
  • Identify key types of access policies available in Anypoint API Manager.
  • Explain when and why to apply specific access policies based on use cases.

Access Control Policies

Not every API is meant for everyone. Some APIs are built for internal teams, others for trusted partners, and some are open to the public. But even public APIs need limits. That’s where access control policies can help.

Access control policies define the rules for who can call an API and under what conditions. These policies are applied at the proxy level in Anypoint API Manager, so teams can manage access without modifying backend systems.

They help protect APIs in several key ways.

  • Authentication: Ensuring only authorized apps or users can make requests
  • Rate control: Limiting how often an API can be called to preserve system performance
  • Request validation: Checking that incoming requests meet specific requirements

Access policies give API owners confidence that their services are being used as intended. Whether you’re protecting sensitive data, avoiding traffic spikes, or simply enforcing usage limits, policies let you do it consistently and at scale.
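The three protections listed above can be sketched in a few lines of code. This is a minimal, illustrative sketch only; the function names, the client registry, and the request shape are assumptions for the example, not Anypoint API Manager internals.

```python
# Hypothetical sketch of authentication, rate control, and request
# validation as proxy-side checks. All names here are illustrative.

REGISTERED_CLIENTS = {"dashboard-app": "s3cret"}  # assumed client registry
MAX_REQUESTS_PER_WINDOW = 3                       # assumed rate limit
request_counts = {}                               # per-client request counters

def authenticate(request):
    """Authentication: only registered apps may call the API."""
    client = request.get("client_id")
    return REGISTERED_CLIENTS.get(client) == request.get("client_secret")

def valid_request(request):
    """Request validation: the request must meet basic requirements."""
    return "resource" in request

def within_rate_limit(request):
    """Rate control: cap how often each client can call the API."""
    client = request.get("client_id")
    request_counts[client] = request_counts.get(client, 0) + 1
    return request_counts[client] <= MAX_REQUESTS_PER_WINDOW

def allow(request):
    # Checks short-circuit: an unauthenticated request never consumes quota.
    return authenticate(request) and valid_request(request) and within_rate_limit(request)
```

Note the ordering: authentication runs first, so rejected callers never count against a rate limit or reach any later check.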

[Image: An API icon next to a checklist with icons representing authentication, rate limiting, and client identity checks.]

Types of Access Policies in Anypoint API Manager

Anypoint API Manager provides a range of policy types to help teams secure and govern traffic effectively. These policies are applied to the API proxy, making them easy to configure and update without touching backend logic.

Here are some key policy types available.

  • Client ID Enforcement: Requires applications to provide a valid client ID and client secret to access the API. This is one of the most common policies and is often used to register and authenticate consuming apps.
  • OAuth 2.0 Access Token Enforcement: Enforces token validation against an external OAuth 2.0 provider. This is typically used when APIs are part of a broader authorization system or when integrating with identity providers for delegated access.
  • Rate Limiting: Restricts how many requests a client can make in a given time frame (for example, 1,000 requests per hour). This helps prevent overuse and ensures system stability.
  • Spike Control: Focuses on preventing sudden bursts of traffic that could overwhelm backend services. It smooths out traffic by temporarily delaying or rejecting excessive requests during short time windows.

Each policy serves a distinct purpose and can be combined as needed. For example, an API might enforce both authentication and rate limits to protect against unauthorized access and resource exhaustion. By applying the right mix of policies, teams can secure APIs at scale while maintaining performance and availability for trusted consumers.
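The "combined as needed" idea can be pictured as an ordered chain the proxy walks for each request. The sketch below is an assumption about how such a chain might look in general, not the actual Anypoint implementation; the policy names, return convention, and registry are made up for illustration.

```python
# Illustrative policy chain: each policy returns None to pass the request
# along, or an (HTTP status, message) tuple to reject it immediately.

KNOWN_CLIENTS = {"partner-app"}   # assumed registry of approved client IDs
request_counts = {}               # per-client counters for rate limiting
RATE_LIMIT = 2                    # assumed limit for the example

def client_id_enforcement(request):
    if request.get("client_id") in KNOWN_CLIENTS:
        return None
    return (401, "invalid client")

def rate_limiting(request):
    client = request["client_id"]
    request_counts[client] = request_counts.get(client, 0) + 1
    if request_counts[client] > RATE_LIMIT:
        return (429, "too many requests")
    return None

# Order matters: authenticate before counting usage.
POLICY_CHAIN = [client_id_enforcement, rate_limiting]

def apply_policies(request):
    for policy in POLICY_CHAIN:
        rejection = policy(request)
        if rejection:
            return rejection
    return (200, "forwarded to backend")
```

Combining the two policies this way rejects unknown callers with a 401 and throttles known callers with a 429, which mirrors the authentication-plus-rate-limit pairing described above.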

How Policies Work at the Proxy Level

In Anypoint Platform, access policies are enforced at the proxy, well before any traffic reaches your backend service. The proxy acts as a filter, inspecting each request against the active policies and determining whether to allow, block, or modify the request.

This happens in real time. For example, if a rate limiting policy is in place, the proxy counts requests from each client and enforces limits before forwarding requests to the backend service. If an OAuth 2.0 policy is enabled, the proxy checks the token before passing the call along.

Policy logic is processed by the gateway, not the API code itself. That means updates to policies can be made centrally in API Manager and take effect immediately across environments.

This approach keeps enforcement consistent, scalable, and isolated from the application layer, which is critical for managing APIs in production.
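To make the "enforced at the proxy, updated centrally" point concrete, here is a toy gateway. Everything in it is assumed for illustration: swapping the active policy list changes behavior on the very next request, and the backend function never changes or sees rejected traffic.

```python
# Toy gateway sketch: policy logic lives in the gateway, not the API code.

class Gateway:
    def __init__(self, backend):
        self.backend = backend
        self.policies = []            # managed centrally, like API Manager

    def set_policies(self, policies):
        self.policies = policies      # takes effect on the next request

    def handle(self, request):
        for policy in self.policies:
            error = policy(request)
            if error:
                return error          # blocked at the proxy
        return self.backend(request)  # backend only sees allowed traffic

def flight_backend(request):
    return (200, "flight data")

def require_token(request):
    # Stand-in for token validation; a real OAuth 2.0 policy would
    # verify the token against an external provider.
    return None if request.get("token") == "valid" else (401, "unauthorized")
```

Applying `require_token` via `set_policies` changes enforcement immediately, without redeploying or even touching `flight_backend`, which is the central claim of this section.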

When and Why to Use Specific Policies

Developers choose which policies to use based on the audience, use case, and level of sensitivity involved. Here are a few common scenarios that influence policy decisions.

  • Internal-only APIs: If the API is only used by internal systems, developers might skip strict authentication in favor of IP filtering and request validation. Policies focus more on ensuring reliability and avoiding accidental misuse.
  • Partner-facing APIs: When sharing APIs with trusted partners, client ID enforcement becomes essential. This helps you authenticate who’s calling the API and track usage by application. Rate limiting may also be used to guarantee fair usage across consumers.
  • Public APIs: Public-facing APIs need strong boundaries. OAuth 2.0 is often used to verify users, while rate limiting and spike control help protect performance. Some teams also apply logging or compliance policies to meet audit requirements.
  • High-traffic or critical systems: For APIs that drive core business processes or serve a large user base, performance protection is important. Spike control smooths out sudden request surges, and rate limits keep backend systems from being overwhelmed.
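The spike-control behavior described above, rejecting short bursts even when the average rate is fine, is commonly modeled as a token bucket. This is a generic sketch of that technique, not MuleSoft's implementation; the capacity and refill rate are arbitrary example values.

```python
# Token-bucket sketch of spike control: each request spends one token,
# and tokens refill steadily over time up to a fixed capacity.

class SpikeControl:
    def __init__(self, capacity=3, refill_per_second=1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_second
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # burst exceeded the bucket; reject (or delay)
```

A burst of four simultaneous requests exhausts a capacity-3 bucket, so the fourth is rejected; a couple of seconds later, refilled tokens let traffic through again. That smoothing is exactly what protects a backend during sudden surges.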

Policy selection is a big part of good API design. The right mix protects your services while ensuring consumers can still use them effectively.

How It Works at MUA: Controlling Access to the Flight Data API

The Mule United Airport team is almost ready to go live with their new flight data API. It powers the real-time arrival and departure board, and soon, it will also support a mobile app and share updates with airline partners.

Before launch, the team uses API Manager to apply the right access policies.

  • For the public dashboard, they set rate limits to handle traffic from thousands of travelers checking updates throughout the day.
  • For partner apps, they apply client ID enforcement so only registered applications can access detailed flight data.
  • For added protection, they enable spike control to smooth out traffic surges during weather events or system alerts.

With these policies in place, the team can manage access across different use cases, all without modifying the backend service. They monitor all traffic through API Manager and make adjustments as needed. By applying thoughtful access controls at the proxy level, MUA secures its flight data API, supports reliable experiences for users, and stays in control as traffic grows.

Securing your APIs is just the beginning. In the next unit, you explore how Anypoint Platform helps you monitor traffic, detect issues, and maintain high performance in production so your APIs stay reliable long after they go live.
