
Today’s enterprise is often made up of globally distributed teams building and deploying APIs and microservices, usually across more than one deployment environment. According to F5’s State of Application Strategy Report, 81% of organizations operate across three or more environments, spanning public cloud, private cloud, on premises, and edge.

Ensuring the reliability and security of these complex, multi‑cloud architectures is a major challenge for the Platform Ops teams responsible for maintaining them. According to software engineering leaders surveyed in the F5 report, visibility (45%) and consistent security (44%) top the list of multi‑cloud challenges faced by Platform Ops teams.

With the growing number of APIs and microservices today, API governance is quickly becoming one of the most important topics for planning and implementing an enterprise‑wide API strategy. But what is API governance, and why is it so important for your API strategy?

What Is API Governance?

At the most basic level, API governance involves creating policies and running checks and validations to ensure APIs are discoverable, reliable, observable, and secure. It provides visibility into the state of the complex systems and business processes powering your modern applications, which you can use to guide the evolution of your API infrastructure over time.

Why Do You Need API Governance?

The strategic importance of API governance cannot be overstated: it’s the means by which you realize your organization’s overall API strategy. Without proper governance, you can never achieve consistency across the design, operation, and productization of your APIs.

When done poorly, governance often imposes burdensome requirements that slow teams down. When done well, however, API governance reduces work, streamlines approvals, and allows different teams in your organization to function independently while delivering on the overall goals of your API strategy.

What Types of APIs Do You Need to Govern?

Building an effective API governance plan as part of your API strategy starts with identifying the types of APIs you have in production, and the tools, policies, and guidance you need to manage them. Today, most enterprise teams are working with four primary types of APIs:

  • External APIs – Delivered to external consumers and developers to enable self‑service integrations with data and capabilities
  • Internal APIs – Used for connecting internal applications and microservices and only available to your organization’s developers
  • Partner APIs – Facilitate strategic business relationships by sharing access to your data or applications with developers from partner organizations
  • Third-Party APIs – Consumed from third‑party vendors as a service, often for handling payments or enabling access to data or applications

Each type of API in the enterprise must be governed to ensure it is secure, reliable, and accessible to the teams and users who need it.

What API Governance Models Can You Use?

There are many ways to define and apply API governance. At NGINX, we typically see customers applying one of two models:

  • Centralized – A central team reviews and approves changes; depending on the scale of operations, this team can become a bottleneck that slows progress
  • Decentralized – Individual teams have autonomy to build and manage APIs; this speeds time to market but sacrifices overall security and reliability

As companies progress in their API‑first journeys, however, both models start to break down as the number of APIs in production grows. Centralized models often try to implement a one-size-fits-all approach that requires various reviews and signoffs along the way. This slows dev teams down and creates friction – in their frustration, developers sometimes even find ways to work around the requirements (the dreaded “shadow IT”).

The other model – decentralized governance – works well for API developers at first, but over time complexity increases. Unless the different teams deploying APIs communicate frequently, the overall experience becomes inconsistent across APIs: each is designed and functions differently, version changes result in outages between services, and security is enforced inconsistently across teams and services. For the teams building APIs, the additional work and complexity eventually slows development to a crawl, just like the centralized model.

Cloud‑native applications rely on APIs for the individual microservices to communicate with each other, and to deliver responses back to the source of the request. As companies continue to embrace microservices for their flexibility and agility, API sprawl will not be going away. Instead, you need a different approach to governing APIs in these complex, constantly changing environments.

Use Adaptive Governance to Empower API Developers

Fortunately, there is a better way. Adaptive governance offers an alternative model that empowers API developers while giving Platform Ops teams the control they need to ensure the reliability and security of APIs across the enterprise.

At the heart of adaptive governance is balancing control (the need for consistency) with autonomy (the ability to make local decisions) to enable agility across the enterprise. In practice, the adaptive governance model unbundles and distributes decision making across teams.

Platform Ops teams manage shared infrastructure (API gateways and developer portals) and set global policies to ensure consistency across APIs. Teams building APIs, however, act as the subject matter experts for their services or line of business. They are empowered to set and apply local policies for their APIs – role‑based access control (RBAC), rate limiting for their service, etc. – to meet requirements for their individual business contexts.

Adaptive governance allows each team or line of business to define its workflows and balance the level of control required, while using the organization’s shared infrastructure.

Implement Adaptive Governance for Your APIs with NGINX

As you start to plan and implement your API strategy, follow these best practices to implement adaptive governance in your organization:

  • Provide shared infrastructure
  • Give teams agency
  • Balance global policies and local control

Let’s look at how you can accomplish these use cases with API Connectivity Manager, part of F5 NGINX Management Suite.

Provide Shared Infrastructure

Teams across your organization are building APIs, and they need to include similar functionality in their microservices: authentication and authorization, mTLS encryption, and more. They also need to make documentation and versioning available to their API consumers, be those internal teams, business partners, or external developers.

Rather than requiring teams to build their own solutions, Platform Ops teams can provide access to shared infrastructure. As with all actions in API Connectivity Manager, you can set this up in just a few minutes using either the UI or the fully declarative REST API, which enables you to integrate API Connectivity Manager into your CI/CD pipelines. In this post we use the UI to illustrate some common workflows.

API Connectivity Manager supports two types of Workspaces: infrastructure and services. Infrastructure Workspaces are used by Platform Ops teams to onboard and manage shared infrastructure in the form of API Gateway Clusters and Developer Portal Clusters. Services Workspaces are used by API developers to publish and manage APIs and documentation.

To set up shared infrastructure, first add an infrastructure Workspace. Click Infrastructure in the left navigation column and then the  + Add  button in the upper right corner of the tab. Give your Workspace a name (here, it’s team-sentence – an imaginary team building a simple “Hello, World!” API).

Screenshot of Workspaces page on Infrastructure tab of API Connectivity Manager UI
Figure 1: Add Infrastructure Workspaces
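The same step can be scripted against the declarative REST API mentioned above, which is how you would wire this into a CI/CD pipeline. The sketch below is illustrative only – the host variable, credentials, and exact endpoint path are assumptions to verify against the API Connectivity Manager documentation.

```shell
# Illustrative sketch: create the team-sentence infrastructure Workspace
# via the REST API. NMS_HOST and the admin credentials are placeholders
# for your own deployment; the endpoint path is an assumption.
curl -k -X POST "https://${NMS_HOST}/api/acm/v1/infrastructure/workspaces" \
  -u "admin:${NMS_PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "team-sentence",
        "metadata": {
          "description": "Infrastructure for the Hello, World! API"
        }
      }'
```

A success response indicates the Workspace was created; the identical call drops neatly into a pipeline stage so that environments are reproducible rather than hand-built.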

Next, add an Environment to the Workspace. Environments contain API Gateway Clusters and Developer Portal Clusters. Click the name of your Workspace and then the icon in the Actions column; select Add from the drop‑down menu.

The Create Environment panel opens as shown in Figure 2. Fill in the Name (and optionally, Description) field, select the type of environment (production or non‑production), and click the + Add button for the infrastructure you want to add (API Gateway Clusters, Developer Portal Clusters, or both). Click the  Create  button to finish setting up your Environment. For complete instructions, see the API Connectivity Manager documentation.

Screenshot of Create Environment panel in API Connectivity Manager UI
Figure 2: Create an Environment and onboard infrastructure
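Declaratively, an Environment is just another resource nested under the Workspace. The payload below is a sketch of the same step – the field names (`type`, `proxies`, and so on) are assumptions modeled on the UI options shown in Figure 2, so consult the documentation for the exact schema.

```shell
# Illustrative sketch: add a non-production Environment containing an
# API Gateway Cluster to the team-sentence Workspace. The path and
# field names are assumptions, not the definitive schema.
curl -k -X POST \
  "https://${NMS_HOST}/api/acm/v1/infrastructure/workspaces/team-sentence/environments" \
  -u "admin:${NMS_PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "dev",
        "type": "NON-PROD",
        "proxies": [
          {
            "proxyClusterName": "api-gateway-cluster",
            "hostnames": ["api.example.com"]
          }
        ]
      }'
```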

Give Teams Agency

Providing logical separation for teams by line of business, geographic region, or other logical boundary makes sense – as long as it doesn’t deprive them of access to the tools they need to succeed. Having access to shared infrastructure shouldn’t mean teams have to worry about activities at the global level. Instead, you want them to focus on defining their own requirements, charting a roadmap, and building their microservices.

To help teams organize, Platform Ops teams can provide services Workspaces for teams to organize and operate their services and documentation. These create logical boundaries and provide access to different environments – development, testing, and production, for example – for developing services. The process is similar to creating the infrastructure Workspace in the previous section.

First, click Services in the left navigation column and then the  + Add  button in the upper right corner of the tab. Give your Workspace a name (here, api-sentence for our “Hello, World” service), and optionally provide a description and contact information.

Screenshot of Workspaces page on Services tab of API Connectivity Manager UI
Figure 3: Create a services Workspace
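As with infrastructure, a services Workspace can be created with one API call; again, treat the host, credentials, path, and payload as a sketch rather than the definitive schema – adapt them from the API Connectivity Manager documentation.

```shell
# Illustrative sketch: create the api-sentence services Workspace.
# NMS_HOST, the credentials, and the contact fields are placeholders.
curl -k -X POST "https://${NMS_HOST}/api/acm/v1/services/workspaces" \
  -u "admin:${NMS_PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "api-sentence",
        "metadata": {
          "description": "Hello, World service",
          "contacts": [
            { "name": "Team Sentence", "email": "team-sentence@example.com" }
          ]
        }
      }'
```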

At this point, you can invite API developers to start publishing proxies and documentation in the Workspace you’ve created for them. For complete instructions on publishing API proxies and documentation, see the API Connectivity Manager documentation.

Balance Global Policies and Local Control

Adaptive governance requires a balance between enforcing global policies and empowering teams to make decisions that boost agility. You need to establish a clear separation of responsibilities by defining the global settings enforced by Platform Ops and setting “guardrails” that define the tools API developers use and the decisions they can make.

API Connectivity Manager provides a mix of global policies (applied to shared infrastructure) and granular controls managed at the API proxy level.

Global policies available in API Connectivity Manager include:

  • Error Response Format – Customize the API gateway’s error code and response structure
  • Log Format – Enable access logging and customize the format of log entries
  • OpenID Connect – Secure access to APIs with an OpenID Connect policy
  • Response Headers – Include or exclude headers in the response
  • Request Body Size – Limit the size of incoming API payloads
  • Inbound TLS – Set the policy for TLS connections with API clients
  • Backend TLS – Secure the connection to backend services with TLS

API proxy policies available in API Connectivity Manager include:

  • Allowed HTTP Methods – Define which request methods can be used (GET, POST, PUT, etc.)
  • Access Control – Secure access to APIs using different authentication and authorization techniques (API keys, HTTP Basic Authentication, JSON Web Tokens)
  • Backend Health Checks – Run continuous health checks to avoid failed requests to backend services
  • CORS – Enable controlled access to resources by clients from external domains
  • Caching – Improve API proxy performance with caching policies
  • Proxy Request Headers – Pass select headers to backend services
  • Rate Limiting – Limit incoming requests and secure API workloads

In the following example, we use the UI to define a policy that secures communication between an API Gateway Proxy and backend services.

Click Infrastructure in the left navigation column. After you click the name of the Environment containing the API Gateway Cluster you want to edit, the tab displays the API Gateway Clusters and Developer Portal Clusters in that Environment.

Screenshot of Environment page on Infrastructure tab of API Connectivity Manager UI
Figure 4: Configure global policies for API Gateway Clusters and Developer Portal Clusters

In the row for the API Gateway Cluster to which you want to apply a policy, click the icon in the Actions column and select Edit Advanced Configuration from the drop‑down menu. Click Global Policies in the left column to display a list of all the global policies you can configure.

Screenshot of Global Policies page in API Connectivity Manager UI
Figure 5: Configure policies for an API Gateway Cluster

To apply the TLS Backend policy, click the icon at the right end of its row and select Add Policy from the drop‑down menu. Fill in the requested information, upload your certificate, and click Add. Then click the  Save and Submit  button. From now on, traffic between the API Gateway Cluster and the backend services is secured with TLS. For complete instructions, see the API Connectivity Manager documentation.

Summary

Planning and implementing API governance is a crucial step in ensuring your API strategy is successful. By working towards a distributed model and relying on adaptive governance to address the unique requirements of different teams and APIs, you can scale and apply uniform governance without sacrificing the speed and agility that make APIs, and cloud‑native environments, so productive.

Get Started

Start a 30‑day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.

Managing Kubernetes Traffic with F5 NGINX: A Practical Guide

Learn how to manage Kubernetes traffic with F5 NGINX Ingress Controller and F5 NGINX Service Mesh and solve the complex challenges of running Kubernetes in production.



About The Author

Andrew Stiefel

Product Marketing Manager

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source project. We offer a suite of technologies for developing and delivering modern applications. Together with F5, our combined solution bridges the gap between NetOps and DevOps, with multi-cloud application services that span from code to customer.

Learn more at nginx.com or join the conversation by following @nginx on Twitter.