
Microservices are almost synonymous with cloud‑native applications: small components, networked together and deployed as loosely coupled services. APIs have been around for decades, so API gateways are not a new concept. But as part of the transition from monolithic to microservices‑based apps, the industry has largely moved from SOAP APIs with XML‑encoded payloads to REST APIs with JSON‑encoded payloads. Not only have the underlying technology and data formats changed, but we’ve also changed the way we build APIs and the messaging and interaction styles they use.

Microservices and API Management

A hugely distributed environment, with hundreds of APIs owned by individual developers, inherently creates challenges. You may have multiple API gateways – one for each API, one for each team that is building the APIs, and so on – and as APIs are exposed in production, security becomes critical. If TLS is required end to end, for example, you need to make sure traffic is encrypted not just between the client and the gateway but also between the gateway and the services behind it. Another issue is the conflicting priorities of application and security teams: DevOps puts a premium on speed, whereas it’s the job of NetOps to test changes carefully for compliance with the policies that protect the organization’s assets.
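
To make the end‑to‑end TLS point concrete, here is a minimal NGINX sketch of a gateway that terminates the client’s TLS connection and then re‑encrypts traffic to the backend. The hostname, certificate paths, and backend address are assumptions for illustration only:

    # Hypothetical gateway server: terminate the client's TLS session,
    # then re-encrypt traffic to the upstream so it is never in cleartext.
    server {
        listen 443 ssl;
        server_name api.example.com;                  # assumed hostname

        ssl_certificate     /etc/nginx/ssl/api.crt;   # assumed certificate paths
        ssl_certificate_key /etc/nginx/ssl/api.key;

        location /v1/ {
            proxy_pass https://api_backend;           # HTTPS to the backend too
            proxy_ssl_verify on;                      # verify the backend's certificate
            proxy_ssl_trusted_certificate /etc/nginx/ssl/ca.crt;
        }
    }

    upstream api_backend {
        server 10.0.0.10:8443;                        # assumed backend address
    }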

Deploying faster and more frequently can only work if you have an efficient CI/CD pipeline. Your API gateway may be part of this pipeline, but how do you update an API and tell the gateway that it now needs to route traffic to, and enforce authentication and access control for, the new endpoints? The gateway probably needs to be configured through its own API. Ideally, you make these API calls through a dedicated central system so that the gateway is updated in a consistent, controlled way.
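
Whatever system drives the update, the end result is typically a small piece of declarative gateway configuration. As a hedged sketch in NGINX terms, a pipeline might generate a fragment like the following for a newly published API version; the path, upstream name, and key file location are assumptions (auth_jwt is an NGINX Plus directive):

    # Hypothetical fragment generated by a CI/CD pipeline to publish
    # a new API version with authentication and routing enabled.
    location /inventory/v2/ {
        auth_jwt "inventory API";                 # require a valid JWT (NGINX Plus)
        auth_jwt_key_file /etc/nginx/jwks.json;   # assumed location of signing keys
        proxy_pass http://inventory_v2;           # assumed upstream group
    }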

One approach that NGINX takes with NGINX Controller [now F5 NGINX Management Suite] is to provide an API interface so that you can update APIs and publish new ones, giving application teams an interface to move fast and build their own CI/CD pipelines. At the same time, API management needs to happen in a centralized location so that the right security policies can be applied, but without slowing everything down.

Questions to Ask When Deploying an API Gateway

Cloud vendors are making it easy to deploy API gateways. You can move your workloads to the cloud and then set up an API gateway for authentication, rate limiting, and routing, but you need to think about the complexity. Your API endpoints may have the right infrastructure today, but how many APIs will you have 6–12 months from now? You need to be prepared to scale up or down. How many teams are deploying or changing the APIs? You may need role‑based access control with appropriate permissions. Are the APIs for internal or external consumption? Usually, the more external an API is, the more attractive it is to deploy in the public cloud.
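
Rate limiting, at least, is straightforward to express at the gateway. A minimal NGINX sketch, where the zone size, rate, and upstream name are arbitrary assumptions:

    # Hypothetical rate limit: track clients by IP address,
    # allowing 10 requests per second with short bursts absorbed.
    limit_req_zone $binary_remote_addr zone=api_clients:10m rate=10r/s;

    server {
        listen 443 ssl;
        # ... TLS settings omitted ...

        location /api/ {
            limit_req zone=api_clients burst=20 nodelay;
            proxy_pass http://api_backend;        # assumed upstream
        }
    }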

The questions change when you deploy APIs in Kubernetes for orchestration and containerization. You then start to ask which API gateway to use and where to put it: in front of the cluster, or as part of a service running inside it. Will you need an Ingress controller? The Ingress controller itself can serve as your API gateway, which avoids unnecessary hops and single points of failure, as sketched below.
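
An Ingress controller built on NGINX ultimately renders ordinary NGINX configuration. A deliberately simplified, hand‑written sketch of what an Ingress rule routing an API path to an in‑cluster service might translate to (the service name and ports are assumptions, and real generated configuration is far more elaborate):

    # Roughly what an Ingress rule for /orders might produce;
    # the Ingress controller generates configuration like this for you.
    upstream orders_service {
        server orders.default.svc.cluster.local:8080;   # assumed in-cluster service
    }

    server {
        listen 80;
        location /orders {
            proxy_pass http://orders_service;
        }
    }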

API Gateways and Service Mesh

Service mesh is quite a hot topic, mainly because it improves the management and visibility of microservices‑based applications as a whole.

With Kubernetes, north‑south traffic usually consists of API calls from clients to endpoints. However, responding to the initial API call might require calling another API, and now you have service‑to‑service communication (east‑west traffic), which is where service meshes come in. The goal of a service mesh is to provide better visibility into east‑west traffic, more security controls (using mutual TLS [mTLS], for example), and the ability to manage all of this from a control plane. For east‑west traffic, each service may have a sidecar proxy, which can also act as an API gateway.
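
In NGINX terms, a sidecar enforcing mTLS for east‑west traffic might look something like this deliberately simplified sketch; the ports and certificate paths are assumptions, and a real mesh provisions and rotates these credentials automatically:

    # Hypothetical sidecar: require a valid client certificate on inbound
    # connections and forward decrypted traffic to the local application.
    server {
        listen 9443 ssl;
        ssl_certificate        /etc/mesh/svc.crt;   # this service's identity
        ssl_certificate_key    /etc/mesh/svc.key;
        ssl_client_certificate /etc/mesh/ca.crt;    # the mesh's CA
        ssl_verify_client      on;                  # reject peers without a valid cert

        location / {
            proxy_pass http://127.0.0.1:8080;       # the application container
        }
    }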

A sidecar proxy can be an API gateway, but it’s best to have your API gateway handle north‑south traffic, separate from the sidecar proxies that handle east‑west. With a service mesh, the sidecar proxies are essentially autonomous and invisible: you configure which services are allowed to talk to each other, and clients don’t know whether you are running microservices or a service mesh. So it’s usually best to keep the API gateway as the way you publish and expose your APIs externally, and let the service mesh handle security, control how services communicate with each other, and provide visibility into what’s going on from a service‑to‑service perspective.

API Gateways and Hybrid Cloud

It’s important to understand the difference between API management and API gateways here. The API gateway is the data plane that sits between the client and the API endpoint and is responsible for routing, policies, and security. With a hybrid cloud, there might be many API gateways that need to stay in sync with each other. You need an API management control plane to define policies, push configurations, report, and provide visibility across all of the gateways. Ideally, the API management platform is agnostic to the infrastructure, giving you the freedom to deploy your API gateways on whatever infrastructure makes sense.

API Gateways in Standard Cloud Infrastructure

In a DevOps environment, it is important to integrate the configuration of API gateways into your CI/CD pipelines so that the gateways are ready to deliver the APIs you are publishing. Configuring and deploying gateways correctly usually requires conditional logic and custom processing. Use a centralized API management system to handle that instead of trying to capture the logic in your CI/CD scripts. However, you don’t want to store API definitions, deployment locations, policies, and so on in the API management platform itself, but rather in a source control repository. Your CI/CD pipeline scripts then pull the information from the repo.
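
One NGINX pattern that fits this model is keeping each API’s routing in its own configuration file in the repository and having the pipeline copy the files into a directory that the main gateway configuration already includes. A sketch, with the paths as assumptions:

    # Excerpt from the main gateway configuration, kept in source control.
    # Each published API contributes one file, e.g. inventory_api.conf,
    # which the CI/CD pipeline pulls from the repo and drops into place.
    http {
        include /etc/nginx/api_conf.d/*.conf;
    }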

REST vs. Messaging for APIs

It’s been almost a decade since JSON REST APIs overtook SOAP XML APIs in popularity. There is a great case for combining synchronous communication (for example, JSON REST APIs) with asynchronous communication (for example, pub/sub messaging). gRPC supports both styles over bidirectional streaming: synchronous RPC calls and asynchronous message event streams.

An API gateway is vital when you mix synchronous and asynchronous protocols. It’s the consistent single entry point to which the API client sends calls, regardless of the messaging style, which makes the client’s life easier: whatever style a given API uses, it can be consumed through the same entry point.
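
For instance, a single NGINX server block can front both styles, routing REST calls with proxy_pass and gRPC streams with grpc_pass. A sketch, where the service path and upstream names are assumptions:

    # Hypothetical single entry point serving both REST and gRPC clients.
    server {
        listen 443 ssl http2;                  # gRPC requires HTTP/2
        # ... TLS settings omitted ...

        location /api/ {
            proxy_pass http://rest_backend;    # synchronous JSON REST calls
        }

        location /events.EventService/ {       # assumed gRPC package/service path
            grpc_pass grpc://grpc_backend;     # bidirectional streaming RPCs
        }
    }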

Conclusion

For successful use of APIs in a microservices‑based environment, you need to provide the tools for developers to design, implement, and publish their APIs in an automated way, ideally as part of a CI/CD pipeline. The goal is to be iterative, with a high focus on security.

Modern‑day architecture is changing at a rapid pace. Your APIs might be in multiple clouds, in containers, managed by platforms such as Kubernetes, and so on. There are many new approaches to controlling and routing traffic via an API gateway. One is using sidecar proxies to provide service mesh capabilities, giving you more visibility into managing and securing APIs within a container cluster.

There are multiple communication styles, such as JSON‑based REST and RPC, and we expect these formats to keep changing over time. They are serving developers well, so the API gateway needs to support multiple styles and protocols, whether synchronous or asynchronous, internal or external, and so on.

When dealing with APIs, or even API gateways, spanning multiple clouds, you need a way to manage, distribute, and secure those APIs seamlessly as they grow, using an API management platform that is agnostic to the infrastructure.

Want to try out the industry’s fastest API gateway and API connectivity management platform? Request a free 30-day trial of NGINX Plus and NGINX Management Suite today or contact us to discuss your use cases.

About The Author

Micheál Kingston

Solutions Architect
