This blog is the third in our five‑part series about Kubernetes networking for Microservices March 2022:
- Program overview: Microservices March 2022: Kubernetes Networking
- Unit 1: Architecting Kubernetes Clusters for High‑Traffic Websites
- Unit 2: Exposing APIs in Kubernetes (this post)
- Unit 3: Microservices Security Pattern in Kubernetes
- Unit 4: Advanced Kubernetes Deployment Strategies
Also be sure to download our free eBook, Managing Kubernetes Traffic with NGINX: A Practical Guide, for detailed guidance on implementing Kubernetes networking with NGINX.
Are you planning to serve API requests from Kubernetes and wondering about best practices for deploying API gateways? Then Unit 2 is for you!
Three activities guide you progressively from a high‑level overview to practical application. We suggest you complete all three to get the best experience.
- Step 1: Watch the Livestream (1 Hour)
- Step 2: Deepen Your Knowledge (1–2 Hours)
- Step 3: Get Hands‑On (1 Hour)
Step 1: Watch the Livestream (1 Hour)
Each Microservices March livestream provides a high‑level overview of the topic featuring subject matter experts from learnk8s and NGINX. If you miss the live airing on March 14 – don’t worry! You can catch it on demand.
In this episode, we answer the question “How do I expose APIs in Kubernetes?”, covering:
- Best practices for deploying API gateways in Kubernetes
- Authorization and authentication
- OpenID Connect (OIDC)
- Rate limiting
Step 2: Deepen Your Knowledge (1–2 Hours)
We expect you’ll have more questions after the livestream – that’s why we curated a collection of relevant reading and videos. This Unit’s deep dive covers two topics: tool options for deploying API gateways in Kubernetes and use cases you can accomplish with these tools.
Start by reading this blog, which guides you through the decision about which technology to use for API gateway use cases, with sample scenarios for north‑south and east‑west API traffic.
In addition to discussing the various tools and use cases, our experts demo how you can use an Ingress controller and service mesh to accomplish API gateway use cases.
Now that you have a good foundation in how API gateways can be deployed in Kubernetes, it’s a good time to dive into common use cases. The following blogs walk through how to implement two common use cases using Kubernetes tools: identity verification and rate limiting.
The Ingress controller is an ideal location for centralized authentication and authorization in Kubernetes, which can help you reduce errors and increase efficiency. In this blog, we walk through how to implement single sign‑on with NGINX Ingress Controller as the relying party and Okta as the identity provider in the OIDC Authorization Code Flow.
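As a rough sketch of what this looks like in practice, NGINX Ingress Controller lets you attach an OIDC policy to your routes. The manifest below is illustrative only: the Okta endpoints, client ID, and Secret name are placeholders, and field names may vary by NGINX Ingress Controller version, so consult the blog and product docs for the authoritative configuration.

```yaml
# Hypothetical sketch: OIDC Policy for NGINX Ingress Controller with Okta.
# All Okta URLs, the client ID, and the Secret name are placeholders.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: okta-oidc-policy
spec:
  oidc:
    clientID: <your-okta-client-id>
    clientSecret: okta-oidc-secret   # Kubernetes Secret holding the client secret
    authEndpoint: https://<your-okta-domain>/oauth2/v1/authorize
    tokenEndpoint: https://<your-okta-domain>/oauth2/v1/token
    jwksURI: https://<your-okta-domain>/oauth2/v1/keys
    scope: openid
```

The policy is then referenced from the routes you want to protect, so unauthenticated requests are redirected through the Authorization Code Flow before reaching your services.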
High request volume can overwhelm your Kubernetes services, which is why limiting each client to a reasonable number of requests is a crucial component of your resilience strategy. In this blog and video, we demonstrate how it can take less than 10 minutes to define and apply a rate‑limiting policy using NGINX Service Mesh.
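To give a feel for how quick this is, a service mesh rate‑limiting policy is just a small custom resource. The sketch below is a hypothetical example: the service and deployment names and the rate are placeholders, and the exact schema depends on your NGINX Service Mesh version, so treat the blog and video as the authoritative walkthrough.

```yaml
# Hypothetical sketch: rate-limiting policy in NGINX Service Mesh.
# Workload names, namespace, and the rate are placeholders.
apiVersion: specs.smi.nginx.com/v1alpha1
kind: RateLimit
metadata:
  name: backend-rate-limit
  namespace: default
spec:
  destination:          # the service being protected
    kind: Service
    name: backend-svc
    namespace: default
  sources:              # clients the limit applies to
  - kind: Deployment
    name: frontend
    namespace: default
  rate: 10r/s           # allow each source 10 requests per second
```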
If you’re keen to deepen your knowledge on APIs and API gateways – and have more than 1–2 hours to spend – then we suggest a series of additional resources:
This eMagazine addresses effective planning, architecture, and deployment of APIs for reliable flexibility.
This eBook is a comprehensive guide to reducing latency in your applications and APIs without making any compromises. It covers trends driving real‑time APIs, a practical reference architecture for any enterprise, and helpful vendor comparisons and case studies to make the business case for investing in the proper API management solution.
In this O’Reilly eBook, Mike Amundsen introduces developers and network administrators to the basic concepts and challenges of monitoring and managing API traffic. You’ll learn approaches for observing and controlling external (north‑south) traffic and for optimizing internal (east‑west) traffic. You’ll also examine the business value of good API traffic practice that connects your business goals and internal progress measurements to useful traffic monitoring, reporting, and analysis.
Step 3: Get Hands‑On (1 Hour)
Even with all the best webinars and research, there’s nothing quite like getting your hands on the tech. The labs run you through common scenarios to reinforce your learning.
In our second self‑paced lab, you use NGINX Ingress Controller to implement rate limiting and prevent an API and app from getting overwhelmed by too many requests. Watch this walkthrough of the lab to see it in action and learn the “why” behind each step.
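If you want a preview of the kind of configuration the lab builds toward, NGINX Ingress Controller expresses rate limiting as a policy resource. The snippet below is a hedged sketch, not the lab's exact solution: the policy name and limits are placeholders, and you should follow the lab or the linked tutorial for the real steps.

```yaml
# Hypothetical sketch: rate-limiting Policy for NGINX Ingress Controller,
# capping each client IP at 10 requests per second. Values are placeholders.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: api-rate-limit
spec:
  rateLimit:
    rate: 10r/s                   # requests allowed per second
    key: ${binary_remote_addr}    # limit is tracked per client IP
    zoneSize: 10M                 # shared memory for tracking clients
```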
To access the lab, you need to register for Microservices March 2022. If you’re already registered, the email you received with the Unit 2 Learning Guide includes access instructions. Alternatively, you can try out the lab in your own environment, using NGINX Tutorial: Protect Kubernetes APIs with Rate Limiting as a guide.
Why Register for Microservices March?
While some of the activities (the livestreams and blogs) are freely available, we need to collect just a little personal information to get you set up with the full experience. Registration gives you:
- Access to four self‑paced labs where you can get hands‑on with the tech via common scenarios
- Membership in the Microservices March Slack channel for asking questions of the experts and networking with fellow participants
- Weekly learning guides to help you stay on top of the agenda
- Calendar invites for the livestreams
Unit 3: Microservices Security Pattern in Kubernetes begins on March 21. Learn about the sidecar pattern, policies that can make your services more secure and resilient, service meshes, mTLS, and end-to-end encryption.
For detailed guidance on implementing Kubernetes networking with NGINX, download our eBook, Managing Kubernetes Traffic with NGINX: A Practical Guide.