Kubernetes is an open source container scheduling and orchestration system, originally created by Google and then donated to the Cloud Native Computing Foundation. Kubernetes automatically schedules containers to run across a cluster of servers, distributing the workload evenly and abstracting this complex task from developers and operators. Kubernetes has recently emerged as the favored container orchestrator and scheduler.
The NGINX Ingress Controller for Kubernetes provides enterprise‑grade delivery services for Kubernetes applications, with benefits for users of both open source NGINX and NGINX Plus. With the NGINX Ingress Controller for Kubernetes, you get basic load balancing, SSL/TLS termination, support for URI rewrites, and upstream SSL/TLS encryption. NGINX Plus users additionally get session persistence for stateful applications and JSON Web Token (JWT) authentication for APIs.
Note: For NGINX Plus customers, support for the NGINX Ingress Controller for Kubernetes is included at no additional cost.
How the NGINX Ingress Controller for Kubernetes Works
By default, pods of Kubernetes services are not accessible from the external network, but only by other pods within the Kubernetes cluster. Kubernetes has a built‑in configuration object for HTTP load balancing, called Ingress, that defines rules for external connectivity to Kubernetes services. Users who need to provide external access to their Kubernetes services create an Ingress resource that defines rules, including the URI path, the name of the backing service, and other information. An Ingress controller can then automatically program a frontend load balancer to implement the rules defined in the Ingress resource. The NGINX Ingress Controller for Kubernetes is what enables Kubernetes to configure NGINX and NGINX Plus for load balancing Kubernetes services.
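To make this concrete, an Ingress rule's backend refers to a Kubernetes Service by name. A minimal Service definition for the tea service used in the example below might look like the following sketch; the label selector and pod port shown here are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tea-svc        # the name an Ingress rule's backend refers to
spec:
  selector:
    app: tea           # assumed label on the pods backing this service
  ports:
  - port: 80           # port exposed to the Ingress controller
    targetPort: 8080   # assumed port the application listens on inside the pod
```

The Ingress controller load balances traffic to the pods selected by this Service, not to the Service's cluster IP directly.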
Note: For installation instructions, see our GitHub repository.
The following example.yml file creates a Kubernetes Ingress resource to route client requests to different services depending on the request URI and Host header. For client requests with the Host header cafe.example.com, requests with the /tea URI are routed to the tea service and requests with the /coffee URI are routed to the coffee service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.org/sticky-cookie-services: "serviceName=coffee-svc srv_id expires=1h path=/coffee"
    nginx.com/jwt-realm: "Cafe App"
    nginx.com/jwt-token: "$cookie_auth_token"
    nginx.com/jwt-key: "cafe-jwk"
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
To terminate SSL/TLS traffic, create a Kubernetes Secret object with an SSL/TLS certificate and key, and assign it to the Kubernetes Ingress resource (a Secret contains a small amount of sensitive data such as the certificate and key to encrypt data). For more about Secrets, see the Kubernetes documentation.
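As a sketch, the cafe-secret Secret referenced by the Ingress resource above could be defined like this; the certificate and key values are placeholders, not real data:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret        # matches secretName in the Ingress resource
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

Rather than base64-encoding the files by hand, you can also create a Secret of this type with kubectl create secret tls cafe-secret --cert=cert.pem --key=key.pem, supplying your own certificate and key files.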
It’s easy to customize the Ingress controller, either by specifying annotations in the Ingress resource YAML file or by mapping a Kubernetes resource, such as a ConfigMap, to the Ingress controller. In the example above, we use annotations to customize the Ingress controller, enabling session persistence for the coffee service and configuring JWT validation. Our GitHub repository provides many complete examples of deploying the Kubernetes Ingress controller with NGINX Plus.
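For controller-wide settings that apply to all Ingress resources, a ConfigMap can be mapped to the Ingress controller. The sketch below assumes a ConfigMap name, namespace, and keys commonly used in NGINX Ingress Controller deployments; consult the repository for the exact set of supported keys:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config       # assumed name passed to the controller at startup
  namespace: nginx-ingress # assumed namespace of the controller deployment
data:
  proxy-connect-timeout: "10s"   # timeout for establishing a connection to a backend
  proxy-read-timeout: "10s"      # timeout for reading a response from a backend
  client-max-body-size: "8m"     # maximum allowed size of a client request body
```

Annotations override ConfigMap settings for an individual Ingress resource, so the two mechanisms can be combined.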
For a detailed list of all additional features that can be configured on the Ingress controller with NGINX and NGINX Plus, see the repository.