Enterprise-grade application delivery for Kubernetes

Kubernetes is an open source container orchestration and scheduling system, originally created by Google and later donated to the Cloud Native Computing Foundation. Kubernetes automatically schedules containers to run across a cluster of servers, distributing the workload evenly and abstracting this complex task from developers and operators. In recent years Kubernetes has emerged as the leading container orchestrator and scheduler.

The NGINX Ingress Controller for Kubernetes provides enterprise‑grade delivery services for Kubernetes applications, with benefits for users of both NGINX Open Source and NGINX Plus. With the NGINX Ingress Controller for Kubernetes, you get basic load balancing, SSL/TLS termination, support for URI rewrites, and upstream SSL/TLS encryption. NGINX Plus users additionally get session persistence for stateful applications and JSON Web Token (JWT) authentication for APIs.

For NGINX Plus customers, support for the NGINX Ingress Controller for Kubernetes is included at no additional cost.

How the NGINX Ingress Controller for Kubernetes Works

By default, pods of Kubernetes services are not accessible from the external network, but only from other pods within the Kubernetes cluster. Kubernetes has a built‑in configuration object for HTTP load balancing, called Ingress, that defines rules for external connectivity to Kubernetes services. Users who need to provide external access to their Kubernetes services create an Ingress resource that defines rules, including the URI path, the name of the backing service, and other information. An Ingress controller can then automatically configure a front‑end load balancer to implement those rules. The NGINX Ingress Controller for Kubernetes is what enables Kubernetes to configure NGINX and NGINX Plus for load balancing Kubernetes services.
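As a sketch of what such a rule looks like, the following minimal Ingress resource routes requests for the /tea path to a Service named tea-svc (the hostname, service name, and ingress class here are illustrative, not taken from this article):

```yaml
# Minimal Ingress resource; host and service names are examples only
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
```

When this resource is created, the Ingress controller detects it and generates the corresponding NGINX configuration to proxy matching requests to the pods behind tea-svc.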

For installation instructions, see our documentation.

In addition to the standard Ingress resource, the NGINX Ingress Controller for Kubernetes provides the VirtualServer and VirtualServerRoute resources. The following example.yml file creates a VirtualServer resource that routes client requests to different services depending on the request URI and Host header. For client requests whose Host header matches the VirtualServer's host field, requests with the /tea URI are routed to the tea service and requests with the /coffee URI are routed to the coffee service.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
  - path: /coffee
    action:
      pass: coffee

To terminate SSL/TLS traffic, create a Kubernetes Secret object with an SSL/TLS certificate and key, and assign it to the VirtualServer resource (a Secret object holds a small amount of sensitive data, such as the certificate and key used to encrypt traffic). For more about Secrets, see the Kubernetes documentation.
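A TLS Secret referenced this way might look like the following sketch; the base64 values are placeholders that you would replace with your own encoded certificate and key:

```yaml
# Illustrative TLS Secret; the data values are placeholders, not real credentials
apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

The Secret's name (cafe-secret here) is what the VirtualServer's tls.secret field refers to, so the two must live in the same namespace.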

It’s easy to customize the Ingress controller, either by specifying annotations in the Ingress resource YAML file or by mapping a Kubernetes resource, such as a ConfigMap, to the Ingress controller. Our GitHub repository provides many complete examples of deploying the Kubernetes Ingress controller with NGINX Plus.
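As an illustration of the ConfigMap approach, a sketch like the following sets a few controller-wide NGINX settings (the ConfigMap name, namespace, and specific keys shown are assumptions for this example; consult the ConfigMap documentation for the keys your controller version supports):

```yaml
# Illustrative ConfigMap; keys and values are examples of tunable NGINX settings
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "10s"
  client-max-body-size: "8m"
```

Settings applied through a ConfigMap affect all services handled by the controller, whereas annotations on an individual Ingress resource override them for that resource alone.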

For a detailed list of all additional features that can be configured on the Ingress controller with NGINX and NGINX Plus, see our documentation for ConfigMaps and Annotations.

Compare Versions

Feature                     NGINX Open Source    NGINX Plus
SSL/TLS termination         ✓                    ✓
URL rewrites                ✓                    ✓
Prometheus exporter         ✓                    ✓
Helm charts                 ✓                    ✓
Real-time monitoring                             ✓
Health checks                                    ✓
Session persistence                              ✓
Dynamic reconfiguration                          ✓
24x7 support                                     ✓