Enterprise-grade application delivery for Kubernetes

Kubernetes is an open source container scheduling and orchestration system originally created by Google and then donated to the Cloud Native Computing Foundation. Kubernetes automatically schedules containers to run evenly among a cluster of servers, abstracting this complex task from developers and operators. Recently Kubernetes has emerged as the favored container orchestrator and scheduler.

The NGINX Ingress Controller for Kubernetes provides enterprise‑grade delivery services for Kubernetes applications, with benefits for users of both NGINX Open Source and NGINX Plus. With the NGINX Ingress Controller for Kubernetes, you get basic load balancing, SSL/TLS termination, support for URI rewrites, and upstream SSL/TLS encryption. NGINX Plus users additionally get session persistence for stateful applications and JSON Web Token (JWT) authentication for APIs.

Release 1.8.0 introduces important new features:

  • Integration with NGINX App Protect, making ours the only enterprise‑grade Ingress Controller on the market with a WAF that sits inside the Kubernetes cluster. Enforcing security policies closer to the app results in greater speed and efficiency and fewer points of failure.
  • Policies, which enable you to create traffic‑management configuration that can be applied in multiple places by all teams involved in application delivery.

For NGINX Plus customers, support for the NGINX Ingress Controller for Kubernetes is included at no additional cost.

How the NGINX Ingress Controller for Kubernetes Works

By default, pods of Kubernetes services are not accessible from the external network, but only by other pods within the Kubernetes cluster. Kubernetes has a built‑in configuration for HTTP load balancing, called Ingress, that defines rules for external connectivity to Kubernetes services. Users who need to provide external access to their Kubernetes services create an Ingress resource that defines rules, including the URI path, backing service name, and other information. The Ingress controller then automatically programs a front‑end load balancer to implement the Ingress configuration. The NGINX Ingress Controller for Kubernetes is what enables Kubernetes to configure NGINX and NGINX Plus for load balancing Kubernetes services.
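As a sketch of the flow just described, a minimal Ingress resource might look like the following. The hostname, service name, and port are illustrative, and the extensions/v1beta1 API version matches the Ingress examples later in this post:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # ask the NGINX Ingress Controller to handle this resource
spec:
  rules:
  - host: webapp.example.com               # illustrative hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-svc          # illustrative Kubernetes service
          servicePort: 80
```

Once this resource is applied, the Ingress Controller detects it and generates the corresponding NGINX configuration automatically.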

For installation instructions, see our documentation.

In addition to the regular Ingress resources, the NGINX Ingress Controller for Kubernetes provides the VirtualServer and VirtualServerRoute custom resources. The following example.yml file creates a VirtualServer resource that routes client requests to different services depending on the request URI and Host header. For client requests whose Host header matches the VirtualServer's host, requests with the /tea URI are routed to the tea service and requests with the /coffee URI are routed to the coffee service.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
  - path: /coffee
    action:
      pass: coffee
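The companion VirtualServerRoute resource lets one team own the top‑level VirtualServer while delegating a path prefix to another team. A sketch, with illustrative names (the parent VirtualServer would delegate with a `route: default/coffee` action instead of `pass`):

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: coffee
spec:
  host: cafe.example.com        # must match the parent VirtualServer's host
  upstreams:
  - name: latte
    service: latte-svc          # illustrative service owned by the delegated team
    port: 80
  subroutes:
  - path: /coffee/latte         # subroutes must fall under the delegated prefix
    action:
      pass: latte
```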

To terminate SSL/TLS traffic, create a Kubernetes Secret object with an SSL/TLS certificate and key, and assign it to the VirtualServer resource (a Secret contains a small amount of sensitive data such as the certificate and key to encrypt data). For more about Secrets, see the Kubernetes documentation.
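For reference, a TLS Secret like the cafe-secret referenced above can be sketched as follows; the data values are placeholders, not a real certificate and key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # base64 of the PEM certificate
  tls.key: <base64-encoded key>           # base64 of the PEM private key
```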

It’s easy to customize the Ingress controller, either by specifying annotations in the Ingress resource YAML file or by mapping a Kubernetes resource, such as a ConfigMap, to the Ingress controller. Our GitHub repository provides many complete examples of deploying the Kubernetes Ingress controller with NGINX Plus.

For a detailed list of all additional features that can be configured on the Ingress controller with NGINX and NGINX Plus, see our documentation for ConfigMaps and Annotations.
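As an illustration of the ConfigMap approach, the following sketch tunes a few global NGINX settings. The nginx-config name and nginx-ingress namespace follow the defaults used in the NGINX documentation, and the specific keys shown should be checked against the ConfigMap reference:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config             # default ConfigMap name watched by the Ingress Controller
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "10s"   # timeout for establishing a connection to an upstream
  proxy-read-timeout: "10s"      # timeout for reading a response from an upstream
  client-max-body-size: "8m"     # maximum allowed request body size
```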

Integration with NGINX App Protect

The NGINX Ingress Controller for Kubernetes is now fully integrated with NGINX App Protect, our intelligent web application firewall (WAF). App Protect is the only supported WAF that sits inside the Kubernetes cluster alongside the application Pods it protects from malicious attacks.

Putting the WAF closer to the app benefits both administrators and app developers:

  • Having fewer separate security tools to manage increases efficiency and reduces possible points of failure.
  • App developers can now incorporate WAF functionality into their dev workflows, without having to ask other teams to grant them permissions. They’re more likely to comply with security requirements that don’t slow them down!

You enable NGINX App Protect with annotations in the Ingress resource, as in the following example. For details, see our documentation.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    appprotect.f5.com/app-protect-policy: "default/dataguard-alarm"
    appprotect.f5.com/app-protect-enable: "True"
    appprotect.f5.com/app-protect-security-log-enable: "True"
    appprotect.f5.com/app-protect-security-log: "default/logconf"
    appprotect.f5.com/app-protect-security-log-destination: "syslog:server="

You apply security policies for NGINX App Protect with the APPolicy custom resource. This example enables Data Guard violation in blocking mode.

apiVersion: appprotect.f5.com/v1beta1
kind: APPolicy
metadata:
  name: dataguard-alarm
spec:
  policy:
    applicationLanguage: utf-8
    blocking-settings:
      violations:
      - alarm: true
        block: true
        name: VIOL_DATA_GUARD
    data-guard:
      creditCardNumbers: true
      enabled: true
      enforcementMode: ignore-urls-in-list
      maskData: true
      usSocialSecurityNumbers: true
    enforcementMode: blocking
    name: dataguard-alarm


Policies

By breaking traffic-management configuration into reusable chunks, policies give you maximum flexibility in defining which teams own specific parts of the configuration.

The following sample policy implements an IP address‑based access control list (ACL). It tells the Ingress Controller to accept traffic only from the specified subnet. You then reference the policy in the relevant VirtualServer resources.

apiVersion: k8s.nginx.org/v1alpha1
kind: Policy
metadata:
  name: webapp-policy
spec:
  accessControl:
    allow:
    - 10.0.0.0/8   # example subnet; substitute your own CIDR range
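To attach the ACL, a VirtualServer references the policy by name in its policies list; a sketch with illustrative host and upstream names:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
  - name: webapp-policy        # the Policy defined above
  upstreams:
  - name: webapp
    service: webapp-svc        # illustrative backing service
    port: 80
  routes:
  - path: /
    action:
      pass: webapp
```

Because the policy is a separate resource, a security team can own and update the ACL while the application team owns the VirtualServer that references it.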

Compare Versions

Feature                      NGINX Open Source    NGINX Plus
SSL/TLS termination          ✓                    ✓
URL rewrites                 ✓                    ✓
Prometheus exporter          ✓                    ✓
Helm charts                  ✓                    ✓
Real-time monitoring                              ✓
Health checks                                     ✓
Session persistence                               ✓
Dynamic reconfiguration                           ✓
24x7 support                                      ✓