Kubernetes is an open source container scheduling and orchestration system originally created by Google and then donated to the Cloud Native Computing Foundation. Kubernetes automatically schedules containers to run evenly among a cluster of servers, abstracting this complex task from developers and operators. Recently Kubernetes has emerged as the favored container orchestrator and scheduler.
The NGINX Ingress Controller for Kubernetes provides enterprise‑grade delivery services for Kubernetes applications, with benefits for users of both NGINX Open Source and NGINX Plus. With the NGINX Ingress Controller for Kubernetes, you get basic load balancing, SSL/TLS termination, support for URI rewrites, and upstream SSL/TLS encryption. NGINX Plus users additionally get session persistence for stateful applications and JSON Web Token (JWT) authentication for APIs.
Release 1.8.0 introduces important new features:
- Integration with NGINX App Protect, making ours the only enterprise‑grade Ingress Controller on the market with a WAF that sits inside the Kubernetes cluster. Enforcing security policies closer to the app results in greater speed and efficiency and fewer points of failure.
- Policies, which enable you to create traffic‑management configuration that can be applied in multiple places by all teams involved in application delivery.
For NGINX Plus customers, support for the NGINX Ingress Controller for Kubernetes is included at no additional cost.
How the NGINX Ingress Controller for Kubernetes Works
By default, pods of Kubernetes services are not accessible from the external network, but only by other pods within the Kubernetes cluster. Kubernetes has a built-in configuration object for HTTP load balancing, called Ingress, that defines rules for external connectivity to Kubernetes services. Users who need to provide external access to their Kubernetes services create an Ingress resource that defines rules, including the URI path, backing service name, and other information. The Ingress controller can then automatically program a front-end load balancer to implement the Ingress rules. The NGINX Ingress Controller for Kubernetes is what enables Kubernetes to configure NGINX and NGINX Plus to load balance Kubernetes services.
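As a minimal sketch of such a rule (the host, path, and service names here are illustrative, not from a specific deployment), an Ingress resource routing one URI path to a backing service might look like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      # Requests for cafe.example.com/tea go to the tea-svc service
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
```

The Ingress controller watches for resources like this and translates them into NGINX configuration automatically.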
For installation instructions, see our documentation.
In addition to the standard Ingress resource, the NGINX Ingress Controller for Kubernetes provides the VirtualServer and VirtualServerRoute resources. The following example.yml file creates a VirtualServer resource to route client requests to different services depending on the request URI and Host header. For client requests with the Host header cafe.example.com, requests with the /tea URI are routed to the tea service and requests with the /coffee URI are routed to the coffee service.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
  - path: /coffee
    action:
      pass: coffee
To terminate SSL/TLS traffic, create a Kubernetes Secret object with an SSL/TLS certificate and key, and assign it to the VirtualServer resource (a Secret contains a small amount of sensitive data such as the certificate and key to encrypt data). For more about Secrets, see the Kubernetes documentation.
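A sketch of the Secret referenced as cafe-secret in the VirtualServer example above might look like the following (the base64 placeholders stand in for your actual certificate and key data):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret
# The kubernetes.io/tls type requires the tls.crt and tls.key keys
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```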
It’s easy to customize the Ingress controller, either by specifying annotations in the Ingress resource YAML file or by mapping a Kubernetes resource, such as ConfigMaps, to the Ingress controller. Our GitHub repository provides many complete examples of deploying the Kubernetes Ingress controller with NGINX Plus.
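For example, a ConfigMap can adjust global NGINX settings for the Ingress controller; this hypothetical fragment (the nginx-ingress namespace and the timeout values are assumptions for illustration) tunes two proxy timeouts:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  # Values are strings; keys map to NGINX directives
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "10s"
```

The Ingress controller reads the ConfigMap named in its command-line arguments and reloads NGINX when the data changes.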
Integration with NGINX App Protect
The NGINX Ingress Controller for Kubernetes is now fully integrated with NGINX App Protect, our intelligent web application firewall (WAF). App Protect is the only supported WAF that sits inside the Kubernetes cluster alongside the application Pods it protects from malicious attacks.
Putting the WAF closer to the app benefits both administrators and app developers:
- Having fewer separate security tools to manage increases efficiency and reduces possible points of failure.
- App developers can now incorporate WAF functionality into their dev workflows, without having to ask other teams to grant them permissions. They’re more likely to comply with security requirements that don’t slow them down!
You enable NGINX App Protect with annotations in the Ingress resource, as in the following example. For details, see our documentation.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    appprotect.f5.com/app-protect-policy: "default/dataguard-alarm"
    appprotect.f5.com/app-protect-enable: "True"
    appprotect.f5.com/app-protect-security-log-enable: "True"
    appprotect.f5.com/app-protect-security-log: "default/logconf"
    appprotect.f5.com/app-protect-security-log-destination: "syslog:server=10.27.2.34:514"
spec:
You apply security policies for NGINX App Protect with the APPolicy custom resource. This example enables the Data Guard violation in blocking mode.
apiVersion: appprotect.f5.com/v1beta1
kind: APPolicy
metadata:
  name: dataguard-alarm
spec:
  policy:
    applicationLanguage: utf-8
    blocking-settings:
      violations:
      - alarm: true
        block: true
        name: VIOL_DATA_GUARD
    data-guard:
      creditCardNumbers: true
      enabled: true
      enforcementMode: ignore-urls-in-list
      maskData: true
      usSocialSecurityNumbers: true
    enforcementMode: blocking
    name: dataguard-alarm
    template:
      name: POLICY_TEMPLATE_NGINX_BASE
Policies
By breaking traffic-management configuration into reusable chunks, Policies give you maximum flexibility in defining which teams own specific parts of the configuration.
The following sample policy implements an IP address-based access control list (ACL). It tells the Ingress Controller to accept traffic only from the 10.0.0.0/8 subnet. You then reference the policy by name in the relevant VirtualServer resources.
apiVersion: k8s.nginx.org/v1alpha1
kind: Policy
metadata:
  name: webapp-policy
spec:
  accessControl:
    allow:
    - 10.0.0.0/8
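As a sketch of how such a policy might be attached (the webapp host and service names here are illustrative), a VirtualServer resource references the policy by name in its policies list:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  # Apply the ACL defined in the webapp-policy Policy resource
  policies:
  - name: webapp-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp
```

Because the policy is a separate resource, a security team can own and update it independently of the application team that owns the VirtualServer.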