NGINX.COM
Web Server Load Balancing with NGINX Plus

Traditionally, the Kubernetes Ingress resource is used to provision and configure Ingress load balancing in Kubernetes. While the Ingress resource makes it easy to configure SSL/TLS termination, HTTP load balancing, and Layer 7 routing, it doesn’t allow for further customization. Instead, you need to use annotations, ConfigMaps, and custom templates, which have the following limitations:

  • Globally scoped – not fine‑grained
  • Error‑prone and difficult to work with
  • Not secure
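As an illustration of those limitations, with the standard Ingress resource customizations are expressed as free‑form annotation strings that apply to the whole object rather than to individual routes, and are validated only when NGINX reloads its configuration. This hypothetical snippet assumes the nginx.org/proxy-connect-timeout and nginx.org/rewrites annotations supported by NGINX Ingress Controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    # Free-form strings; a typo here surfaces only at reload time
    nginx.org/proxy-connect-timeout: "30s"
    nginx.org/rewrites: "serviceName=products-svc rewrite=/"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /products
        pathType: Prefix
        backend:
          service:
            name: products-svc
            port:
              number: 80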

NGINX Ingress resources are an alternative available for both the NGINX Open Source and NGINX Plus-based versions of NGINX Ingress Controller. They provide a native, type‑safe, and indented configuration style which simplifies implementation of Ingress load‑balancing capabilities, including:

  • Circuit breaking – For appropriate handling of application errors
  • Sophisticated routing – For A/B testing and blue‑green deployments
  • Header manipulation – For offloading application logic to the NGINX Ingress controller
  • Mutual TLS authentication (mTLS) – For zero‑trust or identity‑based security
  • Web application firewall (WAF) – For protection against HTTP vulnerability attacks

NGINX Ingress resources have an added benefit for existing NGINX users: they make it easier to repurpose load‑balancing configurations from non‑Kubernetes environments, so all your NGINX load balancers can use the same configurations.

The following sample VirtualServer (VS) object provisions basic Ingress controller functionality such as SSL/TLS termination and path‑based routing. Requests to the domain name app.example.com with the URI /products are routed to the products service, while requests with the /billing URI are routed to the billing service.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app
spec:
  host: app.example.com
  tls:
    secret: app-secret
  upstreams:
  - name: products
    service: products-svc
    port: 80
  - name: billing
    service: billing-svc
    port: 80
  routes:
  - path: /products
    action:
      pass: products
  - path: /billing
    action:
      pass: billing

The VirtualServer object needs to reference a Kubernetes Secret object to establish SSL/TLS connections with clients. The Secret contains sensitive data such as the SSL certificate and key for encrypting data. For more about Secrets, see the Kubernetes documentation.
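The Secret referenced above (app-secret) might be defined as in this sketch, which assumes a PEM‑encoded certificate and private key, base64‑encoded as Kubernetes requires for Secret data:

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: kubernetes.io/tls
data:
  # Base64-encoded PEM certificate and key; placeholders shown here
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>

In practice you can generate such a Secret directly from the certificate and key files with kubectl create secret tls.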

Sophisticated Routing

NGINX Ingress resources can also provision intelligent traffic routing, enabling use cases such as:

  • Debugging traffic – Route requests to new, test instances
  • Traffic splitting – Route a subset of traffic to Kubernetes services
  • Blue‑green deployments – Smooth end‑user transition to updated production workloads

You can route requests based on connection attributes presented by the client. In this example, incoming traffic is routed based on a specific session cookie: requests with a matching session cookie are routed to the app-edge instance, while all other requests are routed to app-stable.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: app-edge
    service: app-edge-svc
    port: 80
  - name: app-stable
    service: app-stable-svc
    port: 80
  routes:
  - path: /
    matches:
    - conditions:
      - cookie: session
        value: suxxis-12hs6dds-dhfgry-ssss
      action:
        pass: app-edge
    action:
      pass: app-stable

In the following sample configuration, blue‑green deployment is used to switch traffic from the production (blue) version of an app to a new version (green) to verify that the new version can handle production‑level traffic and is actually an improvement, all without disrupting service. In this example, we direct 90% of incoming traffic to the production version (products-v1) and 10% to the new version (products-v2). If it turns out that products-v2 doesn’t perform well, the problem affects only a few users, and we can quickly change the split to reroute all traffic back to products-v1. If products-v2 performs well, we can adjust the split to route more traffic to it (gradually or all at once) and decommission products-v1 when it is no longer receiving any traffic, effectively converting the green instance to the new blue (production) instance.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app
spec:
  host: app.example.com
  upstreams:
  - name: products-v1
    service: products-v1-svc
    port: 80
  - name: products-v2
    service: products-v2-svc
    port: 80
  routes:
  - path: /products
    splits:
    - weight: 90
      action:
        pass: products-v1
    - weight: 10
      action:
        pass: products-v2

Our GitHub repository supplies many complete examples of deploying NGINX Ingress Controller.

Policies

Policies are NGINX Ingress resources that can be defined once and then applied to different areas of applications by different teams. Policies are applied as separate Kubernetes objects to implement features like:

  • Rate limiting – Limit the number of requests users can make
  • mTLS authentication – Validate both client and server certificates against a configurable Certificate Authority (CA)
  • IP address‑based access control list – Allow or deny traffic based on IP addresses/subnets
  • JWT validation – Authenticate users with ID tokens to enable single sign‑on
  • WAF – Protect your applications from threats and vulnerabilities

The following sample Policy object provisions rate limiting. The rate limit defined in the rate field – in this example 1 request per second – applies to each unique client IP address, as captured in the ${binary_remote_addr} variable.

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 1r/s
    key: ${binary_remote_addr}
    zoneSize: 10M

Policies must be referenced in VirtualServer (VS) and VirtualServerRoute (VSR) objects for the NGINX Ingress Controller to apply them to traffic.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp
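Other Policy types follow the same pattern of being defined once and referenced where needed. For instance, an IP address‑based access control list might be sketched as follows (the policy name and allowed subnet are illustrative):

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: allow-internal
spec:
  accessControl:
    # Permit only traffic originating from this subnet; all other
    # client addresses are denied
    allow:
    - 10.0.0.0/8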

For more details on NGINX Ingress resources, see the documentation.