Web Server Load Balancing with NGINX Plus

In our partnership with Red Hat, we continue to focus on supporting enterprise users who require a high‑performance, scalable, long‑term solution for DevOps‑compatible service delivery in OpenShift. The NGINX Ingress Operator for OpenShift is a supported and certified mechanism for deploying NGINX Plus Ingress Controller for Kubernetes alongside the default router in an OpenShift environment, with point-and-click installation and automatic upgrades. You can leverage the Operator Lifecycle Manager (OLM) to perform installation, upgrade, and configuration of the NGINX Ingress Operator.

Wondering why you would want to use the NGINX Plus Ingress Controller in addition to the default router? Learn how our partnership enables secure, scalable, and supported application delivery in our blog post The Value of Red Hat + NGINX.

This step-by-step guide provides everything you need to get started with the NGINX Ingress Operator. Before you begin, make sure you have admin access to an OpenShift cluster running OpenShift 4.3 or later.

Installing the Operator

Here’s a video showing how to install the NGINX Ingress Operator, followed by a written summary with screenshots.

We install the NGINX Ingress Operator from the OpenShift console.

  1. Log into the OpenShift console as an administrator.
  2. In the left navigation column, click Operators and then OperatorHub. Type nginx in the search box, and click on the Nginx Ingress Operator box that appears.

  3. After reviewing the product information, click the Install button.

  4. On the Create Operator Subscription page that opens, specify the cluster namespace in which to install the Operator (in this example, nginx-ingress). Also click the Automatic radio button under Approval Strategy to enable automatic updates of the running Operator instance without manual approval from the administrator. Click the Subscribe button.
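If you prefer to drive OLM from the command line rather than the console, the same subscription can be expressed as a manifest. This is a hedged sketch: the channel, source, and catalog names below are assumptions, so check OperatorHub for the values in your cluster before applying it.

```yaml
# Subscription for the NGINX Ingress Operator, applied with
# `oc apply -f subscription.yaml`. Channel and source names are
# illustrative -- verify them against OperatorHub in your cluster.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nginx-ingress-operator
  namespace: nginx-ingress
spec:
  channel: alpha                      # assumed channel name
  name: nginx-ingress-operator
  source: certified-operators         # assumed catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic      # matches the Automatic approval strategy above
```

Applying this manifest has the same effect as clicking Subscribe in the console.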

  5. After the installation completes, run this command in a terminal to verify that the Operator is running:

    # oc get pods -n nginx-ingress
    NAME                                     READY   STATUS    RESTARTS   AGE
    nginx-ingress-operator-dd546d869-xxxxx   1/1     Running   0          7m29s
  6. Click Installed Operators in the left navigation column. On the page that opens, click the NginxIngressController link in the Provided APIs column. NginxIngressController is a custom resource which the Operator uses to deploy the NGINX Plus Ingress Controller on the OpenShift cluster.

  7. On the page that opens, paste a manifest like the following example into the text field, and click the Create button to deploy the NGINX Plus Ingress Controller for Kubernetes.

    Note that we are referencing a Secret with a TLS certificate and key in the defaultSecret and wildcardTLS fields. This enables TLS termination and passthrough without requiring the Secret to be included in the Ingress policies.

    There are numerous options you can set when configuring the NGINX Plus Ingress Controller for Kubernetes, as listed at our GitHub repo.

    apiVersion: k8s.nginx.org/v1alpha1
    kind: NginxIngressController
    metadata:
      name: my-nginx-ingress-controller
      namespace: nginx-ingress
    spec:
      enableCRDs: true
      image:
        repository: nginx-plus-ingress   # substitute the path to your NGINX Plus Ingress Controller image
        pullPolicy: Always
        tag: edge
      nginxPlus: true
      serviceType: LoadBalancer
      type: deployment
      replicas: 2
      defaultSecret: nginx-ingress/default-server-secret
      wildcardTLS: default/app-secret
      configMapData:
        error-log-level: debug
      enableTLSPassthrough: true
      globalConfiguration: nginx-ingress/nginx-configuration
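The defaultSecret and wildcardTLS fields in the manifest reference Kubernetes Secrets that must already exist in the cluster. As a hedged sketch, a TLS Secret of the expected shape looks like the following; the base64 data is elided, so substitute your own certificate and key.

```yaml
# TLS Secret referenced by the defaultSecret field above.
# Create it before the NginxIngressController resource, e.g.:
#   oc apply -f default-server-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # your PEM certificate, base64-encoded
  tls.key: <base64-encoded private key>   # your PEM private key, base64-encoded
```

The Secret referenced by wildcardTLS (default/app-secret in this example) has the same shape, in the default namespace.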
  8. To verify the deployment, run the following commands in a terminal. As shown in the output, the manifest we used in the previous step deployed two replicas of the NGINX Plus Ingress Controller and exposed them with a LoadBalancer service. (The output from the get command is spread across multiple lines for legibility.)

    # oc get pods -n nginx-ingress
    NAME                                          READY  STATUS   RESTARTS  AGE
    my-nginx-ingress-controller-579f455d7d-xxxxx  1/1    Running  0         5m53s
    my-nginx-ingress-controller-579f455d7d-xxxxx  1/1    Running  0         5m53s
    nginx-ingress-operator-dd546d869-xxxxx        1/1    Running  0         56m
    # oc get svc -n nginx-ingress
    NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP  ...
    my-nginx-ingress-controller      LoadBalancer   172.xx.48.254    <pending>    ...
    nginx-ingress-operator-metrics   ClusterIP      172.xx.209.190   <none>       ...
        ... PORT(S)                      AGE
        ... 80:32028/TCP,443:31973/TCP   10m
        ... 80:32028/TCP,443:31973/TCP   10m
  9. Run the following command to verify that the NGINX Plus Ingress Controller responds to requests. For public_IP_address, substitute the external IP address of the LoadBalancer service exposing the NGINX Plus Ingress Controller.

    Notice that at this point the command returns 404 Not Found. That’s because we haven’t yet configured and deployed an Ingress resource to route traffic to the backend Pods. For more information, see our documentation.

    # curl public_IP_address
    <html>
    <head><title>404 Not Found</title></head>
    <body bgcolor="white">
    <center><h1>404 Not Found</h1></center>
    <hr><center>nginx</center>
    </body>
    </html>
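To make that curl check return something other than 404, an Ingress resource must route traffic to a backend Service. A minimal sketch, assuming a hypothetical Service named webapp-svc listening on port 80 in the default namespace:

```yaml
# Minimal Ingress routing host-based traffic to a backend Service.
# The hostname and Service name are placeholders for illustration.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: default
spec:
  ingressClassName: nginx          # handled by the NGINX Plus Ingress Controller
  rules:
  - host: webapp.example.com       # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc       # hypothetical backend Service
            port:
              number: 80
```

After applying a manifest like this with oc apply, a request to the LoadBalancer address with the matching Host header is routed to the backend Pods.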

    Note that end users can submit multiple NginxIngressController manifests, and a separate deployment is created for each one. The Operator also supports deployments across different namespaces; the namespace is specified in the metadata section of the manifest.

What’s Next?

The NGINX Ingress Operator helps you manage your NGINX Plus Ingress Controller deployment, in particular with:

  • Configuration – Launch a basic deployment with just a few input parameters and one manifest
  • Scaling – Add and remove replicas seamlessly
  • Upgrading – Leverage rolling updates with no downtime
  • Uninstalling – Ensure that all Operator and Kubernetes Ingress Controller objects are properly and safely removed
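Scaling, for example, is just an edit to the custom resource; the Operator reconciles the running deployment to match. A hedged sketch of bumping the replica count, using the field names from the NginxIngressController manifest shown earlier:

```yaml
# Change replicas in the NginxIngressController resource and re-apply
# it, or patch it in place. Illustrative command -- adjust the resource
# name and namespace to your environment:
#   oc patch nginxingresscontroller my-nginx-ingress-controller \
#     -n nginx-ingress --type=merge -p '{"spec":{"replicas":3}}'
spec:
  replicas: 3   # was 2; the Operator rolls the deployment to three pods
```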

Now that you have the NGINX Plus Ingress Controller ready and deployed, see our documentation and GitHub repo to use the advanced capabilities of NGINX Plus in OpenShift.

Not a customer yet? To try NGINX Plus and the Ingress Controller, start your free 30-day trial today or contact us to discuss your use cases.


About The Author

Amir Rawdat

Solutions Engineer

Amir Rawdat is a technical marketing engineer at NGINX, where he specializes in content creation on a variety of technical topics. He has a strong background in computer networking, computer programming, troubleshooting, and content creation. Previously, Amir was a customer application engineer at Nokia.

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source project. We offer a suite of technologies for developing and delivering modern applications. Together with F5, our combined solution bridges the gap between NetOps and DevOps, with multi-cloud application services that span from code to customer.

Learn more or join the conversation by following @nginx on Twitter.