
[Editor – This post has been updated to use the NGINX Plus API, which replaces and deprecates the separate dynamic configuration and status modules discussed in the original version of the post.]

This post describes a solution that uses Packer and Terraform to automate the installation and configuration of a highly available (HA), all‑active autoscaling deployment of NGINX Plus and NGINX Open Source on Google Compute Engine (GCE), the Google Cloud Platform (GCP) product for running workloads on virtual machines (VMs).

All‑active HA autoscaling solutions are increasingly the norm in the current DevOps landscape. HA deployments use active health checks to restart unhealthy instances. Combined with an all‑active architecture, this ensures that a server (such as an NGINX Plus load balancer) is always available to accept client requests. At the same time, autoscaling helps reduce deployment costs by adjusting the number of instances to match the current workload, based on a wide range of configurable parameters.

Packer and Terraform

Manually setting up such a complex environment and modifying it every time there’s a breaking change can be tedious. As such, automating deployments is a key requirement for efficient DevOps workflows. There are many automation tools, and this solution uses two from HashiCorp that are increasingly popular with DevOps engineers: Packer and Terraform.

Packer is an open source tool for creating identical machine images for multiple platforms from a single source JSON configuration file. The images can then be used to quickly create new running instances on a variety of cloud providers.

Terraform is an open source tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage infrastructure on existing and popular cloud providers as well as custom in‑house solutions.

Terraform configuration files specify the components needed to run a single application or your entire data center. Terraform generates an execution plan describing how to reach the desired state and then executes the plan to build the specified infrastructure. As the configuration changes, Terraform determines what changed and creates incremental execution plans to be applied.
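
For illustration, a minimal Terraform configuration for GCE might look like the following sketch. It is purely illustrative: the project ID, credentials path, image, and instance names are placeholders rather than values from this solution. Running terraform plan against it produces an execution plan, and terraform apply builds the instance.

    # Illustrative sketch only – project, credentials, image, and names are placeholders
    provider "google" {
      credentials = "${file("~/.gcloud/gcloud_credentials.json")}"
      project     = "my-sample-project"
      region      = "us-west1"
    }

    # A single VM instance; `terraform plan` previews the change, `terraform apply` creates it
    resource "google_compute_instance" "example" {
      name         = "example-instance"
      machine_type = "n1-standard-1"
      zone         = "us-west1-a"

      boot_disk {
        initialize_params {
          image = "debian-cloud/debian-9"
        }
      }

      network_interface {
        network = "default"
        access_config {}    # assign an ephemeral external IP
      }
    }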

The Solution

The automation solution described here is for the HA deployment of NGINX Plus on GCE that is described in our deployment guide, All‑Active HA for NGINX Plus on the Google Cloud Platform. In the guide, NGINX Plus functions as both the reverse proxy/load balancer and the sample application web servers across which traffic is load balanced.

In this automated solution, the web servers instead run NGINX Open Source. Specifically, Packer creates three distinct GCE “gold” images:

  • One image configured with NGINX Plus acting as the reverse proxy and load balancer.

    Active health checks and the NGINX Plus API are enabled as part of this configuration. The NGINX Plus live activity monitoring dashboard is also configured on port 8080; when installation is complete, it can be accessed at http://external-ip-address:8080/dashboard.html.

  • Two images configured with NGINX Open Source acting as an application web server for static assets. As in the deployment guide, the configuration on the two web servers is the same.

    The configuration uses the sub_filter directive to set several parameters, which lets the HTML landing page reflect values specific to the NGINX installation.

Then Terraform configures and deploys two instances each of the load balancer image and the two web server images, setting up high availability with GCE health checks. Finally, GCE startup and shutdown scripts are deployed to provide seamless autoscaling, making use of the NGINX Plus API to dynamically configure the upstream groups of NGINX Plus and NGINX Open Source instances.
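
To make the role of these scripts concrete, below is a hypothetical sketch of a web server instance template whose startup script registers the new instance with the load balancer's NGINX Plus API. The upstream group name (app-upstream), the API version, the load balancer hostname, and the image name are placeholder assumptions for illustration only; the actual startup and shutdown scripts are in the solution's GitHub repo.

    # Hypothetical sketch – upstream name, API version, LB hostname, and image name are placeholders
    resource "google_compute_instance_template" "app_example" {
      name         = "nginx-oss-app-example-template"
      machine_type = "${var.machine_type}"

      disk {
        source_image = "nginx-oss-app-image"    # gold image built by Packer (name assumed)
        boot         = true
      }

      network_interface {
        network = "default"                     # no access_config block, so no public IP
      }

      metadata_startup_script = <<SCRIPT
    #!/bin/bash
    # Look up this instance's internal IP address from the GCE metadata server
    IP=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
    # Add this instance to the load balancer's upstream group through the NGINX Plus API
    curl -s -X POST -d '{"server": "'$IP':80"}' http://nginx-plus-lb:8080/api/2/http/upstreams/app-upstream/servers
    SCRIPT
    }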

The overall structure of the deployed solution closely mirrors the deployment guide. It consists of three GCE instance templates, each based on one of the machine images created with Packer.

Each of the instance templates is in turn managed by a GCE instance group manager. A firewall rule is set up to allow access to ports 80 and 8080 on all of the NGINX Plus GCE instances, as sketched below.
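
In Terraform terms, a firewall rule of this kind can be sketched roughly as follows; the rule name and network are illustrative assumptions rather than the exact names used in the repo.

    # Sketch of a firewall rule allowing HTTP traffic plus dashboard/API access on port 8080
    resource "google_compute_firewall" "nginx" {
      name    = "allow-nginx-http"    # name assumed for illustration
      network = "default"

      allow {
        protocol = "tcp"
        ports    = ["80", "8080"]
      }

      source_ranges = ["0.0.0.0/0"]
    }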

To improve security and reduce complexity, public IP addresses are assigned and accessible only for the load balancer instances. The load balancer instance group manager is assigned to a GCE target pool, and an external static IP address is configured to forward all incoming connections to that target pool.
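
Expressed in Terraform, that front end looks roughly like the sketch below; the resource names are illustrative, and the health check referenced here is the one defined in terraform/healthcheck.tf (shown later in this post).

    # Sketch: an external static IP forwards incoming traffic to the load balancer target pool
    resource "google_compute_target_pool" "default" {
      name          = "nginx-plus-lb-target-pool"    # name assumed
      health_checks = [
        "${google_compute_http_health_check.default.name}",
      ]
    }

    resource "google_compute_address" "lb" {
      name = "nginx-plus-lb-static-ip"               # name assumed
    }

    resource "google_compute_forwarding_rule" "default" {
      name       = "nginx-plus-lb-forwarding-rule"   # name assumed
      ip_address = "${google_compute_address.lb.address}"
      port_range = "80"
      target     = "${google_compute_target_pool.default.self_link}"
    }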

Installing the Required Software

To get started with the automation solution, download and install the required software:

  1. Install Packer.

    This solution was tested with Packer v1.0.4 and might not work with other versions. We intend to update the code if a future release introduces incompatible changes.

  2. Install Terraform.

    This solution was tested with Terraform v0.10.2 and might not work with other versions. We intend to update the code if a future release introduces incompatible changes.

  3. Create a Google Cloud account.

  4. Create a Google Cloud project.

  5. Download the credentials for the new project from the Credentials tab in the GCP API Manager. GCP does not preload assets associated with a project, so when you access the tab, you probably will see the message “Compute Engine is getting ready”. You can download the credentials when the message clears.

  6. Copy and rename the credentials to ~/.gcloud/gcloud_credentials.json.

  7. Install the Google Cloud SDK.

  8. Clone or download the files in the GitHub repo for this solution.

Configuring Packer

After completing the instructions in Installing the Required Software, configure Packer:

  1. Edit the variables section of packer/packer.json to specify your project_id and preferred GCP zone.

    "variables": {
       "home": "{{env `HOME`}}",
       "license": "{{env `HOME`}}/.ssh/ngx-certs",
       "project_id": "all-active-nginx-plus-lb",
       "zone": "us-west1-a"
    },
  2. Copy your NGINX Plus certificate and key into the ~/.ssh/ngx-certs subfolder (or alternatively change the license variable in packer/packer.json to point to the location of your license).

Deploying the Machine Images with Terraform

After configuring Packer, complete the deployment by configuring Terraform:

  1. Edit terraform/variables.tf to specify your project_id, preferred deployment region, preferred deployment region_zone, and machine_type, by changing the value in the default field for each variable if necessary. You do not need to change the value for the credentials_file_path variable.

    variable "project_id" {
      description = "The ID of the Google Cloud project"
      default = "all-active-nginx-plus-lb"
    }
    
    variable "region" {
      description = "The region in which to deploy the Google Cloud project"
      default = "us-west1"
    }
    
    variable "region_zone" {
      description = "The region zone in which to deploy the Google Cloud project"
      default = "us-west1-a"
    }
    
    variable "machine_type" {
      description = "The type of virtual machine used to deploy NGINX"
      default = "n1-standard-1"
    }
    
    variable "credentials_file_path" {
      description = "Path to the JSON file used to describe your account credentials"
      default = "~/.gcloud/gcloud_credentials.json"
    }
  2. If appropriate, modify the GCE health‑check settings in terraform/healthcheck.tf (the time‑related parameters are in seconds).

    You can also change the settings for the GCE instance group manager if you wish, but note that instance names must obey the following conventions for the script to work:

    • Load balancer instances must contain lb in their names, and not app
    • Application instances must contain app in their names, and not lb

    # Configure HTTP health checks for NGINX
    resource "google_compute_http_health_check" "default" {
      name = "nginx-http-health-check"
      description = "Basic HTTP health check to monitor NGINX instances"
      request_path = "/"
      check_interval_sec = 10
      timeout_sec = 10
      healthy_threshold = 2
      unhealthy_threshold = 10
    }
    
    # Configure a GCE instance group manager for the NGINX load balancer
    resource "google_compute_instance_group_manager" "lb" {
      name = "ngx-plus-lb-instance-group"
      description = "Instance group to host NGINX Plus load balancing instances"
      base_instance_name = "nginx-plus-lb-instance-group"
      instance_template = "${google_compute_instance_template.lb.self_link}"
      zone = "${var.region_zone}"
      target_pools = [
        "${google_compute_target_pool.default.self_link}",
      ]
      target_size = 2
      auto_healing_policies {
        health_check = "${google_compute_http_health_check.default.self_link}"
        initial_delay_sec = 300
      }
    }
  3. If appropriate, modify the autoscaling settings in terraform/autoscaler.tf for the three instance group managers (the lb, app-1, and app-2 resources); for information about the settings, see the Terraform documentation. In the solution, each autoscaling policy spawns a new instance whenever average CPU utilization across the group's instances exceeds 50% (cpu_utilization), up to a maximum of five instances per group manager (max_replicas). As an example, this is the stanza for the lb resource.

    # Create a Google autoscaler for the LB instance group manager
    resource "google_compute_autoscaler" "lb" {
      name = "nginx-plus-lb-autoscaler"
      zone = "${var.region_zone}"
      target = "${google_compute_instance_group_manager.lb.self_link}"
      autoscaling_policy {
        max_replicas = 5
        min_replicas = 2
        cpu_utilization {
          target = 0.5
        }
      }
    }

Starting and Stopping the Instances

After completing the steps in the previous three sections, start the NGINX Plus and NGINX Open Source instances by opening a terminal, navigating to the location where you cloned the repository, and running ./setup.sh.

When you’re done with the demo, or if you want to delete the Google Cloud environment you’ve just created, run ./cleanup.sh.

Try out NGINX Plus on GCE for yourself – start your free 30-day trial today or contact us to discuss your use cases.


About The Author

Alessandro Fael Garcia

Technical Marketing Engineer
