This guide explains how to use NGINX Plus to complement the native load‑balancing options in the Amazon Web Services (AWS) cloud. We show how to implement our recommended solution, which combines the AWS Network Load Balancer (NLB) for fast and efficient handling of Layer 4 traffic, and NGINX Plus for advanced, Layer 7 features such as load balancing, caching, and content‑based routing. The combined AWS NLB and NGINX Plus solution is fast, powerful, reliable, highly portable to other platforms, and likely to be relatively low‑cost.

This guide explains how to set up an AWS NLB in front of one pair of NGINX Plus load balancers. (You can increase resiliency as needed by following the same steps for additional NGINX Plus instances.)

For your convenience, the Appendix provides instructions for creating Amazon Elastic Compute Cloud (EC2) instances that have NGINX and NGINX Plus installed, as required for the tutorial in this guide.

About AWS NLB

AWS NLB is optimized for fast, efficient load balancing at the connection level (Layer 4). AWS NLB uses a flow hash routing algorithm.

AWS NLB is ideal for fast load balancing of TCP traffic, as it’s able to handle millions of requests per second while maintaining ultra‑low latencies. This enables AWS NLB to more easily handle volatile traffic patterns – patterns with sudden and dramatic changes in the amount of traffic.

Unlike AWS' earlier load‑balancing options (the Classic and Application Load Balancers), AWS NLB supports static IP addresses and can be assigned Elastic IP addresses.

About NGINX Plus

NGINX Plus is complementary to NLB. Operating at Layer 7 (the application layer), it uses more advanced load‑balancing criteria, including schemes that rely on the content of requests and the results of NGINX Plus’ active health checks.

NGINX Plus is the commercially supported version of the open source NGINX software. NGINX Plus is a complete application delivery platform, extending the power of NGINX with a host of enterprise‑ready capabilities that enhance an AWS web application deployment and are instrumental to building web applications at scale.

NGINX Plus provides both reverse‑proxy and load‑balancing features, including caching, content‑based request routing, session persistence, and active application health checks.

Solution Overview

The setup in this tutorial combines AWS NLB, AWS target groups, EC2 instances running NGINX Plus, and EC2 instances running open source NGINX, which together provide a highly available, all‑active NGINX and NGINX Plus solution.

AWS NLB handles Layer 4 TCP connections and balances traffic using a flow hash routing algorithm. By default, an AWS NLB has a DNS name to which an IP address is assigned dynamically, but you can optionally attach an Elastic IP address to the AWS NLB to ensure that it will always be reachable at the same IP address.

The AWS NLB listens for incoming connections as defined by its listeners. Each listener forwards a new connection to one of the available instances in a target group, chosen using the flow hash routing algorithm.

In this tutorial, the target group consists of two NGINX Plus load balancer instances. However, you can register additional instances in the target group as needed, or use an AWS Auto Scaling group to adjust the number of NGINX Plus instances dynamically.

Prerequisites and Required AWS Configuration

These instructions assume you have the following:

  • An AWS account.
  • An NGINX Plus subscription, either paid or a 30‑day free trial.
  • Familiarity with NGINX and NGINX Plus configuration syntax. Complete configuration snippets are provided, but not analyzed in detail.

The tutorial uses six EC2 instances: two instances running NGINX Plus as load balancers and four instances running open source NGINX as web servers. (The four NGINX instances run two different apps, which are load balanced by the NGINX Plus load‑balancer instances.)

For your convenience, instructions for installing and configuring these instances are provided in the indicated sections of the Appendix:

  1. Creating Amazon EC2 Instances
  2. Setting Up NGINX Web Server Instances
  3. Setting Up NGINX Plus Load Balancer Instances
  4. Automating Instance Setup with Packer and Terraform

The tutorial uses the following instance names:

  • Four web server instances running NGINX:
    • App 1:
      • ngx-oss-app1-1
      • ngx-oss-app1-2
    • App 2:
      • ngx-oss-app2-1
      • ngx-oss-app2-2
  • Two load balancer instances running NGINX Plus:
    • ngx-plus-1
    • ngx-plus-2

Configuring an AWS Network Load Balancer

With the required AWS configuration in place (see the Appendix for instructions), we’re ready to configure an AWS NLB for a highly available, all‑active NGINX Plus setup.

Step 1 – Allocating an Elastic IP Address

The first step is to allocate an Elastic IP address, which becomes the fixed IP address for your AWS NLB. (While using an Elastic IP address is optional, we strongly recommend that you do so. With a dynamic IP address, the AWS NLB might not remain reachable if you reconfigure or restart it.)

  1. Log in to the AWS Management Console for EC2 (https://console.aws.amazon.com/ec2/).

  2. In the left navigation bar, select Elastic IPs, then click the  Allocate new address  button.

  3. In the Allocate new address window that opens, click the  Allocate  button.

  4. When the message appears indicating that the request for an Elastic IP address succeeded, click the  Close  button.

The new Elastic IP address appears on the Elastic IPs dashboard.
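
If you prefer to work from the command line, the AWS CLI can allocate the same kind of address (this assumes the AWS CLI is installed and configured). Note the allocation ID in the output; it identifies the address when you attach it to the AWS NLB.

    $ aws ec2 allocate-address --domain vpc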

Step 2 – Creating an AWS NLB

  1. In the left navigation bar, select Load Balancers, then click the  Create Load Balancer  button.

  2. In the Select load balancer type window that opens, click the  Create  button in the  Network Load Balancer  panel (the center one).

  3. In the Step 1: Configure Load Balancer window that opens, enter the following values:

    • In the Basic Configuration section:
      • Name – Name of your AWS NLB (aws-nlb-lb in this tutorial).
      • Scheme – internet-facing.
    • In the Listeners section:
      • Load Balancer Protocol – TCP (the only available option).
      • Load Balancer Port – Port on which your AWS NLB listens for incoming connections. In this tutorial, and for most web applications, it is port 80.
    • In the Availability Zones section, select the zones that host the EC2 instances to which your AWS NLB routes traffic, by clicking the appropriate radio button in the Availability Zone column.


  4. When you select an availability zone in the table, a drop‑down menu appears in the Elastic IP column. Select the address you allocated in Step 1 – Allocating an Elastic IP Address.

  5. Click the  Next: Configure Routing  button in the lower‑right corner of the window.
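
For reference, the AWS CLI can create an equivalent NLB in a single call. The sketch below is illustrative only; the subnet ID and Elastic IP allocation ID are placeholders for your own values.

    $ aws elbv2 create-load-balancer --name aws-nlb-lb --type network \
        --scheme internet-facing \
        --subnet-mappings SubnetId=<subnet-ID>,AllocationId=<EIP-allocation-ID>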

Step 3 – Configuring the AWS NLB Routing Options

In the Step 2: Configure Routing window that opens, you create a target group, which contains the set of EC2 instances across which your AWS NLB load balances traffic (you’ll specify those instances in the next section).

  1. In the Target group section, select or enter the following values:

    • Target group – New target group
    • Name – Name of the target group (in the tutorial, aws-nlb-tg)
    • Protocol – TCP (the only available option)
    • Port – The port you specified for the Load Balancer Port field in Step 3 of the previous section (80 in this tutorial)
    • Target type – instance

  2. In the Health checks section, open the Advanced health check settings subsection and then enter the following values:

    • Protocol – Protocol the AWS NLB uses when sending health checks. The tutorial uses TCP, which means the AWS NLB makes a health check by attempting to open a TCP connection on the port specified in the next field.
    • Port – Port on the target instances to which the AWS NLB sends health checks. In the tutorial, we’re selecting traffic port to send health checks to the same port as regular traffic.
    • Healthy threshold – Number of consecutive health checks an unhealthy instance must pass to be considered healthy.
    • Unhealthy threshold – Number of consecutive health checks a healthy instance must fail to be considered unhealthy.
    • Timeout – Number of seconds the AWS NLB waits for a response to the health check before considering the instance unhealthy.
    • Interval – Number of seconds between health checks.

    If you want to use HTTP‑based health checks, select HTTP or HTTPS in the Protocol field instead of TCP. Two additional fields appear:

    • Path – The path to which the AWS NLB sends a GET request as the health check.
    • Success codes – Range of HTTP response codes the AWS NLB accepts as indicating a successful health check.

  3. Click the  Next: Register Targets  button in the lower‑right corner of the window.
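
As with the load balancer itself, the target group can be created with the AWS CLI instead of the console. The sketch below mirrors the settings above and uses TCP health checks; the VPC ID is a placeholder.

    $ aws elbv2 create-target-group --name aws-nlb-tg --protocol TCP --port 80 \
        --target-type instance --health-check-protocol TCP --vpc-id <VPC-ID>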

Step 4 – Registering Instances in the Target Group

In the Step 3: Register Targets window that opens, you add instances to the empty target group you created in the previous section. For this tutorial, we add both of our NGINX Plus load balancer instances.

  1. In the Instances table, click the radio button in the left‑most column for the two NGINX Plus load balancer instances, ngx-plus-1 and ngx-plus-2.

  2. Click the  Add to registered  button above the table. The instances are added to the Registered targets table.

  3. Click the  Next: Review  button in the lower‑right corner of the window.
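
For reference, the AWS CLI equivalents of registering the instances and creating the TCP listener (which the console wizard sets up for you from the values in Step 2 – Creating an AWS NLB) are sketched below; the ARNs and instance IDs are placeholders.

    $ aws elbv2 register-targets --target-group-arn <target-group-ARN> \
        --targets Id=<ngx-plus-1-instance-ID> Id=<ngx-plus-2-instance-ID>
    $ aws elbv2 create-listener --load-balancer-arn <NLB-ARN> --protocol TCP --port 80 \
        --default-actions Type=forward,TargetGroupArn=<target-group-ARN>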

Step 5 – Launching the AWS NLB

In the Step 4: Review window that opens:

  1. Verify that the settings are correct. If so, click the  Create  button in the lower‑right corner of the window. To change settings, click the  Previous  button to go back to earlier windows.

  2. The AWS NLB is provisioned. When the success message appears, click the  Close  button to return to the Load Balancers dashboard.

  3. The Load Balancers dashboard opens. As noted in the previous Load Balancer Creation Status window, it can take a few minutes to provision the AWS NLB. When the value in the State column of the table changes to active, click the radio button in the left‑most column to display details about the AWS NLB.

  4. To verify that the AWS NLB is working correctly, open a new browser window and navigate to the AWS NLB’s public DNS name, which appears in the DNS name field in the Basic Configuration section of the dashboard. [If you copy and paste the DNS name, be sure not to include the parenthesized words at the end, (A Record).]

    The default Welcome to nginx! page indicates that the AWS NLB has successfully forwarded a request to one of the two NGINX Plus instances.

  5. To verify that the NGINX Plus load balancer is working correctly, add /backend-one and then /backend-two to the public DNS name. The pages indicate that you have reached NGINX instances serving the two backend applications, App 1 and App 2.
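
You can run the same checks from a terminal. Replace the placeholder with your AWS NLB's DNS name; each command prints the title of the page returned.

    $ curl -s http://<NLB-DNS-name>/ | grep title
    $ curl -s http://<NLB-DNS-name>/backend-one | grep title
    $ curl -s http://<NLB-DNS-name>/backend-two | grep title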

Appendix – Creating and Configuring Amazon EC2 Instances

The instructions in this Appendix explain how to create the EC2 instances used in this tutorial, install and configure NGINX and NGINX Plus on them, and deploy them.

You can choose from three methods: manual, semi‑automated using Ansible, or fully automated using Packer and Terraform.

Creating Amazon EC2 Instances

In this tutorial we are using six Amazon EC2 instances. Two NGINX Plus instances load balance traffic among four open source NGINX web server instances serving two distinct applications.

Note: To automate the creation and configuration of all six instances, see Automating Instance Setup with Packer and Terraform.

To create the Amazon EC2 instances manually, follow these steps:

  1. Log in to the AWS Management Console for EC2 (https://console.aws.amazon.com/ec2/).

  2. In the left navigation bar, select Instances, then click the  Launch Instance  button to create a new instance.

  3. In the Step 1: Choose an Amazon Machine Image (AMI) window, click the  Select  button for the Linux distribution of your choice. In this tutorial, we are using Amazon Linux 2017.09 for the load balancers and Ubuntu 16.04 for the web servers.

  4. In the Step 2: Choose an Instance Type window, click the radio button for the appropriate instance type. For all instances in this tutorial, we are selecting a t2.micro instance, which is normally selected by default and suffices for demo purposes.

    Note: At the time of publication of this guide, AWS gives you 750 hours of free usage per month with this instance type during the first year of your AWS account. Keep in mind, however, that the six instances in the tutorial running 24 hours a day use up the 750 hours in just over five days.

  5. Click the  Next: Configure Instance Details  button.

  6. In the Step 3: Configure Instance Details window, select the default subnet for your VPC in the Subnet field, then click the  Next: Add Storage  button.

  7. In the Step 4: Add Storage window, leave the defaults unchanged. Click the  Next: Add Tags  button.

  8. In the Step 5: Add Tags window, click the  Add Tag  button, and type the appropriate values in the Key and Value fields. For the tutorial, in the Key field we are using Name, and in the Value field the instance names listed in Prerequisites and Required AWS Configuration, such as ngx-plus-1 for the NGINX Plus instances (Amazon Linux AMIs) and ngx-oss-app1-1 for the NGINX instances (Ubuntu AMIs).

  9. Click the  Next: Configure Security Group  button.

  10. In the Step 6: Configure Security Group window, select or enter the following values:

    • Assign a security group – Create a new security group for the first instance you are creating. For the subsequent five instances, select Select an existing security group instead (all of the instances in the tutorial use the same security group).
    • Security group name – Name of the group (in the tutorial, aws-nlb-sg)
    • Description – Description for the group (in the tutorial, aws-nlb-sg)

  11. In the table, replace the default rule with one that allows inbound SSH connections from all sources, by selecting the following values:

    • Type – SSH
    • Protocol – TCP
    • Port Range – 22
    • Source – Custom 0.0.0.0/0


  12. Create a rule that allows inbound HTTP connections from all sources, by clicking the  Add Rule  button and selecting the following values in the new row (an AWS CLI sketch of this rule and the SSH rule from the previous step appears after this list of steps):

    • Type – HTTP
    • Protocol – TCP
    • Port Range – 80
    • Source – Custom 0.0.0.0/0


  13. If appropriate, repeat the previous step to create a rule for HTTPS traffic.

  14. After creating all the desired rules, click the  Review and Launch  button.

  15. In the Step 7: Review Instance Launch window, verify the settings are correct. If so, click the  Launch  button in the lower‑right corner of the window. To change settings, click the  Previous  button to go back to earlier windows.

  16. A pop‑up appears asking you to select an existing key pair or create a new key pair. Depending on your use case, create a new key pair or select a pre‑existing one. Then click the  Launch Instances  button.

    Note: It’s a best practice – and essential in a production environment – to create a separate key for each EC2 instance, so that if a key is compromised only the single associated instance becomes vulnerable.

  17. On the Launch Status page, click the  View Instances  button.

    The instances you have created so far are listed on the Instances dashboard. In this tutorial, the first instance created is ngx-plus-1.

  18. Finalize your security group rules. You need to do this only for the first instance, because all six instances use the same security group.

    1. In the left navigation bar, select Security Groups.

    2. Select the security group (in the tutorial, aws-nlb-sg) by clicking its radio button in the leftmost column. A panel opens in the lower part of the window displaying details about it.

    3. Open the Inbound tab and verify that the rules you created in Steps 11 through 13 are listed.

    4. Open the Outbound tab and click the  Edit  button to create a rule for outbound traffic. In the tutorial we need just one rule, because we have used port 80 for both the listener port on the NGINX Plus instances and for health checks to those instances as members of the aws-nlb-tg target group. If you have configured separate ports for the two purposes, or ports other than 80 (such as 443 for HTTPS), make the appropriate adjustments. In the Destination field, type the security group’s ID (which appears in the Group ID field in the upper table). For a detailed discussion of security groups, see the AWS documentation.

  19. Repeat Steps 2 through 17 five more times to create the second NGINX Plus instance (an Amazon Linux AMI) and the four NGINX instances (Ubuntu AMIs).
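
As mentioned in Step 12, the two inbound rules can also be added with the AWS CLI once the security group exists. This is a sketch only; the security group ID is a placeholder.

    $ aws ec2 authorize-security-group-ingress --group-id <security-group-ID> \
        --protocol tcp --port 22 --cidr 0.0.0.0/0
    $ aws ec2 authorize-security-group-ingress --group-id <security-group-ID> \
        --protocol tcp --port 80 --cidr 0.0.0.0/0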

Connecting to an Instance

To complete the instructions for installing and configuring NGINX and NGINX Plus in the following sections of this Appendix, you need to open a terminal window for each Amazon EC2 instance and connect to the instance over SSH.

  1. Navigate to the Instances tab on the EC2 Dashboard if you are not there already.

  2. Click in the row for an instance to select it.

  3. Click the  Connect  button above the table.

  4. Follow the instructions in the window that pops up. They are customized for the selected instance (here, ngx-plus-1), providing the name of the key file and the public DNS name to use in the sample ssh command.
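
The resulting command usually looks like the sketch below. The key file name and public DNS name are placeholders taken from the pop‑up window; the user name is ec2-user on the Amazon Linux instances and ubuntu on the Ubuntu instances.

    $ ssh -i ~/.ssh/<key-name>.pem ec2-user@<instance-public-DNS-name>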

Setting Up NGINX Web Server Instances

The instructions in this section assume that you have followed the instructions in Creating Amazon EC2 Instances to create the six EC2 instances used in the tutorial.

For the purposes of the tutorial, on our four web server instances we are installing the open source NGINX software from the official binary distribution at nginx.org.

To automate the creation and configuration of all six instances used in the tutorial, see Automating Instance Setup with Packer and Terraform.

Note: The instructions assume that you have root privileges, and show the base form of each command. If appropriate for your environment, prefix the commands with the sudo command.

Installing Open Source NGINX on the Web Server Instances

You can install NGINX either manually or using Ansible.

Manual Installation

  1. Connect to the ngx-oss-app1-1 instance, following the instructions in Connecting to an Instance.

  2. Download the NGINX signing key and add it to the apt keyring:

    $ wget http://nginx.org/keys/nginx_signing.key
    $ apt-key add nginx_signing.key
  3. Change directory to /etc/apt:

    $ cd /etc/apt
  4. Open the sources.list file in your preferred text editor and append the following lines at the end (they are appropriate for our four web server instances, which are Ubuntu 16.04 LTS AMIs):

    deb http://nginx.org/packages/ubuntu xenial nginx
    deb-src http://nginx.org/packages/ubuntu xenial nginx
  5. Update your package repositories information and install open source NGINX:

    $ apt update && apt install -y nginx
  6. Repeat Steps 1 through 5 for the other three web server instances: ngx-oss-app1-2, ngx-oss-app2-1, and ngx-oss-app2-2.

  7. Proceed to Configuring Open Source NGINX on the Web Server Instances. (To save time, leave the connection to each instance open for reuse in that section.)
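
Optionally, before configuring the instances, you can confirm that NGINX is installed and serving its default page on each web server instance. This quick check is not part of the original steps.

    $ nginx -v
    $ service nginx start   # only if NGINX is not already running
    $ curl -s http://localhost | grep title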

Automated Installation with Ansible

  1. Connect to the ngx-oss-app1-1 instance, following the instructions in Connecting to an Instance.

  2. Install Ansible:

    $ apt update
    $ apt install python-pip -y
    $ pip install ansible
  3. Install the official open source NGINX Ansible Role:

    $ ansible-galaxy install nginxinc.nginx-oss
  4. Create a file called playbook.yml with the following contents:

    ---
    - hosts: localhost
      become: true
      roles:
        - role: nginxinc.nginx-oss
  5. Run the playbook:

    $ ansible-playbook playbook.yml
  6. Repeat Steps 1 through 5 for the other three web server instances: ngx-oss-app1-2, ngx-oss-app2-1, and ngx-oss-app2-2.

  7. Continue to the next section to configure NGINX on the web server instances. (To save time, leave the connection to each instance open for reuse in that section.)
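
As an optional variation on the steps above, you can run the playbook once from a single control machine against all four web server instances instead of logging in to each one. This is a hedged sketch: list the instances in an inventory file, change hosts: localhost to the group name in playbook.yml, and point Ansible at your SSH key (the DNS names and key name are placeholders).

    $ cat inventory.ini
    [webservers]
    <ngx-oss-app1-1-public-DNS>
    <ngx-oss-app1-2-public-DNS>
    <ngx-oss-app2-1-public-DNS>
    <ngx-oss-app2-2-public-DNS>

    $ ansible-playbook -i inventory.ini -u ubuntu --key-file ~/.ssh/<key-name>.pem playbook.yml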

Configuring Open Source NGINX on the Web Server Instances

  1. Return to the terminal for ngx-oss-app1-1 and change directory to /etc/nginx/conf.d:

    $ cd /etc/nginx/conf.d
  2. Move default.conf out of the way:

    $ mv default.conf default.conf.orig
  3. Create a new default.conf file with the following NGINX configuration:

    server {
        listen 80 default_server;
        server_name app_server;

        root /usr/share/nginx/html;
        error_log /var/log/nginx/app-server-error.log notice;
        index demo-index.html index.html;
        expires -1;

        sub_filter_once off;
        sub_filter 'server_hostname' '$hostname';
        sub_filter 'server_address' '$server_addr:$server_port';
        sub_filter 'server_url' '$request_uri';
        sub_filter 'remote_addr' '$remote_addr:$remote_port';
        sub_filter 'server_date' '$time_local';
        sub_filter 'client_browser' '$http_user_agent';
        sub_filter 'request_id' '$request_id';
        sub_filter 'nginx_version' '$nginx_version';
        sub_filter 'document_root' '$document_root';
        sub_filter 'proxied_for_ip' '$http_x_forwarded_for';
    }
  4. Change directory to /usr/share/nginx/html:

    $ cd /usr/share/nginx/html
  5. Create a new file called demo-index.html with the following text, which defines the page served by the web server. In the <title> tag, replace the X with 1 or 2 depending on whether you are deploying this code to an instance of App 1 or App 2.

    <!DOCTYPE html>
    <html>
    <head>
    <title>Hello World - App X <!-- Replace 'X' with '1' or '2' as appropriate --></title>
    <link href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAGPElEQVR42u1bDUyUdRj/iwpolMlcbZqtXFnNsuSCez/OIMg1V7SFONuaU8P1MWy1lcPUyhK1uVbKcXfvy6GikTGKCmpEyoejJipouUBcgsinhwUKKKJ8PD3vnzsxuLv35Q644+Ue9mwH3P3f5/d7n6/3/3+OEJ/4xCc+8YQYtQuJwB0kIp+JrzUTB7iJuweBf4baTlJ5oCqw11C/JHp+tnqBb1ngT4z8WgReTUGbWCBGq0qvKRFcHf4eT/ZFBKoLvMBGIbhiYkaQIjcAfLAK+D8z9YhjxMgsVUGc84+gyx9AYD0khXcMfLCmUBL68HMZ+PnHxyFw3Uwi8B8hgJYh7j4c7c8PV5CEbUTUzBoHcU78iIl/FYFXWmPaNeC3q4mz5YcqJPI1JGKql2Z3hkcjD5EUznmcu6qiNT+Y2CPEoH3Wm4A/QERWQFe9QQ0caeCDlSZJrht1HxG0D3sOuCEiCA1aj4ZY3Ipzl8LiVtn8hxi5zRgWM8YYPBODF/9zxOLcVRVs+YGtwFzxCs1Bo9y+avBiOTQeUzwI3F5+kOwxsXkkmWNHHrjUokqtqtSyysW5gUHV4mtmZEHSdRkl+aELvcFIRN397gPPXD4ZgbxJW1S5OJdA60MgUAyHu1KfAz+pfCUtwr+HuQc8ORQ1jK4ZgGsTvcY5uQP5oYkY2HfcK5sGLpS6l1xZQwNn7Xkedp3OgMrWC1DX0Qwnms/A1rK9cF9atNVo18DP/3o5fF99BGo7LFDRWgMJJQaYQv/PyOcHySP0TITrBIhYb+WSHLrlNGEx5NeXgj2paW8C5rs46h3Dc3kt3G2Ogr9aqoes+f5RvbL1aJ5iXnKnxkfIEoB3N/zHeHAmF9ovwryvYvC9TysnICkEonPX212vvOU8+As6eS+QCDAw0aNLABq6LO8DkJMSSznMMEfScFFGwCJYXbDV7lq17RYIQu+QTYpjRUBM3gZQIt+cOwyTpWRpYBQRsKrgU4ceNS4JkCSxLI1+ZsIS0NvXB6sLE/tL5EQkQJKOm52YON9y7glqJkCSOqzrD6Uvc1wZ1EBA07V/IafmN4ckHG+ugJkSEHuVQQ0ENFy9BLP3R0NR4ymHJGRWFWBnZ6fPVwMBF9EDgrD2z0USqtoaHJKw49SBoZ2dWggIxmcEsvspYLLi4PKNDrvv68OfuKLt/68MqiJAan4Q0IpDm6G7r8fue692X4fI7PiByqA6AqygNh0XHIaClDOkpz9aGVRJABo8CTP+3sqfHZJQeqkSgvHZn+xaqEICKAlhECSGO60MWdVF4IcesDL/ExUSYN3okCrD31fqHZLwcWkq5owPVUoA3UcIgdBv10BrV7vdz3b39kBhw0kVE2BNirG/bqRghyPqIcBKQkKJcVgE1LQ1wR3S5ooqCDBKlSEUzGdyFBNwvq1RTQT0b4BOF5+BgoayCUqAtTLMSXsRzl6uHX8EONoUtXS2KCfAusOsyVwFLV1tznNAuzflAGxb+R/esGuodDcD0bUVbYLelhRf/mWD08ogdYtTjNwYbIsrORhBIwJMPOTWHh1i6Lriz107FUKviivcZvfp8WZvN8TmbVS2rtsHI8mMtn9gSe50KAz79yWw8490OGYpp8lsTUGictd3EA6PHVwB20+mYUNURo/aMs4dhqjsdcoOWGxH5yYu0g0P0EzFBd7DxZoVHY7aHmWtB6VunwhLB6P0gFULk6zhJnvnBw5HW9D9N5GkpQEjMBcQOg+JMBNxjMZgHISawvGZHiKw+0mybv5ozP0txgvk07AQvWxAoh98sXsur3RmwMStxIud9fiIzMAIXTV6yNqxHaH7gg1GA7bgxVvHfEjq1hAl10ZM/A46gO0x0bOPoiHpSEDvsMZhXVVbVRL4TLz2E140EK1dgsnnd9mBaHcmwuigJHeCGLkXvHNaNHOBP4J/HYmoGbGwsJU1ka0nAvM2ht40758ZNmvvRRJ24l3roMa7MxVq4jpRdyMRc8bh9wR0TyIRWdR9hzNXaJs3Ftif6KDWuBcBH0hErky2bNraV5E9jcBjiapE1ExHkO8iEY1OvjLTjAkugezh7ySqFUPoXHTtZAR7ncY4rRrYYgtcCtGHPUgmjEhPmiKXjXc/l4g6HfGJT3ziEw/If86JzB/YMku9AAAAAElFTkSuQmCC" rel="icon" type="image/png" />
    <style>
    body {
    margin: 0px;
    font: 20px 'RobotoRegular', Arial, sans-serif;
    font-weight: 100;
    height: 100%;
    color: #0f1419;
    }

    div.info {
    display: table;
    background: #e8eaec;
    padding: 20px 20px 20px 20px;
    border: 1px dashed black;
    border-radius: 10px;
    margin: 0px auto auto auto;
    }

    div.info p {
    display: table-row;
    margin: 5px auto auto auto;
    }

    div.info p span {
    display: table-cell;
    padding: 10px;
    }

    img {
    width: 176px;
    margin: 36px auto 36px auto;
    display:block;
    }

    div.smaller p span {
    color: #3D5266;
    }

    h1, h2 {
    font-weight: 100;
    }

    div.check {
    padding: 0px 0px 0px 0px;
    display: table;
    margin: 36px auto auto auto;
    font: 12px 'RobotoRegular', Arial, sans-serif;
    }

    #footer {
    position: fixed;
    bottom: 36px;
    width: 100%;
    }

    #center {
    width: 400px;
    margin: 0 auto;
    font: 12px Courier;
    }
    </style>

    <script>
    var ref;
    function checkRefresh() {
    if (document.cookie == "refresh=1") {
    document.getElementById("check").checked = true;
    ref = setTimeout(function(){location.reload();}, 1000);
    } else {
    }
    }

    function changeCookie() {
    if (document.getElementById("check").checked) {
    document.cookie = "refresh=1";
    ref = setTimeout(function(){location.reload();}, 1000);
    } else {
    document.cookie = "refresh=0";
    clearTimeout(ref);
    }
    }
    </script>
    </head>

    <body onload="checkRefresh();">
    <img alt="NGINX Logo" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAWAAAABICAMAAAD/N9+RAAAAVFBMVEUAAAAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQAAmQDBect+AAAAG3RSTlMAB0AY8SD5SM82v1npsJ/YjSl0EVLftqllgMdZgsoQAAAHd0lEQVR42szZ6XabMBCG4ZGFxSazLzZz//fZc9I4JpbEN8LQ0/dnGwJ5DJGG0HdpM9kkuzVXiqussmRpLrRdnwqDp9ePyY7zXdFbqptHOz00RTVUxWiyquvJ26Upknp2/heWN0Uyzt3qYtKMn805ybsW/LdK01YVC6sVELH81XJ9o6j5q6Qkcepe83dJp8ipf161HSgm1TyPK5//cuN1d5KmE342bsnkLK6hre78LNG0KuWfOrFDwats69w8ln+qFIlrx9Vxf8808e8eJGx9YEXhCpZ3kX2gfFtbrX4m05IonTE7wsGLnpXY1/Kqr3v/5r+NcAOvy8HXCRt74W+alH568KqCJKmM37LafVhe3ZTU1/mmA7uV9Ar8vPjZVCPDZI+CDdwFC68yIooZnbhmIAx8XyoZu5mcYO9HzhSo47gGCqR53ULPlAGPkuyazJVeKWYsjH15Djy/VhPO8LoM/OJE4XNfeJ19LUfRj18KF9gLA2GZL4/UsLdFHQVccWyTCDjZD9wm7Kt2PgIgjH3ZBlf46iDgnOO7nwusavZmVoCaPU0q1pcnshyoOwa44PiS66nANw7U0isbK5x7j3gQB0uPAB54T8WZwA/RHrxhLIx9TbsBnLSfA6uRd9WdBzywCFiNUcJ5wr4eRByu7j8G7nhfpj0LuE0A8OtsSBj7ZooIL+dyYLxFm27+EvfSzgHua/GYXrK3Qol9a03bwNxEAeMt2ix/bptzgCeGwFhY7ouAufwIOA/PSni3nJ8B3DAElgtjXwxs8k+Al/BdiVfDWh0PPDAAjhXGvgTnVjkwujzbk1t4TWkOB24TBBwrjH2JQZnaC6xGsPdCT296MHA/MgKWC2NfL7Blp2ov8AM88/gNbX8osCrc5xMAA2Ho6wIXHTt1+4C1iZwMW8NvzYcCN67vAICBMPZ1galip3QXcAXHXzyVlB8AYyiT5wAYCWNfF1gtYGYWAufhNynyTWqiDwPOjeelnQiYShMQBr5+YNIWzMwy4CX69afv1NNRwHr07FKEwDT4hTPs6wL7P+tCxQKXm/eifJ963wmMF7hCYWBXGJdpAsBUopkZAyv3j3+i9PUtTa/U9VcAGC1wmgAwFsa+LnBooLxj4K0t2qjo8AAwWuAIAO8TznoSANMEZmYErA14p3EyMF7gSgLAQBj4ImBVg5kZAM/8u4VAJwJ7l+2GADAQBr4A2D+1Z0oMnKM3Y2cD4wUOAANh5IuB6cJOsxg4Q0eeCwwXuFETBnZLDfSVA1NwZsbAJXwN/C+B7771BAAjYeyLgX0z8yACVlawx1NaXh+5TcMLHACGwtgXA6OZ2QUObdGsorfabjIsr4wcNOACB4CBMPLFwOHpcuwx8NWgLXTJURW0H1gtngUOA8cLLz1FAsOZWQ4MfFH5B8CV7x75b4D/NHduS47CMBCVwYFAiDEmCQT+/z/3ZWumah1otZdL/MxMZc5gybJanU8tLI9DhF8PESXJ10k64PAxyn1LiPisMhr/N8kNHF+bpwPOis95+juS3IJOrsgQYBlXj2mWFVHRgHGC+4pj2kKjbG4ufKGRLmdtTTJgc12WKn1BofE7zBTXzAhwtlIqP9h5gmTAbq1xcHqpvBbHBgRY7suXPTl/ROMB4wR36mUPKjXnNwLcrVxXXimRZTLgDBSiZ15XYj3XAwAWv3zh7gnAXtIAx6Etnq888cIdX/fZDgDul1tGvf4Vtn0S4M8J7i7ROq1lhCVHzzwGvBpYbJ5AOEgq4EEzZn5K01MrmqvNOmDTLrft+8FSRzQecFBpO05p26tlnw7oIso14YnJ3i5aL6DF0wMuleqkM4Qn+smcAKRTL1Y65UDQVAO+WK2+7gTplH54usjWAXek+K+LCuxEwGMLul0R4EPFfz8L18zzKmDxIKSCN95LIuBGr3GujpevErqxGQDuLaPuyUAfBAPGg6Mx4OME2DhQVgUJWAIzQnBFfRAeMI5N1XEjBBiwjCxg0+qHYG7wt/GA8capDh+CqYkpCoykjPKWesio2gywEwD4qDEuDNjUJGCptQqUAB5MB3w1APBhg4gYsPQtCbib00Zpi3wrwM1FAOBjR2lrZBXCARY3J623bAS4yAQAPnIYHAOWkgSc2xS+T7MV4CAA8LF2BhiwBAwYP4+lPBsBdgIAH2XIgQHjTf+SrRw5auEAG5Dg9ID3t5TBgM3EWR88eMAVCVieYM5aDXgHUyQAmKiZR9nIFckJC/gFnALUgHew9QKAiZq5A3+EXspDAw7gP64GvIcxXQvfHl2B7tiozSf+y1JSNQ31gRYDQb6HteKQ4B3s4QucflRrDW8OKiHBujCO3s0u5qAjwKR0vnkDozL1emgd5W6EWa1ud7l97G0n3jhYzACOEMlHtVpjeBA/mLf/7IOoQsa7y+b7GDR3Rbw98fKQLy+5xv7VIXowIhy1ztUfbdzLYrz7cbrvRb/K+nf7wPPQpAXsEQ/7l2AXW97/AGkCwaNsIif8zU3y5eZaO/mK/jKDV1s872/Fz11K5TLE1zzEiP1km8ndDMcj3JvmFfqdvubhD8TgHPiN+LViAAAAAElFTkSuQmCC"/>

    <div class="info">
    <p><span>Server name:</span> <span>server_hostname</span></p>
    <p><span>Server address:</span> <span>server_address</span></p>
    <p><span>User Agent:</span> <span><small>client_browser</small></span></p>
    <p class="smaller"><span>URI:</span> <span>server_url</span></p>
    <p class="smaller"><span>Doc Root:</span> <span>document_root</span></p>
    <p class="smaller"><span>Date:</span> <span>server_date</span></p>
    <p class="smaller"><span>NGINX Front-End Load Balancer IP:</span> <span>remote_addr</span></p>
    <p class="smaller"><span>Client IP:</span> <span>proxied_for_ip</span></p>
    <p class="smaller"><span>NGINX Version:</span> <span>nginx_version</span></p>
    </div>

    <div class="check">
    <input type="checkbox" id="check" onchange="changeCookie()"> Auto Refresh</div>

    <div id="footer">

    <div id="center" align="center">
    Request ID: request_id<br/>
    © NGINX, Inc. 2016
    </div>
    </div>

    </body>
    </html>

  6. Repeat Steps 1 through 5 on ngx-oss-app1-2, ngx-oss-app2-1, and ngx-oss-app2-2. In Step 5 replace the X with 1 or 2 as appropriate.
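
NGINX does not pick up the new configuration and content until it is reloaded. On each web server instance, test the configuration and reload it (a small addition to the steps above):

    $ nginx -t
    $ nginx -s reload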

Setting Up NGINX Plus Load Balancer Instances

For the purposes of this tutorial, we’re installing two NGINX Plus load balancers.

To automate the creation and configuration of all six instances used in the tutorial, see Automating Instance Setup with Packer and Terraform.

Note: The instructions assume that you have root privileges, and show the base form of each command. If appropriate for your environment, prefix the commands with the sudo command.

Installing NGINX Plus on the Load Balancer Instances

You can install NGINX Plus either manually or using Ansible.

Manual Installation

  1. If you don’t already have NGINX Plus, sign up for a 30‑day free trial.

  2. Connect to the ngx-plus-1 instance, following the instructions in Connecting to an Instance.

  3. NGINX, Inc. provides a key and certificate for each subscription. To install them and the NGINX Plus package, follow the instructions included with the subscription email, or the instructions at the NGINX Plus customer portal.

  4. To verify that NGINX Plus is installed, run this command:

    $ nginx -v
  5. Repeat Steps 2 through 4 for the ngx-plus-2 instance.

  6. Proceed to Configuring NGINX Plus on the Load Balancer Instances. (To save time, leave the connection to each instance open for reuse in that section.)

Automated Installation with Ansible

  1. If you don’t already have NGINX Plus, sign up for a 30‑day free trial.

  2. Connect to the ngx-plus-1 instance, following the instructions in Connecting to an Instance.

  3. Install Ansible:

    $ apt update
    $ apt install python-pip -y
    $ pip install ansible
  4. Install the official NGINX Plus Ansible Role:

    $ ansible-galaxy install nginxinc.nginx-plus
  5. Copy your NGINX Plus certificate and key to ~/.ssh/certs/.

  6. Create a file called playbook.yml with the following contents:

    ---
    - hosts: localhost
      become: true
      roles:
        - role: nginxinc.nginx-plus
      vars:
        - certs: ~/.ssh/certs/
  7. Run the playbook:

    $ ansible-playbook playbook.yml
  8. Repeat Steps 2 through 7 for the ngx-plus-2 instance.

  9. Continue to the next section to configure NGINX Plus on the load balancer instances. (To save time, leave the connection to each instance open for reuse in that section.)

Configuring NGINX Plus on the Load Balancer Instances

  1. Return to the terminal for ngx-plus-1 and change directory to /etc/nginx/conf.d:

    $ cd /etc/nginx/conf.d
  2. Move default.conf out of the way:

    $ mv default.conf default.conf.orig
  3. Create a new default.conf file with the following NGINX configuration:

    upstream app1 {
        server <!-- Replace me with ngx-oss-app1-1's internal IP address -->;
        server <!-- Replace me with ngx-oss-app1-2's internal IP address -->;
        zone app1 64k;
    }

    upstream app2 {
        server <!-- Replace me with ngx-oss-app2-1's internal IP address -->;
        server <!-- Replace me with ngx-oss-app2-2's internal IP address -->;
        zone app2 64k;
    }

    server {
        listen 80;

        status_zone backend;

        root /usr/share/nginx/html;

        location / {
        }

        location /backend-one {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://app1/;
        }

        location /backend-two {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://app2/;
        }

        location = /status.html {
        }

        location /status {
            access_log off;
            status;
        }
    }

  4. Repeat Steps 1 through 3 for the ngx-plus-2 instance.
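
As with the web servers, test and reload the configuration on each NGINX Plus instance, then optionally confirm from the instance itself that both upstream groups respond (a local check using the locations configured above):

    $ nginx -t && nginx -s reload   # use 'service nginx start' instead if NGINX is not yet running
    $ curl -s http://localhost/backend-one | grep title
    $ curl -s http://localhost/backend-two | grep title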

You have completed the prerequisites and can continue with Configuring an AWS Network Load Balancer.

Automating Instance Setup with Packer and Terraform

As an alternative to setting up the instances manually or with Ansible, you can use the Packer and Terraform scripts provided in our GitHub repository.

The Packer and Terraform scripts create the setup for this tutorial, with two load balancer instances running NGINX Plus and four web server instances running open source NGINX. The web server instances represent two distinct websites, with the NGINX Plus instances load balancing between the two instances in each website.

After executing these scripts, you can jump into the instructions for creating an AWS NLB without any further setup. Additionally, the scripts create a new set of networking rules and security group settings to avoid conflicts with any pre‑existing network settings.

Note: Instead of using the default VPC – as is the case if you use the manual or semi‑automated setup instructions – this method creates a new VPC.

To run the scripts, follow these instructions:

  1. Install Packer and Terraform.

  2. Clone or download the GitHub repository.

    • The scripts in packer/ngx-oss are for creating an Ubuntu AMI running open source NGINX.
    • The scripts in packer/ngx-plus are for creating an AWS Linux AMI running NGINX Plus.
    • The scripts in terraform are for launching and configuring the two NGINX Plus load balancer instances and the four open source NGINX web server instances.


  3. Set your AWS credentials in the Packer and Terraform scripts:

    1. For Packer, set your credentials in the variables block in both packer/ngx-oss/packer.json and packer/ngx-plus/packer.json:

      "variables": {
      "home": "{{env `HOME`}}",
      "aws_access_key": "",
      "aws_secret_key": ""
      }
    2. For Terraform, set your credentials in terraform/provider.tf:

      provider "aws" {
      region = "us-west-1"
      access_key = ""
      secret_key = ""
      }
  4. Copy your NGINX Plus certificate and key to ~/.ssh/certs.

  5. Run the setup.sh script:

    $ chmod +x setup.sh
    $ ./setup.sh
  6. The script launches two NGINX Plus load balancer instances and four NGINX web server instances and configures the appropriate settings on each instance to run the tutorial.

    If you decide you want to delete the infrastructure created by Terraform, run the cleanup.sh script.

    $ chmod +x cleanup.sh
    $ ./cleanup.sh

Revision History

  • Version 1 (November 2017) – Initial version (NGINX Plus Release 13)