Introduction

NGINX Plus utilizes Keepalived to provide High Availability (HA) in a standard Active-Passive fashion. This provides failover redundancy in the event of a problem on the primary NGINX Plus node. We can extend this functionality with additional nodes and configuration changes in Keepalived to provide additional redundancy and scalability options. This guide assumes that you have already configured NGINX Plus in an Active-Passive implementation with the NGINX HA solution.

NOTE: In a public cloud deployment NGINX recommends using a Layer 4 or TCP Load Balancer service offered by the cloud provider to distribute traffic to NGINX Plus for Active-Active functionality.

Why Add a Passive Node?

Many organizations have strict redundancy requirements that a two-node active-passive system may not meet. Adding a third node, configured to take over if both other nodes are down, provides further redundancy while keeping the configuration simple. It also allows you to take a node down for maintenance without losing redundancy.

Why Configure Active-Active?

You can run NGINX Plus in an “active-active” fashion, where two or more nodes handle traffic at the same time. This is achieved using multiple active IP addresses. Each IP address is hosted on a single NGINX instance, and the Keepalived configuration ensures that these IP addresses are spread across two or more active nodes.

  • When hosting multiple services, configure each service’s DNS name to resolve to one of the IP addresses, distributing the addresses across the services.
  • Use round-robin DNS to map a single DNS name to multiple IP addresses.
  • Use a L3 load-balancing device such as a datacenter edge load balancer to distribute L3 traffic between the IP addresses.
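As a rough illustration of the round-robin DNS option, the sketch below simulates a DNS server that returns all A records for a name, rotating their order on each query (the hostname is hypothetical; the addresses are the example VIPs used later in this guide):

```python
def make_resolver(addresses):
    """Simulate round-robin DNS: each query returns all A records for the
    name, rotated so successive queries start at a different address."""
    state = {"offset": 0}

    def resolve(name):
        i = state["offset"]
        state["offset"] = (i + 1) % len(addresses)
        return addresses[i:] + addresses[:i]

    return resolve

resolve = make_resolver(["192.168.10.100", "192.168.10.101"])
print(resolve("www.example.com"))  # ['192.168.10.100', '192.168.10.101']
print(resolve("www.example.com"))  # ['192.168.10.101', '192.168.10.100']
```

Because clients typically connect to the first address returned, the rotation spreads new connections across the active VIPs.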

Active-active may be used to increase the capacity of your load-balanced cluster, but be aware that if a single node in an active-active pair fails, capacity is reduced by half. You can use active-active as a form of safety, providing sufficient resources to absorb unexpected spikes of traffic while all nodes are active, and you can use active-active in larger clusters to provide more redundancy.

Note that NGINX instances in a load-balanced cluster do not share configuration or state. For best performance in an active-active scenario, ensure that connections from the same client are routed to the same active IP address, and use session persistence methods such as sticky cookie that do not rely on server-side state.
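One way to keep a given client on the same active IP address is a deterministic hash of the client address, as sketched below. This is a minimal illustration of the idea; in practice the mapping would be done by whatever device or DNS layer sits in front of the VIPs, and the client address shown is hypothetical:

```python
import zlib

# The two active VIPs from the examples in this guide.
VIPS = ["192.168.10.100", "192.168.10.101"]

def vip_for_client(client_ip):
    """Deterministically map a client IP to one of the active VIPs, so
    repeat connections from the same client reach the same node."""
    return VIPS[zlib.crc32(client_ip.encode()) % len(VIPS)]

# The same client always maps to the same VIP:
assert vip_for_client("203.0.113.7") == vip_for_client("203.0.113.7")
```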

Configuration to Add a Passive Node

To configure an additional passive node for your existing NGINX Plus active-passive HA pair, perform the following steps:

  1. Install the nginx-plus and nginx-ha-keepalived packages on the new node
  2. Copy /etc/keepalived/keepalived.conf from the secondary node to the same location on the new node.
  3. Edit keepalived.conf on the new node:
    • lower the priority in any vrrp_instance blocks so that it is lower than on the other nodes.
    • change unicast_src_ip to the new node's host IP.
    • add the IP address of the secondary node to the unicast_peer section so that all other nodes are listed.
    • Below is a configuration example of keepalived.conf for the additional passive node, assuming the IP address of the new node is 192.168.10.12, the other two nodes are 192.168.10.10 and 192.168.10.11, and the VIP is 192.168.10.100:

      vrrp_script chk_nginx_service {
          script  "/usr/lib/keepalived/nginx-ha-check"
          interval 3
          weight   50
      }
      
      vrrp_instance VI_1 {
          interface         eth0
          state             BACKUP
          priority          99
          virtual_router_id 51
          advert_int        1
          accept
          unicast_src_ip    192.168.10.12
      
          unicast_peer {
              192.168.10.10
              192.168.10.11
          }
      
          virtual_ipaddress {
              192.168.10.100
          }
      
          track_script {
              chk_nginx_service
          }
      
          notify "/usr/lib/keepalived/nginx-ha-notify"
      }
  4. Edit keepalived.conf on the other two nodes:
    • add the IP of the new node to the unicast_peer section so that all other nodes are listed, for example on the primary node (192.168.10.10):
        unicast_peer {
            192.168.10.11
            192.168.10.12
        }
  5. Restart Keepalived on all nodes.
  6. Test failover by stopping the nginx service on the first two nodes, one after the other, and verifying that the VIP moves to the remaining node.
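The election that this test exercises can be sketched as follows. Keepalived's actual behavior is governed by the VRRP protocol, but for a healthy cluster the outcome is simply that the highest-priority live node holds the VIP. This is a minimal sketch, assuming the default 101/100 priorities on the existing pair and 99 on the new node:

```python
# Priorities from the three-node example: the existing primary and secondary
# (assumed 101/100 from the active-passive setup) plus the new node at 99.
PRIORITIES = {"192.168.10.10": 101, "192.168.10.11": 100, "192.168.10.12": 99}

def vip_owner(up_nodes):
    """Return the node holding the VIP: the live node with the highest priority."""
    live = [node for node in up_nodes if node in PRIORITIES]
    return max(live, key=PRIORITIES.get) if live else None

assert vip_owner(PRIORITIES) == "192.168.10.10"                          # all healthy
assert vip_owner(["192.168.10.11", "192.168.10.12"]) == "192.168.10.11"  # primary down
assert vip_owner(["192.168.10.12"]) == "192.168.10.12"                   # first two down
```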

Configuration file and SSL certificate file synchronization is out of scope for this document, but make sure all nodes have identical NGINX Plus configuration.

Configuration for Active-Active

To direct traffic to both nodes at the same time, an additional Virtual IP (VIP) must be used. This new VIP is active on the previously passive node, so that each node is active with its own VIP. To configure an existing NGINX Plus HA pair for active-active, perform the following steps:

  1. Edit the keepalived.conf on the secondary node:
    • copy the entire vrrp_instance block VI_1 and paste it below the existing block
    • within the new vrrp_instance block:
      • rename the new vrrp_instance to VI_2 or other unique name
      • change the virtual_router_id to 61 or another unique value
      • change the virtual_ipaddress to an available IP on the same subnet, in this example 192.168.10.101
      • change the priority value to 100
      • Below is a configuration example for the resulting keepalived.conf on the secondary node (192.168.10.11):

        vrrp_script chk_nginx_service {
            script  "/usr/lib/keepalived/nginx-ha-check"
            interval 3
            weight   50
        }
        
        vrrp_instance VI_1 {
            interface         eth0
            state             BACKUP
            priority          100
            virtual_router_id 51
            advert_int        1
            accept
            unicast_src_ip    192.168.10.11
        
            unicast_peer {
                192.168.10.10
            }
        
            virtual_ipaddress {
                192.168.10.100
            }
        
            track_script {
                chk_nginx_service
            }
        
            notify "/usr/lib/keepalived/nginx-ha-notify"
        }
        
        vrrp_instance VI_2 {
            interface         eth0
            state             BACKUP
            priority          100
            virtual_router_id 61
            advert_int        1
            accept
            unicast_src_ip    192.168.10.11
        
            unicast_peer {
                192.168.10.10
            }
        
            virtual_ipaddress {
                192.168.10.101
            }
        
            track_script {
                chk_nginx_service
            }
        
            notify "/usr/lib/keepalived/nginx-ha-notify"
        }
  2. Edit the keepalived.conf on the primary node:
    • repeat steps performed on the secondary node
    • set the priority within the new vrrp_instance to 99 or a value lower than on the secondary node
  3. Restart Keepalived on all nodes

Configuration file and SSL certificate file synchronization is out of scope for this document, but make sure all nodes have identical NGINX Plus configuration.

NGINX Configuration Changes for Active-Active

Now that two NGINX nodes are active with their own VIP, NGINX itself must be configured. There are two options to distribute traffic to the active nodes. Option 1 has all nodes active, with each node handling at least one application. Option 2 has all applications active on all nodes.

NOTE: If the application being load balanced requires session persistence, it is recommended to use sticky cookie, sticky route, or IP hash, as these methods function with multiple active nodes. Sticky learn creates a session table in memory that is not shared between active nodes.
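For example, the sticky cookie method records the chosen upstream server in a cookie held by the client, so whichever active node receives a later request can honor it without shared server-side state. The upstream name and server addresses below are hypothetical:

```nginx
upstream backend {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;

    # NGINX Plus sticky cookie: the routing decision travels with the client,
    # so no session table needs to be shared between the active nodes.
    sticky cookie srv_id expires=1h path=/;
}
```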

  1. Each server block uses the listen directive to specify which VIP it is listening on.
    • In this configuration, each NGINX node only processes requests to server blocks for which it has an active VIP. In the event of a failover, the surviving node becomes master for the additional VIPs and processes requests for the associated server blocks.

    Building on the active-active keepalived.conf example above, we use the same two VIPs here. In this example, application 1 is active on NGINX node 1 and application 2 is active on NGINX node 2:

    server {
        listen 192.168.10.100:80;
    
        location / {
            root /application1;
        }
    }
    
    server {
        listen 192.168.10.101:80;
    
        location / {
            root /application2;
        }
    }
  2. Each NGINX node listens for all requests. DNS load balancing is used to distribute requests to NGINX nodes.
    • In this configuration, NGINX is able to process the traffic for any application on any VIP. In the event of a failure on a node, its VIP moves to the node with the next highest priority, so the DNS load-balancing configuration does not need to change.
    • To distribute traffic evenly across all NGINX nodes, some form of DNS load balancing is required. Simple round-robin DNS is sufficient and can be configured using the documentation for your DNS server. Ensure that your DNS server configuration has an A record for each VIP under the same FQDN. Each time the name is resolved, the server replies with all VIPs, in a rotating order.

In this example both applications are active on both nodes:

server {
    listen *:80;

    location /app1 {
        root /application1;
    }

    location /app2 {
        root /application2;
    }
}

Combining and Expanding Methods

Both of the above methods can be combined for an Active-Active-Passive configuration or either can be expanded to an Active1-Active2-…-ActiveN configuration.

Below is a configuration example for Active-Active-Active, showing the first node (192.168.10.10). The steps for adding an active node are simply repeated to add a third. Notice that this node is master for one VIP, secondary for another, and tertiary for the third.

vrrp_script chk_nginx_service {
    script   "/usr/lib/keepalived/nginx-ha-check"
    interval 3
    weight   50
}

vrrp_instance VI_1 {
    interface         eth0
    state             BACKUP
    priority          101
    virtual_router_id 51
    advert_int        1
    accept
    unicast_src_ip    192.168.10.10

    unicast_peer {
        192.168.10.11
        192.168.10.12
    }

    virtual_ipaddress {
        192.168.10.100
    }

    track_script {
        chk_nginx_service
    }

    notify "/usr/lib/keepalived/nginx-ha-notify"
}

vrrp_instance VI_2 {
    interface         eth0
    state             BACKUP
    priority          100
    virtual_router_id 61
    advert_int        1
    accept
    unicast_src_ip    192.168.10.10

    unicast_peer {
        192.168.10.11
        192.168.10.12
    }

    virtual_ipaddress {
        192.168.10.101
    }

    track_script {
        chk_nginx_service
    }

    notify "/usr/lib/keepalived/nginx-ha-notify"
}

vrrp_instance VI_3 {
    interface         eth0
    state             BACKUP
    priority          99
    virtual_router_id 71
    advert_int        1
    accept
    unicast_src_ip    192.168.10.10

    unicast_peer {
        192.168.10.11
        192.168.10.12
    }

    virtual_ipaddress {
        192.168.10.102
    }

    track_script {
        chk_nginx_service
    }

    notify "/usr/lib/keepalived/nginx-ha-notify"
}
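Nodes 2 and 3 use the same three vrrp_instance blocks with their own unicast_src_ip and peer lists, and with the 101/100/99 priorities rotated so that each node is master for exactly one VIP. Only node 1's file is shown above, so the sketch below is the implied rotation rather than configuration taken from the guide:

```python
NODES = ["192.168.10.10", "192.168.10.11", "192.168.10.12"]
VIPS = ["192.168.10.100", "192.168.10.101", "192.168.10.102"]

def priority(node, vip):
    """Rotate the 101/100/99 priorities so that node i is master for VIP i."""
    return 101 - ((vip - node) % len(NODES))

# Node 1's values match the example above: 101, 100, 99 for VI_1, VI_2, VI_3.
assert [priority(0, v) for v in range(3)] == [101, 100, 99]

# Every VIP has a distinct master: the node whose index matches the VIP's.
for v in range(len(VIPS)):
    assert max(range(len(NODES)), key=lambda n: priority(n, v)) == v
```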

Below is a configuration example for the passive node in Active-Active-Passive, combining the steps in “Configuration for Active-Active” with the steps in “Configuration to Add a Passive Node”.

vrrp_script chk_nginx_service {
    script   "/usr/lib/keepalived/nginx-ha-check"
    interval 3
    weight   50
}

vrrp_instance VI_1 {
    interface         eth0
    state             BACKUP
    priority          99
    virtual_router_id 51
    advert_int        1
    accept
    unicast_src_ip    192.168.10.12

    unicast_peer {
        192.168.10.10
        192.168.10.11
    }

    virtual_ipaddress {
        192.168.10.100
    }

    track_script {
        chk_nginx_service
    }

    notify "/usr/lib/keepalived/nginx-ha-notify"
}

vrrp_instance VI_2 {
    interface         eth0
    state             BACKUP
    priority          99
    virtual_router_id 61
    advert_int        1
    accept
    unicast_src_ip    192.168.10.12

    unicast_peer {
        192.168.10.10
        192.168.10.11
    }

    virtual_ipaddress {
        192.168.10.101
    }

    track_script {
        chk_nginx_service
    }

    notify "/usr/lib/keepalived/nginx-ha-notify"
}

See Also