NGINX Plus provides a flexible replacement for traditional hardware and software application delivery controllers (ADCs). NGINX Plus is a small software package that can be installed just about anywhere – on bare metal, a virtual machine, or a container, and on premises or in a public, private, or hybrid cloud – while providing the same level of application delivery, high availability, and security offered by legacy ADCs. This guide explains how to migrate an F5 BIG‑IP Local Traffic Manager (LTM) configuration to the NGINX Plus software load balancer, and covers the most commonly used features and configurations to get you started quickly on your migration.

NGINX Plus and BIG‑IP LTM both act as a full reverse proxy and load balancer, so that the client sees the load balancer as the application and the backend servers see the load balancer as the client. This allows for great control and manipulation of the traffic. This guide focuses on basic load balancing. For information on extending the configuration with Layer 7 logic and scripting, see the post about migrating Layer 7 logic on the NGINX blog. It covers features such as content switching and request routing, rewriting, and redirection.

About NGINX and NGINX Plus

NGINX is an open source web server, reverse proxy, and load balancer that has grown in popularity in recent years due to its scalability. NGINX was first created to solve the C10K problem (serving 10,000 simultaneous connections on a single web server). NGINX’s features and performance have made it the go‑to solution at high‑performance sites – it now powers the majority of the 100,000 busiest websites in the world.

NGINX Plus is the commercially supported version of the open source NGINX software. NGINX Plus is a complete software load balancer and application delivery platform, extending the power of NGINX with a host of enterprise‑ready capabilities that are instrumental to building web applications at scale, including advanced load balancing, session persistence, active health checks, live activity monitoring, and high availability.

NGINX Plus Deployment Scenarios

Architecturally speaking, NGINX Plus differs from traditional ADCs in deployment location and function. Typical hardware‑based ADCs are usually deployed at the edge of the network and act as a front‑door entry point for all application traffic. It’s not uncommon to see a large hardware ADC straddle the public and private DMZs, assuming the large burden of processing 100% of the traffic as it comes into the network. You often see ADCs in this environment performing all functions related to traffic flow for all applications – security, availability, optimization, authentication, etc. – requiring extremely large and powerful hardware appliances. The downside to this model is that the ADC is always stationary at the “front door” of the network.

As they update their infrastructure and approach to application delivery, many companies are paring down the hardware ADC functionality at the edge and moving to a more distributed application model. Because the legacy hardware ADC is already sitting at the edge of the network it can continue to handle all ingress traffic management, directing application traffic to the appropriate NGINX Plus instances for each application type. NGINX Plus then handles traffic for each application type to provide application‑centric load balancing and high availability throughout the network, both on‑ and off‑premises. NGINX Plus is deployed closer to the application and is able to manage all traffic specific to each application type.

Figure: In one architecture for modernizing application delivery infrastructure, hardware ADCs on the edge of the network pass application traffic to NGINX Plus instances, which run behind the BIG‑IP LTMs and load balance traffic for each application.

Other companies are completely replacing their stationary hardware ADC appliances at the network edge with NGINX Plus, providing the same level of application delivery at the edge of the network.

Figure: In the most flexible architecture for modern application delivery, NGINX Plus completely replaces hardware ADCs and handles all traffic entering the network.

Prerequisites

This guide assumes you are familiar with F5 BIG‑IP LTM concepts and CLI configuration commands.
Familiarity with basic NGINX and NGINX Plus concepts and directives is also helpful; links to documentation are provided, but the guide does not explain NGINX Plus functionality in depth.

Mapping F5 BIG‑IP LTM Networking Concepts to NGINX Plus

Network Architecture

When migrating F5 BIG‑IP LTM networking and load‑balancer configuration to NGINX Plus, it can be tempting to try translating F5 concepts and commands directly into NGINX Plus syntax. But the result is often frustration, because in several areas the two products don’t align very closely in how they conceive of and handle network and application traffic. It’s important to understand the differences and keep them in mind as you do your migration.

F5 divides the network into two parts: the management network (often referred to as the management plane or control plane) and the application traffic network (the data plane). In a traditional architecture, the management network is isolated from the traffic network and accessible via a separate internal network, while the application network is attached to a public network (or another application network). This requires separate network configurations for each of the two kinds of traffic.

BIG‑IP LTM appliances use a dual‑proxy architecture, which means that data plane traffic is also split between two different networks: the client‑side network over which client requests come into the BIG‑IP LTM, and the server‑side network over which requests are sent to the application servers. BIG‑IP LTM typically requires two network interface cards (NICs), one for each part of the network.

It is possible with a BIG‑IP LTM appliance, however, to combine the client and server networks on a single NIC, merging the data plane into a single‑stack proxy architecture. This is a very typical architecture in a cloud environment, where traffic comes into the BIG‑IP LTM data plane and exits through the same virtual NIC. Regardless of networking architecture, the same basic principles for load balancing apply, and the configurations discussed below work in either architectural layout.

NGINX Plus can function in a similar architecture, either by binding multiple IP subnets (and/or VLANs) to a single NIC available to the host device, or by installing multiple NICs and dedicating each to a client‑side or server‑side network (or to multiple networks of each kind). This is, in essence, how the BIG‑IP LTM appliance functions as well, typically shipping with multiple NICs which can be trunked or bound into virtual NICs.
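As a minimal sketch of the multi‑NIC case, assuming example addresses (a client‑facing NIC that owns 192.168.10.10 and a server‑facing NIC that owns 10.10.10.1) and an upstream group like the test_pool group defined later in this guide, NGINX Plus listens on the client‑side address and can use the proxy_bind directive to send upstream connections out through the server‑side interface:

server {
    # Accept client traffic on the IP address bound to the client-facing NIC
    listen 192.168.10.10:80;

    location / {
        # Originate upstream connections from the server-facing NIC's address
        proxy_bind 10.10.10.1;
        proxy_pass http://test_pool;
    }
}

If both networks share a single NIC, the proxy_bind directive can simply be omitted and the operating system's routing table determines the egress interface.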

Definitions of Networking Concepts

Basic F5 BIG‑IP LTM networking configuration requires only that you specify the IP addresses of the management and data planes, but managing more complex network environments that include BIG‑IP LTM appliances involves some additional concepts. All of these concepts can be simplified and mapped to NGINX Plus equivalents. Key BIG‑IP LTM networking concepts and their NGINX Plus counterparts include:

  • Self‑IP address – The primary interface that listens to incoming client‑side data plane traffic on a specific VLAN. It is a specific IP address or subnet on a specific NIC associated with that VLAN or a VLAN group.

    In NGINX Plus, self IP addresses most directly map to the primary host interface used by NGINX Plus to manage traffic‑plane application data. Generally speaking, self IP addresses are not a necessary concept in an NGINX Plus deployment, as NGINX Plus utilizes the underlying OS networking for management and data‑traffic control.

  • Management IP address and port – The IP address:port combinations on a BIG‑IP LTM appliance that are used to administer it, via the GUI and/or remote SSH access. The NGINX Plus equivalent is the Linux host IP address, typically the primary host interface. It is possible, but not necessary, to use separate IP addresses and/or NICs for management access to the Linux host where NGINX Plus is running, if you need to separate remote access from the application traffic.

  • Virtual server – The IP address:port combination used by BIG‑IP LTM as the public destination IP address for the load‑balanced applications. The IP‑address portion of the virtual server is associated with the domain name of a frontend application, for instance, and the port with the service (such as port 80 for HTTP applications). This address handles client requests and shifts from the primary device to the secondary device in the case of a failover.

    Virtual servers in NGINX Plus are configured using a server block. The listen directive in the server block specifies the IP address and port for client traffic.

  • Pool and node list – A pool is a collection of backend nodes, each hosting the same application or service, across which incoming connections are load balanced. Pools are assigned to virtual servers so BIG‑IP LTM knows which backend applications to use when a new request comes into a virtual server. In addition, BIG‑IP LTM uses the term node list to refer to an array of distinct services that all use the same traffic protocol and are hosted on the same IP address, but listen on different port numbers (for example, three HTTP services at 192.168.10.10:8100, 192.168.10.10:8200, and 192.168.10.10:8300).

    NGINX Plus flattens the BIG‑IP LTM pool and node list concepts by representing that information in upstream configuration blocks, which also define the load‑balancing and session‑persistence method for the virtual server. NGINX Plus does not need the concept of node lists, because standard upstream block configuration very easily accommodates multiple services on the same IP address.
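As an illustration, here is a hedged sketch of how the node list above (three HTTP services on one IP address) and a fronting virtual server might look in NGINX Plus; the upstream name node_list_pool and the listen address 192.168.10.20 are example values, not part of any original configuration:

# One upstream block covers what BIG-IP LTM models as a node list:
# three services on the same IP address, listening on different ports.
upstream node_list_pool {
    server 192.168.10.10:8100;
    server 192.168.10.10:8200;
    server 192.168.10.10:8300;
}

server {
    listen 192.168.10.20:80;    # example virtual-server address

    location / {
        proxy_pass http://node_list_pool;
    }
}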

In addition to these networking concepts, there are two other important technology categories to consider when migrating from BIG‑IP LTM to NGINX Plus:

  • iRules – iRules is a proprietary, event‑driven, content‑switching and traffic‑manipulation engine (based on TCL) used by BIG‑IP LTM to control all aspects of data‑plane traffic. iRules are attached to virtual servers and are required for any type of content switching, such as choosing a pool based on URI, inserting headers, establishing affinity with JSESSIONIDs, and so on. iRules are event‑driven and are configured to fire for each new connection when certain criteria are met, such as when a new HTTP request is made to a virtual server or when a server sends a response to a client.

    NGINX Plus natively handles content switching and HTTP session manipulation, eliminating the need to explicitly migrate most context‑based iRules and those which deal with HTTP transactions such as header manipulation. Most context‑based iRules can be translated to server and location blocks (a brief sketch follows this list), and more complex iRules that cannot be duplicated with NGINX Plus directives and configuration blocks can be implemented with the NGINX Lua or nginScript modules. For more information on translating iRules to NGINX Plus content rules, see Migrating Layer 7 Logic from F5 iRules and Citrix Policies to NGINX and NGINX Plus on the NGINX blog.

  • High availability – Conceptually, BIG‑IP LTM and NGINX Plus handle high availability (HA) in the same way. Each instance of NGINX Plus can function as an active or passive instance, and when the active instance goes down the passive instance takes over the virtual server addresses (thus becoming the active instance). With NGINX Plus, a separate software package called nginx‑ha‑keepalived handles the virtual server and failover process for a pair of NGINX Plus servers. BIG‑IP LTM uses a built‑in HA mechanism and each active‑passive pair shares a floating “virtual” IP address which maps to the currently active instance.

    Active‑active configurations are also possible, both on‑premises with the nginx‑ha‑keepalived package and on the Google Cloud Platform. For more information on configuring the nginx‑ha‑keepalived package and other load‑balancing architectural models, see the NGINX Plus Admin Guide.
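To illustrate the iRules point above, here is a minimal, hedged sketch of URI‑based content switching and header insertion expressed as NGINX Plus configuration; the upstream names app1_pool and test_pool and the /app1/ path are example values only:

server {
    listen 192.168.10.10:80;

    # Requests under /app1/ are switched to their own upstream group,
    # the kind of decision an iRule would make based on the URI.
    location /app1/ {
        proxy_pass http://app1_pool;
    }

    # All other requests go to the default group, with a header inserted.
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://test_pool;
    }
}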

Converting F5 BIG‑IP LTM Load‑Balancer Configuration to NGINX Plus

F5 BIG‑IP LTM offers three methods for configuration:

  • GUI
  • CLI (the custom on‑box Traffic Management Shell [TMSH] tool)
  • iControl API

Ultimately all changes made via the GUI or API are translated to a TMSH CLI command, so that’s the representation we’re using in this guide. We assume that you are working in the ltm module of TMSH (at the (tmos.ltm) prompt), and so omit the ltm module prefix from all of the TMSH commands.

With NGINX Plus, configuration is stored in a straightforward text file which can be accessed directly or managed using traditional on‑box tools or configuration management and orchestration tools such as Ansible, Chef, and Puppet.

Although IP addresses are used throughout this document, NGINX Plus can also use the Host header, in addition to the listening IP address:port, to select the appropriate server block for processing a request. The server_name directive enables selection based on the Host header and can include multiple host names, wildcards, and regular expressions. Multiple server_name directives and multiple listening IP address:port combinations can be used within one NGINX server block. For more information on using the Host header and the server_name directive instead of IP addresses, see Server Names at nginx.org.
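For instance, here is a hedged sketch of two server blocks that share the same listen port and are distinguished only by Host header; the host names and the upstream names app1_pool and app2_pool are example values:

server {
    listen 80;
    server_name www.example.com *.example.com;   # matched against the Host header

    location / {
        proxy_pass http://app1_pool;
    }
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://app2_pool;
    }
}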

Note: All IP addresses and names of objects (upstream blocks, virtual servers, pools, and so on) are examples only. Substitute the values from your BIG‑IP LTM configuration.

Virtual Servers

As mentioned above, virtual servers are the primary listeners for both BIG‑IP LTM and NGINX Plus, but the configuration syntax for defining them is quite different. Here, a virtual server at 192.168.10.10 listens on port 80 for HTTP traffic, and distributes incoming traffic between the two backend application servers listed in the test_pool upstream group.

BIG‑IP LTM

# create pool test_pool members add { 10.10.10.10:80 10.10.10.20:80 }
# create virtual test_virtual { destination 192.168.10.10:80 pool test_pool source-address-translation { type automap } ip-protocol tcp profiles add { http } }
# save /sys config

NGINX Plus

upstream test_pool {
    server 10.10.10.10:80;
    server 10.10.10.20:80;
}

server {
    listen 192.168.10.10:80;

    location / {
        proxy_pass http://test_pool;
    }
    ...
}

SSL Offload (Termination and Proxy)

Terminating SSL connections is a common use case on a load balancer. F5 BIG‑IP LTM uses a proprietary SSL/TLS implementation whereas NGINX Plus relies on system libraries, so the version of OpenSSL is dictated by the OS. On BIG‑IP LTM, a profile for each SSL key and certificate pair is attached to a virtual server (either as a client profile for encrypting traffic to and from the client, a server profile for encrypting backend traffic, or both). On NGINX Plus, the ssl_certificate and ssl_certificate_key directives are included at the virtual‑server (server) level.

There are two methods for handling SSL traffic on a load balancer instance: termination and proxying. With SSL termination, the load balancer and client communicate in an encrypted HTTPS session, in the same way a secure application like a banking website handles client encryption with SSL certificates. After decrypting the client message (effectively terminating the secure connection), the load balancer forwards the message to the upstream server over a cleartext (unencrypted) HTTP connection. (In the other direction, the load balancer encrypts the server response before sending it to the client.) SSL termination is a good option if the load balancer and upstream servers are on a secured network where there’s no danger of outside agents intercepting and reading the cleartext backend traffic, and where upstream application performance is paramount.

In the SSL proxy architecture, the load balancer still decrypts client‑side traffic as it does in the termination model, but then it re‑encrypts it before forwarding it to upstream servers. This is a good option where the server‑side network is not secure or where the upstream servers can handle the computational workload required for SSL encryption and decryption.

BIG‑IP LTM

  • SSL Termination and Proxy: Creating SSL Virtual Server and Pool Members

    # create pool ssl_test_pool members add { 10.10.10.10:443 10.10.10.20:443 } 

    # create virtual test_ssl_virtual { destination 192.168.10.10:443 pool ssl_test_pool source-address-translation { type automap } ip-protocol tcp profiles add { http } }

    # save /sys config

  • SSL Termination: Creating a Client SSL Profile

    # create profile client-ssl test_ssl_client_profile cert test.crt key test.key

    # modify virtual test_ssl_virtual profiles add { test_ssl_client_profile }

    # save /sys config

  • SSL Proxy: Creating a Server SSL Profile

    # create profile server-ssl test_ssl_server_profile cert test.crt key test.key

    # modify virtual test_ssl_virtual profiles add { test_ssl_server_profile }

    # save /sys config

NGINX Plus

  • SSL Termination

    upstream ssl_test_pool {
        server 10.10.10.10:443;
        server 10.10.10.20:443;
    }

    server {
        listen 192.168.10.10:443 ssl;

        ssl_certificate /etc/nginx/ssl/test.crt;
        ssl_certificate_key /etc/nginx/ssl/test.key;

        location / {
            proxy_pass http://ssl_test_pool;
        }
    }

  • SSL Proxy

    upstream ssl_test_pool {
        server 10.10.10.10:443;
    }

    server {
        listen 192.168.10.10:443 ssl;

        ssl_certificate /etc/nginx/ssl/test.crt;
        ssl_certificate_key /etc/nginx/ssl/test.key;

        location / {
            proxy_pass https://ssl_test_pool;
            proxy_ssl_certificate /etc/nginx/ssl/client.pem;
            proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
            proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
            proxy_ssl_ciphers HIGH:!aNULL:!MD5;
            proxy_ssl_trusted_certificate /etc/nginx/ssl/trusted_ca_cert.crt;
            proxy_ssl_verify on;
            proxy_ssl_verify_depth 2;
        }
    }

Session Persistence

F5 BIG‑IP LTM and NGINX Plus handle session persistence (also referred to as affinity) in a similar way and configure it at the same level: on the upstream server (BIG‑IP LTM pool or NGINX Plus upstream block). Both support multiple forms of persistence. Session persistence is critical for applications that are not stateless and is helpful for continuous delivery use cases.

Cookie‑Based Session Persistence

One method that is simple to configure and handles failover well, if compatible with the application, is NGINX Plus’ sticky cookie. It works just like the cookie insert method in BIG‑IP LTM: the load balancer creates a cookie that represents the server, and the client then includes the cookie in each request, effectively offloading the session tracking from the load balancer itself.

  • BIG‑IP LTM: HTTP Cookie Persistence

    # create persistence cookie test_bigip_cookie cookie-name BIGIP_COOKIE_PERSIST expiration 1:0:0

    # modify virtual test_virtual { persist replace-all-with { test_bigip_cookie } }

    # save /sys config

  • BIG‑IP LTM: HTTPS Cookie Persistence

    # create persistence cookie test_bigip_cookie cookie-name BIGIP_COOKIE_PERSIST expiration 1:0:0

    # modify virtual test_ssl_virtual { persist replace-all-with { test_bigip_cookie } }

    # save /sys config

  • NGINX Plus: HTTP Cookie Persistence

    upstream test_pool {
        server 10.10.10.10:80;
        server 10.10.10.20:80;

        sticky cookie mysession expires=1h;
    }

  • NGINX Plus: HTTPS Cookie Persistence

    upstream ssl_test_pool {
        server 10.10.10.10:443;
        server 10.10.10.20:443;

        sticky cookie mysession expires=1h;
    }

Source IP Address‑Based Session Persistence

Another form of session persistence is based on the source IP address recorded in the request packet (the IP address of the client making the request). For each request the load balancer calculates a hash of the IP address and associates the hash with one of the servers in the upstream group, sending all requests with that hash to that server. (For more details on the NGINX Plus implementation, see Choosing an NGINX Plus Load Balancing Technique on our blog.)

  • BIG‑IP LTM

    # modify virtual test_virtual { persist replace-all-with {source_addr} } 

    # save /sys config

  • NGINX Plus

    upstream test_pool {
        ip_hash;
        server 10.10.10.10:80;
        server 10.10.10.20:80;
    }

Token‑Based Session Persistence

Another method for session persistence takes advantage of a cookie or other token created within the session by the backend server, such as a jsessionid. To manage jsessionid creation and tracking, NGINX Plus creates a table in memory matching the cookie value with a specific backend server.

  • BIG‑IP LTM

    BIG‑IP LTM does not natively support a learned (or universal) persistence profile without creating a more advanced iRule, which is out of scope for this document.

  • NGINX Plus

    upstream test_pool {
        server 10.10.10.10:80;
        server 10.10.10.20:80;

        sticky learn create=$upstream_cookie_jsessionid
                     lookup=$cookie_jsessionid
                     zone=client_sessions:1m;
    }

Keepalive Connections

Typically, a separate connection is created and destroyed for every HTTP request. This can be fine for short‑lived interactions, like requesting small content from a web server, but it is highly inefficient for long‑lived sessions. Constantly creating and destroying connections adds load for both the application server and the client, slowing page loads and hurting the overall perception of the website or application’s performance. HTTP keepalive connections, which instruct the load balancer to keep connections open across requests, are a necessary performance feature that allows web pages to load more quickly.

BIG‑IP LTM

# modify virtual test_virtual profiles add { oneconnect }

# modify virtual test_ssl_virtual profiles add { oneconnect }

# save /sys config

NGINX Plus

upstream test_pool {
    server 10.10.10.10:80;
    server 10.10.10.20:80;
    keepalive 32;
}
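
Note: For keepalive connections to the upstream group to actually be used, the location block that proxies to it also needs the proxy_http_version 1.1 directive and an empty Connection header (proxy_set_header Connection ""), as shown in the complete configuration in the summary at the end of this guide.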

Health Checks

F5 BIG‑IP LTM uses the term monitor to refer to the process of verifying that a server is functioning correctly, while NGINX Plus uses health check. In a BIG‑IP LTM configuration, the monitor is associated directly with a pool and applied to each node in the pool, while NGINX Plus places the health check in a location block.

The interval argument to the create command configures BIG‑IP LTM to check the server every 5 seconds, and corresponds to the default frequency for NGINX Plus. NGINX Plus does not need the BIG‑IP LTM timeout parameter as it implements the timeout function with the interval and fails parameters.

Note: This BIG‑IP LTM configuration is for HTTP. For HTTPS, substitute test_ssl_monitor for test_monitor in both the create and modify commands. The NGINX Plus configuration works for both HTTP and HTTPS.

BIG‑IP LTM

# create monitor http test_monitor defaults-from http send "GET /index.html HTTP/1.0\r\n\r\n" interval 5 timeout 20

# modify pool test_pool monitor test_monitor

# save /sys config

NGINX Plus

upstream test_pool {
    ...
    zone test_pool_zone 64k;
}

server {
    ...
    location / {
        proxy_pass http://test_pool;
        health_check interval=5 fails=2;
    }
}
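
If you want the NGINX Plus health check to request the same page the BIG‑IP LTM monitor does and to define what counts as a healthy response, you can add the uri parameter and, optionally, a match block. A hedged sketch follows; the block name health_ok is an example, not taken from the original configuration:

# Defined in the http context: a response passes the health check
# only if its status code is 200.
match health_ok {
    status 200;
}

server {
    ...
    location / {
        proxy_pass http://test_pool;
        # Request /index.html, as the BIG-IP monitor does, every 5 seconds
        health_check uri=/index.html interval=5 fails=2 match=health_ok;
    }
}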

Summary of Converted Load Balancer Configuration

Here we put together the configuration entities, combining everything required to build a basic F5 BIG‑IP LTM environment, and detail how to migrate the same configuration to NGINX Plus.

BIG‑IP LTM

# create pool test_pool members add { 10.10.10.10:80 10.10.10.20:80 } 

# create virtual test_virtual { destination 192.168.10.10:80 pool test_pool source-address-translation { type automap } ip-protocol tcp profiles add { http } }

# create pool ssl_test_pool members add { 10.10.10.10:443 10.10.10.20:443 }

# create virtual test_ssl_virtual { destination 192.168.10.10:443 pool ssl_test_pool source-address-translation { type automap } ip-protocol tcp profiles add { http } }

# create profile client-ssl test_ssl_client_profile cert test.crt key test.key

# modify virtual test_ssl_virtual profiles add { test_ssl_client_profile }

# create profile server-ssl test_ssl_server_profile cert test.crt key test.key

# modify virtual test_ssl_virtual profiles add { test_ssl_server_profile }

# create persistence cookie test_bigip_cookie cookie-name BIGIP_COOKIE_PERSIST expiration 1:0:0

# modify virtual test_virtual { persist replace-all-with { test_bigip_cookie } }

# modify virtual test_ssl_virtual { persist replace-all-with { test_bigip_cookie } }

# modify virtual test_virtual profiles add { oneconnect }

# modify virtual test_ssl_virtual profiles add { oneconnect }

# create monitor http test_monitor defaults-from http send "GET /index.html HTTP/1.0\r\n\r\n" interval 5 timeout 20

# modify pool test_pool monitor test_monitor

# create monitor https test_ssl_monitor defaults-from https send "GET /index.html HTTP/1.0\r\n\r\n" interval 5 timeout 20

# modify pool ssl_test_pool monitor test_ssl_monitor

# save /sys config

NGINX Plus

The following configuration includes three additional directives which weren’t discussed previously. Adding them is a best practice when proxying traffic:

  • The proxy_set_header Host $host directive ensures that the Host header received from the client is sent with the request to the backend server.
  • The proxy_http_version 1.1 directive sets the HTTP version to 1.1 for the connection to the backend server.
  • The proxy_set_header Connection "" directive clears the Connection header so that NGINX Plus can maintain keepalive connections to the upstream servers.

We are also enabling live activity monitoring in the final server block. Live activity monitoring is implemented in the Status API and is exclusive to NGINX Plus. The wide range of statistics reported by the API is displayed on the built‑in dashboard and can also be exported to any application performance management (APM) or monitoring tool that can consume JSON‑formatted messages. For more detail on logging and statistics see the NGINX Plus Admin Guide.

upstream test_pool {
    zone test_pool_zone 64k;
    server 10.10.10.10:80;
    server 10.10.10.20:80;
    sticky cookie mysession expires=1h;
    keepalive 32;
}

upstream ssl_test_pool {
    zone ssl_test_pool_zone 64k;

    server 10.10.10.10:443;
    server 10.10.10.20:443;

    sticky cookie mysession expires=1h;
    keepalive 32;
}

server {
    listen 192.168.10.10:80 default_server;
    proxy_set_header Host $host;

    location / {
        proxy_pass http://test_pool;
        health_check;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    location ~ /favicon.ico {
        root /usr/share/nginx/images;
    }
}

server {
    listen 192.168.10.10:443 ssl default_server;

    ssl_certificate test.crt;
    ssl_certificate_key test.key;
    proxy_set_header Host $host;

    location / {
        proxy_pass https://ssl_test_pool;
        health_check;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    location ~ /favicon.ico {
        root /usr/share/nginx/images;
    }
}

server {
    listen 8080;
    status_zone status-page;

    root /usr/share/nginx/html;
    location = /status.html { }

    location = / {
        return 301 /status.html;
    }

    location /status {
        status;
        status_format json;
    }

    location ~ /favicon.ico {
        root /usr/share/nginx/images;
    }
}

Revision History

  • Version 1 (February 2017) – Initial version (NGINX Plus R11, NGINX 1.11.5)