NGINX Plus provides a flexible replacement for traditional hardware and software application delivery controllers (ADCs). NGINX Plus is a small software package that can be installed just about anywhere – on bare metal, a virtual machine, or a container, and on premises or in a public, private, or hybrid cloud – while providing the same level of application delivery, high availability, and security offered by legacy ADCs. This guide explains how to migrate an F5 BIG‑IP Local Traffic Manager (LTM) configuration to the NGINX Plus software load balancer, and covers the most commonly used features and configurations to get you started quickly on your migration.
NGINX Plus and BIG‑IP LTM both act as a full reverse proxy and load balancer, so that the client sees the load balancer as the application and the backend servers see the load balancer as the client. This allows for great control and manipulation of the traffic. This guide focuses on basic load balancing. For information on extending the configuration with Layer 7 logic and scripting, see the post about migrating Layer 7 logic on the NGINX blog. It covers features such as content switching and request routing, rewriting, and redirection.
- About NGINX and NGINX Plus
- NGINX Plus Deployment Scenarios
- Mapping F5 BIG‑IP LTM Networking Concepts to NGINX Plus
- Converting F5 BIG‑IP LTM Load Balancer Configuration to NGINX Plus
- Summary of Converted Load Balancer Configuration
About NGINX and NGINX Plus
NGINX is an open source web server, reverse proxy, and load balancer that has grown in popularity in recent years due to its scalability. NGINX was first created to solve the C10K problem (serving 10,000 simultaneous connections on a single web server). NGINX’s features and performance have made it the go‑to solution at high‑performance sites – it now powers the majority of the 100,000 busiest websites in the world.
NGINX Plus is the commercially supported version of the open source NGINX software. NGINX Plus is a complete software load balancer and application delivery platform, extending the power of NGINX with a host of enterprise‑ready capabilities that are instrumental to building web applications at scale:
- Full‑featured HTTP, TCP, and UDP load balancing
- Intelligent session persistence
- High‑performance reverse proxy
- Caching and offload of dynamic and static content
- Adaptive streaming to deliver audio and video to any device
- Application‑aware health checks and high availability
- Advanced activity monitoring available via a dashboard or API
- Management and real‑time configuration changes with DevOps‑friendly tools
NGINX Plus Deployment Scenarios
Architecturally speaking, NGINX Plus differs from traditional ADCs in deployment location and function. Typical hardware‑based ADCs are usually deployed at the edge of the network and act as a front‑door entry point for all application traffic. It’s not uncommon to see a large hardware ADC straddle the public and private DMZs, assuming the large burden of processing 100% of the traffic as it comes into the network. You often see ADCs in this environment performing all functions related to traffic flow for all applications – security, availability, optimization, authentication, etc. – requiring extremely large and powerful hardware appliances. The downside to this model is that the ADC is always stationary at the “front door” of the network.
As they update their infrastructure and approach to application delivery, many companies are paring down the hardware ADC functionality at the edge and moving to a more distributed application model. Because the legacy hardware ADC is already sitting at the edge of the network it can continue to handle all ingress traffic management, directing application traffic to the appropriate NGINX Plus instances for each application type. NGINX Plus then handles traffic for each application type to provide application‑centric load balancing and high availability throughout the network, both on‑ and off‑premises. NGINX Plus is deployed closer to the application and is able to manage all traffic specific to each application type.
Other companies are completely replacing their stationary hardware ADC appliances at the network edge with NGINX Plus, providing the same level of application delivery at the edge of the network.
This guide assumes you are familiar with F5 BIG‑IP LTM concepts and CLI configuration commands.
Familiarity with basic NGINX and NGINX Plus concepts and directives is also helpful; links to documentation are provided, but the guide does not explain NGINX Plus functioning in depth.
Mapping F5 BIG‑IP LTM Networking Concepts to NGINX Plus
When migrating F5 BIG‑IP LTM networking and load‑balancer configuration to NGINX Plus, it can be tempting to try translating F5 concepts and commands directly into NGINX Plus syntax. But the result is often frustration, because in several areas the two products don’t align very closely in how they conceive of and handle network and application traffic. It’s important to understand the differences and keep them in mind as you do your migration.
F5 divides the network into two parts: the management network (often referred to as the management plane or control plane) and the application traffic network (the data plane). In a traditional architecture, the management network is isolated from the traffic network and accessible via a separate internal network, while the application network is attached to a public network (or another application network). This requires separate network configurations for each of the two kinds of traffic.
BIG‑IP LTM appliances are a dual‑proxy environment, which means that data plane traffic is also split between two different networks: the client‑side network over which client requests come into the BIG‑IP LTM, and the server‑side network over which requests are sent to the application servers. BIG‑IP LTM typically requires two network interface cards (NICs) to handle each part of the network.
It is possible with a BIG‑IP LTM appliance, however, to combine the client and server networks on a single NIC, combining the data plane into a single‑stack proxy architecture. This is a very typical architecture in a cloud environment where traffic comes into the BIG‑IP LTM data plane and exits through the same virtual NIC. Regardless of networking architecture, the same basic principles for load balancing apply, and the configurations discussed below work in either architectural layout.
NGINX Plus can function in a similar architecture either by binding multiple IP subnets (and/or VLANs) to a single NIC that is available to the host device, or by installing multiple NICs and using each for unique client and server networks, or multiple client networks and multiple server‑side networks. This is, in essence, how the BIG‑IP LTM appliance functions as well, typically shipping with multiple NICs which can be trunked or bound into virtual NICs.
Definitions of Networking Concepts
Basic F5 BIG‑IP LTM networking configuration requires only that you specify the IP addresses of the management and data planes, but managing more complex network environments that include BIG‑IP LTM appliances involves some additional concepts. All of these concepts map readily onto NGINX Plus instances. Key BIG‑IP LTM networking concepts and their NGINX Plus correlates include:
Self‑IP address – The primary interface that listens to incoming client‑side data plane traffic on a specific VLAN. It is a specific IP address or subnet on a specific NIC associated with that VLAN or a VLAN group.
In NGINX Plus, self‑IP addresses most directly map to the primary host interface used by NGINX Plus to handle data‑plane application traffic. Generally speaking, self‑IP addresses are not a necessary concept in an NGINX Plus deployment, because NGINX Plus utilizes the underlying OS networking for management and data‑traffic control.
Management IP address and port – The IP address:port combinations on a BIG‑IP LTM appliance that are used to administer it, via the GUI and/or remote SSH access. The NGINX Plus equivalent is the Linux host IP address, typically the primary host interface. It is possible, but not necessary, to use separate IP addresses and/or NICs for management access to the Linux host where NGINX Plus is running, if you need to separate remote access from the application traffic.
Virtual server – The IP address:port combination used by BIG‑IP LTM as the public destination IP address for the load‑balanced applications. This is the IP‑address portion of the virtual server that is associated with the domain name of a frontend application (for instance), and the port that’s associated with the service (such as port 80 for HTTP applications). This address handles client requests and shifts from the primary device to the secondary device in the case of a failover.
Pool and node list – A pool is a collection of backend nodes, each hosting the same application or service, across which incoming connections are load balanced. Pools are assigned to virtual servers so BIG‑IP LTM knows which backend applications to use when a new request comes into a virtual server. In addition, BIG‑IP LTM uses the term node list to refer to an array of distinct services that all use the same traffic protocol and are hosted on the same IP address, but listen on different port numbers (for example, three HTTP services at 192.168.10.10:8100, 192.168.10.10:8200, and 192.168.10.10:8300).
NGINX Plus flattens the BIG‑IP LTM pool and node list concepts by representing that information in `upstream` configuration blocks, which also define the load‑balancing and session‑persistence method for the virtual server. NGINX Plus does not need the concept of node lists, because a standard `upstream` block very easily accommodates multiple services on the same IP address.
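As a sketch, the node list example above maps to a single `upstream` block (the group name is illustrative):

```nginx
# One upstream group covers what BIG-IP LTM models as a node list:
# three services on the same IP address, listening on different ports
upstream node_list_services {
    server 192.168.10.10:8100;
    server 192.168.10.10:8200;
    server 192.168.10.10:8300;
}
```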
In addition to these networking concepts, there are two other important technology categories to consider when migrating from BIG‑IP LTM to NGINX Plus:
iRules – iRules is a proprietary, event‑driven, content‑switching and traffic‑manipulation engine (based on TCL) used by BIG‑IP LTM to control all aspects of data‑plane traffic. iRules are attached to virtual servers and are required for any type of content switching, such as choosing a pool based on URI, inserting headers, establishing affinity with JSESSIONIDs, and so on. iRules are event‑driven and are configured to fire for each new connection when certain criteria are met, such as when a new HTTP request is made to a virtual server or when a server sends a response to a client.
NGINX Plus natively handles content switching and HTTP session manipulation, eliminating the need to explicitly migrate most context‑based iRules and those which deal with HTTP transactions, such as header manipulation. Most context‑based iRules can be translated to `location` blocks, and more complex iRules that cannot be duplicated with NGINX Plus directives and configuration blocks can be implemented with the NGINX Lua or nginScript modules. For more information on translating iRules to NGINX Plus content rules, see Migrating Layer 7 Logic from F5 iRules and Citrix Policies to NGINX and NGINX Plus on the NGINX blog.
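For example, an iRule that switches pools based on the request URI can often be expressed directly as `location` blocks. The upstream group names below are hypothetical:

```nginx
server {
    listen 80;

    # Route API requests to one upstream group...
    location /api/ {
        proxy_pass http://api_servers;
    }

    # ...and all other requests to the default group
    location / {
        proxy_pass http://web_servers;
    }
}
```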
High availability – Conceptually, BIG‑IP LTM and NGINX Plus handle high availability (HA) in the same way. Each instance of NGINX Plus can function as an active or passive instance, and when the active instance goes down the passive instance takes over the virtual server addresses (thus becoming the active instance). With NGINX Plus, a separate software package called nginx-ha-keepalived handles the virtual server and failover process for a pair of NGINX Plus servers. BIG‑IP LTM uses a built‑in HA mechanism and each active‑passive pair shares a floating “virtual” IP address which maps to the currently active instance.
Active‑active configurations are also possible, both on‑premises with the nginx-ha-keepalived package and on the Google Cloud Platform. For more information on configuring the nginx-ha-keepalived package and other load‑balancing architectural models, see the NGINX Plus Admin Guide.
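As a minimal sketch, the keepalived configuration for the active node of an NGINX Plus HA pair might look like the following; the interface name, priority, and virtual IP address are assumptions to adapt to your environment:

```
# /etc/keepalived/keepalived.conf on the active (primary) node
vrrp_instance VI_1 {
    interface eth0              # NIC carrying data-plane traffic
    state MASTER                # the passive node uses BACKUP
    priority 101                # the passive node uses a lower value
    virtual_router_id 51
    virtual_ipaddress {
        192.168.10.10           # floating virtual server address
    }
}
```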
Converting F5 BIG‑IP LTM Load‑Balancer Configuration to NGINX Plus
F5 BIG‑IP LTM offers three methods for configuration:
- GUI
- CLI (the custom on‑box Traffic Management Shell [TMSH] tool)
- iControl API
Ultimately all changes made via the GUI or API are translated to a TMSH CLI command, so that’s the representation we’re using in this guide. We assume that you are configuring the device from the `(tmos.ltm)` location, and so omit the common command prefix `ltm` from all of the TMSH commands.
With NGINX Plus, configuration is stored in a straightforward text file which can be accessed directly or managed using traditional on‑box tools or configuration management and orchestration tools such as Ansible, Chef, and Puppet.
Although IP addresses are used throughout the document, NGINX Plus can use the `Host` header, in addition to the listening IP address:port, to select the appropriate `server` block to process a request. The `server_name` directive enables selection based on the `Host` header, and can include multiple host names, wildcards, and regular expressions. Multiple `server_name` directives and multiple listening IP address:port combinations can be used within one NGINX `server` block. For more information on using the `Host` header and the `server_name` directive instead of IP addresses, see Server Names at nginx.org.
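As an illustrative sketch, a `server` block that is selected both by listening address and by `Host` header might look like this; the domain names are examples, and `test_pool` refers to an `upstream` group defined elsewhere:

```nginx
server {
    # Select this block by listening IP:port and by Host header
    listen 192.168.10.10:80;
    server_name www.example.com *.example.net ~^app\d+\.example\.org$;

    location / {
        proxy_pass http://test_pool;
    }
}
```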
Note: All IP addresses and names of objects (`upstream` blocks, virtual servers, pools, and so on) are examples only. Substitute the values from your BIG‑IP LTM configuration.
As mentioned above, virtual servers are the primary listeners for both BIG‑IP LTM and NGINX Plus, but the configuration syntax for defining them is quite different. Here, a virtual server at 192.168.10.10 listens on port 80 for HTTP traffic, and distributes incoming traffic between the two backend application servers listed in the test_pool upstream group.
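A minimal NGINX Plus configuration for that setup might look like the following sketch; the backend server addresses (192.168.10.21 and 192.168.10.22) are example values:

```nginx
# Upstream group corresponding to the BIG-IP LTM pool "test_pool"
upstream test_pool {
    server 192.168.10.21:80;
    server 192.168.10.22:80;
}

server {
    # Equivalent of the BIG-IP LTM virtual server
    listen 192.168.10.10:80;

    location / {
        proxy_pass http://test_pool;
        proxy_set_header Host $host;
    }
}
```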