Web Server Load Balancing with NGINX Plus

NGINX Plus and Microsoft Azure Load Balancers

[Editor – This post has been updated to refer to the NGINX Plus API, which replaces and deprecates the separate dynamic configuration module mentioned in the original version of the post.]

Customers using Microsoft Azure have three options for load balancing: NGINX Plus, the Azure load balancing services, or NGINX Plus in conjunction with the Azure load balancing services. This post aims to give you enough information to make a decision and also shows you how using NGINX Plus with Azure Load Balancer can give you a highly available HTTP load balancer with rich Layer 7 functionality.


Microsoft Azure gives its users two choices of a load balancer: Azure Load Balancer for basic TCP/UDP load balancing (at Layer 4, the transport layer) and Azure Application Gateway for HTTP/HTTPS load balancing (at Layer 7, the application layer). While these solutions work for simple use cases, they do not provide many features that come standard with NGINX Plus.

Here is a general comparison between NGINX Plus and the Azure load‑balancing offerings:

| Feature | NGINX Plus | Azure Load Balancer | Azure Application Gateway | NGINX Plus & Azure Load Balancer |
|---|---|---|---|---|
| Load balancing methods | Advanced | Simple | Simple | Advanced |
| SSL/TLS termination | ✅ | | ✅ | ✅ |
| URL request mapping | ✅ | | ✅ | ✅ |
| URL rewriting and redirecting | ✅ | | | ✅ |
| HTTP health checks | Advanced | Simple | Simple | Advanced |
| TCP/UDP health checks | Advanced | Simple | | Advanced |
| Session persistence | Advanced | Simple | Simple | Advanced |
| Active-active NGINX Plus cluster | | | | ✅ |

Now let’s explore some of the differences between NGINX Plus and the Azure load balancing services, their unique features, and how NGINX Plus and Azure load balancers can work together.

Comparing NGINX Plus and Azure Load Balancing Services

Load Balancing Methods

NGINX Plus offers a choice of several load‑balancing methods. In addition to the default Round Robin method there are:

  • Least Connections – A request is sent to the server with the lowest number of active connections.
  • Least Time – A request is sent to the server with the lowest average latency and the lowest number of active connections.
  • IP Hash – A request is sent to the server determined by the source IP address of the request.
  • Generic Hash – A request is sent to the server determined from a user‑defined key, which can contain any combination of text and NGINX variables, for example the variables corresponding to the client’s source IP address and port, or the URI.

All of the methods can be extended by adding different weight values to each backend server.
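As a minimal sketch, here is how a load‑balancing method and server weights might appear in an NGINX Plus configuration (the upstream name and server addresses are hypothetical):

```nginx
upstream backend {
    least_time header;                 # Least Time method: lowest latency to first response byte
    server app1.example.com weight=3;  # receives three times the traffic of app2
    server app2.example.com;
}
```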

Azure Load Balancer offers one load balancing method, Hash, which by default uses a key based on the Source IP Address, Source Port, Destination IP Address, Destination Port, and Protocol header fields to choose a backend server.

Azure Application Gateway provides only a round‑robin method.

Session Persistence

Session persistence, also known as sticky sessions or session affinity, is needed when an application requires that all requests from a specific client continue to be sent to the same backend server because client state is not shared across backend servers.

NGINX Plus supports three advanced session‑persistence methods:

  • Sticky Cookie – NGINX Plus adds a session cookie to the first response from the upstream group for a given client. This cookie identifies the backend server that was used to process the request. The client includes this cookie in subsequent requests and NGINX Plus uses it to direct the client request to the same backend server.
  • Sticky Learn – NGINX Plus monitors requests and responses to locate session identifiers (usually cookies) and uses them to determine the server for subsequent requests in a session.
  • Sticky Route – A mapping between route values and backend servers can be configured so that NGINX Plus monitors requests for a route value and chooses the matching backend server.
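A minimal sketch of the Sticky Cookie method (the cookie name, expiration, domain, and path here are illustrative, and all of them are configurable):

```nginx
upstream backend {
    server app1.example.com;
    server app2.example.com;

    # NGINX Plus adds the "srv_id" cookie to the first response;
    # subsequent requests carrying the cookie go to the same server.
    sticky cookie srv_id expires=1h domain=.example.com path=/;
}
```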

NGINX Plus also offers two basic session‑persistence methods, implemented as two of the load‑balancing methods described above:

  • IP Hash – The backend server is determined by the IP address of the request.
  • Hash – The backend server is determined from a user-defined key, for example Source IP Address and Source Port, or the URI.

Azure Load Balancer supports the equivalent of the NGINX Plus Hash method, although the key is limited to certain combinations of the Source IP Address, Source Port, Destination IP Address, Destination Port, and Protocol header fields.

Azure Application Gateway supports the equivalent of the NGINX Plus Sticky Cookie method, with the limitation that you cannot configure the cookie’s name, expiration time, domain, path, or its HttpOnly and Secure attributes.

Note: When you use Azure Load Balancer or the NGINX Plus IP Hash method, or the NGINX Plus Hash method with the Source IP Address included in the key, session persistence works correctly only if the client’s IP address remains the same throughout the session. This is not always the case, as when a mobile client switches from a WiFi network to a cellular one, for example. To make sure requests continue hitting the same backend server, it is better to use one of the advanced session‑persistence methods listed above.

Health Checks

Azure Load Balancer and Azure Application Gateway support basic application health checks. You can specify the URL that the load balancer requests, and it considers the backend server healthy if it receives the expected HTTP 200 return code. You can also specify the health check frequency and the timeout period before the server is considered unhealthy.

NGINX Plus extends this functionality with advanced health checks. In addition to specifying the URL to use, with NGINX Plus you can insert headers into the request and look for different response codes, and examine both the headers and body of the response.
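A hedged sketch of an advanced health check (the URI, interval, and match conditions are examples only):

```nginx
# Probe /health every 5 seconds; a server is healthy only if it returns
# a 2xx status with a JSON content type and "ok" somewhere in the body.
match server_ok {
    status 200-299;
    header Content-Type ~ "application/json";
    body ~ "ok";
}

server {
    location / {
        proxy_pass http://backend;
        health_check uri=/health interval=5 match=server_ok;
    }
}
```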

A useful related feature in NGINX Plus is slow start. NGINX Plus slowly ramps up the load to a new or recently recovered server so that it doesn’t become overwhelmed by connections. This is useful when your backend servers require some warm‑up time and would fail if they were given their full share of traffic as soon as they are marked healthy.
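Slow start is a single parameter on the server directive (the 30‑second ramp‑up below is an arbitrary example):

```nginx
upstream backend {
    server app1.example.com slow_start=30s;  # ramp traffic up gradually over 30 seconds
    server app2.example.com;
}
```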

NGINX Plus also supports health checks to TCP and UDP servers, which allow you to specify a string to send and a string to look for in the response.
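A hypothetical TCP health check in the Stream module, assuming a backend protocol that answers PING with PONG:

```nginx
stream {
    # Send "PING" and consider the server healthy only if the reply contains "PONG".
    match tcp_ok {
        send   "PING\r\n";
        expect ~ "PONG";
    }

    upstream tcp_backend {
        server app1.example.com:12345;
    }

    server {
        listen 12345;
        proxy_pass tcp_backend;
        health_check match=tcp_ok;
    }
}
```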

Azure Load Balancer supports TCP health checks, but does not offer this level of monitoring.

SSL Termination

NGINX Plus supports SSL termination, as does Azure Application Gateway. Azure Load Balancer does not.
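A minimal SSL termination sketch; the certificate paths are placeholders:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://backend;  # TLS is terminated here; traffic to backends is plain HTTP
    }
}
```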

Additional Features in NGINX Plus

NGINX Plus provides several additional features that you will not find in the Azure Load Balancer or Application Gateway.

URL Rewriting and Redirecting

With NGINX Plus you can rewrite the URL of a request before passing it to a backend server. This allows the location of files, or request paths, to be altered without modifications to the URL advertised to clients. You can also redirect requests. For example, you can redirect all HTTP requests to an HTTPS server.
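Both techniques can be sketched as follows (the paths are hypothetical):

```nginx
server {
    listen 80;
    # Redirect all HTTP requests to HTTPS.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    # Rewrite /old-app/... to /new-app/... before proxying;
    # clients never see the rewritten path.
    location /old-app/ {
        rewrite ^/old-app/(.*)$ /new-app/$1 break;
        proxy_pass http://backend;
    }
}
```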

Connection and Rate Limits

You can configure multiple limits to control the traffic to and from your NGINX Plus instance. These include limiting inbound connections, the connections to backend nodes, the rate of inbound requests, and the rate of data transmission from NGINX Plus to clients.
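A sketch combining several of these limits; the zone sizes and rates are examples only:

```nginx
# Track clients by IP address in shared memory zones.
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_req_zone  $binary_remote_addr zone=perreq:10m rate=10r/s;

server {
    location / {
        limit_conn perip 10;              # at most 10 concurrent connections per client IP
        limit_req  zone=perreq burst=20;  # at most 10 requests/s, with a burst of 20
        limit_rate 500k;                  # throttle each response to 500 KB/s
        proxy_pass http://backend;
    }
}
```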

HTTP/2 Support

NGINX Plus supports HTTP/2.
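Enabling it is a one‑word change to the listen directive (certificate paths are placeholders):

```nginx
server {
    listen 443 ssl http2;   # "http2" enables HTTP/2 for TLS connections
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```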

WebSocket Support

NGINX Plus supports WebSocket, including the ability to examine the body and the headers of a client request, advanced session persistence options, and other Layer 7 features.
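Proxying WebSocket traffic requires forwarding the protocol‑upgrade handshake, which can be sketched as (the /ws/ path is hypothetical):

```nginx
location /ws/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;                     # WebSocket requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;     # pass the client's Upgrade header through
    proxy_set_header Connection "upgrade";
}
```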

While Azure Application Gateway does not support HTTP/2 or WebSocket at the application layer, Azure Load Balancer passes both protocols through, because it operates at Layer 4 (the transport layer) on TCP and UDP traffic.

NGINX Plus with Azure Load Balancing Services

When used together with Azure Load Balancer and Azure Traffic Manager, NGINX Plus becomes a highly available load balancer solution with rich Layer 7 functionality.

Active-Active High Availability

By using Azure Load Balancer to load balance across NGINX Plus instances in an Availability Set, you create a highly available load balancer within a region.

Autoscaling NGINX Plus

You can set up autoscaling of NGINX Plus instances based on average CPU usage by creating Availability Sets in the Azure Cloud Service that hosts your NGINX Plus instances. Note that you are responsible for synchronizing the NGINX Plus configuration files across instances.

Autoscaling Backend Instances

You can also set up autoscaling of your backend instances based on average CPU usage. This is possible by creating Availability Sets in the Azure Cloud Service that hosts your backend instances. You need to take care of adding or removing backend instances from the NGINX Plus configuration, which is possible with the NGINX Plus API.
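As a sketch of how such automation might work (the upstream name, addresses, port, and API version number are hypothetical; the version in the URL depends on your NGINX Plus release):

```nginx
upstream backend {
    zone backend 64k;   # shared memory zone, required for API reconfiguration
    server 10.0.0.5:80;
}

# Expose the read-write NGINX Plus API on a restricted port.
server {
    listen 8080;
    allow 127.0.0.1;    # limit access to local automation scripts
    deny  all;

    location /api {
        api write=on;
    }
}

# An autoscaling hook could then add a backend server without a reload, e.g.:
#   curl -X POST -d '{"server":"10.0.0.7:80"}' \
#        http://127.0.0.1:8080/api/6/http/upstreams/backend/servers
```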

To automate updates to the NGINX Plus configuration (either in combination with Availability Sets or when using NGINX Plus on its own), you can integrate a service discovery system with NGINX Plus, either via the NGINX Plus API or via DNS, if the system has a DNS interface. Check out our blog posts on using NGINX Plus with popular service discovery systems.

Integration with Azure Traffic Manager

For a globally distributed environment you can use Azure Traffic Manager to distribute traffic from clients across many regions.

Additional Features in Azure Load Balancing Services

Azure Load Balancer and Application Gateway are managed by Azure Cloud and both provide a highly available load‑balancing solution.

A feature of Azure Load Balancer that is not available in NGINX Plus is source NAT, in which traffic outbound from backend instances has the same source IP address as the load balancer.

Azure Load Balancer provides automatic reconfiguration when using Azure Cloud’s autoscaling feature.


If your load balancing requirements are simple, the Azure load balancing offerings can provide a good solution. When the requirements get more complex, NGINX Plus is a good choice. You can use NGINX Plus either on its own or in conjunction with Azure Load Balancer for high availability of your NGINX Plus instances.

Application Delivery & Load Balancing in Microsoft Azure

This practical report describes Microsoft Azure’s load‑balancing options and explains how NGINX can contribute to a comprehensive solution.

