It’s been quite a while since our last post, and we’ve got major news to share about our latest NGINX Unit releases, namely 1.13.0 and 1.14.0. They introduce new features that add to our toolset and extend the range of scenarios where you can take advantage of NGINX Unit – reverse proxying in NGINX Unit 1.13.0 and address‑based routing in NGINX Unit 1.14.0. Let’s look at the two new features in detail.

HTTP Reverse Proxying

In a sense, the very name NGINX has come to mean reverse proxying. While this may not be entirely warranted (NGINX Plus is much more than a one‑trick pony), what matters now is that the feature has come to NGINX Unit as well, becoming a family trait of sorts.

NGINX Unit enables reverse proxying within its general routing framework: the new proxy option, which configures proxying of requests to a specified address, joins the pass and share options already familiar to you from our posts about internal routing and static file serving. As of this writing, the proxy address configuration supports IPv4, IPv6, and Unix socket addresses.

Here’s a sample routes object with the proxy option enabled:
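A minimal sketch of what this can look like (the URI pattern, backend address, and application name are placeholders for your own values):

{
    "routes": [
        {
            "match": {
                "uri": "/legacy/*"
            },

            "action": {
                "proxy": "http://127.0.0.1:8080"
            }
        },

        {
            "action": {
                "pass": "applications/myapp"
            }
        }
    ]
}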

If an incoming request satisfies the match condition, NGINX Unit establishes a proxy connection to the address specified by the proxy option and relays the request to it, returning the response (or an error status) to the client. Otherwise, the request is matched against subsequent match conditions in the route.

At first glance, the configuration may seem rather bland. However, this is only the beginning of NGINX Unit’s evolution as a reverse proxy, and even in this basic form it enables you to offload requests that don’t require NGINX Unit’s dynamic capabilities to other servers, and to architect custom request‑processing scenarios by chaining NGINX Unit listeners and instances.

Earlier versions of NGINX Unit, despite their many capabilities, could serve only as endpoints for incoming client requests. Now NGINX Unit can serve as an intermediate node within your web infrastructure as well, accepting all kinds of traffic, maintaining a dynamic configuration, serving high‑demand requests on its own, and acting as a reverse proxy for your existing backend solutions.

Keep in mind, however, that the availability of proxied addresses isn’t automatically validated; if you misconfigure an address or accidentally create a redirect loop, NGINX Unit reports an error only when a request is unsuccessful.

For further details, see our documentation.

Address-Based Routing

Address‑based routing, added in NGINX Unit 1.14.0, extends the routing mechanism, enabling address matching against two newly introduced match options: source and destination. The former matches the connected client’s IP address, whereas the latter matches the target address of the request.

With this release, NGINX Unit’s routing engine can now match address values against individual IPv4‑ or IPv6‑based patterns and arrays of patterns. In turn, valid patterns may be wildcards with port numbers, exact addresses, or address ranges in CIDR notation:
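For illustration, a single source condition can combine all three pattern types in one array (the addresses and the application name here are arbitrary examples):

{
    "match": {
        "source": [
            "192.168.1.1",
            "10.0.0.0/8",
            "*:8080"
        ]
    },

    "action": {
        "pass": "applications/myapp"
    }
}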

This type of matching can be freely combined with other matching types. As with the match conditions introduced in previous releases, you can negate address‑based patterns by prefixing them with the exclamation mark (!). In the following example, the destination option matches all target addresses except 127.0.0.1:
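A sketch of such a step (the target application name is a placeholder):

{
    "match": {
        "destination": "!127.0.0.1"
    },

    "action": {
        "pass": "applications/myapp"
    }
}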

For further details, see our documentation.

Use Case: IP Address Filtering and Access Management

Finally, let’s explore the pragmatic synergy of both newly introduced capabilities. Consider a scenario where we have two largely identical servers, 192.168.1.100 and 192.168.1.101, each running a single web app instance on port 8080. We want them to serve internal and external users alike, enforcing several limitations in the process:

  • Users in some regions of the globe are denied access.
  • Of all local users, only the admin has access to the privileged section of the app.
  • A trusted partner has access to the same section on par with the admin.
  • Static files are served separately to improve the app servers’ performance.
  • Administrative operations are limited to a single server to improve security.

In a rather unexpected turn of events, here we employ NGINX Unit as a reverse proxy only, to showcase its newly acquired capabilities. The next three snippets sketch the complete configuration for this use case.

Let’s start with the entry points. In our NGINX Unit instance, we set up two listeners for external and internal traffic, configuring two distinct certificate bundles for added security and pointing each listener to a different route:
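The sketch below assumes the external listener accepts traffic on port 443 of any interface and the internal listener sits on a LAN address; the certificate bundle names are placeholders for bundles you've already uploaded:

{
    "listeners": {
        "*:443": {
            "pass": "routes/external",
            "tls": {
                "certificate": "external-bundle"
            }
        },

        "192.168.1.10:443": {
            "pass": "routes/internal",
            "tls": {
                "certificate": "internal-bundle"
            }
        }
    }
}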

We define the two routes (internal and external) to introduce domain‑specific limitations for incoming traffic. Both routes have a match condition that enforces administrator‑only access to the /admin URI: the internal route allows access only to local users who possess a specific cookie, while the external route allows access only from a particular IP address belonging to a trusted partner. The other match condition in each route allows access to all non‑admin URIs by local users and by users on any external IP address outside the restricted partner network, respectively. Both routes channel all valid traffic to a route called common.
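A sketch of the two routes (the admin cookie name and value pattern, as well as the partner's 203.0.113.10 address and 203.0.113.0/24 network, are illustrative placeholders):

{
    "routes": {
        "internal": [
            {
                "match": {
                    "uri": "/admin/*",
                    "cookies": {
                        "session": "admin_*"
                    }
                },

                "action": {
                    "pass": "routes/common"
                }
            },

            {
                "match": {
                    "uri": "!/admin/*"
                },

                "action": {
                    "pass": "routes/common"
                }
            }
        ],

        "external": [
            {
                "match": {
                    "uri": "/admin/*",
                    "source": "203.0.113.10"
                },

                "action": {
                    "pass": "routes/common"
                }
            },

            {
                "match": {
                    "uri": "!/admin/*",
                    "source": "!203.0.113.0/24"
                },

                "action": {
                    "pass": "routes/common"
                }
            }
        ]
    }
}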

Here’s the common route with three match conditions. They respectively offload static file handling from the app servers, direct administrative operations to a single server address (192.168.1.100) to improve security and simplify troubleshooting, and facilitate session persistence by assigning all requests arriving via the listener for the internal route to the server that also handles administrative requests. The final action object unconditionally relays all remaining requests to the second of the two servers (192.168.1.101). Separating internal and non-administrative external traffic in this way simplifies access monitoring and rights management.
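A sketch of that route, assuming the internal listener address from the previous snippet (192.168.1.10:443) and placeholder file extensions and static directory:

{
    "routes": {
        "common": [
            {
                "match": {
                    "uri": [
                        "*.css",
                        "*.js",
                        "*.png",
                        "*.jpg"
                    ]
                },

                "action": {
                    "share": "/www/static/"
                }
            },

            {
                "match": {
                    "uri": "/admin/*"
                },

                "action": {
                    "proxy": "http://192.168.1.100:8080"
                }
            },

            {
                "match": {
                    "destination": "192.168.1.10:443"
                },

                "action": {
                    "proxy": "http://192.168.1.100:8080"
                }
            },

            {
                "action": {
                    "proxy": "http://192.168.1.101:8080"
                }
            }
        ]
    }
}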

What’s Next

The latest release of NGINX Unit arrived in late December, but our sights are already trained on a few important improvements that will contribute to the steady evolution of the project. These include round‑robin load balancing, rootfs support to advance our initial app isolation a tad further, advanced logic for handling static assets, memory performance improvements, and extended networking capabilities. Stay tuned!

We’d also like to hear which other capabilities you want to see in NGINX Unit. Please visit our GitHub repository to join the discussion and development.

NGINX Plus subscribers get support for NGINX Unit at no additional charge. Start a free 30‑day trial of NGINX Plus today.

For a list of all changes in releases 1.13.0 and 1.14.0, see the NGINX Unit changelog.


About The Author

Artem Konev

Senior Technical Writer
