Announcing NGINX Plus R13

We’re pleased to announce that NGINX Plus Release 13 (R13) is now available as a free upgrade to all NGINX Plus subscribers. NGINX Plus is a combined web server, load balancer, and content cache built on top of the open source NGINX software. NGINX Plus R13 includes new features focused on dynamic deployments, enhanced debugging capabilities, and improved security and performance.

NGINX Plus R13 introduces support for:

  • A new NGINX Plus API – Perform on‑the‑fly reconfiguration and obtain extended status metrics at a single consolidated endpoint; the API also adds support for key‑value stores.
  • Request mirroring – Send a copy of all incoming traffic to a dedicated server, where you can monitor, inspect, and log application traffic without affecting the performance of production servers.
  • nginScript enhancements – Extend NGINX Plus with programmatic configuration using nginScript, our custom implementation of JavaScript. The new interactive shell provides a console that lists the built‑in JavaScript objects, which you can explore to discover the methods and properties available on each.
  • Build tool for dynamic modules – Use our new build tool to create installable packages for the many third‑party modules available for NGINX and NGINX Plus.

Further enhancements include improvements to the sticky learn method for session persistence, HTTP trailers support, and a new third‑party dynamic module for HTTP substitutions.

Changes in Behavior

  • Deprecated modules – The previous APIs for on‑the‑fly reconfiguration and extended status (the Upstream-Conf and Status modules) are deprecated and replaced by the unified NGINX Plus API. The deprecated APIs will continue to be shipped with NGINX Plus for a minimum of 6 months, alongside the new NGINX Plus API. The deprecated APIs will be removed in a future release of NGINX Plus.
  • Removed directive – The sticky_cookie_insert directive has been removed in NGINX Plus R13, having been deprecated in NGINX Plus R2.
  • Third‑party dynamic modules – Dynamic modules installed from the NGINX repository are automatically upgraded to R13. Any third‑party modules – that is, modules not included in the official NGINX repo – must be recompiled against open source NGINX version 1.13.4 to continue working with NGINX Plus R13. For more information, see the NGINX Plus Admin Guide.
  • Directive in ModSecurity module no longer supported – The SecRequestBodyInMemoryLimit directive for ModSecurity is no longer supported. Customers may safely remove this directive, because the ModSecurity module obeys the request‑body handling defined by the NGINX configuration.
  • Removed support for end‑of‑life OS versions – NGINX Plus is no longer supported on CentOS 5.10+, Red Hat Enterprise Linux 5.10+, Oracle Linux 5.10+, Ubuntu 12.04 LTS, or Ubuntu 16.10.

NGINX Plus R13 Features in Detail

NGINX Plus API

NGINX Plus R13 includes a new REST API unified under a single endpoint. Previous versions of NGINX Plus included separate Upstream-Conf and Extended Status APIs. The new API combines the functionality of both, and also supports the new Key‑Value Store module in a variety of use cases for on‑the‑fly reconfiguration (discussed in the Key‑Value Store section below).

To enable the NGINX Plus API, include the new api directive in a location block:

server {
    listen 80;

    location /api {
        api write=on;
    }
}

By default, the NGINX Plus API provides read‑only access to data. Add the write=on parameter to the api directive to enable read/write access so that changes can be made to upstream servers and the new Key‑Value Store module. We strongly recommend restricting access to the API to authorized users only, especially when read/write mode is enabled.
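For example, one straightforward way to restrict access is with the standard allow and deny directives (the policy below is illustrative; adjust it for your environment):

```nginx
server {
    listen 80;

    location /api {
        api write=on;
        # Permit API access only from the local host (illustrative policy)
        allow 127.0.0.1;
        deny all;
    }
}
```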

To see all the types of information available from the API endpoint, run this command:

$ curl http://localhost:80/api/1/
["nginx","processes","connections","ssl","slabs","http","stream"]

To display details about a specific type of information (or to modify upstream configuration), append the appropriate string to the request URI:

  • connections – Display metrics for total connections
  • http – Display metrics for HTTP traffic and modify HTTP upstream configuration

    There are also two “subtypes” under http:

    • http/server_zones – Display information about HTTP virtual servers
    • http/upstreams – Display information about NGINX upstream groups
  • nginx – Display general information about NGINX
  • processes – Display information about NGINX worker processes
  • slabs – Display information on shared memory allocated by NGINX
  • ssl – Display metrics for SSL client statistics in real time
  • stream – Display metrics for TCP/UDP traffic and modify TCP/UDP upstream configuration

Extended Status Monitoring

NGINX Plus reports more than 40 exclusive metrics on top of what’s available in open source NGINX. You can now access all of these metrics through the NGINX Plus API, querying just the ones that matter to you.

As an example, append connections to the URI to output a snapshot of connection status, which includes the number of accepted, active, dropped, and idle client connections.

$ curl http://localhost:80/api/1/connections
{"accepted":3,"dropped":0,"active":1,"idle":0}

Another example: append ssl to the URI to output a snapshot of SSL client statistics in real time.

$ curl http://localhost:80/api/1/ssl
{"handshakes":0,"handshakes_failed":0,"session_reuses":0}

On-the-Fly Reconfiguration of Upstream Server Groups

In NGINX Plus R12 and earlier, you could use the upstream_conf directive to enable dynamic reconfiguration of existing upstream server groups on the fly without reloading NGINX Plus. This functionality is now incorporated into the NGINX Plus API.

This NGINX Plus configuration snippet defines two servers in the upstream group called backend, and enables the NGINX Plus API at /api:

upstream backend {
    zone backends 64k;
    server 10.10.10.2;
    server 10.10.10.4;
}

server {
    listen 80;
    server_name www.example.org;

    location /api {
        api write=on;
    }
}

To add a server to the backend group, include the -d option in a curl request to /api/1/http/upstreams/backend/servers, with JSON text that defines the new server’s IP address (here, 10.10.10.6). The -i option includes the HTTP response headers in the output. (The -d option implies the POST method, so there is no need to specify -X POST explicitly.)

$ curl -id '{"server":"10.10.10.6"}' http://localhost/api/1/http/upstreams/backend/servers
HTTP/1.1 201 Created
...
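The API can also modify or remove existing servers. The commands below are a sketch based on the API’s resource layout, in which each server in the group is addressed by a numeric ID (run a GET on the servers collection to discover the IDs):

```shell
# List the servers in the 'backend' group; each entry includes an "id" field
curl http://localhost/api/1/http/upstreams/backend/servers

# Mark the server with ID 2 as down, draining new traffic away from it
curl -X PATCH -d '{"down":true}' http://localhost/api/1/http/upstreams/backend/servers/2

# Remove the server with ID 2 from the group entirely
curl -X DELETE http://localhost/api/1/http/upstreams/backend/servers/2
```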

For details about all options for configuring upstream groups, see the reference documentation for the HTTP API module.

Key-Value Store

NGINX Plus R13 introduces a new Key-Value Store module. You can use the NGINX Plus API to create, modify, and remove key‑value pairs in one or more “keyval” shared memory zones on the fly. The value of each key‑value pair can then be evaluated as a variable for use by other NGINX Plus features.

To add, modify, read, and delete entries in the key‑value store, use the POST, PATCH, GET, and DELETE HTTP methods respectively. The key‑value store provides a wealth of on‑the‑fly configuration solutions to enable real‑time integration with external systems.

Some sample use cases include:

  • Dynamic IP blacklisting
  • Routing of URIs to backend servers
  • Managing lists of permitted URIs per user
  • Managing redirect rules (see the following example)

The following configuration snippet uses the Key-Value Store module to manage vanity URLs for a website.

keyval_zone zone=redirects:1M state=state/redirects.json;  # Save key-value pairs to file
keyval $uri $target zone=redirects;                        # $uri is the key, $target is the value

server {
    listen 80;

    location /api {
        api write=on;       # Enable the NGINX Plus API (secure this location in production environments)
    }

    if ($target) {          # True when $uri exists in the 'redirects' keyval zone
        return 301 $target; # Redirect the client to the matching value for the $uri
    }

    location / {
        proxy_pass http://backend;
    }
}

In the keyval directive, the key is set to the URI of the incoming HTTP request. If $uri matches a key in the key‑value store, the value associated with that key is assigned to the new variable $target. If $target is then non‑empty, NGINX Plus redirects the client to that value.

To populate the key‑value store with an initial vanity URL, we send the data, encoded as JSON, to the URI for the NGINX Plus API.

$ curl -id '{"/conf":"/conf2017"}' http://localhost/api/1/http/keyvals/redirects
HTTP/1.1 201 Created
...

Now clients that request /conf are redirected to /conf2017.

$ curl -i http://localhost/conf
HTTP/1.1 301 Moved Permanently
Location: http://localhost/conf2017

You can use the PATCH method to add more vanity URL redirects to the key‑value store and modify existing entries on the fly.

$ curl -iX PATCH -d '{"/conf":"/conf2018"}' http://localhost/api/1/http/keyvals/redirects
HTTP/1.1 204 No Content
...
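For completeness, the remaining methods follow the same pattern. This sketch assumes the redirects zone from the example above; per the module’s documented conventions, patching a key’s value to null removes that entry, and DELETE empties the whole zone:

```shell
# Read all key-value pairs in the 'redirects' zone
curl http://localhost/api/1/http/keyvals/redirects

# Remove a single entry by patching its value to null
curl -X PATCH -d '{"/conf":null}' http://localhost/api/1/http/keyvals/redirects

# Remove all entries in the zone
curl -X DELETE http://localhost/api/1/http/keyvals/redirects
```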

You can configure multiple separate key‑value stores by using the keyval directive to define a different shared memory zone for each one. For more information, see the reference documentation for the Key-Value Store module.

Swagger Documentation

The new NGINX Plus API comes with a Swagger specification that can be used to explore the API and understand the capabilities of each resource. The Swagger documentation is bundled with NGINX Plus and can be accessed at http://nginx-host/swagger-ui/.

The interactive part of the Swagger UI requires the NGINX Plus API to be enabled, which can be achieved by uncommenting the /api/ location block in the conf.d/default.conf file.

# enable /api/ location with appropriate access control in order
# to make use of NGINX Plus API
#
#location /api/ {
#    api write=on;
#    allow 127.0.0.1;
#    deny all;
#}

You can also explore the NGINX Plus API documentation at https://demo.nginx.com/swagger-ui/.

Note: The entire NGINX Plus API, including the extended status metrics, upstream configuration, and the new Key‑Value Store module, is exclusive to NGINX Plus.

Request Mirroring

With NGINX Plus R13, you can enable HTTP request mirroring. With this feature, HTTP requests that are proxied to an upstream group are cloned and also sent to a different destination. The original request is processed as usual, but any responses from the cloned request are ignored. There are many use cases for request mirroring, including:

  • Integration with web application firewalls (WAFs) when deployed in learn mode, so typical request patterns can be analyzed without impacting production traffic
  • Risk‑free performance tuning using live, production traffic
  • Duplicating file uploads on a backup server to avoid file system replication between web servers

Enabling request mirroring has negligible impact on overall system throughput and performance. The following configuration snippet shows how to use the new mirror directive to clone requests and pass them to a separate upstream server.

location / {
    mirror /mirror;
    proxy_pass http://backend;
}

location /mirror {
    internal;
    proxy_pass http://test_backend$request_uri;
}

Requests are proxied to the backend upstream group for regular processing. They are also cloned and proxied to a separate upstream group named test_backend, retaining the URI from the original request.
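The snippet assumes the test_backend upstream group is defined elsewhere in the configuration; a minimal sketch (the address is illustrative):

```nginx
upstream test_backend {
    # Illustrative address of the server that receives the mirrored traffic
    server 10.10.20.2;
}
```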

Note: Request mirroring was initially released in open source NGINX 1.13.4.

nginScript Enhancements

Since becoming generally available in NGINX Plus R12, nginScript has continued to gain core JavaScript language support. This release adds support for hexadecimal numbers (such as 0x7b) and scientific notation (such as 512e10), and implements primitive methods for the Object class.

nginScript now also offers an interactive shell, invoked with the njs command, to assist with the development of nginScript code.

The following shell snippet shows how to enter the nginScript interactive shell, define an expression that produces a random date up to 30 seconds in the future, and add two numbers.

$ njs
interactive njscript
>> Date.now() + Math.round(Math.random()*30*1000);
1500976350968
>> 0x7b + 512e10;
5120000000123
>>

To learn more about nginScript, see the Introduction to nginScript on our blog.

Note: nginScript is available for both open source NGINX and NGINX Plus.

Build Tool for Dynamic Modules

NGINX 1.11.5 and NGINX Plus R11 introduced support for compiling dynamic modules independently of NGINX itself. This allows users of NGINX and NGINX Plus to use the official builds from NGINX, Inc. repositories and load in only the dynamic modules they need.

With NGINX Plus R13, we provide a build tool for compiling and packaging a dynamic module as an installable module that preserves and honors the dependency between it and the base NGINX version that it is linked to.

For complete details about the build tool, see Creating Installable Packages for Dynamic Modules on our blog.

Note: The build tool is available for both open source NGINX and NGINX Plus.

Faster Sticky-Learn Session Persistence

Session persistence is a very useful feature of NGINX Plus load balancing that enables you to send all requests from a particular client to one server. There are multiple ways to establish session persistence; with the “sticky learn” method, NGINX Plus looks for the presence of a specific cookie and pins the client to the same server whenever that cookie is included in a request.

With NGINX Plus R13 you can now establish a sticky session as soon as the upstream server sends its response headers, instead of waiting for the complete response body to arrive. NGINX Plus can thus apply session persistence at the earliest opportunity. To enable this behavior, include the new header parameter in the sticky learn directive:

upstream backends {
    zone backends 64k;
    server 10.10.10.2;
    server 10.10.10.4;

    sticky learn create=$upstream_cookie_sessionid
                 lookup=$cookie_sessionid
                 zone=client_sessions:1m
                 header;
}

The header parameter is particularly useful if an application is prone to errors and you want the client to resend failed requests to the same upstream server.

Note: Sticky-learn session persistence is exclusive to NGINX Plus.

Additional Features

NGINX Plus R13 introduces the following additional features:

  • HTTP trailers – The add_trailer directive enables arbitrary trailers to be added to the end of HTTP responses. The trailer response header allows the sender to include additional fields at the end of chunked messages to supply metadata that might be dynamically generated while the message body is sent, such as a message integrity check or a digital signature.
  • Substitutions filter dynamic module – The HTTP Substitutions Filter community dynamic module is now supported and included in our NGINX Plus distributions. The module can apply both regular‑expression and fixed‑string substitutions to response bodies, scanning the output buffer chain and matching strings line by line. You can also access the module on the Dynamic Modules page.
  • Graceful worker shutdown – Use the worker_shutdown_timeout directive to set a timeout that enables graceful shutdown of worker processes to complete more quickly. When the timeout expires after a shutdown or restart signal is received, NGINX Plus attempts to close all open client connections.
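As a sketch, the first and third of these features might appear in a configuration as follows. The values are illustrative; worker_shutdown_timeout belongs in the main context, while add_trailer can appear in an http, server, or location context:

```nginx
# Give in-flight connections up to 30 seconds to finish after a
# shutdown or restart signal, then close them (illustrative value)
worker_shutdown_timeout 30s;

http {
    server {
        listen 80;

        location / {
            # Append a trailer carrying the total request processing time,
            # a value only known once the response has been sent
            add_trailer X-Request-Time $request_time;
            proxy_pass http://backend;
        }
    }
}
```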

Upgrade to R13 or Try NGINX Plus

If you’re running NGINX Plus, we strongly encourage you to upgrade to Release 13 as soon as possible. You’ll pick up a number of fixes and improvements, and it will help us to help you if you need to raise a support ticket. Installation and upgrade instructions can be found at the customer portal.

Please carefully review the new features and changes in behavior described in this blog post before proceeding with the upgrade.

If you’ve not tried NGINX Plus, we encourage you to try it out for web acceleration, load balancing, and application delivery, or as a fully supported web server with enhanced monitoring and management APIs. You can get started for free today with a 30‑day evaluation and see for yourself how NGINX Plus can help you deliver and scale out your applications.
