We’re happy to announce the availability of NGINX Plus Release 23 (R23). Based on NGINX Open Source, NGINX Plus is the only all‑in‑one software load balancer, reverse proxy, and API gateway.
New features in NGINX Plus R23 include:
- gRPC health checks – Actively testing that a gRPC service can handle requests before sending them significantly boosts reliability.
- Unprivileged installation support – NGINX Plus can now be installed and upgraded by an unprivileged (non‑root) user. This fully supported capability aligns with the growing adoption of zero‑trust security models.
- OpenID Connect PKCE support – NGINX Plus R23 implements the Proof Key for Code Exchange (PKCE) extension to the OpenID Connect Authorization Code flow. PKCE prevents several types of attack and enables secure OAuth exchanges with public clients.
Important Changes in Behavior
- Deprecated module – The third‑party Cookie‑Flag module is deprecated and replaced by the new proxy_cookie_flags directive. The module will be removed in NGINX Plus R26. For details, see Native Method for Setting Cookie Flags.
- New operating systems supported:
- Alpine 3.12 (x86_64, aarch64)
- Debian 10 (aarch64; x86_64 has been supported since NGINX Plus R17)
- Older operating systems removed or to be removed:
- Alpine 3.9 is no longer supported; oldest supported version is 3.10
- CentOS/Oracle Linux/RHEL 6.5+ is no longer supported; oldest supported version is 7.4
- Ubuntu 19.10 is no longer supported
- Debian 9 will be removed in NGINX Plus R24
New Features in Detail
gRPC Health Checks
When deployed as a load balancer, NGINX Plus can monitor the health of backend (upstream) servers by making active health checks. NGINX Plus R23 supports the gRPC health checking protocol, enabling it to accurately test whether backend gRPC servers are able to handle new requests. This is particularly valuable in dynamic and containerized environments. When spinning up new instances of a gRPC service, it’s important to send requests only once the service is “fully up”. This requires a health check that goes deeper than looking at the TCP port or verifying HTTP URI availability – one where the service itself indicates whether it’s ready to receive requests.
For gRPC services that implement the gRPC health checking protocol, configuration is straightforward.
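A minimal configuration along these lines might look like the following sketch (the upstream server addresses are placeholders):

```nginx
upstream grpc_backend {
    zone grpc_backend 64k;
    server 192.0.2.10:50051;   # placeholder gRPC service instances
    server 192.0.2.11:50051;
}

server {
    listen 50051 http2;        # gRPC requires HTTP/2

    location / {
        grpc_pass grpc://grpc_backend;
        health_check type=grpc mandatory;  # call the standard Health service's Check method
    }
}
```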
This configuration load balances all requests to the grpc_backend upstream group. The health_check directive includes the type=grpc parameter to invoke the Check method of each upstream server’s Health service. Services that respond with SERVING are considered healthy. The mandatory parameter ensures that when NGINX Plus starts up, or a new server is introduced to the upstream group, traffic is not forwarded until a health check passes (otherwise, new services are assumed to be healthy by default).
If there are several gRPC services exposed on each upstream server, then the most significant service (one with dependent or subordinate services) can be monitored by specifying its name as the value of the grpc_service parameter, as in this example:
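For instance, to monitor a hypothetical service registered as helloworld.Greeter (the service name here is illustrative):

```nginx
# Check a specific named service rather than the server's default Health service
health_check type=grpc grpc_service=helloworld.Greeter mandatory;
```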
For gRPC services that don’t implement the gRPC health checking protocol, we can test whether the upstream server is at least responding to gRPC requests, because in that case it sends an error status code in response to the Check method. With the configuration in grpc_health.conf, we expect a service that doesn’t implement the health checking protocol to respond with status code 12 (UNIMPLEMENTED).
We can also check that a gRPC service is able to respond to incoming requests without needing to modify the backend code. We can use this approach to monitor any gRPC service:
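A sketch of that approach uses the grpc_status parameter to accept the error code that an unimplemented Check method returns:

```nginx
location / {
    grpc_pass grpc://grpc_backend;
    # Treat status 12 (UNIMPLEMENTED) as proof the server is up and answering gRPC
    health_check type=grpc grpc_status=12 mandatory;
}
```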
Unprivileged User Installation
In previous releases, NGINX Plus operated with a minimum of processes running as the privileged user
root. For example, the installation instructions in the NGINX Plus Admin Guide create these processes:
$ ps auxf | grep nginx
root ... 9068 888 ? Ss 21:44 0:00 nginx: master process nginx
nginx ... 9712 3572 ? S 21:44 0:00 \_ nginx: worker process
As shown, the master process is running with root privileges. All other processes (workers and cache management) run as the unprivileged user account nginx.
Critical systems dealing with sensitive data may not want to use user root. In this case, NGINX Plus R23 can be installed and run as a non‑privileged user. We provide an installation script, ngxunprivinst.sh, in our GitHub repo for use on the following OSs:
- Alpine Linux
- Amazon Linux, Amazon Linux 2
- CentOS, Red Hat Enterprise Linux
- Debian, Ubuntu
Note: If any NGINX Plus listeners are configured on ports below 1024 (for example, 80 or 443), the master process must have
root privileges (but you can still install NGINX Plus under an unprivileged user account).
To use the installation script, run the following commands. (To see all available
ngxunprivinst.sh commands, run the script without a command‑name parameter, or see the code for the script at the GitHub repo.)
Download the script and make sure it’s executable:
$ chmod +x ngxunprivinst.sh
- Copy your NGINX Plus certificate and key (nginx-repo.crt and nginx-repo.key) to the local directory. The -c and -k options are included on all ngxunprivinst.sh commands to identify these files.
List the versions of NGINX Plus available in the NGINX Plus repo:
$ ./ngxunprivinst.sh list -c nginx-repo.crt -k nginx-repo.key
18-1
18-2
19-1
20-1
21-1
22-1
23-1
Fetch the desired package (here we’re fetching NGINX Plus R23-1). The -p option specifies the installation directory:
$ ./ngxunprivinst.sh fetch -c nginx-repo.crt -k nginx-repo.key -p /home/nginxrun -v 23-1
Install the package:
$ ./ngxunprivinst.sh install -c nginx-repo.crt -k nginx-repo.key -p /home/nginxrun -v 23-1 nginx-plus-23-1.el8.ngx.x86_64.rpm nginx-plus-module-njs-23+0.4.6-1.el8.ngx.x86_64.rpm
Start NGINX Plus, including the -p option to specify the path, -c to name the configuration file, and -e to name the error log:
$ /home/nginxrun/usr/sbin/nginx -p /home/nginxrun/etc/nginx -c nginx.conf -e /home/nginxrun/var/log/error.log
We include the -e option to suppress the warning message that otherwise appears. During startup, before NGINX Plus has read its configuration, it writes to the default error log, /var/log/nginx/error.log. Unprivileged users don’t have permission to create or write to that file, which results in a warning. Once the configuration is read, the error_log directive sets the error log to a location that the unprivileged user can write to.
(Optional) To verify that NGINX Plus is running as a non‑root user, run this command:
$ ps auxf | grep nginx
nginxrun ... 9068  888 ? Ss 21:55 0:00 nginx: master process nginx
nginxrun ... 9712 3572 ? S  21:55 0:00  \_ nginx: worker process
OpenID Connect PKCE Support
Proof Key for Code Exchange (PKCE) is an extension recently added to the OpenID Connect (OIDC) Authorization Code flow to prevent several kinds of attack and to secure the OAuth exchange with public clients. For NGINX Plus R23, we’ve updated our OpenID Connect reference implementation to support the extension. PKCE will become mandatory with OAuth 2.1.
The specific change is to replace the static client_secret with two dynamically generated values: a code_verifier, which the client keeps secret, and a code_challenge, a hash of the verifier that is sent to the identity provider.
To address different attacks, especially on mobile devices, the challenge for a token (whether it’s an Access, ID, or Refresh Token) has been adjusted as follows:
- NGINX Plus generates (and remembers) a random code_verifier.
- NGINX Plus redirects the end user to log in at the OIDC identity provider (IdP) login page. The request includes a hashed version of the code_verifier, called the code_challenge.
- The IdP sends an auth_code for the user to NGINX Plus.
- Based on the shared state, NGINX Plus finds the generated code_verifier and sends it with the request that exchanges the authorization code for a token set at the IdP’s token endpoint.
Prior to adding PKCE, it was sufficient for NGINX Plus to share a static client secret with the IdP.
In the updated OIDC reference implementation, NGINX Plus can handle Authorization Code flows both with PKCE and with the client‑secret method.
Here’s a sample configuration that enables the extended Authorization Code flow with PKCE:
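In the reference implementation the switch is set per host with a map block; a sketch, with a placeholder host name:

```nginx
map $host $oidc_pkce_enable {
    www.example.com 1;   # use the PKCE flow for this domain
    default         0;   # all others use the standard Authorization Code flow
}
```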
The $oidc_pkce_enable variable acts as a switch for the PKCE flow. If it is set to 1 for a specific domain, the PKCE flow is used; if it is set to 0 (the default), the non‑PKCE Authorization Code flow is used.
Other Enhancements in NGINX Plus R23
Fine-Grained Control Over SSL/TLS Connections
TLS v1.3 enables stronger security than previous TLS versions, with end-to-end encryption between servers, and between servers and their clients. NGINX Plus R23 provides direct access to OpenSSL configuration for fine‑grained control over TLS v1.3.
Creating a Default HTTPS Server Without a Certificate and Key
In previous releases, the default server block for TLS‑protected HTTPS traffic had to include the ssl_certificate and ssl_certificate_key directives, requiring you to create a “dummy” self‑signed certificate and key. The new ssl_reject_handshake directive eliminates the requirement for a certificate and key, as in this sample configuration:
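For example (the server name and certificate paths are illustrative):

```nginx
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;     # no certificate needed for the catch-all server
}

server {
    listen      443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/www.example.com.crt;
    ssl_certificate_key /etc/ssl/www.example.com.key;
}
```

TLS handshakes for any server name other than www.example.com are rejected, and the dummy certificate is no longer needed.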
Direct OpenSSL Configuration
NGINX Plus R23 gives you finer‑grained control over how NGINX Plus handles SSL/TLS with OpenSSL 1.0.2 and later.
The following use cases take advantage of the new level of control:
ChaCha ciphers – NGINX Plus uses ChaCha20 when a client (usually mobile) specifies that cipher at the top of its preference list. ChaCha20 distinctly improves performance for clients that support it.
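With OpenSSL 1.1.1 and later, honoring the client’s ChaCha20 preference can be enabled through OpenSSL’s PrioritizeChaCha option; a sketch:

```nginx
# Serve ChaCha20 to clients that list it first, even if the server prefers AES
ssl_conf_command Options PrioritizeChaCha;
```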
TLS v1.3 cipher configuration – In previous releases, the ssl_ciphers directive was used to set NGINX Plus’s list of preferred SSL/TLS ciphers, as in this example:
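```nginx
ssl_ciphers HIGH:!aNULL:!MD5;
```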
This directive doesn’t apply to TLS v1.3, however, because the OpenSSL implementation of ciphers for TLS v1.3 isn’t compatible with the older interfaces. To set the list of ciphers for TLS v1.3, use the new ssl_conf_command directive, as in this example:
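```nginx
ssl_conf_command Ciphersuites TLS_CHACHA20_POLY1305_SHA256;
```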
To set ciphers for both TLS v1.2 and v1.3, include both directives in the configuration:
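```nginx
ssl_protocols       TLSv1.2 TLSv1.3;
ssl_ciphers         HIGH:!aNULL:!MD5;                           # TLS v1.2 and earlier
ssl_conf_command    Ciphersuites TLS_CHACHA20_POLY1305_SHA256;  # TLS v1.3 only
```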
Upgrading proxied connections – Building on the cipher configuration mechanism implemented by the ssl_conf_command directive, NGINX Plus R23 gives you the same control over cipher suites for connections proxied with these protocols:
- HTTP (the proxy_ssl_conf_command directive)
- gRPC (grpc_ssl_conf_command)
- uwsgi (uwsgi_ssl_conf_command)
The following example shows how to configure NGINX Plus to upgrade requests from clients using older TLS versions to use backend servers known to support TLS v1.3.
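A sketch of such a configuration (the upstream group name and cipher suite are illustrative):

```nginx
server {
    listen 443 ssl;
    ssl_protocols TLSv1.2 TLSv1.3;       # accept connections from older clients

    location / {
        proxy_pass https://tls13_backends;
        proxy_ssl_protocols TLSv1.3;     # always speak TLS v1.3 to the backends
        proxy_ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384;
    }
}
```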
Cache Manager Can Monitor Available Disk Space
When NGINX Plus is configured as a caching proxy, the cache manager process guarantees that the cache size doesn’t exceed the limit set by the
max_size parameter to the
proxy_cache_path directive, by removing content that was accessed least recently.
With NGINX Plus R23, the cache manager can also monitor the amount of available disk space on the filesystem housing the cache, and remove content when the available space drops below the value of the new min_free parameter to the proxy_cache_path directive.
This means that even when the cache shares the same filesystem as other processes, NGINX Plus ensures that populating the cache won’t inadvertently fill up the disk.
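For example, to cap the cache at 10 GB while also keeping at least 2 GB free on the filesystem (paths and sizes are illustrative):

```nginx
proxy_cache_path /data/nginx/cache keys_zone=mycache:10m max_size=10g min_free=2g;
```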
Native Method for Setting Cookie Flags
Unsecured cookies remain a high‑risk attack vector. As noted at the Mozilla Developer Network (MDN), one way to ensure that cookies are not accessed by unintended parties or scripts is to set flags such as HttpOnly and Secure in the Set-Cookie response header.
In previous releases, we provided the set_cookie_flag directive for this purpose, as implemented in the third‑party Cookie‑Flag module available in our dynamic modules repo. NGINX Plus R23 introduces the native proxy_cookie_flags directive to replace that directive and module.
The deprecated Cookie‑Flag module will be removed in NGINX Plus R26, so we recommend that you locate any set_cookie_flag directives in your configuration and replace them with the proxy_cookie_flags directive as soon as possible.
Here’s a sample configuration for proxying to a simple backend application that doesn’t set any cookie‑protection flags itself:
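A sketch, assuming an upstream group named backends and a session cookie named appcookie:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://backends;
        # Add protective flags to the session cookie set by the upstream server
        proxy_cookie_flags appcookie secure httponly samesite=strict;
    }
}
```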
In this example, we’re adding the Secure, HttpOnly, and SameSite flags to protect the appcookie session cookie created by the upstream server, which NGINX Plus uses for session persistence as described in the NGINX Plus Admin Guide.
Inspecting the response with the curl command or your browser’s developer tools, you can see that the Secure, HttpOnly, and SameSite flags are now set for appcookie:
< HTTP/1.1 200 OK
< Server: nginx/1.19.4
< Date: Tue, 08 Dec 2020 14:46:12 GMT
< Content-Type: application/octet-stream
< Content-Length: 9
< Connection: keep-alive
< Set-Cookie: appcookie=appserver1; Secure; HttpOnly; SameSite=Strict
With NGINX Plus R23, you can also add the SameSite flag to cookies created with the sticky directive, as in this example (the httponly and secure parameters have been supported since NGINX Plus R6):
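A sketch (server addresses and cookie name are illustrative):

```nginx
upstream backends {
    server 192.0.2.20;
    server 192.0.2.21;
    # Session-persistence cookie issued by NGINX Plus, with protective flags
    sticky cookie appcookie expires=1h secure httponly samesite=strict;
}
```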
Setting Variables in the Stream Module
The set directive is now available in the Stream module, so you can set new variables (or modify existing ones) for TCP/UDP traffic. Here’s an example that constructs complex, compound values from multiple variables.
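A minimal sketch (the listen port, variable name, and backend address are illustrative):

```nginx
stream {
    server {
        listen 12345;
        # Combine several built-in variables into one compound value
        set $connection_info "$remote_addr:$remote_port -> $server_addr:$server_port";
        proxy_pass 192.0.2.30:12345;
    }
}
```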
A more sophisticated use case employs the set directive to update the key‑value store. In this configuration for DNS load balancing, the key‑value store records the time when each client IP address makes a DNS request, retaining each record for 24 hours.
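A sketch of such a configuration, assuming an upstream group of DNS servers named dns_upstream:

```nginx
stream {
    keyval_zone zone=dns_timestamp:1m timeout=24h;            # records expire after 24 hours
    keyval $remote_addr $last_dns_request zone=dns_timestamp; # keyed on client IP address

    server {
        listen 53 udp;
        set $last_dns_request $time_iso8601;  # record the time of this request
        proxy_pass dns_upstream;
    }
}
```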
You can then use the NGINX Plus API to learn when each client made its most recent DNS request during the previous 24 hours.
$ curl http://localhost:8080/api/6/stream/keyvals/dns_timestamp
Updates to the NGINX Plus Ecosystem
Notable enhancements to the NGINX JavaScript module (njs) include the Query String module, for easy access to key‑value pairs passed in the URL, and line‑level backtrace support for debugging.
Changes to Dynamic Modules
New SPNEGO for Kerberos Module
Support for SPNEGO Kerberos authentication is now available in the NGINX Plus dynamic modules repository. For installation instructions and pointers to more information, see the NGINX Plus Admin Guide.
Deprecated Cookie-Flags Module
As detailed in Native Method for Setting Cookie Flags above, the new
proxy_cookie_flags directive replaces the
set_cookie_flag directive implemented in the third‑party Cookie‑Flag module, which is now deprecated and scheduled for removal in NGINX Plus R26. If your configuration includes the
set_cookie_flag directive, please replace it with
proxy_cookie_flags at your earliest convenience.
Updates to the Prometheus-njs Module
- The js_include directive is deprecated, replaced by the js_import directive.
- The js_content and js_set directives can now reference a module function.
Notable Bug Fix
Health checks that used the require directive in a match block to test that variables were not empty might not have detected unhealthy upstream servers when the response was larger than the configured proxy buffer size.
Upgrade or Try NGINX Plus
If you’re running NGINX Plus, we strongly encourage you to upgrade to NGINX Plus R23 as soon as possible. You’ll pick up several additional fixes and improvements, and staying current makes it easier for us to help you when you need to raise a support ticket.
If you haven’t tried NGINX Plus, we encourage you to try it out – for security, load balancing, and API gateway use cases, or as a fully supported web server with enhanced monitoring and management APIs. You can get started today with a free 30‑day trial. See for yourself how NGINX Plus can help you deliver and scale your applications.