
[Editor – The NGINX ModSecurity WAF module for NGINX Plus officially went End-of-Sale as of April 1, 2022 and is transitioning to End-of-Life effective March 31, 2024. For more details, see F5 NGINX ModSecurity WAF Is Transitioning to End-of-Life on our blog.

This post is adapted from a webinar by Owen Garrett, Head of Products at NGINX, Inc. The original product name, NGINX Plus with ModSecurity WAF, was still in use at the time.]

Table of Contents

0:00 Introduction
0:34 NGINX Plus R9 Recap
2:34 NGINX Plus R10 New Features
3:01 ModSecurity WAF
6:03 Why ModSecurity?
7:12 ModSecurity 101
8:49 Comprehensive Protection for Critical Apps and Data
9:37 NGINX Plus with ModSecurity WAF Details
12:44 Why NGINX Plus with ModSecurity WAF?
13:58 Native JWT Support
14:38 NGINX Plus for Authentication
16:18 Use Case 1 – Single Sign‑On (SSO)
17:29 Use Case 2 – API Gateway
18:07 Why NGINX Plus for OpenID?
19:56 “Dual‑Stack” RSA‑ECC Certificates
20:26 RSA vs. ECC
22:45 Network Features
23:04 Transparent Proxy
26:27 What Is nginScript?
27:38 nginScript in NGINX Plus R10
30:17 Additional Features
33:53 Additional Features (Continued)
34:50 Summary

0:00 Introduction

Title slide for webinar 'What's New in NGINX Plus R10?' It includes improvements to application security and the introduction of UDP load balancing and dynamic modules

Owen Garrett: Thank you for joining this presentation on the new features in NGINX Plus R10.

My name is Owen Garrett. I lead the product team for NGINX, Inc. and I’ll be explaining some of the features we’ve brought to you in this new release. We’ll also walk through use cases and see how these new features can benefit you in the applications and services that you’re building today.

0:34 NGINX Plus R9 Recap

Recap of features added in NGINX Plus R9: dynamic modules, UDP load balancing, support for service discovery with DNS SRV records, App Pricing [NGINX Plus R10 webinar]

We do major releases for NGINX Plus about every three to four months. The last major release, R9, was in April of this year, and it brought some big features to our user base.

The most important of those was support for dynamic modules. That’s a capability that we’re building on in the R10 release, as we bring to you our ModSecurity‑based web application firewall [WAF] and a preview of nginScript, a JavaScript‑based scripting language for NGINX. [Editor – nginScript is now called the NGINX JavaScript module.] Both of these are dynamic modules, allowing you to conditionally load them, use them, evaluate them, and then decide whether you want to use that functionality in your production environment or not.

In the R9 release, we brought some significant extensions to our Stream module – the capability to manage TCP‑ and UDP‑based traffic. We added in UDP load balancing with a number of core features, and we’ve extended that now in the R10 release, bringing it closer to parity with the HTTP load balancing.

We made a major push into service discovery with support for DNS SRV records, so that NGINX Plus can pull configuration and react dynamically to changes that were recorded within a service discovery database such as Consul or etcd.

We also introduced a new pricing model for larger organizations, or for organizations that are building large applications needing a large or unpredictable number of NGINX Plus instances, and want to run them in a cost‑effective and scalable fashion.

2:34 NGINX Plus R10 New Features

Summary of new features in NGINX Plus R10: ModSecurity WAF, native JSON Web Token (JWT) support, 'dual-stack' RSA-ECC certificates, IP Transparency, Direct Server Return, and the NGINX JavaScript module [NGINX Plus R10 webinar]

This is the foundation we’re building upon to bring to you today the features in our new NGINX Plus R10 release. There’s a focus on security: we’re bringing a new ModSecurity WAF product to market and adding authentication and performance improvements. There’s also a focus on some low‑level capabilities that address specific network‑focused use cases, plus a preview of new features for our JavaScript language, nginScript [Editor – nginScript is now called the NGINX JavaScript module.].

3:01 ModSecurity WAF

Section title card reading 'ModSecurity WAF' for increased application security

Let’s start with security and the significance of the ModSecurity web application firewall (WAF) that’s included as part of NGINX Plus R10.

A web application firewall (WAF) is crucial for providing application security: in 2015 there was a 50% increase in attacks on applications and a 125% increase in DDoS attacks

A web application firewall is an essential part of any modern web application that’s handling sensitive data. In some cases a WAF is mandated: if you’re operating under standards like PCI DSS, in the majority of cases you’re required to deploy some sort of WAF device acting as a firewall to protect your application from malicious or undesirable requests.

A web application firewall operates at a different level than a traditional network firewall. Network firewalls typically operate on Layers 2 to 4 of the OSI network stack. These are known as Layer 2 or Layer 4 firewalls. They inspect packets and drop bad packets – ones that fail to meet particular security or access control parameters.

A web application firewall operates at a much higher level: Layer 7. It inspects HTTP requests and tries to decide whether a particular HTTP request, or indeed a particular user, is legitimate or is acting maliciously. That way, malicious users can be stopped and their requests denied before they even reach the web application.

In a security context where the number of web application attacks has increased 50% year‑on‑year, according to Akamai’s State of the Internet report, and the number of DDoS attacks has more than doubled from Q1 2015 to Q1 2016, security is absolutely critical.

When security isn’t properly in place, breaches can be devastating. Code Spaces was driven out of business when an attacker exploited a vulnerability, gained access to the company’s Amazon control panel, and demanded a ransom. When Code Spaces refused to pay up immediately, the attackers started deleting data and virtual machines on Amazon.

Emails from the Democratic National Committee were leaked, resulting in four resignations, and other high‑profile web application attacks have caused great damage to the owners of those web properties.

A WAF is a necessary tool for protecting applications. It’s not the only tool, and of course you would want to think about a blended approach, bringing in network and application‑level techniques, as well as code reviews and code scanners. But a WAF such as ModSecurity forms an essential foundation.

6:03 Why ModSecurity?

NGINX chose ModSecurity for its WAF to provide application security because it's open source software tested at tens of thousands of sites over 14 years

ModSecurity is the most widely deployed WAF in the world. It’s a mature and well‑regarded project which has been tested in production for over 14 years.

The open source project is owned by TrustWave Holdings and stewarded by a team within TrustWave Holdings called SpiderLabs. They are a team of developers and penetration testers who both build the ModSecurity WAF and manage the open source community around that, and are also responsible for building out the rule sets that the WAF uses.

There’s a large, enthusiastic community that uses the ModSecurity WAF, contributes to it, and deploys it in production. Although it has a reputation for being somewhat cryptic and difficult to learn, with a few hours’ investment it’s relatively easy to become familiar with the rules language that the ModSecurity WAF uses, and it’s also easy to find help.

7:12 ModSecurity 101

A WAF for application security consists of rules that define malicious behavior and software that enforces the rules; the OWASP core rule set is free and others can be purchased; ModSecurity provides anomaly-based scoring for flexibility

There are two basic components involved. Rules are a little bit like virus signatures: a regularly updated set of patterns that identify particular malicious or suspect requests. The WAF software applies and executes those rules against incoming traffic to make a decision. It can decide to drop the traffic, give it an anomaly score (in which case a later decision can then be made), or just log it as potentially suspect. An administrator can then review the logs and make an informed decision about whether to enable the WAF, which rules to put in production, and which to continue running in shadow mode.

There are a range of rule sets available. The OWASP Core Rule Set (CRS) is distributed at no cost, and there are also a couple of commercial rule sets, most notably the one developed by the SpiderLabs team at TrustWave, which is available on an annual subscription.

You can, of course, build your own rules: run security scanners against your own code to identify potentially weak points, such as form fields that aren’t adequately validated, and then create rules that prevent invalid data from ever reaching those fields and the code that processes them.
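
For illustration, here’s a minimal sketch of what a custom rule might look like in the ModSecurity rules language. The rule ID, the form‑field name, and the regular expression are purely hypothetical, and SecRuleEngine DetectionOnly runs the engine in the logging‑only (“shadow”) mode described above:

    # Log matches but don't block while the rule set is being evaluated
    SecRuleEngine DetectionOnly

    # Hypothetical custom rule: flag requests whose 'email' form field
    # doesn't look like an email address
    SecRule ARGS:email "!@rx ^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$" \
        "id:100001,phase:2,t:trim,deny,status:403,log,msg:'Invalid email form field'"

In detection‑only mode the deny action is only recorded in the logs; switching SecRuleEngine to On puts the rule into production.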

8:49 Comprehensive Protection for Critical Apps and Data

NGINX ModSecurity WAF provides comprehensive application security, with features like Layer 7 attack protection, DDoS mitigation, real-time denylists, honeypots and PCI-DSS 6.6 compliance

ModSecurity provides a lot more than just a static list of rules. It also provides a range of tools to profile particular users and accumulate a reputation score against them. It attempts to identify DDoS attacks, draws on real‑time denylists and third‑party sources, inspects responses, and integrates with security scanner tools to perform virtual patching.

Together, these security capabilities are more than sufficient to meet the requirements of auditing standards such as PCI DSS.

9:37 NGINX Plus with ModSecurity WAF Details

NGINX ModSecurity WAF for comprehensive application security is a 'preview' release in R10 but is fully supported; it's offered as a dynamic module at $2000/year/instance

The ModSecurity project began as an Apache module, and even now the most widely deployed version of ModSecurity, version 2.9, is implemented as an Apache module. There are connectors and patches available to make that module work with NGINX, but we find – backed up by feedback from our own users – that this approach still doesn’t really meet the performance or stability expectations our users have.

A little over a year ago, TrustWave embarked on a project to refactor the ModSecurity code into a core platform‑independent library called ModSecurity, with a range of connectors which would then allow that library to be used by web server and proxy platforms, including NGINX. This project, known as ModSecurity version 3.0, is under active development, and it’s getting close to the first public, certified open source release.

The ModSecurity implementation in NGINX is based on this new ModSecurity 3.0 code. We currently describe it as a “preview” because there are features and open issues that need to be addressed before that body of code reaches functional parity with the existing ModSecurity 2.9 implementation.

We encourage you, if you’re interested in trying the NGINX Plus ModSecurity module, to evaluate it in a test environment against your application and against production‑like traffic. Please report any issues, any stability problems, any missing features to us. As we continue to develop that project with the TrustWave team, we will address those issues. When you’re confident that it’s able to meet your performance, your functionality, and your stability requirements, you can of course go ahead and deploy it.
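
As a rough sketch of what such an evaluation setup might look like in nginx.conf – the module filename and the modsecurity directives below follow the ModSecurity 3.0 connector for NGINX and may differ in your build, and the file paths and upstream name are illustrative:

    # Load the WAF as a dynamic module (main context)
    load_module modules/ngx_http_modsecurity_module.so;

    http {
        server {
            listen 80;

            modsecurity on;                                       # enable rule processing
            modsecurity_rules_file /etc/nginx/modsec/main.conf;   # pulls in your rule sets

            location / {
                proxy_pass http://test_app;                       # hypothetical upstream
            }
        }
    }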

During that entire process, you’ll be fully supported by our core engineering team, by our support team, and by the engineers from our team who are working on the open source ModSecurity project. All the work that we are doing with ModSecurity to harden and complete version 3.0 is going back into the community, and in time, you’ll be able to take that as an open source module and use it directly.

We provide the completed, tested, certified, and supported module for a fee. With that fee, you get technical support, you get updates, and you get the assurance that those updates have been fully tested against NGINX Plus.

12:44 Why NGINX Plus with ModSecurity WAF?

Reasons to choose NGINX ModSecurity WAF for application security over alternatives: 66% cost savings in 5 years vs. Imperva, combines application delivery and security, software-based, avoid vendor lock-in [NGINX Plus R10 webinar]

Why use NGINX Plus ModSecurity WAF rather than alternatives?

The cost savings are significant. The WAF market – when you move into commercial products – is an extremely high‑value and expensive market to play in. As with our NGINX Plus pricing, we’re seeking to create something that is widely applicable, widely available, and is suitable for deployment in large volumes.

The combined solution from NGINX increases your operational efficiency by bundling the application delivery controller capabilities of NGINX Plus with the WAF capabilities as a software package. That means you’re free to deploy that in dev and test, on‑premises and off‑premises, and in the cloud. We’re also minimizing vendor lock‑in by building on an industry standard language and rule set so that, should you ever need to, you can migrate or pull in expertise from other sources to build out your NGINX Plus expertise and deployment.

13:58 Native JWT Support

Section title slide for 'Native JWT Support' [NGINX Plus R10 webinar]

The WAF was the first major project that’s part of the R10 release, and it’s something that we’ve been working on for almost a year, hand in hand with the TrustWave and SpiderLabs teams. The second part of NGINX Plus R10 also focuses on security, but rather than looking at content inspection, it looks at authentication and provides a means of inspecting the authentication tokens used by modern security standards – OAuth 2.0 and OpenID Connect.

14:38 NGINX Plus for Authentication

Native support for JWT means that NGINX validates identification tokens provided by issuers like Google, controlling access to backend applications [NGINX Plus R10 webinar]

OAuth 2.0 is emerging as the dominant standard for authentication for a very wide range of both web applications and API‑driven applications.

The core pattern is relatively straightforward: a client who wishes to access a protected application first talks to an authentication provider – very commonly someone like Google or Twitter. You’ve all seen websites that allow you to log in with your Google password; they’re using this technology behind the scenes. The authentication provider then provides the client with a signed token that includes some metadata about the client, such as their email address and other parameters. The client can then present that token to the web application. Normally, the web application would then need to unpack that token, verify the signature, and pull the data it wants out of that token.

With the support that we’ve added in NGINX Plus R10, you’re able to do that operation directly on the frontend load balancer. The token is known as a JWT (pronounced ‘jot’), for JSON Web Token, and it contains a range of field data provided by the identity issuer and is signed and certified by that issuer. This token is used in a couple of different manners, depending on whether you have an API or a web‑based application.
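
Here’s a minimal sketch of validating that token at the load balancer with the new auth_jwt directives; the realm name, key file path, and upstream group are illustrative:

    location /api/ {
        auth_jwt          "My API";            # reject requests without a valid JWT
        auth_jwt_key_file conf/idp_keys.jwk;   # JSON Web Key file published by the identity provider
        proxy_pass        http://api_backend;  # hypothetical upstream group
    }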

16:18 Use Case 1 – Single Sign‑On (SSO)

Native support for JWT makes it easy to add single sign-on (SSO) for traditional backend apps [NGINX Plus R10 webinar]

On a web‑based application, an administrator will typically use something like Google’s JavaScript client authentication API library to wrap Google Single Sign‑On onto the front of the application, so that users who try to get into a walled‑off part of the application are redirected to sign on and are granted a Bearer Token (which is an OAuth 2.0 concept). The Bearer Token contains the JWT with the user’s identity, signed by Google. The user then presents that Bearer Token to the application.

This functionality is supported by identity providers such as Google and Yahoo. Unfortunately not by Facebook, though: Facebook standardized on an authentication process before the OpenID Connect standard was accepted and the JWT approach was defined. This is also supported by a range of enterprise and internal identity providers, such as Okta and OneLogin.

17:29 Use Case 2 – API Gateway

With native support for JWT, NGINX Plus as an API gateway provides centralized authentication for API access [NGINX Plus R10 webinar]

If you’re using an API gateway‑like environment with a mobile application, that mobile application operates in a similar fashion: it retrieves a Bearer Token containing a JWT from an identity provider. That Bearer Token may even be hardwired into the mobile application. Typically, if you’re running a mobile app, you’ll use a homegrown identity provider, either built from open source components or simply scripted. The mobile application then presents the Bearer Token as part of the flow.

18:07 Why NGINX Plus for OpenID?

NGINX Plus for OpenID tokens improves security by consolidating keys to one location, offloads processing from backends, and enables rate limiting per user [NGINX Plus R10 webinar]

In each case, NGINX Plus as the first point of entry for the web traffic can do two things:

  1. It can validate the JWT (verifying the signature against the signature algorithm and secret key used by the issuer)
  2. It can check parameters such as the expiry date to ensure the token is valid

And then NGINX Plus can also automatically extract data from the token. Just as you can access HTTP headers using variables in NGINX Plus, you can now access parameters in a JWT using variables.

This means that NGINX now knows the identity of the user behind individual requests, and it can apply configuration logic based on that identity. It can log those parameters, add them to HTTP headers, or apply rate limits based on the user identity to ensure that users stay within particular service levels. All of this can simplify your application logic.
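
As a hedged sketch of that idea, the configuration below keys a rate limit on the JWT subject claim and passes a verified claim upstream as a header; the zone size, rate, header name, and upstream are illustrative:

    # Key the rate limit on the JWT 'sub' (subject) claim rather than the client IP
    limit_req_zone $jwt_claim_sub zone=per_user:10m rate=10r/s;

    server {
        listen 443 ssl;

        location /api/ {
            auth_jwt          "My API";
            auth_jwt_key_file conf/idp_keys.jwk;

            limit_req zone=per_user;

            # Hand the verified identity to the backend instead of the raw token
            proxy_set_header X-User-Email $jwt_claim_email;
            proxy_pass       http://api_backend;
        }
    }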

You don’t need to put the secure keys on your application to allow the application to verify the JWT is valid. You don’t need to put complex JWT management code on the application. This is particularly significant if you’re using multiple different code stacks for the app. Instead, you can centralize all of that frontend on NGINX Plus and ensure that only authenticated traffic is allowed in, and that requests are decorated with the authentication parameters that each application requires.

19:56 “Dual‑Stack” RSA‑ECC Certificates

Section title text for 'dual-stack' RSA-ECC certificates [NGINX Plus R10 webinar]

The third feature added as part of NGINX Plus R10 is support for RSA and ECC certificates on the same virtual server. This feature was released as part of our open source release a month or two ago, and is now available in NGINX Plus, fully supported by our team.

20:26 RSA vs. ECC

NGINX Plus can handle ECC certificates, for which processing is 3x faster than RSA, but still support RSA for legacy apps [NGINX Plus R10 webinar]

Certificates are used in the SSL handshake process in order to verify the identity of a website. A certificate is issued by a certificate authority, presented to the client, and the client verifies that it is connecting to the right website.

These are very long‑standing bits of Internet and SSL technology, and the vast majority of certificates use the RSA algorithm to perform the public key exchange that’s necessary as part of the certificate check. Recently, certificates have started to be produced that use an alternative algorithm called Elliptic Curve Cryptography (ECC).

Elliptic Curve Cryptography is significantly faster than RSA on the server side in terms of the amount of computations the server has to do during the SSL or TLS handshake. And this is one of the biggest performance‑limiting aspects of SSL, so anything that reduces the amount of compute work without compromising the level of security is a great thing.

Unfortunately, there are still a number of clients that don’t understand and can’t process ECC certificates. With this new feature in NGINX Plus R10, you can configure an SSL virtual server with a pair of certificates, RSA and ECC.

When a client connects to NGINX Plus, NGINX Plus will serve up the ECC certificate to modern clients that it knows are able to handle the ECC handshake, and it will serve up the RSA certificates to legacy clients that can’t handle ECC certificates. And in that way, you get a common level of security across all clients, but you’re able to use the most efficient possible certificate. This reduces the CPU utilization on the NGINX Plus server and increases the number of new SSL handshakes it can perform per second.
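
Configuration is simply a matter of listing both certificate pairs on the same virtual server; here’s a minimal sketch with illustrative file names:

    server {
        listen 443 ssl;
        server_name example.com;

        # ECC (ECDSA) certificate, used for clients that support it
        ssl_certificate     /etc/nginx/ssl/example.com.ecdsa.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.ecdsa.key;

        # RSA certificate, served to legacy clients
        ssl_certificate     /etc/nginx/ssl/example.com.rsa.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.rsa.key;
    }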

22:45 Network Features

Section title slide for 'Network Features' [NGINX Plus R10 webinar]

We introduced a number of other features in NGINX Plus R10 as well. Some of them focus on the networking capabilities that you would expect from an enterprise‑level ADC such as NGINX Plus.

23:04 Transparent Proxy

NGINX Plus supports IP Transparency (revealing client IP address to backend server) and Direct Server Return (UDP server responds directly to client, not through the proxy) [NGINX Plus R10 webinar]

We added a capability called transparent proxy that allows NGINX Plus to dynamically control the source IP address and port of each connection or each UDP packet that it sends to an upstream server.

Before IP Transparency was an option, every connection originating from NGINX Plus to an upstream would originate from one of the local IP addresses on NGINX Plus. That’s fine for the majority of modern HTTP applications, because they can use things like the X-Forwarded-For header to determine the true source IP address of each client.

But some legacy web applications, and some TCP or UDP protocols, need to see the source IP address of each client for logging, authentication, or rate‑limiting purposes, and that obviously doesn’t work if the client’s source IP address isn’t present in the connection the upstream receives.

This new capability allows NGINX Plus to spoof the source IP address for both TCP and UDP traffic. And by doing so, the upstream server observes that the connection originates from the remote client’s IP address.

This practice is called IP Transparency. But it’s not without its challenges. It’s a complex networking deployment to configure because you need to ensure the traffic is routed back through the NGINX Plus device and correctly terminated. We’ll be publishing a solution shortly that shows how to do that with a combination of routing on each upstream server and use of the TPROXY iptables module on NGINX Plus, so that you can perform IP Transparency for HTTP and TCP transactions.
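
On the NGINX Plus side, the heart of the configuration is the transparent parameter to proxy_bind, as in this rough sketch (the upstream name is illustrative, and the routing and TPROXY setup described above are still needed on the hosts):

    server {
        listen 80;

        location / {
            # Open upstream connections from the client's own IP address
            # (worker processes need elevated privileges for this to work)
            proxy_bind $remote_addr transparent;
            proxy_pass http://app_backend;
        }
    }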

For UDP traffic, you can also use this functionality for a deployment configuration called Direct Server Return or DSR, where response packets completely bypass NGINX and the network stack of the server that NGINX is running on. This is great for high‑performance applications where you don’t want the hit of processing response packets. Again, it requires careful network configuration and it requires a judicious configuration of health checks on NGINX, so that it can determine whether individual UDP servers are up and running or have failed and aren’t responding to UDP requests.
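
For the UDP Direct Server Return case, a hedged sketch might look like the following; the port and upstream addresses are illustrative, and proxy_responses 0 tells NGINX Plus not to expect any replies, since responses bypass it entirely:

    stream {
        upstream dns_servers {
            server 10.0.0.11:53;
            server 10.0.0.12:53;
        }

        server {
            listen 53 udp;

            # Preserve both the client's address and port so servers reply directly to the client
            proxy_bind $remote_addr:$remote_port transparent;

            # In a DSR deployment no responses come back through NGINX Plus
            proxy_responses 0;

            proxy_pass dns_servers;
        }
    }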

The key point of this functionality is that if you are using a legacy hardware load balancer that provides either IP Transparency or Direct Server Return, then this new support makes it easier to deploy NGINX Plus in its place.

[Editor – For detailed configuration instructions, see IP Transparency and Direct Server Return with NGINX and NGINX Plus as Transparent Proxy on our blog.]

26:27 What Is nginScript?

The NGINX JavaScript module is a next-generation configuration language that makes NGINX and NGINX Plus more powerful and accessible; JavaScript can implement complex and custom actions [NGINX Plus R10 webinar]

nginScript is a really exciting project which we first announced about a year ago at last year’s NGINX conference. [Editor – nginScript is now called the NGINX JavaScript module; this post uses the names interchangeably.] The goal of nginScript is to free your configuration from the static and constrained configuration language NGINX currently uses, and allow you to embed dynamic bits of code within your configuration – code that uses JavaScript and is run on‑the‑fly per request, to make rich and intelligent decisions about how that particular request should be managed, or how the response should be processed.

The nginScript implementation is still a preview. We are working on the APIs and the interfaces, but there are a number of ways that you can use nginScript right now in the R10 release to do some very cool things.

27:38 nginScript in NGINX Plus R10

Slide shows configuration directives for the NGINX JavaScript module, including js_set [NGINX Plus R10 webinar]

Here’s an example of an nginScript implementation that allows you to gradually move traffic from one set of servers to another.

We define a window, a time window, where we want to migrate. We determine the name of the upstream that we want to send traffic to, by calling a JavaScript function that we’ve implemented called transitionStatus, and then we proxy pass through to that upstream.

We can create that JavaScript variable using almost arbitrary JavaScript. There are limitations, and we’re working to address those limitations and build out the richness of our JavaScript implementation.

A sample use case for NGINX JavaScript is transitioning clients to a new app server over a two-hour period [NGINX Plus R10 webinar]

But even now, you can do some fairly sophisticated calculations to see where in time the user is in that window, and then perform a progressive transition from one set of servers to another. It’s a way of doing a seamless upgrade, pinned by source IP address, so that users aren’t switched forwards and backwards between servers, from an old generation to a new generation.
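
The sketch below illustrates the pattern. It uses the js_include and js_set directives from the preview, but the JavaScript API has evolved since R10, and the window timestamps, upstream names, and hashing logic are purely illustrative:

    # nginx.conf
    js_include transition.js;
    js_set     $upstream_name transitionStatus;

    upstream old_backend { server 10.0.0.10; }
    upstream new_backend { server 10.0.0.20; }

    server {
        listen 80;
        location / {
            proxy_pass http://$upstream_name;
        }
    }

    # transition.js
    function transitionStatus(req) {
        // Hypothetical two-hour migration window (Unix timestamps in milliseconds)
        var start = 1472724000000;
        var end   = start + 2 * 60 * 60 * 1000;

        // Fraction of the window that has elapsed, clamped to [0, 1]
        var progress = Math.max(0, Math.min(1, (Date.now() - start) / (end - start)));

        // Hash the client IP so each client stays pinned to one side of the split
        var ip = req.remoteAddress, hash = 0;
        for (var i = 0; i < ip.length; i++) {
            hash = (hash * 31 + ip.charCodeAt(i)) % 100;
        }

        // Clients whose hash falls below the moving threshold go to the new servers
        return (hash < progress * 100) ? "new_backend" : "old_backend";
    }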

[Editor – For a detailed discussion of this nginScript use case, see Using the NGINX JavaScript Module to Progressively Transition Clients to a New Server on our blog.]

This is a really exciting way to extend the functionality of NGINX Plus in the future, and it’s something we’re going to continue to iterate on and develop over the next few months.

The NGINX JavaScript module is a work in progress to which a growing set of ECMAScript 5.1 and other global functions is being added

It is a work in progress. We’ve targeted a subset of ECMAScript 5.1. We have implemented our own JavaScript interpreter and runtime that runs directly within the core of NGINX and interfaces cleanly with both the event model and the memory management model that NGINX uses.

We’re building out a growing set of both global functions and built‑in objects: date, time, string, other objects like that, and core functions. It’s still a work in progress so the internal API may change in the next release, but our target is to lock that down; build out the core interfaces between the nginScript language, the NGINX configuration, and the core runtime; and then build out the range of both global functions and built‑in JavaScript capabilities that we can support, to help you build rich and articulate applications using NGINX.

30:17 Additional Features

Section title card for 'Additional Features'

There are a number of additional features that we’ve added as part of NGINX Plus R10.

New features in NGINX Plus R10 include closer parity between TCP/UDP and HTTP load balancing, plus support for the IP_BIND_ADDRESS_NO_PORT socket option to help with ephemeral port exhaustion [NGINX Plus R10 webinar]

I mentioned at the start some of the work we had done in previous releases to deliver the same level of performance and functionality for UDP and TCP load balancing that we currently offer for HTTP. That gap is continuing to close as more of the functions you’re familiar with using for HTTP traffic become available in the Stream module that handles TCP and UDP requests.

These include capabilities like Split Clients for sharing traffic between different servers, the GeoIP and Geo modules, and the Map module for making complex decisions based on other NGINX variables. And of course, you can also use nginScript as part of the evaluation of the configuration for Stream services as well as HTTP.
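
For instance, here’s a hedged sketch of Split Clients in the Stream context, sending a small share of TCP connections to a canary group; the addresses, port, and percentages are illustrative:

    stream {
        # Pin each client (keyed by address) to one side of a 5%/95% split
        split_clients $remote_addr $chosen_upstream {
            5%  canary_servers;
            *   stable_servers;
        }

        upstream canary_servers { server 10.0.0.21:3306; }
        upstream stable_servers { server 10.0.0.11:3306; }

        server {
            listen 3306;
            proxy_pass $chosen_upstream;   # variable support selects the group per connection
        }
    }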

Another capability we’ve added is support for a new socket option called IP_BIND_ADDRESS_NO_PORT. This slightly obscurely named socket option provides some very significant scalability benefits for large‑scale applications that are handling very large numbers of connections. WebSocket is a great example, where you want to use NGINX Plus to manage a large number of long‑lived TCP connections that are kept open.

Without this capability, NGINX Plus is confined to the ephemeral port limit of a Linux server. Essentially, that means you can’t make more than about 60,000 TCP connections to your upstream servers from an individual NGINX instance. You can overcome that limit by adding additional IP addresses to the NGINX instance and putting some decision logic in place to pick the source IP address for each connection, but you still have a similar limit – it just scales linearly with the number of IP addresses – and it can affect you if you’re running a large number of persistent connections through NGINX Plus.

The Linux kernel capability that supports IP_BIND_ADDRESS_NO_PORT overcomes that limit, and it allows NGINX Plus to make upwards of 60,000 TCP connections to each upstream server. The more upstream servers you have, the more concurrent TCP connections you can run from NGINX Plus.

The limit comes down to the IP 4‑tuple: the combination of source IP address and port and destination IP address and port, which must be unique for every single TCP connection. IP_BIND_ADDRESS_NO_PORT relaxes one of the constraints in that calculation. This capability is turned on by default on modern Linux kernels, and it will allow you to scale your web application, particularly ones using WebSocket, far beyond the levels that you could previously.
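
As a rough worked example: with a single source IP address on the NGINX Plus host and a single upstream server listening on one port, only the source port in that 4‑tuple can vary, which caps you at roughly 64,000 concurrent connections in total. When the port is instead allocated at connect() time, that budget of roughly 64,000 ports applies per destination, so ten upstream servers allow on the order of 640,000 concurrent upstream connections from the same NGINX Plus instance.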

33:53 Additional Features (Continued)

New features in NGINX Plus R10 include a unique ID assigned to every request for tracing purposes, and new HTTP/2 processing functionality [NGINX Plus R10 webinar]

We’ve added an additional transaction ID variable [$request_id], randomly generated, that you can use in logging or pass through in headers to the upstream servers. You can then track an individual transaction as it moves through multiple tiers of web servers, load balancers, and web applications. Improvements to HTTP/2 allow us to buffer HTTP request bodies and stream them through to upstream servers, with similar improvements to the way we stream response bodies back to clients.
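
Returning to the transaction ID, here’s a minimal sketch of how $request_id might be used for tracing; the log format name, header name, and upstream are illustrative:

    # Record the generated ID in the access log and pass it to the next tier
    log_format trace '$remote_addr - $request_id - "$request" $status';

    server {
        listen 80;
        access_log /var/log/nginx/access_trace.log trace;

        location / {
            proxy_set_header X-Request-ID $request_id;   # downstream tiers can log the same ID
            proxy_pass       http://app_backend;         # hypothetical upstream
        }
    }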

These changes reflect a continued investment in ensuring that we can build out the most scalable and high‑performance HTTP/2 implementation available.

34:50 Summary

In summary, NGINX Plus R10 features include the ModSecurity WAF for application security, native JWT support, 'dual-stack' RSA-ECC certificates, transparent proxy, and nginScript

In summary, NGINX Plus R10 doubles down on security, with the addition of our ModSecurity WAF, our support for JWT tokens and OpenID handshakes, and support for dual certificates on an individual virtual server.

We’ve provided additional options for network configuration to support IP Transparency and Direct Server Return. We’ve further closed the functionality gap between UDP and TCP load balancing, and added more feature‑rich HTTP load balancing capabilities in NGINX Plus.

We’re previewing the next stage of nginScript, and I’d love to get your feedback on that as you use it in test and production. And of course, there’s the range of other capabilities and features we talked about.

To try out all the great new features in NGINX Plus R10 for yourself, start your free 30-day trial today or contact us to discuss your use cases.

[Editor – NGINX ModSecurity WAF officially went End-of-Sale as of April 1, 2022 and is transitioning to End-of-Life effective March 31, 2024. For more details, see F5 NGINX ModSecurity WAF Is Transitioning to End-of-Life on our blog.]




About The Author

Owen Garrett

Sr. Director, Product Management

Owen is a senior member of the NGINX Product Management team, covering open source and commercial NGINX products. He holds a particular responsibility for microservices and Kubernetes‑centric solutions. He’s constantly amazed by the ingenuity of NGINX users and still learns of new ways to use NGINX with every discussion.
