NGINXaaS for Azure enables enterprises to securely deliver high-performance applications in the cloud. Powered by NGINX Plus, it is a fully managed service that became generally available in January 2023. Since its release, we have continued to enhance NGINXaaS for Azure with new features, and more are on the way.

In this blog, we highlight some of the latest performance and security capabilities that let you enjoy more NGINX Plus benefits without having to deploy, maintain, and update your own NGINX Plus instances. For general information on NGINXaaS for Azure and its overarching capabilities, read Simplify and Accelerate Cloud Migrations with F5 NGINXaaS for Azure.

A diagram depicting the NGINXaaS for Azure architecture. NGINX Plus and Edge Routing are in a SaaS portion of the environment, while the customer's compute, key storage, monitoring, and other services are in the customer's Azure subscription.
Figure 1: Overview of NGINXaaS for Azure

Securing Upstream Traffic with mTLS

While reverse proxies require SSL/TLS for encrypting client-side traffic on the public internet, mutual TLS (mTLS) becomes essential to authenticate and ensure confidentiality on the server side. With the shift to Zero Trust, it’s also necessary to verify that upstream server traffic hasn’t been altered or intercepted.

A diagram of server-side TLS authentication. Both the client side and server side connections are encrypted, and both the NGINXaaS instance and app service are mutually authenticated.
Figure 2: mTLS with NGINXaaS

NGINXaaS for Azure now supports NGINX directives to secure upstream traffic with SSL/TLS certificates. With these directives, not only can you keep upstream traffic encrypted via mTLS, but you can also verify that the upstream servers present a valid certificate from a trusted certificate authority.
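As a minimal sketch, the relevant directives might be combined as follows (the upstream name, addresses, and certificate paths are placeholders for illustration):

upstream app_backend {
    server 10.0.1.10:443;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/public.crt;   # client-facing certificate
    ssl_certificate_key /etc/nginx/ssl/public.key;

    location / {
        proxy_pass https://app_backend;

        # Present a client certificate to the upstream (mTLS)
        proxy_ssl_certificate     /etc/nginx/ssl/client.crt;
        proxy_ssl_certificate_key /etc/nginx/ssl/client.key;

        # Verify the upstream server's certificate against a trusted CA
        proxy_ssl_trusted_certificate /etc/nginx/ssl/ca.crt;
        proxy_ssl_verify              on;
        proxy_ssl_verify_depth        2;
    }
}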

Certificate Rotation

A key part (pun intended) of using TLS certificates with NGINXaaS for Azure is securely managing those certificates through the use of Azure Key Vault (AKV). AKV keeps sensitive, cryptographic material secure and allows NGINXaaS for Azure to use those certificates while preventing accidental or intentional disclosure of the key material through the Azure portal.

Animation of certificate rotation with NGINXaas and Azure Key Vault. A new version of an existing certificate is loaded into Azure Key Vault, and then the new certificate is automatically rotated into the NGINXaaS instance.
Figure 3: Certificate rotation with Azure Key Vault

NGINXaaS for Azure can now automatically rotate certificates in your NGINX deployments whenever they are updated in AKV. New versions of certificates are rotated into your deployments within four hours.
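As a rough illustration, the certificate is referenced in the NGINX configuration by the file path assigned when it was added to the deployment; the paths below are hypothetical:

server {
    listen 443 ssl;

    # These paths map to a certificate and key stored in Azure Key Vault.
    # When a new certificate version is published in AKV, NGINXaaS for Azure
    # rotates it into the running deployment automatically.
    ssl_certificate     /etc/nginx/certs/app.crt;
    ssl_certificate_key /etc/nginx/certs/app.key;
}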

HTTP/2 Proxy (and Additional Protocol Options)

Close your eyes and think back to the year 1997. We were Tubthumping along with Chumbawamba, wearing our JNCO jeans (or Modrobes for any fellow Canadians out there), and HTTP/1.1 was released. At that time, most end users accessed the Internet over a dial-up modem, web pages only contained a few dozen elements, and when it came to user experience, bandwidth was a much greater concern than latency.

Twenty-five years later, a sizeable portion of web applications still use HTTP/1.1 to deliver content. This can be a problem. While HTTP/1.1 still works, it only allows delivery of one resource per connection at a time. Meanwhile, modern web apps may make hundreds of requests to update a user interface.

While most users now have considerably more bandwidth at their disposal, latency (ultimately constrained by the speed of light) hasn't improved at nearly the same pace. As a result, the cumulative latency of all those requests can have a significant impact on the perceived performance of your application. Modern browsers open multiple TCP connections to the same server, but requests on each connection are still sequential, meaning one slow resource can delay all the other resources queued behind it.

Take a look at F5’s homepage when it loads using only HTTP/1.1:
Timeline of f5.com accessed via HTTP/1.1. The total time approaches 2.5 seconds, and several requests are preceded by grey bars indicating queueing delays.
Figure 4: F5.com accessed via HTTP/1.1

See all those grey bars? That's valuable time the browser wastes while it waits for session establishment, sits blocked behind a single slow resource, or waits for a new TCP connection to become available.

Let’s enable HTTP/2 and try again:
Timeline of f5.com accessed via HTTP/2. The total time is under 2 seconds, and much less time is wasted on queueing-related delays.
Figure 5: The same request, but using HTTP/2

Much better. There are still a few slow resources, but they aren't holding up other requests, and significantly less time is spent on queueing-related delays. HTTP/2 keeps the same semantics that web browsers and servers are familiar with from HTTP/1.1 while adding new capabilities (e.g., request prioritization, header compression, and request multiplexing) that address some of the causes of poor perceived application performance.

Given such a clear difference, NGINXaaS for Azure now supports serving client requests over HTTP/2. Client-side connections are more likely to be impacted by longer round-trip times, so you can deliver that traffic with the latency-reducing benefits of HTTP/2 while leaving your upstream servers unchanged.
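A minimal sketch of enabling HTTP/2 on the client side (the server name, upstream, and certificate paths are placeholders):

upstream web_backend {
    server 10.0.2.10:8080;
}

server {
    # Accept HTTP/2 from clients over TLS
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        # Upstream connections can remain plain HTTP/1.1; no upstream changes needed
        proxy_pass http://web_backend;
    }
}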

Despite what some in the web application business might want to believe, we do recognize that there are additional protocol options beyond HTTP available to you. This is why the NGINX gRPC module is also now available in NGINXaaS for Azure. It provides several configuration directives for working with gRPC traffic, including error interception, header manipulation, and more.
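A hedged sketch of gRPC proxying with header manipulation and error interception (the upstream, certificate paths, and error mapping are illustrative):

upstream grpc_backend {
    server 10.0.2.20:50051;
}

server {
    listen 443 ssl http2;   # gRPC runs over HTTP/2
    ssl_certificate     /etc/nginx/ssl/grpc.crt;
    ssl_certificate_key /etc/nginx/ssl/grpc.key;

    location / {
        grpc_pass grpc://grpc_backend;
        grpc_set_header X-Real-IP $remote_addr;   # example header manipulation
    }

    # Example error interception: map an upstream failure to a gRPC-style response
    error_page 502 = /error502grpc;
    location = /error502grpc {
        internal;
        default_type application/grpc;
        add_header grpc-status 14;        # 14 = UNAVAILABLE
        add_header grpc-message "unavailable";
        return 204;
    }
}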

For traffic other than HTTP and gRPC, the stream module is now available in NGINXaaS for Azure. This module supports proxying and load balancing TCP and UDP traffic.
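A minimal sketch of TCP and UDP load balancing in the stream context (addresses and ports are placeholders):

stream {
    upstream dns_servers {
        server 10.0.3.10:53;
        server 10.0.3.11:53;
    }

    server {
        listen 53 udp;            # load balance DNS queries over UDP
        proxy_pass dns_servers;
    }

    server {
        listen 5432;              # proxy a TCP service (e.g., PostgreSQL)
        proxy_pass 10.0.3.20:5432;
    }
}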

Support for Layer 4 and Layer 7 Load Balancing

NGINXaaS for Azure can now function as both a Layer 4 (TCP/UDP) and a Layer 7 (HTTP/HTTPS) cloud-native load balancer. Why is this important? Instead of deploying two different services or load balancers, NGINXaaS for Azure enables you to deploy a single load balancer and configure it to operate at both layers at the same time, simplifying your cloud architecture and lowering your costs.

You can learn about the configuration here.
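As a rough sketch, a single configuration can serve both layers at once: an http context for Layer 7 traffic and a stream context for Layer 4 traffic (all names and addresses below are illustrative):

http {
    upstream web_backend {
        server 10.0.1.10:8080;
        server 10.0.1.11:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://web_backend;   # Layer 7 HTTP load balancing
        }
    }
}

stream {
    upstream mqtt_backend {
        server 10.0.4.10:1883;
        server 10.0.4.11:1883;
    }

    server {
        listen 1883;                         # Layer 4 TCP load balancing
        proxy_pass mqtt_backend;
    }
}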

Higher Capacity (Up to 160 NCUs)

NGINXaaS for Azure is a consumption-based service that is metered hourly and billed monthly in NGINX Capacity Units (NCUs). We recently doubled the maximum deployment capacity from 80 NCUs to 160 NCUs. Customers can start with a smaller deployment of 20 NCUs and scale up to 160 NCUs as workloads grow, or deploy up to 160 NCUs from the start.

Addition of NGINX Plus Directives

To provide an easy lift-and-shift configuration experience from on-premises deployments to our fully managed cloud offering, we have added support for many more NGINX Plus directives. You can find the full list of NGINX Plus directives supported in NGINXaaS for Azure here.

Get Started Today

We’re always improving and expanding the ways you can use NGINX and F5 technology to secure, deliver, and optimize every app and API – everywhere. With these and other new capabilities added to NGINXaaS for Azure, more Azure users can solve numerous app and API challenges with the power of NGINX Plus, without the overhead of managing additional VM or container infrastructure.

If you want to learn more about NGINXaaS for Azure, we encourage you to look through the product documentation. If you are ready to give NGINXaaS for Azure a try, please visit the Azure Marketplace or contact us to discuss your use cases.


About The Author

Adam Schumacher

Global Solutions Architect - Platforms & Infrastructure

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source project. We offer a suite of technologies for developing and delivering modern applications. Together with F5, our combined solution bridges the gap between NetOps and DevOps, with multi-cloud application services that span from code to customer.

Learn more at nginx.com or join the conversation by following @nginx on Twitter.