In August 2016, Amazon Web Services (AWS) introduced Application Load Balancer for Layer 7 load balancing of HTTP and HTTPS traffic. The new product added several features missing from AWS’s existing Layer 4 and Layer 7 load balancer, Elastic Load Balancer, which was officially renamed Classic Load Balancer.
A year later, AWS launched Network Load Balancer for improved Layer 4 load balancing, so the set of choices for users running highly available, scalable applications on AWS includes:
- NGINX Open Source and NGINX Plus
- Classic Load Balancer
- Application Load Balancer (ALB)
- Network Load Balancer (NLB)
In this post, we review ALB’s features and compare its pricing and features to NGINX Open Source and NGINX Plus.
- The information about supported features is accurate as of July 2020, but is subject to change.
- For a direct comparison of NGINX Plus and Classic Load Balancer (formerly Elastic Load Balancer or ELB), as well as information on using them together, see our previous blog post.
- For information on using NLB for a high‑availability NGINX Plus deployment, see our previous blog post.
Features in Application Load Balancer
ALB, like Classic Load Balancer or NLB, is tightly integrated into AWS. Amazon describes it as a Layer 7 load balancer – though it does not provide the full breadth of features, tuning, and direct control that a standalone Layer 7 reverse proxy and load balancer can offer.
ALB provides the following features that are missing from Classic Load Balancer:
- Content‑based routing. ALB supports content‑based routing based on the request URL, Host header, and other request fields, including standard and custom HTTP headers and methods, query parameters, and source IP address. (See “Benefits of migrating from a Classic Load Balancer” in the ALB documentation.)
- Support for container‑based applications. ALB improves on the existing support for containers hosted on Amazon’s EC2 Container Service (ECS).
- More metrics. You can collect metrics on a per‑microservice basis.
- WebSocket support. ALB supports persistent TCP connections between a client and server.
- HTTP/2 support. ALB supports HTTP/2, a superior alternative when delivering content secured by SSL/TLS.
(For a complete feature comparison of ALB and Classic Load Balancer, see “Product comparisons” in the AWS documentation.)
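To illustrate, this kind of content‑based routing is native to NGINX configuration. A minimal sketch follows; the hostnames, paths, and addresses are hypothetical:

```nginx
upstream api_servers { server 10.0.1.10:8080; server 10.0.1.11:8080; }
upstream web_servers { server 10.0.2.10:8080; }

# Route by Host header: each virtual server matches a different hostname.
server {
    listen 80;
    server_name www.example.com;

    # Route by request URL within the virtual server.
    location /api/ {
        proxy_pass http://api_servers;
    }
    location / {
        proxy_pass http://web_servers;
    }
}
```

Routing on other request attributes (headers, query parameters, source IP address) can be expressed with `map` blocks and NGINX variables in the same configuration file.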
ALB was a significant update for AWS users who had struggled with Classic Load Balancer’s limited feature set, and it went some way towards addressing the requirements of sophisticated users who need to be able to secure, optimize, and control the traffic to their web applications. However, it still does not provide all the capabilities of dedicated reverse proxies (such as NGINX) and load balancers (such as NGINX Plus).
A Better Approach to Control Traffic on AWS
Rather than using Amazon ALB, users can deploy NGINX Open Source or NGINX Plus on AWS to control and load balance traffic, optionally with Classic Load Balancer or Network Load Balancer as a frontend to achieve high availability across multiple Availability Zones. The table below compares the features supported by ALB, NGINX Open Source, and NGINX Plus.
Note: The information in the following table is accurate as of July 2020, but is subject to change.
| Feature | Amazon ALB | NGINX Open Source | NGINX Plus |
|---|---|---|---|
| Load‑balancing methods and features | Round Robin and Least Outstanding Requests | Multiple load‑balancing methods (including Round Robin, Least Connections, Hash, IP Hash, and Random) with weighted upstream servers | Same as NGINX Open Source, plus Least Time method, more session persistence methods, and slow start |
| Caching | ❌ Caching in the load balancer not supported | ✅ Static file caching and dynamic (application) caching | ✅ Static and dynamic caching plus advanced features |
| Health checks | Active (identifies failed servers by checking the status code returned to asynchronous checks) | Passive (identifies failed servers by checking responses to client requests) | Both active and passive; active checks are richer and more configurable than ALB’s |
| High availability | You can deploy ALB instances in multiple Availability Zones for HA, but not across regions | Active‑active HA with NLB and active‑passive HA with Elastic IP addresses | Same as NGINX Open Source, plus built‑in cluster state sharing for seamless HA across all NGINX Plus instances |
| Support for all protocols in the IP suite | ❌ HTTP and HTTPS only | ✅ Also TCP and UDP, with passive health checks | ✅ Also TCP and UDP, with active and passive health checks |
| Multiple applications per load balancer instance | ✅ | ✅ | ✅ |
| Content‑based routing | ✅ Based on request URL, Host header, HTTP headers and methods, query parameters, and source IP address | ✅ | ✅ |
| Containerized applications | Can load balance to EC2 instance IDs, ECS container instances, Auto Scaling groups, and AWS Lambda functions | Requires manual configuration or configuration templates | Automated configuration using DNS, including SRV records |
| Portability | ❌ All environments (dev, test, and production) must be on AWS | ✅ Any environment, on any deployment platform | ✅ Any environment, on any deployment platform |
| SSL/TLS | ✅ Multiple SSL/TLS certificates with SNI support<br>❌ Validation of SSL/TLS upstreams not supported | ✅ Multiple SSL/TLS certificates with SNI support<br>✅ Full choice of SSL/TLS ciphers<br>✅ Full validation of SSL/TLS upstreams | ✅ Same as NGINX Open Source |
| HTTP/2 and WebSocket | ✅ | ✅ | ✅ |
| Authentication capabilities | ✅ OIDC, SAML, LDAP, AD IdP authentication options<br>✅ Integrated with AWS Cognito and CloudFront | ✅ Basic authentication and subrequest‑based authentication | ✅ Multiple authentication options |
| Advanced capabilities | ❌ Barebones API | ✅ Origin serving, prioritization, rate limiting, and more | ✅ Same as NGINX Open Source, plus RESTful API and key‑value store |
| Logging and debugging | ✅ Amazon binary log format | ✅ Customizable log files and multiple debug interfaces | ✅ Fully customizable log files and multiple debug interfaces, fully supported by NGINX |
| Monitoring tools | ✅ Integrated with Amazon CloudWatch | ✅ NGINX Controller* and other third‑party tools | ✅ NGINX Controller* and other third‑party tools; extended set of reported statistics |
| Official technical support | ✅ At additional cost | ❌ Community support only | ✅ Included in price and direct from NGINX |
| Free tier | ✅ First 750 hours | ✅ Always free | ✅ 30‑day trial subscription |
* NGINX Controller is now F5 NGINX Management Suite.
Of course, you should evaluate your load balancing choice not by a feature checklist, but by assessing the capabilities you need to deliver your applications flawlessly, with high security, maximum availability, and full control.
Handling Traffic Spikes
Amazon’s Classic Load Balancer (formerly ELB) suffered from a poor response to traffic spikes. Load balancer instances were automatically sized for the current level of traffic, and it could take many minutes for ELB to respond and deploy additional capacity when spikes occurred. Users had to resort to a manual, forms‑based process to request additional resources in advance of traffic spikes (referred to as “pre‑warming”). Because ALB is based on NGINX, ALB instances can handle much more traffic, but you may still observe scaling events in response to traffic spikes. Furthermore, a traffic spike automatically results in greater consumption of Load Balancer Capacity Units (LCUs) and consequently a higher cost.
You can gain complete control over capacity and cost if you deploy and scale your load‑balancing proxies yourself. NGINX and NGINX Plus are deployed within standard Amazon instances, and our sizing guide gives an indication of the potential peak performance of instance types with different capacities. Pricing for NGINX Plus is the same for all instance sizes, so it’s cost‑effective to deploy excess capacity to handle spikes, and it’s quick to deploy more instances – no forms to complete – when more capacity is needed.
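Scaling the load‑balancing tier yourself then amounts to adding servers to an upstream group. A sketch, assuming two hypothetical instances in different Availability Zones:

```nginx
upstream app {
    least_conn;                      # send each request to the least-busy server
    server 10.0.1.10:8080 weight=2;  # larger instance takes twice the share
    server 10.0.2.10:8080 weight=1;  # instance in a second Availability Zone
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```

Adding capacity for an anticipated spike is a matter of launching another instance and adding one more `server` line.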
Detecting Failed Servers with Health Checks
Our testing of Amazon ALB indicates that it does not implement “passive” health checks. A server is only detected as having failed once an asynchronous test verifies that it is not returning the expected status code.
We discovered this by creating an ALB instance to load balance a cluster of instances. We configured a health check with the minimum 5-second frequency and minimum threshold of 2 failed checks, and sent a steady stream of requests to the ALB. When we stopped one of the instances, ALB returned a
502 Bad Gateway error for some requests for several seconds, until the health check detected that the instance was down. Passive health checks (supported by both NGINX and NGINX Plus) prevent these types of errors from being seen by end users.
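In NGINX, passive health checks are configured with the `max_fails` and `fail_timeout` parameters on each upstream server; the addresses below are hypothetical:

```nginx
upstream backend {
    # After 3 failed or timed-out client requests within 30 seconds,
    # NGINX marks the server unavailable for the next 30 seconds.
    server 10.0.1.10:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # Retry the request on another server when it fails, so the
        # client never sees the error that triggered the passive check.
        proxy_next_upstream error timeout http_502;
    }
}
```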
ALB’s health checks can only determine the health of an instance by inspecting the HTTP status code (such as 200 OK or 302 Found). Such health checks are unreliable; for example, some web applications replace server‑generated errors with user‑friendly pages, and some errors are reported only in the body of a web page.
NGINX Plus supports both passive and active health checks, and the latter are more powerful than ALB’s, able to match against the body of a response as well as the status code.
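For example, an NGINX Plus active health check can use a `match` block to test the response body as well as the status code; the names and addresses below are illustrative:

```nginx
upstream backend {
    zone backend 64k;      # shared memory zone, required for active health checks
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}

# A server passes only if it returns a 2xx/3xx status AND the
# response body does not contain the word "maintenance".
match server_ok {
    status 200-399;
    body !~ "maintenance";
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        health_check interval=5 fails=2 passes=2 match=server_ok;
    }
}
```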
Predicting the Cost of Load Balancing
Finally, the biggest question you face if you deploy ALB is cost. Load balancing can be a significant part of your Amazon bill.
AWS uses a complicated algorithm to determine pricing. Unless you know precisely how many new connections, how many concurrent connections, and how much data you will manage each month – which is very hard to predict – and can run the LCU calculation the same way Amazon does, you’ll be dreading your Amazon bill.
NGINX Plus on AWS gives you complete predictability. For a fixed hourly cost plus AWS hosting charges, you get a significantly more powerful load‑balancing solution with full support.
NGINX Plus is a proven solution for Layer 7 load balancing, with Layer 4 load‑balancing features as well. It works well in tandem with Amazon’s own Classic Load Balancer or NLB.
NGINX and NGINX Plus are already very popular in the AWS environment, and we encourage their continued and growing use. If you are not already an NGINX Plus user, start your free 30-day trial today or contact us to discuss your use cases.