
VMware is the leading company in enterprise virtualization. Among other capabilities, virtualization enables multiple operating systems to run simultaneously on the same hardware. For example, you can run Windows and Linux at the same time on the same server, or run Windows on your Mac laptop.

For application owners and developers, virtualization separates the application from the hardware it runs on, achieving a new level of freedom and portability. VMware has an extensive range of widely trusted offerings for enterprise virtualization, and a broad array of partnerships for access to compatible products and services.

Cloud offerings can be seen as a new level of virtualization. But the leaders in cloud computing, such as Amazon Web Services and Google Cloud Platform, don’t start with the hardware, software, and solutions that a company already has in‑house; instead, they offer their own platform, often running third‑party, open source tools, which customers can access as needed.

Tech analyst firm IDC predicts that “3rd Platform solutions” – that means cloud – will disrupt one‑third of the leaders in every industry by 2018.

VMware’s cloud offering, vCloud Air, retains some aspects of the company’s roots in virtualization. For instance, vCloud Air is seen as favoring VMware’s own management tools, such as vCenter and vRealize, which interoperate well with VMware private clouds – clouds that run on a customer’s hardware and inside the customer’s firewall.

NGINX Plus and open source NGINX help bridge the gap between cloud implementations and onsite deployments of enterprise software. NGINX is extremely popular on AWS, used by 40% of all AWS implementations. (This mirrors usage of NGINX across the 1 million busiest websites worldwide.) Statistics are not available for vCloud Air, but NGINX appears to be increasingly popular in the VMware world as well.

We hope the vCloud Air performance tips offered here will help to improve the performance of your current implementation and introduce you to new tools and approaches for future growth and improvement.

Note: For improved performance for all your web applications, wherever they’re hosted, see 10 Tips for 10x Performance.

Tip 1 – Plan for Performance

Cloud computing is helping organizations solve problems faster and at less cost – but every form of computing has its challenges. You have to plan carefully to get the most out of existing investments and make new investments wisely.

This concern is especially pertinent for VMware customers, many of whom are using vCloud Air to extend existing applications, or to smooth the transition into the cloud for developers working with legacy tools, software licenses, and workflow approaches. A recent report from the UK about adoption of cloud computing in the public sector shows that more than 80% of respondents had found at least some hidden costs in their cloud deployments.

The biggest issue, affecting 44% of those polled, was interoperability between the existing IT infrastructure and new cloud platforms – exactly the area where VMware hopes to make things easier for its existing customers with vCloud Air. There may be instances where existing in‑house deployment makes sense; where vCloud Air is a good move; and where a “pure play” cloud approach, such as that offered by other vendors, is better for the long term.

To make the move to cloud effectively, consider the following plan:

  1. Start small – Use cloud for an application where it most obviously makes sense, such as a new app that’s similar to apps that do well in the cloud today. Use a DevOps approach and see how much change this requires to your current working approach.
  2. Forecast – See where your needs are going over a timeframe of 3–5 years. (The competitive landscape and customer expectations are changing rapidly for most companies, largely due to IT innovation, so this is not easy.) Identify the best architecture for your IT services in the future, especially what mix of private and public cloud services you will want to be using.
  3. Backcast – Once you know where you want to go, imagine yourself in that future, then describe how you got to your new environment from where you are today. Identify which currently available technologies you will abandon rapidly, which you will adopt temporarily to ease the transition, and which you will count on going forward.
  4. Retrain and hire – Start moving your internal talent to the projects and skill‑building that will equip them for where you’re going, and see where you need to hire to fill gaps.

You are likely to find that many of your longer‑term needs are best met by a microservices approach, with massive applications decomposed into a collection of responsive, reusable services that you mix and match to meet customer requirements. You may also use elements of a service‑oriented architecture (SOA) approach. For a detailed comparison of the two approaches, download this report from O’Reilly Media.

The tips listed here, as well as most of the suggestions on the NGINX blog, follow a common pattern: you begin the move to a microservices‑type architecture, quickly and affordably – often saving money right away – with minimal initial changes to your existing applications. Then, over time, you can make further changes opportunistically. This approach is extremely flexible and can be used over and over across internal, private cloud, and public cloud deployments.

Tip 2 – Optimize Your vCloud Air Implementation

Your vCloud Air implementation needs tuning and adjustment, just like a physical server. In fact, vCloud Air may be somewhat more machine‑dependent than some other public clouds. Consider these steps to improve the operation of your vCloud Air environment:

  • Use VMXNET3 for networking – VMXNET3 is a newer virtual network adapter that is specific to VMware rather than an emulation of standard hardware. Also install the VMware management tools: they might not increase performance directly, but they make file management easier, at the cost of reduced portability to other clouds. NGINX runs normally in this environment.
  • Take the memory and CPU you need – In vCenter, you can reserve the server resources you need – up to and including an entire physical machine – for use by a VM. If you know the maximum resources you are likely to use, performance improves when you allocate them all in advance: work isn’t interrupted by operations to reallocate CPU or memory as working space grows and shrinks.
  • Take all the disk you need – As with CPUs and memory, allocating the maximum disk space you’re likely to use in advance eliminates the need for disk space allocation and deallocation operations.

Admittedly, some of these steps undermine the premise of virtualization – that you just use what you need while you need it, then give it back. But you have information about your application that the management software doesn’t have. Use that information to reduce the need for ongoing adjustments by the management software.

Tip 3 – Implement a Reverse Proxy Server

A reverse proxy server is a server that you put “in front of” your application servers. The reverse proxy server accepts traffic from Internet clients (typically web browsers) and is the single point of entry to a website. The reverse proxy server parcels out requests to backend servers (perhaps handling some itself).

A reverse proxy server isolates your application servers from direct exposure to Internet traffic – and it also isolates your Internet clients, that is, your users, from direct exposure to your application servers. A reverse proxy server provides many benefits:

  • Enables you to direct traffic to servers that can handle it, and not servers that are overloaded or crashed
  • Can cache static files, so your application servers don’t have to handle requests for them
  • Can support new protocols, such as SSL/TLS and HTTP/2
  • Gives you a focal point for monitoring and managing servers of different types and capacities as a pool of resources, rather than one device (whether physical or virtual) at a time

Implementing a reverse proxy server is the first, and most important, step in moving to a microservices‑based architecture. It gives you the architectural flexibility to take the other big‑picture steps needed to implement microservices, delivers key microservices benefits such as high availability, and lets you add specialized services, such as a separate debugging server – all while you move individual apps to microservices at your own pace.

A reverse proxy server, such as the NGINX reverse proxy server, spreads traffic and enables flexibility

A reverse proxy server is even more important in the cloud than when using servers you control directly. Performance and availability are harder to predict in the cloud – and spinning up new resources when existing ones don’t perform is easier in the cloud. Using a reverse proxy server gives you flexibility to get the job done, independent of the status of any given resources.
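As a concrete sketch, a minimal NGINX reverse proxy configuration looks something like the following; the server name and backend address are placeholders for illustration:

    # Minimal reverse proxy sketch (placeholder names and addresses)
    server {
        listen 80;
        server_name www.example.com;

        # To terminate SSL/TLS and HTTP/2 here as well, you would instead use
        # "listen 443 ssl http2;" plus ssl_certificate/ssl_certificate_key directives.

        location / {
            # Forward all requests to the application server
            proxy_pass http://10.0.0.10:8080;

            # Pass the original host and client address along to the backend
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }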

Tip 4 – Cache Static and Dynamic Files

One of the great advantages of using NGINX as a reverse proxy server is its ability to cache static files. With caching, a high percentage of file requests – roughly half of all requests, for many websites – are handled on the reverse proxy server, significantly offloading the application servers.

Both the storage needed for cached files and the speed of retrieving them are relatively predictable, which reduces variability in your application’s performance and in page load times.

To optimize performance, you can move cached files to different locations. Also, there’s an art to matching cloud instances to specific CPU, memory, and storage needs. By using your reverse proxy server for caching, and a separate server to run the application, you can optimize the two instances:

  • The reverse proxy server runs NGINX and has limited and predictable RAM needs, as NGINX is famously conservative with memory, even when supporting many thousands of simultaneous users. When used for caching, the server has large disk needs, which are predictable as the sum of the disk space needed for all the static files in a site.
  • The application server runs your application software, such as PHP, so it therefore has variable and hard‑to‑predict RAM needs. It has limited disk access needs as static files are all on the caching server.

This division of labor can help save you money and improve performance simultaneously. Once an instance is optimized, you can grab the resources you’re likely to need up front, improving performance during runtime. And these fine‑tuned instances reduce overall resource use and thereby cost.
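As a rough sketch of caching on the reverse proxy, the configuration below stores responses from the application server on local disk; the paths, sizes, and timeouts are placeholder values you would tune for your own site:

    # Caching sketch: store backend responses on the proxy (placeholder values)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                     max_size=10g inactive=60m use_temp_path=off;

    server {
        listen 80;

        location / {
            proxy_cache static_cache;
            proxy_cache_valid 200 302 10m;   # keep successful responses for 10 minutes
            proxy_cache_valid 404      1m;   # cache "not found" responses briefly
            add_header X-Cache-Status $upstream_cache_status;   # expose hits/misses for debugging
            proxy_pass http://10.0.0.10:8080;                    # placeholder backend address
        }
    }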

Dynamic files can also be cached, by making them a bit less dynamic; this is called microcaching. If a page is requested ten times a second, for instance, you can cache it for just one second. The first request causes a new page to be generated, and the next nine are retrieved from the cache; a new page isn’t generated again until the second has passed. A sketch appears below.
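Building on the caching sketch above, microcaching is just a very short validity period on a dynamic location; the path and backend address here are placeholders:

    # Microcaching sketch: cache dynamic pages for one second (placeholder path and address)
    location /dynamic/ {
        proxy_cache static_cache;             # reuse the cache zone defined above
        proxy_cache_valid 200 1s;             # cached copies expire after one second
        proxy_cache_use_stale updating;       # keep serving the old copy while a new one is generated
        proxy_pass http://10.0.0.10:8080;
    }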

You can learn more about caching with NGINX and NGINX Plus in our caching guide.

Tip 5 – Implement Load Balancing

Load balancing distributes requests across multiple application servers, with the goal of stopping any one server from becoming a bottleneck. Given that server performance is more variable in a cloud environment than when using servers you control directly, load balancing is even more important in the cloud than elsewhere.

Load balancing helps with several potential performance issues found in cloud environments:

  • Running out of memory – It’s all too easy to run out of memory on an instance. Load balancing, along with the other tips listed here, helps you adjust the demands you make on each instance to its available memory.
  • Inconsistent performance – Instances of the same type can vary in performance. Load balancing helps you avoid bottlenecks at any one instance, whether that’s due to transient problems, slow performance over time, or overloading.
  • Bandwidth constraints – Cloud storage volumes may have less bandwidth than hardware disks. Load balancing and other tips listed here help you spread bandwidth demands across instances.

vCloud Air offers multiple options for load balancing within vCloud, as described in the vCloud documentation. These options are useful, but not as powerful as load balancing and other options available elsewhere.

Both open source NGINX and NGINX Plus support load balancing, and load balancing with NGINX is independent of any specific cloud or server vendor. For greater control, use NGINX Plus, which adds finer configuration control, precise logging, more load‑balancing options, application health checks, session persistence, SSL termination, HTTP/2 termination, WebSocket support, caching, and dynamic configuration of load‑balanced server groups.
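As a sketch, basic load balancing in NGINX needs only an upstream group and a proxy_pass; the addresses below are placeholders:

    # Load balancing sketch: spread requests across three application servers (placeholder addresses)
    upstream app_servers {
        least_conn;                                          # send each request to the least busy server
        server 10.0.0.10:8080 max_fails=3 fail_timeout=30s;  # mark a server failed after 3 errors
        server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
        server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app_servers;
        }
    }

With NGINX Plus, you could also add a health_check directive inside the location block to probe the servers actively rather than waiting for client requests to fail.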

To learn more about how software load balancing improves performance, download our ebook, Five Reasons to Choose a Software Load Balancer. We have lots of resources about load balancing with NGINX and NGINX Plus, including basic configuration instructions and guidance on choosing a load balancing technique.

Tip 6 – Benchmark and Monitor App Performance

Reasons for moving to the cloud include flexibility, greater performance, and reduced cost. Costs for cloud‑based apps are very different from those for traditionally hosted apps. To compare cloud implementations to others, and to better serve users, you need to benchmark and monitor app performance.

There are a number of parameters you can measure minute by minute and compare to standards you establish, including:

  • Time to first byte appearing onscreen
  • Load time for a full page
  • Transactions initiated
  • Transactions completed

You can also measure and monitor app development time and support requirements to get a complete picture. With this information in hand, you can increase available resources to ease bottlenecks and release unneeded resources to reduce costs.
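On the server side, NGINX can log per‑request timings that make a useful complement to these end‑user metrics. A sketch, with placeholder log name and path:

    # Timing-log sketch: record per-request and upstream response times for baselining
    log_format timing '$remote_addr "$request" $status '
                      'request_time=$request_time upstream_time=$upstream_response_time';
    access_log /var/log/nginx/timing.log timing;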

vCloud Air includes a Monitoring tab with basic metrics. You can also use vRealize Operations Manager for local servers and applications. vCenter Hyperic gathers monitoring information across on‑premises and vCloud Air instances and feeds it to vRealize Operations Manager.

NGINX Plus adds advanced health checks and sophisticated, live activity monitoring. You can view the dashboard in NGINX Plus directly or feed NGINX Plus statistics to other dashboards and third‑party tools. You get stats for connections, requests, uptime, and more – per server, per group, and across your entire deployment.
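As a sketch, assuming a recent NGINX Plus release with the API module, the live activity monitoring dashboard can be exposed on a restricted port like this (the port and allowed network are placeholders):

    # NGINX Plus only: expose the live activity monitoring API and dashboard (placeholder values)
    server {
        listen 8080;
        allow 10.0.0.0/8;                     # restrict access to your management network
        deny all;

        location /api {
            api;                              # JSON metrics endpoint (NGINX Plus API module)
        }

        location = /dashboard.html {
            root /usr/share/nginx/html;       # default location of the built-in dashboard page
        }
    }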

The stats can be exported to NGINX partners such as Datadog, Dynatrace, and New Relic.

Conclusion

vCloud Air can add a great deal to your app deployment options. The steps suggested here are valuable ways to improve vCloud Air performance while keeping costs down.

These steps are also valuable for non‑cloud and for mixed deployments. The flexibility you gain by, for instance, using a reverse proxy server can allow you to mix and match resources across all types of environments.

NGINX Plus is an especially capable tool for cloud and mixed deployments, including additional load balancing options and monitoring improvements. Start your free 30‑day trial today or contact us to discuss your use cases.

We invite you to share your comments below, or tweet your observations with the hashtags #NGINX and #webperf.




About The Author

Floyd Smith

Director of Content Marketing

Floyd Earl Smith has been involved in application development since the launch of the Macintosh and has written more than 20 books on hardware and software topics. He now writes for the NGINX blog, including contributing to blog posts and webinars about the NGINX Microservices Reference Architecture, a breakthrough microservices framework.
