
[Editor – The solution described in this blog relies on the NGINX Plus Status and Upstream Conf modules (enabled by the status and upstream_conf directives). Those modules are replaced and deprecated by the NGINX Plus API in NGINX Plus Release 13 (R13) and later, and will not be available after NGINX Plus R15. For the solution to continue working, update the configuration files and scripts that refer to the two deprecated modules. See also Transitioning to the New NGINX Plus API for Configuration and Monitoring on our blog.]

Our previous blog posts about service discovery with Consul and etcd discussed the importance of service discovery in distributed systems, including service‑oriented and microservices architectures. In such systems it’s common to assign network locations to service instances dynamically, because their locations can change over time due to autoscaling, failures, or upgrades. To track the current network location of a service instance, clients need a sophisticated service discovery mechanism.

In this blog, we’ll explain how to dynamically add or remove load‑balanced servers that are registered with Apache ZooKeeper™, a tool used for service discovery, in combination with NGINX Plus’ dynamic configuration API. [Editor – This refers to the NGINX Plus Upstream Conf module, now replaced and deprecated by the NGINX Plus API.] With this solution you can automate service discovery and change the set of load‑balanced servers without having to reload the NGINX Plus configuration.
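
To give a concrete sense of that API, here is a minimal sketch of the calls involved, assuming the upstream group is named backend and the deprecated Upstream Conf handler is exposed at /upstream_conf on port 8080 (both names are assumptions for this sketch):

    # List the servers currently in the 'backend' upstream group
    curl 'http://localhost:8080/upstream_conf?upstream=backend'

    # Add a server to the group on the fly, without reloading NGINX Plus
    curl 'http://localhost:8080/upstream_conf?add=&upstream=backend&server=172.17.0.5:80'

    # Remove the server whose internal ID is 2
    curl 'http://localhost:8080/upstream_conf?remove=&upstream=backend&id=2'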

Apache ZooKeeper is a software project of the Apache Software Foundation that provides an open source distributed configuration service, synchronization service, and naming registry for large distributed systems. ZooKeeper nodes – or Znodes, as they are popularly called – store their data in a hierarchical name space, much like a file system or a tree data structure. All clients can read from and write to the nodes, making ZooKeeper a shared configuration service.
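
As a quick illustration of this name space, you can create and inspect Znodes with the zkCli.sh client that ships with ZooKeeper (the server address and paths below are illustrative):

    # Create a parent znode and a child znode holding a service address
    zkCli.sh -server localhost:2181 create /services ""
    zkCli.sh -server localhost:2181 create /services/hello "172.17.0.5:80"

    # List the children of /services and read a znode's data back
    zkCli.sh -server localhost:2181 ls /services
    zkCli.sh -server localhost:2181 get /services/hello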

To make it easier to combine the dynamic configuration API with ZooKeeper, we’ve created a sample project, zookeeper-demo, with step‑by‑step instructions for creating the configuration described in this blog post. In this post, we walk you through the proof of concept. Using tools like Docker, Docker Compose, and Homebrew, you can spin up a Docker‑based environment in which NGINX Plus load balances HTTP traffic to a couple of hello‑world applications, with all components running in separate Docker containers.
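
If you have Docker and Docker Compose installed, bringing up an environment like this typically takes just a few commands (the repository URL is an assumption here; follow the zookeeper-demo instructions for the exact steps):

    # Fetch the demo and start all of the containers in the background
    git clone https://github.com/nginxinc/NGINX-Demos.git
    cd NGINX-Demos/zookeeper-demo
    docker-compose up -d

    # Confirm that the containers are running
    docker-compose ps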

Editor – Demos are also available for other service discovery methods, such as Consul and etcd.

How the Demo Works

First we spin up a separate Docker container for each of the following apps:

  • ZooKeeper – Performs service discovery.
  • Registrator – Registers services with ZooKeeper. Registrator monitors the starting and stopping of containers and updates ZooKeeper about the state changes.
  • hello – Simulates a backend server. This is another NGINX, Inc. project: an NGINX web server that serves an HTML page containing the web server’s hostname, IP address, and port.
  • A second instance of the hello app – Simulates another backend server.
  • NGINX Plus – Load balances the above services.

The NGINX Plus container listens on the public port 80, and the built‑in NGINX Plus dashboard on port 8080. The ZooKeeper container listens on ports 2181, 2888, and 3888.
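
Assuming those ports are mapped to localhost, a quick smoke test of the wiring might look like this (the /status path for the deprecated Status module is an assumption):

    # Request the hello application through the NGINX Plus load balancer
    curl http://localhost/

    # Fetch live statistics from the Status module's JSON endpoint
    curl http://localhost:8080/status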

Registrator monitors Docker for new containers that are launched with exposed ports, and registers the associated services with ZooKeeper. By setting environment variables within the containers, we can be more explicit about how to register the services with ZooKeeper. For each hello‑world container, we set the SERVICE_TAGS environment variable to production to identify the container as an upstream server for NGINX Plus to load balance. When a container quits or is removed, Registrator removes its corresponding Znode entry from ZooKeeper automatically.
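
For reference, starting Registrator against a ZooKeeper backend and launching a tagged hello container might look roughly like this; the zookeeper:// registry URI, base path, and image names are assumptions for this sketch, not the demo’s exact commands:

    # Run Registrator against the local Docker socket, registering
    # services in ZooKeeper (URI and base path are assumptions)
    docker run -d \
        -v /var/run/docker.sock:/tmp/docker.sock \
        gliderlabs/registrator \
        zookeeper://zookeeper:2181/services

    # Start a hello container tagged as a production upstream server;
    # Registrator picks up its published port and SERVICE_TAGS value
    docker run -d -P -e SERVICE_TAGS=production nginxdemos/hello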

Finally, we use zk-tool, a tool written in Ruby and included in the sample demo, to set ZooKeeper watches that trigger an external handler (script.sh) every time the list of registered service containers changes. This bash script gets the list of all current NGINX Plus upstream servers, uses zk-tool to loop through all the containers registered with ZooKeeper that are tagged production, and uses the dynamic configuration API [again, referring to the deprecated Upstream Conf module] to add them to the NGINX Plus upstream group if they’re not listed already. It then removes from the upstream group any production‑tagged containers that are no longer registered with ZooKeeper.
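
A minimal bash sketch of such a handler follows. The endpoint URL, upstream group name, and the zk-tool list invocation are assumptions for illustration; the demo’s actual script.sh differs in detail:

    #!/bin/bash
    # Reconcile the NGINX Plus upstream group with the production-tagged
    # services registered in ZooKeeper (all names below are assumptions).
    UPSTREAM_CONF='http://localhost:8080/upstream_conf'
    UPSTREAM='backend'

    # Servers currently in the upstream group; the module replies with
    # lines such as "server 172.17.0.5:80; # id=3"
    current=$(curl -s "${UPSTREAM_CONF}?upstream=${UPSTREAM}")

    # address:port of each production-tagged service in ZooKeeper
    registered=$(zk-tool list --tag production)  # hypothetical invocation

    # Add registered servers that NGINX Plus doesn't know about yet
    for server in $registered; do
        echo "$current" | grep -qF "$server" ||
            curl -s "${UPSTREAM_CONF}?add=&upstream=${UPSTREAM}&server=${server}"
    done

    # Remove upstream servers no longer registered in ZooKeeper
    echo "$current" | while read -r _ server _ id; do
        [ -n "$server" ] || continue
        server=${server%;}   # strip the trailing semicolon
        id=${id#id=}         # strip the "id=" prefix
        echo "$registered" | grep -qF "$server" ||
            curl -s "${UPSTREAM_CONF}?remove=&upstream=${UPSTREAM}&id=${id}"
    done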

Summary

Using a script like the one in our demo to configure upstream groups based on the services registered with ZooKeeper automates upstream server configuration in NGINX Plus. It frees you from having to issue the API calls correctly by hand, and it shortens the delay between a service’s state change in ZooKeeper and the corresponding addition or removal of the server in the NGINX Plus upstream group.


Try out automated configuration of NGINX Plus upstream groups using ZooKeeper for yourself: the zookeeper-demo project includes step‑by‑step instructions for recreating this setup.


About The Author

Kunal Pariani

Technical Solutions Architect
