Web Server Load Balancing with NGINX Plus

Global server load balancing (GSLB) is the practice of balancing load (connections or requests) across two or more distinct data centers or points of presence (PoPs). It is used to achieve several goals:

  • Improving high availability (HA) – Route traffic away from a failed data center to operational ones.
  • Reducing latency – Route each client to the data center that is closest to it.
  • Improving performance by balancing load – Route connections to the data center that has the most capacity.
  • Managing cost – Route traffic to the lowest‑cost data center. This is most commonly used in a ‘cloud‑bursting’ scenario, where a higher‑cost data center is brought online only when its capacity is needed.

GSLB is generally implemented by managing the DNS resolution process; when a client makes a DNS lookup, it is given one or more IP addresses chosen by the GSLB process. The GSLB process is informed by health monitors (which determine which data centers have running services), load information, and proximity, which can be judged most easily using GeoIP location.
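To make the decision process concrete, here is a minimal sketch of DNS-based GSLB resolution logic, combining health status with geographic proximity. The PoP names, IP addresses, and coordinates are invented for illustration; a real GSLB service would also factor in load and use a GeoIP database rather than explicit client coordinates.

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def resolve(client_location, pops):
    """Answer a DNS lookup with the IP of the closest healthy PoP.

    `pops` maps PoP name -> dict with 'ip', 'healthy', and 'location';
    unhealthy PoPs are excluded before the proximity comparison.
    """
    candidates = [p for p in pops.values() if p["healthy"]]
    if not candidates:
        return None
    best = min(candidates,
               key=lambda p: haversine_km(client_location, p["location"]))
    return best["ip"]

pops = {
    "us-east": {"ip": "192.0.2.10", "healthy": True,  "location": (39.0, -77.5)},
    "eu-west": {"ip": "192.0.2.20", "healthy": True,  "location": (53.3, -6.3)},
    "us-west": {"ip": "192.0.2.30", "healthy": False, "location": (37.4, -122.1)},
}

# A client in London resolves to the Dublin PoP; a client near the failed
# us-west PoP is routed to the next-closest healthy one instead.
print(resolve((51.5, -0.1), pops))    # eu-west's IP
print(resolve((34.0, -118.2), pops))  # us-east's IP, since us-west is down
```

Because the health check runs before the proximity comparison, a failed data center simply drops out of the answer set, which is how GSLB achieves the HA goal described above.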

NS1 offers one of the most advanced GSLB solutions available as a service, with a rich API that allows PoPs to dynamically inform the NS1 servers about their availability and current loads. NGINX provides an integration agent so that NGINX Plus can provide load and availability data about itself, and the applications it is proxying, to the NS1 GSLB service. This agent runs alongside each NGINX Plus instance, monitoring its local load and the availability of backend services, and pushing metrics in real time to the NS1 API.
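The shape of such an agent can be sketched as follows. The status fields, URLs, and feed format here are illustrative assumptions, not the actual NGINX Plus or NS1 APIs; see the deployment guide referenced below for the real integration.

```python
import json
import urllib.request

# Hypothetical endpoints, for illustration only.
NGINX_STATUS_URL = "http://127.0.0.1:8080/api/status"  # local NGINX Plus status
NS1_FEED_URL = "https://api.example.com/v1/feed/pop1"  # NS1 data feed for this PoP

def build_feed(status):
    """Translate local NGINX status into a GSLB feed update.

    `status` is assumed to report how many backend servers are up and the
    current active connection count.
    """
    up = status["upstream_servers_up"]
    total = status["upstream_servers_total"]
    return {
        "up": up > 0,                      # remote health: is this PoP serving at all?
        "degraded": up < total,            # local capacity: some backends are down
        "connections": status["active_connections"],  # load, for central balancing
    }

def push_feed(feed, api_key):
    """POST the feed to the GSLB service (illustrative request shape)."""
    req = urllib.request.Request(
        NS1_FEED_URL,
        data=json.dumps(feed).encode(),
        headers={"X-NSONE-Key": api_key, "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

In a real deployment the agent would run this poll-and-push cycle on a short interval, so the DNS answers NS1 hands out track the live state of each PoP.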

The agent supports the following capabilities:

  • Remote health checks, so clients are not directed to an unavailable (down or otherwise unreachable) PoP
  • Local capacity checks, so clients are not directed to a PoP with insufficient working servers
  • Central capacity balancing, so clients are balanced across PoPs according to the current load at each PoP, and traffic is drained from PoPs that are overloaded
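The three capabilities above amount to a filter-then-weight decision. The sketch below shows one plausible way to express it; the field names and thresholds are assumptions for illustration, not NS1's actual configuration.

```python
def pop_eligible(pop, min_healthy_servers=1, max_load=0.9):
    """Decide whether a PoP should receive new clients.

    `pop` is a dict with illustrative fields: 'reachable' (remote health
    check), 'healthy_servers' (local capacity check), and 'load' (fraction
    of capacity in use).
    """
    if not pop["reachable"]:                         # remote health check failed
        return False
    if pop["healthy_servers"] < min_healthy_servers:  # too few working servers
        return False
    if pop["load"] >= max_load:                      # drain overloaded PoPs
        return False
    return True

def weights(pops):
    """Spread clients across eligible PoPs in proportion to spare capacity."""
    eligible = {name: p for name, p in pops.items() if pop_eligible(p)}
    spare = {name: 1.0 - p["load"] for name, p in eligible.items()}
    total = sum(spare.values()) or 1.0
    return {name: s / total for name, s in spare.items()}

pops = {
    "a": {"reachable": True,  "healthy_servers": 3, "load": 0.5},
    "b": {"reachable": True,  "healthy_servers": 2, "load": 0.95},  # overloaded
    "c": {"reachable": False, "healthy_servers": 5, "load": 0.1},   # unreachable
    "d": {"reachable": True,  "healthy_servers": 2, "load": 0.75},
}
print(weights(pops))  # only "a" and "d" receive traffic, "a" the larger share
```

Draining falls out naturally: as a PoP's load climbs past the threshold it is filtered out entirely, and below the threshold its share shrinks as its spare capacity does.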

The solution functions alongside other NS1 capabilities, such as geo‑proximal routing to direct each client to its closest PoP.

For complete instructions on deploying NS1 and the NGINX NS1 agent, check out our deployment guide Global Server Load Balancing with NS1 and NGINX Plus.



About The Author

Owen Garrett

Sr. Director, Product Management

Owen is a senior member of the NGINX Product Management team, covering open source and commercial NGINX products. He holds a particular responsibility for microservices and Kubernetes‑centric solutions. He’s constantly amazed by the ingenuity of NGINX users and still learns of new ways to use NGINX with every discussion.

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source project. We offer a suite of technologies for developing and delivering modern applications. Together with F5, our combined solution bridges the gap between NetOps and DevOps, with multi-cloud application services that span from code to customer.

Learn more, or join the conversation by following @nginx on Twitter.