This article describes how to configure and use HTTP health checks in NGINX Plus and open source NGINX.

Overview

NGINX and NGINX Plus can continually test your upstream servers, avoid the servers that have failed, and gracefully add the recovered servers into the load‑balanced group.

Prerequisites

  • Open source NGINX or NGINX Plus for passive health checks
  • NGINX Plus for passive and active health checks and the monitoring dashboard
  • A load‑balanced group of HTTP upstream servers

Passive Health Checks

For passive health checks, NGINX and NGINX Plus monitor transactions as they happen, and try to resume failed connections. If the transaction still cannot be resumed, NGINX and NGINX Plus mark the server as unavailable and temporarily stop sending requests to it until it is marked active again.

The conditions under which an upstream server is marked unavailable are defined for each upstream server with the parameters to the server directive in the upstream block:

  • fail_timeout – Sets the time during which a number of failed attempts must happen for the server to be marked unavailable, and also the time for which the server is marked unavailable (default is 10 seconds).
  • max_fails – Sets the number of failed attempts that must occur during the fail_timeout period for the server to be marked unavailable (default is 1 attempt).

In the following example, if NGINX fails to send a request to a server or does not receive a response from it 3 times in 30 seconds, it marks the server as unavailable for 30 seconds:

upstream backend {
    server backend1.example.com;
    server backend2.example.com max_fails=3 fail_timeout=30s;
}

Server Slow Start

A recently recovered server can be easily overwhelmed by connections, which may cause the server to be marked as unavailable again. Slow start allows an upstream server to gradually recover its weight from zero to its nominal value after it becomes available again. This can be done with the slow_start parameter to the upstream server directive:

upstream backend {
    server backend1.example.com slow_start=30s;
    server backend2.example.com;
    server 192.0.0.1 backup;
}

The time value (here, 30 seconds) sets the period over which the server recovers its full weight.

Note that if there is only a single server in a group, the fail_timeout, max_fails, and slow_start parameters are ignored and the server is never marked unavailable.
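To illustrate this caveat, here is a minimal sketch of a single-server group. The parameters are accepted syntactically but have no effect, because NGINX never marks the group's only server unavailable:

    upstream backend {
        # With a single server in the group, max_fails and fail_timeout
        # are ignored and the server is never marked unavailable.
        server backend1.example.com max_fails=3 fail_timeout=30s;
    }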

Active Health Checks

NGINX Plus can periodically check the health of upstream servers by sending special health check requests to each server and verifying the responses.

To enable active health checks:

  1. In the location that passes requests to the upstream group (proxy_pass), specify the health_check directive:

    server {
        location / {
            proxy_pass http://backend;
            health_check;
        }
    }
  2. Specify a shared memory zone for the upstream server group with the zone directive:

    http {
        upstream backend {
            zone backend 64k;

            server backend1.example.com;
            server backend2.example.com;
            server backend3.example.com;
            server backend4.example.com;
        }
    }

This configuration defines an upstream group backend and a virtual server with a single location that passes all requests to the upstream group. It also turns on advanced health monitoring with default parameters: every five seconds NGINX sends a request for / to each server in the backend group. If any communication error or timeout occurs (or a proxied server responds with a status code other than 2xx or 3xx) the server fails the health check. It is marked as unhealthy, and NGINX Plus does not send client requests to it until it once again passes a health check.

The zone directive defines a memory zone that is shared among worker processes and is used to store the configuration of the upstream group. This enables the worker processes to use the same set of counters to keep track of responses from the servers in the group. The zone directive also makes the group dynamically configurable.
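Taken together, the two steps above can be sketched as a single configuration (the server names are the sample values used throughout this article):

    http {
        upstream backend {
            # Shared memory zone, required for active health checks
            zone backend 64k;

            server backend1.example.com;
            server backend2.example.com;
        }

        server {
            location / {
                proxy_pass http://backend;
                # Active health checks with default parameters
                health_check;
            }
        }
    }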

The defaults for active health checks can be overridden with parameters to the health_check directive:

location / {
    proxy_pass http://backend;
    health_check interval=10 fails=3 passes=2;
}

Here, the interval parameter increases the delay between health checks to 10 seconds (from the default 5 seconds). The fails parameter requires the server to fail three health checks to be marked as unhealthy (up from the default one). Finally, the passes parameter means the server must pass two consecutive checks to be marked as healthy again (instead of the default one).

Specifying the Requested URI

Use the uri parameter to set the URI to request in a health check:

location / {
    proxy_pass http://backend;
    health_check uri=/some/path;
}

The specified URI is appended to the server domain name or IP address set for the server in the upstream block. For the first server in the sample backend group declared above, a health check requests the URI http://backend1.example.com/some/path.

Defining Custom Conditions

Finally, it is possible to set custom conditions that the response must satisfy for the server to pass the health check. The conditions are defined in a match block, which is referenced in the match parameter to the health_check directive.

http {
    ...

    match server_ok {
        status 200-399;
        body !~ "maintenance mode";
    }

    server {
        ...

        location / {
            proxy_pass http://backend;
            health_check match=server_ok;
        }
    }
}

Here the health check is passed if the status code of the response is in the range 200 through 399, and its body does not contain the string maintenance mode.

The match directive enables NGINX Plus to check the status code, header fields, and the body of a response. Using this directive it is possible to verify whether the status is in a specified range, whether a response includes a header, or whether the header or body matches a regular expression. The match directive can contain one status condition, one body condition, and multiple header conditions. A response must satisfy all conditions defined in the match block for the server to pass the health check.

For example, the following match directive matches responses that have status code 200, the exact value text/html in the Content-Type header, and the text Welcome to nginx! in the body:

match welcome {
    status 200;
    header Content-Type = text/html;
    body ~ "Welcome to nginx!";
}
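A header condition can also use a regular expression rather than an exact value. As a sketch (the block name json_ok is illustrative), the following accepts any response whose Content-Type contains application/json:

match json_ok {
    status 200;
    # ~ matches the header value against a regular expression
    header Content-Type ~ "application/json";
}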

The following example uses the exclamation point (!) to negate a condition, defining characteristics the response must not have for the health check to pass. In this case, the health check passes when the status code is something other than 301, 302, 303, or 307, and there is no Refresh header.

match not_redirect {
    status ! 301-303 307;
    header ! Refresh;
}

Health checks can also be enabled for non-HTTP protocols, such as FastCGI, memcached, SCGI, and uwsgi, and also for TCP and UDP.
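For TCP, the health_check directive is used in the stream module instead of the http module. A minimal sketch (the upstream name, server address, and port are illustrative); by default, a successful TCP connection to the server counts as a passing check:

    stream {
        upstream tcp_backend {
            # A shared memory zone is required here as well
            zone tcp_backend 64k;
            server backend1.example.com:12345;
        }

        server {
            listen 12345;
            proxy_pass tcp_backend;
            # Default TCP health check: connect succeeds, check passes
            health_check;
        }
    }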