This post applies to both NGINX Open Source and NGINX Plus, and the term NGINX represents both.
The Use Case – Transitioning to a New Application Server
We’re defining a two‑hour window during which we want our progressive switchover to take place, in this example from 5 to 7 p.m. After the first 12 minutes we expect 10% of clients to be directed to the new application server, then 20% of clients after 24 minutes, and so on. The following graph illustrates the transition.
One important requirement of this “progressive transition” configuration is that transitioned clients don’t revert to the original server – once a client has been directed to the new application server, it continues to be directed there for the remainder of the transition window (and afterward, of course).
We will describe the complete configuration below, but in brief, when NGINX processes a new request that matches the application that is being transitioned, it follows these rules:
- If the transition window has not started, direct the request to the old application server.
- If the transition window has elapsed, direct the request to the new application server.
- If the transition is in progress:
- Calculate the current (time) position in the transition window.
- Calculate a hash for the client IP address.
- Calculate the position of the hash in the range of all possible hash values.
- If the hash position is less than the current position in the transition window, direct the request to the new application server; otherwise direct the request to the old application server.
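The in‑progress rule can be checked with a couple of illustrative numbers (the upstream names old_app and new_app are assumptions for this sketch, not taken from the article's configuration):

```javascript
// Illustrative sketch of the in-progress rule: 45 minutes into a
// 120-minute window the time position is 45/120 = 0.375, and clients
// whose hash position falls below it are sent to the new server.
function chooseUpstream(timepos, hashpos) {
    return (hashpos < timepos) ? "new_app" : "old_app";
}

console.log(chooseUpstream(45 / 120, 0.2));   // new_app (0.2 < 0.375)
console.log(chooseUpstream(45 / 120, 0.84));  // old_app (0.84 > 0.375)
```

As the time position climbs from 0 to 1 over the window, a client's hash position never changes, so once it falls below the time position the client stays on the new server.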
Let’s get started!
NGINX and NGINX Plus Configuration for HTTP Applications
In this example we are using NGINX as a reverse proxy to a web application server, so all of our configuration is in the http context. For details about the configuration for TCP and UDP applications in the stream context, see below.
First, we define separate upstream blocks for the sets of servers that host our old and new application code respectively. Even with our progressive transition configuration, NGINX continues to load balance between the available servers in each group during the transition window.
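A minimal sketch of the two upstream blocks (the group names old_app and new_app and the server addresses are illustrative assumptions, not taken from the article):

```nginx
# Servers running the old application code (addresses are illustrative)
upstream old_app {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

# Servers running the new application code
upstream new_app {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
```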
Next we define the frontend service that NGINX presents to clients.
The njs code lives in a separate file, and the js_include directive specifies its location.
The js_set directive sets the value of the $upstream variable to the value returned by the transitionStatus function. NGINX variables are evaluated on demand, that is, at the point during request processing that they are used. So the js_set directive tells NGINX how to evaluate the $upstream variable when it is needed.
The server block defines how NGINX handles incoming HTTP requests. The listen directive tells NGINX to listen on port 80 – the default for HTTP traffic, although a production configuration normally uses SSL/TLS to protect data in transit.
The location block applies to the entire application space (/). Within this block we use the set directive to define the transition window with two new variables, $transition_window_start and $transition_window_end. Note that the JavaScript Date.now function always returns the UTC date and time, and so an accurate comparison against the window boundaries is possible only if the local time zone is specified in these variables.
The proxy_pass directive directs the request to the upstream group named by the $upstream variable, which is evaluated by the transitionStatus function.
The error_log directive enables logging of events at severity level info and higher (by default only events at level error and higher are logged). By placing this directive inside the location block and naming a separate log file, we avoid cluttering the main error log with all of the messages generated by our njs code.
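Putting those directives together, the frontend might be sketched as follows (the file paths, upstream names, and window timestamps are assumptions for illustration):

```nginx
js_include /etc/nginx/transition.js;   # location of the njs code
js_set $upstream transitionStatus;     # evaluated on demand, per request

server {
    listen 80;

    location / {
        # Transition window; the local time zone is stated explicitly
        set $transition_window_start "Thu, 08 Sep 2016 17:00:00 +0100";
        set $transition_window_end   "Thu, 08 Sep 2016 19:00:00 +0100";

        # Separate log so njs messages don't clutter the main error log
        error_log /var/log/nginx/transition.log info;

        proxy_pass http://$upstream;
    }
}
```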
Now we create the njs file named by the js_include directive. All of our functions appear in this file.
Dependent functions must appear before those that call them, so we start by defining a function that returns a hash of the client’s IP address. If our application server is predominantly used by users on the same LAN then all of our clients have very similar IP addresses, so we need the hash function to return an even distribution of values even for a small range of input values.
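Such a hash function might be sketched as follows, assuming the 32‑bit FNV‑1a algorithm discussed below (the function name fnv32a is our own choice):

```javascript
// 32-bit FNV-1a hash: XOR in each input byte, then multiply by the
// FNV prime (16777619), written as shifts so the arithmetic stays
// within JavaScript's 32-bit integer operations.
function fnv32a(str) {
    var hash = 0x811c9dc5;                      // FNV-1a offset basis
    for (var i = 0; i < str.length; i++) {
        hash ^= str.charCodeAt(i);
        hash += (hash << 1) + (hash << 4) + (hash << 7) +
                (hash << 8) + (hash << 24);
    }
    return hash >>> 0;                          // unsigned 32-bit result
}
```

Even near‑identical inputs such as adjacent IP addresses produce different, well‑distributed hash values, which is the property we need for clients on the same LAN.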
Next we define the transitionStatus function, which supplies the value of the $upstream variable via the js_set directive in our NGINX configuration.
The transitionStatus function has a single parameter, req, the njs request object. The variables property of the request object contains all of the NGINX configuration variables, including the two we set to define the transition window, $transition_window_start and $transition_window_end.
An if…else block determines whether the transition window has not started, has finished, or is in progress. If it’s in progress, we obtain the hash of the client IP address by passing req.remoteAddress to the hash function.
We then calculate where the hashed value sits within the range of all possible values. Because the FNV‑1a algorithm returns a 32‑bit integer, we can simply divide the hashed value by 4,294,967,295 (the maximum unsigned 32‑bit value) to obtain a position between 0 and 1.
At this point we invoke req.log() to log the hash position and the current position in the transition time window. This is logged at the info level to the error_log defined in our NGINX configuration, and produces entries such as the following example.
2016/09/08 17:44:48 [info] 41325#41325: *84 js: timepos = 0.373333, hashpos = 0.840858
Finally, we compare the hashed value’s position within the output range with our current position within the transition time window, and return the name of the corresponding upstream group.
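Under those rules, transitionStatus might be sketched as follows. This is a self‑contained sketch under stated assumptions: the upstream names old_app/new_app are illustrative, fnv32a is the hash function described above (repeated here so the sketch runs on its own), and the window variables are assumed to hold timestamps that Date.parse understands:

```javascript
// Client-IP hash described earlier (32-bit FNV-1a).
function fnv32a(str) {
    var hash = 0x811c9dc5;
    for (var i = 0; i < str.length; i++) {
        hash ^= str.charCodeAt(i);
        hash += (hash << 1) + (hash << 4) + (hash << 7) +
                (hash << 8) + (hash << 24);
    }
    return hash >>> 0;
}

// Returns the name of the upstream group for this request.
function transitionStatus(req) {
    var start = Date.parse(req.variables.transition_window_start);
    var end   = Date.parse(req.variables.transition_window_end);
    var now   = Date.now();

    if (now < start) return "old_app";   // window has not started
    if (now >= end)  return "new_app";   // window has elapsed

    // Positions in the window and in the 32-bit hash range, both 0.0-1.0
    var timepos = (now - start) / (end - start);
    var hashpos = fnv32a(req.remoteAddress) / 4294967295;

    req.log("timepos = " + timepos + ", hashpos = " + hashpos);
    return (hashpos < timepos) ? "new_app" : "old_app";
}
```

Because the hash of a given client address is constant, the comparison flips from old_app to new_app exactly once per client as timepos grows, which is what prevents transitioned clients from reverting.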
NGINX and NGINX Plus Configuration for TCP and UDP Applications
The sample configuration for HTTP applications in the previous section is appropriate when NGINX acts as a reverse proxy for an HTTP application server. We can adapt the configuration for TCP and UDP applications by moving the entire configuration snippet to the stream context.
Just one change and one check are required:
- Define the $transition_window_start and $transition_window_end variables in the transitionStatus function instead of with the set directive in the NGINX configuration, because the set directive is not yet supported in the stream context.
- Check that there is a load_module directive for the stream njs module in the top‑level (“main”) context of the nginx.conf configuration file. Step 2 in the instructions referenced below for NGINX Plus, and shown below for NGINX Open Source, shows this directive along with the one for the HTTP module.
Then reload the NGINX software as directed in Step 3.
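With those changes, the stream‑context version might be sketched as follows (the port, addresses, and upstream names are illustrative, and the transition window is now defined inside the njs code rather than with set):

```nginx
stream {
    js_include /etc/nginx/transition.js;
    js_set $upstream transitionStatus;

    upstream old_app { server 10.0.0.1:12345; }
    upstream new_app { server 10.0.0.11:12345; }

    server {
        listen 12345;
        proxy_pass $upstream;   # variable support requires a recent NGINX
    }
}
```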
Install the prebuilt package.
For Ubuntu and Debian systems:
$ sudo apt-get install nginx-module-njs
For RedHat, CentOS, and Oracle Linux systems:
$ sudo yum install nginx-module-njs
Enable the module by including a load_module directive for it in the top‑level (“main”) context of the nginx.conf configuration file (not in the http or stream context):
load_module modules/ngx_http_js_module.so;
load_module modules/ngx_stream_js_module.so;
$ sudo nginx -s reload
If you prefer to compile an NGINX module from source:
- Copy the module binaries (ngx_http_js_module.so, ngx_stream_js_module.so) to the modules subdirectory of the NGINX root (usually /etc/nginx/modules).