This article explains how to configure NGINX Plus as a basic reverse proxy server. You will learn how to pass a request to proxied servers over different protocols, modify the client request headers sent to the proxied server, and buffer responses coming from the proxied servers.

Proxying is typically used to distribute the load among several servers, seamlessly show content from different websites, or pass requests for processing to application servers over protocols other than HTTP.

Passing a Request to a Proxied Server

When NGINX proxies a request, it sends the request to a specified proxied server, fetches the response, and sends it back to the client. It is possible to proxy requests to an HTTP server (another NGINX server or any other server) or a non-HTTP server (which can run an application developed with a specific framework, such as PHP or Python) using a specified protocol. Supported protocols include FastCGI, memcached, SCGI, and uwsgi.

To pass a request to an HTTP proxied server, include the proxy_pass directive in a location configuration block:

location /some/path/ {
    proxy_pass http://www.example.com/link/;
}

This example configuration results in passing all requests processed in this location to the proxied server at the specified address. This address can be specified as a domain name or an IP address. The address can also include a port:

location ~ \.php {
    proxy_pass http://127.0.0.1:8000;
}

Note how in the first example the value of the proxied server ends in a URI, /link/. The URI replaces the part of the request URI that matches the parameter to the location directive. In this example, a request for /some/path/page.html will be proxied to http://www.example.com/link/page.html. If a URI is not included, or it is not possible to determine which part of the URI to replace, the full request URI is passed (and possibly modified).

To pass a request to a proxied server running a protocol other than HTTP, substitute the appropriate directive for proxy_pass:

- fastcgi_pass passes a request to a FastCGI server
- memcached_pass passes a request to a memcached server
- scgi_pass passes a request to an SCGI server
- uwsgi_pass passes a request to a uwsgi server

Note that in these cases, the rules for specifying addresses might be different and you might also need to pass additional parameters to the server (see the reference documentation for more detail).
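For example, a location that hands PHP requests to a FastCGI server might look like the following sketch (the backend address localhost:9000 and the document root are placeholders, not values from this article):

location ~ \.php$ {
    fastcgi_pass localhost:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Note that fastcgi_pass takes a plain host:port address rather than a URL with a scheme, which illustrates how the address rules differ from proxy_pass.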

The proxy_pass directive can also point to a named group of servers. In this case, you specify the load-balancing algorithm used to distribute requests across the group of servers.
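As a sketch of that setup, an upstream block can name a group of servers and select a load-balancing method, and proxy_pass then refers to the group by name (the hostnames here are placeholders):

upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}

server {
    location / {
        proxy_pass http://backend;
    }
}

Here least_conn distributes requests to the server with the fewest active connections; omitting it leaves the default Round Robin method in effect.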

Passing Request Headers

By default, NGINX automatically redefines two header fields in proxied requests, Host and Connection, and eliminates header fields for which the value is the empty string. The Host header is set to the $proxy_host variable, and Connection is set to close.

To change these settings, as well as modify other header fields, use the proxy_set_header directive. This directive can be specified in a location or server configuration block, or in the http block. In this example, the Host field is set to the $host variable and X-Real-IP field to the $remote_addr variable:

location /some/path/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:8000;
}

To prevent a header field from being passed to the proxied server, set it to an empty string, as here for Accept-Encoding:

location /some/path/ {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://localhost:8000;
}

Configuring Buffers

By default NGINX buffers responses from proxied servers. A response is stored in the internal buffers and is not sent to the client until the entire response is received. Buffering helps to optimize performance with slow clients, which can waste proxied server time if the response is passed from NGINX to the client synchronously. With buffering enabled, NGINX allows the proxied server to finish processing responses quickly, while NGINX stores the responses for as long as the clients need to download them.

The directive that is responsible for enabling and disabling buffering is proxy_buffering. By default it is set to on and buffering is enabled.

The proxy_buffers directive controls the size and the number of buffers allocated for a request. The first part of the response from a proxied server is stored in a separate buffer, the size of which is set with the proxy_buffer_size directive. This part usually contains a comparatively small response header and can be made smaller than the buffers for the rest of the response.

In the following example, the default number of buffers is increased and the size of the buffer for the first portion of the response is made smaller than the default.

location /some/path/ {
    proxy_buffers 16 4k;
    proxy_buffer_size 2k;
    proxy_pass http://localhost:8000;
}

If buffering is disabled, the response is sent to the client synchronously, while NGINX is still receiving it from the proxied server. This behavior may be desirable for fast interactive clients that need to start receiving the response as soon as possible.

To disable buffering in a specific location, place the proxy_buffering directive in the location with the off parameter, as follows:

location /some/path/ {
    proxy_buffering off;
    proxy_pass http://localhost:8000;
}

In this case NGINX uses only the buffer configured by proxy_buffer_size to store the current part of a response.
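Putting those two directives together, a location that disables buffering but still tunes the single remaining buffer might look like this (a sketch; the 4k value is an illustrative choice, not a recommendation from this article):

location /some/path/ {
    proxy_buffering off;
    proxy_buffer_size 4k;
    proxy_pass http://localhost:8000;
}

With this configuration NGINX holds at most 4k of the response at a time, forwarding each part to the client as soon as it arrives.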