NGINX has gained justifiable fame as a very high‑performance web server. I think many people realize that NGINX can also be used as a reverse proxy, but they might not be aware of just what a powerful reverse proxy it is.
What Is a Reverse Proxy?
Let’s start by taking a step back and asking, what is a proxy server? I think Wikipedia has a good definition:
[A] proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers.
So a proxy server sits in between a client and the actual server that hosts the data the client is looking for. To the client, the proxy server appears to be the actual backend server, and to the backend server the proxy server looks like a client. To define a reverse proxy server we go back to Wikipedia:
[A] reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers.
The difference is in whom the proxy acts for: a (forward) proxy server acts on behalf of clients, forwarding their requests out to servers on the wider network, whereas a reverse proxy acts on behalf of servers, sitting in front of one or more backend servers and deciding which of them handles each incoming request.
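In NGINX terms, the simplest reverse proxy is a server block that forwards every request to a backend with the proxy_pass directive. A minimal sketch (the hostnames are illustrative):

```nginx
# Minimal reverse proxy: clients connect to this server,
# which forwards each request to a single backend server.
server {
    listen 80;
    server_name www.example.com;   # illustrative hostname

    location / {
        proxy_pass http://backend1.example.com:8080;  # the actual origin server
        proxy_set_header Host $host;                  # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;      # tell the backend who the client is
    }
}
```

To the client this looks like an ordinary web server at www.example.com; the backend never sees the client directly.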
What Are the Benefits of Using a Reverse Proxy?
Why would you want to use a reverse proxy server? There are a number of benefits:
- Concurrency – Internet applications often involve large numbers of clients each opening multiple connections, resulting in a very large number of connections to the backend servers. Many web servers and application servers do not handle large numbers of connections well (NGINX when used as a web server is an exception), so adding a reverse proxy that can better handle multiple connections can result in a marked improvement in backend server performance.
- Resiliency – If clients are connecting directly to a backend server and it suffers a failure, all clients currently connected (or trying to connect) to the server see their requests fail. A reverse proxy server can monitor the health of backend servers and stop sending requests to a failed server until it is back in service. Clients don’t see an error because the reverse proxy automatically sends their requests to the backend servers that are still operational.
- Scalability – Because a reverse proxy is the single “public face” for the group of backend servers, you can add and remove servers in response to changing traffic load.
- Layer 7 routing – A reverse proxy sees the traffic headed to all servers and can make intelligent decisions about where to send each request, modifying requests and responses as necessary. It can make routing decisions based on a certain HTTP header in the request, part of a URL, the geographic location of the client, and so on.
- Caching – A reverse proxy is a great place to do caching – it’s usually much more efficient to cache content there than to send all requests to backend servers and have each backend server build its own cache.
- Other functions – By sitting in front of the backend servers, a reverse proxy can perform other functions as well, such as traffic shaping based on bandwidth or request rate, connection limiting, integration with various authorization schemes, activity monitoring, and much more.
Using NGINX Plus as a Reverse Proxy
NGINX Plus introduces even more features to the open source NGINX software’s renowned web server capabilities, making NGINX Plus a full-featured application delivery controller (ADC) able to take the place of proprietary hardware appliances.
The following are just some of the features available in NGINX Plus.
Load Balancing
There are multiple load balancing algorithms to choose from, both weighted and unweighted. Session persistence is also supported. NGINX Plus can load balance HTTP, HTTPS, SPDY, WebSocket, FastCGI, SCGI, uwsgi, and memcached traffic. Read more.
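As a sketch, a weighted upstream group with session persistence might look like the following; the server names are hypothetical, and sticky cookie is an NGINX Plus directive:

```nginx
upstream backend {
    least_conn;                         # send each request to the server with fewest active connections
    server app1.example.com weight=3;   # receives roughly 3x the traffic of app2
    server app2.example.com;
    sticky cookie srv_id expires=1h;    # NGINX Plus session persistence via a cookie
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;      # load balanced across the group above
    }
}
```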
Application Health Checks
Both passive and active monitoring of backend server health is supported. If NGINX Plus is unable to connect to a node, that node is marked as down. Active health checks can also be configured to run periodically against backend nodes. In addition, the slow‑start feature can be used so that NGINX Plus slowly ramps up traffic to a node that has just come online, to avoid overwhelming it with a burst of heavy traffic. Read more.
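A hedged sketch of active health checks with slow start (both NGINX Plus features; server names and timings are illustrative — active health checks require a shared memory zone):

```nginx
upstream backend {
    zone backend 64k;                          # shared memory, required for active health checks
    server app1.example.com slow_start=30s;    # ramp traffic up over 30s after recovery
    server app2.example.com slow_start=30s;
}

server {
    location / {
        proxy_pass http://backend;
        health_check interval=5s fails=3 passes=2;  # probe every 5s; 3 failures mark a node down
    }
}
```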
Content Routing
Traffic can be routed based on any part of a request, such as the client IP address, host name, URI, query string, or headers.
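For example, requests can be split by URI prefix, or steered by a request header via a map block. In this sketch the header name and upstream groups are hypothetical:

```nginx
# Choose a backend group based on a custom request header.
map $http_x_api_version $api_backend {
    default  api_v1;    # hypothetical upstream group names
    "2"      api_v2;
}

server {
    listen 80;

    location /static/ {
        proxy_pass http://static_servers;   # static content goes to its own group
    }

    location /api/ {
        proxy_pass http://$api_backend;     # group selected by the map above
    }
}
```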
Request and Response Rewriting
Any part of a request or response can be modified, including headers, body, and URI. NGINX Plus can also add and delete headers.
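A sketch of several kinds of rewriting in one location block (hostnames and paths are illustrative):

```nginx
location / {
    proxy_pass http://backend;

    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;  # add a header to the request
    proxy_hide_header X-Powered-By;                                # strip a header from the response

    # Rewrite text in the response body, e.g. internal URLs to public ones.
    sub_filter 'http://internal.example.com' 'https://www.example.com';
    sub_filter_once off;                     # replace every occurrence, not just the first

    rewrite ^/old/(.*)$ /new/$1 break;       # rewrite the URI before proxying
}
```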
Caching
Responses can be cached, and you can configure the types of content to cache and for how long. You can also purge items from the cache. Read more.
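A minimal caching setup defines a cache on disk and applies it to proxied responses; sizes and times below are illustrative:

```nginx
# Cache storage: up to 1 GB on disk, keyed in a 10 MB shared memory zone.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
        proxy_cache_valid 404      1m;   # cache "not found" only briefly
        proxy_pass http://backend;
    }
}
```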
Compression
Gzip compression is supported, with fine-grained control over which content to compress and when to use compression. Read more.
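That control is expressed with a handful of gzip directives, for example:

```nginx
gzip            on;
gzip_types      text/css application/javascript application/json;  # compress these in addition to text/html
gzip_min_length 1000;   # skip tiny responses where compression isn't worthwhile
gzip_proxied    any;    # also compress responses to requests received via a proxy
```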
SSL/TLS Termination
SSL/TLS decryption and encryption are supported, and NGINX Plus can serve different certificates for many domain names on the same instance. Read more.
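A sketch of TLS termination for two domains, each with its own certificate (hostnames and certificate paths are illustrative):

```nginx
# Terminate TLS at the proxy; traffic to the backends can stay unencrypted.
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://backend;        # hypothetical upstream group
    }
}

server {
    listen 443 ssl;
    server_name www.other-site.org;       # second domain, second certificate
    ssl_certificate     /etc/nginx/ssl/other-site.org.crt;
    ssl_certificate_key /etc/nginx/ssl/other-site.org.key;

    location / {
        proxy_pass http://other_backend;  # hypothetical upstream group
    }
}
```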
Live Activity Monitoring and Logging
NGINX Plus statistics encoded in JSON format are available via a simple HTTP request. A dashboard web page is provided to display the statistics, or you can feed them to custom or third‑party monitoring tools. Custom‑formatted logs can be configured for both local logging and export to syslog. Read more.
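As a sketch (the port, paths, and syslog server are illustrative; the api directive is the NGINX Plus JSON status interface in recent releases):

```nginx
# Expose NGINX Plus statistics as JSON and serve the monitoring dashboard.
server {
    listen 8080;

    location /api {
        api write=off;                  # read-only JSON statistics API
    }

    location /dashboard.html {
        root /usr/share/nginx/html;     # built-in live activity monitoring page
    }
}

# Custom log format, written both locally and to a syslog server.
log_format custom '$remote_addr [$time_local] "$request" $status $body_bytes_sent';
access_log /var/log/nginx/access.log custom;
access_log syslog:server=logs.example.com:514 custom;
```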
And Much More
NGINX has many more features, such as support for video streaming, mail proxy support, GeoIP support, graceful restarts and upgrades without downtime, traffic shaping, connection limiting, and much more. For more information, visit us at nginx.com and nginx.org.