NGINX is an open source web server and reverse proxy that excels at large‑scale web integration, application security, and web acceleration. NGINX Plus extends NGINX with additional load balancing and application delivery features. The articles in the NGINX Plus Admin Guide and Tutorial will quickly show you how to use some of the most popular features of NGINX and NGINX Plus. To purchase an NGINX Plus subscription, contact the NGINX Plus sales team.
Installing NGINX
- Installing NGINX Plus – Obtaining and installing NGINX Plus
- Installing NGINX Open Source – Obtaining, compiling, and installing open source NGINX
- NGINX Plus on Microsoft Azure – Setting up NGINX Plus VMs to load balance and deliver applications and content in Microsoft Azure
- NGINX Plus on Amazon EC2 – Setting up NGINX Plus AMIs to load balance and deliver applications and content in the Amazon Web Services Elastic Compute Cloud (AWS EC2)
- NGINX Plus on the Google Cloud Platform – Setting up NGINX Plus VMs to load balance and deliver applications and content on the Google Cloud Platform
Getting Started
- Runtime control – Starting and stopping NGINX and NGINX Plus processes, including zero‑downtime reconfiguration and binary upgrades
- Managing NGINX configuration files – NGINX configuration file structure, the order of directives, and directive inheritance rules
Basic Functionality
- Web server – Configuring virtual servers and locations, using variables, rewriting URIs, and customizing error pages
- Serving static content – Setting the root directory for requested content, and creating ordered lists of files to serve if the original index file or URI does not exist
- Reverse proxy – Proxying requests to HTTP, FastCGI, uwsgi, SCGI, and memcached servers; controlling proxied request headers; and buffering of responses from proxied servers
- Compression and decompression – Compressing responses on the fly to minimize use of network bandwidth
- Web content cache – Caching static and dynamic content from proxied servers
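The basic features above combine naturally in a single server block. The following minimal sketch (the hostname, paths, and backend address are placeholders, not values from this guide) serves static files, proxies everything else, and compresses responses on the fly:

```nginx
# Minimal sketch: static content, reverse proxying, and gzip compression.
# "www.example.com" and the paths below are placeholder values.
server {
    listen 80;
    server_name www.example.com;

    root /var/www/example;                    # root directory for static content
    gzip on;                                  # compress responses on the fly
    gzip_types text/plain application/json;

    # Serve a file directly if it exists; otherwise hand off to the backend.
    location / {
        try_files $uri $uri/ @backend;
    }

    location @backend {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```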
Managing SSL Traffic
- SSL termination – Delivering web content over HTTPS
- SSL termination for TCP upstreams – Delivering TCP traffic over HTTPS
- SSL between NGINX and an HTTP upstream – Securing HTTP traffic between NGINX and upstream servers
- SSL between NGINX and a TCP upstream – Securing TCP traffic between NGINX and upstream servers
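A typical SSL termination setup follows the pattern sketched below (the certificate and key paths are placeholders); the proxied connection to the upstream can itself be secured as described in the articles above:

```nginx
# Sketch of SSL termination for HTTPS; certificate paths are placeholders.
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8080;   # decrypted traffic to the upstream
    }
}
```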
Load Balancing
- HTTP load balancer – Distributing HTTP requests across a group of servers based on a choice of algorithms, with passive and proactive checking of upstream server health and runtime modification of the load‑balancing configuration
- TCP and UDP load balancer – Distributing TCP connections and UDP datagrams across a group of servers based on a choice of algorithms, with passive and proactive checking of upstream server health and runtime modification of the load-balancing configuration
- On-the-Fly Configuration – Adding, deleting, and modifying upstream servers on the fly with the NGINX REST API
- Using the PROXY protocol – Configuring NGINX and NGINX Plus to receive client connection information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer
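As a quick illustration of the HTTP load balancer, the sketch below defines an upstream group using the Least Connections algorithm (the backend addresses are placeholders):

```nginx
# Sketch of HTTP load balancing; backend addresses are placeholder values.
upstream backend {
    least_conn;                      # one of the available balancing algorithms
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080 backup;     # receives traffic only if the others fail
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # distribute requests across the group
    }
}
```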
Restricting Access
- Limiting access to proxied HTTP resources – Controlling access based on client IP address, limiting the number of simultaneous connections, and limiting request rate and bandwidth per connection
- Restricting access with HTTP Basic authentication – Configuring username/password authentication for HTTP
- Configuring authentication based on subrequest results – Authenticating each request to your website with an external server or service
- Restricting access by geographical location – Controlling access based on the client's geographical location
- Restricting access to proxied TCP resources – Controlling access based on client IP address, limiting the number of simultaneous connections, and limiting bandwidth per connection
- Dynamic IP Address Blacklisting – Blacklisting IP addresses taken from a dynamically configurable database
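Several of these restrictions take only a few directives. The sketch below (the IP range, zone size, and rate are placeholder values) combines IP-based access control with per-client request-rate limiting:

```nginx
# Sketch: IP-based access control plus request-rate limiting.
# The network range, zone size, and rate are placeholder values.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location /admin/ {
        allow 192.168.1.0/24;            # permit the internal network
        deny  all;                       # reject all other clients
    }

    location / {
        limit_req zone=perip burst=20;   # throttle clients exceeding the rate
        proxy_pass http://127.0.0.1:8080;
    }
}
```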
Logging and Monitoring
- HTTP Health Checks – Passive and proactive checking of HTTP upstream server health
- TCP Health Checks – Checking of TCP upstream server health
- UDP Health Checks – Checking of UDP upstream server health
- Live activity monitoring – Monitoring NGINX Plus status and performance metrics in real time with the live activity monitoring dashboard, using JSON for collecting stats
- Logging errors and requests – Configuring error log and access log, logging to syslog
- Debugging NGINX – Configuring the debugging log, collecting the debugging information, obtaining core dump files
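For the logging articles above, a custom access log format and an error log severity level are configured as in this sketch (the format string uses standard NGINX variables; the file paths are placeholders):

```nginx
# Sketch: custom access log format and error log configuration.
# Log file paths are placeholder values.
http {
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent';

    access_log /var/log/nginx/access.log main;
    error_log  /var/log/nginx/error.log warn;   # record warnings and above
}
```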
High Availability
- High availability of NGINX Plus in an active‑passive pair – Configuring high availability of paired active‑passive NGINX Plus instances on premises with a solution based on keepalived
- Active‑active and additional passive nodes – Improving failover redundancy and scalability by adding more nodes on premises
- High availability of NGINX Plus in an active‑passive pair in AWS – Configuring high availability of paired active‑passive NGINX Plus instances in AWS with a solution based on keepalived and Elastic IP address
- High availability of NGINX Plus in active‑active pairs in AWS – Configuring high availability of paired active‑active NGINX Plus instances in AWS with a solution based on AWS Network Load Balancer
- Configuration sharing – Sharing configuration across a cluster of NGINX Plus servers
Mail Proxy
- Configuring NGINX as a mail proxy server – Enabling the mail proxy server functionality
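A minimal mail proxy configuration follows this pattern (the hostname and the external authentication endpoint are placeholders):

```nginx
# Sketch of the mail proxy module; the auth_http endpoint is a placeholder.
mail {
    server_name mail.example.com;
    auth_http   localhost:9000/auth;   # external service that authenticates users

    server {
        listen   143;                  # IMAP
        protocol imap;
    }
    server {
        listen   25;                   # SMTP
        protocol smtp;
    }
}
```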
NGINX Plus with ModSecurity WAF
- Installing the ModSecurity WAF – Installing NGINX Plus with the ModSecurity WAF, configuring a sample rule, and setting up logging
- Using the OWASP CRS – Enabling and testing the OWASP Core Rule Set (CRS) with NGINX Plus and the ModSecurity WAF
- Using the Trustwave SpiderLabs Rules – Configuring the ModSecurity® Rules from Trustwave SpiderLabs® with NGINX Plus and the ModSecurity WAF
NGINX and NGINX Plus also support HTTP/2, proxying of WebSocket traffic, streaming media delivery, and content transformation through SSI or XSLT. All of these features – and more – are covered in detail in the reference documentation.
Deployment and Migration Guides
- Amazon Route 53 – Implementing global server load balancing (GSLB) with Route 53 and NGINX Plus
- Apache Tomcat – Using NGINX and NGINX Plus to load balance Apache Tomcat™ application servers
- Chef – High availability of NGINX Plus instances with a solution based on keepalived and Chef
- Citrix NetScaler – Migrating load balancer configuration to NGINX Plus from Citrix NetScaler
- F5 BIG‑IP – Migrating load balancer configuration to NGINX Plus from F5 BIG‑IP
- Google Cloud Platform – Using NGINX Plus in an all‑active, highly available load balancing deployment on Google Compute Engine (GCE)
- JBoss – Using NGINX and NGINX Plus to load balance JBoss® application servers (both commercial and open source)
- Microsoft Exchange – Using NGINX Plus to load balance both TCP‑based and HTTP‑based Microsoft® Exchange™ traffic
- New Relic Plug-In – Enabling monitoring of NGINX with New Relic APM™
- Node.js – Using NGINX and NGINX Plus to load balance Node.js® application servers
- Oracle E‑Business Suite – Using NGINX Plus to load balance Oracle® EBS servers
- Oracle WebLogic Server – Using NGINX Plus to load balance Oracle WebLogic servers
- Terraform on GCE – Using Packer and Terraform to deploy NGINX Plus load balancing on Google Compute Engine
- uWSGI and Django – Using NGINX as an application gateway with uWSGI and Django