This deployment guide explains how to use NGINX Plus to load balance traffic across a pool of Oracle® E-Business Suite (EBS) 12 servers. It provides complete instructions for configuring NGINX Plus as required.

About NGINX Plus and Oracle EBS

NGINX Plus is the commercially supported version of the open source NGINX software. NGINX Plus is a complete application delivery platform, extending the power of NGINX with a host of enterprise-ready capabilities that enhance an EBS application server deployment and are instrumental in building web applications at scale.

Oracle E-Business Suite (EBS) is a comprehensive suite of integrated, global business applications that enable organizations to make better decisions, reduce costs, and increase performance. Its cross-industry capabilities include enterprise resource planning, customer relationship management, and supply chain planning.

Prerequisites and System Requirements

The following systems and software are required:

  • Oracle EBS 12.2, installed and configured according to Oracle best practices.
  • Linux system to host NGINX Plus. To avoid potential conflicts with other applications, we recommend you install NGINX Plus on a fresh physical or virtual system. For the list of Linux distributions supported by NGINX Plus, see NGINX Plus Technical Specifications.
  • NGINX Plus R6 or later.

You can install NGINX Plus on premises, in a private cloud, or in a public cloud such as the Amazon Elastic Compute Cloud (EC2), the Google Cloud Platform, or Microsoft Azure. See the NGINX Plus installation instructions for your environment.

The instructions assume you have basic Linux system administration skills, including the following. Full instructions are not provided for these tasks.

  • Configuring and deploying EBS
  • Installing Linux software from vendor-supplied packages
  • Editing configuration files
  • Copying files between a central administrative system and Linux servers
  • Running basic commands to start and stop services
  • Reading log files

Similarly, the instructions assume you have the support of the team that manages your Oracle deployment. Their tasks include the following:

  • Modifying the Oracle configuration to define a Web Entry Point
  • Verifying the configuration

About Sample Values and Copying of Text

  • company.com is used as a sample domain name (in key names and configuration blocks). Replace it with your organization’s domain name.
  • Many NGINX Plus configuration blocks in this guide list two sample EBS application servers with IP addresses 172.31.0.146 and 172.31.11.210. Replace these addresses with the IP addresses of your EBS servers. Include a line in the configuration block for each server if you have more or fewer than two.
  • For readability reasons, some commands appear on multiple lines. If you want to copy and paste them into a terminal window, we recommend that you first copy them into a text editor, where you can substitute the object names that are appropriate for your deployment and remove any extraneous formatting characters that your browser might insert.
  • The configuration examples in the step-by-step instructions include hyperlinks to the NGINX reference documentation, for easy access to more information about the directives. (If a directive appears multiple times in a section, only the first occurrence is hyperlinked.) We recommend that you do not copy hyperlinked text (or any other text) from this guide into your configuration files, because it might include unwanted link text and does not include whitespace and other formatting that makes the configuration easy to read. For more information, see Creating and Modifying Configuration Files.

Architectural Design

This figure represents a typical load-balancing architecture:

Typical architecture for load balancing three application servers

A load balancer performs the following tasks:

  • Terminates SSL/TLS connections (encrypts and decrypts SSL/TLS traffic)
  • Selects backend servers based on a load-balancing method and health checks
  • Forwards HTTP requests to selected backend servers
  • Provides session persistence
  • Provides logging and monitoring capabilities

Oracle EBS has application tiers and a database tier. A load balancer is used in front of application tiers in order to provide higher performance, availability, security, and traffic management for the application servers.

NGINX Plus as a load balancer between clients and the application tier in an Oracle E-Business Suite deployment

Configuring Firewalls

For improved security, the NGINX Plus load balancer might be located in a DMZ. Placing it there can complicate and delay installation, because firewall changes are usually required.

Review the network configuration requirements in the table and make appropriate changes to your firewalls before proceeding with the configuration.

Purpose | Port | Source | Destination
Admin access, file transfer | 22 | Administrative network | NGINX Plus load balancer
Installation and update of NGINX Plus software | 443 | NGINX Plus load balancer | https://plus-pkgs.nginx.com
HTTP-to-HTTPS redirects | 80 | Any | NGINX Plus load balancer
Production HTTPS traffic | 443 | Any | NGINX Plus load balancer
Access to backend application | 8000* | NGINX Plus load balancer | Backend application servers
Access to load‑balanced application from application servers | 443 | Backend application servers | NGINX Plus load balancer

* Replace port 8000 with the actual application port as appropriate.
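As an illustration only, the following commands open the client-facing and administrative ports on the NGINX Plus host itself, assuming a distribution that uses firewalld (such as CentOS or RHEL); the exact tool and zone depend on your environment, and the rules for the other hops in the table belong on the corresponding network firewalls.

root# firewall-cmd --permanent --add-port=22/tcp    # admin access, file transfer
root# firewall-cmd --permanent --add-port=80/tcp    # HTTP-to-HTTPS redirects
root# firewall-cmd --permanent --add-port=443/tcp   # production HTTPS traffic
root# firewall-cmd --reload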

Configuring an SSL/TLS Certificate for Client Traffic

If you plan to enable SSL/TLS encryption of traffic between NGINX Plus and clients of your EBS application, you need to configure a server certificate for NGINX Plus.

There are several ways to obtain a server certificate. For your convenience, step‑by‑step instructions are provided below for generating a self‑signed certificate, generating a certificate signing request (CSR) to submit to a certificate authority (CA), and exporting and converting an existing certificate from an IIS server.

For more details on SSL/TLS termination, see the NGINX Plus Admin Guide.

Generating a Self‑Signed Certificate with the openssl Command

Generate a public‑private key pair and a self‑signed server certificate in PEM format that is based on them.

  1. Log in as the root user on a machine that has the openssl software installed.

  2. Generate the key pair in PEM format (the default). To encrypt the private key, include the -des3 parameter. (Other encryption algorithms are available, listed on the man page for the genrsa command.) You are prompted for the passphrase used as the basis for encryption.

    root# openssl genrsa -des3 -out ~/private-key.pem 2048
    Generating RSA private key ...
    Enter pass phrase for private-key.pem:
  3. Create a backup of the key file in a secure location. If you lose the key, the certificate becomes unusable.

    root# cp ~/private-key.pem secure-dir/private-key.pem.backup
  4. Generate the certificate. Include the -new and -x509 parameters to make a new self‑signed certificate. Optionally include the -days parameter to change the certificate’s validity period from the default of 30 days (10950 days is about 30 years). Respond to the prompts with values appropriate for your testing deployment.

    root# openssl req -new -x509 -key ~/private-key.pem -out ~/self-cert.pem \
    -days 10950
  5. Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX Plus server.

    (In the configuration file for a single Web Entry Point that you can download from the NGINX, Inc. website, the filenames for the certificate and private key are server.crt and server.key. For a discussion of the file and download instructions, see Creating and Modifying Configuration Files.)
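If you want to confirm what you generated, the following optional commands (run on the same machine) print the certificate’s subject and validity dates and check the private key; they only inspect the files created above.

root# openssl x509 -in ~/self-cert.pem -noout -subject -dates
root# openssl rsa -in ~/private-key.pem -check -noout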

Generating a Certificate Request with the openssl Command

  1. Log in as the root user on a machine that has the openssl software installed.

  2. Create a private key to be packaged in the certificate.

    root# openssl genrsa -out ~/company.com.key 2048
  3. Create a backup of the key file in a secure location. If you lose the key, the certificate becomes unusable.

    root# cp ~/company.com.key secure-dir/company.com.key.backup
  4. Create a Certificate Signing Request (CSR) file.

    root# openssl req -new -sha256 -key ~/company.com.key -out ~/company.com.csr
  5. Request a certificate from a CA or your internal security group, providing the CSR file (company.com.csr). As a reminder, never share private keys (.key files) directly with third parties.

    The certificate needs to be in PEM format rather than the Windows‑compatible PFX format. If you request the certificate from a CA website yourself, choose NGINX or Apache (if available) when asked to select the server platform for which to generate the certificate.

  6. Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX Plus server.

    (In the configuration file for a single Web Entry Point that you can download from the NGINX, Inc. website, the filenames for the certificate and private key are server.crt and server.key. For a discussion of the file and download instructions, see Creating and Modifying Configuration Files.)
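Optionally, before submitting the CSR you can verify its contents (the distinguished name and key size) with the following command, which only reads the file created in Step 4.

root# openssl req -in ~/company.com.csr -noout -text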

Exporting and Converting an SSL/TLS Certificate from an IIS Server

On Windows systems, SSL/TLS certificates are packaged in a Public‑Key Cryptography Standards (PKCS) archive file with extension .pfx. You need to export the .pfx file and convert the contents to the Linux‑compatible PEM format.

Working in the Microsoft Management Console, perform the following steps:

  1. Open the Certificates snap‑in.

  2. In the left‑hand navigation pane, click the Certificates folder in the logical store for the certificate you want to export (in the following figure, it is Personal > Certificates).

  3. In the main pane, right‑click the certificate to be exported (in the following figure, it is cas01.company.com).

  4. On the menu that pops up, select All Tasks, then click Export.

    Certificates snap-in to Microsoft Management Console, used to export SSL/TLS certificate

  5. In the Certificate Export Wizard window that pops up, click Yes, export the private key. (This option appears only if the private key is marked as exportable and you have access to it.)

  6. If prompted for a password (used to encrypt the .pfx file before export), type it in the Password and Confirm fields. (Remember the password, as you need to provide it when importing the bundle to NGINX Plus.)

  7. Click Next.

  8. In the File name field, type the filename and path to the location for storing the exported file (certificate and private key). Click Next, then Finish.

  9. Copy the .pfx file to the NGINX Plus server.

Working on the NGINX Plus server (which must have the openssl software installed), perform the following steps:

  1. Log in as the root user.

  2. Extract the private key file from the .pfx file. You are prompted first for the password protecting the .pfx file (see Step 6 above), then for a new password used to encrypt the private key file being created (company.com.key.encrypted in the following sample command).

    root# openssl pkcs12 -in exported-cert.pfx -nocerts \
    -out company.com.key.encrypted
  3. Decrypt the key file. At the prompt, type the password you created in the previous step for the private key file.

    root# openssl rsa -in company.com.key.encrypted -out company.com.key
  4. Extract the certificate file.

    root# openssl pkcs12 -in exported-cert.pfx -clcerts -nokeys -out company.com.crt
  5. Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX Plus server.

    (In the configuration file for a single Web Entry Point that you can download from the NGINX, Inc. website, the filenames for the certificate and private key are server.crt and server.key. For a discussion of the file and download instructions, see Creating and Modifying Configuration Files.)
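As an optional sanity check before configuring NGINX Plus, you can confirm that the converted certificate and key belong together by comparing their public‑key moduli; the two digests must be identical. These commands only read the files produced above.

root# openssl x509 -noout -modulus -in company.com.crt | openssl md5
root# openssl rsa -noout -modulus -in company.com.key | openssl md5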

Configuring Oracle EBS

For Oracle applications to work with a load balancer, you need to configure a Web Entry Point. For full instructions, refer to the Oracle documentation on configuring Web Entry Points:
Using Load‑Balancers with Oracle E‑Business Suite Release 12.2 (MOS Doc ID 1375686.1).

Use the AutoConfig Context Editor to set the configuration values in the applications context file on application servers.

Here are examples of appropriate values:

Setting | Sample value
Load Balancer Entry Point | store.company.com
Application Server 1 | apps-tier1.company.com
Application Server 2 | apps-tier2.company.com
Web Entry Protocol | https
Application Tier Web Protocol | http
Application Tier Web Port | 8000
Active Web Port | 443
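For reference, these settings correspond to entries in the applications context file maintained by AutoConfig. The variable names below are the commonly documented ones and are shown only as an illustration of how the sample values above map onto context variables; confirm the exact names and procedure against MOS Doc ID 1375686.1, and make all changes through the AutoConfig Context Editor rather than by hand.

s_webentryhost         store
s_webentrydomain       company.com
s_webentryurlprotocol  https
s_active_webport       443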

Configuring NGINX Plus for Oracle EBS

The instructions in the following sections are required for NGINX Plus to load balance EBS servers properly.

The instructions in these sections are optional, but they improve the performance and manageability of your NGINX Plus deployment:

Finally, if you need multiple Web Entry Points, see Configuring Multiple Web Entry Points.

Creating and Modifying Configuration Files

To reduce errors, this guide has you copy directives from files provided by NGINX, Inc. into your configuration files, instead of using a text editor to type in the directives yourself. Then you go through the sections in this guide (starting with Configuring Global Settings) to learn how to modify the directives as required for your deployment.

As provided, there is one file for a single Web Entry Point and one file for multiple Web Entry Points. If you are installing and configuring NGINX Plus on a fresh Linux system and using it only to load balance EBS traffic, you can use the provided file as your main configuration file, which by convention is called /etc/nginx/nginx.conf.

We recommend, however, that instead of a single configuration file you use the scheme that is set up automatically when you install an NGINX Plus package, especially if you already have an existing NGINX or NGINX Plus deployment or plan to expand your use of NGINX Plus to other purposes in future. In the conventional scheme, the main configuration file is still called /etc/nginx/nginx.conf, but instead of including all directives in it, you create separate configuration files for different functions and store the files in the /etc/nginx/conf.d directory. You then use the include directive in the appropriate contexts of the main file to read in the contents of the function‑specific files.

To download the complete configuration file for a single Web Entry Point:

root# cd /etc/nginx/conf.d
root# curl https://www.nginx.com/resource/conf/oracle-single-entry-point.conf > \
oracle-single-entry-point.conf

To download the complete configuration file for multiple Web Entry Points:

root# cd /etc/nginx/conf.d
root# curl https://www.nginx.com/resource/conf/oracle-multiple-entry-point.conf > \
oracle-multiple-entry-point.conf

(You can also access the URL in a browser and download the file that way.)

To set up the conventional configuration scheme, add an http configuration block in the main nginx.conf file, if it does not already exist. (The standard placement is below any global directives; see Configuring Global Settings.) Add this include directive with the appropriate filename:

http {
    include conf.d/oracle-(single|multiple)-entry-point.conf;
}

You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files function-http.conf, this is an appropriate include directive:

http {
    include conf.d/*-http.conf;
}

For reference purposes, the full configuration files are also provided in this document (see Full Configuration Files).

We recommend, however, that you do not copy text directly from this document. It does not necessarily use the same mechanisms for positioning text (such as line breaks and white space) as text editors do. In text copied into an editor, lines might run together and indenting of child statements in configuration blocks might be missing or inconsistent. The absence of formatting does not present a problem for NGINX Plus, because (like many compilers) it ignores white space during parsing, relying solely on semicolons and curly braces as delimiters. The absence of white space does, however, make it more difficult for humans to interpret the configuration and modify it without making mistakes.

About Reloading Updated Configuration

We recommend that each time you complete a set of updates to the configuration, you run the nginx -t command to test the configuration file for syntactic validity.

root# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

To tell NGINX Plus to start using the new configuration, run one of the following commands:

root# nginx -s reload

or

root# service nginx reload
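In practice, you can chain the two steps so the reload happens only if the configuration test passes; on systemd-based distributions, systemctl reload nginx is the equivalent of the service command shown above.

root# nginx -t && nginx -s reload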

Configuring Global Settings

Verify that the main nginx.conf file includes the following global directives, adding them as necessary.

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

# If using the standard configuration scheme, the 'http' block is usually placed here
# and encloses 'include' directives that refer to files in the conf.d directory.

Configuring Virtual Servers for HTTP and HTTPS Traffic

These directives define virtual servers for HTTP and HTTPS traffic in separate server blocks in the top‑level http configuration block. All HTTP requests are redirected to the HTTPS server.

  1. Configure a virtual server that listens for requests for https://company.com received on port 443.

    The ssl_certificate and ssl_certificate_key directives name the certificate and private key files you created in Configuring an SSL/TLS Certificate for Client Traffic. Here we use the filenames (server.crt and server.key) specified in the configuration file for a single Web Entry Point that we downloaded from the NGINX, Inc. website in Creating and Modifying Configuration Files.

    # In the 'http' block
    server {
        listen 443 ssl;
        server_name company.com;

        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;
        ssl_protocols TLSv1.2;
    }

    This server listens on every IP address. If needed, you can restrict listening to one or more IP addresses (IPv4 or IPv6). For example, with this listen directive the server listens on address 10.210.15.20 and port 443:

    listen 10.210.15.20:443 ssl;
  2. Configure a server block that redirects requests received on port 80 for http://company.com to the HTTPS server defined in the previous step. Opening port 80 does not decrease security, because requests to this port don’t result in connections to your backend servers.

    # In the 'http' block
    server {
        listen 80;
        status_zone oracle-http-redirect;
        return 302 https://$http_host$request_uri;
    }

For more information on configuring SSL, see the NGINX Plus Admin Guide and the reference documentation for the HTTP SSL/TLS module.
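If you want to tighten the TLS settings beyond the minimal example above, the following optional directives (all standard NGINX SSL/TLS directives) enable session resumption and restrict the cipher list. The specific cipher string is only a placeholder; adapt it to your own security policy.

# In the 'server' block for HTTPS traffic
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!aNULL:!MD5;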

Setting the Default MIME Type

In case the EBS server does not specify the MIME type of the data it is sending to the client (in the Content-Type response header), define the default MIME type as text/html. Include these directives in the http context:

# In the 'http' block
include /etc/nginx/mime.types;
default_type text/html;

Configuring Load Balancing

To configure load balancing, you first create a named upstream group, which lists your EBS app servers. You then set up NGINX Plus as a reverse proxy and load balancer by referring to the upstream group in one or more proxy_pass directives.

  1. Configure an upstream group called oracle with two EBS application servers listening on port 8000, one on IP address 172.31.11.210 and the other on 172.31.0.146. Each upstream group name in the configuration must be unique.

    # In the 'http' block
    upstream oracle {
        zone oracle 64k;
        server 172.31.11.210:8000 max_fails=0;
        server 172.31.0.146:8000 max_fails=0;
    }

    The zone directive creates a 64 KB shared memory zone, also called oracle, for storing configuration and runtime state information about the group that is shared among worker processes.

    Add a server directive for each of your EBS app servers. You can identify servers by IP address or hostnames. If using hostnames, make sure that the operating system on the NGINX Plus server can resolve them.

    NGINX Plus supports two different kinds of application health checks, active and passive. We recommend configuring active health checks (see Configuring Active Health Checks) and disabling passive health checks by including the max_fails=0 parameter on each server directive.

  2. In the server block for HTTPS traffic created in Configuring Virtual Servers for HTTP and HTTPS Traffic, add a location block that proxies all traffic to the upstream group.

    # In the 'server' block for HTTPS traffic
    location / {
        proxy_pass http://oracle;
        proxy_set_header Host $host;
    }

By default, NGINX and NGINX Plus use the Round Robin algorithm for load balancing among servers. The load balancer runs through the list of servers in the upstream group in order, forwarding each new request to the next server. In our example, the first request goes to 172.31.11.210, the second to 172.31.0.146, the third to 172.31.11.210, and so on. For information about the other available load‑balancing algorithms, see Application Load Balancing with NGINX Plus.
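For example, to switch the oracle upstream group from Round Robin to the Least Connections method, add the least_conn directive (a standard NGINX load‑balancing directive) inside the upstream block; the rest of the block stays as configured above.

# In the 'http' block
upstream oracle {
    zone oracle 64k;
    least_conn;
    server 172.31.11.210:8000 max_fails=0;
    server 172.31.0.146:8000 max_fails=0;
}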

For more information on proxying and load balancing, see Reverse Proxy and Load Balancing in the NGINX Plus Admin Guide.
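Because NGINX Plus terminates SSL/TLS and proxies to the application tier over HTTP, you may also want to pass the client address and original scheme to the backend. The headers below use standard NGINX variables; whether EBS consumes them depends on your Oracle configuration, so treat this as an optional sketch.

# In the 'location /' block for HTTPS traffic
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;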

Configuring Session Persistence

EBS applications require session persistence. Without it, you will experience unexpected session logouts almost immediately after logging in. Oracle supports three methods for session persistence: active cookie, passive cookie, and IP address‑based.

For simplicity, configure active‑cookie session persistence with the NGINX Plus “sticky cookie” method. NGINX Plus adds a cookie called ngxcookie to every new user session, recording a hash of the backend server that was selected for the first request from the user. The cookie expires when the browser restarts.

Add the sticky directive to the upstream block created in Configuring Load Balancing, so the complete block looks like this:

# In the 'http' block
upstream oracle {
    zone oracle 64k;
    server 172.31.11.210:8000 max_fails=0;
    server 172.31.0.146:8000 max_fails=0;
    sticky cookie ngxcookie;
}
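The sticky cookie directive also accepts optional parameters if you need the cookie to survive browser restarts or to be scoped to a particular domain or path. For example (adjust the values to your own session‑timeout policy):

# In the 'upstream' block
sticky cookie ngxcookie expires=1h domain=.company.com path=/;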

Configuring HTTP/2 Support

HTTP/2 is fully supported in NGINX 1.9.5 and later and in NGINX Plus R7 and later. As always, we recommend you run the latest version of the software to take advantage of improvements and bug fixes.

  • If using NGINX, note that in NGINX 1.9.5 and later the SPDY module is completely removed from the NGINX codebase and replaced with the HTTP/2 module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX to use SPDY. If you want to keep using SPDY, you need to compile NGINX from the sources in the NGINX 1.8.x branch.

  • In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY.

    In NGINX Plus R11 and later, the nginx-plus package continues to support HTTP/2 by default, and the nginx-plus-extras package available in previous releases is deprecated in favor of dynamic modules.

    For NGINX Plus R8 through R10, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default.

    If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.

To enable HTTP/2 support, add the http2 parameter to the listen directive in the server block for HTTPS traffic that we created in Configuring Virtual Servers for HTTP and HTTPS Traffic, so that it looks like this:

# In the 'server' block for HTTPS traffic
listen 443 ssl http2;

To verify that HTTP/2 translation is working, you can use the “HTTP/2 and SPDY indicator” for Google Chrome and the “HTTP/2 indicator” for Firefox.
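You can also check from the command line with curl, provided your curl build includes HTTP/2 support; the first line of the response should report HTTP/2. The -k flag skips certificate verification, which is useful only with the self‑signed test certificate.

root# curl -skI --http2 https://company.com/ | head -1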

Configuring Active Health Checks

The open source NGINX software performs basic checks on responses from upstream servers, retrying failed requests where possible. NGINX Plus adds out‑of‑band application health checks (also known as synthetic transactions). The related slow‑start feature gradually ramps up traffic to servers in the load‑balanced group as they recover from a failure, allowing them to “warm up” without being overwhelmed.

These features enable NGINX Plus to detect and work around a much wider variety of problems and have the potential to significantly improve the availability of your Oracle applications.

We are configuring an active health check to verify that the Oracle application returns the X-ORACLE-DMS-ECID header. If not, the health check fails and NGINX Plus doesn’t send requests to the failed server.

  1. In the http context, include a match directive to define the tests that a server must pass to be considered functional. In this example, it must return a status code between 200 and 399 and the X-ORACLE-DMS-ECID header must be set.

    # In the 'http' block
    match oracleok {
        status 200-399;
        header X-ORACLE-DMS-ECID;
    }
  2. In the server block for HTTPS traffic created in Configuring Virtual Servers for HTTP and HTTPS Traffic, add a new location block for the health check.

    # In the 'server' block for HTTPS traffic
    location @health_check {
        internal;
        proxy_connect_timeout 3s;
        proxy_read_timeout 3s;
        proxy_pass http://oracle;
        proxy_set_header Host "oracle.company.com";
        health_check match=oracleok interval=4s
                     uri=/OA_HTML/AppsLocalLogin.jsp;
    }

Note that the location block is in the server block for HTTPS traffic, but the match block is in the http block.

NGINX Plus also has a slow‑start feature that is a useful auxiliary to health checks. When a failed server recovers, or a new server is added to the upstream group, NGINX Plus slowly ramps up the traffic to it over a defined period of time. This gives the server time to “warm up” without being overwhelmed by more connections than it can handle as it starts up. For more information, see the NGINX Plus Admin Guide.

For example, to set a slow‑start period of 30 seconds for your EBS application servers, include the slow_start parameter in their server directives:

# In the 'upstream' block
server 172.31.11.210:8000 slow_start=30s;
server 172.31.0.146:8000 slow_start=30s;

For information about customizing health checks, see the NGINX Plus Admin Guide.
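For example, the health_check directive also accepts fails and passes parameters that control how many consecutive checks mark a server as unhealthy or healthy again; the values below are illustrative only.

# In the 'location @health_check' block
health_check match=oracleok interval=4s fails=3 passes=2
             uri=/OA_HTML/AppsLocalLogin.jsp;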

Configuring Caching for Application Acceleration

Caching of static objects like the following significantly improves the performance of Oracle EBS:

  • Images
  • CSS files
  • JavaScript files
  • Java applets

Before configuring caching, make sure that the NGINX Plus host has adequate free disk space and disk performance. SSDs are preferred for their superior performance, but standard spinning media can be used.

  1. Create a directory for cached files:

    root@nginx # mkdir /var/oracle-cache
    root@nginx # chown nginx /var/oracle-cache
  2. In the http context, define the path to the cache, the name (cache_oracle) and maximum size (50 MB) of the shared memory zone used for storing cache keys, and the maximum size of the cache itself (here, 500 MB). Adjust the size values as appropriate for the amount of free disk space on the NGINX Plus host.

    # In the 'http' block
    proxy_cache_path /var/oracle-cache/ keys_zone=cache_oracle:50m max_size=500m;
  3. In the server block for HTTPS traffic created in Configuring Virtual Servers for HTTP and HTTPS Traffic, enable caching by defining the name of the shared memory zone for the cache (cache_oracle).

    Also add the proxy_cache_valid directive to the existing location block for / (slash). The any parameter specifies that all responses are cached, and the 1h parameter specifies that cached items expire after one hour.

    # In the 'server' block for HTTPS traffic
    proxy_cache cache_oracle;

    location / {
        proxy_pass http://oracle;
        proxy_set_header Host $host;
        proxy_cache_valid any 1h;
    }

For more complete information on caching, refer to the documentation for the Proxy module and the NGINX Plus Admin Guide.
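If you prefer to cache only static objects (the images, CSS, JavaScript, and Java applets listed above) rather than every response, one approach is a separate regular‑expression location that enables the cache just for those file extensions; the extension list here is an example to adjust for your deployment.

# In the 'server' block for HTTPS traffic
location ~* \.(gif|jpg|jpeg|png|css|js|jar)$ {
    proxy_pass http://oracle;
    proxy_set_header Host $host;
    proxy_cache cache_oracle;
    proxy_cache_valid any 1h;
}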

You can track cache usage using the following methods:

  • Statistics from the NGINX Plus Status module, displayed on the built‑in live activity monitoring dashboard, or fed to a custom or third‑party reporting tool
  • The NGINX Plus access log, when the log format includes the $upstream_cache_status variable

For detailed configuration instructions, see the next section.

Configuring Advanced Logging and Monitoring

NGINX Plus provides multiple ways to monitor your Oracle EBS installation, providing data about unavailable servers, failed health checks, response code statistics, and performance. In addition to its built‑in tools, NGINX Plus easily integrates into enterprise monitoring systems through industry‑standard protocols.

Configuring Logging with a Custom Message Format

You can customize the format of messages written to the NGINX Plus access log to include more application-specific information. Most NGINX variables can be included in log messages. The predefined combined log format includes the following variables:

  • $body_bytes_sent – Number of bytes in the body of the response sent to the client
  • $http_user_agent – User-Agent header in the client request
  • $http_referer – Referer header in the client request
  • $remote_addr – Client IP address
  • $remote_user – Username provided for HTTP basic authentication
  • $request – Full original request line
  • $status – Response status code
  • $time_local – Local time in the Common Log Format

You can access the complete list of NGINX Plus variables here.

To make troubleshooting of our load‑balancing deployment easier, let’s add the $upstream_addr variable (the address of the actual server generating the response) to the variables in the combined format.

Add the following lines in the http context to enable access logging to /var/log/nginx/access.log and to define the message format:

# In the 'http' block
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" $upstream_addr';
access_log /var/log/nginx/access.log main;

To disable logging for all HTTP traffic, for a virtual server, or for a location, include the following directive in the http, server, or location block respectively:

access_log off;

Note that the message format for error logs is predefined and cannot be changed.

Configuring Logging with syslog

The syslog utility is a widely used standard for message logging. It is used in the backbone of many monitoring and log‑aggregation solutions.

You can configure NGINX Plus to direct both error logs and access logs to syslog servers. These examples configure logging to a syslog server listening on IP address 192.168.1.1 and the default UDP port, 514.

To configure the error log, add the following line in the main context, the http context, or a server or location block:

# In the main, 'http', 'server', or 'location' block
error_log syslog:server=192.168.1.1 info;

To configure the access log using the predefined combined format, add the following line in the http context (it appears on multiple lines here solely for formatting reasons):

access_log syslog:server=192.168.1.1,facility=local7,tag=oracle,severity=info
combined;

You can include multiple error_log and access_log directives in the same context. Messages are sent to every syslog server and file.
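For example, to keep a local access log while also sending entries to the syslog server, list both directives in the http context:

# In the 'http' block
access_log /var/log/nginx/access.log main;
access_log syslog:server=192.168.1.1,facility=local7,tag=oracle,severity=info combined;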

Configuring Live Activity Monitoring

NGINX Plus includes a live activity monitoring interface that provides key load and performance metrics in real time. Statistics are reported through a RESTful JSON interface, making it very easy to feed the data to a custom or third‑party monitoring tool. These instructions deploy the dashboard that is built into NGINX Plus.

Dashboard tab in NGINX Plus live activity monitoring dashboard

The quickest way to configure the module and the built‑in NGINX Plus dashboard is to download the sample configuration file from the NGINX, Inc. website and modify it as necessary. For more complete instructions, see Live Activity Monitoring of NGINX Plus in 3 Simple Steps.

  1. Download the status.conf file to the NGINX Plus server:

    # cd /etc/nginx/conf.d
    # curl https://www.nginx.com/resource/conf/status.conf > status.conf
  2. Include the file in the http context in the main configuration file (/etc/nginx/nginx.conf):

    # In the 'http' block in nginx.conf
    include conf.d/status.conf;
  3. Customize the file for your deployment as specified by comments in the file. In particular, the default settings in the file allow anyone on any network to access the dashboard. We strongly recommend that you restrict access to the dashboard with one or more of the following methods:

    • IP address‑based access control lists (ACLs). In the sample configuration file, uncomment the allow and deny directives, and substitute the address of your administrative network for 10.0.0.0/8. Only users on the specified network can access the status page.

      allow 10.0.0.0/8;
      deny all;
    • HTTP basic authentication. In the sample configuration file, uncomment the auth_basic and auth_basic_user_file directives and add user entries to the /etc/nginx/users file (for example, by using an htpasswd generator). If you have an Apache installation, another option is to reuse an existing htpasswd file.

      auth_basic on;
      auth_basic_user_file /etc/nginx/users;
    • Client certificates, which are part of a complete configuration of SSL/TLS. For more information, see the NGINX Plus Admin Guide and the documentation for the HTTP SSL/TLS module.

    • Firewall. Configure your firewall to disallow outside access to the port for the dashboard (8080 in the sample configuration file).

  4. In the server block for HTTPS traffic (created in Configuring Virtual Servers for HTTP and HTTPS Traffic), add the status_zone directive:

    # In the 'server' block for HTTPS traffic
    status_zone oracle-ssl;

When you reload the NGINX Plus configuration file, for example by running the nginx -s reload command, the NGINX Plus dashboard is available immediately at http://nginx-server-address:8080.
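Because the statistics behind the dashboard are exposed as JSON at the /status location defined in the sample configuration file, you can also pull them from the command line (substitute your own server address and port, and expect the request to be subject to whichever access restrictions you configured above):

root# curl -s http://nginx-server-address:8080/status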

For more information about live activity monitoring, see the NGINX Plus Admin Guide.

Monitoring with Third‑Party Tools

The NGINX Plus Status module provides all metrics in JSON format, so you can feed them to many monitoring systems. Here we describe how to configure the NGINX plug‑in for New Relic. We assume that you already have a New Relic account.

Upstreams page in New Relic with NGINX plug-in

  1. Configure the NGINX open source repository so that you can download the plug‑in (nginx-nr-agent package). Use the instructions at nginx.org but don’t install the nginx package because the nginx-plus package is already installed.

  2. Install the nginx-nr-agent package using the package management tool for your OS (for example, yum or apt-get).

  3. Add your New Relic license key and NGINX Plus status URL to the configuration file, /etc/nginx-nr-agent/nginx-nr-agent.ini.

  4. Run the following command to start the agent in daemon mode.

    root@nginx # service nginx-nr-agent start

For more detailed installation and configuration instructions, see the README.txt file provided in the nginx-nr-agent package, and our blog.

Configuring Backup Servers for Disaster Recovery

If you have backup EBS servers, either at the same physical location as your regular servers or at a disaster recovery site, you can include them in the configuration so that EBS continues to work even if all the primary EBS servers go down.

To configure backup servers, add server directives to the upstream block created in Configuring Load Balancing and include the backup parameter. NGINX Plus does not forward traffic to them unless the primary servers all go down.

# In the 'upstream' block
server 172.33.111.210:8000 max_fails=0 backup;
server 172.33.100.146:8000 max_fails=0 backup;

You can then use a DNS‑based global load‑balancing solution to secure against site‑level failures.

Configuring NGINX Plus for High Availability

To increase the reliability of your EBS deployment even more, configure NGINX Plus for high availability (HA).

Configuring High Availability in an On‑Premises Deployment

NGINX Plus Release 6 (R6) and later includes a solution for fast and easy configuration of NGINX Plus in an active‑passive high‑availability (HA) setup, based on software from the keepalived open source project. We provide an overview here, but for more detailed instructions see the NGINX Plus Admin Guide.

The keepalived solution has three components: the keepalived daemon, an implementation of the Virtual Router Redundancy Protocol (VRRP) to manage virtual routers (virtual IP addresses), and a health‑check facility to determine whether a service (in this case, NGINX Plus) is up and operational. If a service on a node fails the health check the configured number of times, keepalived reassigns the virtual IP address from the master (active) node to the backup (passive) node.

VRRP ensures that there is a master node at all times. The backup node listens for VRRP advertisement packets from the master node. If it does not receive an advertisement packet for a period longer than three times the configured advertisement interval, the backup node takes over as master and assigns the configured virtual IP addresses to itself.

Run the nginx-ha-setup script (available in the nginx-ha-keepalived package, which must be installed in addition to the base NGINX Plus package) on both nodes as the root user. The script configures a high‑availability NGINX Plus environment with an active‑passive pair of nodes acting as master and backup. It prompts for the following data:

  • IP address of the local and remote nodes (one of which will be configured as a master, the other as a backup)
  • One free IP address to be used as the cluster endpoint’s (floating) virtual IP address

The configuration of the keepalived daemon is recorded in the file /etc/keepalived/keepalived.conf. The configuration blocks in the file control notification settings, the virtual IP addresses to manage, and the health checks to use to test the services that rely on virtual IP addresses. The following is the configuration file created by the nginx-ha-setup script on a CentOS 7 machine. Note that this is not an NGINX Plus configuration file, so the syntax is different (semicolons are not used to delimit directives, for example).

vrrp_script chk_nginx_service {
    script "/usr/libexec/keepalived/nginx-ha-check"
    interval 3
    weight 50
}

vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    priority 101
    virtual_router_id 51
    advert_int 1
    unicast_src_ip 192.168.100.100
    unicast_peer {
        192.168.100.101
    }
    authentication {
        auth_type PASS
        auth_pass f8f0e5114cbe031a3e1e622daf18f82a
    }
    virtual_ipaddress {
        192.168.100.150
    }
    track_script {
        chk_nginx_service
    }
    notify "/usr/libexec/keepalived/nginx-ha-notify"
}
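To see which node currently holds the virtual IP address, list the addresses on the interface named in the configuration; the virtual IP address (192.168.100.150 in this example) appears only on the active node. You can also check the keepalived service status with your init system's usual command.

root# ip addr show dev eth0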

Configuring High Availability in a Public Cloud Deployment

Most public cloud systems have integrated tools for ensuring high availability of load‑balancer instances. NGINX Plus is available in three cloud environments, which provide the indicated solutions for high availability of NGINX Plus instances:

  • Amazon EC2 – Elastic Load Balancing
  • Google Cloud Platform – Google Compute Engine HTTP load balancing
  • Microsoft Azure – Azure Traffic Manager with Azure Load Balancer

Please refer to the documentation provided by your cloud vendor. Also see our blog for a discussion of high‑availability solutions on AWS.

We recommend that you use the integrated cloud tools as simple high‑availability solutions and let NGINX Plus perform more sophisticated operations:

  • Security
  • SSL/TLS termination
  • Advanced request routing
  • Health checks
  • Session persistence
  • Monitoring
  • Caching

Configuring Multiple Web Entry Points

The preceding sections of this document, starting with Configuring Virtual Servers for HTTP and HTTPS Traffic, describe how to configure NGINX Plus load balancing for a single Web Entry Point.

You might need to configure multiple Web Entry Points through the same load balancer, for reasons like the following:

  • Access from your internal network vs. externally available servers
  • Access by different groups of users (employees, partners, customers)
  • Access with different networking requirements (for example, a multihop DMZ configuration)

If you need multiple Web Entry Points, then for each one you must:

  • Add a separate upstream block for each set of app servers
  • Add a separate server block for each load balancer entry point
  • Ensure that each shared memory zone has a unique name
  • Include the server_name directive in every server block
  • Change the listeners from any IP to specific IP addresses, if needed
  • Provide additional SSL/TLS certificate files if not using UCC or wildcard certificates

For a sample configuration, see Full Configuration for Multiple Web Entry Points.

Full Configuration Files

For your convenience, the configuration files in this section include all directives discussed in this guide. They are intended for reference. As explained in About Sample Values and Copying of Text, we recommend that you do not copy text from this document into configuration files, because it might include unwanted link text and not include whitespace and other formatting that makes the configuration easy to read. Instead, download the appropriate file from the NGINX, Inc. website as described in Creating and Modifying Configuration Files.

Note that these configuration files contain sample values that you need to change for your deployment.

Full Configuration for a Single Web Entry Point

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type text/html;

    proxy_cache_path /var/oracle-cache keys_zone=cache_oracle:50m max_size=500m;

    # Custom logging configuration
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" $upstream_addr';
    access_log /var/log/nginx/access.log main;

    upstream oracle {
        zone oracle 64k;

        # Production servers
        server 172.31.11.210:8000 max_fails=0;
        server 172.31.0.146:8000 max_fails=0;

        # Disaster recovery servers
        server 172.33.111.210:8000 max_fails=0 backup;
        server 172.33.100.146:8000 max_fails=0 backup;

        # Session persistence
        sticky cookie ngxcookie;
    }

    server {
        listen 80;
        status_zone oracle-http-redirect;
        return 302 https://$http_host$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name company.com;

        ssl_certificate /etc/nginx/ssl/certificate-name.crt;
        ssl_certificate_key /etc/nginx/ssl/private-key.key;
        ssl_protocols TLSv1.2;

        status_zone oracle-ssl;
        proxy_cache cache_oracle;

        location / {
            proxy_pass http://oracle;
            proxy_set_header Host $host;
            proxy_cache_valid any 1h;
        }

        location @health_check {
            internal;
            proxy_connect_timeout 3s;
            proxy_read_timeout 3s;
            proxy_pass http://oracle;
            proxy_set_header Host "oracle.company.com";
            health_check match=oracleok interval=4s
                         uri=/OA_HTML/AppsLocalLogin.jsp;
        }
    }

    match oracleok {
        status 200-399;
        header X-ORACLE-DMS-ECID;
    }

    server {
        # Status zone required for live activity monitoring.
        # Enable it for every 'server' block in other configuration files.
        status_zone status-page;

        # If NGINX Plus is listening on multiple IP addresses, uncomment
        # this directive to restrict access to the live activity monitoring dashboard
        # to a single IP address (substitute the appropriate address).
        # listen 10.2.3.4:8080;

        # Live activity monitoring is enabled on port 8080 by default.
        listen 8080;

        # HTTP basic authentication is enabled by default.
        # You can add users with any htpasswd generator.
        # Command-line and other online tools are very easy to find.
        # You can also reuse the htpasswd file from an Apache HTTP
        # server installation.
        #auth_basic on;
        #auth_basic_user_file /etc/nginx/users;

        # Limit access to the dashboard to users on admin networks
        # only. Uncomment the "allow" directive and change the network
        # address.
        #allow 10.0.0.0/8;
        deny all;

        # NGINX provides a built-in dashboard.
        root /usr/share/nginx/html;
        location = /status.html { }

        # Standard HTTP features are fully supported with the dashboard
        # This directive provides a redirect from "/" to "/status.html".
        location = / {
            return 301 /status.html;
        }

        # Main status location. HTTP features like authentication,
        # access control, header changes, and logging are fully
        # supported.
        location /status {
            status;
            status_format json;
            access_log off;
        }
    }
}

Full Configuration for Multiple Web Entry Points

This configuration is for two Web Entry Points with the following settings:

Setting | Web Entry Point 1 | Web Entry Point 2
Domain name | oracle-one.company.com | oracle-two.company.com
SSL/TLS certificate and key | server_one.crt & server_one.key | server_two.crt & server_two.key
Status zone | oracle-ssl-one | oracle-ssl-two
Cache zone | cache_oracle_one | cache_oracle_two
Upstream name | oracle_one | oracle_two
EBS servers | 172.31.11.210 & 172.31.0.146 | 172.31.11.211 & 172.31.0.147
Backup (DR) EBS servers | 172.33.111.210 & 172.33.100.146 | 172.33.111.211 & 172.33.100.147

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type text/html;

    proxy_cache_path /var/oracle-cache-one keys_zone=cache_oracle_one:50m max_size=500m;
    proxy_cache_path /var/oracle-cache-two keys_zone=cache_oracle_two:50m max_size=500m;

    # Custom logging configuration
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" $upstream_addr';
    access_log /var/log/nginx/access.log main;

    upstream oracle_one {
        zone oracle_one 64k;

        # Production servers
        server 172.31.11.210:8000 max_fails=0;
        server 172.31.0.146:8000 max_fails=0;

        # Disaster recovery servers
        server 172.33.111.210:8000 max_fails=0 backup;
        server 172.33.100.146:8000 max_fails=0 backup;

        # Session persistence
        sticky cookie ngxcookie;
    }

    upstream oracle_two {
        zone oracle_two 64k;

        # Production servers
        server 172.31.11.211:8000 max_fails=0;
        server 172.31.0.147:8000 max_fails=0;

        # Disaster recovery servers
        server 172.33.111.211:8000 max_fails=0 backup;
        server 172.33.100.147:8000 max_fails=0 backup;

        # Session persistence
        sticky cookie ngxcookie;
    }

    server {
        listen 80;
        status_zone oracle-http-redirect;
        return 302 https://$http_host$request_uri;
    }

    server {
        listen 192.168.210.10:443 ssl http2;
        server_name oracle-one.company.com;

        ssl_certificate /etc/nginx/ssl/server_one.crt;
        ssl_certificate_key /etc/nginx/ssl/server_one.key;
        ssl_protocols TLSv1.2;

        status_zone oracle-ssl-one;
        proxy_cache cache_oracle_one;

        location / {
            proxy_pass http://oracle_one;
            proxy_set_header Host $host;
            proxy_cache_valid any 1h;
        }

        location @health_check {
            internal;
            proxy_connect_timeout 3s;
            proxy_read_timeout 3s;
            proxy_pass http://oracle_one;
            proxy_set_header Host "oracle-one.company.com";
            health_check match=oracleok interval=4s
                         uri=/OA_HTML/AppsLocalLogin.jsp;
        }
    }

    server {
        listen 192.168.210.11:443 ssl http2;
        server_name oracle-two.company.com;

        ssl_certificate /etc/nginx/ssl/server_two.crt;
        ssl_certificate_key /etc/nginx/ssl/server_two.key;
        ssl_protocols TLSv1.2;

        status_zone oracle-ssl-two;
        proxy_cache cache_oracle_two;

        location / {
            proxy_pass http://oracle_two;
            proxy_set_header Host $host;
            proxy_cache_valid any 1h;
        }

        location @health_check {
            internal;
            proxy_connect_timeout 3s;
            proxy_read_timeout 3s;
            proxy_pass http://oracle_two;
            proxy_set_header Host "oracle-two.company.com";
            health_check match=oracleok interval=4s
                         uri=/OA_HTML/AppsLocalLogin.jsp;
        }
    }

    match oracleok {
        status 200-399;
        header X-ORACLE-DMS-ECID;
    }

    server {
        # Status zone required for live activity monitoring.
        # Enable it for every 'server' block in other configuration files.
        status_zone status-page;

        # If NGINX Plus is listening on multiple IP addresses, uncomment
        # this directive to restrict access to the live activity monitoring dashboard
        # to a single IP address (substitute the appropriate address).
        # listen 10.2.3.4:8080;

        # Live activity monitoring is enabled on port 8080 by default.
        listen 8080;

        # HTTP basic authentication is enabled by default.
        # You can add users with any htpasswd generator.
        # Command-line and other online tools are very easy to find.
        # You can also reuse the htpasswd file from an Apache HTTP
        # server installation.
        #auth_basic on;
        #auth_basic_user_file /etc/nginx/users;

        # Limit access to the dashboard to users on admin networks
        # only. Uncomment the "allow" directive and change the network
        # address.
        #allow 10.0.0.0/8;
        deny all;

        # NGINX provides a built-in dashboard.
        root /usr/share/nginx/html;
        location = /status.html { }

        # Standard HTTP features are fully supported with the dashboard
        # This directive provides a redirect from "/" to "/status.html"
        location = / {
            return 301 /status.html;
        }

        # Main status location. HTTP features like authentication,
        # access control, header changes, and logging are fully
        # supported.
        location /status {
            status;
            status_format json;
            access_log off;
        }
    }
}

Revision History

  • Version 2 (July 2017) – Update about HTTP/2 support (NGINX Plus R11 and later)
  • Version 1 (November 2015) – Initial version (NGINX Plus R7, NGINX 1.9.5)