
In the first post in this series, we described several approaches to improving the security of your SSL private keys. The post finished with a demonstration of a remote password distribution point (PDP) used to securely share encryption passwords with NGINX instances.

Secrets management systems like HashiCorp Vault operate in a similar fashion to that sample PDP:

  • They use a central (or highly available and distributed) secrets service that is accessed over HTTPS or another API
  • Clients are authenticated by authentication tokens or other means
  • Tokens can be revoked as required to control access to the secret
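
For example, a client that holds a valid token can read a secret directly over Vault’s HTTP API. The sketch below assumes the Vault server used later in this post (http://pdp:8200), a version 1 KV secrets engine mounted at secret/ (the URL layout differs slightly for KV version 2), and a token held in the CLIENT_TOKEN shell variable:

# Read a secret over Vault's HTTP API; the client token is sent in the X-Vault-Token header
curl --header "X-Vault-Token: $CLIENT_TOKEN" \
     http://pdp:8200/v1/secret/webservers/ssl_passwords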

In this post, we show how to set up HashiCorp Vault to distribute SSL passwords. For even more security, you can set up an external hardware security module (HSM).

To completely eliminate on‑disk storage of SSL certificate‑key pairs, see the third post in this series, Using the NGINX Plus Key-Value Store to Secure Ephemeral SSL Keys from HashiCorp Vault. It explains how to generate ephemeral SSL keys from HashiCorp Vault and store them in memory in the NGINX Plus key‑value store.

This post applies to both NGINX Open Source and NGINX Plus. For ease of reading, we’ll refer to NGINX throughout.

Using HashiCorp Vault to Protect SSL Private Keys

The instructions in this section set up a central PDP server using Vault to distribute SSL passwords. They are based on DigitalOcean’s instructions; modify them as necessary to comply with your own Vault policies.

In our example, each remote web server has a unique authentication token. These tokens can be used to access secrets in Vault’s secret/webservers/ path, and we store the SSL passwords in secret/webservers/ssl_passwords.

We will see how to secure the tokens, and how to revoke individual authentication tokens when necessary.

Download, Install, and Initialize HashiCorp Vault on the PDP Server

  1. Follow the DigitalOcean instructions to download and extract Vault on your PDP server. We’re using the following sample /etc/vault.hcl file to make Vault remotely accessible, and to disable TLS (for ease of use when testing):

    backend "file" {
        path = "/var/lib/vault"
    }
    
    listener "tcp" {
        address = "0.0.0.0:8200"
        tls_disable = 1
    }
  2. Start Vault using the startup scripts (if you created them), or manually:

    user@pdp:~$ sudo /usr/local/bin/vault server -config=/etc/vault.hcl
  3. Initialize Vault, make a note of the Initial Root Token that the init command prints, and unseal Vault (with a key threshold of 2, you run vault operator unseal twice, supplying a different unseal key each time; see the note after this list):

    user@pdp:~$ export VAULT_ADDR=http://localhost:8200
    user@pdp:~$ vault operator init -key-shares=3 -key-threshold=2
    ...
    Initial Root Token: 86c5c2a4-8ab2-24dd-1816-48449c83114e
    user@pdp:~$ vault operator unseal
  4. We need to provide the Initial Root Token in many of the following commands. For convenience, we assign it to the root_token shell variable:

    user@pdp:~$ root_token=86c5c2a4-8ab2-24dd-1816-48449c83114e
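
Note that with a key threshold of 2, Vault remains sealed until two different unseal key shares have been supplied. A minimal sketch of the unsealing step (the key values are the ones printed by vault operator init):

user@pdp:~$ vault operator unseal    # paste the first unseal key when prompted
user@pdp:~$ vault operator unseal    # paste a second, different unseal key
user@pdp:~$ vault status             # the output reports Sealed as false once unsealing succeeds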

Store the Secrets

  1. Still working on the PDP server, create a temporary file called /tmp/ssl_passwords.txt with the SSL passwords in it, one password per line.

  2. Store this file as a secret in Vault, and verify that you can retrieve it:

    user@pdp:~$ VAULT_TOKEN=$root_token vault kv put secret/webservers/ssl_passwords value=@/tmp/ssl_passwords.txt
    
    user@pdp:~$ VAULT_TOKEN=$root_token vault kv get -field=value secret/webservers/ssl_passwords
    password1
    password2
    ...
  3. For security, delete /tmp/ssl_passwords.txt.

  4. Create a policy specification in a file called web.hcl with the following contents:

    path "secret/webservers/*" {
        capabilities = ["read"]
    }
  5. Load the policy into Vault, naming it web:

    user@pdp:~$ VAULT_TOKEN=$root_token vault policy write web web.hcl
  6. Create a new authentication token, associate it with the web policy, and optionally include the display-name parameter to give it a user‑friendly name. Make a note of the token and token_accessor values; you’ll use them in subsequent commands:

    user@pdp:~$ VAULT_TOKEN=$root_token vault token create -policy=web -display-name=webserver1
    Key                  Value
    ---                  -----
    token                dcf75ffd-a245-860f-6960-dc9e834d3385
    token_accessor       0c1d6181-7adf-7b42-27be-b70cfa264048

    The NGINX web server uses this token to retrieve the SSL passwords. The web policy prevents the web server from retrieving secrets outside the secret/webservers/* path.
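
To confirm that the web policy really does restrict the new token to the secret/webservers/ path, you can check that a read outside that path is denied (a quick sanity check; the token value is the example one created above, and the secret/other path is hypothetical):

user@pdp:~$ VAULT_TOKEN=dcf75ffd-a245-860f-6960-dc9e834d3385 vault kv get secret/webservers/ssl_passwords   # succeeds
user@pdp:~$ VAULT_TOKEN=dcf75ffd-a245-860f-6960-dc9e834d3385 vault kv get secret/other                      # fails with a permission-denied error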

Verify the Web Server Can Retrieve the Passwords

  1. Working on the NGINX web server, install the Vault binary.
  2. Declare the location of the remote Vault server (here, http://pdp:8200), and then verify that the web server machine can retrieve the SSL passwords using the token:

    user@web1:~$ export VAULT_ADDR=http://pdp:8200
    user@web1:~$ VAULT_TOKEN=dcf75ffd-a245-860f-6960-dc9e834d3385 vault kv get -field=value secret/webservers/ssl_passwords
    password1
    password2
    ...
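
If the retrieval fails, a useful first check is to look up the token itself from the web server; the output should show the web policy and the webserver1 display name (a sketch using the example token created above):

user@web1:~$ VAULT_TOKEN=dcf75ffd-a245-860f-6960-dc9e834d3385 vault token lookup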

Configure the NGINX Vault Connector on the Web Server

  1. As part of setting up the sample PDP in the first post, we created a shell script called connector.sh on the NGINX host (the web server machine). Here we modify it to retrieve the passwords from Vault, saving it as connector_v.sh:

    #!/bin/sh
    # Usage: connector_v.sh <path-to-connector-fifo> <vault-token>
    
    CONNECTOR=$1
    CREDS=$2
    
    # Remove any stale FIFO, then recreate it readable only by its owner
    [ -e "$CONNECTOR" ] && /bin/rm -f "$CONNECTOR"
    
    mkfifo "$CONNECTOR"; chmod 600 "$CONNECTOR"
    
    export VAULT_ADDR=http://pdp:8200
    export VAULT_TOKEN=$CREDS
    
    # Each time a reader opens the FIFO, write the current passwords from Vault
    while true; do
        vault kv get -field=value secret/webservers/ssl_passwords > "$CONNECTOR"
        sleep 0.1 # brief pause so the reader sees EOF before the next write
    done
  2. Run the script as a background process, invoked as follows (a sketch for running it as a supervised service instead appears after this list):

    root@web1:~# ./connector_v.sh /var/run/nginx/ssl_passwords \
    dcf75ffd-a245-860f-6960-dc9e834d3385 &
  3. Test the connector by reading from the connector path:

    root@web1:~# cat /var/run/nginx/ssl_passwords
    password1
    password2
    ...
  4. Configure NGINX to read the ssl_passwords file on startup, and to use the contents as passwords to decrypt encrypted private keys. You can include the ssl_password_file directive either in a server block (like the one created for the standard configuration in the first post) or in the http context to apply it to multiple virtual servers:

    ssl_password_file /var/run/nginx/ssl_passwords;
  5. Verify that NGINX can read the password and decrypt the SSL keys:

    root@web1:~# nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
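
The connector must be running before NGINX starts or reloads its configuration, so in production you probably want it supervised rather than launched by hand. Below is a minimal sketch of a systemd unit that does this; the unit name, script location, and the token placeholder are assumptions for this example:

root@web1:~# cat > /etc/systemd/system/nginx-vault-connector.service <<'EOF'
[Unit]
Description=Vault connector FIFO supplying SSL passwords to NGINX
Before=nginx.service

[Service]
ExecStart=/usr/local/bin/connector_v.sh /var/run/nginx/ssl_passwords <web-server-token>
Restart=always

[Install]
WantedBy=multi-user.target
EOF
root@web1:~# systemctl daemon-reload
root@web1:~# systemctl enable --now nginx-vault-connector.service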

Revoking a Web Server’s Authentication Token

You can easily revoke access if a web server is compromised or when it is decommissioned, by directly revoking the authentication token it uses:

user@pdp:~$ VAULT_TOKEN=$root_token vault token revoke dcf75ffd-a245-860f-6960-dc9e834d3385

Vault tokens are sensitive items of data, and many Vault workflows do not store copies of tokens that are issued to an authenticated client. If a copy of a token is leaked, an attacker can impersonate the client.

Instead, it’s common to manage an active token using its accessor, which gives limited rights over the token and cannot be used to retrieve the token value. Rather than storing a token when it is issued, store its corresponding accessor.

If you need to determine the accessor for a web server’s authentication token, run the vault list command to retrieve the list of accessors, and the vault token lookup command on each accessor to find the one with the relevant display name and policy:

user@pdp:~$ VAULT_TOKEN=$root_token vault list /auth/token/accessors
Keys
----
83be5a73-9025-1221-cb70-4b0e8a3ba8df
0c1d6181-7adf-7b42-27be-b70cfa264048
f043b145-7a63-01db-ea85-9f22f413c55e

user@pdp:~$ VAULT_TOKEN=$root_token vault token lookup -accessor 0c1d6181-7adf-7b42-27be-b70cfa264048
Key                 Value
---                 -----
...
display-name        webserver1       
...
policy              web
...

You can then revoke the token using its accessor:

user@pdp:~$ VAULT_TOKEN=$root_token vault token revoke -accessor 0c1d6181-7adf-7b42-27be-b70cfa264048
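
Either way, you can verify that revocation has taken effect: any subsequent attempt to use the revoked token from the web server should fail (a quick check using the example token from above):

user@web1:~$ VAULT_TOKEN=dcf75ffd-a245-860f-6960-dc9e834d3385 vault kv get -field=value secret/webservers/ssl_passwords
# fails with a permission-denied error once the token has been revoked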

Security Implications

Using Vault has a security profile similar to that of the sample PDP described in the first post. An SSL private key can be obtained only by first obtaining the corresponding password, and for that an attacker needs to know the value of a current authentication token.

The primary benefit of using Vault is to automate and scale the secret store.

Using an External HSM to Manage Private Keys

None of the solutions we’ve covered so far in the series protect the private key when an attacker gains root access to the NGINX server. If an attacker can access NGINX’s runtime memory or generate a core dump, there are well‑known techniques to scan the process’s memory and locate private key data.

External hardware security modules (HSMs) address this by storing the SSL private keys in external, tamper‑proof hardware. They offer decryption as a service, and NGINX accesses that service whenever it needs to perform an SSL operation that requires the key.

The NGINX server never sees the SSL private key data. An attacker who gains root access on the server cannot obtain the SSL private key, but can decrypt data on demand by accessing the HSM decryption service using the NGINX credentials.

Configuring NGINX to Access an HSM

NGINX delegates all SSL private key operations to a crypto library called OpenSSL. Third‑party HSM devices can be made available to NGINX by using the HSM vendor’s OpenSSL engine.

The NGINX configuration is specific to each vendor HSM, but generally follows a straightforward path:

  • Configure NGINX to use the vendor’s OpenSSL engine rather than the default software engine:

    ssl_certificate_key engine:vendor-hsm-engine:...;
  • Rather than using the real private key, configure NGINX to use the vendor‑supplied ‘fake’ key. This key contains a handle that identifies the real key on the HSM device:

    ssl_certificate_key ssl/vendor.private.key;

    The key may also contain the credentials to access the HSM device, or the credentials may be provided using additional vendor‑specific configuration.

  • (Optional) Apply any desired tuning, such as increasing the number of NGINX worker processes, to maximize the performance of NGINX and the HSM.

For an example of HSM setup, refer to Amazon’s CloudHSM documentation.

Security Implications

External HSMs are a highly secure method of storing SSL private keys. An attacker with root access to the NGINX server is able to leverage the NGINX credentials to decrypt arbitrary data using the HSM but is not able to obtain the unencrypted private key. An HSM makes it significantly harder for an attacker to impersonate a website or to decrypt arbitrary data offline.

Conclusion

It’s essential to ensure that secret data such as the SSL private key is fully protected because the consequences of disclosure are very serious.

For many organizations with appropriate security processes in place, it’s sufficient to store the private key on the frontend load balancers and then limit and audit all access to those servers (the standard configuration described in the first post).

For organizations that need to deploy NGINX configuration frequently, the measures in this post and the first one can be used to limit the users or entities who can see the private key data.

In the third post in this series, Using the NGINX Plus Key-Value Store to Secure Ephemeral SSL Keys from HashiCorp Vault, we explain how to automate the provisioning of keys and certificates from Vault to NGINX Plus’s key‑value store, using the NGINX Plus API.

Note again that none of these methods reduces the need to fully secure the running NGINX instances from remote access or configuration manipulation.

Try NGINX Plus for yourself – start your free 30-day trial today or contact us to discuss your use cases.



