
When announcing the R29 release of NGINX Plus, we briefly covered its new native support for parsing MQTT messages. In this post, we’ll build on that and discuss how NGINX Plus can be configured to optimize MQTT deployments in enterprise environments.

What Is MQTT?

MQTT stands for Message Queuing Telemetry Transport. It’s a very popular, lightweight publish-subscribe messaging protocol, ideal for connecting Internet of Things (IoT) or machine-to-machine (M2M) devices and applications over the internet. MQTT is designed to operate efficiently in low-bandwidth or low-power environments, making it an ideal choice for applications with a large number of remote clients. It’s used in a variety of industries, including consumer electronics, automotive, transportation, manufacturing, and healthcare.

NGINX Plus MQTT Message Processing

NGINX Plus R29 supports MQTT 3.1.1 and MQTT 5.0. It acts as a proxy between clients and brokers, offloading tasks from core systems, simplifying scalability, and reducing compute costs. Specifically, NGINX Plus parses and rewrites portions of MQTT CONNECT messages, enabling features like:

  • MQTT broker load balancing 
  • Session persistence (reconnecting clients to the same broker) 
  • SSL/TLS termination 
  • Client certificate authentication 

MQTT message processing directives must be defined in the stream context of an NGINX configuration file and are provided by the ngx_stream_mqtt_preread_module and ngx_stream_mqtt_filter_module.

The preread module processes MQTT data prior to NGINX’s internal proxying, allowing load balancing and upstream routing decisions to be made based on parsed message data.

The filter module enables rewriting of the clientid, username, and password fields within received CONNECT messages. The ability to set these fields to variables and complex values expands configuration options significantly, enabling NGINX Plus to mask sensitive device information or insert data like a TLS certificate distinguished name.

MQTT Directives and Variables

Several new directives and embedded variables are now available for tuning your NGINX configuration to optimize MQTT deployments and meet your specific needs.

Preread Module Directives and Embedded Variables

  • mqtt_preread – Enables MQTT parsing and extracts the clientid and username fields from CONNECT messages sent by client devices. These values are made available via embedded variables and are used to hash sessions to load-balanced upstream servers (examples below).
  • $mqtt_preread_clientid – Represents the MQTT client identifier sent by the device.
  • $mqtt_preread_username – Represents the username sent by the client for authentication purposes.
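
Beyond load balancing, these embedded variables can be used anywhere stream variables are accepted, such as in connection logging. The sketch below records the parsed MQTT identity for each connection; the broker address and log path are placeholders, not values from this article:

```nginx
stream {
    mqtt_preread on;

    # Log the parsed MQTT identity alongside the client address
    log_format mqtt '$remote_addr [$time_local] '
                    'clientid=$mqtt_preread_clientid user=$mqtt_preread_username';

    server {
        listen 1883;
        access_log /var/log/nginx/mqtt_access.log mqtt;
        proxy_pass 10.0.0.8:1883;  # placeholder broker address
    }
}
```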

Filter Module Directives

  • mqtt – Defines whether MQTT rewriting is enabled.
  • mqtt_buffers – Overrides the maximum number of MQTT processing buffers that can be allocated per connection, and the size of each buffer. By default, NGINX imposes a limit of 100 buffers per connection, each 1k in size. This default is typically optimal for performance, but may require tuning in special situations. For example, longer MQTT messages require a larger buffer size, and systems processing a high volume of MQTT messages on a given connection within a short period may benefit from an increased number of buffers. In most cases, tuning buffer parameters has little bearing on underlying system performance, as NGINX constructs buffers from an internal memory pool.
  • mqtt_rewrite_buffer_size – Specifies the size of the buffer used for constructing MQTT messages. This directive has been deprecated and is obsolete since NGINX Plus R30.
  • mqtt_set_connect – Rewrites parameters of the CONNECT message sent from a client. Supported parameters include: clientid, username, and password.
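
Taken together, a minimal filter configuration might look like the following sketch. The broker address, buffer values, and the literal username are illustrative assumptions, not values from a real deployment:

```nginx
stream {
    mqtt on;              # enable CONNECT message rewriting
    mqtt_buffers 200 2k;  # more, larger buffers for longer MQTT messages

    server {
        listen 1883;
        proxy_pass 10.0.0.8:1883;            # placeholder broker address
        mqtt_set_connect username device42;  # rewrite the CONNECT username field
    }
}
```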

MQTT Examples

Let’s explore the benefits of processing MQTT messages with NGINX Plus and the associated best practices in more detail. Note that we use ports 1883 and 8883 in the examples below. Port 1883 is the default unsecured MQTT port, while 8883 is the default SSL/TLS encrypted port.

MQTT Broker Load Balancing

The ephemeral nature of MQTT devices may cause client IPs to change unexpectedly. This can create challenges when routing device connections to the correct upstream broker. The subsequent movement of device connections from one upstream broker to another can result in expensive syncing operations between brokers, adding latency and cost.

By parsing the clientid field in an MQTT CONNECT message, NGINX can establish sticky sessions to upstream brokers. This is achieved by using the clientid as a hash key for maintaining connections to broker services on the backend.

In this example, we proxy MQTT device data using the clientid as a token for establishing sticky sessions to three upstream brokers. We use the consistent parameter so that if an upstream server fails, its share of the traffic is evenly distributed across the remaining servers without affecting sessions that are already established on those servers.

stream {
    mqtt_preread on;

    upstream backend {
        zone tcp_mem 64k;
        hash $mqtt_preread_clientid consistent;

        server 10.0.0.7:1883; # upstream mqtt broker 1
        server 10.0.0.8:1883; # upstream mqtt broker 2
        server 10.0.0.9:1883; # upstream mqtt broker 3
    }

    server {
        listen 1883;
        proxy_pass backend;
        proxy_connect_timeout 1s;
    }
}

NGINX Plus can also parse the username field of an MQTT CONNECT message. For more details, see the ngx_stream_mqtt_preread_module specification.
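
If your devices authenticate with stable usernames rather than client IDs, the same sticky-session technique can hash on $mqtt_preread_username instead. A sketch, using the same placeholder upstream addresses as above:

```nginx
stream {
    mqtt_preread on;

    upstream backend {
        zone tcp_mem 64k;
        hash $mqtt_preread_username consistent;  # key sessions on the MQTT username

        server 10.0.0.7:1883;
        server 10.0.0.8:1883;
        server 10.0.0.9:1883;
    }

    server {
        listen 1883;
        proxy_pass backend;
        proxy_connect_timeout 1s;
    }
}
```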

SSL/TLS Termination

Encrypting device communications is key to ensuring data confidentiality and protecting against man-in-the-middle attacks. However, TLS handshaking, encryption, and decryption can be a resource burden on an MQTT broker. To solve this, NGINX Plus can offload data encryption from a broker (or a cluster of brokers), simplifying security rules and allowing brokers to focus on processing device messages. 

In this example, we show how NGINX can be used to proxy TLS-encrypted MQTT traffic from devices to a backend broker. The ssl_session_cache directive defines a 5-megabyte cache, which is enough to store approximately 20,000 SSL sessions. NGINX will attempt to reach the proxied broker for five seconds before timing out, as defined by the proxy_connect_timeout directive.

stream {
    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_session_cache shared:SSL:5m;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 5s;
    }
}

Client ID Substitution

For security reasons, you may opt not to store client-identifiable information in the MQTT broker’s database. For example, a device may send a serial number or other sensitive data as part of an MQTT CONNECT message. By replacing a device’s identifier with another value received from the client, such as a field from its TLS certificate, an alternate unique key can be established for every device attempting to reach NGINX Plus proxied brokers.

In this example, we extract a unique identifier from a device’s client SSL certificate and use it to mask its MQTT client ID. Client certificate authentication (mutual TLS) is controlled with the ssl_verify_client directive. When set to on, NGINX requires that client certificates be signed by a trusted Certificate Authority (CA). The list of trusted CA certificates is defined by the ssl_client_certificate directive.

stream {
    mqtt on;

    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_client_certificate /etc/nginx/certs/client-ca.crt;
        ssl_session_cache shared:SSL:10m;
        ssl_verify_client on;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 1s;

        mqtt_set_connect clientid $ssl_client_serial;
    }
}

Client Certificate as an Authentication Credential

One common approach to authenticating MQTT clients is to use data stored in a client certificate as the username. NGINX Plus can parse client certificates and rewrite the MQTT username field, offloading this task from backend brokers. In the following example, we extract the client certificate’s Subject Distinguished Name (Subject DN) and copy it to the username portion of an MQTT CONNECT message.

stream {
    mqtt on;

    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_client_certificate /etc/nginx/certs/client-ca.crt;
        ssl_session_cache shared:SSL:10m;
        ssl_verify_client on;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 1s;

        mqtt_set_connect username $ssl_client_s_dn;
    }
}

For a complete specification on NGINX Plus MQTT CONNECT message rewriting, see the ngx_stream_mqtt_filter_module specification.

Get Started Today

Future developments to MQTT in NGINX Plus may include parsing of other MQTT message types, as well as deeper parsing of the CONNECT message to enable functions like:

  • Additional authentication and access control mechanisms
  • Protecting brokers by rate limiting “chatty” clients
  • Message telemetry and connection metrics

If you’re new to NGINX Plus, sign up for a free 30-day trial to get started with MQTT. We would also love to hear your feedback on the features that matter most to you. Let us know what you think in the comments.




About The Author

Michael Vernik

Sr Product Manager

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source project. We offer a suite of technologies for developing and delivering modern applications. Together with F5, our combined solution bridges the gap between NetOps and DevOps, with multi-cloud application services that span from code to customer.

Learn more at nginx.com or join the conversation by following @nginx on Twitter.