
[Editor– This post is an extract from our comprehensive eBook, Managing Kubernetes Traffic with F5 NGINX: A Practical Guide. Download it for free today.]

Along with HTTP traffic, NGINX Ingress Controller load balances TCP and UDP traffic, so you can use it to manage traffic for a wide range of apps and utilities based on those protocols, including:

  • MySQL, LDAP, and MQTT – TCP‑based apps used by many popular applications
  • DNS, syslog, and RADIUS – UDP‑based utilities used by edge devices and non‑transactional applications

TCP and UDP load balancing with NGINX Ingress Controller is also an effective solution for distributing network traffic to Kubernetes applications in the following circumstances:

  • You are using end-to-end encryption (E2EE), with the application handling encryption and decryption rather than NGINX Ingress Controller
  • You need high‑performance load balancing for applications that are based on TCP or UDP
  • You want to minimize the amount of change when migrating an existing network (TCP/UDP) load balancer to a Kubernetes environment

NGINX Ingress Controller comes with two NGINX Ingress resources that support TCP/UDP load balancing:

  • GlobalConfiguration resources are typically used by cluster administrators to specify the TCP/UDP ports (listeners) that are available for use by DevOps teams. Note that each NGINX Ingress Controller deployment can only have one GlobalConfiguration resource.
  • TransportServer resources are typically used by DevOps teams to configure TCP/UDP load balancing for their applications. NGINX Ingress Controller listens only on ports that the administrator has instantiated in the GlobalConfiguration resource. This prevents port conflicts and provides an extra layer of security by ensuring that DevOps teams expose to external clients only the ports the administrator has predetermined are safe.

The following diagram depicts a sample use case for the GlobalConfiguration and TransportServer resources. In gc.yaml, the cluster administrator defines TCP and UDP listeners in a GlobalConfiguration resource. In ts.yaml, a DevOps engineer references the TCP listener in a TransportServer resource that routes traffic to a MySQL deployment.

[Figure: Topology diagram of the use case for GlobalConfiguration and TransportServer resources]

The GlobalConfiguration resource in gc.yaml defines two listeners: a UDP listener on port 514 for connection to a syslog service and a TCP listener on port 5353 for connection to a MySQL service.
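The gc.yaml manifest itself is not reproduced in this extract, so the following is a minimal sketch of what it might look like. The listener ports and protocols come from the description above, and the mysql-tcp listener name from the TransportServer discussion below; the resource name, namespace, syslog-udp listener name, and apiVersion are illustrative assumptions modeled on the GlobalConfiguration examples in the NGINX Ingress Controller documentation.

apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration    # assumed resource name
  namespace: nginx-ingress     # assumed namespace for the Ingress Controller deployment
spec:
  listeners:
  - name: syslog-udp           # assumed listener name; UDP listener on port 514 for syslog
    port: 514
    protocol: UDP
  - name: mysql-tcp            # TCP listener on port 5353 for the MySQL service
    port: 5353
    protocol: TCP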

The listener block of the TransportServer resource in ts.yaml references the TCP listener defined in gc.yaml by name (mysql-tcp), and the upstreams and action blocks define the routing rule that sends TCP traffic to the mysql-db upstream.
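The ts.yaml manifest is likewise not reproduced here; a minimal sketch follows. The listener name (mysql-tcp) and upstream name (mysql-db) come from the text above, while the resource name, Kubernetes Service name, port 3306, and apiVersion are illustrative assumptions.

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: mysql-tcp              # assumed resource name
spec:
  listener:
    name: mysql-tcp            # references the TCP listener defined in gc.yaml
    protocol: TCP
  upstreams:
  - name: mysql-db
    service: mysql-db          # assumed Kubernetes Service fronting the MySQL deployment
    port: 3306                 # assumed MySQL port on that Service
  action:
    pass: mysql-db             # routing rule: send TCP traffic to the mysql-db upstream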

In this example, a DevOps engineer uses the MySQL client to verify that the configuration is working; the output below lists the tables in the rawdata_content_schema database inside the MySQL deployment, confirming the connection.

$ echo "SHOW TABLES" | mysql -h <external_IP_address> -P <port> -u <user> -p rawdata_content_schema
Enter password: <password>
Tables_in_rawdata_content_schema
authors
posts

TransportServer resources for UDP traffic are configured similarly; for a complete example, see Basic TCP/UDP Load Balancing in the NGINX Ingress Controller repo on GitHub. Advanced NGINX users can extend the TransportServer resource with native NGINX configuration using the stream-snippets ConfigMap key, as shown in the Support for TCP/UDP Load Balancing example in the repo.
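As a rough illustration of the UDP case, the sketch below references the syslog listener from gc.yaml and routes datagrams to a syslog collector; the resource name, Service name, port, and upstreamParameters values are illustrative assumptions rather than part of the original example.

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: syslog-udp             # assumed resource name
spec:
  listener:
    name: syslog-udp           # references the UDP listener defined in gc.yaml (assumed name)
    protocol: UDP
  upstreams:
  - name: syslog-app
    service: syslog-svc        # assumed Kubernetes Service for the syslog collector
    port: 514                  # assumed syslog port on that Service
  upstreamParameters:
    udpRequests: 1             # forward one datagram per session
    udpResponses: 0            # syslog is one-way, so expect no response
  action:
    pass: syslog-app           # routing rule: send UDP traffic to the syslog-app upstream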

For more information about features you can configure in TransportServer resources, see the NGINX Ingress Controller documentation.

This post is an extract from our comprehensive eBook, Managing Kubernetes Traffic with NGINX: A Practical Guide. Download it for free today.

Try the NGINX Ingress Controller based on NGINX Plus for yourself in a 30-day free trial today or contact us to discuss your use cases.

Managing Kubernetes Traffic with F5 NGINX: A Practical Guide

Learn how to manage Kubernetes traffic with F5 NGINX Ingress Controller and F5 NGINX Service Mesh and solve the complex challenges of running Kubernetes in production.



About The Author

Amir Rawdat

Solutions Engineer

Amir Rawdat is a technical marketing engineer at NGINX, where he specializes in content creation of various technical topics. He has a strong background in computer networking, computer programming, troubleshooting, and content creation. Previously, Amir was a customer application engineer at Nokia.

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source project. We offer a suite of technologies for developing and delivering modern applications. Together with F5, our combined solution bridges the gap between NetOps and DevOps, with multi-cloud application services that span from code to customer.

Learn more at nginx.com or join the conversation by following @nginx on Twitter.