Easier Deployment and Upgrade of NGINX Service Mesh

Service meshes are rapidly becoming a critical component for the cloud native stack, especially for users of the Kubernetes platform. A service mesh provides critical observability, security, and traffic control so that your Kubernetes apps don’t need to implement these features, which frees developers to focus on business logic.

NGINX Service Mesh is our fully integrated service mesh platform. It provides all the advantages of a service mesh while leveraging a data plane powered by NGINX Plus to enable key features like mTLS, traffic management, and high availability.

NGINX Service Mesh Release 1.1.0 introduces three key enhancements that make it easier to deploy and manage our production‑ready service mesh in Kubernetes: Helm support, air‑gap installation, and in‑place upgrades.

Helm Support

NGINX Service Mesh includes the nginx-meshctl CLI tool for fully scriptable installation, upgrade, and removal as part of any CI/CD pipeline. But a CLI is not always the preferred approach for managing Kubernetes services. NGINX Service Mesh Release 1.1.0 adds support for Helm, a popular open source tool for automating the creation, configuration, packaging, and deployment of applications and services on Kubernetes.

To use Helm with NGINX Service Mesh, first add the Helm repository and refresh the local chart index:

# helm repo add nginx-service-mesh https://helm.nginx.com/stable
# helm repo update

and then install the chart in a dedicated namespace, with a release name of your choosing:

# helm install release-name nginx-service-mesh/nginx-service-mesh -n nginx-mesh-namespace --create-namespace

For more information on deploying NGINX Service Mesh with Helm, see our documentation.
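Once the chart is installed, the release can be managed with the standard Helm lifecycle commands. A minimal sketch is shown below; `release-name` and `nginx-mesh-namespace` are the placeholders from the install command above, so substitute your own values.

```shell
# Check the health and revision history of the deployed release.
helm status release-name -n nginx-mesh-namespace

# Pick up newer chart versions from the repository, then upgrade in place.
helm repo update
helm upgrade release-name nginx-service-mesh/nginx-service-mesh -n nginx-mesh-namespace

# Remove the release (and the mesh control plane it manages) when done.
helm uninstall release-name -n nginx-mesh-namespace
```

Because Helm tracks each release's revision history, `helm rollback` can also be used to return to a previous chart revision if an upgrade misbehaves.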

Air-Gap Installation

In accordance with standard Kubernetes practice, by default NGINX Service Mesh pulls the container images for both the control plane and the data plane from external registries at deployment time. Some of the images come from the public NGINX container registry, while others (such as Prometheus and Grafana) are pulled from other public registries. This model works well for Kubernetes environments with public outbound access, but not for restricted, locked-down environments that have no direct route to the public internet.

NGINX Service Mesh Release 1.1.0 introduces support for air-gapped installation: you pre-pull the images and push them to your own private image registry, which is accessible only from inside your Kubernetes environment.

When you push the pre‑pulled images to your private registry server, you must use the image names and tags specified in the documentation. You then instruct NGINX Service Mesh to pull the images from the private registry server by including the --disable-public-images flag on the nginx-meshctl deploy command:

# nginx-meshctl deploy --registry-server your-private-registry --disable-public-images
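The pre-pull-and-push step can be scripted. The sketch below is a dry run (each command is echoed rather than executed); the registry host, source path, and image list are placeholder assumptions, so use the exact image names and tags listed in the documentation.

```shell
# Mirror NGINX Service Mesh images into a private registry (dry-run sketch).
PRIVATE_REGISTRY="your-private-registry"                  # placeholder: your registry host
VERSION="1.1.0"
IMAGES="nginx-mesh-api nginx-mesh-sidecar nginx-mesh-init"  # placeholder list: see the docs

for image in $IMAGES; do
  src="docker-registry.nginx.com/nsm/${image}:${VERSION}" # assumed public source path
  dst="${PRIVATE_REGISTRY}/${image}:${VERSION}"
  # 'echo' keeps this a dry run; remove it to perform the actual mirroring.
  echo docker pull "${src}"
  echo docker tag "${src}" "${dst}"
  echo docker push "${dst}"
done
```

Run the mirroring from a host that has both public outbound access and access to the private registry, then run `nginx-meshctl deploy` from inside the restricted environment.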

In-Place Upgrades

Previously, you had two choices when upgrading NGINX Service Mesh: remove the running deployment before installing the updated release, or deploy the update in a separate cluster and migrate users over to it. Both were less than ideal, because either way you had to redeploy the services managed by the mesh to complete the update.

NGINX Service Mesh 1.1.0 introduces support for in‑place upgrades for a much less disruptive experience. In addition, all required container images and the NGINX Service Mesh control plane are automatically upgraded, and Custom Resource Definitions (CRDs) are maintained.

You can upgrade your current deployment to the latest version by installing the latest nginx-meshctl command-line tool and running the following command:

# nginx-meshctl upgrade

Note: During an in-place upgrade, the data plane sidecars continue to run the previous version until their workloads are re-rolled or scaled up; pods created by a scale-up run the latest sidecar version.
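To move existing pods onto the new sidecar version without scaling, you can trigger a rolling restart of each meshed workload with kubectl. A minimal sketch, where `my-app` and `my-namespace` are hypothetical names for one of your own deployments:

```shell
# Re-roll a meshed workload so its pods are re-injected with the new sidecar.
kubectl rollout restart deployment/my-app -n my-namespace

# Wait for the rollout to complete before re-rolling the next workload.
kubectl rollout status deployment/my-app -n my-namespace
```

Restarting workloads one at a time keeps the re-roll as unobtrusive as the upgrade itself, since each deployment's rolling-update strategy replaces pods gradually.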


NGINX Service Mesh 1.1.0 continues our focus on improving the user experience. Learn more about this release in the release notes, and download NGINX Service Mesh for free at the F5 downloads page to get started. You can deploy it in your development, test, and production environments, and submit your feedback on GitHub. Not sure if you’re ready for a service mesh? Check out our webinar featuring service mesh experts, Are You Service Mesh Ready? Moving from Consideration to Implementation.

To get the most out of NGINX Service Mesh, we suggest pairing it with NGINX Ingress Controller so you can manage ingress and egress together. The demo below and the associated blog by NGINX engineer Kate Osborn show how you can simplify Kubernetes ingress and egress traffic management by combining NGINX Service Mesh and the NGINX Plus-based version of NGINX Ingress Controller. Start your free 30-day trial of NGINX Ingress Controller today or contact us to discuss your use cases.
