
Today, we reached a significant milestone and are very excited to announce the first major release of NGINX Gateway Fabric – version 1.0!

NGINX Gateway Fabric provides fast, reliable, and secure Kubernetes app connectivity leveraging Gateway API specifications and one of the most widely used data planes in the world, NGINX.

With NGINX Gateway Fabric, we’ve created a new tool category for Kubernetes – a unified application delivery fabric that is designed to streamline and simplify app, service, and API connectivity in Kubernetes, reducing complexity, improving availability, and providing security and visibility at scale.

NGINX Gateway Fabric, part of Connectivity Stack for Kubernetes, is our conformant implementation of the Gateway API, built on the proven NGINX data plane. The Gateway API is a cross-vendor, open source project intended to standardize and improve app and service networking in Kubernetes, and NGINX is an active participant in this project.

The Gateway API evolved from the Kubernetes Ingress API to address the limitations of using Ingress objects in production, including the complexity and error-prone configuration of advanced use cases and the difficulty of supporting multi-tenant teams on shared infrastructure. In addition, the Gateway API project formed the Gateway API for Mesh Management and Administration (GAMMA) subgroup to research and define how the Gateway API specifications can support service mesh use cases.

At NGINX, we see the Gateway API as the long-term future of unified app and API connectivity to, from, and within a Kubernetes cluster, and NGINX Gateway Fabric reflects that vision. It is architected to enable both north-south and east-west Kubernetes app and service connectivity use cases, effectively combining Ingress controller and service mesh capabilities in one unified tool that uses the same control and data planes, with centralized management, across any Kubernetes environment.

With version 1.0, we are focusing on advanced connectivity use cases at the edge of a Kubernetes cluster, such as blue-green deployments, TLS termination, and SNI routing. The roadmap includes plans to expand these capabilities with more security and observability features, as well as support for service-to-service communication use cases.
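
For example, a blue-green cutover at the edge can be expressed as a single HTTPRoute that splits traffic between two Services by weight. Below is a minimal sketch; the Gateway name, Service names, hostname, and weights are illustrative, and the apiVersion may differ depending on which Gateway API release you have installed.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-blue-green
spec:
  parentRefs:
  - name: gateway              # the Gateway deployed by the cluster operator
  hostnames:
  - "app.example.com"
  rules:
  - backendRefs:
    - name: app-blue           # current version keeps most of the traffic
      port: 80
      weight: 90
    - name: app-green          # new version receives a small share
      port: 80
      weight: 10

Shifting the weights to 0 and 100 completes the cutover without touching the Gateway or the applications themselves.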

What Is NGINX Gateway Fabric?

NGINX Gateway Fabric is architected to deliver future-proof connectivity for apps and services to, from, and within a Kubernetes cluster with its built-in support for advanced use cases, role-based API model, and extensibility that unlocks the true power of NGINX.

NGINX Gateway Fabric standardizes on three primary Gateway API resources (GatewayClass, Gateway, and Routes) with role-based access control (RBAC) mapped to the associated roles (infrastructure providers, cluster operators, and application developers).

Clearly defining the scope of responsibility and the separation of duties for each role streamlines and simplifies administration. Specifically, infrastructure providers define GatewayClasses for Kubernetes clusters while cluster operators deploy and configure Gateways within a cluster, including policies. Application developers are then free to attach Routes to Gateways to expose their applications externally while sharing the same underlying infrastructure. When clients connect to their apps, NGINX Gateway Fabric routes these requests to the respective application.
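
Here is a minimal sketch of how those roles map to Gateway API resources. The names, namespace, hostname, and controllerName value are illustrative, and the apiVersion may differ between Gateway API releases; see the NGINX Gateway Fabric documentation for the exact values used by your install.

# Infrastructure provider: defines the GatewayClass for the cluster
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: nginx
spec:
  controllerName: gateway.nginx.org/nginx-gateway-controller
---
# Cluster operator: deploys and configures a Gateway, here with TLS termination
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: cafe-secret      # TLS certificate and key stored in a Kubernetes Secret
---
# Application developer: attaches a Route to the Gateway to expose an app
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: coffee
  namespace: default
spec:
  parentRefs:
  - name: gateway
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /coffee
    backendRefs:
    - name: coffee             # Kubernetes Service backing the app
      port: 80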

To learn more about how NGINX Gateway Fabric processes complex routing rules, read our blog How NGINX Gateway Fabric Implements Complex Routing Rules.

NGINX Gateway Fabric Benefits

NGINX Gateway Fabric helps increase uptime and reduce complexity of your Kubernetes environment from edge to cloud. It is designed to simplify operations, unlock advanced capabilities, and provide seamless interoperability for Kubernetes environments, delivering improved, future-proof Kubernetes app and service connectivity.

Benefits of NGINX Gateway Fabric include:

  • Data plane – Built on one of the world’s most popular data planes, NGINX Gateway Fabric provides fast, reliable, and secure connectivity for Kubernetes apps. It simplifies and streamlines Kubernetes platform deployment and management by leveraging the same data and control planes across any hybrid, multi-cloud Kubernetes environment, reducing complexity and tool sprawl.
  • Extensibility – Unlike Kubernetes Ingress resources, many advanced use cases are available “out of the box” with NGINX Gateway Fabric, including blue-green and canary deployments, A/B testing, and request/response manipulation (see the sketch after this list). It also defines an annotation-free extensibility model with extension points and policy attachments that unlocks advanced NGINX data plane features not covered by the API itself.
  • Interoperability – NGINX Gateway Fabric is a dedicated and conformant implementation of the Gateway API, which provides high-level configuration compatibility and portability for easier migration across different implementations. Its Kubernetes-native design ensures seamless ecosystem integration with other Kubernetes platform tools and processes like Prometheus and Grafana.
  • Governance – NGINX Gateway Fabric features a native role-based API model that enables self-service governance capabilities to share the infrastructure across multi-tenant teams. As an open source project, it operates in compliance with established community governance procedures, delivering full transparency in its development process, feature roadmap, and contributions.
  • Conformance – NGINX Gateway Fabric is tested and validated to conform with the Gateway API specifications in accordance with standardized conformance tests, ensuring a consistent experience with API operations.
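
As an illustration of the request/response manipulation mentioned above, the Gateway API exposes built-in filters directly on HTTPRoute rules, so no custom annotations are needed. The sketch below sets a request header before traffic reaches the backend; the Service name and header values are illustrative.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-headers
spec:
  parentRefs:
  - name: gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: X-Request-Source        # header added to every proxied request
          value: nginx-gateway-fabric
    backendRefs:
    - name: app
      port: 80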

NGINX Gateway Fabric Architecture

Rather than shoehorn Gateway API capabilities into NGINX Ingress Controller, we created NGINX Gateway Fabric as an entirely separate project to implement the Kubernetes Gateway API. If you are curious about the reasoning behind that, read our blog Why We Decided to Start Fresh with Our NGINX Gateway Fabric.

An NGINX Gateway Fabric pod consists of two containers:

  • nginx container – Provides the data plane and consists of an NGINX master process and NGINX worker processes. The master process controls the worker processes, which handle client traffic and load balance it to the backend applications.
  • nginx-gateway container – Provides the control plane, watches Kubernetes objects (Services, Endpoints, Secrets, and Gateway API CRDs), and configures NGINX.
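
You can confirm that both containers are present in a running deployment by inspecting the pod spec. A quick check, assuming the default nginx-gateway namespace (adjust the namespace to match your install):

kubectl -n nginx-gateway get pods -o jsonpath='{.items[0].spec.containers[*].name}'
# Expected to print the two container names, for example: nginx-gateway nginx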

For a detailed description of NGINX Gateway Fabric’s design, architecture, and component interactions, refer to the project documentation.

Getting Started with NGINX Gateway Fabric

If you are interested in NGINX’s implementation of the Gateway API, check out the NGINX Gateway Fabric project on GitHub. You can get involved by:

  • Joining the project as a contributor
  • Trying the implementation in your lab (see the install sketch after this list)
  • Testing and providing feedback
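
If you want to try it in your lab, a typical installation uses the project’s published Helm chart. The commands below are a sketch; the Gateway API CRD version and the chart location may differ for the release you install, so check the project documentation first.

# Install the Gateway API CRDs required by NGINX Gateway Fabric (version shown is an example)
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.1/standard-install.yaml

# Install NGINX Gateway Fabric from its OCI Helm chart
helm install ngf oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway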

To learn more about how you can enhance application delivery with NGINX Kubernetes solutions, visit the Connectivity Stack for Kubernetes web page.

Still wondering why you should try the Gateway API? Read our blog 5 Reasons to Try the Kubernetes Gateway API to find out.

Also, don’t miss the chance to visit the NGINX booth at KubeCon North America 2023 to chat with the developers of NGINX Gateway Fabric. NGINX, part of F5, is proud to be a Platinum sponsor of KubeCon NA 2023, and we hope to see you there!




About The Author

Ilya Krutov

Product Marketing Manager, NGINX
