
Shifting Security Left with F5 NGINX App Protect on Amazon EKS

According to The State of Application Strategy in 2022 report from F5, digital transformation in the enterprise continues to accelerate globally. Most enterprises deploy between 200 and 1,000 apps spanning multiple cloud zones, and today’s apps are moving from monolithic to modern distributed architectures.

Kubernetes first hit the tech scene for mainstream use in 2016, a mere six years ago. Yet today more than 75% of organizations worldwide run containerized applications in production, up 30% from 2019. One critical issue in Kubernetes environments, including Amazon Elastic Kubernetes Service (EKS), is security. All too often security is “bolted on” at the end of the app development process, and sometimes not even until after a containerized application is already up and running.

The current wave of digital transformation, accelerated by the COVID‑19 pandemic, has forced many businesses to take a more holistic approach to security and consider a “shift left” strategy. Shifting security left means introducing security measures early in the software development lifecycle (SDLC) and using security tools and controls at every stage of the CI/CD pipeline for applications, containers, microservices, and APIs. It represents a move to a new paradigm called DevSecOps, in which security is added to DevOps processes and integrated into the rapid release cycles typical of modern software development and delivery.

DevSecOps represents a significant cultural shift. Security and DevOps teams work with a common purpose: to bring high‑quality products to market quickly and securely. Developers no longer feel stymied at every turn by security procedures that stop their workflow. Security teams no longer find themselves fixing the same problems repeatedly. This makes it possible for the organization to maintain a strong security posture, catching and preventing vulnerabilities, misconfigurations, and violations of compliance or policy as they occur.

Shifting security left and automating security as code protects your Amazon EKS environment from the outset. Learning how to become production‑ready at scale is a big part of building a Kubernetes foundation. Proper governance of Amazon EKS helps drive efficiency, transparency, and accountability across the business while also controlling cost. Strong governance and security guardrails create a framework for better visibility and control of your clusters. Without them, your organization is exposed to greater risk of security breaches and the accompanying long‑tail costs associated with damage to revenue and reputation.

To find out more about what to consider when moving to a security‑first strategy, take a look at this recent report from O’Reilly, Shifting Left for Application Security.

Automating Security for Amazon EKS with GitOps

Automation is an important enabler for DevSecOps, helping to maintain consistency even at a rapid pace of development and deployment. Like infrastructure as code, automating with a security-as-code approach entails using declarative policies to maintain the desired security state.

GitOps is an operational framework that facilitates automation to support and simplify application delivery and cluster management. The main idea of GitOps is having a Git repository that stores the declarative configuration of Kubernetes objects and the applications running on them, defined as code. An automated process completes the GitOps paradigm by making the production environment match the stored state descriptions.

The repository acts as a source of truth in the form of security policies, which are then referenced by declarative configuration-as-code descriptions as part of the CI/CD pipeline process. As an example, NGINX maintains a GitHub repository with an Ansible role for F5 NGINX App Protect, which we hope is useful for teams that want to shift security left.

With such a repo, all it takes to deploy a new application or update an existing one is to update the repo. The automated process manages everything else, including applying configurations and making sure that updates are successful. This ensures that everything developers do happens in the version control system and is synchronized to enforce security on business‑critical applications.
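The article doesn’t prescribe a specific GitOps tool. Purely as an illustration, here is a minimal sketch of an Argo CD Application that keeps a cluster synchronized with such a policy repo; the repository URL, path, and namespaces are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: security-policies
  namespace: argocd
spec:
  project: default
  source:
    # Git repository acting as the source of truth for security policy manifests
    repoURL: https://github.com/example-org/security-policies.git
    targetRevision: main
    path: waf-policies
  destination:
    server: https://kubernetes.default.svc
    namespace: nginx-ingress
  syncPolicy:
    automated:
      prune: true     # remove objects that are deleted from the repo
      selfHeal: true  # revert manual changes made directly in the cluster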

When running on Amazon EKS, GitOps makes security seamless and robust, while virtually eliminating human errors and keeping track of all versioning changes that are applied over time.

Diagram showing how to shift left using security as code with NGINX App Protect WAF and DoS, Jenkins, and Ansible
Figure 1: NGINX App Protect helps you shift security left with security as code at all phases of your software development lifecycle

NGINX App Protect and NGINX Ingress Controller Protect Your Apps and APIs in Amazon EKS

A robust design for Kubernetes security policy must accommodate the needs of both SecOps and DevOps and include provisions for adapting as the environment scales. Kubernetes clusters can be shared in many ways. For example, a cluster might have multiple applications running in it and sharing its resources, while in another case there are multiple instances of one application, each for a different end user or group. This implies that security boundaries are not always sharply defined and there is a need for flexible and fine‑grained security policies.

The overall security design must be flexible enough to accommodate exceptions, must integrate easily into the CI/CD pipeline, and must support multi‑tenancy. In the context of Kubernetes, a tenant is a logical grouping of Kubernetes objects and applications that are associated with a specific business unit, team, use case, or environment. Multi‑tenancy, then, means multiple tenants securely sharing the same cluster, with boundaries between tenants enforced based on technical security requirements that are tightly connected to business needs.
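As a minimal, NGINX-agnostic sketch of how such tenant boundaries are commonly expressed in Kubernetes (the namespace name and quota values are placeholders), a tenant can be given its own namespace with a default-deny ingress policy and a resource quota:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}   # applies to every pod in the namespace
  policyTypes:
  - Ingress         # inbound traffic is denied unless another policy allows it
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi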

An easy way to implement low‑latency, high‑performance security on Amazon EKS is by embedding the NGINX App Protect WAF and DoS modules with NGINX Ingress Controller. None of our competitors provides this type of inline solution. Using one product with synchronized technology provides several advantages, including reduced compute time, costs, and tool sprawl. Here are some additional benefits.

  • Securing the application perimeter – In a well‑architected Kubernetes deployment, NGINX Ingress Controller is the only point of entry for data‑plane traffic flowing to services running within Kubernetes, making it an ideal location for a WAF and DoS protection.
  • Consolidating the data plane – Embedding the WAF within NGINX Ingress Controller eliminates the need for a separate WAF device. This reduces complexity, cost, and the number of points of failure.
  • Consolidating the control plane – WAF and DoS configuration can be managed with the Kubernetes API, making it significantly easier to automate CI/CD processes. NGINX Ingress Controller configuration complies with Kubernetes role‑based access control (RBAC) practices, so you can securely delegate the WAF and DoS configurations to a dedicated DevSecOps team (a sketch of such delegation follows this list).
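As a rough sketch of that delegation (the resource names reflect the App Protect WAF CRDs and should be verified against the CRDs installed in your cluster; the namespace and group name are placeholders), a namespaced Role can give a DevSecOps group rights over WAF policy and logging objects only:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: waf-policy-editor
  namespace: dvwa
rules:
- apiGroups: ["appprotect.f5.com"]
  resources: ["appolicies", "aplogconfs", "apusersigs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: waf-policy-editor
  namespace: dvwa
subjects:
- kind: Group
  name: devsecops   # group name as asserted by your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: waf-policy-editor
  apiGroup: rbac.authorization.k8s.io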

The configuration objects for NGINX App Protect WAF and DoS are consistent across both NGINX Ingress Controller and NGINX Plus. A master configuration can easily be translated and deployed to either platform, making it even easier to manage WAF configuration as code and deploy it to any application environment.

To build NGINX App Protect WAF and DoS into NGINX Ingress Controller, you must have subscriptions for both NGINX Plus and NGINX App Protect WAF or DoS. A few simple steps are all it takes to build the integrated NGINX Ingress Controller image (Docker container). After deploying the image (manually or with Helm charts, for example), you can manage security policies and configuration using the familiar Kubernetes API.
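As a sketch of those steps (the make target and Helm parameter names below are assumptions based on the kubernetes-ingress build and Helm documentation and may differ between releases, so verify them for your version; the registry name and tag are placeholders):

# Clone the repo and build an NGINX Plus-based image that includes App Protect
# WAF and DoS (requires the NGINX Plus certificate and key from your
# subscription in the repo root)
$ git clone https://github.com/nginxinc/kubernetes-ingress.git && cd kubernetes-ingress
$ make debian-image-nap-dos-plus PREFIX=registry.example.com/nginx-ic-nap-dos TAG=edge
$ docker push registry.example.com/nginx-ic-nap-dos:edge

# Deploy the image with Helm, enabling both modules
$ helm install nginx-kic nginx-stable/nginx-ingress \
    --namespace nginx-ingress --create-namespace \
    --set controller.nginxplus=true \
    --set controller.image.repository=registry.example.com/nginx-ic-nap-dos \
    --set controller.image.tag=edge \
    --set controller.appprotect.enable=true \
    --set controller.appprotectdos.enable=true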

Diagram showing topology for deploying NGINX App Protect WAF and DoS on NGINX Ingress Controller in Amazon EKS
Figure 2: NGINX App Protect WAF and DoS on NGINX Ingress Controller routes app and API traffic to pods and microservices running in Amazon EKS

The NGINX Ingress Controller based on NGINX Plus provides granular control and management of authentication, RBAC‑based authorization, and external interactions with pods. When the client is using HTTPS, NGINX Ingress Controller can terminate TLS and decrypt traffic to apply Layer 7 routing and enforce security.

NGINX App Protect WAF and NGINX App Protect DoS can then be deployed as a lightweight software security solution to enforce security policies that protect against point attacks at Layer 7. NGINX App Protect WAF secures Kubernetes apps against OWASP Top 10 attacks, and provides advanced signatures and threat protection, bot defense, and Data Guard protection against exploitation of personally identifiable information (PII). NGINX App Protect DoS provides an additional line of defense at Layers 4 and 7 to mitigate sophisticated application‑layer DoS attacks, using user behavior analysis and app health checks to protect against attacks that include Slow POST, Slowloris, flood attacks, and Challenge Collapsar.

Such security measures protect both REST APIs and applications accessed using web browsers. API security is also enforced at the Ingress level following the north‑south traffic flow.

NGINX Ingress Controller with NGINX App Protect WAF and DoS can secure Amazon EKS traffic on a per‑request basis rather than per‑service: this is a more useful view of Layer 7 traffic and a far better way to enforce SLAs and north‑south WAF security.

Diagram showing NGINX Ingress Controller with NGINX App Protect WAF and DoS routing north-south traffic to nodes in Amazon EKS
Figure 3: NGINX Ingress Controller with NGINX App Protect WAF and DoS routes north-south traffic to nodes in Amazon EKS

The latest High‑Performance Web Application Firewall Testing report from GigaOm shows how NGINX App Protect WAF consistently delivers strong app and API security while maintaining high performance and low latency, outperforming the other three WAFs tested – AWS WAF, Azure WAF, and Cloudflare WAF – at all tested attack rates.

As an example, Figure 4 shows the results of a test where the WAF had to handle 500 requests per second (RPS), with 95% (475 RPS) of requests valid and 5% of requests (25 RPS) “bad” (simulating script injection). At the 99th percentile, latency for NGINX App Protect WAF was 10x less than AWS WAF, 60x less than Cloudflare WAF, and 120x less than Azure WAF.

Graph showing latency at 475 RPS with 5% bad traffic at various percentiles for 4 WAFs: NGINX App Protect WAF, AWS WAF, Azure WAF, and Cloudflare WAF
Figure 4: Latency for 475 RPS with 5% bad traffic

Figure 5 shows the highest throughput each WAF achieved at 100% success (no 5xx or 429 errors) with less than 30 milliseconds latency for each request. NGINX App Protect WAF handled 19,000 RPS versus Cloudflare WAF at 14,000 RPS, AWS WAF at 6,000 RPS, and Azure WAF at only 2,000 RPS.

Graph showing maximum throughput at 100% success rate: 19,000 RPS for NGINX App Protect WAF; 14,000 RPS for Cloudflare WAF; 6,000 RPS for AWS WAF; 2,000 RPS for Azure WAF
Figure 5: Maximum throughput at 100% success rate

How to Deploy NGINX App Protect and NGINX Ingress Controller on Amazon EKS

NGINX App Protect WAF and DoS leverage an app‑centric security approach with fully declarative configurations and security policies, making it easy to integrate security into your CI/CD pipeline for the application lifecycle on Amazon EKS.

NGINX Ingress Controller provides several custom resource definitions (CRDs) to manage every aspect of web application security and to support a shared responsibility and multi‑tenant model. CRD manifests can be applied following the namespace grouping used by the organization, to support ownership by more than one operations group.

When publishing an application on Amazon EKS, you can build in security by leveraging the automation pipeline already in use and layering the WAF security policy on top.

Additionally, with NGINX App Protect on NGINX Ingress Controller you can configure resource usage thresholds for both CPU and memory utilization, to keep NGINX App Protect from starving other processes. This is particularly important in multi‑tenant environments such as Kubernetes, which rely on resource sharing and can potentially suffer from the ‘noisy neighbor’ problem.
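A rough sketch of what that might look like in the NGINX Ingress Controller ConfigMap; the key names and the high/low value format shown here are assumptions to verify against the ConfigMap documentation for your release:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  # App Protect enters failure mode above the high threshold
  # and recovers below the low threshold (percent utilization)
  app-protect-cpu-thresholds: "high=70 low=50"
  app-protect-physical-memory-util-thresholds: "high=70 low=50"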

Configuring Logging with NGINX CRDs

The logs for NGINX App Protect and NGINX Ingress Controller are separate by design, to reflect how security teams usually operate independently of DevOps and application owners. You can send NGINX App Protect logs to any syslog destination that is reachable from the Kubernetes pods by setting the app-protect-security-log-destination annotation to the cluster IP address of the syslog pod. Additionally, you can use the APLogConf resource to specify which NGINX App Protect logs you care about, and by implication which logs are pushed to the syslog pod. NGINX Ingress Controller logs are forwarded to the local standard output, as for all Kubernetes containers.

This sample APLogConf resource specifies that all requests are logged (not only malicious ones) and sets the maximum message and request sizes that can be logged.

apiVersion: appprotect.f5.com/v1beta1 
kind: APLogConf 
metadata: 
 name: logconf 
 namespace: dvwa 
spec: 
 content: 
   format: default 
   max_message_size: 64k 
   max_request_size: any 
 filter: 
   request_type: all
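
For an application exposed with a standard Kubernetes Ingress resource, the APLogConf above and the syslog destination are referenced with annotations such as the app-protect-security-log-destination annotation mentioned earlier. A hedged sketch (verify the annotation names against the NGINX Ingress Controller documentation for your release):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dvwa-ingress
  namespace: dvwa
  annotations:
    appprotect.f5.com/app-protect-enable: "True"
    appprotect.f5.com/app-protect-security-log-enable: "True"
    # Reference the APLogConf defined above
    appprotect.f5.com/app-protect-security-log: "dvwa/logconf"
    # Cluster IP address and port of the syslog service
    appprotect.f5.com/app-protect-security-log-destination: "syslog:server=10.105.238.128:5144"
spec:
[...]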

Defining a WAF Policy with NGINX CRDs

The APPolicy object is a custom resource that defines a WAF security policy with signature sets and security rules, using a declarative approach. The approach applies to both NGINX App Protect WAF and DoS, while the following example focuses on WAF. Policy definitions are usually stored in the organization’s source of truth as part of the SecOps catalog.

apiVersion: appprotect.f5.com/v1beta1 
kind: APPolicy 
metadata: 
  name: sample-policy
spec: 
  policy: 
    name: sample-policy 
    template: 
      name: POLICY_TEMPLATE_NGINX_BASE 
    applicationLanguage: utf-8 
    enforcementMode: blocking 
    signature-sets: 
    - name: Command Execution Signatures 
      alarm: true 
      block: true
[...]

Once the security policy manifest has been applied on the Amazon EKS cluster, create an APLogConf object called log-violations to define the type and format of entries written to the log when a request violates a WAF policy:

apiVersion: appprotect.f5.com/v1beta1 
kind: APLogConf 
metadata: 
  name: log-violations
spec: 
  content: 
    format: default 
    max_message_size: 64k 
    max_request_size: any 
  filter: 
    request_type: illegal

The waf-policy Policy object then references sample-policy as the policy for NGINX App Protect WAF to enforce on incoming traffic when the application is exposed by NGINX Ingress Controller. It references log-violations to define the format of log entries sent to the syslog server specified in the logDest field.

apiVersion: k8s.nginx.org/v1 
kind: Policy 
metadata: 
  name: waf-policy 
spec: 
  waf: 
    enable: true 
    apPolicy: "default/sample-policy" 
    securityLog: 
      enable: true 
      apLogConf: "default/log-violations" 
      logDest: "syslog:server=10.105.238.128:5144"

Deployment is complete when DevOps publishes a VirtualServer object that configures NGINX Ingress Controller to expose the application on Amazon EKS:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: eshop-vs
spec:
  host: eshop.lab.local
  policies:
  - name: default/waf-policy
  upstreams:
  - name: eshop-upstream
    service: eshop-service
    port: 80
  routes:
  - path: /
    action:
      pass: eshop-upstream

The VirtualServer object makes it easy to publish and secure containerized apps running on Amazon EKS while upholding the shared responsibility model, where SecOps provides a comprehensive catalog of security policies and DevOps relies on it to shift security left from day one. This enables organizations to transition to a DevSecOps strategy.

Conclusion

For companies with legacy apps and security solutions built up over years, shifting security left on Amazon EKS is likely a gradual process. But reframing security as code that is managed and maintained by the security team and consumed by DevOps helps deliver services faster and make them production ready.

To secure north‑south traffic in Amazon EKS, you can leverage NGINX Ingress Controller embedded with NGINX App Protect WAF to protect against point attacks at Layer 7 and NGINX App Protect DoS for DoS mitigation at Layers 4 and 7.

To try NGINX Ingress Controller with NGINX App Protect WAF, start a free 30-day trial on the AWS Marketplace or contact us to discuss your use cases.

To discover how you can prevent security breaches and protect your Kubernetes apps at scale using NGINX Ingress Controller and NGINX App Protect WAF and DoS on Amazon EKS, please download our eBook, Add Security to Your Amazon EKS with F5 NGINX.

To learn more about how NGINX App Protect WAF outperforms the native WAFs for AWS, Azure, and Cloudflare, download the High-Performance Web Application Firewall Testing report from GigaOm and register for the webinar on December 6 where GigaOm analyst Jake Dolezal reviews the results.

A Deeper Dive into WebAssembly, the New Executable Format for the Web

Things are constantly changing in the world of computing. From mainframes to cloud IaaS, from virtual machines to Linux, we are constantly extending and reinventing technologies. Often these changes are driven by the fact that “the way we’ve always done it” no longer works in a new paradigm, or actually wasn’t that great to start with.

We don’t have to look hard to see recent examples. Virtual machines (VMs), containers, Kubernetes, and OpenTelemetry are just a few examples where changing requirements inspired new solutions. And as Kubernetes and OpenTelemetry show, when the solution is right a tsunami of adoption follows.

I recently spoke with some industry experts about three technologies they predict will be the Next Big Things. One of the three in particular deserves a more detailed look: WebAssembly (often abbreviated as Wasm). Wasm has caught the interest of many because it extends the language support for browsers beyond JavaScript. No, it’s not a replacement for JavaScript; rather, it’s the fourth and newest language accepted by the World Wide Web Consortium (W3C) as an official web standard (along with HTML, CSS, and JavaScript).

What Is WebAssembly?

Back in 2015, Mozilla started work on a new standard to define “a portable, size‑ and load-time-efficient format and execution model” as a compilation target for web browsers. WebAssembly was designed to allow languages other than JavaScript to run within the browser, and it quickly caught on with browser vendors – all the major browsers now support it.

Some of you may recall the days of cross‑compilers, where code was compiled for an external target environment, often embedded control hardware. The targets varied wildly, which meant the cross‑compilation had to be strictly aligned between the generating system and the target system. Wasm works in a similar way, providing a binary executable to a defined runtime on a wide range of platforms. Since the runtime is a low‑level virtual machine (think JVM), it can be embedded into a number of host applications.

So Wasm is a portable binary format for executing programs, along with a set of interfaces for interaction between the program and its environment. Nowhere does it make any web‑specific assumptions, so it can be used widely. In fact, interest in Wasm is heavily driven by its potential in server‑side use cases. Companies like Cosmonic, Fermyon, and Suborbital are showing that Wasm will impact our future in apps from the browser to the back end.

One of the compelling features driving adoption has been Wasm’s support for many languages, with pretty complete coverage for nearly every popular language, including C, C++, Go, Ruby, and Rust. Partial implementations for other languages are available or likely underway. But a word of caution – Wasm’s support for a given language may be limited to a particular context: the browser, outside the browser, or even directly on a system. So it’s important to verify that Wasm supports a language in the context where you want to use it.
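As a small sketch of the out-of-browser context (assuming the Rust toolchain and the Wasmtime runtime are installed; target and tool names change over time, so check their current documentation), a hello-world program can be compiled to Wasm and run in a standalone runtime:

// hello.rs
fn main() { println!("Hello from Wasm!"); }

$ rustup target add wasm32-wasi
$ rustc --target wasm32-wasi hello.rs -o hello.wasm
$ wasmtime hello.wasm
Hello from Wasm!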

Why Should You Care About WebAssembly?

WebAssembly has focused on several crucial features that all browsers need. We’ve mentioned the polyglot nature of Wasm, which has allowed the browser to be extended to many languages. But there are several others.

  • Speed/performance – No one likes waiting for web pages to load. It’s even less fun waiting for a web application to load and start. Wasm’s compilation model makes loading fast. In fact, performance can approach that of a native application.
  • Size – For web apps, the size of the objects being downloaded is the crucial factor. Smaller binaries mean faster time to start.
  • Cross‑platform – Web browsers are the universal access point for websites and apps, so we want “write once, run anywhere” (on every browser) to be a reality rather than just a promise. Wasm already delivers on this pretty well, and will continue to improve.
  • Multi‑lingual – In the modern age of adaptive applications and microservices architectures, no one language is suited for every purpose. By extending apps from many languages to the browser, Wasm removes a major “anti‑pattern” blocker, also known as “I don’t use that language”. Especially when we look at server‑side use, we need to move beyond solely JavaScript, and Wasm enables us to do that.
  • Security – If you have to run untrusted code in your browser, it must be isolated. Wasm achieves isolation with memory‑safe sandboxed execution environments. The current implementation isn’t perfect, but Wasm contributors are heavily focused on it, so I expect rapid improvement.

Its focus on these features makes Wasm incredibly high‑performance, efficient, and honestly, fun. A great place to learn more about that is the recording of Franziska Hinkelmann’s talk at CovalenceConf 2019: Speed, Speed, Speed: JavaScript vs C++ vs WebAssembly. Keep in mind that right now Wasm might not be all that much faster than JavaScript, but it doesn’t yet have JavaScript’s many years of optimization work behind it either. Also, speed is not the only factor in Wasm’s favor – size and the ability to use your language of choice also matter.

How Do You Get Started?

There are a lot of great resources on WebAssembly. It is clearly the path a number of companies are following, from startups to major players:

  • Adobe has demonstrated Photoshop running in Wasm in the browser.
  • Figma saw load time cut by 3X with Wasm.
  • Wasm Labs at VMware has demonstrated WordPress, a PHP runtime, and a database all running in the browser.

But you don’t have to start by switching over to Wasm in one fell swoop, and there are plenty of great resources to help you learn more.

Summary

WebAssembly is still a work in progress, and as with any new technology, some aspects are more complete than others. Even so, the concepts behind it are compelling, and I predict that it’s going to be a game changer, a major force for the future of adaptive apps, whether in the browser or in the cloud. Without a doubt, WebAssembly is one of the technologies you need to learn about now.

Adaptive Governance Gives API Developers the Autonomy They Need

Today’s enterprise is often made up of globally distributed teams building and deploying APIs and microservices, usually across more than one deployment environment. According to F5’s State of Application Strategy Report, 81% of organizations operate across three or more environments ranging across public cloud, private cloud, on premises, and edge.

Ensuring the reliability and security of these complex, multi‑cloud architectures is a major challenge for the Platform Ops teams responsible for maintaining them. According to software engineering leaders surveyed in the F5 report, visibility (45%) and consistent security (44%) top the list of multi‑cloud challenges faced by Platform Ops teams.

With the growing number of APIs and microservices today, API governance is quickly becoming one of the most important topics for planning and implementing an enterprise‑wide API strategy. But what is API governance, and why is it so important for your API strategy?

What Is API Governance?

At the most basic level, API governance involves creating policies and running checks and validations to ensure APIs are discoverable, reliable, observable, and secure. It provides visibility into the state of the complex systems and business processes powering your modern applications, which you can use to guide the evolution of your API infrastructure over time.

Why Do You Need API Governance?

The strategic importance of API governance cannot be overestimated: it’s the means by which you realize your organization’s overall API strategy. Without proper governance, you can never achieve consistency across the design, operation, and productization of your APIs.

When done poorly, governance often imposes burdensome requirements that slow teams down. When done well, however, API governance reduces work, streamlines approvals, and allows different teams in your organization to function independently while delivering on the overall goals of your API strategy.

What Types of APIs Do You Need to Govern?

Building an effective API governance plan as part of your API strategy starts with identifying the types of APIs you have in production, and the tools, policies, and guidance you need to manage them. Today, most enterprise teams are working with four primary types of APIs:

  • External APIs – Delivered to external consumers and developers to enable self‑service integrations with data and capabilities
  • Internal APIs – Used for connecting internal applications and microservices and only available to your organization’s developers
  • Partner APIs – Facilitate strategic business relationships by sharing access to your data or applications with developers from partner organizations
  • Third-Party APIs – Consumed from third‑party vendors as a service, often for handling payments or enabling access to data or applications

Each type of API in the enterprise must be governed to ensure it is secure, reliable, and available to the teams and users who need it.

What API Governance Models Can You Use?

There are many ways to define and apply API governance. At NGINX, we typically see customers applying one of two models:

  • Centralized – A central team reviews and approves changes; depending on the scale of operations, this team can become a bottleneck that slows progress
  • Decentralized – Individual teams have autonomy to build and manage APIs; this speeds time to market but sacrifices overall security and reliability

As companies progress in their API‑first journeys, however, both models start to break down as the number of APIs in production grows. Centralized models often try to implement a one-size-fits-all approach that requires various reviews and signoffs along the way. This slows dev teams down and creates friction – in their frustration, developers sometimes even find ways to work around the requirements (the dreaded “shadow IT”).

The other model – decentralized governance – works well for API developers at first, but over time complexity increases. Unless the different teams deploying APIs communicate frequently, the overall experience becomes inconsistent across APIs: each is designed and functions differently, version changes result in outages between services, and security is enforced inconsistently across teams and services. For the teams building APIs, the additional work and complexity eventually slows development to a crawl, just like the centralized model.

Cloud‑native applications rely on APIs for the individual microservices to communicate with each other, and to deliver responses back to the source of the request. As companies continue to embrace microservices for their flexibility and agility, API sprawl will not be going away. Instead, you need a different approach to governing APIs in these complex, constantly changing environments.

Use Adaptive Governance to Empower API Developers

Fortunately, there is a better way. Adaptive governance offers an alternative model that empowers API developers while giving Platform Ops teams the control they need to ensure the reliability and security of APIs across the enterprise.

At the heart of adaptive governance is balancing control (the need for consistency) with autonomy (the ability to make local decisions) to enable agility across the enterprise. In practice, the adaptive governance model unbundles and distributes decision making across teams.

Platform Ops teams manage shared infrastructure (API gateways and developer portals) and set global policies to ensure consistency across APIs. Teams building APIs, however, act as the subject matter experts for their services or line of business. They are empowered to set and apply local policies for their APIs – role‑based access control (RBAC), rate limiting for their service, etc. – to meet requirements for their individual business contexts.

Adaptive governance allows each team or line of business to define its workflows and balance the level of control required, while using the organization’s shared infrastructure.

Implement Adaptive Governance for Your APIs with NGINX

As you start to plan and implement your API strategy, follow these best practices to implement adaptive governance in your organization: provide shared infrastructure, give teams agency, and balance global policies with local control.

Let’s look at how you can accomplish these use cases with API Connectivity Manager, part of F5 NGINX Management Suite.

Provide Shared Infrastructure

Teams across your organization are building APIs, and they need to include similar functionality in their microservices: authentication and authorization, mTLS encryption, and more. They also need to make documentation and versioning available to their API consumers, be those internal teams, business partners, or external developers.

Rather than requiring teams to build their own solutions, Platform Ops teams can provide access to shared infrastructure. As with all actions in API Connectivity Manager, you can set this up in just a few minutes using either the UI or the fully declarative REST API, which enables you to integrate API Connectivity Manager into your CI/CD pipelines. In this post we use the UI to illustrate some common workflows.

API Connectivity Manager supports two types of Workspaces: infrastructure and services. Infrastructure Workspaces are used by Platform Ops teams to onboard and manage shared infrastructure in the form of API Gateway Clusters and Developer Portal Clusters. Services Workspaces are used by API developers to publish and manage APIs and documentation.

To set up shared infrastructure, first add an infrastructure Workspace. Click Infrastructure in the left navigation column and then the  + Add  button in the upper right corner of the tab. Give your Workspace a name (here, it’s team-sentence – an imaginary team building a simple “Hello, World!” API).

Screenshot of Workspaces page on Infrastructure tab of API Connectivity Manager UI
Figure 1: Add Infrastructure Workspaces

Next, add an Environment to the Workspace. Environments contain API Gateway Clusters and Developer Portal Clusters. Click the name of your Workspace and then the icon in the Actions column; select Add from the drop‑down menu.

The Create Environment panel opens as shown in Figure 2. Fill in the Name (and optionally, Description) field, select the type of environment (production or non‑production), and click the + Add button for the infrastructure you want to add (API Gateway Clusters, Developer Portal Clusters, or both). Click the  Create  button to finish setting up your Environment. For complete instructions, see the API Connectivity Manager documentation.

Screenshot of Create Environment panel in API Connectivity Manager UI
Figure 2: Create an Environment and onboard infrastructure

Give Teams Agency

Providing logical separation for teams by line of business, geographic region, or other logical boundary makes sense – if that doesn’t deprive them of access to the tools they need to succeed. Having access to shared infrastructure shouldn’t mean teams have to worry about activities at the global level. Instead, you want to have them focus on defining their own requirements, charting a roadmap, and building their microservices.

To help teams organize, Platform Ops teams can provide services Workspaces for teams to organize and operate their services and documentation. These create logical boundaries and provide access to different environments – development, testing, and production, for example – for developing services. The process is similar to creating the infrastructure Workspace in the previous section.

First, click Services in the left navigation column and then the  + Add  button in the upper right corner of the tab. Give your Workspace a name (here, api-sentence for our “Hello, World” service), and optionally provide a description and contact information.

Screenshot of Workspaces page on Services tab of API Connectivity Manager UI
Figure 3: Create a services Workspace

At this point, you can invite API developers to start publishing proxies and documentation in the Workspace you’ve created for them. For complete instructions on publishing API proxies and documentation, see the API Connectivity Manager documentation.

Balance Global Policies and Local Control

Adaptive governance requires a balance between enforcing global policies and empowering teams to make decisions that boost agility. You need to establish a clear separation of responsibilities by defining the global settings enforced by Platform Ops and setting “guardrails” that define the tools API developers use and the decisions they can make.

API Connectivity Manager provides a mix of global policies (applied to shared infrastructure) and granular controls managed at the API proxy level.

Global policies available in API Connectivity Manager include:

  • Error Response Format – Customize the API gateway’s error code and response structure
  • Log Format – Enable access logging and customize the format of log entries
  • OpenID Connect – Secure access to APIs with an OpenID Connect policy
  • Response Headers – Include or exclude headers in the response
  • Request Body Size – Limit the size of incoming API payloads
  • Inbound TLS – Set the policy for TLS connections with API clients
  • Backend TLS – Secure the connection to backend services with TLS

API proxy policies available in API Connectivity Manager include:

  • Allowed HTTP Methods – Define which request methods can be used (GET, POST, PUT, etc.)
  • Access Control – Secure access to APIs using different authentication and authorization techniques (API keys, HTTP Basic Authentication, JSON Web Tokens)
  • Backend Health Checks – Run continuous health checks to avoid failed requests to backend services
  • CORS – Enable controlled access to resources by clients from external domains
  • Caching – Improve API proxy performance with caching policies
  • Proxy Request Headers – Pass select headers to backend services
  • Rate Limiting – Limit incoming requests and secure API workloads

In the following example, we use the UI to define a policy that secures communication between an API Gateway Proxy and backend services.

Click Infrastructure in the left navigation column. After you click the name of the Environment containing the API Gateway Cluster you want to edit, the tab displays the API Gateway Clusters and Developer Portal Clusters in that Environment.

Screenshot of Environment page on Infrastructure tab of API Connectivity Manager UI
Figure 4: Configure global policies for API Gateway Clusters and Developer Portal Clusters

In the row for the API Gateway Cluster to which you want to apply a policy, click the icon in the Actions column and select Edit Advanced Configuration from the drop‑down menu. Click Global Policies in the left column to display a list of all the global policies you can configure.

Screenshot of Global Policies page in API Connectivity Manager UI
Figure 5: Configure policies for an API Gateway Cluster

To apply the Backend TLS policy, click the icon at the right end of its row and select Add Policy from the drop‑down menu. Fill in the requested information, upload your certificate, and click Add. Then click the  Save and Submit  button. From now on, traffic between the API Gateway Cluster and the backend services is secured with TLS. For complete instructions, see the API Connectivity Manager documentation.

Summary

Planning and implementing API governance is a crucial step in ensuring your API strategy is successful. By working towards a distributed model and relying on adaptive governance to address the unique requirements of different teams and APIs, you can scale and apply uniform governance without sacrificing the speed and agility that make APIs, and cloud‑native environments, so productive.

Get Started

Start a 30‑day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.

API Connectivity Manager Helps Dev and Ops Work Better Together

Cloud‑native applications are composed of dozens, hundreds, or even thousands of APIs connecting microservices. Together, these services and APIs deliver the resiliency, scalability, and flexibility that are at the heart of cloud‑native applications.

Today, these underlying APIs and microservices are often built by globally distributed teams, which need to operate with a degree of autonomy and independence to deliver capabilities to market in a timely fashion.

At the same time, Platform Ops teams are responsible for ensuring the overall reliability and security of the enterprise’s apps and infrastructure, including its underlying APIs. They need visibility into API traffic and the ability to set global guardrails to ensure uniform security and compliance – all while providing an excellent API developer experience.

While the interests of these two groups can be in conflict, we don’t believe that’s inevitable. We built API Connectivity Manager as part of F5 NGINX Management Suite to simultaneously enable Platform Ops teams to keep things safe and running smoothly and API developers to build and release new capabilities with ease.

Creating an Architecture to Support API Connectivity

As a next‑generation management plane, API Connectivity Manager is built to realize and extend the power of NGINX Plus as an API gateway. If you have used NGINX in the past, you are familiar with NGINX’s scalability, reliability, flexibility, and performance as an API gateway.

As we designed API Connectivity Manager, we developed a new architecture to enable Platform Ops teams and developers to better work together. It uses the following standard industry terms for its components (some of which differ from the names familiar to experienced NGINX users):

  • Workspace – An isolated collection of infrastructure for the dedicated management and use of a single business unit or team; usually includes multiple Environments
  • Environment – A logical collection of NGINX Plus instances (clusters) acting as API gateways or developer portals for a specific phase of the application life cycle, such as development, test, or production
  • API Gateway Proxy Cluster – A logical representation of the NGINX API gateway that groups NGINX Plus instances and synchronizes the state between them
  • API Proxy – A representation of a published instance of an API that includes routing, versioning, and other policies
  • Policies – A global or local abstraction for defining specific functions of the API proxy like traffic resiliency, security, or quality of service

The following diagram illustrates how the components are nested within a Workspace:

Diagram of nested API Connectivity Manager administrative objects. From outermost in: Workspace, Environment, API Gateway Proxy Cluster, API Proxy, Policies.

This logical hierarchy enables a variety of important use cases that support enterprise‑wide API connectivity. For example, Workspaces can incorporate multiple types of infrastructure (for example, public cloud and on premises) giving teams access to both – and providing visibility into API traffic across both environments to infrastructure owners.

We’ll look at the architectural components in more depth in future posts. For now, let’s look at how API Connectivity Manager serves the different personas that contribute to API development and delivery.

API Connectivity Manager for Platform Ops

The Platform Ops team is responsible for building and managing the infrastructure lifecycle for each organization. They provide the platform on which developers build applications and services that serve customers and partners. Development teams are often decentralized, working across multiple environments and reporting to different lines of business. Meeting the needs of these dispersed groups and environments while maintaining enterprise‑wide standards is one of the biggest challenges Platform Ops teams face today.

API Connectivity Manager offers innovative ways of segregating the teams and their operations using Workspaces as a logical boundary. It also enables Platform Ops teams to manage the infrastructure lifecycle without interfering with the teams building and deploying APIs. API Connectivity Manager comes with a built‑in set of default global policies to provide basic security and configuration for NGINX Plus API gateways and developer portals. Platform Ops teams can then configure supplemental global policies to optionally require mTLS, configure log formats, or standardize proxy headers.

Global policies impose uniformity and bring consistency to the APIs deployed in the shared API gateway cluster. API Connectivity Manager also gives organizations the option to run decentralized data‑plane clusters to physically separate where each team or line of business deploys its APIs.

Diagram showing how API Connectivity Manager enables multiple groups to manage their own Workspaces and Environments

In a subsequent blog, we will explore how Platform Ops teams can use API Connectivity Manager to ensure API security and governance while helping API developers succeed.

API Connectivity Manager for Developers

Teams from different lines of business own and operate their own sets of APIs. They need control over their API products to deliver their applications and services to market on time, which means they can’t afford to wait for other teams’ approval to use shared infrastructure. At the same time, there need to be “guardrails” in place to prevent teams from stepping on each other’s toes.

Like the Platform Ops persona, developers as API owners get their own Workspaces which segregate their APIs from other teams. API Connectivity Manager provides policies at the API Proxy level for API owners to configure service‑level settings like rate limiting and additional security requirements.

Diagram showing how API Proxy policies for services managed by API owners build on a foundation of global policies for infrastructure managed by Platform Ops

In a subsequent blog, we will explore how developers can use API Connectivity Manager to simplify API lifecycle management.

Get Started

Start a 30-day free trial of NGINX Management Suite, which includes API Connectivity Manager and Instance Manager.

Enabling Self-Service DNS and Certificate Management in Kubernetes

The ultimate goal of application development is, of course, to expose apps on the Internet. For a developer, Kubernetes simplifies this process to a degree by providing the Ingress controller as the mechanism for routing requests to the application. But not everything is as self‑service as you probably would like: you still need a record in the Domain Name System (DNS) to map the domain name for the app to the Ingress controller’s IP address and a TLS certificate to secure connections using HTTPS. In most organizations, you don’t own DNS or TLS yourself and so have to coordinate with the operational group (or groups!) that do.

Things aren’t necessarily any easier for operators. In most organizations the need to update DNS records is rare enough that procedures – both business rules and the actual technical steps – tend to be sparse or non‑existent. This means that when you need to add a DNS record you first need to find the documentation, ask a colleague, or (in a worst case) figure it out. You also need to ensure you’re in compliance with any corporate security rules and make sure that the ingress is tagged properly for the firewalls.

Fortunately, there is a way to make life easier for both developers and operators. In this post, we show how operators can configure a Kubernetes deployment to enable self‑service for developers to update DNS records and generate TLS certificates in a Kubernetes environment. By building out the infrastructure ahead of time, you can assure that all necessary business and technical requirements are being satisfied.

Overview and Prerequisites

With the solution in place, all a developer needs to do to expose an application to the Internet is create an Ingress resource following a supplied template that includes a fully qualified domain name (FQDN) within a domain managed by the Kubernetes installation. Kubernetes uses the template to allocate an IP address for the Ingress controller, create the DNS A record to map the FQDN to the IP address, and generate TLS certificates for the FQDN and add them to the Ingress controller. Cleanup is just as easy: when the Ingress resource is removed, the DNS records are cleaned up.

The solution leverages the following technologies (we provide installation and configuration instructions below):

  • NGINX Ingress Controller
  • cert-manager, with Let’s Encrypt as the certificate authority
  • ExternalDNS, with Cloudflare as the DNS provider

Before configuring the solution, you need:

  • A Kubernetes cloud installation with an egress (LoadBalancer) object. The solution uses Linode, but other cloud providers also work.
  • A domain name hosted with Cloudflare, which we chose because it’s one of the supported DNS providers for cert-manager and supports ExternalDNS (in beta as of the time of writing). We strongly recommend that the domain not be used for production or any other critical purpose.
  • Access to the Cloudflare API, which is included in the free tier.
  • Helm for installing and deploying applications on Kubernetes.
  • kubectl as the command‑line interface for Kubernetes.
  • Optionally, K9s, a well‑constructed terminal user interface (TUI) that provides a more structured way to interact with Kubernetes.

We also assume you have a basic understanding of Kubernetes (how to apply a manifest, use a Helm chart, and issue kubectl commands to view output and troubleshoot). Understanding the basic concepts of Let’s Encrypt is helpful but not required; for an overview, see our blog. You also don’t need to know how cert-manager works, but if you’re interested in how it (and certificates in general) work with NGINX Ingress Controller, see my recent post, Automating Certificate Management in a Kubernetes Environment.

We have tested the solution on both macOS and Linux. We haven’t tested on Windows Subsystem for Linux version 2 (WSL2), but don’t foresee any issues.

Note: The solution is intended as a sample proof of concept, and not for production use. In particular, it does not incorporate all best practices for operation and security. For information on those topics, see the cert-manager and ExternalDNS documentation.

Deploying the Solution

Follow the steps in these sections to deploy the solution:

Download Software

  1. Download your Cloudflare API Token.
  2. Clone the NGINX Ingress Controller repository:

    $ git clone https://github.com/nginxinc/kubernetes-ingress.git
    Cloning into 'kubernetes-ingress'...
    remote: Enumerating objects: 45176, done.
    remote: Counting objects: 100% (373/373), done.
    remote: Compressing objects: 100% (274/274), done.
    remote: Total 45176 (delta 173), reused 219 (delta 79), pack-reused 44803
    Receiving objects: 100% (45176/45176), 60.45 MiB | 26.81 MiB/s, done.
    Resolving deltas: 100% (26592/26592), done.
  3. Verify that you can connect to the Kubernetes cluster.

    $ kubectl cluster-info
    Kubernetes control plane is running at https://ba35bacf-b072-4600-9a04-e04...6a3d.us-west-2.linodelke.net:443
    KubeDNS is running at https://ba35bacf-b072-4600-9a04-e04...6a3d.us-west-2.linodelke.net:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
     
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Deploy NGINX Ingress Controller

  1. Using Helm, deploy NGINX Ingress Controller. Note that we are adding three non‑standard configuration options:

    • controller.enableCustomResources – Instructs Helm to install the custom resource definitions (CRDs) used to create the NGINX VirtualServer and VirtualServerRoute custom resources.
    • controller.enableCertManager – Configures NGINX Ingress Controller to communicate with cert-manager components.
    • controller.enableExternalDNS – Configures the Ingress Controller to communicate with ExternalDNS components.
    $ helm install nginx-kic nginx-stable/nginx-ingress --namespace nginx-ingress  --set controller.enableCustomResources=true --create-namespace  --set controller.enableCertManager=true --set controller.enableExternalDNS=true
    NAME: nginx-kic
    LAST DEPLOYED: Day Mon  DD hh:mm:ss YYYY
    NAMESPACE: nginx-ingress
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    The NGINX Ingress Controller has been installed.
  2. Verify that NGINX Ingress Controller is running and note the value in the EXTERNAL-IP field – it’s the IP address for NGINX Ingress Controller (here, www.xxx.yyy.zzz). The output is spread across two lines for legibility.

    $ kubectl get services --namespace nginx-ingress
    NAME                      TYPE           CLUSTER-IP      ...
    nginx-kic-nginx-ingress   LoadBalancer   10.128.152.88   ... 
    
       ... EXTERNAL-IP       PORT(S)                      AGE
       ... www.xxx.yyy.zzz   80:32457/TCP,443:31971/TCP   3h8m

Deploy cert-manager

In the solution, cert-manager uses the DNS-01 challenge type when obtaining a TLS certificate, which requires that the Cloudflare API token be provided during creation of the ClusterIssuer resource. Here, the API token is provided as a Kubernetes Secret.

  1. Using Helm, deploy cert-manager:

    $ helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.9.1  --set installCRDs=true
    NAME: cert-manager
    LAST DEPLOYED: Day Mon  DD hh:mm:ss YYYY
    NAMESPACE: cert-manager
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    cert-manager v1.9.1 has been deployed successfully!
  2. Deploy the Cloudflare API token as a Kubernetes Secret, substituting it for <your-API-token>:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: cloudflare-api-token-secret
      namespace: cert-manager
    type: Opaque
    stringData:
      api-token: "<your-API-token>"
    EOF
    secret/cloudflare-api-token-secret created
  3. Create a ClusterIssuer object, specifying cloudflare-api-token-secret (defined in the previous step) as the place to retrieve the token. If you wish, you can replace example-issuer in the metadata.name field (and example-issuer-account-key in the spec.acme.privateKeySecretRef.name field) with a different name.

    $ kubectl apply -f - <<EOF
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: example-issuer
      namespace: cert-manager
    spec:
      acme:
        email: example@example.com
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: example-issuer-account-key
        solvers:
          - dns01:
              cloudflare:
                apiTokenSecretRef:
                  name: cloudflare-api-token-secret
                  key: api-token
    EOF
    clusterissuer.cert-manager.io/example-issuer created
  4. Verify that the ClusterIssuer is deployed and ready (the value in the READY field is True).

    $ kubectl get clusterissuer
    NAME             READY   AGE
    example-issuer   True    3h9m

Deploy ExternalDNS

Like cert-manager, the ExternalDNS project requires a Cloudflare API Token to manage DNS. The same token can be used for both projects, but that is not required.

  1. Create the ExternalDNS CRDs for NGINX Ingress Controller to enable integration between the projects.

    $ kubectl create -f ./kubernetes-ingress/deployments/common/crds/externaldns.nginx.org_dnsendpoints.yaml
    customresourcedefinition.apiextensions.k8s.io/dnsendpoints.externaldns.nginx.org created
  2. Create the ExternalDNS service (external-dns). Because the manifest is rather long, here we break it into two parts. The first part configures accounts, roles, and permissions:

    • Creates a ServiceAccount object called external-dns to manage all write and update operations for managing DNS.
    • Creates a ClusterRole object (also called external-dns) that defines the required permissions.
    • Binds the ClusterRole to the ServiceAccount.
    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: external-dns
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: external-dns
    rules:
    - apiGroups: [""]
      resources: ["services","endpoints","pods"]
      verbs: ["get","watch","list"]
    - apiGroups: ["extensions","networking.k8s.io"]
      resources: ["ingresses"]
      verbs: ["get","watch","list"]
    - apiGroups: ["externaldns.nginx.org"]
      resources: ["dnsendpoints"]
      verbs: ["get","watch","list"]
    - apiGroups: ["externaldns.nginx.org"]
      resources: ["dnsendpoints/status"]
      verbs: ["update"]
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["list","watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: external-dns-viewer
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: external-dns
    subjects:
    - kind: ServiceAccount
      name: external-dns
      namespace: default
    EOF
    serviceaccount/external-dns created
    clusterrole.rbac.authorization.k8s.io/external-dns created
    clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created

    The second part of the manifest creates the ExternalDNS deployment:

    • Creates a domain filter, which limits the scope of possible damage done by ExternalDNS as it manages domains. For example, you might specify the domain names of staging environments to prevent changes to production environments. In this example, we set domain-filter to example.com.
    • Sets the CF_API_TOKEN environment variable to your Cloudflare API Token. For <your-API-token>, substitute either the actual token or a Secret containing the token. In the latter case, you also need to project the Secret into the container using an environment variable (see the sketch after this step).
    • Sets the FREE_TIER environment variable to "true" (appropriate unless you have a paid Cloudflare subscription).
    $  kubectl apply -f - <<EOF
     
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: external-dns
    spec:
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: external-dns
      template:
        metadata:
          labels:
            app: external-dns
        spec:
          serviceAccountName: external-dns
          containers:
          - name: external-dns
            image: k8s.gcr.io/external-dns/external-dns:v0.12.0
            args:
            - --source=service
            - --source=ingress
            - --source=crd
            - --crd-source-apiversion=externaldns.nginx.org/v1
            - --crd-source-kind=DNSEndpoint
            - --domain-filter=example.com
            - --provider=cloudflare
            env:
              - name: CF_API_TOKEN
                value: "<your-API-token>"
              - name: FREE_TIER
                value: "true"
    EOF
    deployment.apps/external-dns created
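
    If you prefer not to put the token directly in the manifest, one approach (the Secret and key names here are illustrative, not part of the original deployment) is to store the token in a Kubernetes Secret:

    $ kubectl create secret generic cloudflare-api-token --from-literal=apiKey=<your-API-token>

    and then project it into the container with a secretKeyRef instead of a literal value:

    env:
    - name: CF_API_TOKEN
      valueFrom:
        secretKeyRef:
          name: cloudflare-api-token
          key: apiKey
    - name: FREE_TIER
      value: "true"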

Deploy the Sample Application

Use the standard NGINX Ingress Controller sample application called Cafe for testing purposes.

  1. Deploy the Cafe application.

    $ kubectl apply -f ./kubernetes-ingress/examples/ingress-resources/complete-example/cafe.yaml
    deployment.apps/coffee created
    service/coffee-svc created
    deployment.apps/tea created
    service/tea-svc created
  2. Deploy NGINX Ingress Controller for the Cafe application. Note the following settings:

    • kind: VirtualServer – We are using the NGINX VirtualServer custom resource, not the standard Kubernetes Ingress resource.
    • spec.host – Replace cafe.example.com with the name of the host you are deploying. The host must be within the domain being managed with ExternalDNS.
    • spec.tls.cert-manager.cluster-issuer – If you’ve been using the values specified in this post, this is example-issuer. If necessary, substitute the name you chose in Step 3 of Deploy cert‑manager.
    • spec.externalDNS.enable – The value true tells ExternalDNS to create a DNS A record.

    Note that the time it takes for this step to complete is highly dependent on the DNS provider, as Kubernetes is interacting with the provider’s DNS API.

    $ kubectl apply -f - <<EOF
    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: cafe
    spec:
      host: cafe.example.com
      tls:
        secret: cafe-secret
        cert-manager:
          cluster-issuer: example-issuer
      externalDNS:
        enable: true
      upstreams:
      - name: tea
        service: tea-svc
        port: 80
      - name: coffee
        service: coffee-svc
        port: 80
      routes:
      - path: /tea
        action:
          pass: tea
      - path: /coffee
        action:
          pass: coffee
    EOF
    virtualserver.k8s.nginx.org/cafe created

Validate the Solution

  1. Verify the DNS A record – in particular that in the ANSWER SECTION block the FQDN (here, cafe.example.com) is mapped to the correct IP address (www.xxx.yyy.zzz).

    $ dig cafe.example.com
     
    ; <<>> DiG 9.10.6 <<>> cafe.example.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22633
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
     
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;cafe.example.com.		IN	A
     
    ;; ANSWER SECTION:
    cafe.example.com.	279	IN	A	www.xxx.yyy.zzz
     
    ;; Query time: 1 msec
    ;; SERVER: 2607:fb91:119b:4ac4:2e0:xxxx:fe1e:1359#53(2607:fb91:119b:4ac4:2e0:xxxx:fe1e:1359)
    ;; WHEN: Day Mon  DD hh:mm:ss TZ YYYY
    ;; MSG SIZE  rcvd: 67
  2. Check that the certificate is valid (the value in the READY field is True).

    $ kubectl get certificates
    NAME          READY   SECRET        AGE
    cafe-secret   True    cafe-secret   8m51s
  3. Verify that you can reach the application.

    $ curl https://cafe.example.com/coffee
    Server address: 10.2.2.4:8080
    Server name: coffee-7c86d7d67c-lsfs6
    Date: DD/Mon/YYYY:hh:mm:ss +TZ-offset
    URI: /coffee
    Request ID: 91077575f19e6e735a91b9d06e9684cd
    $ curl https://cafe.example.com/tea
    Server address: 10.2.2.5:8080
    Server name: tea-5c457db9-ztpns
    Date: DD/Mon/YYYY:hh:mm:ss +TZ-offset
    URI: /tea
    Request ID: 2164c245a495d22c11e900aa0103b00f

What Happens When a Developer Deploys NGINX Ingress Controller

A lot happens under the covers once the solution is in place. The diagram shows what happens when a developer deploys the NGINX Ingress Controller with an NGINX VirtualServer custom resource. Note that some operational details are omitted.


  1. Developer deploys a VirtualServer resource using the NGINX CRD
  2. Kubernetes creates the VirtualServer using NGINX Ingress Controller
  3. NGINX Ingress Controller calls ExternalDNS to create a DNS A record
  4. ExternalDNS creates the A record in DNS
  5. NGINX Ingress Controller calls cert-manager to request a TLS certificate
  6. cert-manager adds a DNS record for use during the DNS-01 challenge
  7. cert-manager contacts Let’s Encrypt to complete the challenge
  8. Let’s Encrypt validates the challenge against DNS
  9. Let’s Encrypt issues the TLS certificate
  10. cert-manager provides the TLS certificate to NGINX Ingress Controller
  11. NGINX Ingress Controller routes TLS‑secured external requests to the application pods

Troubleshooting

Given the complexity of Kubernetes and the number of components in play, it is difficult to provide a comprehensive troubleshooting guide. That said, here are some basic suggestions to help you pinpoint the problem.

  • Use the kubectl get and kubectl describe commands to validate the configuration of deployed objects.
  • Use the kubectl logs <component> command to view log files for the various deployed components.
  • Use K9s to inspect the installation; the software highlights problems in yellow or red (depending on severity) and provides an interface to access logs and details about objects.

If you are still having issues, please find us on the NGINX Community Slack and ask for help! We have a vibrant community and are always happy to work through issues.

To try the NGINX Ingress Controller based on NGINX Plus, start your 30-day free trial today or contact us to discuss your use cases.

NGINX Unit Greets Autumn 2022 with New Features (a Statistics Engine!) and Exciting Plans


First things first: it’s been quite a while since we shared news from the NGINX Unit team – these tumultuous times have affected everyone, and we’re no exception. This March, two founding members of the Unit team, Valentin Bartenev and Maxim Romanov, decided to move on to other opportunities after putting years of work and tons of creativity into NGINX Unit. Let’s give credit where credit is due: without them, NGINX Unit wouldn’t be where it is now. Kudos, guys.

Still, our resolve stays strong, as does our commitment to bringing NGINX co‑founder Igor Sysoev’s original aspirations for NGINX Unit to fruition. The arrival of the two newest team members, Alejandro Colomar and Andrew Clayton, has boosted the development effort, so now we have quite a few noteworthy items from NGINX Unit versions 1.25 through 1.28 to share with you.

Observability Is a Thing Now

One of Unit’s key aspirations has always been observability, and version 1.28.0 includes the first iteration of one of the most eagerly awaited features: a statistics engine. Its output is exposed at the new /status API endpoint:

$ curl --unix-socket /var/run/control.unit.sock http://localhost/status
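
A representative response has this shape (the application name and all numbers are invented for illustration; see the Usage Statistics documentation for the authoritative format):

{
    "connections": {
        "accepted": 1067,
        "active": 13,
        "idle": 4,
        "closed": 1050
    },

    "requests": {
        "total": 1307
    },

    "applications": {
        "wp": {
            "processes": {
                "running": 14,
                "starting": 0,
                "idle": 4
            },

            "requests": {
                "active": 10
            }
        }
    }
}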

Most of the fields here are self‑descriptive: connections (line 2) and requests (line 9) provide instance‑wide data, whereas the applications object (line 13) mirrors /config/applications, covering processes and requests that specifically concern the application.

Lines 3–6 show the four categories of connections tracked by NGINX Unit: accepted, active, idle, and closed. The categories for processes are running, starting, and idle (lines 16–18). These categories reflect the internal representation of connections and processes, so now you know just as much about them as your server does.

Seems terse? That’s pretty much all there is to know for now. Sure, we’re working to expose more useful metrics; however, you can already query this API from your command line to see what’s happening on your server and even plug the output into a dashboard of your choice for a more fanciful approach. Maybe you don’t have a dashboard? Well, some of our plans include providing a built‑in one, so follow us to see how this plays out.

For more details, see Usage Statistics in the NGINX Unit documentation.

More Variables, More Places to Use Them

The list of variables introduced since version 1.24 is quite extensive and includes $body_bytes_sent, $header_referer, $header_user_agent, $remote_addr, $request_line, $request_uri, $status, and $time_local.

Most of these are rather straightforward, but here are some of the more noteworthy:

  • $request_uri contains the path and query from the requested URI, with browser encoding preserved
  • The similarly named $request_line stores the entire request line, such as GET /docs/help.html HTTP/1.1, and is intended for logging
  • So is $status, which contains the HTTP response status code

Did you notice? We mentioned responses. Yes, we’re moving into that territory: whereas the variables in earlier Unit versions focused on incoming requests, we now also have variables, such as $status and $body_bytes_sent, that capture response properties.

Regarding new places to use variables, the first to mention is the new customizable access log format. Want to use JSON in NGINX Unit’s log entries? In addition to specifying a simple path string, the access_log option can be an object that also sets the format of log entries:
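
For example (the path and format string are illustrative), access_log can pair a path with a format built from the new variables:

{
    "access_log": {
        "path": "/var/log/unit/access.log",
        "format": "$remote_addr [$time_local] \"$request_line\" $status $body_bytes_sent"
    }
}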

Thus, you can go beyond the usual log format in any way you like.

A second noteworthy use case for variables is the location value of a route action:
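
A sketch of such an action (the redirect target is illustrative):

{
    "action": {
        "return": 301,
        "location": "https://$host$request_uri"
    }
}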

Here we’re using $request_uri to relay the request, including the query part, to the same website over HTTPS.

The chroot option now supports variables just as the share option does, which is only logical:
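
For instance (paths are illustrative):

{
    "action": {
        "share": "/www/$host/static$uri",
        "chroot": "/www/$host/"
    }
}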

NGINX Unit now supports dynamic variables too. Request arguments, cookies, and headers are exposed as variables: for instance, the query string Type=car&Color=red results in two argument variables, $arg_Type and $arg_Color. At runtime, these variables expand into dynamic values; if you reference a non‑existent variable, it is considered empty.
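
As a quick illustration (the redirect target is made up), those argument variables can be reused anywhere variables are accepted, such as the location of a return action:

{
    "action": {
        "return": 302,
        "location": "https://example.com/catalog?type=$arg_Type&color=$arg_Color"
    }
}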

For more details, see Variables in the NGINX Unit documentation.

Extensive Support for the X-Forwarded-* Headers

You asked, and we delivered. Starting in version 1.25.0, NGINX Unit has offered some TLS configuration facilities for its listeners, including a degree of X-Forwarded-* awareness; now, you can configure client IP addresses and protocol replacement in the configuration for your listeners:
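
A listener along these lines (the header names and trusted source network are illustrative) replaces the client address and protocol with the values from the X-Forwarded-* headers when the connection originates from a trusted address:

{
    "listeners": {
        "*:443": {
            "pass": "applications/myapp",
            "forwarded": {
                "client_ip": "X-Forwarded-For",
                "protocol": "X-Forwarded-Proto",
                "source": ["10.0.0.0/8"]
            }
        }
    }
}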

Note: This new syntax deprecates the previous client_ip syntax, which will be removed in a future release.

For more details, see IP, Protocol Forwarding in the NGINX Unit documentation.

The Revamped share Option

NGINX Unit version 1.11.0 introduced the share routing option for serving static content. It’s comparable to the root directive in NGINX:
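
A minimal route action of that era looked something like this (the path is illustrative):

{
    "action": {
        "share": "/path/to/dir/"
    }
}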

Initially, the share option specified the so‑called document root directory. To determine which file to serve, Unit simply appended the URI from the request to this share path. For example, in response to a simple GET request for /some/file.html, Unit served /path/to/dir/some/file.html. Still, we kept bumping into edge cases that required finer control over the file path, so we decided to evolve. Starting with version 1.26.0, the share option specifies the entire path to a shared file rather than just the document root.

You want to serve a specific file? Fine:
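
A route action that serves exactly one file (the path is illustrative):

{
    "action": {
        "share": "/www/static/index.html"
    }
}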

Use variables within the path? Cool, not a problem:
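
For example, picking the directory by hostname (illustrative):

{
    "action": {
        "share": "/www/$host$uri"
    }
}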

But how do you go about imitating the behavior you’re already used to from NGINX and previous Unit versions? You know, the document root thing that we deemed obsolete a few paragraphs ago? We have a solution.

You can now rewrite configurations like this:
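
(The pre‑1.26 document‑root style; the path is illustrative.)

{
    "action": {
        "share": "/www/data/"
    }
}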

as follows, appending the requested URI to the path, but explicitly!
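
(The same path, with the requested URI appended explicitly via $uri.)

{
    "action": {
        "share": "/www/data$uri"
    }
}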

Finally, the share option can now accept an array of paths, trying them one by one until it finds a file:
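
For example (the paths and fallback app are illustrative):

{
    "action": {
        "share": [
            "/www/data$uri",
            "/www/cache$uri"
        ],
        "fallback": {
            "pass": "applications/backend"
        }
    }
}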

If no file is found, routing passes to a fallback action; if there’s no fallback, the 404 (Not Found) status code is returned.

For more details, see Static Files in the NGINX Unit documentation.

Plans: njs, URI Rewrite, Action Chaining, OpenAPI

As you read this, we’re already at work on the next release; here’s a glimpse of what we have up our sleeves.

First, we’re integrating NGINX Unit with the NGINX JavaScript module (njs), another workhorse project under active development at NGINX. In short, this means NGINX Unit will support invoking JavaScript modules. Consider this:
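
As a purely illustrative sketch (the final integration syntax is still taking shape), such a module might export a function that builds a string from request properties:

// hello.js -- an illustrative njs-style module
function greeting(host, uri) {
    return `Hello from ${host}; you asked for ${uri}`;
}

export default { greeting };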

After importing the module in NGINX Unit, you’ll be able to call its functions directly from your configuration.

Also, we’re aiming to introduce something akin to the ever‑popular NGINX rewrite directive.

Our plans don’t stop there, though. How about tying NGINX Unit’s routing to the output from the apps themselves (AKA action chaining)?

The idea here is that the auth_check app authenticates the incoming request and returns a status code to indicate the result. If authentication succeeds, 200 OK is returned and the request passes on to my_app.

Meanwhile, we’re also working on an OpenAPI specification to define once and for all NGINX Unit’s API and its exact capabilities. Wish us luck, for this is a behemoth undertaking.

If that’s still not enough to satisfy your curiosity, refer to our roadmap for a fine‑grained dissection of our plans; it’s interactive, so any input from you, dear reader, is most welcome.

Automating Multi-Cluster DNS with NGINX Ingress Controller

Applications can’t serve their purpose if users can’t find them. The Domain Name System (DNS) is the Internet technology that “finds” apps and websites by translating domain names to IP addresses. DNS is so ubiquitous and reliable that most days you don’t even think about it. But when there are DNS problems, everything stops. Making sure DNS works is crucial for modern applications, especially in microservices architectures where services are constantly spinning up and down.

In a previous post, we talked about defining DNS records for two subdomains that correspond to applications running in the same cluster (unit-demo.marketing.net for the Marketing app and unit-demo.engineering.net for the Engineering app) and resolve to the same cluster entry point – namely, the external IP address of the cluster’s NGINX Ingress Controller. Server Name Indication (SNI) routing is configured on NGINX Ingress Controller to authenticate and route connections to the appropriate application based on the domain name requested by users.

But many organizations need to extend that use case and deploy applications in multiple Kubernetes clusters, which might be spread across cloud‑provider regions. For external traffic to reach new cluster regions, you need to create DNS zones that resolve to those regions.

In the past, this process required using a third‑party provider (such as GoDaddy or DNSExit) to manually create a domain registry and update host records appropriately. Now, the ExternalDNS Kubernetes project automates the process by making Kubernetes resources discoverable via public DNS servers. That means you use the Kubernetes API to configure a list of DNS providers.

With an integration between ExternalDNS and NGINX Ingress Controller, you can manage DNS A records such that DNS names are derived from hostnames declared in a standard Kubernetes Ingress resource or an NGINX VirtualServer custom resource. Developers and DevOps teams can leverage this integration in their CI/CD pipelines to automatically discover applications across different clusters, without involving the NetOps team (which typically owns DNS).

In this post, we show how to use sample configuration files from our GitHub repo to integrate ExternalDNS with NGINX Ingress Controller.

The Base Architecture

To implement ExternalDNS with NGINX Ingress Controller, we start with the base case where developers configure an Ingress controller to externally expose Kubernetes apps. Clients cannot connect to the apps until the configured domain name resolves to the public entry point of the Kubernetes cluster.

NGINX Ingress Controller interacts with the DNS provider through the intermediary ExternalDNS Kubernetes deployment, enabling automatic discovery of Kubernetes applications using external DNS records. In the diagram, the black lines represent the data path over which external users access applications in the Kubernetes cluster. The purple lines represent the control path over which app owners manage external DNS records with VirtualServer resources in the NGINX Ingress Controller configuration and over which ExternalDNS accesses the DNS provider.

Diagram showing how the ExternalDNS Kubernetes deployment connects NGINX Ingress Controller with the DNS provider

Integrating ExternalDNS and NGINX Ingress Controller

Perform the steps in the following sections to integrate ExternalDNS and NGINX Ingress Controller.

Prerequisites

  1. Create at least one registered domain. Substitute its name for <my‑domain> in the steps below. (There are many articles available on how to register a domain, including this guide from PCMag.)
  2. Deploy NGINX Ingress Controller using manifests or Helm charts. Add the equivalent of these command‑line arguments in the deployment specification (see the excerpt after this list):

    • -enable-external-dns – Enables integration with ExternalDNS.
    • -external-service=nginx-ingress – Tells NGINX Ingress Controller to advertise its public entry point for recording in A records managed by the DNS provider. The hostname of the public entry point resolves to the external service nginx-ingress.
  3. If you are deploying the Kubernetes cluster on premises, provision an external load balancer. We provide instructions for deploying NGINX as the external load balancer with BGP in our free eBook Get Me to the Cluster. Alternatively, you can use F5 BIG‑IP or MetalLB.
  4. If necessary, create a DNS zone in a provider supported by ExternalDNS. This command is for the provider used in the sample deployment, Google Cloud DNS.

    $ gcloud dns managed-zones create "external-dns-<my-domain>" --dns-name "external-dns.<my-domain>." --description "Zone automatically managed by ExternalDNS"
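
For reference, the deployment-spec excerpt mentioned in Step 2 might look roughly like this (the container name and image tag are illustrative; only the two ExternalDNS‑related arguments matter here):

      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress:2.4.0    # illustrative tag
        args:
        - -enable-external-dns
        - -external-service=nginx-ingress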

Deploy NGINX Ingress Controller and ExternalDNS

  1. Clone the GitHub repository for the sample deployment and deploy NGINX Ingress Controller.

    $ git clone https://github.com/nginxinc/NGINX-Demos.git && cd NGINX-Demos/external-dns-nginx-ingress/ 
    $ kubectl apply -f nginx-ingress.yaml && kubectl apply -f loadbalancer.yaml
  2. Update the following arguments in the ExternalDNS deployment specification (on lines 59–62 in external-dns-gcloud.yaml for the sample deployment):

    • --domain-filter – The name of the domain created in Step 4 of the previous section (in the sample deployment, external-dns.<my-domain>). Remove any existing values so that only this domain is used.
    • --provider – The DNS provider (in the sample deployment, google for Google DNS).
    • --google-project – The name of the Google project you’re using for the sample deployment (required only if you have more than one Google project).
    • --txt-owner-id – The ID you choose (unique to the sample deployment).

    Note: The arguments you need to include in the ExternalDNS deployment spec may vary depending on which DNS provider you choose. For a list of tutorials on deploying ExternalDNS to the cluster with different DNS providers, see the ExternalDNS documentation.

  3. Deploy ExternalDNS in the cluster and verify that the deployment runs successfully (the output is spread across two lines for legibility).

    $ kubectl apply -f external-dns-gcloud.yaml
    $ kubectl get pods -o wide
    NAME                                READY  STATUS    ...
    external-dns-4hrytf7f98f-ffuffjbf7  1/1    Running   ...
        ... RESTARTS   AGE
        ... 0          1m
    

Configure NGINX Ingress Controller

Next, we configure a VirtualServer resource with an Ingress load balancing rule that routes external connections into our Kubernetes applications.

  1. In app-virtual-server.yaml, set the host field (line 6):

     6    host: ingress.external-dns.<my-domain>

    The mapping between this value and the value of domain-filter on line 59 of external-dns-gcloud.yaml (set in Step 2 in the previous section) is what enables the automatic update of DNS records.

  2. Apply app-virtual-server.yaml and verify that the VirtualServer is correctly configured.

    $ kubectl apply -f app-secret.yaml && kubectl apply -f app-virtual-server.yaml
    $ kubectl get vs 
    NAME   STATE   HOST                              IP            
    cafe   Valid   ingress.external-dns.<my-domain>  34.168.X.Y
  3. Verify that a DNS type A record has been added to the DNS zone. In particular, the IP address in the DATA field must match the IP field in the output from the kubectl get vs command in the previous step (the external IP address of the service of type LoadBalancer which exposes NGINX Ingress Controller, or the equivalent in an on‑premises deployment).

    $ gcloud dns record-sets list --zone external-dns-<my-domain> --name ingress.external-dns.<my-domain> --type A
    NAME                               TYPE     TTL     DATA
    ingress.external-dns.<my-domain>.  A        300     34.168.X.Y
  4. To validate that the VirtualServer hostname can be resolved on the local machine, obtain the name servers assigned to the DNS zone (in this case my-ns-domains).

    $ gcloud dns record-sets list --zone external-dns-<my-domain> --name external-dns.<my-domain>. --type NS
    NAME                        TYPE      TTL     DATA
    external-dns.<my-domain>.   NS        21600   my-ns-domains
    
    $ dig +short @my-ns-domains ingress.external-dns.<my-domain>
    34.168.X.Y
  5. Use the DNS records retrieved in the previous step as dedicated name servers for your registered domain. This sets your registered domain as the parent zone of the DNS zone created in Step 4 of Prerequisites.
  6. Verify that you can access the VirtualServer hostname now that it’s exposed to the global Internet.

    $ curl -i --insecure https://ingress.external-dns.<my-domain>/tea
    HTTP/1.1 200 OK
    Server: nginx/1.23.0
    Date: Day, DD MM YYYY hh:mm:ss TZ
    Content-Type: text/plain
    Content-Length: 160
    Connection: keep-alive
    Expires: Day, DD MM YYYY hh:mm:ss TZ
    Cache-Control: no-cache

Scaling Out Multiple Kubernetes Clusters

You can quickly scale the architecture and automatically discover multiple clusters by automating the creation of external DNS records that resolve to new cluster entry points (Kubernetes Cluster 1 and Kubernetes Cluster 2 in the diagram below). Repeat the instructions in Deploy NGINX Ingress Controller and ExternalDNS and Configure NGINX Ingress Controller for each cluster.

Diagram showing automated creation of external DNS records that resolve to new cluster entry points

You can also use Infrastructure-as-Code tools in your CI/CD pipeline to generate and expose new clusters to external traffic using ExternalDNS and NGINX Ingress Controller. Additionally, you can manage multiple DNS zones, or even multiple DNS providers depending on how discovery is enabled.

Conclusion

Balancing productivity with security measures that mitigate breaches can be difficult. Imposing restrictions on DevOps teams often causes friction between them and NetOps/SecOps teams. The ideal balance differs in each organization, and NGINX provides the flexibility to establish a balance that adheres to your priorities and requirements.

In the past, app owners relied on NetOps teams to connect their applications to external systems. By using the ExternalDNS integration with NGINX, developers and DevOps teams are empowered to deploy discoverable applications on their own, helping accelerate time to market for innovation.

For a full comprehensive guide on getting started with NGINX in Kubernetes, download our free eBook Managing Kubernetes Traffic with F5 NGINX: A Practical Guide.

You can also get started today by requesting a 30-day free trial of NGINX Ingress Controller with NGINX App Protect WAF and DoS, or contact us to discuss your use cases.

Make Your NGINX Config Even More Modular and Reusable with njs 0.7.7


Since introducing the NGINX JavaScript module (njs) in 2015 (under its original name, nginScript) and making it generally available in 2017, we have steadily continued to add new features and refine our implementation across dozens of version updates. Normally we wait for an NGINX Plus release to discuss the features in a new NGINX JavaScript version, but we’re so excited about version 0.7.7 that this time we can’t wait!

The significant enhancements in njs 0.7.7, described in the sections below, help make your NGINX configuration even more modular, organized, and reusable.

To learn more about njs and review the list of use cases for which we provide sample code, read Harnessing the Power and Convenience of JavaScript for Each Request with the NGINX JavaScript Module on our blog.

For a complete list of all new features and bug fixes in njs 0.7.7, see the Changes documentation.

Declaring JavaScript Code and Variables in Local Contexts

In previous njs versions, you had to import your JavaScript code and declare the relevant variables – with the js_import, js_path, js_set, and js_var directives – in the top‑level http or stream context, the equivalent of declaring global variables at the top of a main file. But the directives that actually invoke the JavaScript functions and variables appear in a child context – for example, the js_content directive in an HTTP location{} block and the js_access directive in a Stream server{} block. This creates two issues:

  1. To someone reading through the configuration, the declarations in the http and stream contexts are essentially noise, because there’s no indication where the associated code and variables are actually used.
  2. It’s not obvious in the child context where the code and variables have been imported and declared. Though we recommend including the http{} and stream{} blocks only in the main configuration file (nginx.conf) and using the include directive to read in smaller function‑specific configuration files from the /etc/nginx/conf.d and /etc/nginx/stream.d directories, NGINX configuration is flexible – you can include http{} and stream{} blocks in multiple files. This can be especially problematic in environments where multiple people work on your NGINX configuration and might not always follow established conventions.

In njs 0.7.7 and later, you can import code and declare variables in the contexts where they’re used, for example directly inside the location{} block that invokes them.

Having all njs configuration for a specific use case in a single file also makes your code more modular and portable.

As an example, in previous njs versions when you added a new script you had to change both nginx.conf (adding js_import and possibly js_path, js_set, and js_var) and the file where the JavaScript function is invoked (here, jscode_local.conf).

In njs 0.7.7 and later, all the configuration related to the util function is in the one file, jscode_integrated.conf:
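
A sketch of what that file might contain (the module directory, file name, and function names are assumptions, not taken from the original example):

# jscode_integrated.conf -- all njs configuration for the util module in one place
location /util {
    js_path    /etc/nginx/njs/;            # assumed module directory
    js_import  util from util.js;          # assumed module file
    js_set     $util_greeting util.greeting;
    add_header X-Greeting $util_greeting;
    js_content util.handler;
}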

Modifying Behavior Depending on the Execution Context

Several new features in njs 0.7.7 enable you to modify the behavior of your JavaScript code depending on the context (processing phase) where it is executing.

The HTTP r.internal Property

The HTTP r.internal property is a Boolean flag that is true for internal requests (which are handled by location{} blocks that include the internal directive). You can use the r.internal flag to fork logic when a script uses a general event handler that can be called in both internal and non‑internal contexts.

The following classify as internal requests:

  • Requests redirected by the error_page, index, random_index, and try_files directives
  • Requests redirected by the X-Accel-Redirect response header field from an upstream server
  • Subrequests formed by modules such as SSI, addition, and njs itself
  • Requests changed by the rewrite directive
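
A minimal handler that forks on the flag might look like this (the handler name and responses are illustrative):

function handler(r) {
    if (r.internal) {
        // reached via an internal redirect (error_page, X-Accel-Redirect, and so on)
        r.return(204);
        return;
    }
    r.return(200, "handled an external request\n");
}

export default { handler };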

Improved s.send() Stream Method

In earlier njs versions, the Stream s.send() method is context‑dependent, because the direction in which it sends data is determined by the location (upstream or downstream) of the callback where the method is called. This works fine for synchronous callbacks – which s.send() was originally designed for – but fails with asynchronous functions such as ngx.fetch().

In njs 0.7.7 and later, the direction is stored in a separate internal flag, which s.send() can then use.

More Efficient File Operations with the New fs.FileHandle() Object

The file system module (fs) implements operations on files. The new FileHandle object in the fs module is an object wrapper for a numeric file descriptor. Instances of the FileHandle object are created by the fs.promises.open() method.

Use the FileHandle object to get a file descriptor, which can be further used to:

  • Perform functions like read() and write() on the file
  • Open a file and perform reads and writes at a specified location without reading the whole file

The following properties of FileHandle have been implemented (for information about the required and optional arguments for each property, see the documentation):

  • filehandle.fd
  • filehandle.read()
  • filehandle.stat()
  • filehandle.write()
  • filehandle.close()
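
As a small sketch that uses only the properties listed above (the file path is illustrative):

import fs from 'fs';

async function fileInfo(r) {
    // fs.promises.open() returns a FileHandle instance
    const fh = await fs.promises.open('/srv/static/index.html', 'r');
    const fd = fh.fd;                // filehandle.fd
    const st = await fh.stat();      // filehandle.stat()
    await fh.close();                // filehandle.close()
    r.return(200, `fd=${fd}, size=${st.size}\n`);
}

export default { fileInfo };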

Several existing fs methods have also been updated to support FileHandle; see the linked documentation for information about each method’s arguments.

Use njs to Enhance Your Configuration

With njs 0.7.7, we’ve made it easier for your teams to work on and share njs code. The extended contexts for njs directives make it even more straightforward to enhance NGINX configuration with custom JavaScript code. You can make the first move towards an API gateway, reverse proxy, or web server – and one that is more than just another middleware or edge component. You can make it part of your application through JavaScript, TypeScript, or third‑party node modules without adding another component in your stack. All you need is NGINX!

Have questions? Join the NGINX Community Slack and check out the #njs-code-review channel to learn more, ask questions, and get feedback on your njs code.

Updating NGINX for Vulnerabilities in the MP4 and HLS Video-Streaming Modules

Today, we are releasing updates to NGINX Plus, NGINX Open Source, NGINX Open Source Subscription, and NGINX Ingress Controller in response to recently discovered vulnerabilities in the NGINX modules for video streaming with the MP4 and Apple HTTP Live Streaming (HLS) formats, ngx_http_mp4_module and ngx_http_hls_module. (NGINX Open Source Subscription is a specially packaged edition of NGINX Open Source available in certain geographies.)

The vulnerabilities have been registered in the Common Vulnerabilities and Exposures (CVE) database and the F5 Security Incident Response Team (F5 SIRT) has assigned scores to them using the Common Vulnerability Scoring System (CVSS v3.1) scale.

The following vulnerabilities in the MP4 module (ngx_http_mp4_module) apply to NGINX Plus, NGINX Open Source, and NGINX Open Source Subscription.

The following vulnerability in the HLS module (ngx_http_hls_module) applies to NGINX Plus only.

Patches for these vulnerabilities are included in the following software versions:

  • NGINX Plus R27 P1
  • NGINX Plus R26 P1
  • NGINX Open Source 1.23.2 (mainline)
  • NGINX Open Source 1.22.1 (stable)
  • NGINX Open Source Subscription R2 P1
  • NGINX Open Source Subscription R1 P1
  • NGINX Ingress Controller 2.4.1
  • NGINX Ingress Controller 1.12.5

All versions of NGINX Plus, NGINX Open Source, NGINX Open Source Subscription, and NGINX Ingress Controller are affected. We strongly recommend that you upgrade your NGINX software to the latest version.

For NGINX Plus upgrade instructions, see Upgrading NGINX Plus in the NGINX Plus Admin Guide. NGINX Plus customers can also contact our support team for assistance at https://my.f5.com/.

Back to Basics: Installing NGINX Open Source and NGINX Plus

Today, NGINX continues to be the world’s most popular web server – powering more than a third of all websites and nearly half of the 1000 busiest as of this writing. With so many products and solutions, NGINX is like a Swiss Army Knife™ you can use for numerous website and application‑delivery use cases, but we understand it might seem intimidating if you’re just getting started.

If you’re new to NGINX, we want to simplify your first steps. There are many tutorials online, but some are outdated or contradict each other, only making your journey more challenging. Here, we’ll quickly point you to the right resources.

Resources for Installing NGINX

A good place to start is choosing which NGINX offering is right for you:

  • NGINX Open Source – Our free, open source offering
  • NGINX Plus – Our enterprise‑grade offering with commercial support

To find out which works best for you or your company, look at this side-by-side comparison of NGINX Open Source and NGINX Plus. Also, don’t be afraid to experiment – NGINX users often try and learn tricks they pick up from others or the NGINX documentation. There’s always a lot to learn about how to get the most out of both NGINX Open Source and NGINX Plus.

If NGINX Open Source is your choice, we strongly recommend you install it from the official NGINX repo, as third‑party distributions often provide outdated NGINX versions. For complete NGINX Open Source installation instructions, see Installing NGINX Open Source in the NGINX Plus Admin Guide.

If you think NGINX Plus might work better for your needs, you can begin a 30-day free trial and head over to Installing NGINX Plus for complete installation instructions.

For both NGINX Open Source and NGINX Plus, we provide specific steps for all supported operating systems, including Alpine Linux, Amazon Linux 2, CentOS, Debian, FreeBSD, Oracle Linux, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu.

Watch the Webinar

Beyond documentation, you can advance to the next stage on your NGINX journey by watching our free on‑demand webinar, NGINX: Basics and Best Practices. Go in‑depth on NGINX Open Source and NGINX Plus and learn:

  • Ways to verify NGINX is running properly
  • Basic and advanced NGINX configurations
  • How to improve performance with keepalives
  • The basics of using NGINX logs to debug and troubleshoot

If you’re interested in getting started with NGINX Open Source and still have questions, join the NGINX Community Slack – introduce yourself and get to know this community of NGINX power users! If you’re ready for NGINX Plus, start your 30-day free trial today or contact us to discuss your use cases.