
The modern market demands agility, flexibility, and above all, speed. The faster you crank out new applications and features, the better – and companies are taking note. As of 2018, 72% of enterprises planned to implement DevOps methodologies within the following year, and the DevOps market is projected to reach $9.4 billion by 2023.

But as organizations put their DevOps visions into action, infrastructure teams are facing a new development‑centric reality, one in which they must work at the same pace as development teams to deliver the services and policies required across a complex web of data centers, cloud, and virtualized environments – all without getting in the way.

The Infrastructure Bottlenecks that Developers Dread

Many organizations feel the pain of balancing faster development with operational requirements like reliability, scalability, stability, and security. The rise of virtualization and containerization has helped infrastructure teams achieve more agility, shifting NetOps and SecOps teams left so they can automate infrastructure as part of the application development lifecycle. Still, infrastructure teams continue to be bottlenecks even as DevOps teams reach new heights.

The reality is that infrastructure is still moving too slowly. According to research from F5 and Red Hat, just under half of all application infrastructure deployments are currently automated. In addition, infrastructure teams continue to deliver services through a ticket‑based approach – nearly half of DevOps and IT professionals report waiting up to a month for infrastructure access, while another 24% say it sometimes takes over a month.

Who wants to wait days – let alone weeks or months – to get moving? No one, and especially not developers driven by market expectations to deliver more and as quickly as possible. As a result, these bottlenecks (and how developers seek to avoid them) can pose serious risks not only to the reliability and security of applications but to the entire organization.

The Shadow IT that Infrastructure Teams Dread

DevOps’ fundamentals are to reduce time to market and improve online experiences through autonomy, collaboration, and iteration.

Many DevOps teams seem to be finding that the best path to productivity is using emerging techniques and tools (both open source and proprietary), whether their IT team approves them or not – the “shadow IT” so dreaded by infrastructure teams. For instance, automation tools like Ansible or Terraform make life easier by deploying infrastructure as code. Or maybe a DevOps team starts using a project on GitHub that makes testing or application updates faster and integrates with existing CI/CD tooling.

Why do enterprise developers turn to the dark (or at least shadowy) side? It’s because they’re focused on one end goal: releasing code fast. They often lack the context and visibility into the big picture they need in order to recognize the kinds of tool design and implementation weaknesses that can bring down mission‑critical apps or compromise customer data. That’s the thing about infrastructure – developers may not want to be slowed down by it, but everyone notices when something goes wrong. At the same time, curbing the freedom of developers can impair their ability to move quickly, impacting market competitiveness and revenues.

It’s a catch‑22. The market says move faster, but it also says be available, stable, and secure.

Self-Service Empowers DevOps to Run Safely

How can organizations provide development teams with the freedom they need while also ensuring that infrastructure teams can do their jobs?

Say a company has 30 different development teams working on 50 separate microservices. How do you let them provision services, test and deploy new features, and coordinate security changes on new code without them ending up waiting six weeks to get a green light? That’s where self‑service comes in.

Given their history with DevOps and shadow IT, infrastructure teams might well believe that self‑service only leads to chaos. When developers are left entirely to their own devices and adopt shadow IT, they sometimes leave a trail of high costs, duplicated effort, inconsistent policies, and incompatible platforms and standards in their wake.

In a previous post, we characterized this situation as developers running with scissors. In many organizations, infrastructure teams see it as their responsibility to take away the scissors and make developers walk, and developers end up resenting them for it. To eliminate this friction, infrastructure teams need to adopt a new goal – not to stop developers from running, but to provide different tools that are safe to run with.

Infrastructure teams need to offer app delivery and security services that integrate into CI/CD frameworks and work seamlessly with both legacy apps and modern, cloud‑native apps. This enables developers to consume infrastructure resources and security policies without ever having to file a ticket.

Three Components of Self-Service Application Delivery

To provide self‑service application delivery and security, you need three primary components: a load balancer, a web application firewall (WAF), and a self‑service portal. All three need to work in concert with each other and be deployed as Infrastructure as Code. Given that most developers work on multiple platforms, the components also need to be infrastructure‑agnostic – deployable across bare‑metal, virtual‑machine, and cloud platforms.

Here’s a rundown of the characteristics needed to make these components self‑serviceable.

Self-Service Component 1: A Lightweight, Software‑Based Load Balancer

As they roll out new features or deploy new services, application teams need to test code. They may choose to ramp up traffic slowly to the new code (canary testing), test how users react to the new code versus old code (A/B testing), provide zero down‑time rollover to the new code (blue‑green deployment), or provide a failover mechanism in case the new code doesn’t work as desired (circuit breaker pattern).
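
To make one of these patterns concrete, here's a minimal sketch of a weight‑based canary split in NGINX configuration. The upstream names, addresses, and the 5% weight are hypothetical placeholders, not a prescribed setup:

    http {
        upstream app_stable {
            zone app_stable 64k;   # shared-memory zone enables runtime API changes (see below)
            server 10.0.0.10:8080;
        }
        upstream app_canary {
            zone app_canary 64k;
            server 10.0.0.20:8080;
        }

        # Hash on client address to route ~5% of clients to the canary build
        split_clients "${remote_addr}" $app_upstream {
            5%    app_canary;
            *     app_stable;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://$app_upstream;
            }
        }
    }

Ramping the canary percentage up over successive releases – and dropping it back if errors spike – is exactly the kind of change a developer should be able to make without a ticket.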

All of these testing patterns require a load balancer to direct users and traffic based on the developer’s desired outcome. In a self‑service environment, application teams configure app‑specific load balancers themselves in near real time, using a service portal or configuration API instead of filing a ticket with the infrastructure team. No more waiting hours, days, or even weeks to test the efficacy of the new code.
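
For example, the NGINX Plus API module exposes upstream configuration at runtime, so adding a backend requires neither editing configuration files nor reloading. A sketch, assuming the API is enabled at /api and reusing the hypothetical app_canary upstream (with its zone directive) from above:

    # Add a new canary instance through the NGINX Plus API
    curl -X POST -d '{"server": "10.0.0.30:8080"}' \
        http://localhost:8080/api/6/http/upstreams/app_canary/servers

    # Confirm the upstream's current server list
    curl http://localhost:8080/api/6/http/upstreams/app_canary/servers

In a self‑service environment, calls like these are typically issued by the portal or the CI/CD pipeline on the developer's behalf rather than by hand.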

The self‑service load balancer sits in its own dedicated tier behind the primary, network‑based load balancer. Moreover, each application (or even service or microservice) gets its own dedicated load balancer instance in this tier. This ensures that each configuration change doesn’t need to be regression‑tested against all other applications.

Self-Service Component 2: An Integrated Web Application Firewall

A self‑service load balancer boosts developer productivity by eliminating processes that slow the release of new code. For the enterprise to minimize risk of exploits in this new code, however, a WAF is needed. But there’s a catch: WAFs are not necessarily easy to configure. In fact, many application teams see WAFs as an impediment they’d rather avoid.

That’s where an integrated WAF comes in. Just as with the load balancer, enterprises need a lightweight, software‑based WAF that can sit closer to the app – running near or on the same instance as the software load balancer. Think of each application as a room in a house. The self‑service load balancer is the door to that room, and the WAF is the lock on the door. In today’s zero‑trust environment, each door needs its own lock. Enterprises can no longer rely on a single security control for the whole house.

With a self‑service WAF, security teams configure each WAF with fine‑grained security controls that don’t intrude on developers’ work. The same CI/CD pipeline and Infrastructure-as-Code automation that enables your canary, A/B, blue‑green, and circuit‑breaker patterns can also apply the security policies that protect new code against known exploits, denial-of-service attacks, and bot attacks.
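
In practice, this can be as simple as attaching a declarative policy file to the app’s load balancer configuration. A sketch using NGINX App Protect directives, with illustrative file paths and names (the policy JSON itself would be owned by the security team and versioned alongside the app):

    # Load the App Protect module (main context, before the http block)
    load_module modules/ngx_http_app_protect_module.so;

    http {
        server {
            listen 80;

            # Enable the WAF with a security-team-owned policy; the policy
            # file can be deployed through the same CI/CD pipeline as the app
            app_protect_enable on;
            app_protect_policy_file "/etc/app_protect/conf/app_policy.json";

            # Ship security events to a remote logging endpoint
            app_protect_security_log_enable on;
            app_protect_security_log "/etc/app_protect/conf/log_default.json"
                                     syslog:server=10.0.0.5:514;

            location / {
                proxy_pass http://app_stable;
            }
        }
    }

Because the policy is just a file referenced from configuration, updating it becomes a pull request, not a ticket.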

Self-Service Component 3: An Application-Centric Portal with RBAC

Your lightweight, software‑based load balancer and WAF perform the heavy lifting at the data plane. However, to truly operate them in a self‑service environment you need a way to expose these capabilities via portals and with role‑based access control (RBAC). This requires additional control and management plane technologies layered atop the data plane.

Specifically, a control plane provides additional configuration and orchestration capabilities. This makes your infrastructure self‑serviceable: new instances of load balancers and WAFs can be spun up and down as needed, and configuration changes can be applied quickly. All of this needs to be exposed via an API so that it can be automated and integrated into CI/CD pipelines.
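
As a purely illustrative sketch of what that looks like from a pipeline’s point of view – the endpoint path and payload below are hypothetical stand‑ins, not a documented API – a declarative call describes the desired state and lets the control plane converge on it:

    # Hypothetical declarative control-plane call: the endpoint and payload
    # shape are illustrative assumptions, not a documented API reference.
    curl -X PUT "https://controller.example.com/api/v1/environments/prod/apps/storefront" \
         -H "Authorization: Bearer $TOKEN" \
         -H "Content-Type: application/json" \
         -d '{
               "desiredState": {
                 "backends": ["10.0.0.10:8080", "10.0.0.20:8080"],
                 "waf": { "policy": "storefront-default" }
               }
             }'

The key property is that the caller states what it wants, not how to wire it up – which is what makes the same call safe to embed in a CI/CD pipeline.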

On top of the control plane sits a management plane where you can create your self‑service portal and enforce RBAC policies. This way, each application team sees only the infrastructure it has permission to configure. These portals need to be application‑centric (as opposed to infrastructure‑centric) so that the teams can focus on the policies, workflow, and traffic management specific to their app.

NGINX Empowers Infrastructure Teams to Deliver Self‑Service

So, how do you get there? Infrastructure teams can be heroes instead of villains in the eyes of their DevOps colleagues if they start by providing services and tools that offer top‑notch developer experiences. NGINX offers CI/CD‑friendly tools and self‑service app management that remove the friction between DevOps, NetOps, and SecOps.

Here’s why:

  • NGINX is the gold standard DevOps wants to use. NGINX has deep roots in open source and is backed by a strong community that already loves and uses our products. It’s also built on a proven, production‑ready platform that provides the stability and performance to keep infrastructure teams happy. Providing the tools that development teams would choose on their own reduces the chances of developers going rogue with shadow IT.
  • NGINX balances development autonomy with infrastructure guardrails. NGINX allows infrastructure teams to release the application delivery and security technologies developers need for capabilities like A/B testing, canary and blue‑green deployments, and circuit‑breaker patterns. At the same time, infrastructure teams keep the authority to set up guardrails that ensure production applications remain consistent, reliable, and secure.
  • NGINX automates and integrates self‑service into CI/CD pipelines. NGINX Controller makes it easy to incorporate infrastructure capabilities into your CI/CD pipelines and automation tools – for example, using our collection of Red Hat Ansible roles – thanks to a declarative API that makes your NGINX Plus infrastructure extensible. Controller also provides a self‑service portal and RBAC to ensure that development teams have direct access to configure their own NGINX directives. Think of it as delivering a service catalog that makes infrastructure provisioning as simple as it is with a public cloud provider.
  • NGINX enhances security with internal WAF-as-a-service. NGINX App Protect is a native WAF built on F5’s market‑leading technology, providing self‑service capabilities and analytics that give security teams more control without slowing developers down. NGINX Controller provides optional security provisioning add‑ons that bring the power of NGINX App Protect to a self‑service environment. The same declarative API, self‑service portal, and RBAC controls ensure that security teams can deploy policies that are fully integrated into the CI/CD pipeline.

Get Started with NGINX Controller

You can get started providing self‑service app delivery and security infrastructure with NGINX Controller. Start a free 30-day trial today and get access to the NGINX Controller App Delivery Module, the NGINX Controller App Security add‑on for App Delivery, NGINX Plus, and NGINX App Protect. All can be deployed and configured via Controller’s graphical user interface or declarative API. Not sure how to set up a self‑service catalog? Work with NGINX experts to help your infrastructure teams set up application‑centric portals and role‑based access control.




About The Author

Karthik Krishnaswamy

Director, Product Marketing for NGINX
