
How to Augment Your F5 Hardware Load Balancer with NGINX

The way enterprises architect applications has changed. According to our recent user survey, 65% of applications in an enterprise portfolio are monoliths, where all of the application logic is packaged and deployed as a single unit. However, we see that the majority of new app development uses microservices architectures instead. Nearly 10% of apps are built net‑new as microservices (where the application is broken up into discrete, independently packaged services), while the remaining 25% are hybrid applications (a combination of a monolith with attached microservices, sometimes referred to as “miniservices”).

You can read more about this in our seminal blog series on microservices. It details the journey from monoliths to microservices, which has a profound impact on all aspects of application infrastructure:

  • People: Control shifts from infrastructure teams to application teams. AWS showed the industry that if you make infrastructure easy to manage, developers will provision it themselves. Responsibility for infrastructure then shifts away from dedicated infrastructure and network roles.
  • Process: DevOps speeds provisioning time. DevOps applies agile methodologies to app deployment and maintenance. Modern app infrastructure must be automated and provisioned orders of magnitude faster, or you risk delaying the deployment of crucial fixes and enhancements.
  • Technology: Infrastructure decouples software from hardware. Software‑defined infrastructure, Infrastructure as Code, and composable infrastructure all describe new deployment architectures where programmable software runs on commodity hardware or public cloud computing resources.

These trends impact all aspects of application infrastructure, but in particular they change the way load balancers – sometimes referred to as application delivery controllers, or ADCs – are deployed. Load balancers are the intelligent control point that sits in front of all apps.

Bridging the Divide Between NetOps and DevOps

Historically, a load balancer was deployed as hardware at the edge of the data center. The appliance improved the security, performance, and resilience of the hundreds or even thousands of apps that sat behind it. However, the shift to microservices and the resulting changes to the people, process, and technology of application infrastructure require frequent changes to apps. These app changes then require corresponding changes to load balancer policies.

The F5 appliance sitting at the frontend of your environment is doing the heavy lifting – providing advanced application services like local traffic management, global traffic management, DNS management, bot protection, DDoS mitigation, SSL offload, and identity and access management. Forcing constant policy changes to this environment can destabilize your application delivery and security, introducing risk and requiring your NetOps teams to spend time testing new policies rather than on other, value‑added work.

The good news is that there is a better approach. Retain the F5 infrastructure at the frontend to provide those advanced application services to the large numbers of mission‑critical apps it’s charged to protect and scale. Augment that solution by placing a lightweight software load balancer from NGINX – with NGINX Controller managing instances of NGINX Plus load balancers – further back as part of your application stack. This empowers your DevOps and application teams to directly manage rule changes on the software load balancer, often automating them as part of a CI/CD framework.

The result? You achieve the agility and time-to-market benefits that your app teams need, without sacrificing the reliability and security controls your network teams require.

NGINX software load balancers are the key to bridging the divide between DevOps and NetOps. There are three common deployment models for augmenting your F5 BIG-IP infrastructure with NGINX:

  • Deploy NGINX behind the F5 appliance to act as a DevOps‑friendly abstraction layer
  • Provision an NGINX instance for each of your apps, or even for each of your customers
  • Run NGINX as your multi‑cloud application load balancer for cloud‑native apps
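As a sketch of the first deployment model, the configuration below shows an NGINX Plus instance sitting behind a BIG‑IP virtual server and load balancing an application; the upstream name, addresses, and ports are hypothetical, introduced here only for illustration:

```nginx
# Hypothetical NGINX Plus layer deployed behind a BIG-IP virtual server.
# App teams own this file and can change routing rules via CI/CD
# without touching the frontend F5 policies.

upstream app_backend {
    zone app_backend 64k;        # shared memory zone, enables dynamic reconfiguration
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}

server {
    listen 80;

    # In this sketch, BIG-IP terminates TLS and forwards plain HTTP here.
    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Because the frontend BIG‑IP policy only needs to know about the NGINX instances, not the individual app servers, app teams can add or remove backends without a change request to NetOps.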

Because NGINX is lightweight and programmable, it consumes very few compute resources and imposes little to no additional strain on your infrastructure.

But augmenting your F5 appliances with NGINX software isn’t an overnight process. To make it easier, we’ve curated a list of resources you can use to research, evaluate, and implement NGINX.

Resources to Help You Augment F5 with NGINX

Stage 1: Researching NGINX as a Complementary Solution to F5

The first stage in the process is to understand the benefits of deploying NGINX as an additional load balancer. If you’re just getting started, we recommend you check out our:

Blog: Not All Software Load Balancers Are Created Equal – Many vendors offer a software load balancer, but they’re not like NGINX. This blog explains why.

Stage 2: Evaluating NGINX as a Complementary Solution to F5

Now that you understand the basics, it’s time to build the business case for NGINX. Understand the various ways you can deploy NGINX to augment your F5 solutions. Learn from customers and NGINX experts with our:

Case study: DataposIT Implements a Distributed, Scalable Architecture for NASCOP Using NGINX Plus and NGINX Controller – Read how Kenya’s National AIDS and STI Control Programme (NASCOP) augmented its F5 infrastructure to accelerate its hybrid cloud initiative.
Video: The TCO of the NGINX Application Platform – Learn about quantifying the ROI and total cost of ownership of NGINX. Contact us for a custom ROI calculation.

Stage 3: Implementing NGINX as a Complementary Solution to F5

After making the investment decision, it’s time to roll up your sleeves and deploy NGINX alongside F5. You’ll need to learn how to translate F5 BIG-IP iRules into NGINX configuration directives, to ensure continuity of application services spanning from the frontend BIG-IP to the backend NGINX Plus instances. You’ll also want to learn the basics of using NGINX Plus and NGINX Controller.
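To illustrate the kind of translation involved, here is a simple path‑based routing iRule alongside a hypothetical NGINX equivalent; the pool names, addresses, and URI prefix are made up for this example and are not part of any official mapping guide:

```nginx
# BIG-IP iRule (shown as a comment for reference):
#
#   when HTTP_REQUEST {
#       if { [HTTP::uri] starts_with "/api" } {
#           pool api_pool
#       } else {
#           pool web_pool
#       }
#   }
#
# A roughly equivalent NGINX configuration: upstream blocks play the
# role of pools, and location blocks replace the iRule's URI test.

upstream api_pool { server 10.0.2.10:8080; }
upstream web_pool { server 10.0.2.20:8080; }

server {
    listen 80;

    location /api {
        proxy_pass http://api_pool;
    }
    location / {
        proxy_pass http://web_pool;
    }
}
```

More complex iRules (header rewriting, redirects, persistence) map onto other NGINX directives, so expect this translation to be a per‑rule exercise rather than a mechanical conversion.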

Webinar: NGINX Controller: Configuration, Management, and Troubleshooting at Scale – Learn how to manage and monitor NGINX Plus software load balancers.
Webinar: NGINX: Basics and Best Practices – Learn how to get the most out of your NGINX Plus instances to complement the capabilities of your F5 investment.

Two Ways to Get Started with a Free Trial of NGINX Software

Getting started with NGINX is easy. We offer two automated trial experiences, based on your needs.

If you want to augment your F5 load balancer with NGINX Plus and don’t need central monitoring and management, request a free 30‑day trial of NGINX Plus. This is the best option for DevOps and infrastructure teams that plan to use automation and orchestration tools like Ansible to manage NGINX Plus, or already manage F5 via an API.
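For teams that plan to drive NGINX Plus through automation, one common pattern is to enable its read/write API so tools like Ansible or plain curl can reconfigure upstreams without editing files. The sketch below shows one way to expose that API; the listen port and access restrictions are assumptions for illustration, so adjust them to your environment:

```nginx
# Hypothetical management server block exposing the NGINX Plus API.
# Automation tooling can then add or drain upstream servers at runtime.

server {
    listen 8080;

    allow 127.0.0.1;   # restrict API access to local automation agents
    deny all;

    location /api {
        api write=on;  # read/write API for dynamic reconfiguration
    }
}
```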

If you want a solution that includes additional monitoring, management, and analytics capabilities for NGINX Plus load balancers, request a free trial of NGINX Controller (it also includes NGINX Plus). This is the best option for infrastructure and network teams that do not manage network infrastructure via APIs or want to evaluate the NGINX Application Platform as a software ADC that more closely resembles an F5 BIG-IP.

We’d love to talk with you about how NGINX can help with your use case.
