
Today, many companies are on a cloud‑native journey or – as F5 puts it – a journey to adaptive applications. By “adaptive” we primarily mean that the app responds to changes in its operating environment by automatically scaling in response to the level of demand, detecting and mitigating problems, and recovering from failures, among other capabilities. But we also mean that it’s fairly easy to update the app to meet changing requirements in the business and technology landscape as traffic patterns shift, the underlying infrastructure develops, and cyberattacks get more numerous and sophisticated.

Sounds like a great outcome, doesn’t it? Well, the journey to adaptive apps doesn’t happen overnight. It takes time – and it’s okay if you’re still in the early stages. Your expedition can be made easier by simplifying crucial points along the way.

Most often, we think of adaptive apps as driven by microservices architectures, running in containers within an elastic cloud environment, a world that comes with a lot of complexity. As nodes and pods and containers spin up and down, you can’t keep up if you have to deal with every change manually. You need orchestration (in particular, container orchestration) to keep the chaos at bay.

At a minimum, adaptive apps need to respond to environmental changes across four factors:

  1. Performance
  2. Scalability (or flexibility)
  3. Observability
  4. Security

How do you seamlessly orchestrate and manage your adaptive apps – as a group – to facilitate fast, automated responses to regularly changing conditions? Well, it's hard to talk about orchestration without talking about Kubernetes – which is why Granville Schmidt and I discussed it recently at GlueCon.

Kubernetes is one of the most popular projects from the Cloud Native Computing Foundation (CNCF) – and for good reason. It provides a lot of amazing functionality (especially for enterprises that need to make continual adjustments) and is often the right tool for the heart of your scalable application because it bakes in the needed flexibility. When it launched in July 2015, it was already a mature product (thanks to all the work Google had done on its precursor, the Borg project), and as it evolves, the ecosystem of supporting technologies needed to make it a complete solution continues to grow. But while Kubernetes is a powerful tool, it doesn't solve every problem, and it can be complex and challenging.

Starting Your Cloud-Native Voyage

Unless you’re starting with a brand‑new code base, you likely already have applications in production. Not every application needs to be cloud native, especially if it already fulfills our requirements for modern apps. Also, just because Kubernetes has so many capabilities doesn’t mean you need to use it everywhere. Kubernetes has a time and place. But making the move to microservices and containers can definitely be advantageous.

So, how do you get started with your cloud‑native journey? How do you go from existing monoliths to a microservices‑based, cloud‑enabled world?

With so many resources out there, we want to break down the simplest way to go from monolith to cloud native with Kubernetes orchestration. Back in 2010, Gartner outlined five approaches to cloud migration: rehost, refactor, revise, rebuild, and replace. Microsoft has since expanded on Gartner's work into a framework it calls cloud rationalization (in the process renaming the third approach to rearchitect). But you can also think of these approaches as steps on a journey – one that touches developers and operations, code, and process – and one that you might be just starting or already well along. Here we're focusing on three of the steps: rehost, refactor, and rearchitect.

Rehost (Lift and Shift)

The cloud‑native journey often starts with an existing monolith. But you don’t want to go from monolith to cloud native in one fell swoop. We recommend starting with rehosting, or encapsulating the infrastructure.

While we regularly hear about the splendors of microservices in the cloud, there are still a whole lot of monoliths out there. And why not? They work and they aren't broken (yet). However, even where the monolith's traditional three-tier architecture is still in use, the apps built on it are likely running up against scaling limits, or even forcing you to build hybrid models to connect with today's users.

This first, rehosting step is also called lift and shift. In short, you make a copy of your existing application and “paste” it onto an appropriate cloud platform. Basically, you’re moving to “someone else’s computers” with as little impact on your application as possible.

This often happens in a virtual machine (VM) world, which gets you started with the management of apps in the cloud and helps identify the issues you need to deal with. By just lifting and shifting, you're not really leveraging many of the advantages of adaptive apps, since you can scale only by duplicating the entire app, even if only one piece of functionality is the bottleneck. And having entered the cloud-native world, you're faced with unfamiliar issues that apply specifically to web apps, such as handling static assets alongside dynamically running code.

Even though you’re still above the level of containers and orchestration, you’re on your way and ready for the next steps.

Refactor

In the next step, you refactor your app into pieces (usually implemented as modules) each of which performs a single function (or perhaps a set of closely related functions). At the same time, you might add new app capabilities or tie some functions to specific cloud services (like databases or messaging services). You’re also likely moving to containers, while retaining some of the VMs that make up your infrastructure, and orchestration is becoming more important.

Being farther along in the process, you’ve got more moving pieces than when you simply rehost. Your refactored services are probably in containers so that you can scale them up and down while retaining the loosely coupled communication paths and API calls required to make things work. Now, it’s time to bring in Kubernetes to orchestrate your new containers at the right level of control.
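
To make that concrete, here is a minimal sketch of one refactored piece running under Kubernetes. All the names here (the "catalog" service, the image, the ports) are hypothetical placeholders, not prescriptions:

    # Deployment: a hypothetical "catalog" service split out of the monolith,
    # packaged as a container and scaled independently of everything else.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: catalog
    spec:
      replicas: 3                  # scale just this function, not the whole app
      selector:
        matchLabels:
          app: catalog
      template:
        metadata:
          labels:
            app: catalog
        spec:
          containers:
          - name: catalog
            image: registry.example.com/catalog:1.0    # placeholder image
            ports:
            - containerPort: 8080
    ---
    # Service: a stable address for the pods, preserving the loosely coupled
    # API calls between the refactored pieces.
    apiVersion: v1
    kind: Service
    metadata:
      name: catalog
    spec:
      selector:
        app: catalog
      ports:
      - port: 80
        targetPort: 8080

Because the manifests are declarative, scaling up and down is the orchestrator's job: adjust replicas (or add a HorizontalPodAutoscaler) and Kubernetes does the rest.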

Of course, given the complexity incurred with Kubernetes, there are some challenges to consider. One of the big ones is Ingress, the Kubernetes resource that lets you configure HTTP load balancing for applications hosted in your Kubernetes cluster (represented by services) and deliver those services to clients outside of the cluster. With NGINX, you can use the NGINX Ingress Controller to support the standard Ingress features of content‑based routing and TLS/SSL termination. NGINX Ingress Controller also extends the standard Ingress resource with several additional features like load balancing for WebSocket, gRPC, TCP, and UDP applications.
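
As a quick illustration, a standard Ingress resource handled by NGINX Ingress Controller might terminate TLS and route by path along these lines (the hostname, Secret, service names, and IngressClass name are hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-ingress
    spec:
      ingressClassName: nginx      # assumes an IngressClass backed by NGINX Ingress Controller
      tls:
      - hosts:
        - app.example.com
        secretName: app-tls        # TLS/SSL terminated at the edge of the cluster
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /api             # content-based routing: /api goes to one service...
            pathType: Prefix
            backend:
              service:
                name: catalog
                port:
                  number: 80
          - path: /                # ...and everything else to another
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80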

Depending on your early refactoring work, your emerging cloud‑native app may need the flexibility to communicate in multiple ways. And since NGINX Ingress Controller itself runs in a Kubernetes pod, you’re well set for the next step, rearchitecting.

Open Source Refactoring with NGINX Unit

Using an open source tool like NGINX Unit can help make refactoring easier, with dynamic reconfiguration via API, request routing, and security features. As a next-gen technology, NGINX Unit can also help you modernize your monolith by turning it into a cloud-native monolith. NGINX Unit's RESTful configuration method provides uniform interfaces and separates client concerns from server concerns. While your three-tier monolith might already strive to do that, NGINX Unit makes cloud native approachable and leads to easier operations. This clarifies the lines of communication and helps identify the further steps required after refactoring.

The refactoring step often stumbles on application control. Since NGINX Unit is already a container-friendly technology, its application-control features (including dynamic reconfiguration) let you seamlessly add services as you separate them from the monolith. NGINX Unit provides application control from Layer 4 all the way up into user space, including high-performance HTTP, proxying, backends, and true end-to-end SSL/TLS. The fact that NGINX Unit supports multiple languages – so you can use the right language at the right time – also becomes especially important in your early refactoring work.
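
As a rough sketch of that idea, the NGINX Unit configuration below routes one URI prefix to a Python service you have already split out of the monolith, while everything else still reaches the original PHP app – the application names, paths, and languages are all hypothetical. You would apply it dynamically, with no restart, by sending it to Unit's control API (for example, curl -X PUT --data-binary @config.json --unix-socket /var/run/control.unit.sock http://localhost/config):

    {
      "listeners": {
        "*:8080": { "pass": "routes" }
      },
      "routes": [
        {
          "match": { "uri": "/api/*" },
          "action": { "pass": "applications/api" }
        },
        {
          "action": { "pass": "applications/monolith" }
        }
      ],
      "applications": {
        "api": {
          "type": "python",
          "path": "/srv/api",
          "module": "wsgi"
        },
        "monolith": {
          "type": "php",
          "root": "/srv/monolith"
        }
      }
    }

As you carve the next service out of the monolith, you add another route and application object through the same API – exactly the kind of incremental, low-risk change this step calls for.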

Rearchitect

After refactoring to add and replace services in your initial application, you next need to rearchitect and look at a stable and sensible redesign for your microservices architecture. Duct tape and wire might work for a while, but in production systems stability is highly desirable.

In rearchitecting as we envision it, you continue to decompose the initial application until all functions are performed by microservices (or serverless functions) that live in containers orchestrated by Kubernetes. Here, communication is key. Each microservice is as small as necessary (which does not mean "as small as possible") and is often developed by an independent team.

New issues and even some chaos inevitably arise when an app consists of discrete, independent microservices with loosely coupled communications. Remember that external Ingress issue? It’s back with a vengeance. Now, you’re dealing with a more complex collection of services needing Ingress and more teams you have to collaborate with.

Rearchitecting with NGINX Ingress Controller and NGINX Kubernetes Gateway

Beyond the standard Kubernetes Ingress resource, NGINX Ingress Controller supports its own set of custom resource definitions (CRDs), which extend the power of NGINX to a wider range of use cases, such as object delegation using the role-based access control (RBAC) capabilities of the Kubernetes API.
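
Here is a minimal sketch of what that delegation can look like with the VirtualServer and VirtualServerRoute CRDs – the namespaces, hostname, and service names are hypothetical. An admin-owned VirtualServer hands the /api subtree to a VirtualServerRoute owned by a separate team, and Kubernetes RBAC can then restrict each team to its own resources:

    # Owned by the cluster admin
    apiVersion: k8s.nginx.org/v1
    kind: VirtualServer
    metadata:
      name: app
      namespace: nginx-ingress
    spec:
      host: app.example.com
      upstreams:
      - name: web
        service: web
        port: 80
      routes:
      - path: /
        action:
          pass: web
      - path: /api
        route: team-api/api-routes   # delegated to the API team's namespace
    ---
    # Owned by the API team, in its own namespace
    apiVersion: k8s.nginx.org/v1
    kind: VirtualServerRoute
    metadata:
      name: api-routes
      namespace: team-api
    spec:
      host: app.example.com
      upstreams:
      - name: api
        service: catalog
        port: 80
      subroutes:
      - path: /api
        action:
          pass: api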

It is also worth looking into a controller that implements the Kubernetes Gateway API specification. A Kubernetes Gateway allows you to delegate infrastructure management to multiple teams and simplifies deployment and administration. Such a gateway does not displace an Ingress controller, but as the spec matures it may become the solution of choice. NGINX’s implementation of the Kubernetes Gateway API, NGINX Kubernetes Gateway, uses NGINX technology on the data plane and (as of this writing) is in beta.

In line with the Kubernetes Gateway API specification, NGINX Kubernetes Gateway enables multiple teams to control different aspects of the Ingress configuration as they develop and deliver the app. In the cloud-native world, it is common for developers and site reliability engineers (SREs) to collaborate continually, and the gateway makes it easier to delegate different controls to the appropriate teams. The gateway also incorporates core capabilities that were formerly available only through custom Kubernetes annotations and CRDs, making it even simpler to get your cloud-native world up and rocking.
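
A minimal sketch of the Gateway API model shows that split in ownership – the resource names, namespaces, and gateway class are hypothetical, and the API surface may shift as the spec and NGINX Kubernetes Gateway mature:

    # Owned by the platform/infrastructure team: the shared entry point.
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    metadata:
      name: shared-gateway
      namespace: infra
    spec:
      gatewayClassName: nginx        # assumes an NGINX-backed GatewayClass
      listeners:
      - name: http
        port: 80
        protocol: HTTP
        allowedRoutes:
          namespaces:
            from: All                # application teams may attach routes
    ---
    # Owned by an application team: its own routing rules, attached to the Gateway.
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: HTTPRoute
    metadata:
      name: catalog-route
      namespace: team-api
    spec:
      parentRefs:
      - name: shared-gateway
        namespace: infra
      hostnames:
      - app.example.com
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /api
        backendRefs:
        - name: catalog
          port: 80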

Cloud Native and Open Source

So, there you have it, a simple set of steps for moving from monolith to cloud native, driven by the power of containers and Kubernetes. You might find many other approaches suitable for your use cases, like retaining the initial app while building a new framework around it that combines the capabilities of NGINX Unit and Kubernetes. Or you might build a hybrid model. If you’d like to look at a reference model of NGINX in a Kubernetes world, check out our open source Modern App Reference Architecture (MARA) project.

No matter which approach to cloud native you choose, keep in mind that you always need control and communication capabilities in your Kubernetes cluster to achieve the performance and stability you want for your production systems. NGINX enterprise and open source technologies can help deliver all of that, from data plane to management plane, with security and performance in mind.

Learn how to get started with NGINX Unit in our installation guide. And if you’d like to try the enhanced functionality of the NGINX Plus-based NGINX Ingress Controller, start your 30-day free trial today or contact us to discuss your use cases.

About The Author

Dave McAllister

Sr OSS Technical Evangelist for NGINX
