NGINX.COM
Web Server Load Balancing with NGINX Plus

Microservices refers both to an approach to software architecture that builds a large, complex application from multiple small components, each of which performs a single function (such as authentication, notification, or payment processing), and to the small components themselves. Each microservice is a distinct unit within the software development project, with its own codebase, infrastructure, and database. The microservices work together, communicating through web APIs or messaging queues to respond to incoming events.

In his book Building Microservices, Sam Newman succinctly defines microservices as “small, autonomous services that work together”. This definition captures three key aspects of microservices. A microservice’s codebase is small because it focuses on doing one thing well; the small size means an individual developer or small team is sufficient to create and maintain the code. Being autonomous means a microservice can be deployed and scaled independently, without consulting the teams in charge of other microservices when its internals change. This is possible because, as microservices work together, they communicate through well‑defined APIs or similar mechanisms that don’t expose their internal workings.
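One common way to keep those API boundaries explicit is to route requests through a gateway, so each service is reachable only at its well‑defined endpoint. A minimal NGINX sketch of this idea, using hypothetical internal hostnames for the three example services mentioned above:

```nginx
# Hypothetical API gateway: each path prefix maps to an independent
# microservice behind its own API. The hostnames are illustrative
# placeholders, not part of any real deployment.
http {
    server {
        listen 80;

        location /auth/ {
            proxy_pass http://auth.internal:8080/;
        }
        location /notifications/ {
            proxy_pass http://notifications.internal:8080/;
        }
        location /payments/ {
            proxy_pass http://payments.internal:8080/;
        }
    }
}
```

Because callers only ever see the gateway paths, each team can change its service’s internals (or even its implementation language) without affecting the others.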

Developing with Microservices

A microservices architecture is frequently adopted to solve the problems that arise with other architectural models as projects grow. Traditional, monolithic architectures might logically separate functions into component modules, but all modules are kept in a single codebase and there are usually complex interdependencies between them, which makes it difficult to change the code for one module without breaking others. Even if developers concentrate on just a few modules, they have to spend time and energy tracking changes across the entire codebase, because changes in other modules might affect theirs. Hiring new developers to fuel growth yields quickly diminishing returns, because it takes new hires a long time to master the huge codebase before they can safely add features or fix bugs.

Componentizing software functions into microservices makes it easier to scale up a project. With individual codebases for separate systems, it becomes easier for developers to reason about the effects of changing code. With individual deployments and infrastructure for different services, it becomes easier for DevOps teams to add more computing resources only where they’re needed.

Building microservices‑based applications requires understanding how the components of your application work together, and designing interfaces to those components that allow you to tease them apart. As Adrian Cockcroft, formerly the lead Cloud Architect at Netflix, explains, the goal in a microservices architecture is for the component microservices in an application to interact with one another as loosely and independently as your systems interact with services from an external provider.

Managing Microservices

Microservices are frequently combined with some form of containerization, and most of the management tools for services are centered on managing and scaling containers. Common management tools like Kubernetes and Docker Swarm are designed with microservices in mind. Microservices are often deployed on a platform for managing containers or virtual machines, such as Amazon Elastic Container Service (ECS), but more commonly on managed Kubernetes cloud platforms such as Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE), or on Kubernetes PaaS platforms such as Red Hat OpenShift Container Platform and Rancher.

Deploying microservices is often one of the most challenging aspects of switching over from a monolithic architecture, because it requires taking into account API versions and integration testing across multiple domains, which are nonissues for a monolith. As such, automated monitoring is critical to microservices deployment to ensure that each component is working smoothly. Partial failures in microservices applications are much more common than in monoliths, and the system needs to be designed with fault management in mind.
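One concrete form of the fault management described above is active health checking at the load balancer, so that failing service instances are taken out of rotation automatically. A hedged sketch using the NGINX Plus `health_check` directive; the upstream addresses are illustrative placeholders:

```nginx
# Sketch of fault management for one microservice (NGINX Plus only).
# Instances that fail the periodic health check are removed from
# rotation until they pass again, containing partial failures.
upstream payments_service {
    zone payments_service 64k;   # shared memory zone required for health checks
    server 10.0.0.11:8080;       # illustrative instance addresses
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://payments_service;
        # probe every 5s; 3 failures mark an instance down, 2 passes restore it
        health_check interval=5 fails=3 passes=2;
    }
}
```

The same pattern would be repeated per service, so a fault in one component degrades only that component rather than the whole application.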

How Can NGINX Help?

NGINX Plus and NGINX Open Source are the best-in-class load‑balancing solutions used by high‑traffic websites such as Dropbox, Netflix, and Zynga. More than 400 million websites worldwide rely on NGINX Plus and NGINX Open Source to deliver their content quickly, reliably, and securely.

As a software‑based application delivery controller (ADC), NGINX Plus is designed to provide the speed, configurability, and reliability that’s essential to modern microservices architectures:

  • NGINX Plus provides dynamic reconfiguration for simple service management and integrates easily with common microservices management tools such as Kubernetes. Leading companies like Netflix use NGINX at the core of their microservices deployments.
  • Operating at scale demands detailed monitoring. NGINX Plus offers robust live activity monitoring so you can see how services are responding to load and focus your resources where they’re needed.
  • As a software‑based load balancer, NGINX Plus is perfect for service discovery in multi‑service deployments where configurations are constantly changing.
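The three capabilities above can be illustrated in a single hedged configuration sketch (NGINX Plus only; the resolver address and service hostname are illustrative assumptions):

```nginx
# Sketch of dynamic reconfiguration, monitoring, and service discovery.
resolver 10.0.0.2 valid=10s;          # re-resolve service DNS records every 10s

upstream backend {
    zone backend 64k;
    # 'resolve' re-queries DNS so new service instances are picked up
    # without a configuration reload (service discovery)
    server services.example.internal:8080 resolve;
}

server {
    listen 8080;

    location /api {
        api write=on;                 # NGINX Plus API: live activity monitoring
    }                                 # and dynamic reconfiguration

    location / {
        proxy_pass http://backend;
        status_zone backend_traffic;  # collect per-location metrics
    }
}
```

In a Kubernetes or Docker Swarm environment, the DNS name would typically be the service name published by the orchestrator’s internal DNS.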

NGINX Controller provides application delivery management for NGINX microservice solutions.

When containerization is part of your microservices journey, NGINX Ingress Controller and NGINX Service Mesh provide management solutions for your containerized microservices, as well as offering solutions to bridge between heterogeneous microservice environments.

Contact us today to learn how we can help you deliver modern apps.
