
NGINX Open Source: Reflecting Back and Looking Ahead

Today we announced the availability of NGINX Plus Release 6 (R6). With this milestone event in our commercial business, I thought it would be a good time to reflect back on what we have accomplished as a community and to address where we are taking NGINX from here.

Igor Sysoev, NGINX, Inc. Co‑founder & CTO

The last 12 months have been a hectic time at NGINX, Inc. We have had exceptional growth as a project, as a development team, and as a company. The number of sites using our software has grown enormously, and NGINX is now the most commonly used web frontend for the 10,000 busiest sites on the Internet. We now count some of the most innovative developers in the world as users and as contributors – from disruptive new businesses like Airbnb and Uber to Netflix. NGINX Open Source has benefited from well over 100 new features and updates, and NGINX Plus has matured into a highly capable application delivery platform.

Looking Ahead

What does the future hold for NGINX and the applications we power? It’s often said that history repeats itself, but not in a circular fashion. Each time round, things get a little better. We are currently seeing a resurgence of the concept of service oriented architectures, but in a more modern, loosely coupled, more easily developed way that’s referred to as microservices. In the fluid, turbulent, containerized environments in which many of us are building applications, NGINX is a stable and reliable foundation, both for hosting the services and for routing the traffic between them. This is shaping much of our thinking as we consider future use cases for NGINX.

The next 12 months herald some major new features for NGINX Open Source. The stories about NGINX and JavaScript will be realized – I have a working prototype of a JavaScript VM that is highly optimized for NGINX’s unique requirements and we’ve begun the task of embedding it within NGINX.

Our community of module developers is vital to the success of NGINX in the open source world. We know it’s not as easy as it could be to develop for NGINX, and to address that situation, we’re beginning the implementation of a pluggable module API in the next couple of months. Our goal is to make it simpler for our developer community to create and distribute modules for NGINX, giving users more choice and flexibility to extend the NGINX Open Source core. We’re also establishing a developer relations team to help support the community in this transition.

You may already have read our plan to support HTTP/2 in NGINX. We appreciate how important it is to our users that we continue to support the innovations that others are making in our space, and our HTTP/2 support will of course build on the successful SPDY implementation in use at a number of sites.

The Role of NGINX Plus

People sometimes ask about NGINX Plus and how it relates to NGINX Open Source. NGINX Plus was born from the desire to create a commercial offering that would extend the software’s capabilities and help fund the continued development of NGINX.

The two products have a lot of overlap; in fact, it’s possible to use various open source third‑party modules to implement much of the additional functionality in NGINX Plus. We’re completely comfortable with that. If users have the expertise, patience, and time required to maintain their own custom build of NGINX for comprehensive application delivery, then NGINX Plus is clearly not for them. However, if maintaining and supporting an application delivery platform is not your core competency or if you would prefer your technical resources directed at applications that more directly further your business, we’re here with NGINX Plus and a range of services to do that for you.

Thank you to everyone who purchased subscriptions to NGINX Plus. Not only have you received an application delivery platform that blows every commercial alternative out of the water on price/performance, you’ve also helped to support the growing engineering team in Moscow who maintain the very high standards of both NGINX and NGINX Plus.

We greatly appreciate your support, whether as a commercial user, a third‑party developer, a supporter of NGINX, or just an end user of NGINX. Together, we’re making the web a better place for developers, admins, and end users. We hope that you continue to use NGINX and NGINX Plus, and in return, we remain committed to providing the most powerful, lightweight, and high‑performance software to make your applications as great as they can be.

To try NGINX Plus, start your free 30-day trial today or contact us for a demo.

Announcing NGINX Plus R6 with Enhanced Load Balancing, High Availability, and Monitoring Features

We’re really pleased to announce the availability of NGINX Plus Release 6 (R6). This latest release of our application delivery platform gives NGINX Plus users even more to love. The key new features are described in the sections below.


Our customers have expressed overwhelming interest in using NGINX Plus to replace legacy hardware and to further support the adoption of public and private clouds. With the release of R6, NGINX Plus exceeds the capability of traditional hardware load balancers and ADCs, while providing unlimited throughput at a lower cost than our competitors. We believe it is now the ideal choice for application delivery and load balancing, whether for modern web applications or for enterprise applications like relational databases and mail servers.

New “Least Time” Load‑Balancing Algorithm

The new Least Time load‑balancing algorithm monitors both the number of concurrent connections and the average response time from each node in the load‑balanced pool. It uses this information to select the most appropriate node for each request, with the goal of selecting faster and less‑loaded nodes in preference to slower and more heavily loaded ones.

Least Time outperforms other load‑balancing methods when nodes differ significantly in latency. One common use case is load balancing across nodes located in two separate data centers; local nodes tend to have very little latency compared to nodes in a remote data center. Least Time prefers the low‑latency nodes, but NGINX Plus’ health checks ensure failover to the slower nodes if the faster ones fail or go offline.

Least Time can base its load‑balancing decisions on either the time to receive the response headers from the upstream or the time to receive the entire response. Two new counters in the extended status statistics, header_time and response_time, report the rolling‑average measurements on which the decisions are based.
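
In practice, enabling Least Time is a one‑line change to your upstream group. Here’s a minimal sketch (the server addresses are placeholders):

upstream backend {
    least_time header;    # base decisions on time to receive response headers;
                          # use 'least_time last_byte' to measure the entire response
    server 10.0.0.10;
    server 10.0.0.11;
}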

Full‑Featured TCP Load Balancing

The TCP load balancing feature introduced in NGINX Plus R5 has been significantly extended to include TCP health checks, dynamic configuration of upstream server groups, full access logs, and SSL/TLS termination and encryption. Many new extended status counters have been added for TCP load balancing, providing the same level of reporting and visibility that you already enjoy for HTTP load balancing.

TCP load balancing has already been proven in a number of use cases, including load balancing and high availability of MySQL and load balancing and high availability of Microsoft Exchange.

High‑traffic TCP‑based services are not the only ones to benefit from TCP load balancing. Even low‑traffic services can benefit from high availability (using health checks and dynamic reconfiguration), improved security (using SSL/TLS wrapping) and improved visibility (using extended status counters and access logging).
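
As a sketch of how these pieces fit together (the addresses and port are hypothetical), a stream configuration that load balances MySQL traffic with active health checks might look like this:

stream {
    upstream mysql_backends {
        zone mysql_backends 64k;    # shared memory zone for state and statistics
        server 10.0.0.21:3306;
        server 10.0.0.22:3306;
    }

    server {
        listen 3306;
        proxy_pass mysql_backends;
        health_check;               # actively verify that each server accepts connections
    }
}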


High Availability

NGINX Plus supports high‑availability clusters using a solution based on the Linux keepalived utility. You can easily create high‑availability pairs of NGINX Plus instances, using the Virtual Router Redundancy Protocol (VRRP) to assign traffic IP addresses to the primary NGINX Plus instance and transfer them automatically to the backup instance if the primary fails.

To enable and configure this feature, install the optional nginx‑ha‑keepalived package. After initial configuration, you can extend the configuration to implement more complex scenarios, including larger clusters of NGINX Plus instances and use of multiple virtual IP addresses.
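
On a Debian or Ubuntu system, for example, initial setup is roughly as follows; the nginx-ha-setup script included in the package prompts for the virtual IP address and each node’s role:

apt-get install nginx-ha-keepalived
nginx-ha-setup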

For more details about the high‑availability package and its installation process, see the NGINX Plus Admin Guide and High Availability in NGINX Plus R6 on our blog.

Updated Dashboard for Live Activity Monitoring

NGINX Plus R6 includes a new, richer status dashboard that charts the health and activity of your NGINX Plus instance using a wealth of live activity monitoring information:

  • Key software information and high‑level alerts relating to the performance and operation of your load‑balanced cluster
  • Real‑time and historical (average) performance data – requests and bandwidth – based on server zones and applications that you define, for the HTTP and TCP services that you configure
  • Detailed performance and health information for each upstream load‑balanced group
  • Instrumentation and diagnostics on the operation of each content cache

As in earlier releases, the live activity monitoring data is provided in JSON format via a RESTful interface so that you can incorporate NGINX statistics directly into your own dashboards and other monitoring tools.

For a live demonstration, check out demo.nginx.com. For a more detailed exploration of the dashboard, see Keeping Tabs on System Health with NGINX Plus Live Activity Monitoring on our blog.

Support for Unbuffered Upload

You can now configure NGINX Plus for unbuffered upload, meaning that it streams large HTTP requests (such as file uploads) to the server as they arrive, rather than buffering and forwarding them only after the entire request is received.

This modification improves the responsiveness of web applications that handle large file uploads, because the applications can react to data as it is received, enabling them, for example, to update progress bars in real time. It also reduces disk I/O and can improve the performance of uploads in some situations. By default, NGINX buffers uploaded data to avoid tying up resources in worker‑based backends while the data arrives, but buffering is less necessary for event‑driven backends like Node.js.
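
Unbuffered upload is controlled with a single directive. A minimal sketch for a hypothetical upload endpoint:

location /upload {
    proxy_request_buffering off;    # stream the request body to the server as it arrives
    proxy_pass http://backend;
}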

SSL/TLS Enhancements

NGINX Plus R6 can provide a client certificate to authenticate itself when communicating with an upstream HTTPS or uwsgi server. This improves security, particularly when communicating with secure services over an unprotected network.
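
As a sketch (the certificate paths and upstream name are placeholders), the client certificate is configured with two directives alongside proxy_pass:

location / {
    proxy_ssl_certificate     /etc/nginx/ssl/client.crt;
    proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
    proxy_pass https://secure_backend;
}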

NGINX Plus R6 supports SSL/TLS client authentication for IMAP, POP3, and SMTP traffic.

Caching Enhancements

The proxy_cache directive now supports variables. This simple change means you can define multiple disk‑based caches and select a cache based on request data.

This feature is most useful when you need to create a very large content cache and use multiple disks to cache content. By creating one cache per disk, you can ensure that temporary files are written to the same disk as their final location and thus eliminate disk‑to‑disk copies.
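
One common pattern (the disk paths and zone names here are illustrative) uses the split_clients module to distribute cached content deterministically across two disks:

proxy_cache_path /mnt/disk1/cache keys_zone=disk1_cache:100m;
proxy_cache_path /mnt/disk2/cache keys_zone=disk2_cache:100m;

split_clients $request_uri $cache_zone {
    50%  disk1_cache;
    *    disk2_cache;
}

server {
    location / {
        proxy_cache $cache_zone;    # the cache is selected per request via the variable
        proxy_pass  http://backend;
    }
}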

Upgrade or Try NGINX Plus

If you’re running NGINX Plus, we strongly encourage you to update to Release 6 as soon as possible. You’ll pick up a number of fixes and improvements, and it will help us to help you if you need to raise a support ticket. Installation and upgrade instructions can be found at the customer portal.

If you’ve not tried NGINX Plus, we encourage you to try it out for web acceleration, load balancing, and application delivery, or as a fully supported web server with an API for enhanced monitoring and management. You can get started for free today with a 30‑day trial and see for yourself how NGINX Plus can help you scale out and deliver your applications.


Save the Date: nginx.conf 2015, September 22-24, San Francisco

That time of year is fast approaching. We’re excited to announce the dates and location of nginx.conf 2015. Save the date – we hope to see you there!

We hosted our very first conference in 2014 to a sold-out crowd. Across a variety of sessions and activities, developers discovered trends and best practices for application performance, networked with NGINX partners and sponsors, and participated in keynotes, breakout sessions, and hands-on training classes.


Why should you go THIS year?

The rise of the web and the explosion of devices that connect us have fundamentally changed our daily lives. Today’s businesses face disruption and opportunity on a scale we have never seen before. Developers and technology professionals are building and delivering some of the most innovative applications and sites in the world.

Attend nginx.conf 2015 to:

  • Learn – Walk away with the tools to hack, build, and create from hands-on training, deep-dives, breakout sessions, and keynotes
  • Explore – Discover how to deliver your sites and apps with performance, security, and scale
  • Connect – Meet face-to-face and network with fellow NGINX users and experts

Stay tuned for more information on the lineup of speakers, evening events, and call for proposals.

For more details, join our mailing list to stay up to date on nginx.conf 2015 happenings.

Interested in becoming a sponsor? Email us and ask us how.


12 Reasons Why NGINX is the Standard for Containerized Applications and Deploying Microservices

As developers and technology professionals, we are all feeling the pressure of having to innovate, adapt, and build extraordinary new products and experiences faster than our competition. Continuous development and integration, the rapid deployment and elasticity of containers and cloud services, and breaking our applications into interconnected microservices are emerging as the new normal.

With the rise of this new approach to application development and deployment, a whole new suite of tools is emerging. Today’s developer tools are overwhelmingly open source, cloud‑friendly, and place a premium on adaptability, performance, and scalability.

If you ask anyone building microservices-based applications or working with containers today which software they use the most, NGINX is usually one of the first names that they mention. For example, NGINX Open Source is the third most popular piece of software on Docker Hub. As of the writing of this piece, it has been downloaded more than three million times, compared to only a few thousand downloads for the next most popular web servers, load balancers, and caching tools.


We’re incredibly proud of the broad adoption of NGINX and NGINX Plus and of our role as the application delivery platform for some of the world’s most innovative applications. But why are we so often paired with microservices and containerized applications?

There are a number of reasons why people select NGINX and NGINX Plus to proxy traffic to application instances on a distributed, containerized platform. They act as a ‘shock absorber’, filtering and smoothing out the flow of requests into an application. To reduce the load on applications, they can cache content and directly serve static content, as well as offload SSL/TLS processing and gzip compression.

NGINX and NGINX Plus perform HTTP routing, directing each request to the appropriate server as defined by policies that refer to values in the Host header and URI, and follow up with load balancing, health checks, and session persistence. Application developers gain a huge degree of control over what traffic is admitted to their application, how rate limits are applied, and how requests are secured. NGINX and NGINX Plus also provide a layer of indirection between the client and the application, making them a vital point of control when you manage these applications. You can add and remove nodes, move traffic from one version of an application to another, and even perform A/B testing to compare two implementations.

But why should you care?

Below are 12 production‑proven reasons why we believe you should use NGINX and NGINX Plus to deliver your applications.

Reason #1 – A Single Entry Point

One of the advantages of a containerized platform is its fluidity: you can deploy and destroy containers as necessary. But at the same time you need to provide your end users with a single, stable entry point to access your services.

NGINX and NGINX Plus are perfect for that situation. For example, you can deploy a single cluster of NGINX Plus servers in front of your applications to load balance and route traffic, with a stable public IP address published in DNS. Clients address their requests to this reliable entry point, and NGINX Plus forwards them to the most appropriate container instance. If you add or remove containers, or otherwise change the internal addressing, you only need to update NGINX Plus with the new internal IP addresses (or publish them internally through DNS).

The single entry point provided by NGINX Plus bridges the reliable, stable frontend and the fluid, turbulent internal platform.
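
As a minimal sketch of the pattern (the container addresses are hypothetical), clients connect to the stable frontend on port 80 while the upstream group tracks the containers behind it:

upstream app_containers {
    server 172.17.0.2:8080;    # container instances; update this list (or use DNS)
    server 172.17.0.3:8080;    # as containers are added and removed
}

server {
    listen 80;                 # the stable, published entry point
    location / {
        proxy_pass http://app_containers;
    }
}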

Reason #2 – Serving Static and Other Non‑Application Content

Not all of your microservices apps have APIs! It’s very likely that you will need to publish ‘static content’. For mobile apps, that’s the HTML5 framework that bootstraps the bare application on the device; in the more traditional web environment, it’s images, CSS, static web pages, and perhaps some video content.

An NGINX Plus or NGINX instance acts as an HTTP router, inspecting requests and deciding how each one should be satisfied. You can publish and deliver this content from the frontend NGINX and NGINX Plus servers.
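
For example (the paths and upstream name are hypothetical), a frontend server can deliver static assets directly from disk and route everything else to the application:

server {
    listen 80;

    location /assets/ {
        root    /var/www/myapp;    # images, CSS, and HTML5 framework files served from disk
        expires 24h;
    }

    location / {
        proxy_pass http://app_backend;
    }
}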

Reason #3 – Caching

NGINX Open Source provides a highly capable cache for both static and dynamic content, with NGINX Plus adding even more features and capability.

There are many situations where caching dynamic content generated by your applications improves performance, such as content that is not personalized, or is updated on a predictable schedule – think news headlines, timetables, even lottery results. It’s very computationally expensive to route each request for this type of data to the microservice that generates it. A much more effective alternative is microcaching – caching a piece of data for a short period of time.

For example, if a resource is requested 10 times per second at its peak, and you cache it for just one second, you reduce the load on the backend infrastructure by 90%. The net result is that NGINX and NGINX Plus insulate your applications from spikes of traffic so that they can run smoothly and predictably, and you don’t need to scale resources on a second‑by‑second basis.
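
A microcaching sketch, assuming a hypothetical backend named newsapp and a one‑second cache lifetime:

proxy_cache_path /var/cache/nginx keys_zone=micro:10m;

server {
    location /news/ {
        proxy_cache       micro;
        proxy_cache_valid 200 1s;    # cache successful responses for just one second
        proxy_pass        http://newsapp;
    }
}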

Reason #4 – SSL/TLS and HTTP/2 Termination

NGINX and NGINX Plus offer a feature‑rich, high‑performance software stack for terminating SSL/TLS and HTTP/2 traffic (Editor – when this blog was originally published, NGINX and NGINX Plus supported HTTP/2’s predecessor, SPDY). By offloading SSL/TLS and HTTP/2, NGINX and NGINX Plus provide three benefits to the microservice instances:

  • Reduced CPU utilization.
  • Richer SSL/TLS support. NGINX and NGINX Plus support HTTP/2, session resumption, OCSP stapling, and multiple ciphers – a more comprehensive range of functionality than many application platforms.
  • Improved security and easier management of SSL/TLS private keys and certificates, because they are stored only on the host where NGINX or NGINX Plus is running, instead of at every microservice instance.

Offloading SSL/TLS processing for client connections to NGINX and NGINX Plus does not prevent you from using SSL/TLS on your internal network. They maintain persistent SSL/TLS and plain‑text keepalive connections to the internal network, and multiplex requests from different clients down the same connection. This greatly reduces connection churn and the computational load on your servers.
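
Here’s a sketch that combines SSL/TLS termination with keepalive connections to an internal service (the certificate paths and addresses are placeholders):

upstream microservice_a {
    server 10.0.0.30:8080;
    keepalive 32;                      # maintain a pool of idle connections to the upstream
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/www.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/www.example.com.key;

    location / {
        proxy_http_version 1.1;        # required for upstream keepalive connections
        proxy_set_header   Connection "";
        proxy_pass         http://microservice_a;
    }
}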

Reason #5 – Multiple Backend Apps

Recall that an NGINX or NGINX Plus instance can act as an HTTP “router”. The configuration language is designed to express traffic rules based on the Host header and the URL, making it easy and natural to manage traffic for multiple applications through a single NGINX or NGINX Plus cluster. This is precisely what cloud providers like CloudFlare and MaxCDN do – they use NGINX and NGINX Plus to proxy traffic for hundreds of thousands of individual HTTP endpoints, routing each request to the appropriate origin server.

You can load application delivery configuration into NGINX and NGINX Plus and update rules without any downtime or interruption in service, making your NGINX or NGINX Plus instance a highly available switch for large, complex sets of applications.
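
A sketch of Host‑based routing for two hypothetical applications behind a single instance (each upstream group is defined elsewhere in the configuration):

server {
    listen 80;
    server_name app-one.example.com;
    location / { proxy_pass http://app_one_backend; }
}

server {
    listen 80;
    server_name app-two.example.com;
    location / { proxy_pass http://app_two_backend; }
}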

Reason #6 – A/B Testing

The A/B testing capabilities built into NGINX and NGINX Plus can help with the rollout of microservice applications.

NGINX and NGINX Plus can split traffic between two or more destinations based on a range of criteria. When deploying a new implementation of a microservice, you can split incoming traffic so that (for example) only 1% of your users are routed to it. Monitor the traffic, measure the KPIs (response time, error rate, service quality), and compare how the new and old versions handle real production traffic.
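
A sketch using the split_clients module to send 1% of clients (keyed on IP address) to the new version; the upstream group names are hypothetical and must be defined elsewhere:

split_clients "${remote_addr}" $app_upstream {
    1%  version_b;    # new implementation under test
    *   version_a;    # current production version
}

server {
    location / {
        proxy_pass http://$app_upstream;    # the variable's value selects the upstream group
    }
}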

Reason #7 – Consolidated Logging

NGINX and NGINX Plus use the standard HTTP access log formats. So instead of logging traffic for each microservice instance separately and then merging the log files (which requires synchronizing timestamps with millisecond‑level precision), you can log web traffic on the NGINX front end.

This significantly reduces the complexity of creating and maintaining access logs, and is a vital instrumentation point when you are debugging your application.
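
For example, you might define a log format that captures upstream timings alongside the standard fields (the format name is arbitrary):

log_format apptraffic '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent $upstream_addr $upstream_response_time';

access_log /var/log/nginx/access.log apptraffic;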

Reason #8 – Scalability and Fault Tolerance

You can seamlessly scale your backend infrastructure, adding and removing microservices instances without your end users ever experiencing a failure.

The load balancing, health checks, session persistence, and draining features in NGINX Plus are key to building a reliable yet flexible application infrastructure. If you need more capacity, you can deploy more microservice instances and simply inform NGINX Plus that you’ve added new instances to the load‑balanced pool. NGINX Plus detects when a microservice instance fails (whether planned or unplanned), retries failed requests, and doesn’t route traffic to the failed server until it recovers. If you’re using NGINX Plus to manage end‑user sessions for a stateful HTTP application, the ‘connection draining’ feature lets you smoothly remove a server from service without disrupting client sessions.
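
For example, here’s a minimal active health check on an HTTP upstream group in NGINX Plus (the addresses are hypothetical; by default the check requests /):

upstream backend {
    zone backend 64k;    # shared memory zone, required for health checks
    server 10.0.0.40:8080;
    server 10.0.0.41:8080;
}

server {
    location / {
        proxy_pass http://backend;
        health_check;    # periodically probe each server and mark failures
    }
}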

Reason #9 – GZIP Compression

GZIP compression is a great way to reduce bandwidth and, on high‑latency networks, improve response time. Structured data such as JSON responses compresses particularly well, and NGINX and NGINX Plus make it easy to enable.

By using NGINX Plus and NGINX to compress server responses, you simplify the configuration and operation of the internal services, allowing them to operate efficiently and making internal traffic easier to debug and monitor.
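
Enabling compression is simple; a minimal sketch that adds JSON to the compressed content types:

gzip on;
gzip_types application/json;    # text/html is always compressed when gzip is on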

Reason #10 – Zero Downtime

NGINX and NGINX Plus don’t just deliver high availability to a fluid, frequently upgraded microservices‑based application; they are themselves fully available during major software upgrades.

You can perform on‑the‑fly binary upgrades of software, something that is simply not possible with legacy application delivery controllers. You can update software versions seamlessly, with no connections dropped or refused during the upgrade process.
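
Under the hood, the on‑the‑fly binary upgrade is driven by signals to the master process. A rough sketch, assuming a conventional installation with the PID file at /var/run/nginx.pid:

kill -USR2 $(cat /var/run/nginx.pid)           # start a new master process running the new binary
kill -WINCH $(cat /var/run/nginx.pid.oldbin)   # gracefully stop the old worker processes
kill -QUIT $(cat /var/run/nginx.pid.oldbin)    # shut down the old master once traffic has moved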

Reason #11 – Your Application Does Not Need Root Privileges to Bind to Port 80

On a Linux host, services need ‘root’ or ‘superuser’ privileges to bind to ports 80 and 443, the ports commonly used for HTTP and HTTPS traffic. However, running an application with root privileges also grants it powers that can be exploited if the application has a bug or vulnerability.

It’s best practice to deploy applications with minimal privileges, for example by having them bind to high port numbers instead of privileged ports. NGINX and NGINX Plus can receive public traffic on port 80 or 443 and forward it to internal servers that use other port numbers.
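
A minimal sketch: the NGINX master process binds to the privileged port, while the application runs without root on a high port.

server {
    listen 80;                             # NGINX binds to the privileged port
    location / {
        proxy_pass http://127.0.0.1:8080;  # the application listens on an unprivileged port
    }
}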

Reason #12 – Mitigate Security Flaws and DoS Attacks

Finally, NGINX and NGINX Plus are very robust, proven engines that handle huge volumes of HTTP traffic. They protect applications from spikes of traffic and malformed HTTP requests, cache common responses, and deliver requests to the application smoothly and predictably. You can think of them as a shock absorber for your application.

NGINX and NGINX Plus can further control traffic, applying rate limits (requests per second and requests per minute) based on a range of user‑defined criteria. This allows administrators to protect vulnerable APIs and URIs from being overloaded with floods of requests. They can also apply concurrency limits, queuing requests so that individual servers are not overloaded.
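
A sketch of per‑client rate limiting for a hypothetical API endpoint:

limit_req_zone $binary_remote_addr zone=per_client:10m rate=10r/s;

server {
    location /api/ {
        limit_req  zone=per_client burst=20;    # queue short bursts, reject sustained floods
        proxy_pass http://api_backend;
    }
}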

But Don’t Take Our Word For It . . .

As the authors of NGINX and NGINX Plus, we believe we have created the ideal platform for delivering containerized applications and deploying microservices – but don’t take our word for it! Try NGINX Plus for free today, or learn more about building and deploying containerized applications in this technical webinar from experts at Docker and NGINX.

Live Activity Monitoring of NGINX Plus in 3 Simple Steps


[Editor – This post has been updated to use the NGINX Plus API, which replaces and deprecates the separate extended Status module discussed in the original version of the post.

Introduced in NGINX Plus R13, the NGINX Plus API supports other features in addition to live activity monitoring, including dynamic configuration of upstream server groups (replacing the separate Upstream Conf module originally used for that purpose) and key‑value stores. The NGINX Plus dashboard was updated to use the API in NGINX Plus R14.

Because configuring the API enables several features, we have changed the name of the sample configuration file to nginx-plus-api.conf. We are also now distributing sample configuration files as GitHub gists instead of downloads from nginx.com.]

One of the most popular features in NGINX Plus is live activity monitoring, also known as extended status reporting. The live activity monitor reports real‑time statistics for your NGINX Plus installation, which is essential for troubleshooting, error reporting, monitoring new installations and updates to a production environment, and cache management.

We often get questions from DevOps engineers – experienced and new to NGINX Plus alike – about the best way to configure live activity monitoring. In this post, we’ll describe a sample configuration file that will have you viewing real‑time statistics on the NGINX Plus dashboard in just a few minutes.

The sample configuration file for the NGINX Plus API makes it even easier to set up many of NGINX Plus’ advanced features in your environment.

Note: These instructions assume that you use the conventional NGINX Plus configuration scheme (in which configuration files are stored in the /etc/nginx/conf.d directory), which is set up automatically when you install an NGINX Plus package. If you use a different scheme, adjust the commands accordingly.

Installing the Sample Configuration File

The commands do not include prompts or other extraneous characters, so you can cut and paste them directly into your terminal window.

  1. Download the sample configuration file.

    cd /etc/nginx/conf.d/
    wget https://gist.githubusercontent.com/nginx-gists/a51341a11ff1cf4e94ac359b67f1c4ae/raw/bf9b68cca20c87f303004913a6a9e9032f24d143/nginx-plus-api.conf

    You can also navigate in a browser to the Gist repo and either click the Download ZIP button, or click the Raw button in the title bar for the nginx-plus-api.conf file and copy the file contents.

  2. Customize your configuration files as instructed in Customizing the Configuration.

  3. Test the configuration file for syntactic validity and reload NGINX Plus.

    nginx -t && nginx -s reload

The NGINX Plus dashboard is available immediately at http://nginx-plus-server-address:8080/ (or the alternate port number you configure in Changing the Port for the Dashboard).

Customizing the Configuration

To get the most out of live activity monitoring, make the changes described in this section to both nginx-plus-api.conf and your existing configuration files. Equivalent instructions are included as comments in nginx-plus-api.conf.

Monitoring Servers and Upstream Server Groups

For statistics about virtual servers and upstream groups to appear on the dashboard, you must enable a shared memory zone in the configuration block for each server and group. The shared memory is used to store configuration and run‑time state information across all of the NGINX Plus worker processes.

If you don’t configure shared memory, the built‑in NGINX Plus dashboard reports only basic information about the number of connections and requests.

In existing configuration files where you define virtual servers, add the status_zone directive to the server configuration block for each server you want to appear on the dashboard. (You can specify the same zone name in multiple server blocks, in which case the statistics for those servers are aggregated together in the dashboard.)

server {
    listen 80;
    status_zone my_frontend;
    location / {
        proxy_pass http://backend;
    }
}

Similarly, in existing configuration files where you define upstream groups, add the zone directive to the upstream configuration block for each group you want to appear on the dashboard. The following example allocates 64 KB of shared memory to store the statistics of the servers in the my_backend upstream group. The zone name for each upstream group must be unique.

upstream backend {
    zone my_backend 64k;
    server 10.2.3.5;
    server 10.2.3.6;
}

Restricting Access to the NGINX Plus API

The default settings in the sample configuration file allow anyone on any network to access the dashboard. We strongly recommend that you configure at least one of the following security measures in nginx-plus-api.conf:

  1. Firewall. Configure your firewall to disallow outside access to the port for the dashboard (8080 on line 63 in the sample configuration file).

  2. Client certificates, which are part of a complete configuration of SSL or TLS. For more information, see the NGINX Plus Admin Guide.

  3. HTTP basic authentication. In the sample configuration file, uncomment the auth_basic and auth_basic_user_file directives (lines 66–67). Add the appropriate user entries to the /etc/nginx/users file (for example, by using an htpasswd generator). You can also reuse an existing htpasswd file (from an Apache HTTP Server deployment, for example).

    auth_basic "NGINX Plus API";
    auth_basic_user_file /etc/nginx/users;
  4. IP address‑based access control lists (ACLs). In the sample configuration file, uncomment the allow and deny directives (lines 70–71), and substitute the address of your administrative network for 10.0.0.0/8. Only users on the specified network can access the NGINX Plus API and dashboard.

    allow 10.0.0.0/8;
    deny all;

You can further restrict write operations, to distinguish between users with read permission and those who can change configuration. Uncomment the sample limit_except block (lines 79–82) in the location block for the API. As detailed in the reference documentation, several authentication schemes are supported in addition to the HTTP Basic authentication used in the example.

limit_except GET {
    auth_basic "NGINX Plus API";
    auth_basic_user_file /etc/nginx/admins;
}

Changing the Port for the Dashboard

To set the port number for the dashboard to a value other than 8080, edit the following listen directive (line 63) in the sample configuration file.

listen 8080;

Limiting the Monitored IP Addresses

If your NGINX Plus server is multi‑homed (has several IP addresses) and you want the NGINX Plus API and dashboard to be exposed on only one of them, edit the listen directive (line 63) so that it specifies its IP address and port, as in the following example.

listen 10.2.3.4:8080;

Demo NGINX Plus Dashboard

To preview all the features of the NGINX Plus dashboard, check out the live demo at demo.nginx.com.


To try NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.

Meet the NGINX Team at Forrester’s Forum for Technology Leaders, April 27-28, 2015

NGINX is excited to sponsor Forrester’s Forum for Technology Leaders! Join us April 27–28 in Orlando, FL to hear from some of the industry’s top experts about how to keep pace in the post-digital age by focusing on the needs of the customer.

How Can NGINX Help You Succeed in the Digital Era and Satisfy Your Customers?

Powering 1 in 3 of the world’s busiest sites, NGINX Open Source is the secret heart of the modern web. We help you deliver your sites and apps with performance, reliability, security, and scale. NGINX Plus adds enhanced features to provide a complete application delivery platform that combines web serving, load balancing, content caching, and media streaming in one package that’s easy to deploy and manage.

With NGINX Plus, your web pages load faster and your customers spend less time waiting, which increases customer satisfaction, conversions, and revenue.

Have Questions for the NGINX Team? Drop by Booth 400 and Meet Us!

  • Learn why NGINX is the secret heart of the modern web.
  • See how NGINX makes it easy to improve site and app performance.
  • Chat with our tech experts about where NGINX Open Source and NGINX Plus fit best in your app architecture.
  • Or just come by to say hello! We’d love to meet you.

See you there!

Adopting Microservices: Getting Started with Implementation

We’ve been talking a lot about why organizations should adopt microservices and use a four-tier architecture when building applications and websites. Microservices enable architects, developers, and engineers to keep pace with the demand for new app functionality and better performance across distributed experiences and devices. They provide technology that is independent, flexible, resilient, easy to deploy, organizationally aligned, and easily composed.

Readings in Design, Development, and Adoption of Microservices

Before we begin talking about implementation of a microservices architecture, I’d like to share some reference books that I’ve found to be helpful. Although these books aren’t specifically about “microservices,” they explain the design and development processes that are core components of a microservices architecture and approach to modern application development.       

  • REST in Practice: Hypermedia and Systems Architecture by Jim Webber, Savas Parastatidis, and Ian Robinson

    This book explains and demonstrates how to use a REST API system to create elegant and simple distributed systems. Specifically, it provides examples, techniques, and best practices to solve infrastructure challenges as companies expand and grow rapidly.

  • Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions by Gregor Hohpe and Bobby Woolf

    This book explores best practices for planning and designing systems to deploy and continuously integrate applications. The authors use a technical vocabulary and visual notation framework to describe large-scale integration solutions across many technologies including JMS, MSMQ, TIBCO ActiveEnterprise, Microsoft BizTalk, SOAP, and XSL.

  • The Modern Firm: Organizational Design for Performance and Growth by John Roberts

    This book differs from the others in focusing on the business and team structures that are best suited for a microservices-oriented application development process. It explores routines, processes, and corporate cultures that contribute to performance and growth.

  • Release It! Design and Deploy Production-Ready Software by Michael T. Nygard

    One of the largest challenges companies have is that they wait too long to deploy new features or products. This book explains the concept of releasing new code and design once it’s production-ready through the use of modern best practices like microservices and continuous integration.

Microservices Processes and Tools

Your core microservices are only part of a complete application development and delivery architecture. You also need to choose tools for inter-service communication, traffic monitoring, failure detection, and other functions. Here are some types of software and specific tools that can help you transition to a microservices implementation.

Open Source Software

If you’re building microservices‑based applications, you will find that much of the best code is open source. Much of it was written by, or has significant contributions from, companies with top-notch technical talent, like Google, LinkedIn, Netflix, and Twitter. Because of the nature of these companies, these projects are usually built with scalability and extensibility in mind. All of this makes the software development landscape very different from ten or fifteen years ago, when you needed a big team and lots of money just to buy the software, let alone the hardware. There’s no long procurement cycle or waiting for vendors to incorporate the features you need. You can change and extend the software yourself.

External and Internal Communication Protocols

You’re going to build many microservices with APIs, so you need to consider from the start how the APIs are going to be consumed. When accessing edge and public services, people often use a browser, which can accept JSON and use JavaScript or other languages such as Python to consume and interact with your APIs. XML can take the place of JSON, but it is more difficult to process and thus heavier weight. In any case, for edge and public services you want a stable API in which the communication protocol is carried in the object itself.

For high-speed communication between microservices within the context of an application, however, neither JSON nor XML is efficient enough. Here you want more compact binary-encoded data. Commonly used tools include Protocol Buffers from Google, and Thrift and Avro from Apache. A newer protocol is Simple Binary Encoding from Real Logic Limited, which is reportedly about 20 times faster than Protocol Buffers.

Using binary encoding does mean that there has to be a library for consuming the microservice API. You might not want to write the library yourself, because you feel the API is already self-describing. The danger, though, is that the person who steps in and writes it (say, a developer of a consuming application) usually doesn’t understand the microservice as well as you and is less likely to get things like error handling correct. You end up being encouraged to adopt and maintain a library that you didn’t write.

Data Storage

If you currently have a monolithic data store behind your applications, you need to split it up (refactor it) as you transition to microservices. One source of guidance is Refactoring Databases: Evolutionary Database Design by Scott W. Ambler and Pramod J. Sadalage. You can use SchemaSpy to analyze your schemata and tease them apart. Your goal is to determine, one microservice at a time, the materialized views of the tables that each microservice needs, and to transfer them from the combined database into microservice-specific data stores. This isn’t always as difficult as you might anticipate, because a monolithic database often turns out to be a collection of distinct data sets, each of them accessed by just one service. In that case, it’s pretty easy to split up the database, and you can do so incrementally.

You can also break up a database gradually over time, an approach referred to as polyglot persistence. One tool for this is a Netflix OSS project called staash (a pun on STaaS, storage tier as a service). It’s a Java app that provides a RESTful API to the front end and talks to both Cassandra and MySQL databases, so you can interpose it as a standard prototype library for developing data access layers. You can add a new database to the back end and new HTTP endpoints to the front end, with staash as a single package that already incorporates the MySQL and Cassandra functionality and all the necessary glue.

If you’re concerned about the issue of consistency across your distributed data stores, learn about Kyle Kingsbury’s Jepsen tests, which are becoming the standard way to test how well distributed systems react to network partitions. Most distributed databases fail the tests, and interesting bugs are exposed. The tests can help you identify and eliminate practices that are common but not really correct.

Monitoring

Monitoring microservices deployment is difficult because the set of services changes so rapidly. There are constantly new services being added, new metrics to collect, new versions being deployed, scaling up and down of service instances, and so on. In such an environment, there’s not enough baseline data for an automated threshold analysis tool to learn what “normal” traffic looks like. The tool tends to generate lots of false alarms, particularly when you start up a new microservice that it’s never seen before. The challenge is to build systems that react appropriately to status changes in an environment where changes are so frequent that everything looks unusual all the time.

A microservices architecture also involves complex patterns of remote calls as the services communicate. That makes end-to-end tracking of request flows more difficult, but the tracking data is vital to diagnosing problems. You need to be able to trace how process A called B, which called C, and so on. One way to do this is to instrument HTTP headers with globally unique identifiers (GUIDs) and transaction IDs.
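
As one illustration (using the built-in $request_id variable, which is available in NGINX 1.11.0 and later; the X-Request-ID header name is a common convention rather than a standard), each proxied request can carry a unique identifier:

location / {
    proxy_set_header X-Request-ID $request_id;    # propagate a unique ID for end-to-end tracing
    proxy_pass       http://backend;
}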

Continuous Delivery and DevOps

In a microservices architecture, you’re deploying small software changes very frequently. The changes that are most likely to break the system don’t involve deploying new code, but rather involve switching on a feature for all clients instead of the limited number who were using it during testing. For example, if a feature causes small performance degradation you might not notice ill effects during the test, but multiplying the slight delay by all clients suddenly brings the system down. To deal with a situation like this, you must very quickly both detect the problem and roll back to the previous configuration.

To detect problems quickly, run health checks every 5 to 10 seconds, not every 1 to 5 minutes as is common. At a frequency of once per minute, it might take 5 minutes before it becomes clear that the change you’re seeing in a metric really indicates a problem. Another reason to take frequent measurements is that most people have a short attention span (the amount of time they attend to a new stimulus before getting distracted). According to recent research, the average person’s attention span is 8 seconds, down from 12 seconds in 2000. The point is that for people to respond to an event in a timely way, the delay between the event occurring and its being reported needs to be shorter than the average attention span. The upside of short attention spans is that a 10-second outage is less likely to be noticed by users.

To make it easy to roll back to a working configuration, log the previous configuration in a central location before enabling the feature. This way anyone can revert to the working code if need be.

Conclusion

Adopting microservices requires some large changes in your code base, as well as in the culture and structure of your organization. In this post I’ve shared some suggestions and best practices for external and internal communication protocols, open source software, data storage, monitoring, and continuous delivery and DevOps. Now is the time to begin the transition to a microservices architecture if you haven’t already started. Remember, the transition can be done incrementally and doesn’t have to happen overnight.

 

Join Us at the NGINX User Summit on March 30, 2015 in San Francisco

We’re hosting the first NGINX User Summit of 2015 in San Francisco on Monday, March 30!

This event is for you if you want to learn more about how NGINX Plus and NGINX Open Source can boost the performance of your applications. Spend the morning training with us, the afternoon learning from expert users, and the evening socializing with fellow members of the NGINX community.

New Summit Speakers Announced!

During the afternoon, we’ll kick off with keynotes from Chris Richardson (@crichardson), creator of the original CloudFoundry.com, and me (@sarahnovotny) – I’m the Technical Evangelist and Community Manager at NGINX.

Lightning talks will feature innovative technologists sharing tips and tricks for optimizing NGINX Plus and NGINX to maximize the performance and scale of your applications:

  • Andrew Stein (@steinalicious) – Co-founder and Chief Scientist at Distil Networks, where he has worked on everything from identity management to digital signage
  • Dustin Whittle (@dustinwhittle) – Developer Evangelist at AppDynamics, focused on helping organizations manage their application performance
  • Valentin V. Bartenev (@ngx_vbart) – Experienced NGINX developer, known for his work on the thread pools implementation and the SPDY module
  • Yossi Koren (@yossiko) – Solutions architect specializing in distributed architecture and following a service-oriented model to effectively integrate enterprise, web, and mobile services

Thanks to our Social Hour Sponsor, Solarflare!

During the social hour, sponsored by Solarflare, you’ll enjoy appetizers and drinks (beer!) as you have face time with your fellow attendees, learning how they’re making the most of their applications with NGINX Plus and NGINX, and sharing your story. You’ll also get the chance to speak directly with members of the NGINX team and have your say about future features.

Don’t Forget about Training

Want to improve your NGINX skills? The day kicks off with a half-day, instructor-led NGINX Fundamentals course, in which you’ll learn to install, configure, and maintain NGINX Plus and NGINX.

The phenomenal rise of NGINX is due to our supportive community. For us, the user summit is part of our commitment to connect with you. If you are passionate about NGINX – as an NGINX Plus or NGINX Open Source user – we look forward to seeing you at the first NGINX User Summit of 2015 at RocketStudios @ RocketSpace in San Francisco!

Why Netflix Chose NGINX as the Heart of Its CDN

In the few years since its introduction, Netflix’s online video streaming service has grown to serve over 50 million subscribers in 40 countries. We’ve already shared some of the best practices that Netflix’s software development engineers adopted as they transitioned from a traditional monolithic development process to continuous delivery and microservices, in Adopting Microservices at Netflix: Lessons for Architectural Design and Adopting Microservices at Netflix: Lessons for Team and Process Design.

NGINX developer Gleb Smirnoff at nginx.conf2014

In this post, we’ll discuss another core contributor to Netflix’s success: its content delivery network (CDN), Open Connect. We’re proud that NGINX runs on every Open Connect delivery appliance, playing a key role in Netflix’s ability to keep pace with the explosive growth of the video service. NGINX’s Gleb Smirnoff has worked alongside the Open Connect team for over two years, and last October at our user conference, nginx.conf2014, he explained why Netflix chose NGINX (along with FreeBSD) to power this crucial part of its business.

Why Netflix Built Its Own CDN

Netflix initially outsourced streaming video delivery to three large CDN vendors (Akamai, Level3, and LimeLight). As the service became more popular, Netflix decided that building and managing its own CDN made sense, for several reasons:

  • From a practical perspective, the CDN vendors were struggling to expand their infrastructure at a pace that matched the growth in customer demand for Netflix’s video streaming.
  • From a financial perspective, the expense of outsourcing was quickly becoming prohibitive as the volume of streamed video increased (a challenge experienced by many popular applications and web properties).
  • From a business perspective, it was clear that video streaming was replacing DVD lending as Netflix’s primary source of revenue, and it didn’t make sense to outsource a critical piece of the company’s main business.

Most importantly, Netflix built its own CDN in order to have greater control over application delivery and the user experience. To provide optimal streaming media delivery to customers, Netflix needed to maximize its control over the three basic components in the delivery chain:

  • The user’s video player. Netflix already controlled this component, because its developers write all the device-specific apps used by customers to view Netflix content.
  • The network between the user and the Netflix servers. There is no way to control this component directly, but Netflix minimizes the network distance to its customers by providing free video-streaming appliances to ISPs in exchange for rack space in the ISP’s data centers for housing the appliances. (Appliances are also placed at Internet exchange points [IXPs] to serve customers whose ISPs are not interested in housing third-party equipment.) Video streaming is particularly sensitive to the packet delay and loss, misordered arrival, and unpredictable (jittery) round-trip times inherent to TCP/IP, and minimizing the network distance reduces the potential exposure to these anomalies.
  • The video server (Open Connect itself). Running its own CDN gives Netflix freedom to tune the CDN software to compensate for Internet anomalies as much as possible. It can run custom TCP connection-control algorithms and HTTP modules. It can also detect server and network problems very quickly and reroute clients to alternative servers, then log in to the server hardware and troubleshoot “from inside.”

Netflix was able to optimize Open Connect for video streaming in a way that’s not possible with a generic CDN provided by a vendor. Open Connect enables Netflix to offer a superior user experience at a lower cost, and with greater visibility into the performance of the application around the world.

Why Netflix Chose NGINX and FreeBSD

From the start, Netflix’s goal was, as Gleb puts it, to “get more and more gigabits per second from a single box.” Specifically, Netflix needed to maximize the number of subscribers each appliance could serve concurrently. The Open Connect engineers anticipated needing to fine-tune the software to achieve this goal, so they decided to go with open source software for its unlimited extensibility.

As mentioned previously, Netflix places its video-streaming appliances in the data centers of its customers’ ISPs when possible. Because the software running on the appliances would be in the hands of third parties, Netflix chose projects that use a BSD-style license rather than the GNU General Public License (GPL).

The specific open source projects Netflix chose were:

  • FreeBSD as the operating system, because it’s known to be fast and stable. The developer community is strong and willing to work with vendors.
  • NGINX as the streaming media server. Its proven speed and stability were important because Netflix wanted to launch Open Connect as quickly as possible, without needing to tweak the software just to get going. Once the CDN was up and running, Netflix was able to examine traffic patterns and fine-tune the NGINX settings.

    Another benefit of NGINX is that although the open source software is distributed under a BSD-style license, all of its core developers are full-time employees of NGINX, Inc., which provides enterprise-class support for its commercial product, NGINX Plus. In this regard, it combines the best features of OSS and commercial software.

    NGINX’s flexible framework for running custom modules also appealed to Netflix, and the Open Connect team has created modules specific to its video streaming needs.

Combining FreeBSD and NGINX yields further benefits:

  • NGINX’s event-driven design is one of the keys to its outstanding performance, and FreeBSD’s kqueue event notification system call is one of the best APIs for multiplexed I/O.
  • Without any modification required, NGINX can use the sendfile system call together with the aio_read system call. Together the calls avoid blocking on disk I/O, leading to outstanding performance.

NGINX Plus and NGINX Can Optimize Your Application Delivery, Too

From its inception, NGINX was designed to be adaptable and to support every aspect of application delivery. To make it easier for our commercial customers to deploy applications like Netflix’s, NGINX Plus combines web serving, load balancing, content caching, and media streaming in one easy-to-use package. Check out our case studies to learn how other leading companies use NGINX Plus to deliver their applications with performance, security, and scale.

We enjoy working closely with customers, providing guidance on how to get the most out of NGINX Plus in their specific application delivery architectures. Our Support and Professional Services teams can help you with architectural guidance, installation, configuration, updates, and more. Contact us to learn more.


Save the Date: NGINX User Summit on March 30, 2015 in San Francisco

We’re excited to bring back NGINX User Summits, starting with our first NGINX User Summit of 2015 in San Francisco on Monday, March 30!

This event is for you if you want to learn more about how NGINX and NGINX Plus can boost the performance of your applications. Specifically, you can:

  • Get trained. The day kicks off with a half-day, instructor-led NGINX Fundamentals course, in which you’ll learn to install, configure, and maintain NGINX. This hands‑on workshop covers both NGINX Open Source and NGINX Plus, the extended and commercially supported product. Want to improve your NGINX skills? Then be sure to add this optional training to your registration.
  • Learn from innovative technologists using NGINX. For the second half of the day, we’ve got an amazing lineup of guest speakers. There will be keynotes from @sarahnovotny and @crichardson, plus compelling lightning talks from users who will share tips and tricks for optimizing NGINX to maximize the performance and scale of your applications.
  • Enjoy face time with your peers and members of the NGINX team. Throughout the day (which wraps up with a social hour featuring free beer!), you’ll be able to network with other NGINX users, learning how they’re making the most of their applications with NGINX and NGINX Plus, and sharing your story. You’ll also get the chance to speak directly with members of the NGINX team and have your say about future features.

You can customize your event experience by attending just the training, just the afternoon summit, or both.

There are an amazing number of things that make NGINX special, and our community is the heart of them all. We hope to see you at the 2015 NGINX User Summit at RocketStudios @ RocketSpace in San Francisco!