The blog posts are:
- Keynote: “Speeding Innovation”, Gus Robertson (video here)
- NGINX Product Roadmap, Owen Garrett (video here)
- Introducing NGINX Controller, Chris Stetson and Rachael Passov (video here)
- This post: Introducing NGINX Unit, Igor Sysoev and Nick Shadrin (video here, in‑depth demo here, integration with the OpenShift Service Catalog here)
- The Future of Open Source at NGINX, Ed Robinson and Owen Garrett (video here)
- NGINX Amplify is Generally Available, Owen Garrett (video here)
Modern applications have changed. They’ve grown from being very small websites, from simple applications designed to do one thing, to very large and complex web properties. Today, they consist of multiple parts: you create small services; you connect them together. With the new changes in infrastructure, you’re also able to scale them up and down easily. You create new containers — those new machines in the cloud — and connect them to the application. And connectivity between the application’s parts is extremely important.
With NGINX, you already know how to connect to your application. You know how the application works and how the connectivity to it works. But with the Unit project, we went further and deeper, down to the application code. It provides you with the platform for running the application and for running the application code.
We looked at existing solutions and found that they lacked some fundamental technologies. Many of them are big, slow things that are not designed for cloud-native applications.
NGINX Unit is built from scratch. It’s built using the core principles of NGINX engineering by the core engineering team. Unit is an essential part of the NGINX application platform. It fits monolithic applications just as well as microservices apps. It provides you with a way to migrate and separate services out of old-school applications. It gives you a uniform way to connect not only to the applications you’re building today, but to the applications that will be built tomorrow.
Let’s talk a bit about the functionality NGINX Unit gives you. First of all, it’s a fully dynamic application server designed for cloud-native apps. What does “dynamic” mean? With NGINX, you’re familiar with the well-known reload command. You probably already reload frequently.
And when the reload is done right, you’re not losing connections; you’re fine; the application is working. You can continue making changes by reloading the whole server. However, reloads are sometimes taxing on the server resources, and many of our big users and customers can’t really reload as frequently as they’d like.
With Unit, the system doesn’t reload the processes, it only changes that part of its memory and the parts of the processes that are required for a particular change. What it gives you is the ability to make changes as frequently as you like.
The next thing is how it’s configured. It’s configured through a simple API. Today, everybody likes to do the API calls for configuration of servers. Every management system understands that, and we built a very easy-to-understand API that’s based on industry-standard JSON.
What’s very important is that this API is available remotely. What were you doing when you weren’t able to configure a server in a remote way? You were building a small agent — a sidecar of sorts — in order to perform those configuration steps.
With Unit, you can expose the API to your private networks and to your remote agents to have that configuration done in a very easy, native, and remote fashion.
Next, Unit is polyglot. It understands multiple languages. We support PHP, Python, and Go, and other languages are coming soon. What that gives you is the ability to run any of the code written in any of those languages at the same time, on the same server. But what’s even more interesting is that you can run multiple versions of the same language on the same server as well.
Have you ever migrated an old PHP application from PHP 5 to PHP 7? Now, it’s as easy as one API call. Have you ever tried running the same applications in Python 2.7 and Python 3.3? I see some people laughing in the audience. Yes, sometimes that doesn’t even work. Now, we’re giving you the same platform for running the application in the language and in the version of the language that this application understands. What’s interesting is how that’s made possible.
I’ll ask Igor Sysoev, the original author of NGINX, to the stage to talk about the architecture of NGINX Unit. Igor has an amazing quality: Igor builds applications in a fundamental way. He looks at a problem at a deeper level. He doesn’t take any preconceptions or compromises when he’s looking at how the application can be built.
Igor, please come up on stage. Let’s talk a bit about the architecture of NGINX Unit.
Good morning. My name is Igor Sysoev. I’m the original author of NGINX, co-founder of the NGINX company, and architect of our new product, Unit.
Here’s the architectural scheme of Unit: all the parts are separate processes in one system. The processes are isolated for security reasons; only the main process runs as root. Other processes can be run as non-privileged users.
The architecture is quite complex, so I’ll elaborate on the most important parts.
The key feature of Unit is dynamic configuration. The performance is comparable to existing application servers.
What does “dynamic configuration” mean? It means that there is no particular configuration file at all. You interact with the Controller process with a RESTful JSON API on a UNIX domain socket or a TCP socket. You can upload the whole configuration at once, or just a part.
You can change or delete any part of the configuration, and Unit will not reload entirely. It means that you can change your configuration as frequently as you want. When the Controller process accepts a configuration request, it applies the change and sends the appropriate parts to the router and main processes.
The router process has several worker threads that interact with clients. They accept the clients’ requests, pass the requests to the application processes, get responses back from the applications, and send the responses back to the clients. Each worker thread uses epoll or kqueue and can asynchronously handle thousands of simultaneous connections.
When the Controller sends a new configuration to the router, the router worker threads start to handle new incoming connections with the new configuration, while old connections continue to be processed by the worker threads according to the previous configuration.
So the router worker threads can work simultaneously with several generations of configuration, without reloading.
When the router receives requests for applications that have not been started yet, it asks the main process to start the application. Currently, application processes are started only on demand. Later, we will add prefork capabilities.
So the main process forks a new process, dynamically loads the required application module in the new process, sets the appropriate credentials, and then starts the application code itself.
The module system allows you to run different types of applications in one server and even different versions of PHP or Python in one server.
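As a sketch of what the module system makes possible, each application in the beta configuration declares its language and version in a `type` field. The application names, version strings, and field values below are illustrative, not a definitive reference:

```json
{
    "applications": {
        "legacy-app": {
            "type": "python 2.7",
            "path": "/www/legacy",
            "module": "wsgi"
        },
        "modern-app": {
            "type": "python 3.5",
            "path": "/www/modern",
            "module": "wsgi"
        }
    }
}
```

Both applications run in the same server, each in its own process with the matching interpreter module loaded.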
Go applications are different animals. A typical Go application listens on the HTTP port by itself. And you have to build everything into the application, including all networking and the management features. With Unit, you can control your applications without this additional code.
In the case of a PHP or a Python application, you don’t need to change it at all. However, in Go applications, you have to change just two lines. Unit provides a special Go package, and you should build the Go application with the package.
When the main process needs to run a Go application, it forks a new process and executes the statically-built Go application in the new process.
The unit package is compatible with the standard Go http package. Your application can be run as a standalone HTTP server or as part of the Unit server.
When a Go application is started by Unit, it does not listen on an HTTP port. Instead, the Unit router process handles HTTP requests and communicates internally with the Go application.
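Per the talk, the only change from a standalone Go server is the import and the ListenAndServe call. Here is a sketch under the initial beta’s package layout; the import path and port are assumptions and may differ in your install:

```go
package main

import (
	"fmt"
	"net/http"

	"nginx/unit" // Unit's Go package; import path as in the initial beta
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello from Unit\n")
}

func main() {
	http.HandleFunc("/", handler)
	// The one functional change: http.ListenAndServe(":8080", nil)
	// becomes unit.ListenAndServe(":8080", nil). Run standalone, the
	// binary listens on the port itself; run under Unit, it talks to
	// the router process instead.
	unit.ListenAndServe(":8080", nil)
}
```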
The router and application processes communicate via Unix socket pairs and several shared memory segments. The socket pair and shared memory segments are private for each application process, so if an application process exits abnormally, the router process will handle this failure gracefully, and no other processes and connections will be affected.
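The socket pair itself is a standard UNIX primitive. As a minimal illustration (not Unit’s actual code, which also uses shared memory segments), this Go sketch creates a connected pair and passes one message across it, the way the router hands a request to an application process:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// demo creates a connected UNIX socket pair, the primitive Unit uses
// between its router and each application process, and passes a single
// message across it. Illustrative only.
func demo() string {
	fds, err := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
	if err != nil {
		panic(err)
	}
	routerEnd := os.NewFile(uintptr(fds[0]), "router")
	appEnd := os.NewFile(uintptr(fds[1]), "app")
	defer routerEnd.Close()
	defer appEnd.Close()

	// The "router" forwards a request to the "application" process.
	routerEnd.Write([]byte("GET /"))
	buf := make([]byte, 64)
	n, err := appEnd.Read(buf)
	if err != nil {
		panic(err)
	}
	return string(buf[:n])
}

func main() {
	fmt.Println(demo())
}
```

Because each pair is private to one application process, a crash closes only that pair; the router sees the closed descriptor and fails just that request.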
And now, Nick Shadrin will tell you more about the Unit configuration API and our future roadmap.
Thank you, Igor.
Did I tell you that the API of NGINX Unit is easy? Yes, it is.
Right here, you can see a simple example of the Unit API. I want to talk a little bit about how it’s configured, and how to make changes to the environment using this API.
The first object you can define is the application object. You can give it a nice, user-friendly name, and define the type of the application as the language and the language version. Then you can define other parameters for the application that are related to the type of the application. PHP applications have some specific parameters. Go applications will have some other parameters.
You can also run each application under its own UNIX user name and group name, so applications are separated for security reasons in your environment.
In addition to defining the applications, you’ll define the listeners, and listeners will be the IP addresses and/or ports for the application.
Then you specify how the particular listener binds to the application that you define. You can create many listeners and many applications, and bind them together the way you like.
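Putting these pieces together, a minimal configuration might look roughly like this; the application name, port, and paths are illustrative, not a definitive reference:

```json
{
    "listeners": {
        "*:8300": {
            "application": "blogs"
        }
    },
    "applications": {
        "blogs": {
            "type": "php",
            "processes": 20,
            "root": "/www/blogs/scripts",
            "user": "nobody",
            "group": "nobody",
            "index": "index.php"
        }
    }
}
```

Here the listener on port 8300 routes all traffic to the PHP application named “blogs”; adding another listener or application is just another key in the corresponding object.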
Now, how do you make changes? The first and easiest way is to reload the server again — but you probably don’t want to do that. You can put the whole JSON payload as a PUT request into the control socket of NGINX Unit, or you can make the changes one by one, accessing each object and each value separately by its own URL. We’re giving you flexibility in how you make changes.
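For example, assuming the control socket lives at /run/control.unit.sock (the path depends on how Unit was started) and a hypothetical application named “blogs”, the two styles look like this:

```shell
# Replace the entire configuration in one call
curl -X PUT -d @config.json \
     --unix-socket /run/control.unit.sock http://localhost/

# Or change a single value at its own URL, e.g. the app's process count
curl -X PUT -d '30' \
     --unix-socket /run/control.unit.sock \
     http://localhost/applications/blogs/processes
```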
That’s what you have now. Let’s talk a bit about the plans for this product.
Yesterday, we released NGINX Unit in open source. It’s available in public beta. We encourage everybody to try it and use it.
Our first priority right now is to give you a track record of stable releases and stable code, and we want you to be as confident in NGINX Unit as you already are with NGINX.
There’s a long list of new languages we’ll be adding to Unit, but the first languages we’re going to be working on are Java and Node.js. Once we get more languages and more contributions of different languages from the community, you’ll see that it’s really easy to extend NGINX Unit to support the application language you prefer.
Next, we’ll be adding HTTP/2 and more HTTP functionality. And for service mesh and service-to-service communication, we’ll add proxy features and networking features directly into NGINX Unit.
Yesterday, we uploaded the code to GitHub and released it publicly for everybody. We already see hundreds of comments on social media. We’re the top story on Hacker News with this product.
We have hundreds of stars. We already have pull requests and issues created in the GitHub repo. The response from the community is overwhelming, and it’s only been 24 hours since the product release.
We encourage you to go to GitHub to start going through the code, to read it, and to contribute to it. We’ll be making this software together with you, and the NGINX Unit core engineering team will work with you on the pull requests and GitHub issues.
Now, let’s see what other resources we’ve prepared for you to start working on NGINX Unit.
We uploaded the documentation at unit.nginx.org, and the code is also available in our standard Mercurial repository. At the Unit repository, you can contribute to the code either by using the well-known process — the way you already contribute to the NGINX product — or you can use the GitHub process.
Today, at 11:00 am, just after the break, we’ll have a deep-dive session on NGINX Unit in this room. In the deep-dive session, we won’t have any slides. We started working on a live demo of the product, and we found that in order to show you all of the functionality and to show you how to work with it, the demo will actually take the whole session.
Be prepared to see a lot of command-line output and a lot of new ways of running multiple PHP, Python, and Go applications in the same server. If you want to work with us on a mailing list, the address is firstname.lastname@example.org. It’s already available, and you can subscribe to it either on the web or by email.
What’s even more amazing for our NGINX Plus commercial users is that they already have a great channel of communication: NGINX technical support. If you have questions on NGINX Unit, you can ask them using the same support channel you already know.
That’s what I have for you about NGINX Unit today. Let’s build the software together, let’s see how it works out, and let’s see how we can run new applications using Unit. Thank you.