Build the Future of the Modern Web
Discover how to deliver your sites and apps
with performance, security, and scale.
The conference agenda includes sessions and activities where you can:
DISCOVER and share trends and best practices in building faster websites and applications using NGINX.
MEET others interested in tools and technologies that accelerate application delivery and improve web performance.
PARTICIPATE in keynotes, breakout sessions, and training classes led by top industry leaders and expert NGINX users.
Renoir Boulanger is an application developer fascinated with web technologies. He currently works as a developer operations engineer on the WebPlatform.org project as a member of the W3C team. Renoir has been building websites and web applications for over fifteen years. His experience also includes server management, and he has worked for several communications agencies in the province of Quebec.
Sean Cribbs is a distributed systems and web architecture enthusiast, currently building innovative cloud and infrastructure software at Comcast. Previously, Sean spent five years with Basho Technologies contributing to nearly every part of Riak including client libraries, CRDTs and tools. In his free time, he has ported Basho's Webmachine HTTP server toolkit from Erlang to Ruby, created a popular parser-generator for Erlang, and has contributed to many other open-source projects, including Chef, Homebrew, and Radiant CMS.
Dragos Dascalita Haut is a Solution Architect for Adobe's API Platform, adobe.io, building a highly scalable distributed API gateway running where else but in the cloud, where all APIs want to be. Working with a fantastic international team spanning Romania, India, and the US, and with an emphasis on performance and security, his team supports the API go-to-market strategy for the Adobe Creative and Marketing Clouds.
Derek DeJonghe has been a long-time supporter and advocate of NGINX. Working for RightBrain Networks, Derek consults on best practices for Amazon Web Services, with a focus on architecture and large-scale migrations to the cloud. He worked with a major university on a project to build an NGINX reverse proxy for all of the university's APIs, and has helped a large tax and financial data corporation with its aggressive adoption of AWS by teaching classes on site and documenting best practices for moving existing on-premises applications to the cloud. RightBrain Networks manages and migrates applications to the cloud, the right way, every day.
John DiGiglio is a Software Marketing Manager for the Network Platforms Group of Intel, a business group working with the industry to transform the network infrastructure.
John has held a variety of marketing positions during his 14 years at Intel, with a focus on consolidating networking workloads on Intel® architecture to deliver high-performing, open-standard solutions built on NFV and SDN technologies.
Prior to Intel, John was employed by AT&T Bell Labs, Digital Sound, and Crystal Voice Communications, which he co-founded. John holds a Bachelor of Science in E.E./C.S. from the Polytechnic Institute of NY and an MBA from Rutgers University.
Scott works on Shopify's production engineering team, and loves debugging really hard problems. He's contributed performance patches to the Ruby interpreter and over the past few months has been primarily focused on resiliency and performance tuning.
Kelsey has worn every hat possible throughout his career in tech and enjoys leadership roles focused on making things happen and shipping software.
Kelsey is a strong open source advocate focused on building simple tools that make people smile. When he is not slinging Go code, you can catch him giving technical workshops covering everything from programming and system administration to his favorite Linux distro (CoreOS).
Stephan is VP of Products at Wallarm, where he works on security solutions for extremely high-load environments with continuous integration and frequent code deployments. Wallarm uses NGINX for its filtering nodes, making them easy to scale and deploy. Before joining the Wallarm team, Stephan was editor-in-chief of an information security magazine. He is the author of more than 500 publications covering modern technologies and information security.
Zi Lin works at CloudFlare as a System Engineer in Security Engineering. He is a core contributor to CF-SSL, CloudFlare's open source SSL toolkit. He is interested in contributing various SSL innovations to ngx_lua.
Yingqi Lu is a senior software performance engineer at Intel and has worked on various projects including virtualization, cloud, web server performance, and big data applications.
Mark McClain is the Chief Technical Officer of Akanda Inc., a member of the OpenStack Technical Committee, and a core reviewer for several OpenStack teams. Mark was the Program Technical Lead for OpenStack Networking during the Havana and Icehouse cycles. In addition to his technical work, Mark is co-organizer of the Atlanta OpenStack Meetup group and a frequent speaker on OpenStack Networking. Formerly of DreamHost and Yahoo!, Mark is a graduate of the Georgia Institute of Technology.
Gabriel Monroy is CTO at Engine Yard and the creator of Deis, the leading Docker PaaS. As an early contributor to Docker and CoreOS, Gabriel has deep experience putting containers into production and frequently advises organizations on PaaS, container automation and distributed systems. Gabriel has spoken recently at DockerCon, CoreOS Fest and QConSF on cluster scheduling and deploying containers at scale.
A software engineer with 10+ years of broad work experience who has built web systems, APIs, live video streaming, and, most recently, a video player.
Sarah Novotny is a technical evangelist and community manager for NGINX. Novotny has run large-scale technology infrastructures as a systems engineer and a database administrator for Amazon.com and the ill-fated Ads.com. In 2001 she founded Blue Gecko, a remote database administration company, with two peers from Amazon; Blue Gecko was sold to DatAvail in 2012. Sarah has also curated teams and been a leader in customer communities focused on high-availability web application and platform delivery for Meteor Entertainment and Chef.
She regularly talks about technology infrastructure and geek lifestyle. Sarah is additionally a program chair for O'Reilly Media's OSCON. Her technology writing and adventures as well as her more esoteric musings are found at sarahnovotny.com.
Brian builds traffic management systems at Dropbox, with a focus on performance and security. He previously developed the CDN caches and Layer 7 load balancers at Facebook.
A reformed data scraper, Andrew now spends his time building out Distil Networks' bot blocking technology. With a focus on speed and effectiveness, Andrew is now helping make sure that billions of requests are served to humans and blocked from bots.
Nick Sullivan is a leading cryptography and security expert. He founded and built the security team at CloudFlare, one of the world's leading web security companies, and pioneered digital rights management in his work building Apple's multi-billion-dollar iTunes Store. He is the author of over a dozen computer security patents and holds an MSc in Cryptography and a BMath in Pure Mathematics.
Programmer since an early age. Grew up with a Commodore 64 and took the path from the 80286 via the Digital Alpha up to the current Intel platforms, coding in languages ranging from assembler, C, and C++ to Java and Python. For the last few years I've been a software developer at ING Bank in Amsterdam.
Matt Williams is the DevOps Evangelist at Datadog. He is passionate about the power of monitoring and metrics to make large-scale systems stable and manageable. So he tours the country speaking and writing about monitoring with Datadog. When he's not on the road, he's coding. You can find Matt on Twitter at @Technovangelist.
John Willis has worked in the IT management industry for more than 35 years. Currently he is Director of Ecosystem Development at Docker Inc. Prior to Docker, Willis was the VP of Solutions for Socketplane (sold to Docker) and Enstratius (sold to Dell). Prior to Socketplane and Enstratius, Willis was the VP of Customer Services at Opscode (Chef), where he formalized the training, evangelism, and professional services functions at the firm. Willis also founded Gulf Breeze Software, an award-winning IBM business partner specializing in deploying Tivoli technology for the enterprise. Willis has authored six IBM Redbooks on enterprise systems management and was the founder and chief architect at Chain Bridge Systems.
Harald Zeitlhofer has 15+ years of experience as an architect and developer of enterprise ERP solutions and web applications, with a main focus on efficient, performant design, implementation, and usability. As a Performance Advocate he influences the Dynatrace product strategy by working closely with customers and driving their performance management and improvement at the front line. He is a frequent speaker at conferences and meetup groups around the world. Follow him at @HZeitlhofer.
Yan is a security engineer at Yahoo, mostly working on End-to-End email encryption and improving TLS usage. She is also a Technology Fellow at EFF, where she developed the NGINX plugin for Let's Encrypt. Yan has held a variety of jobs in the past, ranging from hacking web apps to composing modern orchestra music. She got a B.S. from MIT in 2012 and is a proud PhD dropout from Stanford.
Yan is also a member of the W3C Technical Architecture Group and a core developer of various open source security projects like HTTPS Everywhere and SecureDrop. She is @bcrypt on Twitter.
Christopher Brown has held the positions of VP of Operations at Fastly, CTO & VP of Engineering at Opscode/Chef and Director of Engineering for the Microsoft Edge Computing Network. Prior to Microsoft, Christopher was a founding lead developer and architect for Amazon.com's Elastic Compute Cloud ("EC2"). He holds several patents in the areas of internet routing, VM/runtime hosting, content delivery and cloud computing.
NGINX is a high-performance, open source web application accelerator that helps over 37% of the world's busiest websites deliver more content, faster, to their users. NGINX Fundamentals is a hands-on course in which you will learn to install, configure, and maintain NGINX. Not only will you learn to set up NGINX, you will also learn about common use cases such as secure downloads, GeoIP lookups, and load balancing, to name only a few. In the spirit of learning by doing, the course takes you step by step through the setup process, and by the end you will have a complete basic NGINX setup. This course is hands-on, so bring your laptop and prepare to go to work. The workshop covers both the open source and commercial (NGINX Plus) versions of NGINX.
NGINX Fundamentals is aimed at system administrators who are new to NGINX but have a foundational understanding of web server setup and configuration.
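As a taste of the material, load balancing of the kind the course introduces can be expressed in just a few directives. This is a minimal, illustrative sketch only; the backend addresses are placeholders, not part of the course:

```nginx
events {}   # minimal events block so the file is complete

http {
    # A pool of backend servers; NGINX round-robins across
    # them by default.
    upstream app_servers {
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app_servers;   # forward to the pool
        }
    }
}
```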
NGINX Advanced builds on the topics covered in the NGINX Fundamentals course. To help you get more out of your NGINX installation, the advanced course introduces variables and dives into the core, proxy, and rewrite modules, as well as offering an introduction to content manipulation. NGINX Advanced also spends considerable time on topics related to running an efficient web server, including advanced location routing, load balancing, security, and traffic control. You will also learn how to monitor your NGINX instance using custom access logging, the status page, and dynamic reconfiguration. Prepare to delve deeper into NGINX and become even more proficient at configuring and monitoring it. This course is hands-on, so bring your laptop and prepare to go to work.
NGINX Advanced is aimed at technologists who have installed and are using NGINX, but want to learn more advanced use cases for optimizing and monitoring their NGINX web server.
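Two of the monitoring topics the advanced course touches on, custom access logging and the status page, might look like this sketch (the log format, file paths, and port are illustrative assumptions, not course material):

```nginx
# Custom log format capturing request and upstream timings.
log_format timed '$remote_addr "$request" $status '
                 '$request_time $upstream_response_time';
access_log /var/log/nginx/access_timed.log timed;

# A separate listener exposing the basic status page.
server {
    listen 8080;

    location /status {
        stub_status;        # open source status counters
        allow 127.0.0.1;    # restrict to local monitoring
        deny all;
    }
}
```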
Welcome and announcements by Sarah Novotny, Head of Developer Relations at NGINX, Inc.
Containers are the focus of intense attention from developers, administrators, and investors alike. Christopher will give a personal assessment of the strengths and weaknesses of the current state of container technology.
The easiest way to performance test your web application is to fire up a load tester and bombard it with traffic. This may have been an effective performance test years ago, but it no longer tells the whole story with modern web applications designed around microservices. In this session, we'll focus on turning these old, incomplete, one-off tests into fully built-out, recorded, and automated testing systems for every level of your application. Additionally, we'll walk through using tools like JMeter and Zabbix to evaluate your scaling needs, as well as to unlock the information in your NGINX logs.
NGINX has supported custom modules since early 2005, but without dynamic module support, users have had to compile a custom build.
With NGINX's growth as a web delivery platform, demand for dynamic modules is growing too. Ruslan will talk about the last few months of planning, the challenges, and the approaches behind our dynamic modules implementation, and how it will affect anyone who is considering building a new NGINX module.
Let’s talk about readily deployable technologies as well as future technologies to increase NGINX performance. Learn how to improve SSL termination performance with Intel® QuickAssist Technology and NGINX. Your users will benefit from the best security processing available with today’s hardware and software. This session will also review ongoing work between Intel and the OpenSSL Foundation to increase security performance through the new OpenSSL Asynchronous extensions. Lastly, we will preview early joint Intel and NGINX investigation into combining OpenSSL Asynchronous with a User Space TCP/IP Stack built with the Data Plane Development Kit for use in a future NGINX release.
Metrics are cool. They are a common language for developers, testers, operators, DBAs and even business.
Used properly, metrics allow deep insight into what’s happening in your environment, revealing performance hotspots and errors at all stages of the development cycle. But as modern architectures become increasingly complex, and more and more subsystems deliver metrics, correlating them can be quite a challenge. The status data provided by NGINX's API pairs perfectly with APM (application performance management) to draw a complete picture of your environment.
In this session you will learn about different relevant metrics in a web application, about transactional measures starting with the user’s click in the browser all the way down to the database, about host and process metrics, and how to correlate them properly to gain the maximum benefit from monitoring in development, testing and production.
We all know that OpenStack and NGINX are two hugely popular open source projects, but did you know they make a great pair? In this session, we'll show how NGINX integrates with OpenStack Networking's load balancer service via the open source Akanda project. The combination benefits users via instant access to a managed NGINX installation. Using the current Akanda and OpenStack releases, we'll demonstrate the simple process for developers to build upon NGINX when deploying on OpenStack. We'll also see how OpenStack users can access the powerful features of NGINX+ and close with a sneak peek to the new features in the upcoming OpenStack Liberty release (due in October).
Understanding how to run Microservices at scale is becoming a key success factor for organizations. Today's technologies are offering simple solutions to create Microservices, to containerize them, and to deploy them in the cloud.
As the number of Microservices increases, the inter-communication between them becomes more complicated, and we soon realize we have new questions awaiting answers: How do Microservices authenticate? How do we monitor their usage? How do we protect them from attacks? How do we set throttling and rate-limiting rules across a cluster? How do we control which services allow public access and which are private?
Come and learn how NGINX can integrate with Docker and Mesos to help you design scalable architectures for exposing Microservices in the cloud. If containers are shaping the way we think of packaging Microservices, Mesos is shaping the way we think of running them in the cloud, and NGINX is providing solutions to the questions above. During this session you will also learn how Adobe's API Platform is solving this problem, where it is today, and what it envisions doing with NGINX going forward.
Enabling TLS session resumption--correctly--can be complex and time consuming. Today there are two standard approaches to resume a TLS session: via a session ticket or via session ID. Both approaches come with their own complexities. Session ticket resumption, by default, does not guarantee forward secrecy--meaning the same ticket key will be reused without rotation. Meanwhile, there is still a large portion of web clients that don’t support session ticket resumption. The session ID approach requires synchronized caching of session data between hosts, which isn’t possible today with the NGINX code base. Supporting both--not one over the other--is the best approach to enabling TLS cross-host session resumption. CloudFlare’s Zi Lin will share a new way to completely enable TLS resumption without doubling efforts.
This presentation will cover how forward secrecy can be easily implemented through new directives in ngx_lua, without a complex rewrite of the NGINX source code. Zi will also share another new set of features that enables cross-host session caching with significantly fewer lines of code. Overall, developers will learn simple and intuitive new features in ngx_lua that will make life much easier and the Internet more secure.
NGINX rocks at helping the web go faster... but the reality is that NGINX can't make the internet faster all by itself. The front end is only one part of the performance puzzle, and to really understand performance you need a more comprehensive view. The challenge is that data such as logs, metrics, and events are typically siloed in the various tools used to manage each. Unifying all that data in one analytics platform, however, provides end-to-end correlation: users can instantly see the effect of a code deploy on system performance, application error codes, or even user signup rates. Making use of all that unified data also requires a platform flexible enough to handle the various workflows. In this talk, I will cover how we have solved these challenges.
Ansible is an open source IT configuration management, deployment, and orchestration tool that is extremely easy to configure and use. In this demo we will look at the configuration files and the steps required to install and deploy NGINX Plus using Ansible.
Open source is a culture of learning and sharing. Our engineers work on other FOSS projects beyond the edges of NGINX. Join some of our team as they talk about other projects they work on including VLC, FreeBSD and libAttachSql.
When architecting an enterprise Java application, you need to choose between the traditional monolithic architecture consisting of a single large WAR file, or the more fashionable microservices architecture consisting of many smaller services. But rather than blindly picking the familiar or the fashionable, it's important to remember what Fred Brooks said almost 30 years ago: there are no silver bullets in software. Every architectural decision has both benefits and drawbacks. Whether the benefits of one approach outweigh the drawbacks greatly depends upon the context of your particular project. Moreover, even if you adopt the microservices architecture, you must still make numerous other design decisions, each with their own trade-offs.
A software pattern is an ideal way of describing a solution to a problem in a given context along with its tradeoffs. In this presentation, we describe a pattern language for microservices. You will learn about patterns that will help you decide when and how to use microservices vs. a monolithic architecture. We will also describe patterns that solve various problems in a microservice architecture including inter-service communication, service registration and service discovery.
Join us after sessions on the last day of nginx.conf for a Happy Hour. Enjoy the last get-together at the event with other attendees – discuss ideas, projects, or skills that are important to you, and make those last connections before leaving the conference.
How we deliver services is changing dramatically at some leading-edge organizations. This started a few years ago with Netflix (one of the largest NGINX customers) and its "Building with Legos" approach, in which it created an immutable infrastructure pipeline for service delivery. Continuous Delivery (CD) has been all the rage over the past few years, but now organizations are making clever use of containerization to make the complete delivery pipeline immutable ("immutable delivery"). First, the delivery pipeline is built from containerized components, with some teams running containers-in-containers (also referred to as DinD, Docker in Docker) for build slaves. Furthermore, using binary artifacts (container images), developers can push full service stacks through the CD pipeline knowing that the stack is immutable at every step of the process. If the process is "green" through the flow, then the dev org knows that the base OS, middleware, and application are bit-for-bit identical. Docker and organizations like Yelp, Capital One, and Gilt will be discussed as model examples of companies delivering services in an immutable fashion.
In this talk, we will describe globo.com's live video streaming architecture, which was used to broadcast events such as the FIFA World Cup (with a peak of 500K concurrent users), Brazilian election debates (27 simultaneous streams), and BBB (10 cameras streaming 24/7 for 3 months).
NGINX is one of the main components of our platform: we use it for content distribution, caching, authentication, and dynamic content. Beyond the architecture itself, we will also discuss the NGINX and operating system tuning that was required to reach 19 Gbps of throughput on each node, the open source Cassandra driver for NGINX that we developed, and our recent efforts to migrate to nginx-rtmp.
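The kinds of directives involved in tuning NGINX for high-throughput content delivery can be sketched as follows. These values are illustrative examples only, not globo.com's actual configuration:

```nginx
worker_processes auto;      # one worker per CPU core

events {
    worker_connections 65536;
    use epoll;              # efficient event notification on Linux
}

http {
    sendfile   on;          # kernel-level file transmission
    tcp_nopush on;          # fill packets before sending
    aio        on;          # asynchronous disk I/O for large files

    # On-disk cache for frequently requested segments;
    # path and sizes are placeholders.
    proxy_cache_path /var/cache/nginx levels=1:2
                     keys_zone=video_cache:100m max_size=50g;
}
```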
Mobile has been a driver of technological change for organizations everywhere, no matter the industry.
Requirements for handling traffic from different device types have often been driven by marketing or business functions, and addressed at the web application layer rather than at an infrastructural level. This results in inefficiencies and unnecessary costs in bandwidth and server specifications. With device awareness now available at the server layer, administrators are directly empowered to handle different device traffic in a highly efficient and scalable way for their organizations.
Afilias Technologies CTO Ronan Cremin will share a case study with real-world results achieved by deploying an NGINX module for device awareness across a network of hundreds of thousands of sites, bringing significant benefits not only in infrastructure cost reductions but also in improved UX and performance for site visitors. Learn how Web Ops can make a real difference to an organization's mobile and online strategy.
With over 160,000 merchants, Shopify is one of the biggest commerce providers in the world. A critical piece of our technology stack is NGINX. Using a custom request router, built with NGINX and Lua, we’re able to dynamically control how requests are routed through the Shopify platform. This allows us to regularly perform zero-downtime failovers between data centres, as well as achieve fine-grained resource isolation. This talk will delve into the details of how we built and tested our router, and how Shopify is using NGINX and Lua to enable high performance with minimum latency and maximum resiliency.
Deployment strategies are a major focus for many teams right now as more and more companies migrate services to the public cloud. All of these teams have one goal in common: zero downtime. The most common strategy is the blue-green deployment, and while it seems simple enough, it comes with a few caveats, especially for larger stacks.
In this talk we will explore a case study of moving a large stack of over 30 applications to Amazon Web Services, and why NGINX was the right choice for load balancing and deployment. The goal: release code without dropping a single packet, and ensure all applications switch to their new code version at exactly the same time.
With the use of ZooKeeper for service discovery, new nodes in the environment are automatically added to NGINX upstream pools. When new code is released and the application nodes are ready, NGINX is alerted and directs traffic to the new pools serving the latest release. Using a header to tag requests with the current release, NGINX continues to direct each request to the correct release as it makes its way through the stack, ensuring that every request hits the appropriate application version during and after the traffic switch.
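The header-based routing described above can be sketched with a map block that selects an upstream pool per request. This is a hedged illustration, not the speaker's actual configuration; the upstream names, addresses, and the X-Release header are all hypothetical:

```nginx
# Two release pools; addresses are placeholders.
upstream blue  { server 10.0.1.10:8080; }
upstream green { server 10.0.2.10:8080; }

# Route on a hypothetical release header: untagged requests go
# to the live (blue) pool, tagged ones to the new release.
map $http_x_release $pool {
    default blue;
    green   green;
}

server {
    listen 80;

    location / {
        proxy_set_header X-Release $http_x_release;  # propagate the tag
        proxy_pass http://$pool;
    }
}
```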
NGINX is a multi-faceted tool, and if you’re just getting your feet wet--or need a buffer--you’ll want to check out this presentation first. One popular use for NGINX is as an HTTPS reverse proxy--enabling services to support encrypted protocols that don’t normally do so. Why should you be an expert on this? Putting NGINX in front of HTTP-based websites and services allows them to be fully HTTPS compliant, taking advantage of NGINX’s state-of-the-art encryption support.
This session will start by introducing the basics of HTTPS and web encryption. Attendees will learn how to get a proper HTTPS certificate from a certificate authority (“CA”) for browser-facing services, and from an internal CA for internal services. The session will also cover the ngx_http_ssl_module, and what it takes to configure it to the industry standard. This is your SparkNotes session for getting that A+ in security.
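In outline, the HTTPS reverse proxy pattern the session covers looks like this minimal sketch. The certificate paths, server name, cipher settings, and backend address are illustrative assumptions, not the session's recommended hardening:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Certificate and key obtained from a CA; paths are placeholders.
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    ssl_protocols TLSv1.2;             # drop legacy protocol versions
    ssl_ciphers   HIGH:!aNULL:!MD5;    # example cipher policy

    location / {
        # Plain-HTTP backend that never speaks TLS itself.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```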
Your online service is growing, and a lot of your users now live far away from your datacenters. Network round trip latency is a serious performance problem, and the speed of light isn't getting any faster. Caching can speed up static content delivery, but what about your dynamic services? In this session, we discuss Dropbox's experience building an edge network of NGINX proxies to accelerate both dynamic web services and bulk traffic uploads. We discuss the theory of edge termination for TCP and SSL, plus practical considerations of configuration, performance, and security.
Even in 2015, most websites don't use HTTPS by default, which is the bare minimum for web security. This talk is about a new project, Let's Encrypt, that will hopefully make it easy for anyone to set up and maintain TLS (for free!) on an NGINX server in less than 5 minutes.
Let's Encrypt is (1) a new certificate authority created by EFF in collaboration with Mozilla, Cisco, Akamai, IdenTrust, and a team at the University of Michigan and (2) a package that runs on your server to do the annoying work of managing certificates and configuring TLS. The CA will issue certificates for free, using a new automated protocol called ACME for verification of domain control, certificate issuance, and certificate renewal.
This talk will mostly cover how Let's Encrypt works on NGINX (though it also works on Apache) and the exciting nuances of automated TLS configuration. I'll also discuss ways that the NGINX community can help Let's Encrypt become more useful to server operators so that we move closer to encrypting the entire web.
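One piece of the puzzle is serving the ACME domain-validation challenges over plain HTTP while redirecting everything else to HTTPS. The following is a hedged sketch of that pattern only; the webroot path and server name are assumptions, and the actual Let's Encrypt client manages these details for you:

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve ACME HTTP challenge tokens written by the client.
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Everything else gets pushed to HTTPS.
    location / {
        return 301 https://$host$request_uri;
    }
}
```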
Containers are being used to package and distribute modern web applications at a whole new scale. Kubernetes, the container cluster manager from Google, provides a way to deploy, manage and scale these application containers, while providing rich features such as automated service discovery and self-healing. But how do you expose your applications running in Kubernetes to the world in a way that works across cloud and bare-metal environments?
The answer is NGINX Plus.
In this session you will learn how NGINX Plus can be used to provide robust load balancing across a Kubernetes cluster while leveraging deep integration with the Kubernetes API and built-in service discovery mechanisms. We might even open source a new plugin to make it happen.
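One way this integration can work is DNS-based service discovery against the cluster's DNS, which NGINX Plus supports via the resolve parameter on upstream servers. The sketch below is an assumption-laden illustration, not the session's implementation; the resolver address and service name are placeholders:

```nginx
# Point at the cluster's DNS service and re-check names often,
# so scaling events are picked up quickly.
resolver 10.0.0.10 valid=10s;

upstream k8s_app {
    zone k8s_app 64k;   # shared memory zone required for re-resolution
    # NGINX Plus periodically re-resolves this name to discover
    # new endpoints behind the Kubernetes service.
    server myapp.default.svc.cluster.local resolve;
}

server {
    listen 80;

    location / {
        proxy_pass http://k8s_app;
    }
}
```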
More and more websites rely on NGINX as a performance booster, but it can also make applications more secure. By examining GET and POST requests, as well as how the application responds, it is possible to filter illegitimate traffic from legitimate website visitors. NGINX-based proxies and load balancers already see all the traffic; they just need the proper tools and modules to analyze it.
What you will learn:
IBM WebSEAL and F5 BIG-IP are used within ING as 'the' reverse proxies and load balancers for our internet-facing applications, offering standard authenticating-proxy and load-balancing functionality. But we also wanted additional features, such as fine-grained access control, better monitoring, event publishing, cross-datacenter persistent cookie jars, and dynamic load balancing driven by a service discovery service, which IBM WebSEAL and F5 BIG-IP did not offer out of the box. We started to create additional modules for NGINX to address access control, monitoring, event publishing, the cookie jar, and load balancing, and later on we will add dynamic service discovery and cookie-jar persistency. In this story I want to share our journey of building a new DMZ with NGINX, and take you through our continuous delivery pipeline using Jenkins, Docker, Nolio, and Python to automate all testing, from unit testing, static code analysis, Valgrind testing, and performance testing up to...
Building out a web server is easy, but things get much more complicated as you add load balancers and caching servers. Optimal configuration of these requires considerable expertise, and ensuring high performance as the site grows is increasingly complicated. In this session, Matt Williams, DevOps Evangelist at Datadog, will show you:
We recently replaced a proprietary API management solution with an in-house implementation built with NGINX and Lua that let us get to a continuous delivery practice in a handful of months. Learn about our development process and the overall architecture that allowed us to write minimal amounts of code, enjoying native code performance while permitting interactive coding, and how we leveraged other open source tools like Vagrant, Ansible, and OpenStack to build an automation-rich delivery pipeline. We will also take an in-depth look at our capacity management approach that differs from the rate limiting concept prevalent in the API community.
Deis is the leading Docker PaaS with over 130 contributors and nearly a million downloads. With NGINX as its routing mesh, Deis is used at companies like Mozilla, Coinbase and The RealReal to power container-based microservice architectures. Join us for an in-depth technical journey into the past, present, and future of an NGINX-powered PaaS.
When you have a highly scalable and efficient web server platform like NGINX, you need an equally scalable and efficient tool to protect the web applications that run on it. Since no two web applications are the same, the most flexible and cost-effective WAF is needed to protect them. In this session, the Trustwave team will describe the new features and functions of this essential, powerful, and mature security tool, proven in production environments; how customers can implement this significant level of security with NGINX; and how ModSecurity has become the most widely deployed WAF on the planet.
ModSecurity version 3.0 no longer carries the overhead of Apache and natively supports NGINX and NGINX Plus environments in an effective and efficient way. In addition, ModSecurity can be augmented with more than 19,000 commercial rules (updated as often as daily) providing PCI compliance and additional protection for your web applications, including: virtual patches for more than 5,500 applications (such as WordPress and Joomla), IP reputation, botnet attack detection, web-based malware detection, webshell/backdoor detection, HTTP denial of service (DoS) attack detection, anti-virus scanning of file attachments, and protection against the OWASP Top Ten.
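Wiring ModSecurity 3.0 into NGINX is done through the ModSecurity-nginx connector module; a minimal sketch (module and rules-file paths are illustrative, and depend on how the connector was built and installed) looks like this:

```nginx
# Load the ModSecurity-nginx connector (path depends on your build).
load_module modules/ngx_http_modsecurity_module.so;

http {
    modsecurity on;                                      # enable the WAF engine
    modsecurity_rules_file /etc/nginx/modsec/main.conf;  # illustrative rules path

    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;            # protected application
        }
    }
}
```

The `main.conf` file would then include the commercial or OWASP Core Rule Set files the session describes.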
Today’s server core counts are dramatically increasing in data centers, providing larger numbers of available simultaneous processes and threads, yet taking full advantage of the computational power of the modern CPU can be challenging. In this presentation, we will share how Intel worked together with NGINX to take advantage of the SO_REUSEPORT feature recently introduced in newer operating systems to enable greater scalability and performance of the NGINX web server. This optimization removes the 1:1 socket-to-port binding that can create lock contention, and improves throughput by up to 4x on a multi-core Intel-based server platform. We will cover:
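In NGINX, the SO_REUSEPORT optimization discussed above is exposed through the `reuseport` parameter on the `listen` directive (available since NGINX 1.9.1, on kernels that support the socket option, e.g. Linux 3.9+). A minimal sketch:

```nginx
# With reuseport, the kernel creates a separate listening socket per worker
# and distributes incoming connections between them, instead of all workers
# contending on a single shared accept socket.
worker_processes auto;      # one worker per available core

http {
    server {
        listen 80 reuseport;
        location / {
            return 200 "ok\n";
        }
    }
}
```

Because each worker accepts from its own socket, the kernel-level lock contention on the shared listener disappears, which is the source of the throughput gains the talk measures.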
Join us for the nginx.conf opening celebration! Enjoy tasty refreshments while mixing and mingling with the people who are passionate about delivering better application and web performance. See you there!
The adoption of the NGINX open source project has been incredible, and it has fueled tremendous innovation in the developer community. In this keynote, hear about the current and future state of NGINX.
Love NGINX but wish it could autoscale just like ELB? In this talk, you'll learn how to build an NGINX load balancing cluster that handles dynamic upstreams, high availability, failover, and autoscaling. The talk will end with the most important NGINX and Linux configurations for high performance load balancers that can handle 10,000+ connections per second.
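As a flavor of the high-connection-rate tuning the talk covers, a starting-point configuration might look like the sketch below. Every value here is an illustrative assumption to be tuned against real traffic, not a recommendation from the speaker:

```nginx
worker_processes auto;           # one worker per core
worker_rlimit_nofile 100000;     # raise the per-worker file descriptor limit

events {
    worker_connections 10240;    # connections each worker may hold open
    multi_accept on;             # accept as many new connections as possible at once
}

http {
    keepalive_timeout 30;

    upstream backends {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
        keepalive 64;            # pool of idle upstream connections to reuse
    }

    server {
        listen 80 reuseport;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # required for upstream keepalive
            proxy_pass http://backends;
        }
    }
}
```

Matching Linux settings (file-descriptor limits, socket backlog, ephemeral port range) matter just as much as the NGINX side at this connection rate.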
This presentation by Rick Nelson (Head of Pre-sales at NGINX) will demo NGINX Plus and Docker automatically scaling an application based on system load. It will show you how you can use the NGINX Plus status API and the upstream configuration API together to add and remove application containers on the fly in response to the amount of incoming traffic.
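Exposing the two NGINX Plus APIs mentioned above requires an NGINX Plus subscription and a shared memory zone on the upstream group; a minimal sketch (names and the admin port are illustrative) might look like this:

```nginx
upstream app {
    zone app 64k;                # shared memory zone, required for on-the-fly changes
    server 10.0.0.11:8080;
}

server {
    listen 8081;                 # illustrative admin port; restrict access in production

    location /status {
        status;                  # NGINX Plus live activity monitoring API
    }

    location /upstream_conf {
        upstream_conf;           # NGINX Plus dynamic upstream configuration API
    }
}
```

An autoscaling script can then poll `/status` for load metrics and call `/upstream_conf` to register or remove container backends as they are started and stopped.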
NGINX keeps up to date with modern technologies and the latest industry trends, and it was one of the pioneers in supporting the SPDY protocol. But SPDY, despite being almost the same as HTTP/2, carried "experimental" status, which was one of the obstacles to its wide adoption. Now we have HTTP/2, SPDY's successor. In some minor details HTTP/2 is better than SPDY; in others it's worse. Either way, 2016 will be the year of switching from SPDY to HTTP/2, and NGINX will be on the front line, bringing the best possible HTTP/2 support to your web applications. In this short talk, Valentin Bartenev (Core Developer at NGINX) will cover the most common questions about the new protocol and its implementation in NGINX.
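Switching a site from SPDY to HTTP/2 in NGINX is a one-word change on the `listen` directive (the `http2` parameter, available since NGINX 1.9.5, replaces the earlier `spdy` parameter; certificate paths below are illustrative):

```nginx
server {
    listen 443 ssl http2;        # was: listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.pem;   # illustrative paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

As with SPDY, browsers only negotiate HTTP/2 over TLS, so the `ssl` parameter effectively comes along with `http2` in practice.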
Nick Shadrin (Technical Solutions Architect at NGINX) will use a very simple but well-known web application for this live demo session. It runs as a containerized web server and is written in Go. The application knows nothing about security, access control, or auditing. We will take NGINX and apply it in the role of reverse proxy and load balancer with advanced security. The live demo will show how to configure NGINX with two-factor authentication, role-based access control, SSL offloading, and auditing.
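The core idea, putting a security-unaware application behind a security-enforcing proxy, can be sketched with stock NGINX directives. This shows only simple single-factor authentication and SSL offloading; the demo's actual two-factor and role-based setup is not reproduced here, and all paths and ports are illustrative:

```nginx
server {
    listen 443 ssl;                                  # TLS terminates at the proxy
    ssl_certificate     /etc/nginx/certs/app.pem;    # illustrative paths
    ssl_certificate_key /etc/nginx/certs/app.key;

    access_log /var/log/nginx/app_audit.log;         # basic audit trail

    location / {
        auth_basic "Restricted";                     # simple auth layer in front of the app
        auth_basic_user_file /etc/nginx/htpasswd;

        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://127.0.0.1:8000;            # the containerized Go application
    }
}
```

The Go application itself stays unchanged; everything security-related lives in the proxy layer.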
Containers, microservices, immutable infrastructure, continuous integration… is your head spinning?
Let’s take it back a notch. If you’re standing around saying “I just wanna build a web app and unleash it on the world”, this talk is for you. We’ll work through getting your very own web app up and running on NGINX. With that to-do done, you can go back to wondering how long it will be until your environment might need Kubernetes.
Owen Garrett, Head of Products at NGINX, will expand on his keynote and present a detailed product roadmap and take questions from the audience.
NGINX is the secret heart of the modern web. In this keynote, you'll learn about the latest product features, as well as what's on the roadmap.
This session will present an overview of some of the latest security and performance challenges facing web companies, especially those that use NGINX as their fundamental platform. John Graham-Cumming from CloudFlare will talk about some of the latest denial of service (DDoS) attacks and how CloudFlare delivers DDoS mitigation and enhanced performance. Bruce Tolley from Solarflare will outline recent customer case studies showing how user-level networking (OS bypass) technology can increase NGINX performance and enhance network and server security.