Did you know that securing your website with HTTPS can increase your ranking in Google search results? Google announced on their Online Security Blog that their search algorithm now considers HTTPS a ranking signal when returning search results.
HTTP, the Hypertext Transfer Protocol, is the foundation of communication on the Internet. However, it’s insecure because the communication is unencrypted. HTTPS, the secure version of HTTP, uses SSL or the more modern TLS cryptographic protocol to encrypt the data flow. Security‑conscious companies like Dropbox use HTTPS to address a variety of security and privacy challenges.
SSL and TLS Can Create Serious Performance Issues
The search engine optimization (SEO) benefit of securing your website is attractive, but for many companies it's not enough on its own. The reality is that many applications still rely on inefficient architectures in their software stacks and already struggle with an array of performance challenges. Adding SSL or TLS can make applications even slower and more resource hungry.
The SSL/TLS handshake that makes HTTPS secure can impact performance significantly. The handshake is a series of communications between the web browser and server that verifies the connection is trusted. It is a CPU‑intensive process and leads to more round trips between your users and your servers.
Fortunately, modern web servers, such as NGINX and NGINX Plus, address these challenges and allow companies to scale applications tremendously well. NGINX and NGINX Plus provide a number of ways you can alleviate the performance impacts of SSL/TLS, including session caching, session tickets or IDs, OCSP stapling, and the experimental SPDY protocol.
When you include the ssl_session_cache directive in the configuration, NGINX and NGINX Plus cache the session parameters used to create the SSL/TLS connection. This cache, shared among all worker processes when you include the shared parameter, drastically improves response time for subsequent requests because the connection setup information is already known. Assign a name to the cache and set its size (a 1‑MB shared cache accommodates approximately 4,000 sessions).
The ssl_session_timeout directive controls how long the session information remains in the cache. The default value is 5 minutes; increasing it to several hours (as in the following example) improves performance but requires a larger cache.
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 4h;
Session Tickets and IDs
Session tickets store information about specific SSL/TLS sessions. When a client resumes interaction with an application, the session ticket is used to resume the session without renegotiation. Session IDs are an alternative: an MD5 hash is used to map to a specific session stored in the cache created by the ssl_session_cache directive. Both mechanisms can be used to shortcut the SSL/TLS handshake.
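As an illustrative sketch (the server name and certificate paths below are placeholders, not values from this post), session tickets are controlled with the ssl_session_tickets directive, while session IDs rely on the shared cache described above:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate and key paths
    ssl_certificate     /etc/nginx/cert/example.com.crt;
    ssl_certificate_key /etc/nginx/cert/example.com.key;

    # Session tickets are enabled by default; shown here for clarity
    ssl_session_tickets on;

    # Session IDs look up session parameters in this shared cache
    ssl_session_cache   shared:SSL:20m;
    ssl_session_timeout 4h;
}
```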
Another way to improve HTTPS performance is with OCSP stapling, which shortens the SSL/TLS handshake. Traditionally, when a user connects to your application or website via HTTPS, the browser validates the SSL/TLS certificate against a certificate revocation list (CRL) or uses an Online Certificate Status Protocol (OCSP) record from a certificate authority (CA). These requests add latency, and the CAs can be unreliable. With NGINX and NGINX Plus you can cache the OCSP response on your server and eliminate this costly overhead.
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/cert/trustchain.crt;
resolver 18.104.22.168 22.214.171.124 valid=300s;
Specifying optimized cipher suites – the algorithms that encrypt network communications – is sometimes said to increase performance. You can use the ssl_prefer_server_ciphers directive for this purpose, but the reality is that the default settings are good enough for most cases. The defaults, which are also best practice, work equally well for old, current, and (one hopes) future ciphers. We recommend that you select specific ciphers only to fulfill particular security and performance requirements.
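If you do have such a requirement, the relevant directives look like the following sketch. The cipher string here is only an example, not a recommendation; choose ciphers to match your own security policy:

```nginx
# Prefer the server's cipher order over the client's
ssl_prefer_server_ciphers on;

# Example cipher string only – tailor this to your requirements
ssl_ciphers HIGH:!aNULL:!MD5;
```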
Editor – SPDY was the basis for the HTTP/2 standard published in May 2015. Support for SPDY is deprecated – and HTTP/2 is fully supported – in NGINX Plus Release 7 and later, and NGINX 1.9.5 and later.
SPDY is an experimental protocol that attempts to reduce latency and round‑trip times for HTTP traffic. The protocol creates a tunnel between the web browser and application server. Through HTTP multiplexing, SPDY enables concurrent traffic streams over a single TCP connection, reducing the need for additional connections and SSL negotiations. The protocol also compresses request and response HTTP headers, resulting in fewer bytes transmitted. It’s important to note that SPDY provides no benefits if you are sharding domains.
listen 443 ssl spdy;
spdy_headers_comp 1;
Editor – With HTTP/2 support, the http2 parameter replaces the spdy parameter to the listen directive shown in the snippet above. There is no equivalent to the spdy_headers_comp directive for HTTP/2.
# In NGINX Plus R7 and later, and NGINX 1.9.5 and later
listen 443 ssl http2;
The Benefits of Using SSL/TLS are Greater Than Ever Before
Search engine optimization is a focus for many companies with an Internet presence. In fact, Google might make HTTPS a more influential ranking factor in the near future. It’s exciting that there are options available that allow any webmaster to meet the demands of user security efficiently while also delivering the performance users love.
At the end of the day, you shouldn’t be securing your website only for the SEO benefit – but it’s a nice incentive from Google that we hope will motivate you to reconsider your application security. Who doesn’t want to improve the experience and safety of users while also receiving an SEO boost?
To get started, and learn more about how to set up and optimize HTTPS, check out NGINX SSL Termination in the NGINX Plus Admin Guide.
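As a starting point, the directives covered in this post can be combined in a single server block. This is an illustrative sketch, not a complete production configuration; the server name and file paths are placeholders:

```nginx
server {
    # http2 applies to NGINX Plus R7 and later, and NGINX 1.9.5 and later
    listen 443 ssl http2;
    server_name www.example.com;

    # Placeholder certificate and key paths
    ssl_certificate     /etc/nginx/cert/www.example.com.crt;
    ssl_certificate_key /etc/nginx/cert/www.example.com.key;

    # Session caching (a 1-MB shared cache holds roughly 4,000 sessions)
    ssl_session_cache   shared:SSL:20m;
    ssl_session_timeout 4h;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/cert/trustchain.crt;

    # Replace with the DNS resolver appropriate for your environment
    resolver 127.0.0.1 valid=300s;
}
```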