nginx Tuning
nginx is fast, really fast, but you can adjust a few things to make sure it's as fast as possible for your use case. Santa doesn't like it when you spend your hard-earned money on extra server resources you don't really need.
The easy stuff
The easiest thing to set in your configuration is the right number of workers and connections. The directive below tells nginx to run only a single worker process. This might be appropriate for a lower-traffic site where nginx, your database, and your app are all running on the same server.
worker_processes 1;
But if you have a higher-traffic site or a dedicated instance for nginx, you want one worker per CPU core. nginx provides a nice option that tells it to automatically set this to one worker per core, like this:
worker_processes auto;
nginx is much more efficient about handling connections than many other web servers, so it's usually a safe bet to set the connection count to a high value. We'll also tell nginx to use epoll so we handle a large number of connections optimally, and direct it to accept multiple connections at the same time. Our config should now look like this:
worker_processes auto;

events {
    use epoll;
    worker_connections 1024;
    multi_accept on;
}
Assuming a system with 4 cores, this would allow 4096 simultaneous connections. If you raise your kernel's open file limits, you can easily bump worker_connections up to 2048 or 4096, allowing more connections than marketing can likely drive to your site.
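nginx can also raise its own file descriptor limit with worker_rlimit_nofile, so a rough sketch of a higher-connection setup might look like this (the specific numbers here are assumptions to tune for your hardware):

worker_processes auto;

# Raise the per-worker open file limit so the higher connection
# count below doesn't run into "too many open files" errors.
worker_rlimit_nofile 8192;

events {
    use epoll;
    worker_connections 4096;
    multi_accept on;
}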
Since you'll likely have a few static assets on the file system, like logos, CSS files, and JavaScript, that are commonly used across your site, it's quite a bit faster to have nginx cache information about these files for short periods of time. Adding this inside the http block tells nginx to cache 1000 files for 30 seconds, excluding any files that haven't been accessed in 20 seconds, and only keeping files that have been accessed 5 times or more. If you aren't deploying frequently, you can safely bump these numbers up higher.
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 5;
open_file_cache_errors off;
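As a rough sketch, "bumped up" for a site that deploys infrequently might look something like this (the exact values are assumptions, not gospel):

# Assumed values for a site that rarely deploys; tune to taste.
open_file_cache max=5000 inactive=60s;
open_file_cache_valid 120s;
open_file_cache_min_uses 2;
open_file_cache_errors off;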
Socket Stuffers
Since we're now set up to handle lots of connections, we should allow browsers to keep their connections open for a while so they don't have to reconnect as often. This is controlled by the keepalive_timeout setting. We're also going to turn on sendfile support, tcp_nopush, and tcp_nodelay. sendfile optimizes serving static files from the file system, like your logo. The other two optimize nginx's use of TCP for headers and small bursts of traffic for things like Socket.IO or frequent REST calls back to your site.
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
}
As we mentioned in yesterday's article on Front End Performance, nearly every browser on earth supports receiving compressed content, so we definitely want to turn that on. These directives also go in the same http section as above:
gzip on;
gzip_min_length 1000;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
Walking in a WSGI Wonderland
CI bells ring, are you listening?
In the rack, servers are glistening
We’re deploying tonight
Walking in a WSGI wonderland
What the Elves sing during deployments
Many of our readers use Python and Django, so you're very likely using nginx to reverse proxy to WSGI processes. This is pretty easy to configure, but most of the time people don't optimize this bit at all.
You'll need to adjust this for your particular site, but something like this is a good starting point:
location / {
    proxy_buffers 8 24k;
    proxy_buffer_size 2k;
    proxy_pass http://127.0.0.1:8000;
}
proxy_buffering is turned on by default in nginx, so we just need to bump up the sizes of these buffers. The first directive, proxy_buffers, tells nginx to create and use eight 24k buffers for the response from the proxy. The second directive sets up a special smaller buffer that holds just the response headers, so it's safe to make that one smaller.
So what does this do? When you're proxying a connection, nginx plays middle man between the browser and your WSGI process. As the WSGI process writes data back to nginx, nginx stores it in a buffer and writes out to the client browser when the buffer is full. If we leave these at the defaults nginx provides (8 buffers of either 4k or 8k depending on the system), what ends up happening is our big 50-200K of HTML markup is spoon-fed to nginx in small 4k bites and then sent out to the browser.
This is sub-optimal for most sites. What we want is for our WSGI process to finish and move on to the next request as fast as possible, and to do that it needs nginx to slurp up all of its output quickly. Increasing the buffer sizes to be larger than most (or all) of the markup of your app's pages lets this happen; the example above gives us 8 × 24k = 192k of buffer space, comfortably more than a typical 50-200K page.
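For instance, if your app's pages run closer to 400K, you might scale things up along these lines (these sizes are assumptions; measure your own responses first):

location / {
    # 16 × 32k = 512k of buffer space, enough to absorb a ~400k
    # response from the WSGI process in one go.
    proxy_buffers 16 32k;
    proxy_buffer_size 4k;
    proxy_pass http://127.0.0.1:8000;
}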
Safety
Another thing to think about is keeping people on Santa's naughty list from causing too much trouble. The way to do this is with rate limiting. While this isn't sufficient protection from something like a DDoS, it's enough to keep your site protected from smaller floods of traffic.
Rate limiting in nginx is pretty easy to set up and fairly CPU/memory efficient, so good elves turn it on. Our friend Peter over at Lincoln Loop has a great write-up on rate limiting with nginx to get you started.
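As a minimal sketch to get the flavor of it, nginx's limit_req module does the job; the zone name, size, and rate below are assumptions you'd tune for your traffic:

http {
    # Track clients by IP in a 10MB shared memory zone, allowing
    # an average of 10 requests per second from each address.
    limit_req_zone $binary_remote_addr zone=santaslist:10m rate=10r/s;

    server {
        location / {
            # Permit short bursts of up to 20 queued requests
            # before nginx starts rejecting the excess.
            limit_req zone=santaslist burst=20;
        }
    }
}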
We hope this helps you eke out a bit more performance from nginx!