December 11, 2015

TCP load balancing with Nginx (SSL Pass-thru)


Learn to use Nginx 1.9.* to load balance TCP traffic. In this case, we'll set up SSL Pass-thru to pass SSL traffic received at the load balancer on to the web servers. Nginx 1.9.3+ comes with TCP load balancing. Prior to this, Nginx only dealt with the HTTP protocol. Now, however, Nginx can work with the lower-level TCP (HTTP works over TCP).

SSL Pass-thru

With SSL Pass-thru, Nginx is dealing in encrypted TCP traffic - it does not decrypt it, and cannot read information about the HTTP request. Its job is merely to send TCP packets to other servers based on its load balancing configuration. This has some side effects - notably that Nginx can't decide which server to send traffic to based on the Host header (although SNI can get around that - that's a topic for another day).

You may be more used to SSL-Termination. In that scenario, traffic is decrypted at the load balancer. This lets Nginx read the HTTP headers and do fancy things like adjust headers, add headers, see the Host header to route to different servers, etc.
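For comparison, an SSL-Termination setup might look something like this sketch (hypothetical backend IPs and certificate paths - with termination, no stream block is needed, so this lives in the normal http context):

```nginx
# SSL-Termination sketch - decrypt at the load balancer
# (IPs and cert paths here are just for illustration)
upstream app {
    server 10.0.0.2;
    server 10.0.0.3;
}

server {
    listen 443 ssl;

    ssl_certificate     /etc/ssl/example/example.crt;
    ssl_certificate_key /etc/ssl/example/example.key;

    location / {
        # Since traffic is decrypted here, we can read and
        # add HTTP headers before passing the request along
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app;
    }
}
```

Note that proxy_pass here points at http:// - traffic between the load balancer and the web servers travels unencrypted, which is exactly the trade-off discussed below.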

When to use Pass-Thru

Pass-through SSL traffic is encrypted all the way to the end web server. Conversely, with SSL-Termination, traffic between the load balancer and web servers is not encrypted. Pass-through therefore can be seen as more secure (although you can combine the two - terminate at the load balancer, and re-encrypt the traffic before sending it to the web servers).

SSL Pass-through "balances" the CPU cycles needed to decrypt traffic amongst the web servers. You can decide if you'd like your load balancer to bear the brunt of SSL decryption CPU cycles, or make the web servers distribute that load amongst themselves. SSL Termination is more common - the configuration is overall simpler. You can decide for yourself which is better (I have zero metrics on performance).

Personally, I don't think it's useful to care about SSL Termination vs Pass-thru from a performance point of view unless you are "at scale", where this can actually affect end users. In other words, chances are you shouldn't worry about that too much - instead worry about if you want traffic encrypted end-to-end or not.

SSL Termination is often "OK", as the decrypted traffic going between the load balancer and web servers is often on a private network amongst servers in the same data center.

For SSL Pass-thru, we miss out on the opportunity to add any information about the traffic being load balanced (chiefly, the X-Forwarded-* headers).

Installing Nginx

Let's get started - we'll install Nginx from the MAINLINE branch, as this feature isn't yet in the STABLE branch of Nginx:

sudo add-apt-repository -y ppa:nginx/development
sudo apt-get update
sudo apt-get install -y wget curl tmux unzip \
    nginx

We'll need two servers to test this out. First, let's configure the load balancer.

Load Balancer (Server A, at 52.90.130.140):

Edit /etc/nginx/nginx.conf. We'll add an include statement outside of the http block. We do this because we need to include configuration for the stream block, which signals to Nginx to expect TCP traffic. We can't use stream inside of an http block, which is where the include statement normally resides for including the /etc/nginx/conf.d/*.conf and /etc/nginx/sites-enabled/* files.

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    #...
}

http {
    # ...
}

# Add this include statement:
include /etc/nginx/tcpconf.d/*;

Then make that directory to include configurations:

sudo mkdir -p /etc/nginx/tcpconf.d

Finally, we'll create the stream configuration in a new file at /etc/nginx/tcpconf.d/lb:

stream {
    upstream web_server {
        # Our web server, listening for SSL traffic
        # Note the web server will expect traffic
        # at this xip.io "domain", just for our
        # example here
        server 52.23.215.245.xip.io:443;
    }

    server {
        listen 443;
        proxy_pass web_server;
    }
}
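With more than one web server, the upstream block distributes connections amongst them. A sketch with hypothetical additional IPs (round-robin is the default; least_conn is one of the other balancing methods available in the stream context):

```nginx
stream {
    upstream web_server {
        # Send each new connection to the server with the
        # fewest active connections (default is round-robin)
        least_conn;

        server 52.23.215.245:443;
        server 52.23.215.246:443;         # hypothetical second web server
        server 52.23.215.247:443 backup;  # only used if the others are down
    }

    server {
        listen 443;
        proxy_pass web_server;
    }
}
```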

Then enable it / test that new configuration:

# Test it
sudo service nginx configtest

# Reload Nginx if that reports it's OK
sudo service nginx reload

To reiterate: The reason we have to edit nginx.conf is because it includes items from conf.d and sites-enabled within the http block. However, with TCP traffic, we need configuration to be within the stream block. "Stream" tells Nginx to expect TCP traffic rather than HTTP traffic.

Web Server (Server B, at 52.23.215.245):

Start by creating a self-signed SSL certificate. We'll just use a self-signed one for this example, but in production this would be a purchased certificate (or one created via https://letsencrypt.org/):

cd /etc/ssl
sudo mkdir example
cd example
sudo openssl genrsa -out example.key 2048
sudo openssl req -new -key example.key -out example.csr
sudo openssl x509 -req -days 365 -in example.csr -signkey example.key -out example.crt
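The req step above asks a series of questions interactively. The same key and certificate can be generated in one non-interactive command by supplying the subject with -subj (the subject values here are just for illustration; we'll write to /tmp so no sudo is needed, but in practice you'd target /etc/ssl/example as above):

```shell
# One-shot, non-interactive self-signed cert (key + cert in a single command).
# -nodes skips encrypting the private key with a passphrase.
mkdir -p /tmp/example
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /tmp/example/example.key \
    -out /tmp/example/example.crt \
    -subj "/C=US/ST=CT/L=Example/O=Example/CN=52.23.215.245.xip.io"
```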

Then we can do a more familiar configuration for the web server - just as if we're setting up a normal server.

Create (or edit) /etc/nginx/sites-available/default:

server {
    # Not listening on port 80 - we expect all traffic
    # to come from our load balancer, which sends it
    # over port 443
    listen 443 ssl default_server;

    # Configuration taken from H5BP Nginx Server configs for SSL traffic
    ssl on;
    ssl_certificate     /etc/ssl/example/example.crt;
    ssl_certificate_key /etc/ssl/example/example.key;
    ssl_protocols              TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers                ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    ssl_prefer_server_ciphers  on;
    ssl_session_cache    shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
    ssl_session_timeout  24h;
    keepalive_timeout 300; # up from 75 secs default

    root /var/www/html;

    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}

Save that and test/reload Nginx:

# Test it
sudo service nginx configtest

# Reload Nginx if that reports it's OK
sudo service nginx reload

Now you should be able to head to the load balancer over port 443 (https in the browser) to test it out! Since this uses a self-signed certificate, you'll be asked to click through the invalid-certificate warning. The SSL connection is terminated at the web server instead of at the load balancer!
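From a terminal, the same check can be made with curl and openssl (using the load balancer IP from this example; -k skips verification since our certificate is self-signed):

```shell
# Hit the load balancer; -k skips verification of the self-signed cert
curl -k -I https://52.90.130.140/

# Inspect which certificate the client actually receives - it should be
# the web server's cert, since the load balancer never decrypts anything
echo | openssl s_client -connect 52.90.130.140:443 2>/dev/null \
    | openssl x509 -noout -subject
```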
