WebSockets Over SSL With Node.js and Nginx

June 5th, 2013 Permalink

Before Nginx introduced support for websocket proxying in version 1.3.13, I wrote a couple of posts outlining how to serve both SSL-encrypted websocket and web traffic on the same port and same server, with Node.js on the back end and either HAProxy or Varnish and Stunnel as the front end. The typical scenario here is that you are setting up servers for a single page web application that uses Express.js to serve content and Socket.IO to manage websocket connections. This might be a chatroom application, for example, the ubiquitous project that everyone hacks together to illustrate how to use Socket.IO. It is often the case that your single page application is part of a larger site, and that site may or may not be served by Express.js, so it's important to be able to direct traffic to multiple back ends.

Now that Nginx has support for websockets, I can write what is hopefully the last post in this series to show how to set up a secure Node.js web and websocket server with an Nginx front end.

Overview

The goal here is to produce a server that can handle normal page requests and websocket traffic over the same port and domain - no oddball websocket games with different ports or different hostnames for the websocket traffic. So the intended web application may serve a mix of websockets, static content, dynamic pages, and so forth.

  • Nginx listens on ports 80 and 443.
  • Nginx redirects all HTTP traffic to HTTPS.
  • Nginx identifies traffic intended for Node.js, terminates the SSL connections, and passes unencrypted HTTP traffic to Node.js processes.
  • Nginx serves static files and other non-Node.js pages.
  • Node.js processes running an HTTP server with Socket.IO listen on ports 10080, 10081, and 10082.

This arrangement will be created on an Ubuntu 12.04 server - you will have to adjust package installation and configuration file locations accordingly if working with another branch of Unix.

Set up Your Node.js Processes

For the purposes of this post we'll assume that you are running a clustered set of Node.js server processes based on Express.js and Socket.IO that listen for HTTP and websocket traffic on ports 10080, 10081, and 10082. You always want to run more than one Node.js process on a given server to (a) take advantage of multiple processor cores, and (b) provide redundancy in case of failure.
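
As a point of reference, a minimal sketch of one such process might look like the following, assuming the Express 3 and Socket.IO 0.9 APIs that were current at the time of writing. The PORT environment variable and the welcome event are just placeholders for illustration:

var express = require('express');
var http = require('http');

var app = express();
var server = http.createServer(app);
// Socket.IO 0.9 attaches to an existing HTTP server.
var io = require('socket.io').listen(server);

// Normal HTTP traffic handled by Express.js.
app.get('/served/by/node', function (req, res) {
  res.send('Served by Node.js');
});

// Websocket traffic handled by Socket.IO. Note that the handshake path it
// uses (/socket.io/ by default) must fall under a location block that Nginx
// proxies to these processes.
io.sockets.on('connection', function (socket) {
  socket.emit('welcome', { message: 'connected' });
});

// Each process in the cluster listens on its own port: 10080, 10081, or 10082.
server.listen(process.env.PORT || 10080);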

If you look back in the archives, you'll find a post on how to set up Node.js processes as services using Forever in Ubuntu.

Install Nginx

You probably want to go with a package installation on Ubuntu, as it's a pain to build Nginx from source while ensuring that all the various service, configuration, and other files end up in the right places and with the right contents. To install Nginx 1.4.1 on Ubuntu 12.04 (as of Q2 2013), log in as root and enter the following commands to add a suitable repository and run the installation:

add-apt-repository ppa:nginx/stable
apt-get update
apt-get install nginx

Configure Nginx

Replace the contents of /etc/nginx/sites-available/default with the following configuration:

#
# Nginx configuration file for secure websocket applications.
#
# - Listens on 80 (HTTP) and 443 (HTTPS)
# - Redirects all port 80 traffic to port 443
# - Manages load balancing across Node.js upstream processes.
#

upstream node {
  # Direct each request to the process with the fewest active connections.
  least_conn;
  # One failed response will take a server out of circulation for 20 seconds.
  server 127.0.0.1:10080 fail_timeout=20s;
  server 127.0.0.1:10081 fail_timeout=20s;
  server 127.0.0.1:10082 fail_timeout=20s;
}

server {
  # Listen on 80 and 443
  listen 80;
  listen 443 ssl;
  # Self-signed certificate.
  ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
  ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
  # Certificate chained with a certificate authority bundle.
  # ssl_certificate /etc/ssl/certs/example.com.chained.crt;
  # ssl_certificate_key /etc/ssl/private/example.com.key;

  # Redirect all non-SSL traffic to SSL.
  if ($ssl_protocol = "") {
    rewrite ^ https://$host$request_uri? permanent;
  }

  # Split off traffic to Node.js backends, and make sure that websockets
  # are managed correctly.
  location /served/by/node {
    proxy_pass http://node;
    proxy_http_version 1.1;
    # Pass through the client's headers rather than hardcoding them, so that
    # ordinary HTTP requests through this location are not treated as upgrades.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }

  # Send everything else to a local webroot.
  root /var/www;
  index index.html index.htm;
  location / {
    try_files $uri $uri/ /index.html;
  }
}

Create the SSL Certificate File

Generating a snakeoil SSL certificate is easy enough. The following commands will put a key and certificate into the locations specified in the Nginx configuration file above.

apt-get install ssl-cert
make-ssl-cert generate-default-snakeoil --force-overwrite

If you purchase a certificate, you'll need to make sure that you append any chain certificates from the provider to the end of your certificate file. See the docs for the SSL module in Nginx for notes on that. For example:

cd /etc/ssl
cat certs/example.com.crt certs/provider-chain.crt > certs/example.com.chained.crt

Set up Static Content

Static content resides in /var/www and will be served by Nginx directly rather than Node.js. This might even be PHP or similar code rather than simply static assets. Make sure this webroot and its contents have permissions that allow Nginx to read them.
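
For reference, a page in this webroot that talks to the Node.js back end would typically load the Socket.IO client script and connect back over the same origin, so that the browser ends up with a secure websocket through Nginx whenever the page is served over HTTPS. Here is a minimal sketch, assuming a Socket.IO 0.9 client and that the handshake path it uses is covered by the location block that proxies to the Node.js upstream; the welcome event matches the hypothetical server sketch above:

// Client-side script in a page under /var/www.
// Assumes <script src="/socket.io/socket.io.js"></script> has already loaded.
var socket = io.connect(window.location.protocol + '//' + window.location.host);

socket.on('connect', function () {
  // When the page is served over HTTPS, this connection is a secure
  // websocket terminated by Nginx and proxied to Node.js as plain HTTP.
  console.log('Connected over the same origin.');
});

socket.on('welcome', function (data) {
  console.log('Server says: ' + data.message);
});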

Restart Nginx

Now restart Nginx to pick up the configuration.

service nginx restart
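
If you want a quick sanity check that the HTTP to HTTPS redirect is in place, a throwaway Node.js script along these lines will do; the hostname is a placeholder:

// Expect a 301 response pointing at the https:// version of the URL.
var http = require('http');

http.get({ host: 'example.com', port: 80, path: '/' }, function (res) {
  console.log('Status: ' + res.statusCode);
  console.log('Location: ' + res.headers.location);
});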

Notes on Nginx Versus HAProxy as the Front End

Both HAProxy and Nginx are fast and reliable. HAProxy isn't a web server, so if you want to serve assets from the file system via something other than Node.js you have to add a web server process as a back end - Nginx, for example. Nginx isn't a dedicated proxy, so when used as a front end it offers only limited options for traffic management and health checks of back end servers. The main functional difference between the setup here and the one outlined in the HAProxy post is that this one has a weaker health check mechanism: Nginx only performs passive checks, so at least one real request must fail before traffic stops flowing to a failing back end, and the fail_timeout value is a guess at how long to wait before sending traffic to it again. HAProxy, in comparison, can be configured to run very frequent active health checks, so far fewer requests ever reach a dead process. If you like the idea of using Nginx alone to avoid running both Nginx and HAProxy in your stack, you might be fine with that trade-off, but either way be aware that it exists.