WebSockets Over SSL: HAProxy, Node.js, Nginx

December 8th, 2012

A little while back I bemoaned the state of server technology for secure websockets, as I was at the time putting in work on a Socket.IO side project that needed to be HTTPS only. In due course that led to a later post outlining a server setup that used Stunnel and Varnish as the frontend proxies. It was a little Rube Goldberg, but it worked nonetheless.

I had written off HAProxy as a frontend option pretty early on because it didn't have SSL support. A little while after publishing the posts above, however, the HAProxy maintainer emailed me out of the blue to note that as of 1.5-dev12 HAProxy does in fact have native SSL support. So here I'll redo the Stunnel-Varnish-Node.js-Nginx post with HAProxy in place of Stunnel and Varnish.

Overview

The goal here is to produce a server that can handle normal page requests and websocket traffic over the same port and domain - no oddball websocket games with different ports or different hostnames for the websocket traffic. So the intended web application may serve a mix of websockets, static content, dynamic pages, and so forth.

We'll be making use of HAProxy and Nginx in addition to Node.js. HAProxy is the frontend because it can correctly proxy websocket traffic, while Nginx cannot (at least prior to 1.3, per the roadmap). An outline of the server setup is as follows:

  • HAProxy listens on ports 80 and 443.
  • HAProxy redirects all HTTP traffic to HTTPS.
  • HAProxy terminates SSL connections and passes unencrypted traffic to either Node.js or Nginx.
  • Nginx listens on port 8080. It serves static files and other non-Node.js pages.
  • A Node.js HTTPServer with Socket.IO set up listens on port 10080.

This arrangement will be created on an Ubuntu 12.04 server - you will have to adjust package installation and configuration file locations accordingly if working with another branch of Unix.
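For concreteness, here is a minimal sketch of the sort of Node.js backend this outline assumes - a plain HTTP server with Socket.IO attached (using the Socket.IO 0.9-era API), plus a trivial health check path that the HAProxy configuration further down polls. Your real application will do considerably more; this is only to fix ideas.

// app.js - minimal sketch of the assumed Node.js backend.
var http = require('http');

var server = http.createServer(function (req, res) {
  if (req.url === '/served/by/node/isrunning') {
    // Health check path polled by HAProxy in the configuration below.
    res.writeHead(200);
    res.end('ok');
    return;
  }
  res.writeHead(404);
  res.end();
});

// Attach Socket.IO to the same HTTP server (0.9-era API).
var io = require('socket.io').listen(server);
io.sockets.on('connection', function (socket) {
  socket.emit('hello', { ready: true });
});

server.listen(10080);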

Install Packages

We'll assume that Node.js is already installed - set up and running as a service on port 10080 - and dive right into the rest of the setup. First install the necessary packages, including those you will need to build HAProxy from source:

apt-get install nginx libpcre3 libpcre3-dev libssl-dev build-essential

Build HAProxy

Download the HAProxy source for 1.5-dev12 or later. Here we're going with 1.5-dev14. If you are reading this much after 2012, version 1.5 will hopefully be released and available in the package repositories with SSL support - in that case, just install HAProxy via apt-get and skip ahead to the configuration step.

wget http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev14.tar.gz
tar -xf haproxy-1.5-dev14.tar.gz
cd haproxy-1.5-dev14
make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
make install
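
Once the build completes, it is worth confirming that SSL support was actually compiled in - the -vv flag prints the build options, including the OpenSSL version HAProxy was built against:

/usr/local/sbin/haproxy -vv | grep -i openssl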

Put the following script into /etc/init.d/haproxy - this is taken from the existing HAProxy package for earlier versions:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          haproxy
# Required-Start:    $local_fs $network $remote_fs
# Required-Stop:     $local_fs $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: fast and reliable load balancing reverse proxy
# Description:       This file should be used to start and stop haproxy.
### END INIT INFO

# Author: Arnaud Cornet <acornet@debian.org>

PATH=/sbin:/usr/sbin:/bin:/usr/bin
PIDFILE=/var/run/haproxy.pid
CONFIG=/etc/haproxy/haproxy.cfg
HAPROXY=/usr/local/sbin/haproxy
EXTRAOPTS=
ENABLED=0

test -x $HAPROXY || exit 0
test -f "$CONFIG" || exit 0

if [ -e /etc/default/haproxy ]; then
	. /etc/default/haproxy
fi

test "$ENABLED" != "0" || exit 0

[ -f /etc/default/rcS ] && . /etc/default/rcS
. /lib/lsb/init-functions

haproxy_start(){
	start-stop-daemon --start --pidfile "$PIDFILE" \
		--exec $HAPROXY -- -f "$CONFIG" -D -p "$PIDFILE" \
		$EXTRAOPTS || return 2
	return 0
}

haproxy_stop(){
	if [ ! -f $PIDFILE ] ; then
		# This is a success according to LSB
		return 0
	fi
	for pid in $(cat $PIDFILE) ; do
		/bin/kill $pid || return 4
	done
	rm -f $PIDFILE
	return 0
}

haproxy_reload(){
	$HAPROXY -f "$CONFIG" -p $PIDFILE -D $EXTRAOPTS -sf $(cat $PIDFILE) \
		|| return 2
	return 0
}

haproxy_status(){
	if [ ! -f $PIDFILE ] ; then
		# program not running
		return 3
	fi
	for pid in $(cat $PIDFILE) ; do
		if ! ps --no-headers p "$pid" | grep haproxy > /dev/null ; then
			# program running, bogus pidfile
			return 1
		fi
	done
	return 0
}

case "$1" in
start)
	log_daemon_msg "Starting haproxy" "haproxy"
	haproxy_start
	ret=$?
	case "$ret" in
	0)
		log_end_msg 0
		;;
	1)
		log_end_msg 1
		echo "pid file '$PIDFILE' found, haproxy not started."
		;;
	2)
		log_end_msg 1
		;;
	esac
	exit $ret
	;;
stop)
	log_daemon_msg "Stopping haproxy" "haproxy"
	haproxy_stop
	ret=$?
	case "$ret" in
	0|1)
		log_end_msg 0
		;;
	2)
		log_end_msg 1
		;;
	esac
	exit $ret
	;;
reload|force-reload)
	log_daemon_msg "Reloading haproxy" "haproxy"
	haproxy_reload
	case "$?" in
	0|1)
		log_end_msg 0
		;;
	2)
		log_end_msg 1
		;;
	esac
	;;
restart)
	log_daemon_msg "Restarting haproxy" "haproxy"
	haproxy_stop
	haproxy_start
	case "$?" in
	0)
		log_end_msg 0
		;;
	1)
		log_end_msg 1
		;;
	2)
		log_end_msg 1
		;;
	esac
	;;
status)
	haproxy_status
	ret=$?
	case "$ret" in
	0)
		echo "haproxy is running."
		;;
	1)
		echo "haproxy dead, but $PIDFILE exists."
		;;
	*)
		echo "haproxy not running."
		;;
	esac
	exit $ret
	;;
*)
	echo "Usage: /etc/init.d/haproxy {start|stop|reload|restart|status}"
	exit 2
	;;
esac
:

Update the service definitions:

cd /etc/init.d
chmod a+x haproxy
update-rc.d haproxy defaults

Create a user and a configuration directory:

useradd haproxy
mkdir /etc/haproxy

Create a file /etc/default/haproxy with the following contents:

# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
# Add extra flags here
# EXTRAOPTS="-de -m 16"

Configure HAProxy

Create a new configuration file /etc/haproxy/haproxy.cfg based on the following example. This is sparse, and probably unsuited to any specific application you might have in mind - tailor it appropriately to your use case:

global
  log 127.0.0.1 local1 notice
  maxconn 4096
  user haproxy
  group haproxy
  daemon
  ca-base /etc/ssl
  crt-base /etc/ssl

defaults
  log global
  maxconn 4096
  mode http
  # Add x-forwarded-for header.
  option forwardfor
  option http-server-close
  timeout connect 5s
  timeout client 30s
  timeout server 30s
  # Long timeout for WebSocket connections.
  timeout tunnel 1h

frontend public
  # HTTP
  bind :80
  # Redirect all HTTP traffic to HTTPS
  redirect scheme https if !{ ssl_fc }

  # HTTPS
  # Example with CA certificate bundle
  # bind :443 ssl crt cert.pem ca-file bundle.crt
  # Example without a CA certificate bundle
  bind :443 ssl crt snakeoil.pem

  # The node backends - websockets will be managed automatically, given the
  # right base paths to send them to the right Node.js backend.
  #
  # If you wanted to specifically send websocket traffic somewhere different
  # you'd use an ACL like { hdr(Upgrade) -i WebSocket }. Looking at path works
  # just as well, though - such as { path_beg /socket.io } or similar. Adjust your
  # rules to suit your specific setup.
  use_backend node if { path_beg /served/by/node/ }
  # Everything else to Nginx.
  default_backend nginx

backend node
  # Tell the backend that this is a secure connection,
  # even though it's getting plain HTTP.
  reqadd X-Forwarded-Proto: https

  balance leastconn
  # Check by hitting a page intended for this use.
  option httpchk GET /served/by/node/isrunning
  timeout check 500ms
  # Wait 500ms between checks.
  server node1 127.0.0.1:10080 check inter 500ms
  server node2 127.0.0.1:10081 check inter 500ms
  server node3 127.0.0.1:10082 check inter 500ms
  server node4 127.0.0.1:10083 check inter 500ms

backend nginx
  # Tell the backend that this is a secure connection,
  # even though it's getting plain HTTP.
  reqadd X-Forwarded-Proto: https

  balance leastconn
  # Check by hitting a page intended for this use.
  option httpchk GET /isrunning
  timeout check 500ms
  # Wait 500ms between checks.
  server nginx1 127.0.0.1:8080 check inter 500ms
  server nginx2 127.0.0.1:8081 check inter 500ms

# For displaying HAProxy statistics.
frontend stats
  # HTTPS only.
  # Example with CA certificate bundle
  # bind :1936 ssl crt cert.pem ca-file bundle.crt
  # Example without a CA certificate bundle
  bind :1936 ssl crt snakeoil.pem
  default_backend stats

backend stats
  stats enable
  stats hide-version
  stats realm Haproxy\ Statistics
  stats uri /
  stats auth admin:password

Note that HAProxy logs via syslog, which means that you have to make some changes to the default rsyslog configuration in order to capture its output. Uncomment these lines in /etc/rsyslog.conf:

# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

Then create /etc/rsyslog.d/30-haproxy.conf and place the following content into it:

local1.* -/var/log/haproxy_1.log
& ~

The & ~ line discards the matched messages once they have been written, so that HAProxy entries don't also end up in /var/log/syslog.

Log rotation for the HAProxy log file is set up by creating /etc/logrotate.d/haproxy with the following content:

/var/log/haproxy*.log {
    rotate 4
    weekly
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        reload rsyslog >/dev/null 2>&1 || true
    endscript
}

You must restart rsyslog for these logging changes to take effect:

restart rsyslog
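
To confirm that rsyslog is now accepting UDP syslog traffic, check for a listener on port 514 (the port uncommented above):

netstat -lnu | grep 514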

Create the SSL Certificate File

Your SSL key and certificate must be concatenated into a single file. The order is important: key then certificate. For the default snakeoil certificate referenced in the example configuration above:

apt-get install ssl-cert
make-ssl-cert generate-default-snakeoil --force-overwrite
cd /etc/ssl
cat private/ssl-cert-snakeoil.key certs/ssl-cert-snakeoil.pem > snakeoil.pem
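
With the certificate file in place, you can have HAProxy validate the configuration without starting it - the -c flag parses the configuration file and reports any errors:

/usr/local/sbin/haproxy -c -f /etc/haproxy/haproxy.cfg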

Configure Nginx

The assumption is that Nginx will be set up here to serve both normal website content and static files. This could be anything not served by Node.js - PHP, flat HTML, static assets such as JavaScript and images, proxying traffic to other backends, and so on. So configure Nginx for your specific use case.
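
As a minimal sketch only - the server name and document root here are hypothetical stand-ins - a server block for this arrangement might look something like the following. Note the /isrunning location, which answers the health check defined via option httpchk in the HAProxy configuration above:

server {
    listen 8080;
    server_name example.com;

    # Answer the HAProxy health check.
    location /isrunning {
        return 200;
    }

    # Serve static content; adjust the root to suit your application.
    location / {
        root /var/www/example;
        index index.html;
    }
}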

Restart Everything and Test

Restart the services after you are done with configuration:

service haproxy restart
service nginx restart

Now test your Node.js / Socket.IO application - it should work just fine.
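
If you want a quick smoke test from the command line first, something like the following should confirm the basics. The hostname is a stand-in for your own domain, and -k is needed because the snakeoil certificate is self-signed:

# Expect a redirect to the HTTPS version of the site.
curl -I http://example.com/

# Expect a 200 from the Nginx health check page over SSL.
curl -k https://example.com/isrunning

# Expect a 200 from the Node.js health check page over SSL.
curl -k https://example.com/served/by/node/isrunning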