Setting up a Secure Ghost Blog in a Docker Container


In this tutorial I'm going to set up a Ghost blog from scratch. It's going to run inside a Docker container, on a server from DigitalOcean, and serve all its traffic over HTTPS using a free SSL certificate from Let's Encrypt.

Although all of this information is available from various sources on the web, these technologies are all changing pretty quickly. I found it frustrating to dig through articles from different time periods where certbot behaved differently, or where someone was referencing the pre-1.0 version of Ghost. I hope someone will find it useful to have it all consolidated in one place. Here are the versions of all the software that I'm using:

  • Ubuntu 16.04
  • Nginx 1.10.3
  • Certbot 0.19.0
  • Ghost 1.17.0
  • Docker 1.12.6

The Steps

  1. Create a DigitalOcean Droplet
  2. Secure the New Server
  3. Set Up DNS For Your Domain
  4. Install Docker and Start a Ghost Container
  5. Install Nginx and Configure
  6. Install certbot and Generate a Certificate
  7. Rejoice

Create a DigitalOcean Droplet

A droplet on DigitalOcean is an SSD-backed server that spins up in under a minute; it's pretty impressive. While there are cheaper options out there, I have found that they are cheaper for a reason (I'm looking at you, Scaleway). On the surface these two providers appear to have similar features, and Scaleway is cheaper, but with Scaleway I've had:

  • Servers that just refuse to spin up, and don't provide any errors.
  • Error messages when trying to use their storage service (so much for having the same features).
  • A petty frustration every time I log in: their login page opens in a new tab.

To be perfectly fair, I haven't tried contacting Scaleway to resolve any of this; perhaps their customer support is stellar. But frankly I don't want to deal with customer service, I just want things to work. With DigitalOcean I've had fantastic performance and reliability, so I highly recommend them. No, they didn't pay me to write this, I actually just love them. And I want you to sign up through my referral link so I get free credits. And hey, you'll get $10 in free credits to start too!

I'll be using a 1GB droplet running Ubuntu 16.04 for this tutorial, though the 512MB droplet at $5/month would likely be sufficient as well.

Secure the New Server

The internet is a scary place; don't hang your data out there for anyone to see.

While the finer points of server security are outside of both the scope of this post and my area of expertise, here's my Ansible playbook for automating the most common-sense security measures on a new server: https://github.com/daneverson/ansible

Again, I'm not an authority on these matters, though I'm constantly trying to learn more. Advice is always appreciated and caution is always advised.
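
If you'd rather do (or just understand) the basics by hand, the common-sense measures that a playbook like this automates look roughly like the following. This is a rough sketch, not an exhaustive hardening guide, and the deploy user name is just an example (the firewall rule for Nginx comes later):

# Create a non-root user with sudo rights, and stop using root day-to-day
$ sudo adduser deploy
$ sudo usermod -aG sudo deploy

# Let SSH through the firewall, then turn the firewall on
$ sudo ufw allow OpenSSH
$ sudo ufw enable

# In /etc/ssh/sshd_config set PasswordAuthentication no and PermitRootLogin no, then restart SSH
$ sudo systemctl restart ssh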

Set up DNS For Your Domain

Again, DigitalOcean makes this pretty easy. Just head to the Networking tab in your DigitalOcean dashboard, add your domain, and create a couple of A records: one for the bare domain (e.g. example.com) and one with the www prefix (e.g. www.example.com).

If you want to make sure this is working, or check that your DNS settings have propagated out into the wild, do the following:

$ nslookup example.com

and verify that you see the IP address of your recently created droplet in the response.
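
The output will look something like this (the resolver and address below are just placeholders; you should see your droplet's actual IP):

Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
Name:   example.com
Address: 203.0.113.10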

Install Docker and Start a Ghost Container

While Ghost has a snazzy CLI that makes installing the required packages very straightforward, I still prefer to run my blog in a Docker container. I'd rather not clutter my server with the many Node packages that get installed -- or any other packages, for that matter -- because as a hobbyist I often find myself using a single server for multiple experiments, some of them short-lived. Experiments are fun! But they often leave behind clutter in the form of installed packages, configuration files, and other cruft. I like to keep that cruft inside a container and be very explicit about which directories on the host I share with it; that way I can keep the box itself as pristine as possible. There aren't many things in this world that sound like less fun than trying to figure out what configuration change I inadvertently made to my box 6 months ago that makes my blog work on one server but not another.

Installing Docker is simple:

$ sudo apt-get install docker.io

And starting a container running Ghost is almost as easy. But before starting up the container, decide where you want all of your Ghost blog's content to live on the host. This is the directory you'll want to back up regularly, as it will contain the SQLite database and all the images, themes, and content for your blog. Here's where I put it, if you're not feeling creative:

$ sudo mkdir -p /var/lib/ghost/personal_blog

Now start up the container, and mount that directory:

$ sudo docker run -d --name personal_blog -p 2368:2368 -v /var/lib/ghost/personal_blog:/var/lib/ghost/content -e url=http://yourdomain.com --restart=always ghost:1.17-alpine

I recommend pasting this command into a file and running it as a one-line shell script, unless you're some kind of savant at remembering CLI flags.
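
Something like this is all I mean (the file name start_blog.sh is arbitrary):

$ cat start_blog.sh
#!/bin/bash
# Start the Ghost container; the options are explained below.
sudo docker run -d \
    --name personal_blog \
    -p 2368:2368 \
    -v /var/lib/ghost/personal_blog:/var/lib/ghost/content \
    -e url=http://yourdomain.com \
    --restart=always \
    ghost:1.17-alpine

$ chmod +x start_blog.sh
$ ./start_blog.sh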

This command will:

  • Run the container detached from your console (that's the -d)
  • Name the container personal_blog
  • Map port 2368 on the host to the exposed port 2368 of the Docker container
  • Mount the directory /var/lib/ghost/personal_blog to the proper place inside the container
  • Provide Ghost with your root URL configuration (yourdomain.com)
  • Instruct Docker to restart the container if it dies for some reason

At this point there is a Ghost blog running on your server, but no nginx configuration to allow accessing it from the web. You can still check that it's working, though. Try checking the logs produced by the Docker container to see whether Ghost reports that it started up alright:

$ sudo docker logs personal_blog
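
You can also check that the container itself is up (you should see personal_blog listed with a status of "Up" and port 2368 mapped):

$ sudo docker ps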

Install Nginx and Configure

Installing nginx is as simple as:

$ sudo apt-get install nginx

If you followed my instructions above for securing your server, you'll need to poke a hole in the firewall for Nginx at this point:

$ sudo ufw allow 'Nginx Full'
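
You can confirm the new rule with:

$ sudo ufw status

You should see 'Nginx Full' listed as allowed, alongside whatever SSH rule was already there.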

Then you'll just need to do the basic configuration to get nginx talking to your docker container, without SSL. Either start with the default nginx site file at /etc/nginx/sites-available/default, or create your own. Of course, replace yourdomain.com with your actual domain.

server {
    listen 0.0.0.0:80;
    listen [::]:80;

    server_name yourdomain.com www.yourdomain.com;

    access_log /var/log/nginx/personal_blog.log;
    error_log /var/log/nginx/personal_blog.log;

    # This will allow certbot to do its thing, and generate an SSL cert.
    location ~ /.well-known {
        allow all;
        break;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Proxy all requests to Ghost
        proxy_pass http://127.0.0.1:2368;
        # Avoid nginx doing funky stuff with redirects
        proxy_redirect off;
    }
}
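
If you created your own site file rather than editing the default (I'm assuming the name personal_blog here, which is also the file you'll see certbot find later), remember to enable it by symlinking it into sites-enabled:

$ sudo ln -s /etc/nginx/sites-available/personal_blog /etc/nginx/sites-enabled/personal_blog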

Check that your shiny new nginx configuration is valid, and then reload nginx:

$ sudo nginx -t
... (output, hopefully saying you didn't mess up your config)
$ sudo service nginx reload

You should now have a working Ghost blog! All that's missing is an SSL certificate to serve traffic over HTTPS.
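
A quick sanity check from any machine (assuming your DNS changes have propagated):

$ curl -I http://yourdomain.com

You should get back an HTTP/1.1 200 OK response served by nginx, with the content coming from Ghost behind it.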

Install certbot and Generate a Certificate

HTTPS everywhere is happening, and that's a good thing. You can no longer have any input forms on an HTTP-only site without browsers showing warnings. Luckily, Let's Encrypt will give you a free certificate, along with a tool (certbot) to install it on your server in a way that will get you an 'A' rating from the Qualys SSL test.

I've found the rate of change of the certbot tool to be very high, at least around the time of publishing this article. It might behoove you to go check out their docs to see what's what. I'm also using a feature of certbot here that is marked as Alpha at the time of writing, and it will automatically fiddle with your existing nginx configuration. A wise web developer would double-check this configuration file after the automated fiddling is complete.

Let's install certbot using the official Ubuntu PPA:

$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx 
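
Given how quickly certbot changes, it's worth checking which version you ended up with (0.19.0 in my case, as listed at the top):

$ certbot --version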

Now run it! I'm going to go ahead and let it mess with my nginx configuration, but then check afterwards to see what kind of havoc was wreaked. I generally have low expectations for automated tools writing my configuration files.

This little command actually does a lot, so after running it, be prepared for some interactive prompts that guide you through the process. In my case, the tool seemed concerned about not being able to find /etc/nginx/sites-enabled/default and I got some scary red text about it, but it didn't have any trouble finding my intended config file at /etc/nginx/sites-enabled/personal_blog. It asked which domains I wanted certs for, generated the certs, then asked if I wanted to redirect all HTTP requests to HTTPS. I was like, "yup".

$ sudo certbot --nginx

That was fun. Let's see how badly it mangled the nginx configuration:

server {
        listen 0.0.0.0:80;
        listen [::]:80;

        server_name danieleverson.com www.danieleverson.com blog.danieleverson.com;

        access_log /var/log/nginx/personal_blog.log;
        error_log /var/log/nginx/personal_blog.log;

        # This will allow certbot to do its thing, and generate an SSL cert.
        location ~ /.well-known {
                allow all;
                break;
        }

        location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-Proto $scheme;
                # Proxy all requests to Ghost
                proxy_pass http://127.0.0.1:2368;
                # Avoid nginx doing funky stuff with redirects
                proxy_redirect off;
        }

    listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/danieleverson.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/danieleverson.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot




    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    # Redirect non-https traffic to https
    # if ($scheme != "https") {
    #     return 301 https://$host$request_uri;
    # } # managed by Certbot

}

Not bad! Aside from some funky indenting, or lack thereof, and that mysterious redundant commented-out block at the bottom, it actually looks very reasonable. Checking the syntax with nginx -t agrees that it's valid. It also added a comment after each line it touched, which will probably be nice when I look at this in 6 months and wonder who did all this.

Taking a peek inside that file that it included, /etc/letsencrypt/options-ssl-nginx.conf, I see that certbot saved me from figuring all this crazy crypto stuff out:

ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";

Maybe if that's your jam it doesn't look insane, but I for one am glad I didn't have to type any of that out. I'm a donor to the EFF, so I'll just call that outsourced understanding and move on with my life.

Now, most importantly, let's see if this worked. My expectation was that I'd have to restart the Docker container, since the url that was provided to Ghost as http should now be https, but upon visiting the site it seems to just work... Perhaps the browser simply follows those 301 redirects that our nginx configuration now returns. Hit up the comments below if you know what's going on here, but I'm skipping straight to rejoice.
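
That said, if you do want Ghost's url setting to match (as I understand it, Ghost uses it to build the absolute links it generates), re-creating the container with the https url is straightforward; the content lives in the mounted host directory, so nothing is lost. A sketch, using the same names and paths as above:

$ sudo docker stop personal_blog
$ sudo docker rm personal_blog
$ sudo docker run -d --name personal_blog -p 2368:2368 -v /var/lib/ghost/personal_blog:/var/lib/ghost/content -e url=https://yourdomain.com --restart=always ghost:1.17-alpine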

Rejoice

That wasn't so bad! Now I can share this with the whole world!

On that note, if you know more than me about something I wrote about above and I look stupid in front of the whole world, please let me know in the comments. I'm happy to take advice and update the tutorial accordingly.