Setting up a blog with Ghost, Caddy and Docker

I've been a theoretical proponent of self-hosting for a long time, and I've had all kinds of websites over the years. For some time now, I've been a fan of SSGs, and Eleventy in particular, but lately I got excited about the work the Ghost team has been doing on an ActivityPub integration. So, I decided to get ahead of the curve and create my professional blog as a Ghost blog instead of a statically generated site.

I've been looking for a place to host an application cheaply—with 5 kids and the cost of rent these days, every dollar counts—and landed on slicie, a small outfit in Arizona that offers Virtual Private Servers on a pay-for-what-you-use basis, meaning I don't pay for unused compute, memory, storage, or traffic capacity. Since I don't expect to get very many visitors, I decided to give them a shot. Back when I built WordPress sites, I was getting a similar deal from NearlyFreeSpeech.net, but while they are a venerable and reliable institution, they only offer shared hosting. Shared hosting is excellent for PHP-based websites, but it can be very difficult to set up more custom deployments. So as of the time of writing I'm on the 60-day trial at slicie, and we'll see how it goes.

At first, I was trying to set this up like I would on AWS, with a private subnet and Internet gateway. But I was running into connectivity issues, and as the owner explained, the setup is more complicated than what I want to deal with. "In general," he wrote, "we don't have any sort of 'niceties' for making private networking easy, as frankly, very few people care about that on our platform. People are either doing basic web hosting with things like Wordpress or they're relying on kubernetes for the overlay network between nodes/VMs." I don't need a Kubernetes cluster, thank God, but a basic Docker compose setup with Ghost, MySQL, and Caddy as a reverse proxy seems like a good compromise, and I'll be able to easily upgrade it to a swarm if I unexpectedly become the number one Internet destination for tech tutorials and dad jokes (unlikely, as I have only one dad joke, and it's the one on the site homepage).

I set this up on a VPS running Ubuntu 20.04, which is a nice ripe LTS version. There are a few issues with this setup, and a few pieces missing (I'll go into more detail at the end), but it's a working website, which is more than I had a few days ago. With this setup I'll be able to easily add the other website projects I've been working on, and work on improving it at my leisure. So let's get started.

Prerequisites

First of all, before you can start typing, you need:

  • a virtual private server running Ubuntu 20.04,
  • a domain name with an A record pointing at your VPS,
  • an SMTP mail provider (I am using Mailgun, since it's what Ghost integrates with, but I'm planning to try some other providers too later on), and
  • a local machine with an SSH client and an SSH key pair. I could include a tutorial on creating SSH keys, but this post is long enough already.

Secure the SSH Login

To make our host a little harder to compromise, we'll disable logging in as root via SSH, and require the use of key-based login only.

SSH into the remote machine:

[marc@laptop ~]$ ssh root@<server ip address>
root@<server ip address>'s password: 

I got the process from this tutorial on nixCraft, and I'm reproducing a condensed version that worked for me. If you run into any issues, you can go there for more details and some troubleshooting information.

Create user account

We'll create a user account and add it to the sudo group so that we never have to use the root account at all:

root@gause:~# useradd -m -s /bin/bash marc
root@gause:~# passwd marc
New password: 
Retype new password: 
passwd: password updated successfully
root@gause:~# usermod -aG sudo marc

Now we'll verify that it works and end the SSH session:

root@gause:~# su - marc
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

marc@gause:~$ id marc
uid=1000(marc) gid=1000(marc) groups=1000(marc),27(sudo)
marc@gause:~$ logout
root@gause:~# 
logout
Connection to <server ip address> closed.
[marc@laptop ~]$

Configure public key and disable password login

First, copy your public key to the remote machine, and log in. I run Linux on my personal machine; the process may be a little different if you use Windows or Mac:

[marc@laptop ~]$ ssh-keygen -t ed25519
[marc@laptop ~]$ ssh-copy-id -i ~/.ssh/id_ed25519.pub marc@<server ip address>
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/marc/.ssh/id_ed25519.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
marc@<server ip address>'s password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh -i /home/marc/.ssh/id_ed25519 'marc@<server ip address>'"
and check to make sure that only the key(s) you wanted were added.

[marc@laptop ~]$ ssh -i /home/marc/.ssh/id_ed25519 'marc@<server ip address>'

marc@gause:~$

Success! Now that we're able to log in with SSH keys, we'll disable password login.

We will need to set the following sshd login configuration parameters to no: ChallengeResponseAuthentication, PasswordAuthentication, UsePAM, and PermitRootLogin. Instead of searching for them in the main config file, we will simply create a new file that will be included.

First, ensure that Include is enabled in sshd_config:

root@gause:/home/marc# cat /etc/ssh/sshd_config | grep Include
Include /etc/ssh/sshd_config.d/*.conf

Then, create a login_security.conf file that will contain the required configurations using this cool trick:

marc@gause:~$ sudo tee /etc/ssh/sshd_config.d/login_security.conf > /dev/null <<EOF
> ChallengeResponseAuthentication no
> PasswordAuthentication no
> UsePAM no
> PermitRootLogin no
> EOF
marc@gause:~$ sudo systemctl reload ssh

Ensure that the configurations have been correctly applied:

marc@gause:~$ sudo sshd -T | grep -E -i 'ChallengeResponseAuthentication|PasswordAuthentication|UsePAM|PermitRootLogin'
usepam no
permitrootlogin no
passwordauthentication yes
challengeresponseauthentication no

It turned out that on my server, a conflicting configuration file was overriding passwordauthentication to yes, so I just deleted it. Now let's reload the ssh daemon and check again:

marc@gause:~$ sudo rm /etc/ssh/sshd_config.d/50-cloud-init.conf 
marc@gause:~$ sudo systemctl reload ssh
marc@gause:~$ sudo sshd -T | grep -E -i 'ChallengeResponseAuthentication|PasswordAuthentication|UsePAM|PermitRootLogin'
usepam no
permitrootlogin no
passwordauthentication no
challengeresponseauthentication no

Great! Now let's log out and validate:

marc@gause:~$ exit
logout
Connection to <server ip address> closed.
[marc@laptop ~]$ ssh root@<server ip address>
root@<server ip address>: Permission denied (publickey).
[marc@laptop ~]$ ssh marc@<server ip address> -o PubkeyAuthentication=no
marc@<server ip address>: Permission denied (publickey).
[marc@laptop ~]$ ssh marc@<server ip address>
Last login: Mon Jan  6 11:43:02 2025 from <local ip address>
marc@gause:~$ 

We've now made our SSH login a little more secure and easier to use.

Set up Docker and configure Docker Compose

This will be the main portion of the work. We'll prepare our host to run Docker containers, and prepare all the configuration files needed by Docker Compose to run our stack.

Install Docker Engine:

marc@gause:~$ sudo apt-get update
...
marc@gause:~$ sudo install -m 0755 -d /etc/apt/keyrings
marc@gause:~$ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
marc@gause:~$ sudo chmod a+r /etc/apt/keyrings/docker.asc
marc@gause:~$ echo \
>   "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
>   $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
>   sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
marc@gause:~$ sudo apt-get update
...
marc@gause:~$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
...
marc@gause:~$ 

For more details and troubleshooting information, go to the Docker website.

Networking

We'll set up two networks: web for outwards-facing services (for now just Caddy), and private for internal services (Ghost and MySQL).

networks:
  private:
  web:

docker-compose.yml

The MySQL service

Generate a strong, random password and place it in a file on the remote server at ./secrets/db_root_password. Don't type the password directly into the console, because console history is saved in plain text. Either use a text editor to edit the file, or copy the file over ssh.
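One way to do that, sketched below (assuming openssl and shred are available on your local machine; the server address is a placeholder): generate the password locally, copy the secrets directory up over ssh, and destroy the local copy.

```shell
# Generate 32 random bytes, base64-encoded, with no trailing newline.
mkdir -p secrets
openssl rand -base64 32 | tr -d '\n' > secrets/db_root_password

# Copy the whole secrets directory to the server...
scp -r secrets marc@<server ip address>:/home/marc/

# ...then destroy the local copy.
shred -u secrets/db_root_password
```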

I use sshfs to mount the remote folder via ssh, so that I can easily copy, save, and edit files using my favorite file manager and text editor, all in the comfort of my local machine. If you do the same, be sure to give it the IdentityFile option so that it works with public key authentication, otherwise you'll get an inscrutably laconic read: Connection reset by peer error.

[marc@laptop ~]$ sudo sshfs -o allow_other,default_permissions marc@marctrius.net:/home/marc/ /mnt/gause -o IdentityFile=/home/marc/.ssh/id_ed25519

We'll also need to create a volume to persist data between restarts, and set up the password as a Docker secret. Then we'll attach the MySQL service to the private network.

services:
  mysql:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - private
    secrets:
      - db_root_password

volumes:
  mysql_data:
secrets:
  db_root_password:
    file: ./secrets/db_root_password

docker-compose.yml

The Ghost service

Before we can bring up the Ghost container, we will need to create the database and user for our Ghost instance. To do this, we'll create a directory for SQL scripts and add a script to it.

CREATE DATABASE IF NOT EXISTS marctrius_net_db CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
CREATE USER IF NOT EXISTS 'marctrius_net_mysql_user'@'%' IDENTIFIED WITH mysql_native_password BY RANDOM PASSWORD;

USE marctrius_net_db;
GRANT ALL PRIVILEGES ON * TO 'marctrius_net_mysql_user'@'%';

./db_scripts/init_db_marctrius_net.sql

services:
  mysql:
    ...
    volumes:
      ...
      - ./db_scripts:/db_scripts
    ...

docker-compose.yml

We also need to define the service for the blog itself, attached to both the private and web networks, with its own volume for data:

services:
  marctrius_net:
    image: ghost:5-alpine
    restart: always
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: marctrius_net_mysql_user
      database__connection__password: ${MARCTRIUS_NET_DB_PASSWORD}
      database__connection__database: marctrius_net_db
      mail__transport: "SMTP"
      mail__options__host: "smtp.mailgun.org"
      mail__options__port: 587
      mail__options__auth__user: "postmaster@marctrius.net"
      mail__options__auth__pass: ${SMTP_AUTH_PASS}
      url: https://marctrius.net
      NODE_ENV: production
    volumes:
      - marctrius_net_data:/var/lib/ghost/content
    networks:
      - private
      - web
volumes:
  marctrius_net_data:

docker-compose.yml

You can see how certain configuration options are supplied by setting environment variables on the container. This is not ideal for passwords, but the alternative is for Ghost to store them in plain text in its configuration.{development|production}.json files. Note that the database connection password is supplied with an environment variable on the host—we'll address that later on, when we generate the password.

The Caddy service

The Caddy service will connect to the web network only, and will have the HTTP and HTTPS ports mapped directly to the same ports on the host. This way it will be able to correctly proxy connections to containers on the web network.

services:
  caddy:
    image: caddy:2
    container_name: caddy
    restart: always
    volumes:
      - ./caddy:/etc/caddy
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - web
    ports:
      - 80:80
      - 443:443
volumes:
  caddy_data:
  caddy_config:

docker-compose.yml

For now we will just configure Caddy to serve only the Ghost website. This should be the contents of ./caddy/Caddyfile on the remote machine:

https://marctrius.net {
  reverse_proxy marctrius_net:2368
}

Caddyfile
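One nice property of this setup: adding another site later is just another block in the Caddyfile, and Caddy will obtain and renew TLS certificates for each site automatically. A hypothetical sketch (example.org and other_service are placeholders, not part of this setup):

```
https://example.org {
  reverse_proxy other_service:8080
}
```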

Copy the configuration

This is the final version of docker-compose.yml:

services:
  mysql:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    volumes:
      - mysql_data:/var/lib/mysql
      - ./db_scripts:/db_scripts
    networks:
        - private
    secrets:
      - db_root_password
  marctrius_net:
    image: ghost:5-alpine
    restart: always
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: marctrius_net_mysql_user
      database__connection__password: ${MARCTRIUS_NET_DB_PASSWORD}
      database__connection__database: marctrius_net_db
      url: https://marctrius.net
      mail__transport: "SMTP"
      mail__options__host: "smtp.mailgun.org"
      mail__options__port: 587
      mail__options__auth__user: "postmaster@marctrius.net"
      mail__options__auth__pass: ${SMTP_AUTH_PASS}
      NODE_ENV: production
    volumes:
      - marctrius_net_data:/var/lib/ghost/content
    networks:
      - private
      - web   
  caddy:
    image: caddy:2
    container_name: caddy
    restart: always
    volumes:
      - ./caddy:/etc/caddy
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - web
    ports:
      - 80:80
      - 443:443
volumes:
  mysql_data:
  marctrius_net_data:
  caddy_data:
  caddy_config:
networks:
  private:
  web:
secrets:
  db_root_password:
    file: ./secrets/db_root_password

docker-compose.yml

Remember to update the blog URL, service name, volume names, and so on to match your needs.

Now you will have to copy the required files to your server, unless you were editing them directly via ssh or sshfs. Ensure the following structure is present in the user home directory on the remote server:

/home/marc
├── .env
├── caddy
│   └── Caddyfile
├── db_scripts
│   └── init_db_marctrius_net.sql
├── docker-compose.yml
└── secrets
    └── db_root_password

Stand the services up!

First, we'll start the MySQL service and run the init script:

marc@gause:~$ sudo docker compose up -d mysql
marc@gause:~$ sudo docker exec -it marc-mysql-1 mysql -p
Enter password: 

mysql> source /db_scripts/init_db_marctrius_net.sql
Query OK, 1 row affected (0.00 sec)

+--------------------------+------+----------------------+-------------+
| user                     | host | generated password   | auth_factor |
+--------------------------+------+----------------------+-------------+
| marctrius_net_mysql_user | %    | <password>           |           1 |
+--------------------------+------+----------------------+-------------+
1 row in set (0.01 sec)

Database changed
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

mysql> exit

Save the randomly generated password in the .env file, and bring up Ghost and Caddy. As with the database root password, use a text editor or sshfs rather than echoing the password into the console, so it doesn't end up in your shell history; the file should contain the line MARCTRIUS_NET_DB_PASSWORD=<password>.

marc@gause:~$ sudo docker compose up -d marctrius_net
marc@gause:~$ sudo docker compose up -d caddy

Now navigate to the new site; it should be up and running!

Next Steps

Now when you navigate to the /ghost/ subdirectory you should get the admin interface. The basic functionality of Ghost should be working, except for newsletters. That's it for this post! Being able to step back and say, "that's it for today" doesn't come easily to me, but you have to do it, otherwise you never get anything done. However, there are a few things that still need to be addressed as soon as possible:

  • Implement responsive images. I noticed that even the big image I uploaded for the homepage isn't optimized for resolution. It's a 2 MB image, and it stays 2 MB on a big-screen desktop computer with fast internet and on a small phone with mobile data. That's really unacceptable. I'm not sure yet how to implement this in Ghost, whether it's done at the theme level or at the CDN level, but this will be the very next thing I look into. The site isn't truly usable and accessible until it performs well on a low-end device with a poor connection.
  • Add caching. While the site is getting almost no visitors, caching is not a big issue. But even a single share to a social media platform, and especially to Mastodon, where many instances are going to fetch the link to cache it locally, can dramatically drive up traffic. To save costs and reduce the risk of problems, we need to add a caching layer in front of Ghost, either as another Docker service or externally to the VPS.
  • Develop custom theme. The default theme looks nice, but it is performing poorly, and doesn't work properly when JavaScript is turned off. I would like to develop a theme that follows Progressive Enhancement guidelines and is fast on any device or connection. This theme would also have features for Fediverse sharing, such as adding author attribution and website verification.
  • Automate standing the services up if the server is restarted, services upgraded, additional services added, etc. I don't have any specific plans to be turning the server on and off, but I'd like to automate deployments, without me having to go in there and manually run sql scripts or type docker compose up. We'll also need to provide healthchecks and call out dependencies in the docker-compose.yml.
  • Improve the security of the Ghost instance. I reached out to a friend who specializes in security to see what best practices may be available to improve the security of the Ghost instance. I really don't like storing passwords, even passwords for limited MySQL users that only have access to one database, in the clear. I have questionable capacity for open-source contributions (5 kids), but if I run out of interesting projects, fun leisure activities, and Torah to learn, as well as get all the sleep I want, I might consider looking into what can be done to improve the way Ghost stores sensitive configuration data.
  • Improve the security of the host itself. We made a basic first step in hardening the host when we disabled password logins, but there are many other things that we can do. I expect this to be a very fun and rewarding learning process, although here too it will be important to know when to say, "good enough."
  • Add monitoring and log management. You don't really need logging, until something breaks—and then you really need it! This deployment has the potential to become pretty complex, so it's really important to be able to monitor, and to be able to dig in the logs for triage if needed.
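On the automation item, the healthchecks and dependency ordering might look something like this in docker-compose.yml. This is a sketch I haven't deployed yet: the mysqladmin ping check assumes the stock mysql image, and health conditions require the long form of depends_on.

```yaml
services:
  mysql:
    # Mark the container healthy only once the server answers pings.
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
  marctrius_net:
    # Don't start Ghost until MySQL reports healthy.
    depends_on:
      mysql:
        condition: service_healthy
```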

Updates:

  • Thanks to @bitpirate@mas.to for pointing out that I shouldn't bind-mount the Caddyfile as a single file, but should mount the whole caddy directory instead, because single-file bind mounts lead to issues when updating the Caddy configuration in-flight. More information in this Medium post.
  • As @wobweger@mstdn.social mentioned, the right way to maintain the docker-compose.yml and any other configuration files is with a Git repository. I guess I was too lazy to set up Git and more ssh keys on the remote machine, but I'm guessing that it'll be part of the changes to automate deployments anyway!