Docker WordPress in a subdirectory

Moving a standard WordPress installation to a different host is a minor pain – I only do this occasionally, so every time I need to consider the configuration of the original environment and how this translates to the new server. Nothing too challenging, but tedious and prone to error.

So I figured Docker containers are the way to go and, sure enough, Docker Hub has more than enough images for my needs. The only issue is that I don’t dedicate my server to WordPress – it’s in a ./wordpress subdirectory of the web root. Docker’s official WordPress image keeps reinstating the WordPress files if they’re not found in the web root.

TL;DR – create the directory wordpress in the container’s web root and add -w="/var/www/html/wordpress" to the docker run (or create) command. This sets the current working directory for docker-entrypoint.sh to work in, and it will install the wp-* and .htaccess files there.

The rest of this post documents my setup; more than anything, it's a future reference for myself. I'll start with the articles I used as references when I began setting this up.

The first is How to Install WordPress with Docker on Ubuntu, a clearly written tutorial that goes further and uses Nginx as a reverse proxy to the container (something I've chosen not to do for now). The second is Install fail2ban with Docker, which describes what's required to get fail2ban configured to read a container's logs. Although I don't document anything further on this, it's really useful for inhibiting brute-forcing of the server.

Let’s Encrypt Verification via DNS

For me, the least invasive way to verify domain ownership for SSL/TLS was via DNS TXT records. This avoids the need to integrate with web servers and bind to, or forward from, a public port; however, it does mean that I have to edit my DNS zone to add the TXT records that certbot asks for.

certbot certonly --manual --preferred-challenges=dns\
  -d example.com\
  -d www.example.com
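
Before letting certbot continue, it's worth checking that the TXT records it asks for have actually propagated; the ACME DNS challenge records live under _acme-challenge, so something like:

dig -t TXT _acme-challenge.example.com +short
dig -t TXT _acme-challenge.www.example.com +short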

At the end of this process, the certificates are placed in /etc/letsencrypt/live. Apache’s SSL virtual host can be configured with the following directives, after the WordPress container is created as detailed below.

SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem

The /etc/letsencrypt directory can be brought with the container to a new host, and the certificates are made visible to the WordPress container using a bind-mount, as detailed below. If you only want to transfer the above files in /etc/letsencrypt/live, then remember that they're links into ../archive/, so they must be dereferenced if an archive tool is used (e.g. the --dereference option of tar).
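
For example, something along these lines produces a dereferenced archive of just the live certificates (the archive name is arbitrary):

tar --dereference -czf letsencrypt-live.tgz -C /etc/letsencrypt live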

Note that renewing the certificates means re-running the certbot certonly ... command above; if the list of domains is unchanged, certbot treats this as a manual renewal of the existing certificate. The certbot renew command, as far as I understand it, is intended for automated (e.g. cron) renewals and requires hooked-in code to respond to the challenges (e.g. to install the TXT records it asks for). Renewed certificates are not automatically picked up by Apache, so a docker container restart wp-apache2 is required.
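
In practice, then, a renewal boils down to two steps:

certbot certonly --manual --preferred-challenges=dns -d example.com -d www.example.com
docker container restart wp-apache2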

Creating the containers

A container is created from the Docker Hub wordpress image, and also from the mariadb image. These run with bind-mounts to expose host directories inside the containers. The directory structure on the host has /opt/wordpress/html for the WordPress container, /opt/wordpress/database for the MariaDB container.

mkdir -p /opt/wordpress/database
mkdir -p /opt/wordpress/html/wordpress

I also mount /etc/letsencrypt/live into the container as a read-only directory for Apache SSL/TLS. This directory must exist on the host, containing example.com/cert.pem and example.com/privkey.pem (Apache's default-ssl.conf needs these).

The following commands assume that these directories have been created, and that valid certificates have been created (see above for notes if this hasn’t been done yet). First, we create the MariaDB container for our WordPress data.

docker run -e MYSQL_ROOT_PASSWORD=<mysqlrootpw>\
 -e MYSQL_USER=wpuser -e MYSQL_PASSWORD=<wpuserpw>\
 -e MYSQL_DATABASE=wordpress_db\
 -v /opt/wordpress/database:/var/lib/mysql\
 --name wp-mariadb -d mariadb
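
It's worth checking that the database container came up cleanly before moving on, for example:

docker ps --filter name=wp-mariadb   # the container should be listed as Up
docker logs wp-mariadb               # look for the "ready for connections" message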

Next we create the WordPress container. This links to the wp-mariadb container we have just created, exposing it inside the new container under the hostname mysql. Linking also exposes the wp-mariadb container's environment to our wp-apache2 container, so we unnecessarily divulge, for example, MYSQL_ROOT_PASSWORD to a server facing the public Internet. This is not ideal (wp-apache2 has no need to know MYSQL_ROOT_PASSWORD), and is probably why --link is deprecated in favour of Docker networks (a sketch of the network-based alternative follows the run command below).

docker run -w="/var/www/html/wordpress"\
 -e WORDPRESS_DB_USER=wpuser\
 -e WORDPRESS_DB_PASSWORD=<wpuserpw>\
 -e WORDPRESS_DB_NAME=wordpress_db\
 -p 80:80 -p 443:443\
 -v /opt/wordpress/html:/var/www/html\
 -v /etc/letsencrypt/live:/etc/letsencrypt/live:ro\
 --link wp-mariadb:mysql --name wp-apache2 -d wordpress

It should now be possible to access Apache on port 80 and port 443, and WordPress should be on the path /wordpress/.
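
A quick check from the host confirms this; a fresh install should answer with a redirect to the WordPress installer (or a 200 once installed):

curl -I http://localhost/wordpress/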

Note the -d flag, which detaches the process and returns control to the calling shell. This is needed if the containers are to run in the background; it can be omitted to keep the process in the foreground, which is useful when you want the logs reported to stdout.
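
Since --link is a legacy feature, the same pair of containers could instead be joined by a user-defined network, which avoids leaking the database container's environment. A rough sketch (the network name wp-net and the WORDPRESS_DB_HOST variable are my substitutions, not part of the commands above):

docker network create wp-net
docker run ... --network wp-net --name wp-mariadb -d mariadb
docker run ... --network wp-net -e WORDPRESS_DB_HOST=wp-mariadb --name wp-apache2 -d wordpress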

Some commands used while setting up the wp-apache2 container.

Most of the commands below should really be captured in a Dockerfile, but for my use case it's convenient to simply build up a baseline image interactively that can ultimately be committed.

docker exec -it wp-apache2 bash  # This takes us to bash in the container
a2enmod ssl   # These commands are run on bash in the container
a2ensite default-ssl
apt-get update && apt-get install vim   # package lists are usually empty in the image, so update first
vi /etc/apache2/sites-enabled/default-ssl.conf
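
If I ever move these steps into a Dockerfile, it would look roughly like the following untested sketch (the wordpress:ssl-baseline tag is just an example name; the edited default-ssl.conf would still need to be copied into the image):

cat > Dockerfile <<'EOF'
FROM wordpress
RUN a2enmod ssl && a2ensite default-ssl
# COPY a pre-edited default-ssl.conf to /etc/apache2/sites-available/ here
EOF
docker build -t wordpress:ssl-baseline .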

For wp-mariadb, I often find it useful when testing to point WordPress at a different siteurl, so I created a bash script, update_wp_siteurl.sh, in the container's root directory.

#!/bin/bash
# Update the WordPress home/siteurl options to the URL given as the first argument.

if [ "${1}" == "" ]; then
  echo "Usage ${0} URL (such as http://www.susa.net/wordpress)"
  exit 1
fi

# MYSQL_USER, MYSQL_PASSWORD and MYSQL_DATABASE come from the container's environment.
mysql -u${MYSQL_USER} -p${MYSQL_PASSWORD} ${MYSQL_DATABASE} <<EOSQL
  update wp_options set option_value = '${1}'
   where option_name in ('home', 'siteurl');
  select option_name, substr(option_value, 1, 60) as option_value
   from wp_options
   where option_name in ('home', 'siteurl');
EOSQL
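
With the script in place, it can be invoked from the host; something like:

docker exec wp-mariadb bash /update_wp_siteurl.sh http://www.example.com/wordpress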

Managing the images and migration

The following commands commit the running containers as images and save those images as gzipped tar files. I use the tag to denote the host on which the container's image was created; in my case the host is kakapo.

docker commit wp-apache2 wordpress:kakapo
docker commit wp-mariadb mariadb:kakapo
docker image save wordpress:kakapo |gzip > wordpress-kakapo_image.tgz
docker image save mariadb:kakapo |gzip > mariadb-kakapo_image.tgz

The last two lines can be condensed into a single command for convenience.

docker image save\
  wordpress:kakapo\
  mariadb:kakapo | gzip > wordpress-mariadb-kakapo_images.tgz

The saved files can be copied to a remote host and loaded into the new host’s Docker repository.

kevin@kakapo:~$ scp wordpress-mariadb-kakapo_images.tgz newhost.example.com:

On the new host, load the images with:

kevin@newhost:~$ zcat wordpress-mariadb-kakapo_images.tgz | docker image load

The newly loaded images can be used to create containers on the new host as described above, only using the images wordpress:kakapo and mariadb:kakapo instead of pulling the official images.
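
In other words, the run commands are the same apart from the image name at the end, e.g.:

docker run ... --name wp-mariadb -d mariadb:kakapo
docker run ... --name wp-apache2 -d wordpress:kakapo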

The bind-mounts

Remember that, before creating the containers on a new host, the /opt/wordpress/ and /etc/letsencrypt/ directories have to be transferred and accessible to docker in the location specified by the -v (or --mount) parameter. Depending on the environment, something like rsync or tar then scp should suffice.
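
For example, rsync in archive mode preserves ownership, permissions and the symlinks in /etc/letsencrypt (run it as root so ownership survives):

rsync -aH /opt/wordpress/ newhost.example.com:/opt/wordpress/
rsync -aH /etc/letsencrypt/ newhost.example.com:/etc/letsencrypt/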

On the new host, make sure that ./wp-content/* is writeable by the user and/or group www-data. I usually run Debian, so the host UID/GID for www-data is the same as in /etc/passwd in the WordPress container. Therefore it’s enough to simply chown -R www-data:www-data wp-content.

Generally, when using bind mounts, the permissions have to be considered from the container’s point of view. The directory can be seen both from the host, and from within the container environment. The UID/GID of the files will be interpreted according to the environment that’s reading the filesystem. If a file is writeable only by UID 1000 on the host (whoever that may be), then only processes running as UID 1000 in a container will be able to write the file.
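
A quick way to check the mapping is to compare the UID of www-data inside the container with the ownership of the bind-mounted files on the host:

docker exec wp-apache2 id www-data
stat -c '%U (%u)  %G (%g)' /opt/wordpress/html/wordpress/wp-content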

Docker volumes should really be used instead. They can be managed directly by Docker, and as long as I can still get at the files from the host, they would be a better way to go. Something to do in future.
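
A rough sketch of what that would look like (the volume names are arbitrary, and the remaining options are as in the run commands above):

docker volume create wp-db
docker volume create wp-html
docker run ... -v wp-db:/var/lib/mysql --name wp-mariadb -d mariadb
docker run ... -v wp-html:/var/www/html --name wp-apache2 -d wordpress
docker volume inspect wp-html   # the Mountpoint field shows where the files live on the host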
