MySQL on Docker Swarm

For several applications around the house, I need a MySQL backend. The biggest database I run is about 60 MB of data for my Kodi media players. By using centralized storage like this, I only need to update one database when I add new media. It's also convenient if I watch a movie in the living room, pause it, and then want to continue in the bedroom. A few years ago we actually did this with our apartment in San Francisco and our other apartment in Sweden, so this has been battle-proven over the years for me.

Docker Swarm

In the past, I ran MySQL on a Raspberry Pi, simple enough. A while ago I decided to rebuild all the server Raspberry Pis in my home as a Docker Swarm instead. By running all the different systems in Docker containers I can easily deploy, back up, and test different system configurations without re-installing the Raspberry Pis all the time to clean them up.

Deploy MySQL to the swarm

I use Swarmpit as a frontend to my docker swarm which makes it really easy to deploy new services from the web UI. For this instance, I use the hypriot/rpi-mysql:latest image. It’s customized for the Raspberry Pi and works very well.

Before you deploy there are a few things to consider. The data will not be persistent if the node goes down and the container is restarted on another node, or even the same one. To achieve persistent data we need a bind mount, which binds a local folder on the node itself to the container. This constrains the container to one node, but that is not a problem since we need it constrained anyway to keep the IP address the same for the Kodi boxes to connect to. This is easy to achieve in a Docker swarm by using a placement constraint like node.hostname == dockernode3, which makes sure the container is always deployed on dockernode3.

SSH into the node and create a folder for the MySQL data. I created the directory /var/docker/mysql_data on the node. Then you just bind the container's /var/lib/mysql to that path. This will make sure that whenever the container redeploys after downtime it will have persistent data to work with.
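If you prefer the command line over the Swarmpit UI, the placement constraint and bind mount could be set up with a service definition along these lines. This is a sketch under assumptions: the service name, node hostname, published port, and root password are placeholders you would adapt.

```shell
# Deploy MySQL pinned to dockernode3, with the node folder bind-mounted
# over /var/lib/mysql so the data survives redeploys.
# MYSQL_ROOT_PASSWORD=changeme is a placeholder, not a real credential.
sudo docker service create \
  --name mysql \
  --constraint 'node.hostname == dockernode3' \
  --mount type=bind,source=/var/docker/mysql_data,target=/var/lib/mysql \
  --env MYSQL_ROOT_PASSWORD=changeme \
  --publish 3306:3306 \
  hypriot/rpi-mysql:latest
```

The constraint is what keeps the bind mount valid: the container can only ever start on the node that actually has /var/docker/mysql_data.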

All other configuration is well documented on the rpi-mysql Docker Hub page.

Backups

All the data in this database is easy enough to recreate. Just rescan the media library and it will download all the artwork and metadata again. I would, however, lose all the data about what I have watched. Also, rescanning a media library built up over almost 10 years takes forever, so we need a good backup.

Remote backups

I have a redundant NAS where I store all my data, which in turn is backed up to the cloud, so that would be the ideal place to store the backups. The NAS has full support for NFS, so it works well with the Raspberry Pi as well. The only problem is that my NAS is encrypted, so if I get a power outage the NAS will start up but is not usable until I input the encryption key. This means that I have to prepare the NAS and then reboot all systems that have an fstab mount against it. Sure, I could SSH into each and every box and run the mount command manually, but both options are messy.

The better way to do this is to use Autofs to automatically mount the backup share whenever it's needed. That way it doesn't matter whether the NAS was up and running when the node started; the share mounts properly whenever it is accessed.

Setting up the Autofs is easy enough:

  1. SSH into the node dedicated for the MySQL container and install Autofs:
    sudo apt-get install autofs
  2. Create a backup folder in /mnt
    sudo mkdir /mnt/nfs
  3. Create the auto.master config file:
    sudo nano /etc/auto.master
    Then add the line:
    /mnt/nfs /etc/auto.nfs
    Save and exit.
  4. Create the auto.nfs file:
    sudo nano /etc/auto.nfs
    Then add:
    backups 192.168.6.5:/nfs/Backups
    Save and exit.
  5. Test the configuration by first reloading the Autofs configuration:
    sudo service autofs reload
    Then list the directory and you should see the contents of the share:
    ls /mnt/nfs/backups

If you run into issues with this please refer to the Autofs wiki.

Backup script

We can reuse the script from the post Kodi central db backup. We need to modify it a bit since we are now running in a Docker container. We could write a "one size fits all" script that dumps all the databases to a single file, like the import/export function of MySQL Workbench. I would, however, recommend against that, since one big file prevents you from easily restoring just one of the databases if needed. Hard-coding the database names would also be a hassle, since the script would need updating each time we create a new database, so the script should discover them automatically.

First we need to find the container ID of the MySQL container. Since it has a different name depending on when it started and with what task ID, we need to search for it with this command:

sudo docker container ls -qf "name=^mysql"

This command asks Docker to list its containers. The -q flag means quiet and will only return the ID; -f means filter, and here we ask for containers whose name starts with mysql. There is more information on this command in the Docker documentation.

We now have the container ID and can execute commands inside the Docker container. If we run:

sudo docker exec 2ec5a405efe3 ls

We will see a directory listing of the Docker container. We now need to execute commands against MySQL. This can be done in one of two ways.

  1. We add --user and --password to each command and provide an account with permissions on all databases.
  2. We give root full access without a password as long as the request comes from localhost. This can be done by running the following command against the MySQL instance; the easiest way is via MySQL Workbench from your own system.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost';

This assumes that the main admin account for MySQL is named root. Now we can execute commands against the MySQL server, so we create a script that loops over all the databases, performs a backup of each, and clears out old backups.

#!/bin/bash

# Check that we got a container name prefix
if [ -z "$1" ]; then
	echo "ERROR: No container name provided!" ; exit 1
fi

# Get container ID
CONTAINER_ID=$(sudo docker container ls -qf "name=^$1")

# Check that we got an ID
if [ ${#CONTAINER_ID} -lt 1 ]; then
	echo "ERROR: Container not found!" ; exit 1
else
	echo "Container ${CONTAINER_ID} found!"
fi

# Check that we have a path
if [[ ! -d "$2" ]]; then
	echo "ERROR: Backup path not found!" ; exit 1
else
	echo "Backing up to $2"
fi

# Store the current date for the backup file names
_now=$(date +"%Y.%m.%d")

# List databases
sudo docker exec "$CONTAINER_ID" mysql -N -e 'show databases' |
while read dbname; do # Loop the result
	# Backup the database
	echo "Starting backup of database $dbname"
	sudo docker exec "$CONTAINER_ID" mysqldump --complete-insert --routines --triggers --single-transaction "$dbname" > "$2/$dbname.$_now.sql.bak"
	echo "Backup of $dbname completed!"

	# Check for retention parameter
	if [ ! -z "$3" ]; then
		# Clear old backups
		find "$2/$dbname".* -mtime +"$3" -exec rm {} \;
	fi
done

The script takes three arguments like this example:
./mysql_backup.sh mysql /mnt/nfs/docker_mysql 5

  1. The first parameter matches against the beginning of the container name. In the Docker swarm the container has a name that includes the task ID and other things, looking like mysql.1.udpn71blsszqv2obdib6w2urf, so here mysql is enough.
  2. The path where the backups should be dumped.
  3. How many days of retention you want. All backups in the folder older than this will be deleted.
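The retention cleanup from the third parameter is just a find invocation, and you can try the logic safely in a scratch directory before pointing it at real backups. In this sketch, mydb and the dated file names are made-up examples, and GNU touch -d is assumed to be available (it is on Raspbian):

```shell
# Simulate the retention cleanup on a throwaway directory.
backup_dir=$(mktemp -d)
retention=5

# One fresh dump and one whose mtime is set 10 days in the past
touch "$backup_dir/mydb.2020.01.10.sql.bak"
touch -d "10 days ago" "$backup_dir/mydb.2020.01.01.sql.bak"

# The same find call the script uses: remove dumps older than $retention days
find "$backup_dir"/mydb.* -mtime +$retention -exec rm {} \;

ls "$backup_dir"   # only the fresh dump should remain
```

Note that -mtime compares against the file's modification time, not the date embedded in the file name, which is why the script never has to parse the names.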


So let’s look at the script in some detail.
The first block checks that the container name parameter was provided and that a matching container is found. The next block checks that the path parameter points to a directory that exists. The rest is the actual backup: first we execute a show databases command against the MySQL server inside the Docker container, then we loop over that output and, for each database name, execute mysqldump inside the container and pipe the output to a file in the backup folder. Finally, if the third parameter is provided, we check for older backups of that database and delete any that are older than the number of retention days specified.
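Because each database lands in its own dump file, restoring one of them is a one-liner. A hedged sketch, where the container ID, database name, and file name are placeholder examples, assuming the passwordless root@localhost grant set up earlier:

```shell
# Load a single database back from its dump into the running container.
# 2ec5a405efe3, kodi_video, and the file name are placeholder examples.
sudo docker exec -i 2ec5a405efe3 mysql kodi_video < /mnt/nfs/backups/docker_mysql/kodi_video.2020.01.10.sql.bak
```

The -i flag is what keeps stdin open so the dump file can be piped into mysql inside the container.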

Schedule the backup

To schedule the backup we run crontab -e and add the line:
0 2 * * * sudo /home/pi/mysql_backup.sh mysql /mnt/nfs/backups/docker_mysql 5
This will run the script each day at 2 am and keep the backups for 5 days.

Script available on my GitHub.
