
Automatically Backup Docker Volumes

Published in 👨‍💻 Dev
Short link: https://b.jlel.se/s/ea
⚠️ This entry is already over one year old. It may no longer be up to date. Opinions may have changed. When I wrote this post, I was only 19 years old!

Update

I changed my setup because the Docker image used in this post got deprecated and is no longer maintained. Read about my new setup using restic to automatically back up Docker volumes.

👉 New setup
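
The new post has the details, but as a rough sketch, backing up the directories behind the Docker volumes with restic to the same Storage Box could look something like this (the repository path, volume directory and plain-text password handling are placeholder assumptions, not my actual configuration):

# One-time: initialise a restic repository on the Storage Box via SFTP
restic -r sftp:user@user.your-storagebox.de:/restic-backups init

# Back up the directories where the Docker volumes live
restic -r sftp:user@user.your-storagebox.de:/restic-backups backup /var/lib/docker/volumes

# Keep only the last 7 daily snapshots and delete the rest
restic -r sftp:user@user.your-storagebox.de:/restic-backups forget --keep-daily 7 --prune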

Original post

For my server needs, I rent a small VPS at Hetzner Cloud. It has two vCPUs, 4 GB of RAM, 40 GB of storage and 20 TB of outgoing traffic each month (incoming traffic is free and unlimited), and it only costs me 5.83 € per month, a lot cheaper than DigitalOcean, Linode or even AWS.

In addition to the pure VPS, Hetzner offers a backup service. For 20% of the VPS price (~1.17 € in my case), you get 7 backup slots and can configure automatic daily backups of your server. It was a great feature that I used but never really needed. It was more for some kind of safety feeling, because it only allows you to recreate a whole server from a backup; you can't just restore a specific file.

In addition to the VPS, I also rent a so-called Storage Box at Hetzner, currently with 100 GB, but they are available with up to 10 TB. It's cheap online storage that you can access via different protocols (WebDAV, FTP, SSH etc.). I use it as the storage backend for my Nextcloud instance, so all my photos and documents are on there. And now my backups are too.
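
Access to the Storage Box is easy to check by hand before wiring it into any backup tooling. The hostname below just follows Hetzner's usual user.your-storagebox.de pattern and the credentials are placeholders:

# SFTP over SSH
sftp user@user.your-storagebox.de

# WebDAV over HTTPS, e.g. listing the root directory with curl
curl -u user:password "https://user.your-storagebox.de/"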

Because I have a Docker-exclusive setup (using RancherOS, everything runs in Docker), Docker volumes are basically all I need to back up, and that's also the reason why I needed a Docker container to do the backups. Here's how I achieve that:


I use this Docker image (blacklabelops/volumerize), which internally uses the command line program duplicity. duplicity is awesome because it supports almost every backup target (from the local file system over WebDAV to S3) and also does delta backups, so once you have made a full backup, it only saves the changes.
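
To give an idea of what that means in plain duplicity terms, the calls the image runs for me boil down to roughly this (I haven't checked the exact invocations; the URL and paths match the compose file below):

# Back up /source to the Storage Box via WebDAV; duplicity makes a delta backup
# unless the last full backup is older than 7 days
duplicity --full-if-older-than 7D /source webdavs://user:password@user.your-storagebox.de/backups

# Delete backup chains that are older than 7 days
duplicity remove-older-than 7D --force webdavs://user:password@user.your-storagebox.de/backups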

The interesting part of my docker-compose.yml file looks something like this:

backups:
    container_name: backups
    image: blacklabelops/volumerize
    restart: unless-stopped
    environment:
        - TZ=Europe/Berlin
        # Everything mounted below /source gets backed up to the WebDAV target
        - VOLUMERIZE_SOURCE=/source
        - VOLUMERIZE_TARGET=webdavs://user:password@user.your-storagebox.de/backups
        # Jobber schedule, seconds first: at second 0, minute 20, every 12th hour
        - VOLUMERIZE_JOBBER_TIME=0 20 */12 * * *
        - VOLUMERIZE_FULL_IF_OLDER_THAN=7D
        # Second job: every full hour, delete backups older than 7 days
        - JOB_NAME2=RemoveOldBackups
        - JOB_COMMAND2=/etc/volumerize/remove-older-than 7D --force
        - JOB_TIME2=0 0 * * * *
    volumes:
        # Volumes worth backing up, mounted read-only below /source
        - gitea-data:/source/gitea:ro
        - othervolumes:/source/othervolumes:ro
        - ...
        - backup-cache:/volumerize-cache

It basically mounts all volumes that are important enough to back up (some cache volumes aren't). The backup runs every 12 hours, 20 minutes after the full hour, so at 00:20 and 12:20. It usually does delta backups, unless there is no full backup yet, the existing full backup is corrupt, or the last full backup is older than 7 days.

I added some environment variables so that the jobs run in my timezone and backups older than 7 days get deleted. That way it should keep just one full backup and every following delta backup, because I don't have unlimited backup space and I also don't need that many backups.
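
Whether the retention works as intended can be checked with duplicity's collection-status command, which lists the backup chains present in the target (same placeholder URL as above):

duplicity collection-status webdavs://user:password@user.your-storagebox.de/backups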

The Docker image also provides a lot of other functionality and allows you to use any duplicity parameter. It's definitely worth checking out.
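
For example, if I remember the image's documentation correctly, it ships small wrapper scripts next to the remove-older-than script used above, so a backup or restore can also be triggered manually from the host (container name as in the compose file; stop the other containers before restoring):

# Trigger a backup right now
docker exec backups backup

# Restore the latest backup into /source
docker exec backups restore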

