
S3Backup Docker container and unRAID

I use a file server software called unRAID, which supports redundant disks with parity and other great features such as Docker support and a great community which creates plugins for any service imaginable. This is where I keep my main photo library, and I want to be certain that I don’t lose my photos — ever.

All photos are backed up using CrashPlan, which runs great in headless mode on the server. While migrating the CrashPlan app to a Docker container during a large unRAID 5 to 6 upgrade a while back, I became a bit wary that it’s my only real full backup. Things always happen, and it would be a shame if a mistake during a restore[1] made all my photos vanish in an instant. I need a write-only redundancy option which can never fail[2].

Amazon S3 and Glacier are excellent candidates for archival storage of photos, since they are relatively inexpensive and the reliability is fantastic[3]. Given that unRAID recently gained Docker support with its 6.0 release, a Docker image seemed like a great way to build and deploy a tool like this.

I’m using a tool called s3cmd as the syncing engine, so all that’s left is mapping the data and scheduling the backup. This is why I created the joch/s3backup Docker image. It takes the hassle out of setting everything up for backup.
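Under the hood, the backup boils down to an s3cmd sync of the mounted data on a schedule — roughly along these lines (the credentials and bucket name are placeholders, and the exact flags the image uses may differ):

```shell
# Rough sketch of the sync the container performs on each scheduled run.
# The keys and bucket name below are placeholders, not real values.
s3cmd --access_key=YOURACCESSKEY --secret_key=YOURSECRET \
    sync --recursive /data/ s3://yours3bucket/
```

Since s3cmd only uploads new or changed files, repeated runs stay cheap even for a large photo library.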

The first step is to register for Amazon AWS and create an S3 bucket[4]. To use Glacier in conjunction with S3, you can create a lifecycle policy on the S3 bucket which moves uploaded files to Glacier after a grace period of your choosing. Once you have the bucket URL, just launch the Docker image with something like this:

docker run -d \
    -v /home/user/Documents:/data/documents:ro \
    -v /home/user/Photos:/data/photos:ro \
    -e "ACCESS_KEY=YOURACCESSKEY" \
    -e "SECRET_KEY=YOURSECRET" \
    -e "S3PATH=s3://yours3bucket/" \
    -e "CRON_SCHEDULE=0 3 * * *" \
    joch/s3backup

S3Backup will back up everything under the /data folder, so using Docker volumes, simply mount every file and folder that should be included in the backup. The other variables are pretty self-explanatory, and their values can be found in the Amazon AWS console.
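If you prefer the command line over the S3 console, the lifecycle policy mentioned above can be sketched roughly like this using the AWS CLI (the bucket name, rule ID, and 30-day grace period are placeholders — the console achieves exactly the same thing):

```shell
# Hypothetical lifecycle rule: move objects to Glacier 30 days after
# upload. The bucket name, rule ID, and grace period are placeholders.
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "archive-to-glacier",
    "Status": "Enabled",
    "Filter": { "Prefix": "" },
    "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }]
  }]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
    --bucket yours3bucket --lifecycle-configuration file://lifecycle.json
```

An empty prefix applies the rule to the whole bucket; set a prefix instead if you only want part of the bucket archived.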

If you are using unRAID, you can add my template repository[5], which gives you access to the Docker image directly within the user interface. If you use the Community Applications plugin for unRAID, simply search for s3backup and it should be ready to go!

  [1] When reinstalling CrashPlan, or when moving to a new computer, you are given the option to adopt another computer. While this is a great feature, if you accidentally deselect a folder, it will be gone forever if it hasn’t yet been synced.

  [2] Never is a strong word, but it should be a service with a great track record, built-in redundancy, and a fairly idiot-proof design.

  [3] I don’t have any hard figures on the reliability, but Amazon claims some impressive numbers.

  [4] A bucket is basically a collection of files and folders.