How I automated my backups to Amazon S3 using rsync and s3fs.
Written by John Eberly
The following is how I automated my backups to Amazon S3 in about 5 minutes. A lot has changed since my original post on automating my backups to S3 using s3sync. There are more mature and easier-to-use solutions now. I am switching because s3fs gives you many more options for using S3, it is easier to set up, and it is faster. I now use s3fs to mount an S3 bucket to a local directory, then use rsync to keep my files up to date. The following directions are geared towards Ubuntu Linux, but could be adapted for any Linux distribution and Mac OS X.

STEP 1: Install s3fs

The first step is to install the s3fs dependencies (assuming Ubuntu):
sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev

Next, install the most recent version of s3fs. As of now the most recent is r177, but a quick check of the s3fs downloads page will show the latest.
wget http://s3fs.googlecode.com/files/s3fs-r177-source.tar.gz
tar -xzf s3fs*
cd s3fs
make
sudo make install
sudo mkdir /mnt/s3
sudo chown yourusername:yourusername /mnt/s3
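Before going further, it can be worth confirming that the build actually put the s3fs binary on your PATH and that FUSE is available. A quick check, assuming a typical install location:

which s3fs      # should print the installed path, e.g. /usr/local/bin/s3fs
ls -l /dev/fuse # the FUSE device should exist on a stock Ubuntu install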
STEP 2: Create a script to mount your Amazon S3 bucket with s3fs and sync your files

The following assumes you already have a bucket created on Amazon S3. If not, you can use a tool like S3Fox to create one. Open a text editor of your choice and write a shell script that mounts your bucket, performs the rsync, then unmounts. It is not necessary to unmount your S3 directory after each rsync, but I prefer to be safe: one mistake like an 'rm' on your root directory could wipe all of the files on your machine and on your S3 mount. You should probably start with a test directory to be safe. Make the file s3fs.sh:
#!/bin/bash
/usr/bin/s3fs yourbucket -o accessKeyId=yourS3key -o secretAccessKey=yourS3secretkey /mnt/s3
/usr/bin/rsync -avz --delete /home/username/dir/you/want/to/backup /mnt/s3
/bin/umount /mnt/s3
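If you want the script to fail a little more loudly, one option is to abort when the mount does not come up, so rsync never writes into an empty /mnt/s3. A rough variant using the same paths and keys as above (mountpoint ships with Ubuntu's util-linux package):

#!/bin/bash
# Sketch: same backup as above, but abort if the S3 bucket did not actually mount,
# so rsync --delete never runs against an empty local /mnt/s3.
/usr/bin/s3fs yourbucket -o accessKeyId=yourS3key -o secretAccessKey=yourS3secretkey /mnt/s3
if ! mountpoint -q /mnt/s3; then
    echo "s3fs mount failed, aborting backup" >&2
    exit 1
fi
/usr/bin/rsync -avz --delete /home/username/dir/you/want/to/backup /mnt/s3
/bin/umount /mnt/s3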
Note the --delete option: any files that have been removed from the source will also be deleted on the S3 side (a dry-run preview is sketched after the chmod command below). Change permissions to make the script executable:
chmod 700 s3fs.sh
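Because --delete will happily remove things on the S3 side, it may be worth previewing a run first. rsync's -n (--dry-run) flag reports what would be transferred or deleted without changing anything, assuming the bucket is already mounted:

/usr/bin/rsync -avzn --delete /home/username/dir/you/want/to/backup /mnt/s3   # dry run: report only, change nothing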
Before you run the entire script, you might want to run each line separately to make sure everything is working properly; the paths to rsync and umount might be different on your system (use 'which rsync' to check). Just for fun, I did a 'df -h', which showed I now have 256 terabytes available on the S3 mount! Next, run the script and let it do its work. This could take a long time depending on how much data you are uploading initially; your internet upload speed will be the bottleneck.
sudo ./s3fs.sh
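As mentioned above, if anything misbehaves it helps to run the steps by hand to see which one is failing. A rough interactive sequence using the same mount point and bucket:

/usr/bin/s3fs yourbucket -o accessKeyId=yourS3key -o secretAccessKey=yourS3secretkey /mnt/s3
df -h          # the /mnt/s3 line should show the huge S3 "filesystem"
ls /mnt/s3     # should list whatever is already in the bucket
/bin/umount /mnt/s3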
That's it! You are backing up to Amazon S3. You will probably want to automate this with cron once you are sure everything is running OK. For simplicity in this tutorial, let's assume you are setting up the cron job as root so we don't need to worry about permissions for mounting and unmounting the directory.
STEP 3: Automate it with cron
sudo su
crontab -e
0 0 * * * /path/to/s3fs.sh # this runs it every day at midnight
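Since cron runs this unattended, it can also help to capture the script's output somewhere you can check later. One way, assuming a log path like /var/log/s3fs-backup.log:

0 0 * * * /path/to/s3fs.sh >> /var/log/s3fs-backup.log 2>&1 # nightly run, output appended to a log file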
P.S. I use this in combination with hourly backups to a second local machine using git, which gives me revision history. I only back up nightly to S3, without revision history, in case my house burns down, etc. If you would like to know how I set up my git backups locally, just leave a comment and I can write a follow-up post.