How I automated my backups to Amazon S3 using s3sync.

Written by John Eberly

UPDATE: See my newer article for the way I currently backup to Amazon S3.

Jeremy Zawodny has an excellent article/discussion about the different tools currently available to take advantage of Amazon's Simple Storage Service (S3). After testing many of the tools available for S3, I decided to use the Ruby program s3sync to back up my data to S3. As I explained in an earlier post, I wanted a simple, low-level tool to perform automatic backups to S3. I decided to use s3sync to do the heavy lifting and the jets3t Cockpit GUI to monitor my S3 account. The following explains how I successfully started automating my backups to S3 using s3sync and Cockpit.

My server is running Ubuntu Dapper with a Samba server. All the machines in my house use a "Public" drive on the Samba server to store files from both Windows and Linux. All of our important files, like photos, home movies, and documents, are stored on this "Public" drive. This simplifies the backup procedure, since I don't have to back up multiple sources.

The following steps describe how I back up my "Public" drive to Amazon's awesome S3 storage service. I decided to post this because I hadn't found a reasonably simple guide to actually automating backups to S3 in a way that works like rsync on Linux. This is a follow-up to my original post on choosing a backup solution.

STEP 1: Activate an Amazon S3 account.

Go to http://www.amazon.com/s3 and sign up for an S3 web service account.

Have your Access Key ID and your Secret Access Key handy.

STEP 2: Install a management tool

(Update: I no longer use Cockpit. I now use the command line tools that come with s3sync, which were not available at the time I wrote this original article; see Option 1.)

Option 1: use the command line shell tools that are included with s3sync (my new preferred method).

Here is a sampling of the commands from the README file for the command line tool, s3cmd.rb, which can be used to create buckets and verify upload success or failure. If you use this option, make sure you have the correct version of Ruby installed on your system and that you have downloaded the s3sync package (see Step 3).

List all the buckets your account owns:

s3cmd.rb listbuckets

Create a new bucket:

s3cmd.rb createbucket BucketName

Delete an old bucket you don't want any more:

s3cmd.rb deletebucket BucketName

Find out what's in a bucket, 10 lines at a time:

s3cmd.rb list BucketName 10

Only look in a particular prefix:

s3cmd.rb list BucketName:startsWithThis

I plan to write a shell script that verifies each backup succeeded and runs via a cron job each night, but I haven't done it yet. I will update here when I do.
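In the meantime, here is a rough sketch of what such a check could look like. This is untested and not part of my current setup; the script name, bucket name, prefix, and paths are placeholders, and it assumes the s3cmd.rb list syntax shown above.

#!/bin/bash
# checkbackup.sh -- hypothetical nightly sanity check on the S3 backup
cd /path/to/yourshellscript/
export AWS_ACCESS_KEY_ID=yourS3accesskey
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
export SSL_CERT_DIR=/your/path/to/s3sync/certs
# count the objects under the backup prefix; an empty listing probably means trouble
COUNT=$(ruby s3cmd.rb list mybucket:remotefolder | wc -l)
if [ "$COUNT" -eq 0 ]; then
    echo "No objects found in mybucket:remotefolder -- check your backup"
fi

If you run this from cron, cron will mail you any output, so echoing only on failure keeps the noise down.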

Option 2 (the original option I used before the s3sync command line shell tools were available). UPDATE: I have had trouble getting this (or any other GUI) to work for folders containing large numbers of files. If you plan to have thousands of files stored at Amazon, then I suggest Option 1.

Download a GUI tool and make sure you can log into your S3 account, create a bucket, add files, and delete them.

I have tried a lot of them, but I prefer jets3t Cockpit. It is Java and open source, plus it is able to read objects uploaded to S3 by other tools. Some tools, like Jungle Disk, create buckets and objects in a proprietary format, which means that from Jungle Disk you would not be able to see files uploaded to S3 by other tools. Here is a screenshot of Cockpit.

Cockpit

Create a bucket where you will store your backups. Make sure to give your bucket a unique name, because bucket names have to be unique across all users of S3. Many people recommend using your S3 Access Key ID as a prefix, for example fakeaccesskey1234.backups. For the rest of this article, I will assume our bucket name is "mybucket".
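If you went with the command line tools from Option 1 instead, the same bucket can be created with the createbucket command sampled earlier (assuming your access keys are exported as in the scripts later in this post):

s3cmd.rb createbucket mybucket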

Cockpit will be a handy tool for you to monitor your backups in S3, but the actual file uploading/downloading will be done with a shell script using s3sync.

STEP 3: Install s3sync (Ruby)

s3sync is an open source Ruby script that acts much like rsync, the Linux file sync program. Remember to read the README file that comes with s3sync. Also, all the normal warnings apply: test this on a couple of folders and files you don't care about and make sure you understand what you are doing. Put the source/destination in the wrong order while using the --delete option and you could blow away all of your precious data.

Let's move on.

The following applies to a Debian/Ubuntu based distribution, but could easily be adapted to your own distro.

First, make sure you have Ruby 1.8.4 or greater and the OpenSSL library for Ruby:

$ sudo apt-get install ruby libopenssl-ruby

check ruby version

$ ruby -v
ruby 1.8.4 (2005-12-24) [i486-linux]

change into the directory where you want to install s3sync, like /home/john/s3sync

download and unpack s3sync

$ wget http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz
$ tar xvzf s3sync.tar.gz

clean up

$ rm s3sync.tar.gz

make a directory for the SSL certificates and download some (important: read the README for info about these SSL certs)

$ mkdir certs
$ cd certs
$ wget http://mirbsd.mirsolutions.de/cvs.cgi/~checkout~/src/etc/ssl.certs.shar

run this shell archive

$ sh ssl.certs.shar

get back into main s3sync dir

$ cd ..

create two files with your favorite editor, upload.sh and download.sh, with the following contents, and update them to suit your needs. (Important: like rsync, trailing slashes matter; see the README for examples.)

upload.sh ----------------------------------------

#!/bin/bash

# script to upload a local directory up to S3

cd /path/to/yourshellscript/
export AWS_ACCESS_KEY_ID=yourS3accesskey
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
export SSL_CERT_DIR=/your/path/to/s3sync/certs
ruby s3sync.rb -r --ssl --delete /home/john/localuploadfolder/ mybucket:/remotefolder
# copy and modify line above for each additional folder to be synced 

download.sh ----------------------------------------

#!/bin/bash

# script to download from S3 to a local directory

cd /path/to/yourshellscript/
export AWS_ACCESS_KEY_ID=yourS3accesskey
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
export SSL_CERT_DIR=/your/path/to/s3sync/certs
ruby s3sync.rb -r --ssl --delete mybucket:/remotefolder/ /home/john/localdownloadfolder
# copy and modify line above for each additional folder to be synced 

NOTICE: These scripts use the --delete option, which means any file on the destination that is not on the source will be deleted. Also, these shell scripts contain your Amazon secret info, so you will want to make sure they are only readable by you (chmod 700, credit Kelvin below). You can also add the "-v" option so you get verbose output about the changes. I did this after my initial upload, so I can monitor activity via cron job emails.
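For example, the upload line with verbose output turned on would look something like this (same placeholder paths and bucket as above):

ruby s3sync.rb -r -v --ssl --delete /home/john/localuploadfolder/ mybucket:/remotefolder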

Create the local upload and download directories and put some test files in the upload folder

$ mkdir localuploadfolder
$ mkdir localdownloadfolder

change the permissions on the files

$ chmod 700 upload.sh
$ chmod 700 download.sh

Test upload.sh

$ ./upload.sh

Use s3cmd.rb or Cockpit to make sure the files made it to Amazon.
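For example, with the Option 1 tools (and your access keys and SSL_CERT_DIR exported as in the scripts above), something like this should list the uploaded objects:

ruby s3cmd.rb list mybucket:remotefolder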

Test download.sh

$ ./download.sh

The files you uploaded to S3 should now be in your localdownloadfolder.

Once you are confident everything is working and you understand what you are doing, change the shell scripts to back up your actual folders. Run the scripts manually first to ensure everything is working properly. Remember, the upload script will be limited to the upload speed of your ISP, which can be very slow. With a typical cable internet connection upload speed of 384 kbit/s, it will take approximately 6 hours to upload 1 GB. Download speeds are usually much faster, roughly 1 GB per 20 minutes, but hopefully you never need it.
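The rough math behind that estimate: 384 kbit/s is about 48 KB/s, and 1 GB is about 1,048,576 KB, so 1,048,576 / 48 ≈ 21,800 seconds, or roughly 6 hours.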

STEP 4: set up a cron job to run the backup script once a week/month, etc.

Once you are sure the script is working for your uploads, you can automate the task by creating a cron job that runs once a week, day, or month. I have it run once a week, because I do nightly backups locally to my desktop machine using rsync.

$ crontab -e

add the following line.

30 2 * * sun /path/to/upload.sh

save and exit.

Obviously, monitor to make sure everything is working.
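If you want a record of each run, you can redirect the script's output to a log file in the crontab entry (the log path here is just a placeholder), or simply rely on cron mailing any output to you:

30 2 * * sun /path/to/upload.sh >> /path/to/backup.log 2>&1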

STEP 5: kick back and relax

Now you can relax. If your laptop battery explodes and burns down your house, you know your data is safe sitting on Amazon's geo-redundant servers, right between some bits describing a new book from Oprah and a bad review of the latest Ben Affleck movie!

Feel free to leave a comment if you find this useful, incorrect, or just plain uninteresting.

UPDATE 1: One additional step I took was to create one additional bucket where I uploaded all the necessary code/scripts to restore my files using s3sync (minus my S3 information).

UPDATE 2: I have changed the chmod 755 to chmod 700 so the scripts are not readable by everyone (credit Kelvin below). Also, I updated the information about the tools I use. I no longer use Cockpit to verify success; I mostly rely on the s3sync command line tools that were not present at the time I wrote the original article.

UPDATE 3: I never gave enough credit to the actual author of s3sync. Without him, this entire process would not be possible, thanks again.
