
How-to Backup Joomla! 1.5 to Amazon S3 with Jets3t

October 23rd, 2008

Introduction to backing up a Joomla website to Amazon S3 storage using Jets3t.

We all know backups are important. I’ve found what I consider a pretty good backup solution using Amazon S3. It’s super cheap, your backups are in a secure location, and you can get to them from anywhere. For my setup I’m using Debian Linux (Etch), but because the tooling is Java, none of this is tied to your current favorite flavor of Linux.

  1. Sign up for Amazon S3: http://aws.amazon.com/s3/
  2. Install the latest Java Runtime Environment: http://java.sun.com/javase/downloads/index.jsp
  3. Download Jets3t: http://jets3t.s3.amazonaws.com/downloads.html
  4. Extract the Jets3t installation to a location on your server. Example: /usr/local/jets3t/
  5. Add your AWS account key and private key to the “synchronize” tool configuration file. Example: /usr/local/jets3t/configs/synchronize.properties
  6. Use an S3 browser tool like Firefox S3 Organizer to add two buckets: one for file backups and one for MySQL backups.
  7. Add a MySQL user whose primary function is dumping data. Let’s call it ‘dump’ with the password ‘dump’:
    [code lang="bash"]mysql> GRANT SELECT, LOCK TABLES ON exampleDB.* TO 'dump'@'localhost' IDENTIFIED BY 'dump';[/code]
  8. Build your backup script (replace paths with your own) called s3backup.sh:
    [code lang="bash"]#!/bin/sh
    # Environment for the Jets3t synchronize tool
    JAVA_HOME=/usr/local/j2re1.4.2_17
    export JAVA_HOME
    JETS3T_HOME=/usr/local/jets3t
    export JETS3T_HOME
    SYNC=/usr/local/jets3t/bin/synchronize.sh
    WWWROOT=/var/www/fakeuser/
    MYSQLBUCKET=example-bucket-mysql
    WWWBUCKET=example-bucket-www
    MYSQLDUMPDIR=/usr/local/mysql-dumps
    WWWDUMPDIR=/usr/local/www-dumps
    # Perform backup logic
    dayOfWeek=`date +%a`
    dumpSQL="backup-www-example-com-${dayOfWeek}.sql.gz"
    dumpWWW="backup-www-example-com-${dayOfWeek}.tar.gz"
    mysqldump -u dump -pdump exampleDB | gzip > "${MYSQLDUMPDIR}/${dumpSQL}"
    # Compress the website into an archive
    cd "${WWWROOT}" || exit 1
    tar -czf "${WWWDUMPDIR}/${dumpWWW}" .
    # Perform Jets3t synchronize with Amazon S3, then remove the local copies
    $SYNC --quiet --nodelete UP "${WWWBUCKET}" "${WWWDUMPDIR}/${dumpWWW}"
    rm -f "${WWWDUMPDIR}/${dumpWWW}"
    $SYNC --quiet --nodelete UP "${MYSQLBUCKET}" "${MYSQLDUMPDIR}/${dumpSQL}"
    rm -f "${MYSQLDUMPDIR}/${dumpSQL}"[/code]
  9. Make sure your script has execute permission, e.g. chmod +x /root/s3backup.sh
  10. Add a cron job to perform daily backups:
    [code lang="bash"]$ crontab -e
    0 0 * * * /root/s3backup.sh[/code]
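Before trusting the cron job to run unattended, it's worth a quick integrity check on the archives before they ship to S3. A minimal sketch (the temp directory and tiny sample archive below are illustrative stand-ins, not part of the script above):

```shell
#!/bin/sh
# Sanity-check a backup archive before uploading it to S3.
# A throwaway sample archive stands in for the real site dump.
WWWDUMPDIR=`mktemp -d`
dumpWWW="backup-www-example-com-`date +%a`.tar.gz"

echo "index" > "${WWWDUMPDIR}/index.php"
tar -czf "${WWWDUMPDIR}/${dumpWWW}" -C "${WWWDUMPDIR}" index.php

# gzip -t verifies the compressed stream; tar -tzf verifies the archive listing.
if gzip -t "${WWWDUMPDIR}/${dumpWWW}" && tar -tzf "${WWWDUMPDIR}/${dumpWWW}" > /dev/null
then
    echo "archive OK"
else
    echo "archive CORRUPT -- skipping upload" >&2
    exit 1
fi
```

Dropping a check like this in before the synchronize calls means a truncated archive never overwrites a good copy in the bucket.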

That’s it. Good luck!

  1. November 2nd, 2008 at 20:26 | #1

    getting the following error messages:

    WARN [org.jets3t.service.impl.rest.httpclient.RestS3Service] Response ‘/’ – Unexpected response code 403, expected 200
    WARN [org.jets3t.service.impl.rest.httpclient.RestS3Service] Response ‘/’ – Received error response with XML message
    WARN [org.jets3t.service.impl.rest.httpclient.RestS3Service] Adjusted time offset in response to RequestTimeTooSkewed error. Local machine and S3 server disagree on the time by approximately 22621 seconds. Retrying connection.
    WARN [org.jets3t.service.impl.rest.httpclient.RestS3Service] Response ‘/backup-www-houseofgod-ws-Sun.tar.gz’ – Unexpected response code 400, expected 200
    WARN [org.jets3t.service.impl.rest.httpclient.RestS3Service] Response ‘/backup-www-houseofgod-ws-Sun.tar.gz’ – Received error response with XML message
    ERROR [org.jets3t.service.multithread.S3ServiceMulti$ThreadGroupManager] A thread failed with an exception. Firing ERROR event and cancelling all threads
    org.jets3t.service.S3ServiceException: S3 PUT failed for ‘/backup-www-houseofgod-ws-Sun.tar.gz’ XML Error Message: EntityTooLarge: Your proposed upload exceeds the maximum allowed object size (5944202455)

    Any help would be appreciated.

  2. November 6th, 2008 at 13:57 | #2

    @Mark Mims

    Amazon S3 limits PUT requests to 5 GB. You can add exclusions to the tar command in the script to leave out certain files, directories, or file types if you wish. Otherwise, you can try using better compression.
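    For example (the patterns and the tiny sample site here are just placeholders for your real ${WWWROOT}), tar's --exclude flag keeps large media out of the archive:

```shell
#!/bin/sh
# Exclude large media from the site archive with tar --exclude.
# A throwaway sample site stands in for the real web root.
WWWROOT=`mktemp -d`
WWWDUMPDIR=`mktemp -d`
echo "page"  > "${WWWROOT}/index.php"
echo "video" > "${WWWROOT}/big-video.mp4"

cd "${WWWROOT}" || exit 1
# --exclude patterns are matched against archive member names.
tar -czf "${WWWDUMPDIR}/site.tar.gz" --exclude='*.mp4' --exclude='*.avi' .

# The listing should show the page but not the video.
tar -tzf "${WWWDUMPDIR}/site.tar.gz"
```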

    If you have a lot of video files, consider using a Joomla component that stores your videos off-site (JVideo for example). Doing so will reduce the footprint of your website, reduce your S3 storage costs, and make management a lot easier.

    - Matt

  3. August 5th, 2009 at 00:17 | #3

    Hi Matt,

    I would like to know: if I follow the above backup procedure for a MySQL database, can I use it on a database that is a few TB in size?

    When mysqldump is running, will it lock the database or slow down access for users who might be requesting data in the meantime?

    Cheers!
    Hassan

  4. August 5th, 2009 at 10:27 | #4

    @Hassan Ali

    As I mentioned above, Amazon S3 limits the size of requests to ~5GB. You would need to have a strategy that involved splitting up database backup files into 5GB chunks in order to accommodate a Very Large Database (VLDB) in the +TB range.
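    One way to do that chunking (a sketch only -- the 1 KB chunk size and sample file are for illustration; production would use something like -b 4096m on the real dump) is to run the compressed dump through split:

```shell
#!/bin/sh
# Split a compressed dump into fixed-size chunks so each piece stays
# under Amazon S3's ~5 GB PUT limit; restoring is a plain cat.
DUMPDIR=`mktemp -d`
head -c 5000 /dev/urandom > "${DUMPDIR}/exampleDB.sql.gz"   # stand-in for mysqldump | gzip output

# split -b SIZE writes sequential pieces named <prefix>aa, <prefix>ab, ...
split -b 1024 "${DUMPDIR}/exampleDB.sql.gz" "${DUMPDIR}/exampleDB.sql.gz.part-"

# Reassemble in name order and verify the result is byte-identical.
cat "${DUMPDIR}"/exampleDB.sql.gz.part-* > "${DUMPDIR}/restored.sql.gz"
cmp "${DUMPDIR}/exampleDB.sql.gz" "${DUMPDIR}/restored.sql.gz" && echo "chunks verified"
```

    Each part can then be pushed with the same synchronize call the script already uses.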

    mysqldump and mysqlhotcopy both require the LOCK TABLES privilege in order to build a consistent backup. If you’re working with VLDBs, you may need to consider using replication instead of mysqldump. Are you using a SAN for the database? In my experience the best plan is to mirror database volumes to a separate storage array.

    - Matt
