#!/bin/bash
#mysqlbackup is a script written to dump the eglug databases
#+ Diaa Radwan
DATE=$(date +%d-%m-%y)
BACKUP_PATH='/path/to/backups'
DBACKUP_PATH='/path/to/document_root'
DB_NAME='dbname'
DB_USER='dbuser'
PASSWORD='pass'
DOCUMENTROOT='/path/to/docroot'
MYDUMP_OPTIONS='--add-drop-table -acv --compatible=mysql40 --host dbhost'
case "$1" in
databases)
for db in $DB_NAME ; do
    mkdir -p "$BACKUP_PATH/$db/$DATE"
    mysqldump $MYDUMP_OPTIONS -u"$DB_USER" -p"${PASSWORD}" "$db" > "$BACKUP_PATH/$db/$DATE/${db}.mysql"
    tar cjvf "$BACKUP_PATH/$db/$DATE/${db}.mysql.tar.bz2" "$BACKUP_PATH/$db/$DATE/${db}.mysql"
    rm -f "$BACKUP_PATH/$db/$DATE/${db}.mysql"
done
;;
tables)
for db in $DB_NAME
do
    mkdir -p "$BACKUP_PATH/$db/$DATE/dbtables"
    for table in $(echo "show tables" | mysql -u"$DB_USER" -p"${PASSWORD}" "$db" | grep -v "Tables_in_")
    do
        mysqldump $MYDUMP_OPTIONS -u"$DB_USER" -p"${PASSWORD}" "$db" "$table" > "$BACKUP_PATH/$db/$DATE/dbtables/$table.mysql"
        tar cjvf "$BACKUP_PATH/$db/$DATE/dbtables/$table.mysql.tar.bz2" "$BACKUP_PATH/$db/$DATE/dbtables/$table.mysql"
        rm -f "$BACKUP_PATH/$db/$DATE/dbtables/$table.mysql"
    done
done
;;
docroot)
mkdir -p "$DBACKUP_PATH"
tar cjvf "$DBACKUP_PATH/eglug.org_docroot_${DATE}.tar.bz2" "$DOCUMENTROOT"
;;
dt)
sh $PWD/$0 databases
sh $PWD/$0 tables
;;
*)
echo "USAGE: $0 {tables|databases|dt|docroot}"
esac
Comments
To use it
to use this script to back up the eglug mysql databases, do we need any more mysqldump options?
peace
Live Free Or Die Trying... GPG Key ID:0x6FD809F4
nah it's okay
i saw the real script. we need to test the backups it produces. and why are we using --compatible=mysql40 if we are on mysql 5? for example, i'm using mysql 5 too.
peace
Live Free Or Die Trying... GPG Key ID:0x6FD809F4
we need to have a test for
we need to have a test for the dump; it will be found at ~/backup/mysql/eglug/$DATE/
NB: the script is updated to backup the document root
Diaa Radwan
Looks good IMO
The script is fine. It could use some minor improvements code-wise, but otherwise it does the job right. Anyone have comments? I hope we start using this asap.
Alright, some things that won't prevent us from using the script, but just for the sake of code quality..
sh $PWD/$0 databases; sh $PWD/$0 tables is so ugly. Why not wrap each step in a function and call both, instead of invoking two more shells?
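A minimal sketch of that refactor (the function bodies are stubbed here; the real mysqldump/tar steps from the script above would go inside them):

```shell
#!/bin/bash
# Sketch only: each step becomes a function, and "dt" calls both
# in the current shell instead of re-invoking the script via sh.
backup_databases() { echo "backing up databases"; }  # real dump/tar steps go here
backup_tables()    { echo "backing up tables"; }     # real per-table steps go here

case "$1" in
    databases) backup_databases ;;
    tables)    backup_tables ;;
    dt)        backup_databases && backup_tables ;;  # no extra shells spawned
    *)         echo "USAGE: $0 {tables|databases|dt}" ;;
esac
```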
Process Killer on Server prevents script
After a bit of investigation, it turns out there is a process killer on the eglug server that kills processes after they exceed certain limits (limits that I don't know). This happens especially with tar; maybe it consumes more memory than other processes. Anyway..
The result is that most of the backups are corrupted.
Even renicing the process didn't let it slip past; the killer shut it down :-(.
Any ideas?
bzip2--
The backup script was compressing with bzip2; after changing that to gzip, the process killer started to like us. So far backups are completing successfully.
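For reference, the change amounts to swapping tar's -j (bzip2) flag for -z (gzip); the file below is a stand-in for the real dump under $BACKUP_PATH:

```shell
# Stand-in file in place of the real mysqldump output.
echo "CREATE TABLE demo (id INT);" > /tmp/demo.mysql
# Before: tar cjvf /tmp/demo.mysql.tar.bz2 ...   (bzip2, got killed)
# After: compress with gzip instead via -z.
tar czf /tmp/demo.mysql.tar.gz -C /tmp demo.mysql
```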
So it works? Yes. Cool? No. That's not a solution. If for any reason gzip takes more processing time (e.g. a larger db), it might be killed as well. I kept the renicing in the script; that should make things cooler, but it doesn't mean we should mark this problem as fixed.
MSameer suggested that we could split large files before gziping. Indeed a good idea. Any other ideas?
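A sketch of MSameer's idea, streaming the dump through gzip and splitting it into fixed-size chunks. The 50m chunk size and the paths are assumptions, and stand-in data is used below so the pipeline itself can be demonstrated:

```shell
# Real usage would look something like:
#   mysqldump $MYDUMP_OPTIONS -u"$DB_USER" -p"$PASSWORD" "$db" \
#       | gzip | split -b 50m - "$BACKUP_PATH/$db/$DATE/$db.mysql.gz.part-"
# and restore by concatenating the parts back into one gzip stream:
#   cat "$BACKUP_PATH/$db/$DATE/$db.mysql.gz.part-"* | gunzip | mysql ...
# Demonstration with stand-in data and small chunks:
rm -f /tmp/demo.gz.part-*
seq 1 100000 | gzip | split -b 20k - /tmp/demo.gz.part-
# Reassemble and decompress; the last line of the stream comes back intact.
cat /tmp/demo.gz.part-* | gunzip | tail -n 1
```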
we can try this
we can try to renice to 19 or something and see what happens
peace
Live Free Or Die Trying... GPG Key ID:0x6FD809F4
that's how it's already..
For the sake of documentation..
I've found that the mysql dumps completely ignore the auto_increment attribute on all primary keys!!
Mystery resolved, thanks to MSameer. He pointed me to this bug: http://bugs.mysql.com/bug.php?id=14515 which says it is triggered when you use --compatible=mysql40.
Luckily though, the fix is easy. We just need to run these SQL statements against the DB after restoring it from the backup:
Note for the lazy who don't wanna read the full bug thread: this bug is fixed in 4.1.21 and 5.0.23.
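For illustration only, the fix-up is one ALTER TABLE per affected primary key; the table and column names below are placeholders, not the real eglug schema:

```sql
-- Hypothetical example: re-add the AUTO_INCREMENT attribute that
-- --compatible=mysql40 dumps drop (MySQL bug #14515).
ALTER TABLE some_table MODIFY some_id INT UNSIGNED NOT NULL AUTO_INCREMENT;
```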
Code (note for self)
Another solution