Posts Tagged ‘script’

A simple script to backup all mysql databases into separate files

Hi, today I’m going to present a very simple script that I use to back up all my MySQL databases into separate files compressed with gzip. This way, to restore a database you just need to extract the dump from its file and restore it with this command:
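The original post's restore command is not reproduced here, so this is a sketch of what a restore typically looks like; the database name, the date-named folder and the dump filename are all placeholders for your own values:

```shell
# Decompress the dump and pipe it straight into mysql.
# "mydb" and the path are examples: point them at your own
# database and at the .sql.gz file produced by the backup script.
gunzip < /backup/2015-03-01/mydb.sql.gz | mysql -u root -p mydb
```

Note that the target database must already exist (create it first with `CREATE DATABASE mydb;` if needed).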

The script is the following:
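The script itself did not survive in this copy of the post, so below is a minimal sketch reconstructed from the explanation that follows. The variable names (MYSQL_USER, MYSQL_PASS, date, BACKUP_DIR) match the ones described next; the concrete values and the exact date format are assumptions:

```shell
#!/bin/bash
# db_backup.sh -- dump every MySQL database into its own gzipped file.
# A minimal sketch based on the description below; adjust values to taste.

MYSQL_USER="backup"           # a user with read privileges on all databases
MYSQL_PASS="PASSWORD"         # that user's password
date=$(date +%Y-%m-%d_%H-%M)  # current date/time, used to name this backup
BACKUP_DIR="/backup/$date"    # every run gets its own date-named folder

# Check that the output folder exists; create it otherwise.
[ -d "$BACKUP_DIR" ] || mkdir -p "$BACKUP_DIR"

# Get all database names, excluding the internal schemas.
databases=$(mysql -B -s -u "$MYSQL_USER" --password="$MYSQL_PASS" \
    -e 'show databases' \
    | grep -vE '(information_schema|performance_schema|mysql)')

# Dump each database and compress it into a separate .sql.gz file.
for db in $databases; do
    mysqldump -u "$MYSQL_USER" --password="$MYSQL_PASS" "$db" \
        | gzip > "$BACKUP_DIR/$db.sql.gz"
done
```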

First there are some user defined variables:

  • MYSQL_USER and MYSQL_PASS: the username and the password of a user with read privileges on all the databases to include in the backup;
  • date: it’s simply the current date/time, used to easily identify different backups;
  • BACKUP_DIR: the folder where the backups will be saved; I used a /backup folder, and every backup goes into a subfolder named after the current date. This will create a structure like this:
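The original layout example is missing here; with two hypothetical databases named blog and shop, a couple of nightly runs would produce something like this (the exact folder names depend on how you format the date variable):

```
/backup/
├── 2015-03-01_03-00/
│   ├── blog.sql.gz
│   └── shop.sql.gz
└── 2015-03-02_03-00/
    ├── blog.sql.gz
    └── shop.sql.gz
```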

Then comes the script:

  • first there is a check on the output folder, to verify that it exists and to create it otherwise;
  • then we obtain all the database names, in one of two ways:
    • if you want to exclude some databases, include their names in the grep -vE command using | as separator: mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -vE '(information_schema|performance_schema|mysql)'
    • otherwise, to get all the names: mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases'
  • lastly, mysqldump will create a dump of each database and gzip will compress it into its own file.

As I mentioned, you need a user with read privileges on all the databases, and forget about using the root user :)

So, to create a user just use these commands:

  • open a mysql console using the root account: mysql -uroot -p
  • create a user named USER with a PASSWORD: GRANT LOCK TABLES, SELECT ON *.* TO 'USER'@'%' IDENTIFIED BY 'PASSWORD';
  • then finally flush the privileges: flush privileges;
  • close the mysql console with \q, make the script executable with chmod +x db_backup.sh, and test the script.

Now you may also want to add a crontab entry to execute it automatically.
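For example, an entry like this (added with crontab -e as the backup user) would run the script every night at 03:00; the path is a placeholder for wherever you saved the script:

```
0 3 * * * /path/to/db_backup.sh
```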

As always, if you have any questions or suggestions just use the comments below :)

A simple backup script using rsync

Today I’m going to present a simple backup script that I wrote a while ago to maintain backups of my home computers and my server.

This script requires only one program to be installed, rsync, and optionally ssh if you want to store your backups on a remote machine.

With rsync you can do a lot of things, but in this post I’m going to explain only a simple way to copy a folder to another one located:

  • on the same computer (for example an external hard drive);
  • on a remote computer (for example a server or a NAS).

Let’s start by examining the contents of the following script:
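The script itself is missing from this copy of the post, so here is a minimal sketch reconstructed from the sections described below; the variable names (DEST, HOST, PORT, KEY_PATH, LOG_FILE) match the ones discussed, while the concrete values are examples:

```shell
#!/bin/bash
# backup.sh -- rsync-based backup of a few folders to a remote machine.
# A minimal sketch based on the description below; every value is an example.

DEST="/backup/myhost"              # destination folder (full path)
HOST="user@remote-host"            # remote user + hostname (remote backups only)
PORT=22                            # the ssh port
KEY_PATH="$HOME/.ssh/id_rsa"       # private key of the user running the script
LOG_FILE="/var/log/backup.log"     # where all rsync output is collected

# Print a small header to separate this run's output from older ones.
echo "===== Backup started: $(date) =====" >> "$LOG_FILE"

# One rsync command per folder to back up; add more lines as needed.
rsync -aR --stats --delete --rsh="ssh -p $PORT -i $KEY_PATH" /etc  "$HOST:$DEST" >> "$LOG_FILE" &
rsync -aR --stats --delete --rsh="ssh -p $PORT -i $KEY_PATH" /home "$HOST:$DEST" >> "$LOG_FILE" &
```

For a purely local backup (for example to an external hard drive), drop HOST, PORT and KEY_PATH and use the local form of the rsync command shown further down.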

The first section of the script will contain the user defined variables:

  • DEST: the destination folder of your backup; this is the full path on your local or remote machine;
  • HOST: used only if you need to save the data on a remote machine; it’s the combination of the username (with write permissions on the DEST folder) on the remote machine plus the hostname, in the format username@hostname;
  • PORT: the ssh port, 22 by default;
  • KEY_PATH: the full path to the private key of the local user who executes the script;
  • LOG_FILE: the full path to a log file on the local machine (logs are usually stored in /var/log/).

The second section just prints a small header to the log file at each run of the script, to separate it from older output and record the date of the last backup.

The third and most important section issues the rsync commands, one per folder to back up. In this example we are going to back up two folders, /etc and /home, which store the system configuration and the users’ data respectively. Add one line per folder you want to back up.

Each rsync command has some parameters:

  • -a: ‘archive’ activates recursion into folders and preserves all file metadata;
  • -R: ‘relative’ will recreate the same folder structure on the server;
  • --stats: rsync will write a small report at the end of the job;
  • --delete: activates the propagation of file deletions: if a file is deleted on the source it will be deleted in the backup as well; remove this if you want to preserve all your old data!

Then there are the connection parameters (if applicable), the source folder and the destination folder:

  • to store the data on a remote machine: rsync -a --stats --delete --rsh="ssh -p $PORT -i $KEY_PATH" /home $HOST:$DEST
    We use ssh as the transfer protocol, with the private key for authentication; then we specify the source folder and the destination folder on the server.
  • to store the data on the same local machine: rsync -a --stats --delete /home $DEST
    We are telling rsync to copy the data from /home to $DEST.

At the end of the command there is the & operator, which tells bash to run the process in the background, and >> $LOG_FILE, which redirects all output to our log file.


Now, a three-step how-to to create a private/public key pair to authenticate via ssh:

  • execute this command with the user that will launch the script (manually or with a cron):

    Then follow the instructions: you have just created your public and private keys. Don’t use any passphrase if you want to execute this script from a cron job, otherwise the script will wait for a password that you can’t enter; just press Enter twice for a blank passphrase;
  • save your public key on the server so you can authenticate there using your private key (do not share the private key with anyone!):

    Change remote-host to your server host;
  • try to login with:

    If everything is OK, no password should be requested.
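The commands for the three steps above are missing from this copy of the post; they are the standard OpenSSH ones, sketched here with "user" and "remote-host" as placeholders for your own account and server:

```shell
# 1. Generate the key pair as the user that will run the backup script;
#    press Enter twice to leave the passphrase blank (needed for cron).
ssh-keygen -t rsa

# 2. Install the public key on the server.
ssh-copy-id user@remote-host

# 3. Test the login: no password should be requested now.
ssh user@remote-host
```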

As always, if you have any questions or suggestions just use the comments below :)
