I've done this a couple of times, but it's needed rather rarely, which means I have to re-think it every time. So here is a simple recipe (Linux assumed):
The MySQL DB and all the files in the web directory must be saved. These are two separate steps.
1) Dump the database
The program "mysqldump" reads all tables of the specified database and creates a readable script file, which contains commands to recreate the same tables in the same DB (e.g. restore to the same server) or in another DB (e.g. move the site to a new server).
user@webserver:~$ mysqldump --add-drop-table -u <database-user> -p <database-name> > <dumpfile-name.sql>
The option --add-drop-table inserts commands to drop each table before recreating it, if it already exists. This is good to have when restoring over an existing installation: all existing tables with identical names are removed first. This is usually what I want, so I always add this option.
The string after '-u' is the DB user with read rights to the DB to dump.
'-p' makes mysqldump ask for the password. The password could be put on the command line directly after '-p', but this is a security risk because it would end up in the command history. Leaving it blank makes mysqldump query it interactively.
The last parameter in the line, just before the '>' output redirection, is the DB name.
Following the '>' redirection symbol is the name of the file that stores the script. It's a good idea to give it a .sql extension for easier recognition later on.
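A side note not in the original recipe: instead of typing the password each time, the MySQL client programs can read credentials from an option file. A minimal sketch of ~/.my.cnf (user and password values are placeholders):

```ini
# ~/.my.cnf -- read by both mysql and mysqldump
# Keep it private: chmod 600 ~/.my.cnf
[client]
user=database-user
password=secret
```

With such a file in place, the -u and -p options can be omitted; just make sure the file is readable only by you.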
2) All files in the web root directory
The files in the web root can be backed up into their own archive, resulting in two files for the complete backup. An alternative is to copy the .sql file from the first step into the web root of the site to back up and archive it together with all the other files. The result is one .tgz archive with all the data of the site.
cp dumpfile-name.sql web-root/
tar -czvf ../directory-for-backup/backup-name-and-date.tgz *
Executing the tar command from within the web root has the benefit that the web-root directory itself is not part of the archive. This allows easy installation of the backed-up site on other servers, possibly into a different web root.
The tar options -czvf mean: create a new archive, compress it with gzip, be verbose (show details during execution), and 'f' indicates that the file name of the new archive follows. The last parameter specifies the files to archive; '*' means everything in the current directory, including all subdirectories.
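The archiving step above can be wrapped in a small script with a date-stamped file name. Here is a runnable sketch; the directory names are examples, and the mysqldump step from part 1 would run just before the tar command. Note that '*' does not match hidden files such as .htaccess, so the sketch uses '.' instead, which includes them:

```shell
#!/bin/sh
# Sketch: archive everything in a web root into a date-stamped .tgz.
# WEBROOT and BACKUPDIR are example paths; adjust them to your setup.
set -e
WEBROOT=example-webroot
BACKUPDIR=example-backups
mkdir -p "$WEBROOT" "$BACKUPDIR"
echo '<html>demo</html>' > "$WEBROOT/index.html"   # stand-in site content
STAMP=$(date +%Y-%m-%d)
# cd into the web root so the web-root directory itself is not in the archive;
# '.' (unlike '*') also picks up dotfiles such as .htaccess
( cd "$WEBROOT" && tar -czf "../$BACKUPDIR/site-$STAMP.tgz" . )
tar -tzf "$BACKUPDIR/site-$STAMP.tgz"   # list what ended up in the archive
```

The subshell around the cd keeps the script's own working directory unchanged.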
3) Restore data
Copy the complete archive into the target web root.
Extract all files:
tar -xvf backup-file.tgz
tar options: extract an archive, be verbose (show details during execution); 'f' indicates that the next parameter is the archive file.
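Before extracting over a live web root, it can be worth listing the archive's contents first: tar's '-t' option prints what is inside without writing anything. A runnable sketch with throwaway files (all names are examples):

```shell
# Build a small throwaway archive to demonstrate with.
mkdir -p demo-src demo-restore
echo 'hello' > demo-src/index.html
tar -czf demo-backup.tgz -C demo-src .       # -C: archive the contents of demo-src

tar -tzf demo-backup.tgz                     # -t lists contents; nothing is written
tar -xzf demo-backup.tgz -C demo-restore     # extract into a target directory
```

The -C option tells tar to change into the given directory first, which is an alternative to cd-ing into the web root by hand.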
Next, execute the .sql script to restore all tables into the DB.
mysql -u <database-user> -p <database-name> < <dumpfile.sql>
Recently I had the problem that special characters like German umlauts were not displayed correctly. Everything else was fine. The reason was that MySQL did not interpret the characters from the dumpfile as UTF-8 encoded.
I had to do the restore again, but in a slightly different way that allows specifying the character encoding:
mysql -u <database-user> -p <database-name>
> charset utf8
> source <dumpfile.sql>
The same can most probably be achieved with the one-liner above by adding '--default-character-set=utf8', but I have not tried this yet, as I found the other way first.
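For reference, that untried one-liner would presumably look like this. --default-character-set is a real option of the mysql client, but hedging as above: this exact invocation is untested here.

```shell
mysql --default-character-set=utf8 -u <database-user> -p <database-name> < <dumpfile.sql>
```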
Remark: the -h option for mysql and mysqldump might be needed to specify the DB host if it is not on the same server (localhost).