Recently, I had to copy a large database from one server to another. I had only 100G free on the source server, and the dump would have filled that space before mysqldump could complete. So I had to find a way to make the dumps smaller. This can be done by creating a separate dump for each table and compressing each file to save disk space. Below is the script I wrote to accomplish this. Type the entire command on one line.
for I in $(mysql -u root -p node2 -e 'show tables' -s --skip-column-names); do mysqldump -u root -pMyPass node2 "$I" | gzip > "$I.sql.gz"; done
The mysql command in parentheses gets the list of tables, and the for loop iterates over them. mysqldump creates an SQL dump for each table, and gzip compresses it. I included my password in the mysqldump command because I didn't want to type it 400 times, once for each table.
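On the destination server, the compressed dumps can be loaded back one at a time. The following is a rough sketch, assuming the target database (here also called node2) already exists there and that MyPass is a placeholder password; type it on one line as well:
for F in *.sql.gz; do gunzip < "$F" | mysql -u root -pMyPass node2; done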
If you don't want to compress the SQL dumps, use the following command instead (again, type the entire command on one line):
for I in $(mysql -u root -p node2 -e 'show tables' -s --skip-column-names); do mysqldump -u root -pMyPass node2 "$I" > "$I.sql"; done
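Once the dumps exist, they still have to reach the other server. A simple option is scp; the user, hostname, and destination path below are placeholders, and the glob should match whichever files you produced (*.sql.gz or *.sql):
scp ./*.sql.gz user@destination-server:/var/backups/node2/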
When the job is complete, remove the command from your shell history, because it contains your MySQL password:
history -d <linenumber>
or change your MySQL password.
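If you aren't sure which history entry to delete, you can search for it first; the number at the start of the matching line is what history -d expects:
history | grep mysqldump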