Recently I was working on optimising our internal application at work, which is heavily MySQL driven. I was focusing on reworking SQL statements and making sure we are utilizing indexes properly. Modifying and toying around with tables on a live server is less than ideal, so I needed a way to reliably and quickly move the data elsewhere so I could steadily experiment and fine-tune SQL statements and tables. Instead of using the available phpMyAdmin export options I decided to work directly with mysqldump, which gives you more control.
Here’s the syntax for mysqldump from the MySQL docs:
shell> mysqldump [options] db_name [tbl_name ...]
An example of this for an InnoDB table would be:
mysqldump -u myusername -p --single-transaction exampledb exampletable > exampletable.sql
This command, executed on the server where MySQL is installed, will export the table to the file “exampletable.sql” in the current working directory.
By default mysqldump locks the tables. That means any insert/update/delete statements issued during the dump will wait until it is finished. Again, this is less than ideal if you’re exporting data from your live database. In our case we can skip locking the tables since we just want this data for testing purposes, but it’s worth learning why you sometimes do want the tables locked. In the example above the table uses the InnoDB storage engine, so you want the --single-transaction option in order to avoid locking the table. If your table uses the MyISAM storage engine, use the --lock-tables=false option instead.
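For comparison, a dump of a MyISAM table without locking would look something like this (same hypothetical database and table names as before):
mysqldump -u myusername -p --lock-tables=false exampledb exampletable > exampletable.sql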
Now to load it into a new database, we execute the following command:
mysql -u myusername -p testdb < exampletable.sql
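Note that the target database (testdb here) has to exist before the import. If it doesn’t, you can create it first, for example:
mysql -u myusername -p -e "CREATE DATABASE testdb"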
Importing the table will take a bit longer than exporting it due to disk I/O, but the whole process is generally pretty quick.
Now you can profile your SQL statements, tune indexes etc. on your copy of the table without affecting your live application.
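As a rough sketch of what that tuning might look like, EXPLAIN shows whether a query against the copied table actually uses an index, and ALTER TABLE lets you try adding one (the column and index names below are just placeholders):
mysql> EXPLAIN SELECT * FROM exampletable WHERE customer_id = 42;
mysql> ALTER TABLE exampletable ADD INDEX idx_customer_id (customer_id);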
Once you’re done with the tuning and you know what you want, apply your findings to the live table.