I have a MySQL server hosting a production database, and I routinely take snapshots of it, which we also use as a staging environment.
Sometimes bad things happen, e.g. we delete a row (and all those connected to it through foreign keys) from production, and after fixing the bad code I proceed to import the backed-up data.
But, being a noob at ops, I am not sure of what the proper process would be.
As of now I have a script that does something like:
- get the row indexed by key K (with mysqldump -w)
- get all the rows where row_key = K from a list of tables (again with mysqldump -w)
- insert all these rows into the prod DB (with mysql)
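Concretely, the script is roughly the following sketch. All the names in it (`backup_db`, `prod_db`, `parent_table`, `child_a`, `child_b`, the key value) are made-up placeholders, and it prints the commands as a dry run rather than executing them:

```shell
#!/bin/sh
# Dry-run sketch of the three steps above: the mysqldump/mysql commands
# are printed, not run. Database, table, and key names are hypothetical.

# Build the mysqldump command for one table, restricted by a WHERE clause
# (--where is the long form of -w; --no-create-info keeps only the rows).
dump_cmd() {
  # $1 = source database, $2 = table, $3 = WHERE condition
  printf 'mysqldump --no-create-info %s %s --where="%s"\n' "$1" "$2" "$3"
}

K=42  # hypothetical key of the deleted row

# Step 1: the row itself, taken from the backup/staging database.
dump_cmd backup_db parent_table "id = $K"

# Step 2: the dependent rows, one command per "related" table
# (this table list is exactly the part I maintain by hand).
for t in child_a child_b; do
  dump_cmd backup_db "$t" "parent_id = $K"
done

# Step 3: the dumps above get concatenated into one file and replayed:
printf 'mysql prod_db < slice.sql\n'
```

The fragile part is that the table list in step 2 and the `parent_id` column names are hard-coded; nothing derives them from the actual foreign-key constraints.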
This seems hackish, and is always open to inconsistency, both on the data-finding side (e.g. the list of "related" tables in my database is incorrect) and on the data-importing side (e.g. constraints get violated because the DBs have "diverged" and I must fix the INSERT statements by hand).
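To show the kind of hand-fixing I mean: when the replay aborts on a duplicate key, I end up editing the dump, e.g. rewriting `INSERT` into `INSERT IGNORE` so colliding rows are skipped instead of killing the whole import. A self-contained illustration (the file name and the one-line fake dump are placeholders):

```shell
#!/bin/sh
# Fake one-line dump standing in for the real slice file, so this
# snippet runs on its own (slice.sql is a hypothetical name).
echo "INSERT INTO parent_table VALUES (42,'x');" > slice.sql

# The manual fix: make duplicate-key collisions non-fatal.
sed 's/^INSERT INTO/INSERT IGNORE INTO/' slice.sql > slice_fixed.sql
cat slice_fixed.sql
```

Doing this by hand every time is exactly what makes the process feel wrong.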
Is there a proper way to do this kind of thing?
I can see related questions on SF, e.g. moving data from one MYSQL db to another, but those seem to be about merging whole DBs or importing a whole backup, not about restoring a small, "self-contained" slice of data.