
I was importing a database dump - almost 1GB worth:

mysql -uroot -ppassword < mysqldump.sql

Partway through, I received warnings that the hard drive was almost full. I panicked and Ctrl-C'd the import. Twice.

Looking at my DB, I can see that the import was canceled. However, when I look in /var/lib/mysql/ I can see that ibdata1 is still too big - I guess it still contains all the imported data. It's about twice the size it should be for the data actually in the DB.

I ran mysqlcheck --optimize -A, but if anything it made the ibdata1 file larger.

How can I clear this orphaned data out? I have about 100MB left on the hard drive ...

Rudolf Vavruch

1 Answer


This is a known bug in MySQL: bug #1341. It has been open for 8 years now. You might want to try a workaround posted on Stack Overflow:

The fact that ibdata1 doesn't shrink is a particularly annoying feature of MySQL. The ibdata1 file can't actually be shrunk unless you delete all databases, remove the files and reload a dump.

But you can configure MySQL so that each table, including its indexes, is stored as a separate file. In that way ibdata1 will not grow as large.

It was a while ago that I did this. However, to set up your server to use separate files for each table, you need to change my.cnf to enable this:

[mysqld]
innodb_file_per_table=1

http://dev.mysql.com/doc/refman/5.5/en/multiple-tablespaces.html
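
To confirm the setting is active after restarting the server, a quick check from the mysql client (just a sketch; the variable name is the one set above):

-- should report ON once the server has picked up the new my.cnf
SHOW VARIABLES LIKE 'innodb_file_per_table';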

As you want to reclaim the space from ibdata1, you actually have to delete the file:

  1. Do a mysqldump of all databases, procedures, triggers, etc.
  2. Drop all databases except the mysql-db
  3. Stop mysql
  4. Delete ibdata1 and the ib_logfile files
  5. Start mysql
  6. Restore from dump

When you start MySQL in step 5, the ibdata1 and ib_logfile files will be recreated.
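
From the shell, the whole procedure looks roughly like this (a sketch only: the dump path is a placeholder, the service command depends on your distribution, and the ib_logfile names assume a default install):

# 1. dump everything, including stored routines and triggers
mysqldump -uroot -p --all-databases --routines --triggers > /path/to/alldb.sql

# 2. from the mysql client, drop every database except the mysql one
#    DROP DATABASE your_db;

# 3. stop the server
service mysql stop

# 4. remove the shared tablespace and the InnoDB redo logs
rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

# 5. start the server again - ibdata1 and the ib_logfile files are recreated, now small
service mysql start

# 6. reload the dump
mysql -uroot -p < /path/to/alldb.sql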

Now you're good to go. When you create a new database for analysis, the tables will be located in separate .ibd files, not in ibdata1. As you usually drop the database soon after, the .ibd files will be deleted.

http://dev.mysql.com/doc/refman/5.1/en/drop-database.html
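
You can see the effect on disk; for example (hypothetical database and table names):

# with file-per-table enabled, each InnoDB table gets its own tablespace file
ls /var/lib/mysql/analysis_db/
# report.frm  report.ibd

# dropping the database from the mysql client removes those files as well:
# DROP DATABASE analysis_db;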

Or, if you want to keep the database running while you do this, you could try the following:

Enable the innodb_file_per_table option without shutting down the database. The idea is:

  • Configure your original database as master. Unless your database is already using binlogs for security, this is the only step that will require restarting MySQL (see the config sketch after this list).
  • Make a backup of the original database using Xtrabackup.
  • Restore the backup and run a second instance of MySQL.
  • Run mysqldump on the second instance.
  • Stop the second instance, but do not delete it yet.
  • Create a new database and start the third instance of MySQL with the enabled option innodb_file_per_table.
  • Restore the dump by feeding it into the third instance of MySQL.
  • Configure the third instance as slave and run the replication.
  • When the initial replication finishes and the slave catches up with the master, reconfigure your clients to use the new instance.
  • That's it. You can stop the first instance now and delete it.
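
A rough sketch of the master/slave plumbing involved (the server IDs, hostname, replication credentials and binlog coordinates below are placeholders, not values from the original answer):

# my.cnf on the original (master) instance - this is the one restart mentioned above
[mysqld]
server-id=1
log-bin=mysql-bin

# my.cnf on the new (third) instance
[mysqld]
server-id=3
innodb_file_per_table=1

-- on the third instance, after loading the dump, point it at the master and start replication
CHANGE MASTER TO
    MASTER_HOST='original-host',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl-password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=107;
START SLAVE;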

A detailed article can be found here.

Bart De Vos
  • Thanks for the detailed response. I'll give this a try when I get home. – Rudolf Vavruch Oct 14 '11 at 12:13
  • I have the same steps when I answered this question a year ago : http://stackoverflow.com/questions/3927690/howto-clean-a-mysql-innodb-storage-engine/4056261#4056261. Notwithstanding, your answer is totally correct. It should be accepted. It gets a +1 for answering this particular question first. – RolandoMySQLDBA Oct 14 '11 at 15:24
  • I also like your plug for XtraBackup in case anything goes wrong. – RolandoMySQLDBA Oct 14 '11 at 15:26
  • Hey Bart, I just read your StackOverflow link. Someone answered it a month before I did. Anyway, cleaning up ibdata1 is a necessary evil. At the very least, it should never have to be revisited. – RolandoMySQLDBA Oct 14 '11 at 15:29
  • Thank you. That seems to have fixed the problem, a tick for you sir. Does splitting up the tables into files speed up the db at all? Why the big log files, what does that do? – Rudolf Vavruch Oct 15 '11 at 15:17
  • Hey. It sure does speed up things. The big log files are common in DB-world. It's to be able to revert a state of a DB. – Bart De Vos Oct 16 '11 at 01:23