
I am running a mail server with maildir storage. This means that quite a lot of files are created, and I have just run out of inodes. AFAIK there is no magic command to increase the number of inodes on an ext2/3/4 filesystem (or am I wrong?), so I have to back up and restore the whole filesystem. But how do I do that? I tried creating another partition and running:

dump -f - -0 /vservers/mail | restore rf - -u -v

While this seems to work, it takes much longer than I am willing to wait (it managed to create 500 empty directories in 2 hours before I stopped the process; strace showed that restore was calling lots of useless lseeks). Is there any other method to copy a complete filesystem (including sockets, device files, owners, permissions, ACLs, etc.)? Additional info: the source fs is ext3, the destination was ext4, both filesystems are on LVM, and the fs I want to move is the root fs for a vserver.
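
For reference: inode exhaustion shows up in df -i as IFree hitting 0, and since the destination filesystem has to be created from scratch anyway, its inode density can be raised at mkfs time. A sketch, with the LVM device path as a placeholder:

df -i /vservers/mail                       # confirm IFree is at or near 0
mkfs.ext4 -i 4096 /dev/mapper/vg-mailnew   # one inode per 4 KiB of space instead of the usual 16 KiB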

3 Answers


My alternative suggestion for copying the filesystem follows. Keep in mind that the closest I've come to this problem is using find+xargs+rm to clear a maildir that had gone wild with useless junk, so check where this gets you after an hour or so before trusting it to finish.

cd root_of_source ; find . -print0 | tar -c --null -T - -f - | tar -sxf - -C root_of_target

The function of this construct is:

  • Retrieve the list of files in their raw directory order
    • which is why I use find instead of tar's default ordering; I don't know that tar's default is bad, I just know that find's is good.
  • Pass that list to tar null-terminated (so that any special characters are handled correctly).
  • Take the tar-format output and untar the result (without re-sorting the input, -s) in the target directory.
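
If ACLs and extended attributes matter (see the comments below), note that GNU tar only carries them when asked. A hedged variant of the same pipeline, assuming a GNU tar recent enough (1.27+) to support --acls/--xattrs:

cd root_of_source ; find . -print0 | tar -c --null --acls --xattrs -T - -f - | tar -sx --acls --xattrs -f - -C root_of_target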

Regardless of the method you use:

  • If this data is starting and ending on the same physical disks, your performance will naturally suck a LOT compared to normal operations (lots of seeks between reading from the source and writing to the destination).
  • If you have some CPU available, a little compression shouldn't hurt, and may help. Just add a 'gzip -c -1' stage between the tar commands, and a -z to the second tar (see the sketch below).
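
A sketch of that compressed variant, assuming GNU tar and gzip:

cd root_of_source ; find . -print0 | tar -c --null -T - -f - | gzip -c -1 | tar -szxf - -C root_of_target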
Slartibartfast
  • In my experience, tar is much slower than dump, especially with Maildir type loads where you have many small files. The last time I did this, tar took over 30 minutes, and dump took under 5. Compression is also a complete waste of CPU time since you're just decompressing it again right away. – psusi Sep 12 '11 at 02:32
  • I don't strictly disagree with you w/r/t compressing being a waste of time, but I think it is worth trying. In my head (a crazy mixed up place, to be sure) when writes block, the OS buffers piped data pending for the blocked process for a short while. That buffer will hold more data if compressed than uncompressed, giving the reading tar the ability to read more before blocking due to being unable to write to the output. So basically I'm just saying that if you're willing to try anything, see if compressing has a positive effect. – Slartibartfast Sep 12 '11 at 06:19
  • But will tar preserve file permissions, ACLs, owners, special files (devices, sockets, hard/soft links), etc.? – Tomasz Grobelny Sep 12 '11 at 10:19
  • @Tomasz Grobelny, of course it will; it wouldn't be usable as a backup tool otherwise. – psusi Sep 12 '11 at 13:42
  • I got warnings like this: "tar: ./var/spool/postfix/private/verify: socket ignored" but the command did its job perfectly. – Tomasz Grobelny Oct 02 '11 at 18:09

Have you considered trying to use unionfs to union the existing filesystem with a new one, with writes going to the new filesystem?

I've never used UnionFS myself, but from what I've heard it could let you go live and start writing data to disk again without re-creating the filesystem: union the existing filesystem read-only with a new filesystem as the writable layer. There may be performance hits or other issues that make this a no-go, but if you're just looking for ideas and have some time while the dump is running, you can probably research this into a workable set of commands.
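
For the record, a rough sketch of what that might look like with unionfs-fuse (untested; the paths are placeholders, and cow enables copy-on-write so the old filesystem stays untouched while all new writes land on the new one):

unionfs-fuse -o cow /mnt/newfs=RW:/vservers/mail=RO /mnt/union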

Slartibartfast

Other than dump | restore, you can use tar | tar, or just cp -ax or rsync to copy all of the files to the new fs. In my experience, dump | restore is the fastest method.
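
For the rsync route, a sketch that preserves owners, permissions, hard links, ACLs, and xattrs while staying on one filesystem (the destination mount point is a placeholder):

rsync -aHAXx --numeric-ids /vservers/mail/ /mnt/newfs/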

For reference, on a rather old and slow machine, it takes me 35 minutes to duplicate an fs using dump | restore where the fs has 420,774 inodes using 7.8 GB of space.

By comparison, it takes 61 minutes using tar | tar, and 64 minutes using cp -ax.

A few months ago, I posted a patch to make dump faster, but it was after 0.4b44 was released, and there has not been another release yet. You can find the patch on the mailing list. Building 0.4b44 yourself with this patch applied may make a significant difference. For me, it reduced the time from 35 minutes to only 25. Feedback on the mailing list would be helpful.
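
If you want to try it, building patched dump from source is roughly the following (the tarball and patch file names here are placeholders; the real patch is on the mailing list):

tar xzf dump-0.4b44.tar.gz          # release tarball, name assumed
cd dump-0.4b44
patch -p1 < ../dump-speedup.patch   # placeholder name for the mailing-list patch
./configure && make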

psusi