
I am running a BackupPC server with a hardware RAID 5 for the main storage of the backups. Since the machine was created on a tiny budget, the controller is a 3Ware 9500S-4LP for the PCI port and the drives are slow 200 GB SATA types.

However, even with this hardware, I see far worse performance than expected. The clients and the backup server use rsync as a transport over a Gigabit network, which is never even close to saturation. Backing up a normal Linux installation of about 5 GB takes over three hours.

So I monitored the server using the atop process monitor. It showed that neither processor nor memory use are critical, but read accesses to the RAID are the bottleneck.

When I built the server, I chose RAID 5 because according to this tabular overview of RAID characteristics it seemed the best compromise between read performance and space efficiency on a 4 port controller.

By the way, although this is a backup server, using rsync means there are far more reads than writes here -- around 1000 times more, currently. I suppose that moving and linking older files in BackupPC's hierarchy of old backups also contributes a lot to this.

So, how would you optimize performance on this machine? I have the following tunables:

  • Using a different transport with BackupPC (tar is an alternative)
  • Changing the array's filesystem from ext4 (noatime) to something else
  • Changing the RAID level (preferably not, due to data loss)
  • Recreate the array with a different stripe size (preferably not, due to data loss)
  • adding more memory to use as a buffer cache
  • adding a second controller and more drives (yes, I have those around)
  • Change the controller (preferably not, due to financial constraints)
  • Change all drives (preferably not, due to financial constraints)
jstarek
  • Are you using rsync with ssh? Is compression enabled in ssh? Would compression make sense? – Nils Apr 29 '12 at 19:53
  • "the controller is a 3Ware 9500S-4LP" — if you have extra-bucks, you'd better buy Linux SoftRAID which (gr8 thanks to Neil Brown) doesn't have lots of all those "preferably not, due to data loss". Ough, I've forgotten — it seems to be freeware. ;-P – poige Apr 29 '12 at 20:11
  • @Nils: Yes, rsync is used with SSH. SSH does not use compression. Since I am fairly certain that the network transfer is not my bottleneck, what advantages would enabling compression give me? – jstarek Apr 29 '12 at 20:25
  • @Poige: Sorry, I don't quite follow -- what do you mean with "preferably no, due to data loss", and what advantage would a software RAID give me above a hardware solution? Checksumming would have to be done in the CPU then... – jstarek Apr 29 '12 at 20:27
  • @jstarek, "[ 0.357986] xor: using function: generic_sse (10012.000 MB/sec)" — beat this. By citing all your "preferably not, due to data loss" caveats, I mean that Linux software RAID is a way more advanced RAID solution than all those "hardware" RAIDs. – poige Apr 29 '12 at 20:32
  • compression can slow down transfers (on server and/or client-side) this does not seem to be the case here. – Nils Apr 29 '12 at 20:36
  • It is strange that you see read-requests during transfer on the target side. Normally rsync first looks if it has to transfer something (reading both sides) and then starts the transfer - read on one side, write on the other. – Nils Apr 29 '12 at 20:39
  • @Nils: Yes, this also surprised me... I guess rsync needs to read all original files to check whether there are changes on the client side. Interestingly, conventional wisdom is that backup servers need to be built for write loads only. -- Anyway, that seems to be what I need to optimize for. – jstarek Apr 30 '12 at 08:46

4 Answers


Here's a short random-IO primer: 7200 RPM disk drives do roughly 100 IOPS. 15k RPM drives double that, about 200 IOPS. With a RAID-5 array, the best attainable IOPS is the number of data drives times single-drive performance; as you have 3 data drives, the best sustained value you'll ever get is about 300 IOPS.

Use iostat -mx 5 while your backup is running. If you see read or write operations (the r/s and w/s columns) in the 300 range, you're basically maxing out your setup.

Note: most modern SSDs achieve 20,000 IOPS. A pair of SSDs in RAID-1 can put a rack full of spinning rust to shame. SSDs change everything. When facing an IOPS problem, 99% of the time the solution is called "SSD".

If you're not currently maxing out your RAID array's output, there are a few things you can do:

  • Increase the queue depth. The standard kernel queue depth is OK for old single drives with small caches, but not for modern drives or RAID arrays:

    echo 512 > /sys/block/sda/queue/nr_requests

  • try different I/O schedulers. CFQ (the default scheduler in modern kernels) often sucks with server workloads:

    echo 'noop' > /sys/block/sda/queue/scheduler

  • try RAID-10. RAID-10 doesn't pay RAID-5's parity read-modify-write penalty and fares better than RAID-5 in single-threaded operations.

  • alternatively, try running as many threads as there are data drives. It may enhance overall performance.
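These sysfs tweaks do not survive a reboot. A minimal sketch of re-applying them at boot, e.g. from /etc/rc.local (the device name sda is an assumption; use the block device your 3ware controller actually exposes):

```shell
#!/bin/sh
# Re-apply block-queue tuning at boot. "sda" is an assumption; substitute
# the device node your RAID controller presents.
echo 512  > /sys/block/sda/queue/nr_requests
echo noop > /sys/block/sda/queue/scheduler
```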
wazoox
    Good points... although, unfortunately, SSDs are out of the question because they're just too expensive for the machine in question. But I'll keep the figures in mind! I'll try the iostat measurement later and report back. – jstarek Apr 30 '12 at 12:19
    Sometimes I wish there would be a Clippy (TM) popping up every once in a while at your favourite Linux distribution. "It seems you are installing BackupPC. Would you like me to help you changing the I/O scheduler and ordering more RAM for your server?" – Janne Pikkarainen Apr 30 '12 at 12:24
    Accepted as "correct" although the other answers were good, too. In the end, enlarging the queue depth brought the largest performance improvement for me. Together with using the "anticipatory" scheduler, I was able to get from ~280 writes per second to ~330, and I think that's the most I can expect from those drives. Unfortunately, I was not able to test whether the better file deletion performance of XFS compared with ext4 would have an impact because I'd prefer to keep the machine up for now. – jstarek May 02 '12 at 12:21
  • @jstarek, for XFS, only a very recent version (using kernel 3.0 or better) will give you a better file deletion performance than ext4. For enhanced metadata performance, if you have a Battery Backup unit or like to live dangerously, you can turn barriers off. For ext4 add the barriers=0 options, for XFS add nobarrier mount option. – wazoox May 02 '12 at 15:09
    ...I guess that, on a backup server, I prefer not to run that risk just to get a few IOPS more, but thanks for mentioning this :-) – jstarek May 02 '12 at 15:15

First try benchmarking the RAID performance locally, to see if the RAID is really the problem. You can use:

dd if=/dev/zero of=/your/raid/zerofile bs=16M

and then after ~10 seconds

killall -SIGUSR1 dd

in another terminal to see the local write speed. If the speed is good enough, try another network method; start with netcat (check the man page for the first command; some distros don't need the '-p' flag):

pc 1: nc -l -p 12345 > /your/raid/file
pc 2: cat /some/big/file | nc ip.of.pc.1 12345

I've had problems with slow speeds with rsync over SSH (12-15 MB/s on a gigabit link, but on relatively slow PCs).

After you know whether the problem is the disk or the rsync/SSH speed, you can continue debugging.
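Since the discussion here points at small random reads rather than sequential transfers, a rough random-read test in the same spirit might look like this (a sketch: the file path and block count are assumptions, and iflag=direct bypasses the page cache so you measure the disks rather than RAM):

```shell
# Read 1000 random 4 KiB blocks from a test file on the array.
# /your/raid/zerofile is the file written by the dd command above;
# the modulus 4096 assumes it is at least 16 MiB in size.
for i in $(seq 1 1000); do
    dd if=/your/raid/zerofile of=/dev/null bs=4k count=1 \
       skip=$((RANDOM % 4096)) iflag=direct 2>/dev/null
done
```

Dividing 1000 by the elapsed wall-clock time gives a rough read-IOPS figure for the array.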

mulaz
  • Thanks for your answer. Both raw network throughput and sequential write / read performance on the RAID are fine. It seems that only random reads of small files are slow. – jstarek Apr 29 '12 at 17:51
    @jstarek random reads/writes are always slow on classic HDDs. Try transferring a big file with your current setup to see if it's faster, but small file reads/writes are gonna be slow, and ssh transfer slows down things too. – mulaz Apr 29 '12 at 17:58
  • I'd advise adding `oflag=direct` to the aforementioned `dd if=/dev/zero of=/your/raid/zerofile bs=16M`. Also, simply adding `count=SomeReasonableNumber` would make this "~10 sec" postponed `kill` needless. – poige Apr 29 '12 at 20:06
  • I've done those tests, again with the rather clear result that I need to optimize for small reads. I've altered the title of my question to reflect this. – jstarek Apr 30 '12 at 08:45

BackupPC is a very I/O-intensive program and can cause lots and lots of disk seeks. With low-end hardware there's only so much you can do, but try the following:

Optimizing BackupPC itself

  • The maximum number of concurrent backups and administrative operations is a huge factor in BackupPC performance. Set it too high and your low-end hardware (or even expensive hardware...) grinds to a halt. Set it too low and you're not using your hardware's capabilities. With commodity hardware, try anything between 2 and 6 concurrent backups and see what works for you.

  • If not needed, disable BackupPC pool compression.

  • Even though BackupPC's Perl rsync library does not fully utilize rsync 3.x, make sure you have rsync 3.x in use.
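A quick way to check both of these points from the shell (the config path is an assumption; it varies by distro, /etc/backuppc/config.pl is common on Debian-based systems):

```shell
# rsync 3.x is recommended on both the server and the clients:
rsync --version | head -1
# Pool compression is controlled by $Conf{CompressLevel} in config.pl;
# 0 disables compression entirely:
grep CompressLevel /etc/backuppc/config.pl
```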

Optimizing server

  • Make sure you choose the correct I/O elevator. With RAID and lots of concurrency the default cfq can be a crappy choice; most of the time the RAID controller knows better, and noop can be good. With certain workloads and el cheapo RAID controllers, deadline can also be good.

  • I know you don't want to change the filesystem, but I've found XFS to be excellent with BackupPC. (Caveat emptor: the hardware in my case is pretty good)

  • BackupPC loves you back if you give it enough RAM. How much RAM does your server have? The more the better: if the server can keep most of the directory structure in memory, BackupPC's read operations are much, much faster because they don't need to hit the physical platters.

If I were you, I'd first upgrade the server RAM and also check the BackupPC settings. If those didn't help enough, I would then tinker with filesystem and RAID settings.
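If you do add RAM, one knob worth experimenting with (a sketch, not a universal recommendation) is vm.vfs_cache_pressure, which controls how aggressively the kernel reclaims the dentry/inode caches that BackupPC's tree traversals depend on; the default is 100, and lower values keep directory metadata cached longer:

```shell
# Keep dentry/inode caches around longer (kernel default is 100).
# Requires root; the value 50 is an arbitrary starting point to tune from.
sysctl vm.vfs_cache_pressure=50
# Check how much RAM is currently used for caches:
free -m
```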

Janne Pikkarainen
  • Good ideas! Just a few minutes ago, I found the [Speedupbackups](http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Speedupbackups) entry in the BackupPC wiki, citing a posting on a mailing list that also suggests changing the IO scheduler. I'll report back when I have results from that, and when I found someone who'll plug in more RAM for me (the server is 500 km away). – jstarek Apr 30 '12 at 12:15

So you suspect random read performance is the problem. The solution would be to get storage with better IOPS (SSDs, HDDs with higher rotational speed, or a RAID with more spindles). More RAM (cache) can also help, if the working set (inode cache) fits in memory.

The first thing would be to verify that this is actually the case. Take a look at dstat and iotop output. Also check that the BackupPC filesystem is mounted with relatime or noatime, so that every file access doesn't translate into a write.
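For example (the mount point is an assumption; iotop and dstat may need to be installed first):

```shell
# Verify the mount options of the backup filesystem:
findmnt -no OPTIONS /var/lib/backuppc
# Watch per-process I/O while a backup runs (-o shows only active processes):
iotop -o
# Overall disk throughput and utilisation:
dstat -d --disk-util
```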

ptman