
I got a couple of servers not long ago with pre-installed hardware RAID5. I plan to use them as a dedicated NAS (with some VMs) on my LAN.

Currently I see these options:

  • disconnect HDDs from RAID controller and let ZFS do the job

or

  • leave controller alone and just use UFS.

What should I do in this situation, and what are the best options? Also, I've never used ZFS and am really keen to try it out ^^.

As a base system I'm using Xen with NetBSD as Dom0.

4 Answers


Why not run benchmarks on your particular server hardware and figure out what is best for your combination of hardware and file usage?

One tip: RAID5 will take forever to rebuild an array on modern size hard disks (2TB or higher), and during that time the performance of the RAID array will be compromised.
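To put a rough number on that tip, here is a back-of-the-envelope estimate of rebuild time. It's a sketch only: it assumes the rebuild is limited purely by sequential disk speed, and real rebuilds are slower because the array keeps serving I/O the whole time.

```python
# Lower bound on RAID5 rebuild time: the replacement disk must be
# rewritten in full at (at best) its sustained sequential speed.
def rebuild_hours(disk_bytes, mb_per_s):
    return disk_bytes / (mb_per_s * 1e6) / 3600

# A 2 TB disk at an optimistic sustained 100 MB/s:
print(round(rebuild_hours(2e12, 100), 1))  # 5.6 hours, and that's an idle array
```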

Modern setups use stripes of mirrored hard disks: this combines scalability/expandability with fast rebuilds. Less 'efficient' than RAID5/6, but hard disks are REALLY cheap these days*.

  • I'm assuming you are using good old Winchester spinning metal, and not solid state devices.
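If you end up trying ZFS, the striped-mirror layout described above is a single command. This is only a sketch, and the `wd0`..`wd5` device names are placeholders for whatever your disks actually show up as:

```sh
# Two mirrored pairs, striped together (RAID-10 style); device names hypothetical.
zpool create tank mirror wd0 wd1 mirror wd2 wd3

# Expanding the pool later is just another mirrored pair:
zpool add tank mirror wd4 wd5
```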
DutchUncle
  • Thank you for the advice. Yeah, I think I'll just run some benchmarks with different RAID/file system options and see what works best. –  Oct 03 '11 at 18:15
  • And yes, I'm using good old Winchesters X) –  Oct 03 '11 at 18:30

You didn't mention if the hardware raid controller has a battery backup module on it or not. If so, the controller will be able to commit writes as soon as they're in RAM on the controller... if not, it will have to wait until they are actually committed to disk. Depending on your workload, this can make a major difference in performance of the raid controller.

Personally, unless you have a reason to pull the raid controller, I'd leave it in but map each physical drive through as an independent drive (i.e., set up one "raid group" per drive, each with one drive in it), then use ZFS on top of that. This would let you take advantage of any battery backed RAM on the raid controller, but still let you use ZFS and get all of its advantages.
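A sketch of that layout, assuming the controller exposes each single-drive "raid group" to NetBSD as a logical disk (the `ld0`..`ld3` names are an assumption; yours may differ):

```sh
# Four single-drive LUNs from the controller, pooled with single-parity raidz;
# ZFS handles redundancy, the controller only contributes its battery-backed cache.
zpool create tank raidz ld0 ld1 ld2 ld3
zpool status tank   # verify the layout
```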

jlp
  • 2
    «any battery backed RAM» — you're saying BBU to often. You'd better don't until you realize that even pure software solutions such as Linux Softwar RAID (LSR) allows you to forget about BBU, since BBU is such a thing that prevents RAID-5 parity-write-hole, but LSR uses re-calc parity in case of crash (and possibly write-intent-bitmap to avoid redundant re-calc). ZFS doesn't use partial writes at all — RTFineMaual. – poige Oct 02 '11 at 08:41
  • 1
    Actually, the big advantage of battery backed RAM on the raid controller is that it lets the controller return as committed as soon as the written block is in memory (on the controller)... without having to wait for it to go to disk. With ZFS, you can also realize a big with with a SSD-type device for the ZIL. The OP mentioned that they were using netbsd so directing them towards any type of Linux-based solution is a non starter. Also, you mention RTFM, you don't mention which FM to R? – jlp Oct 03 '11 at 03:29
  • The thing you label as the big advantage of a BBU is called the system I/O cache. A BBU resolves only one issue (hopefully): making sure the RAID's *internal* data stays consistent. As the manual, I'd recommend reading the ZFS primary source: http://blogs.oracle.com/bonwick/en_US/tags/zfs – poige Oct 03 '11 at 03:37
  • So it's not necessary to pull out any cables. That's very good. Thank you for the advice. I'll run some tests to decide which option works best. –  Oct 03 '11 at 18:16

ZFS can do raidz{1,2,3}, striping, mirroring, and even RAID-10-style striped mirrors. Meanwhile RAID-5 is always just RAID-5. Hence you're comparing apples to oranges.

RAID-5 uses capacity efficiently, but it is a very laggy writer. RAID-10 is often preferable instead.
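The capacity side of that trade-off is easy to quantify; a minimal sketch, assuming n equally sized disks:

```python
# Usable fraction of raw capacity for n equal disks in common layouts.
def usable_fraction(n, layout):
    if layout == "raid5":   # one disk's worth of parity, any n >= 3
        return (n - 1) / n
    if layout == "raid10":  # every block stored twice, any even n >= 4
        return 0.5
    raise ValueError(layout)

# Six disks: RAID-5 keeps ~83% of raw space, RAID-10 only 50%,
# but RAID-10 skips the parity read-modify-write on small writes.
print(round(usable_fraction(6, "raid5"), 2))   # 0.83
print(usable_fraction(6, "raid10"))            # 0.5
```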

poige

Keep in mind that if you're dealing with high throughput, you might be CPU-constrained with a software solution.

By saying you want to use "ZFS", I assume you want a software solution with ZFS as the filesystem of choice for the raid volume (with all the perks it brings to the table).

If you are CPU-constrained, leave the raid management to the dedicated controller. That card has already been bought, and you might still be in the "return on investment" time window.

That means you should leave it alone and not add work hours to fix something that still works.

If, on the other hand, you're not CPU-constrained, your infrastructure is scaling up, and the current solution is dragging down the performance of the whole system (and therefore productivity), then you can consider a software striped/mirrored setup, as someone else suggested.
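A rough way to check whether you're CPU-constrained: push a large sequential write and watch CPU usage (with `top` or `vmstat`) while it runs. The sketch below is just an example; the path and sizes are arbitrary, and on NetBSD `dd` wants a lowercase `bs=1m`:

```shell
#!/bin/sh
# Write 256 MB of zeroes and capture dd's throughput report from stderr.
# If the CPU stays mostly idle while this runs, the disks are the
# bottleneck and a software RAID/ZFS layer has headroom to spare.
dd if=/dev/zero of=/tmp/write_test.dat bs=1M count=256 2> /tmp/dd_report.txt
cat /tmp/dd_report.txt
rm -f /tmp/write_test.dat
```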

ItsGC
  • Thank you for the reply. Well, this server will not see high throughput; it is more like my playground :) But it's still interesting to see how it will work with ZFS and like 10 Xen instances running together :) –  Oct 03 '11 at 18:10