2

I'm running Ubuntu 11.10 Desktop x64 with native ZFS on a mirrored pool of two 2 TB 6.0 Gb/s hard drives. My issue is that I only get about 30 MB/s read/write at any time; I would think my system could perform faster.
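(For reference, a rough `dd`-based sketch of how to reproduce this kind of sequential read/write measurement; `/tank/testfile` is a placeholder path on the pool's mountpoint:)

```shell
# Sequential write: 1 GiB of zeros, synced at the end so the page cache
# doesn't inflate the reported throughput.
dd if=/dev/zero of=/tank/testfile bs=1M count=1024 conv=fdatasync

# Drop the page cache so the read actually hits the disks, then read back.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/tank/testfile of=/dev/null bs=1M

rm /tank/testfile
```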

There are some limitations though:

  • I'm using an Asus E350M1-I Deluxe Fusion, which has a 1.6 GHz processor and a maximum of 8 GB of RAM, which I have. I didn't know about ZFS when I bought the system; these days I would've selected a board capable of holding more RAM.

  • My pool has about 15% free space, but performance wasn't that much better when I had more than 50% free space.

  • When the processor is very busy, the read/write performance seems to decrease, so it may very well be the processor that is the bottleneck.

I've read the other posts on this site about using an SSD as a log/cache device, which is what I'm thinking of doing, considering I don't have that much RAM.

My questions:

  1. Do you think adding an SSD as a log/cache device will improve performance?

  2. Should I instead get another 2 TB hard drive and make a RAID-Z pool? (I'm gonna need the space eventually, though prices on mechanical drives are still high.) Would this increase performance?

  3. Or should I sell my system and go for an Intel i3 instead?

Thanks for your time!

ewwhite
knorrhane
  • This question seems to be geared towards workstation usage and may be outside of the scope of the site format. Beyond that, it's a bit of a shopping question. – ewwhite Feb 07 '12 at 15:45
  • It's a server, I just use the desktop version of Ubuntu. I use it for file/web/ftp/time machine and iTunes. – knorrhane Feb 09 '12 at 22:19
  • I find ZFS to be very CPU-intensive, so in the absence of further info I'd guess that the CPU is the bottleneck in your case. Was there any improvement after migrating to FreeBSD? – netvope Jun 02 '12 at 22:51
  • 1
    @netvope Marginally, actually. Also, if I remember correctly, there were no drivers for the integrated network card on my motherboard, which made FreeBSD a no-go. It was good to test, however, and I'm happy to report that the ZFS pool migrated nicely between Ubuntu and FreeBSD. – knorrhane Jun 03 '12 at 12:59
  • @netvope What actually really boosted performance was installing the OS on an SSD; performance is now closer to 70 MB/s. When I built the server I chose a 5400 rpm hard drive because I focused on power consumption and had a tight budget. The upgrade to an SSD really sped up everything (Windows XP in VirtualBox, VNC, overall network performance, etc.). I'm very happy with the setup now and it's been running stably since. On a side note, ZFS is also working well on Precise Pangolin, which is nice. – knorrhane Jun 03 '12 at 13:09
  • Thanks for your reports. Do you mean the performance of your 2x2TB HDDs zpool improved after you used an SSD for just the OS? That's interesting. – netvope Jun 03 '12 at 14:21
  • @netvope No problem! Yes, it's very interesting. The only "explanation" I can think of is that the hard drive was so slow it bottlenecked the ZFS code somehow; I would've most likely seen the same performance increase with a "regular" 7200 rpm hard drive. I don't know exactly how and why, it's just a theory. – knorrhane Jun 03 '12 at 19:25
  • I use an `nvme` boot drive on `luks` encrypted `btrfs` + `zfs` hard disk mirror for `home` with native `zfs` encryption & user apps are very fast (e.g firefox profile on the mirror). I have set a quota on the mirror for `80%` – Stuart Cardall Nov 28 '18 at 14:39

4 Answers

5

Note that, due to licensing concerns, ZFS is not a native filesystem within the Linux kernel but a FUSE implementation in userspace. As such, it has significant operational overhead, which is also clearly visible in benchmarks. I believe this to be the main problem here: a high amount of overhead in conjunction with the rather low processing power of your system.

In general, adding an SSD in any capacity will only help if I/O is actually a bottleneck. Use `iostat` to verify this.
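For example (device names are placeholders; `iostat -x` is part of the `sysstat` package):

```shell
# Print extended per-device statistics every 5 seconds.
# Sustained high %util and long await times on the pool's member
# disks indicate the disks themselves are the bottleneck; low %util
# with poor throughput points elsewhere (e.g. the CPU).
iostat -x 5 sdb sdc
```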

Adding an SSD as a separate log device will only help if your main problem is the synchronous write load. It will not do anything for reads or asynchronous writes (which are cached and lazily written). As a simple yet quite effective test, you should temporarily disable the intent log - if your overall performance increases significantly, you would benefit from an SSD log device.
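A sketch of that test, assuming a pool named `tank` (note: with `sync=disabled` a crash can lose the last few seconds of acknowledged writes, so only do this temporarily on test data):

```shell
# Route all writes through the async path. If throughput jumps,
# synchronous writes were the bottleneck and a dedicated SSD log
# device (SLOG) would likely help.
sudo zfs set sync=disabled tank

# ...rerun the workload and compare...

sudo zfs set sync=standard tank   # restore the default afterwards
```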

Adding an SSD as an L2ARC will help your reads if you have a rather compact "hot" area of your filesystem that is frequently read in a random fashion. L2ARC does not cache sequential transfers, so it would be largely ineffective for streaming loads.
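Attaching one is a one-liner; the pool name `tank` and the device path below are placeholders:

```shell
# Add an SSD (or a partition of one) as an L2ARC read cache.
sudo zpool add tank cache /dev/disk/by-id/ata-SOME-SSD-part2

# The device then appears under a "cache" section in the pool status.
zpool status tank
```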

the-wabbit
  • Thanks for the feedback! However, I am using ZFS natively within the kernel (not the FUSE implementation) as described on http://zfsonlinux.org, since I didn't want to use ZFS in userspace because of the performance issues. I'll run a few tests with the ZIL disabled and have a look at the iostat output. Thanks again for the pointers! – knorrhane Feb 07 '12 at 09:26
  • @Henric I am not familiar with this implementation, but even if the overhead problem is out of the way, the other points about ZFS are still valid - run the proposed tests to see what you can do for your system. – the-wabbit Feb 07 '12 at 09:29
  • @Henric Keep in mind that the native implementation is still a release candidate - see [here](http://zfsonlinux.org/faq.html#PerformanceConsideration): `Additionally, it should be made clear that the ZFS on Linux implementation has not yet been optimized for performance.` – Shane Madden Feb 07 '12 at 18:37
  • So I'm thinking plenty of room for optimization :). No, I see what you mean and after some testing the short answer to the question in my heading is "no". I think I'm going to give FreeBSD a go, and hopefully VirtualBox will be stable enough for Windows XP and iTunes. – knorrhane Feb 08 '12 at 08:29
0

using an SSD as a log cache device which is what I'm thinking of doing, considering I don't have that much ram.

Eh? Main system RAM has nothing to do with it. Availability of RAM has a big impact on I/O performance, but you cannot use RAM for the disk journal (ZIL): that storage must be non-volatile.
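For completeness, putting the ZIL on a dedicated SSD looks like this (pool name and device paths are placeholders; mirroring the log device guards against losing in-flight synchronous writes if one SSD dies):

```shell
# Add a mirrored pair of SSDs as a dedicated intent-log (SLOG) device.
sudo zpool add tank log mirror \
    /dev/disk/by-id/ata-SSD-A-part1 \
    /dev/disk/by-id/ata-SSD-B-part1
```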

You seem rather confused about how to solve your current problems, which makes me think your reasons for choosing ZFS may be flawed. It is certainly interesting technically and has obvious benefits in managing very large volume groups, but that does not apply here - and I've not seen anyone recommend it over the usual suspects on Linux for performance. Have you tried running the same workloads on XFS or ext4? You'll probably find them a lot faster.

Given the price of an SSD to support this (see also my question here - Flash won't work), it's hard to understand why you think an SSD will be a cost-effective way to improve performance. Yes, it will make things go faster - but I think you'd be better off spending the money on a new dual-socket mobo, CPUs and doubling the memory (and you'll still have change left).

symcbean
  • From the FAQ "Additionally, it should be made clear that the ZFS on Linux implementation has not yet been optimized for performance" – symcbean Feb 07 '12 at 13:43
  • Yes, I am confused, which is why I came here and asked these questions. Also, I know that the RAM isn't used for the ZIL; I simply asked if putting the ZIL on an SSD would improve performance in my case. I chose ZFS primarily for the data integrity, so I wanna use ZFS. I have my reasons, so please don't question them. Your answer is very helpful though and I appreciate that! – knorrhane Feb 07 '12 at 14:40
  • Confused about whether or not I can improve performance that is, not why I am using ZFS in the first place. – knorrhane Feb 07 '12 at 14:49
  • This is absolutely not a good use-case for ZFS. – ewwhite Feb 07 '12 at 15:45
  • 1
    While ZFS as implemented on Solaris might be a good choice for data integrity, a port of a port of a filesystem maintained outwith the usual kernel development process and not supported by any of the major Linux distributors might not be the most reliable repository for your data? – symcbean Feb 07 '12 at 17:09
  • Your points are valid but I did a couple of tests before going with Ubuntu and native ZFS. I created a ZFS pool with native ZFS and successfully imported the pool to FreeNAS and FreeBSD using both Virtualbox with raw disk access and a full OS installation. I keep a drive separate for the OS so I could do these things quite easily. That made me confident enough to use native ZFS and I concluded that ZFS is ZFS, no matter what OS you're using. I'm gonna give FreeBSD a try though and see what I can learn. – knorrhane Feb 08 '12 at 15:14
0

I put native ZFS through testing for our servers, and I found it unreliable: it lost data in tests. I also found the performance low even with ample CPU resources. I was using it to supply block devices (essentially an LVM replacement with integrity), not as a filesystem. This was on Ubuntu 10.10, so YMMV. I found it very sensitive to any sort of power failure or system hang, and not as capable of recovering from these as the various native journaling filesystems on Linux.
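(For context, the block-device setup described here uses ZFS volumes, or zvols; a minimal sketch, with pool name and size as placeholders:)

```shell
# Create a 10 GiB zvol. It appears as a block device that can be
# formatted, exported, or handed to a VM, much like an LVM logical volume.
sudo zfs create -V 10G tank/blockvol
ls -l /dev/zvol/tank/blockvol
```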

Ian Macintosh
  • 1
    See: [ZFS Data Loss Scenarios](http://serverfault.com/questions/410551/zfs-data-loss-scenarios) and [The Things About ZFS That Nobody Told You](http://www.nex7.com/readme1st). – ewwhite Jul 27 '12 at 09:32
0

There is OpenSolaris, and its ZFS implementation is Sun's implementation, I guess. You can always try that. I don't think you'll be able to run VirtualBox on it, though - but check it out, I may be wrong. Or you can virtualize Solaris with disks attached to the VM. Of course, performance won't be excellent, but it seems you have time to try out weird setups...