Optimizing ZFS for large sequential reads and writes


I'm currently troubleshooting a ZFS setup on Debian 7 with ZFS module v0.6.5.2-2-wheezy, ZFS pool version 5000 and ZFS filesystem version 5.

The system is accessed via NFS, and the workload consists of many large sequential reads and writes. File sizes range between 50 GB and 100 GB, and the reads and writes occur in parallel.

The system has 16 cores and 64 GB of memory and uses disks from a central enterprise SAN backed by many SSDs, which is capable of more than 1 GByte/s of parallel read/write throughput.

When I'm only writing data, I can sustain 300 MByte/s without issues. As soon as I start reading in parallel, performance drops to around 150-200 MByte/s for both reading and writing, and it regularly falls to a few MByte/s for several seconds at a time, so the average throughput ends up at only about 100 MByte/s each for reads and writes.

How can I optimize ZFS for parallel large sequential read/write performance and, if possible, reduce the periods during which no data can be written?
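A minimal sketch of the kind of tuning and measurement I'm considering, assuming a pool named tank and a dataset tank/nfs (placeholder names, not my actual ones):

    # Check the current recordsize (the default is 128K).
    zfs get recordsize tank/nfs

    # For 50-100 GB sequential files a 1M recordsize may help; this
    # requires the large_blocks pool feature and only affects newly
    # written data.
    zfs set recordsize=1M tank/nfs

    # Watch per-vdev throughput at 1-second intervals while the
    # parallel read/write load runs, to see where the stalls occur.
    zpool iostat -v tank 1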

Florian Feldhaus

Posted 2017-03-26T17:53:03.713

Reputation: 218

What else is running on this server? Did you already tweak settings or is everything stock? Do you have compression and/or deduplication enabled? What's the CPU doing when writes are slowing down? – Daniel B – 2017-03-26T18:23:34.303

There are other processes running on the server uploading the data at some point, but those processes are not running when throughput drops. I didn't tweak settings apart from setting zfs_arc_max to 17179869184 due to out-of-memory issues. But the throughput drop was occurring before that change as well. – Florian Feldhaus – 2017-03-26T18:28:19.990
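(For reference, 17179869184 bytes is 16 GiB. A sketch of the two ways such a zfs_arc_max setting is typically applied on ZFS on Linux:

    # Temporary, takes effect immediately but is lost on reboot:
    echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

    # Persistent, picked up when the zfs module loads
    # (in /etc/modprobe.d/zfs.conf):
    options zfs zfs_arc_max=17179869184
)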

You may have better luck asking this on Server Fault, our sister site for professional server administrators. – a CVn – 2017-04-20T11:14:45.113

No answers