
Vdev async read queues have min=1 and max=3 by default. All sync read and sync write queues default to min=max=10, while async writes get min=2 and max=10.
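For context, on Linux OpenZFS these tunables are exposed as module parameters, so the current values can be dumped with a loop like the one below. This is a sketch assuming the Linux `/sys/module/zfs/parameters` layout (on FreeBSD the equivalents live under `sysctl vfs.zfs.vdev.*`); it simply reports a parameter as unavailable if the module isn't loaded:

```shell
# Dump the current vdev queue depths (Linux OpenZFS module parameters).
# Prints "(unavailable)" when the zfs module is not loaded on this host.
for p in zfs_vdev_async_read_min_active  zfs_vdev_async_read_max_active \
         zfs_vdev_sync_read_min_active   zfs_vdev_sync_read_max_active \
         zfs_vdev_sync_write_min_active  zfs_vdev_sync_write_max_active \
         zfs_vdev_async_write_min_active zfs_vdev_async_write_max_active
do
    f="/sys/module/zfs/parameters/$p"
    if [ -r "$f" ]; then
        printf '%s = %s\n' "$p" "$(cat "$f")"
    else
        printf '%s = (unavailable)\n' "$p"
    fi
done
```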

Async reads are described as "prefetch reads" in the docs, so I take it that, by design, prefetch activity should be kept relatively low when I/O is heavy. Yet I have seen many recommended configs that raise zfs_vdev_async_read_min_active and the other min_active tunables to the same value A > 10. The corresponding maximums are typically all given the same value B, sometimes with B = A.

I understand the general reason for making the I/O queues deeper, but is it a good idea to change the balance between prefetch reads and other I/O?

Perhaps it's a good idea only if the workload has a favorable ratio of prefetch hits to misses?
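For reference, configs like the ones described above are usually applied in one of two ways; this is a sketch assuming Linux OpenZFS, and the value 16 is just an illustration, not a recommendation:

```
# Runtime change (lost on reboot), as root:
echo 16 > /sys/module/zfs/parameters/zfs_vdev_async_read_min_active

# Persistent change, in /etc/modprobe.d/zfs.conf:
options zfs zfs_vdev_async_read_min_active=16
options zfs zfs_vdev_async_read_max_active=16
```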

Tavin
  • It depends. Are you asking with a specific situation or configuration in mind? – ewwhite Dec 12 '21 at 16:07
  • Well, I've inherited a configuration where all zfs_vdev_(a)sync_(read|write)_(min|max)_active = 16. We've had some performance issues. Before diving into workload testing with different settings, I wanted to understand the theory behind this parameter. – Tavin Dec 12 '21 at 16:17
  • What leads you to believe that the performance issues are related to the settings? – ewwhite Dec 12 '21 at 17:38
  • I don't necessarily believe that, I'm just investigating. – Tavin Dec 12 '21 at 17:42
  • There isn't enough information here to assist. – ewwhite Dec 12 '21 at 22:09

0 Answers