Per-vdev async read queues default to min=1 and max=3. The sync read and sync write queues both default to min=max=10, while async writes default to min=2 and max=10.
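(For reference, this is how I'm reading the current values; it assumes ZFS on Linux, where the tunables show up as module parameters under /sys:)

    # Print the current per-vdev queue-depth tunables and their values
    grep . /sys/module/zfs/parameters/zfs_vdev_*_active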
Async reads are described as "prefetch reads" in the docs, so I take it that, by design, prefetch activity is supposed to stay relatively low when I/O is heavy. Yet I have seen a lot of recommended configs that raise zfs_vdev_async_read_min_active and the other min_active tunables to the same number A > 10, with the corresponding maximums all set to some value B, sometimes with B = A.
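For concreteness, such a config typically looks something like the sketch below (the 32s are just a stand-in for the A = B case, not a value I'm endorsing):

    # /etc/modprobe.d/zfs.conf -- illustrative only, 32 stands in for A = B
    options zfs zfs_vdev_sync_read_min_active=32  zfs_vdev_sync_read_max_active=32
    options zfs zfs_vdev_sync_write_min_active=32 zfs_vdev_sync_write_max_active=32
    options zfs zfs_vdev_async_read_min_active=32 zfs_vdev_async_read_max_active=32
    options zfs zfs_vdev_async_write_min_active=32 zfs_vdev_async_write_max_active=32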
I understand the general reason for making the I/O queues deeper, but is it a good idea to change the balance between prefetch reads and other I/O?
Perhaps it's only a good idea if the workload has a favorable ratio of prefetch hits to misses?
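(By "prefetch hits to misses" I mean roughly what the zfetch kstats report, e.g. on Linux:)

    # Prefetch (zfetch) hit/miss counters -- a rough measure of how well prefetch is paying off
    grep -E 'hits|misses' /proc/spl/kstat/zfs/zfetchstats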