
By default zfs_vdev_async_write_min_active=2 and zfs_vdev_async_write_max_active=10. The sync read and sync write queues have the same default max of 10, but for them the min is also 10.
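
For concreteness, on ZFS on Linux these tunables show up under /sys/module/zfs/parameters, and with the defaults described above they read like this:

    $ cd /sys/module/zfs/parameters
    $ cat zfs_vdev_async_write_min_active zfs_vdev_async_write_max_active
    2
    10
    $ cat zfs_vdev_sync_read_min_active zfs_vdev_sync_read_max_active \
          zfs_vdev_sync_write_min_active zfs_vdev_sync_write_max_active
    10
    10
    10
    10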

I've seen configs that boost all of these minimums to the same number (> 10). Why does it help to put async writes on an equal footing with sync reads/writes like this? It seems to go against what one of the designers recommended in this blog.
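
By "boost the minimums" I mean something along these lines in /etc/modprobe.d/zfs.conf (32 is just a placeholder for whatever number > 10 the config picked):

    # raise the per-vdev minimum active I/Os for all three queues to the same value
    options zfs zfs_vdev_sync_read_min_active=32
    options zfs zfs_vdev_sync_write_min_active=32
    options zfs zfs_vdev_async_write_min_active=32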

Some configs I've seen also set zfs_vdev_async_write_min_active = zfs_vdev_async_write_max_active. Again, why does this help? It seems to defeat the intended behavior of the async write scheduler as described in the above blog and the docs.
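
That is, something like:

    # pin the async write queue depth instead of letting it ramp with the amount of dirty data
    options zfs zfs_vdev_async_write_min_active=10 zfs_vdev_async_write_max_active=10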

Presumably these settings are whatever did well in performance tests, but it would be nice to understand why. I would expect ramping up async writes to slow down sync reads/writes, and it's the sync reads/writes that translate into application performance.

Tavin
