Because of my unanswered question (qemu snapshot exclude device) I decided to use NFSv3 for the VM to handle user data. Due to slow BTRFS performance after maintenance tasks, I now use a ZFS RAID1 mirror on the Debian host (version 0.8.3-1 from buster-backports).
When I copy data on the host directly, there is no performance problem.
BUT: performance over NFS is extremely slow; initially both writes and reads were between 10 and 40 MB/s. After some tuning (I think it was the NFS async option) I got writes up to ~80 MB/s, which is enough for me. Reads are still stuck at ~20 MB/s per device, though.
Any ideas what to test? I'm new to both ZFS and NFS.
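For reference, this is roughly how such throughput numbers can be measured on the client; the mount point `/mnt/ordner` and file name are assumptions, not from the setup above:

```shell
# Sequential write throughput to the NFS mount (assumed at /mnt/ordner):
dd if=/dev/zero of=/mnt/ordner/testfile bs=1M count=1024 oflag=direct status=progress

# Drop the client page cache so the read really goes over NFS,
# then measure sequential read throughput:
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/ordner/testfile of=/dev/null bs=1M status=progress
```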
Host: Debian 10
VM: Debian 10
NFS:
Host: /exports/ordner 192.168.4.0/24(rw,no_subtree_check)
Client: .....nfs local_lock=all,vers=3,rw,user,intr,retry=1,async,nodev,auto,nosuid,noexec,retrans=1,noatime,nodiratime
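Spelled out, the export and mount could look like the sketch below. The server address `192.168.4.1` and the `rsize`/`wsize` values are assumptions to experiment with, not part of the setup above; note that `async` as a client mount option only affects write-behind, while `async` in `/etc/exports` is the server-side setting that usually moves write throughput:

```shell
# /etc/exports on the host -- async here is the server-side option:
#   /exports/ordner 192.168.4.0/24(rw,async,no_subtree_check)
exportfs -ra   # reload exports after editing

# On the client, larger rsize/wsize can help NFSv3 read throughput
# (1 MB matches the dataset recordsize; values are an assumption to test):
mount -t nfs -o vers=3,rsize=1048576,wsize=1048576,noatime \
    192.168.4.1:/exports/ordner /mnt/ordner
```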
ZFS dataset:
Volume with:
....create -o ashift=12 zfs-pool ....mirror
sync=default
zfs set compression=off zfs-pool
zfs set xattr=sa zfs-pool
zfs set dnodesize=auto zfs-pool/vol
zfs set recordsize=1M zfs-pool/vol
zfs set atime=off zfs-pool/vol
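The dataset properties above can be checked in one go; the pool/dataset names follow the question, and the `recordsize` experiment in the second command is only a suggestion to test, since 1M favors large sequential I/O while the 128K default can read faster for mixed NFS workloads:

```shell
# Confirm the properties actually in effect on the exported dataset:
zfs get compression,xattr,dnodesize,recordsize,atime zfs-pool/vol

# Optional experiment: revert to the default recordsize and re-test reads
# (only affects newly written files, so re-copy the test data afterwards):
zfs set recordsize=128K zfs-pool/vol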
zfs-mod-tune:
options zfs zfs_prefetch_disable=1
options zfs zfs_vdev_async_read_max_active=1
options zfs zfs_vdev_sync_read_max_active=128 (also tested with 1)
options zfs zfs_vdev_sync_read_min_active=1
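These options belong in a modprobe config file, and every line needs the module name (`zfs`) after `options`. A sketch of the file and of changing a parameter at runtime; re-enabling prefetch is only a suggested test, since `zfs_prefetch_disable=1` turns off read-ahead and can hurt exactly the sequential reads that are slow here:

```shell
# /etc/modprobe.d/zfs.conf -- each line must name the module:
#   options zfs zfs_prefetch_disable=1
#   options zfs zfs_vdev_async_read_max_active=1
#   options zfs zfs_vdev_sync_read_max_active=128
#   options zfs zfs_vdev_sync_read_min_active=1

# Most of these can be changed at runtime without rebooting, e.g.
# re-enable prefetch to see whether reads improve:
echo 0 > /sys/module/zfs/parameters/zfs_prefetch_disable
```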
Can you give any advice?
- General question: why did the OpenZFS developers write the built-in NFS configuration module when it just uses the kernel NFS server anyway? I don't understand that logic. What a waste of human resources!!! – jew May 22 '20 at 02:30