8

I am configuring a service that stores many files uploaded by nginx in the /srv/storage directory on the host system. These files are processed by worker KVM guests, which may create new files or assign extended attributes to existing ones. Files are never overwritten, but they are eventually deleted by one of the workers.

The host server has a file write speed of about 177 MB/s. The KVM image is a QCOW2 file stored on the host filesystem, and it achieves ~155 MB/s inside the KVM instance thanks to this virtio setting:

<driver name='qemu' type='raw' cache='none' io='native'/>

However, I can't get comparable results for the shared folder. I get at most 40 MB/s with VirtFS (virtio 9p). There seems to be no AIO equivalent for the mount:

mount -t 9p -o trans=virtio,version=9p2000.L uploads /srv/storage
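
For context, a typical libvirt VirtFS definition for such a share would look roughly like this (a sketch; the accessmode shown here is an assumption, not necessarily what I use):

<filesystem type='mount' accessmode='passthrough'>
  <source dir='/srv/storage'/>
  <target dir='uploads'/>
</filesystem>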

I have also considered:

  • NFS - no, extended attributes are missing
  • GlusterFS - works, but performance is worse than virtio because it goes over the network; kind of overkill on a single hardware machine
  • sharing an LVM volume read/write? - the folder actually lives on a separate partition, but I have read that an LV can't be shared read/write because that may cause filesystem corruption
  • keeping the uploaded files on a QCOW2 image and sharing it with all parties?
  • keeping nginx and the uploaded files in a KVM instance on that QCOW2 image, and somehow sharing the image with all guests?
  • iSCSI - is this possible with a single partition?

So how can I efficiently share the host's folder with the KVM guests, with extended attributes working?

gertas

4 Answers

2

CIFS can do extended attributes. You can set it up with Samba on Linux.
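
A minimal sketch of such a setup (share name, user, and mount options are illustrative; `ea support` enables extended attributes on the Samba side, and `user_xattr` on the client mount exposes user xattrs, assuming the guest's cifs module was built with xattr support):

# /etc/samba/smb.conf on the host
[uploads]
    path = /srv/storage
    read only = no
    ea support = yes

# inside each guest
mount -t cifs //host/uploads /srv/storage -o username=worker,user_xattr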

suprjami
2

If your problem is throughput, you might benefit from increasing the maximum packet size. It defaults to 8 KiB (msize=8192).

The optimal value might take some experimentation, and can vary depending on your usage and the underlying filesystem, but I found 256 KiB (msize=262144) to work well for my purposes. That brought throughput up from ~150 MB/s to ~1.5 GB/s.
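
Applied to the mount command from the question, that looks like:

mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144 uploads /srv/storage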

See also: https://lime-technology.com/forum/index.php?topic=36345.15

Bob
  • Whoa, nice! Tried `msize=262144` on an SSD and it brought my [`fio-cdm`](https://github.com/buty4649/fio-cdm) sequential read/write numbers up from ~90 MiB/s to >1000 MiB/s! The SSD does ~430 MiB/s on the host. – genpfault Apr 23 '18 at 02:15
  • @genpfault While 90 looks slow, 1000 also looks too fast if you're dealing with a single non-NVMe SSD. It's possible that caching is involved - try a longer test (I think I did an extended `dd`) to check. Also, if you are using very compressible data (e.g. `dd if=/dev/zero`), that can also greatly increase benchmarked throughput depending on the filesystem and disk firmware. – Bob Apr 23 '18 at 04:40
0

If you decide to share an LVM volume, either directly or via iSCSI, you will not be able to share it read/write without a clustered FS. If you aren't short on space, you could share two volumes, one writable by the host and the other writable by the guest, with each party getting read-only access to the volume it doesn't own, and keep the two in sync with DRBD or rsync.
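
A sketch of the sync step, assuming rsync over SSH (the hostname is a placeholder; -X preserves extended attributes and --delete propagates the deletions done by the workers):

rsync -aX --delete /srv/storage/ worker-guest:/srv/storage/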

Pretty ugly, but that's what you get when you can't use NFS.

dyasny
-2

In the end I switched from KVM to LXC + Docker containerization, which supports bind mounts. Selected host directories are mounted inside the container. As there is no networking or translation layer involved, the performance is the same as on the host machine. Additionally, multiple containers can write to a single "volume" at once without any exclusive locks.
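
For illustration, a minimal sketch of both variants (the image name and paths are placeholders):

# Docker: bind-mount the host directory into a worker container
docker run -d -v /srv/storage:/srv/storage worker-image

# LXC: the equivalent bind mount in the container config
lxc.mount.entry = /srv/storage srv/storage none bind 0 0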

gertas