I'm running three Proxmox 3.4 nodes using NFS shared storage over a dedicated 1 Gbit/s network switch.

root@lnxvt10:~# pveversion
pve-manager/3.4-11/6502936f (running kernel: 2.6.32-43-pve)

root@lnxvt10:~# mount | grep 192.168.100.200
192.168.100.200:/mnt/volume0-zr2/proxmox1/ on /mnt/pve/freenas2-proxmox1 type nfs4 (rw,noatime,vers=4,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.100.30,minorversion=0,local_lock=none,addr=192.168.100.200)
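
If it's useful, I can also dump the negotiated options and per-operation statistics for that mount from the client side (standard Linux NFS client tooling, nothing Proxmox-specific):

root@lnxvt10:~# nfsstat -m                                             # mount options as the kernel actually negotiated them
root@lnxvt10:~# grep -A 30 freenas2-proxmox1 /proc/self/mountstats     # per-op counts and RTT times for this mount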

My VMs are qcow2-based.

I'm experiencing very poor performance. VMs (both Windows and Linux) are very slow and usually hang on iowait, but when I monitor the NAS side there is no load to match: ethernet usage is only about 20-30 Mbit/s.
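
In case it helps, these are the kinds of commands I can run on the node to see where the time is going (iostat needs the sysstat package; the 5-second interval is arbitrary):

root@lnxvt10:~# vmstat 5            # the "wa" column is CPU time blocked on I/O
root@lnxvt10:~# iostat -x 5         # per-device await and %util
root@lnxvt10:~# nfsstat -c          # client RPC counters, including retransmissions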

I don't think the problem is only network-related, because iperf gets a reasonable speed:

Client connecting to 192.168.100.200, TCP port 5001
TCP window size: 19.6 KByte (default)
------------------------------------------------------------
[  3] local 192.168.100.30 port 56835 connected with 192.168.100.200 port 5001
[ ID] Interval  Transfer  Bandwidth
[  3]  0.0-30.0 sec  3.26 GBytes  933 Mbits/sec

Also, dd on the NAS filesystem gets a much better result:

[root@freenas2] /mnt/volume0-zr2/proxmox1# dd if=/dev/zero of=file.dd bs=320M count=10
10+0 records in
10+0 records out
3355443200 bytes transferred in 16.386541 secs (204768244 bytes/sec)
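
To take qcow2 out of the picture, I can also run dd from the Proxmox node straight onto the NFS mount (file names and sizes are arbitrary; conv=fsync / oflag=sync are there so the result isn't just the page cache):

root@lnxvt10:~# dd if=/dev/zero of=/mnt/pve/freenas2-proxmox1/test.dd bs=1M count=1024 conv=fsync
root@lnxvt10:~# dd if=/dev/zero of=/mnt/pve/freenas2-proxmox1/test-sync.dd bs=4k count=5000 oflag=sync   # small synchronous writes, the pattern that hurts most over NFS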

OK, the bottleneck could be the NFS/qcow2 combination, but can that really explain results this poor?
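
In case the guest cache mode is relevant, this is roughly how I'd check and change it from the CLI (VMID 100 and the disk name are made-up examples; the same setting is available in the GUI under the VM's Hardware tab):

root@lnxvt10:~# qm config 100 | grep -E '^(virtio|ide|scsi|sata)'    # current disk line; shows whether a cache= option is set
root@lnxvt10:~# qm set 100 --virtio0 freenas2-proxmox1:100/vm-100-disk-1.qcow2,cache=writeback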

EDIT

root@lnxvt10:~# pveperf
CPU BOGOMIPS:      105594.48
REGEX/SECOND:      1245255
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    167.64 MB/sec
AVERAGE SEEK TIME: 8.36 ms
FSYNCS/SECOND:     961.94
DNS EXT:           58.04 ms
DNS INT:           2002.71 ms (mydomain)

NAS spec (hdparm not available): FreeNAS 9.3, ZFS RAIDZ2 volume with 6 SATA disks (Hitachi Deskstar 7K3000).
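
If it helps, these are the checks I can run on the FreeNAS side to see what the pool itself is doing while a VM is slow (standard FreeBSD/ZFS tools; the pool and dataset names match my layout above):

[root@freenas2] ~# zpool status volume0-zr2                                   # pool health, resilver/scrub in progress?
[root@freenas2] ~# zpool iostat -v volume0-zr2 5                              # per-vdev throughput and IOPS
[root@freenas2] ~# zfs get sync,compression,recordsize volume0-zr2/proxmox1   # sync=standard forces NFS sync writes to hit the disks
[root@freenas2] ~# gstat                                                      # per-disk busy% and latency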

  • The read speeds on your prox servers are desktop-grade. It's pretty clear that the IO wait is a combination of the prox servers themselves and the Freenas box, which is using desktop-grade hard drives. This setup is probably not going to give you the performance you are looking for. FSYNCS/SECOND should be in the 2500+ range at a minimum to keep IO wait % at acceptable levels. Do the VMs use disk cache? That can sometimes help, but really, you need a solid hardware foundation. – Gmck Jan 28 '16 at 17:19
  • @Gmck I know that my hardware is desktop-grade, but if I can write to the NAS volume at 200 MB/s and my network is near 100 MB/s (gigabit), I'd expect my VMs to do better than 20-30 Mbit/s (~2-3 MB/s). – sgargel Feb 01 '16 at 12:24
  • Being able to log in to the NAS server and execute a command to test write speed does not directly translate into performance on the VMs. Networking is one factor, for sure, but it's not the only factor. What type of switch is it? Cisco? What type of NICs? How many VMs are running, and what are the services? DB, HTTP, etc. If you can provide some additional info, that would help. – Gmck Feb 01 '16 at 16:33

0 Answers