
I posted this thread in the Citrix forums earlier today, but I thought I'd try here too :)

We have two XenServer 5.5 U2 hosts, each running on an HP BL460 blade server. Each blade has one quad-core Intel Xeon processor (2.66 GHz), 16 GB of RAM, and two 300 GB 10K SAS SFF hard drives in a RAID 1 array. The first blade hosts 2 Windows Server 2003 R2 VMs and 10 Windows XP SP3 VMs; the second hosts 4 Windows servers.

When most of the users are logged on and working in their VMs, they all simultaneously lag for a few seconds, and sometimes minutes. A clicked icon takes a few seconds to become highlighted, Microsoft Office becomes a nightmare, and basically everything slows down frighteningly.

We naturally suspected the network first, but we were eventually able to rule that out, since we reused Cat6 cabling and all the switches are decent, relatively expensive ones. The problem even happened when I connected my laptop directly to the blade's gigabit switch and RDP'd into a VM. Now we're thinking this is storage related.

When we first asked for a quotation for the blades, we opted out of a SAN, since the drives are SAS and we only have about 10 employees, so we're only using the local storage on the blades. Is that not enough for such an environment? It seems that at a certain point, when there's heavy activity (and by heavy I mean everyone logged on and using very basic software), the hard drives get overloaded and can't handle that many requests at once. Almost all of the XP VMs are given 1 GB of memory (some 2 GB) and 2 vCPUs (some 4); the Windows 2003 servers are given 2 GB of memory and 4 vCPUs.
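To check whether the drives really are the bottleneck, I'm planning to run something like the following from the host console (dom0) while everyone is working. This assumes iostat (from the sysstat package) is available in dom0, which it may not be by default; if not, vmstat at least gives a rough view of I/O wait:

# on the XenServer host console (dom0), sample extended disk stats every 5 seconds;
# watch the %util and await columns for the device backing the local SR (e.g. sda or the LVM dm-* devices)
iostat -x 5

# if iostat isn't installed, vmstat's "wa" column still shows time spent waiting on I/O
vmstat 5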

I understand that XenServer hosts are often used in pools with dozens if not hundreds of VMs, and that sophisticated storage is installed in those cases. But isn't local LVM storage also used for scenarios of much smaller magnitude, such as our own? Any hints or ideas are greatly appreciated.

3a2roub
Do you have the same slowdowns/freezes when all the servers are only assigned one vCPU? At least with VMware, assigning multiple vCPUs to VMs when you don't have the physical CPUs/cores to back it up can cause delays. I can't comment specifically about XenServer though... – KJ-SRS Feb 06 '11 at 00:58
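If you want to test that suggestion, one way to drop a VM to a single vCPU is with the xe CLI on the host. This is just a sketch: the <vm-uuid> placeholder is hypothetical, and the VM has to be shut down before VCPUs-max can be lowered.

# find the VM's uuid
xe vm-list params=uuid,name-label

# shut the VM down, reduce the vCPU count to 1, then start it again
xe vm-shutdown uuid=<vm-uuid>
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=1
xe vm-param-set uuid=<vm-uuid> VCPUs-max=1
xe vm-start uuid=<vm-uuid>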

1 Answer


1. Have you tried disabling task offload?
2. Have you tried PerformanceVM? (http://support.citrix.com/article/CTX127065)

Disable Task Offload:

For Windows 2003 VMs you will need to create a registry value, DisableTaskOffload (REG_DWORD = 1), under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters. The easiest way is to create a Group Policy to push this registry value.
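If you want to test it on a single VM before pushing it out via Group Policy, running something like this inside the guest (followed by a reboot of the VM) should set it; this is just a sketch using the standard reg.exe syntax:

rem create the DisableTaskOffload value on a single Windows 2003 VM
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f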

Disable Checksum Offload:

The second fix you should try, if you still have poor network performance, is to disable checksum offload on the XenServer interfaces, both the virtual (VIF) and the physical (PIF). Note that you don't need to restart the XenServer hosts or the VMs. The following script will do this automatically on all the network interfaces in your XenServer pool.

echo Setting checksum off on VIFs
VIFLIST=$(xe vif-list | grep "uuid ( RO) " | awk '{print $5}')
for VIF in $VIFLIST
do
    echo Setting ethtool-tx=off and ethtool-rx=off on $VIF
    xe vif-param-set uuid=$VIF other-config:ethtool-tx="off"
    xe vif-param-set uuid=$VIF other-config:ethtool-rx="off"
done

echo Setting checksum off on PIFs
PIFLIST=$(xe pif-list | grep "uuid ( RO) " | awk '{print $5}')
for PIF in $PIFLIST
do
    echo Setting ethtool-tx=off and ethtool-rx=off on $PIF
    xe pif-param-set uuid=$PIF other-config:ethtool-tx="off"
    xe pif-param-set uuid=$PIF other-config:ethtool-rx="off"
done

John