
Where I work, we need to upgrade our ClearCase servers, and it's been proposed that we move them onto a new (yet-to-be-deployed) VMware system.

In the past I haven't noticed significant performance problems with most applications running in VMs, but since ClearCase "speed" (i.e., dynamic-view response time) is so latency-sensitive, I'm concerned this won't be a good idea.

VMware has published numerous white papers detailing performance issues related to network traffic patterns, which reinforces my hypothesis, but nothing particularly concrete for this specific use case that I can see.

What I can find are various forum posts online, but they are somewhat dated, e.g.:

ClearCase clients are supported on VMWare, but not for performance issues. I would never put a production server on VM. It will work but will be slower. The more complex the slower it gets. accessing or building from a local snapshot view will be the fastest, building in a remote VM stored dynamic view using clearmake will be painful..... VMWare is best used for test environments

(via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10)

and:

VMware + ClearCase = works but SLUGGISH!!!!!! (windows)(not for production environment) My company tried to mandate that all new apps or app upgrades needed to be on/moved VMware instances. The VMware instance could not handle the demands of ClearCase. (come to find out that I was sharing a box with a database server)

Will you know what else would be on that box besides ClearCase?

Karl (via http://www.cmcrossroads.com/forums?func=view&id=44094&catid=31)

and:

... are still finding we can't get the performance using dynamic views to below 2.5 times that of a physical machine. Interestingly, speaking to a few people with much VMWare experience and indeed from running builds, we are finding that typically, VMWare doesn't take that much longer for most applications and about 10-20% longer has been quoted.

(via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10)

Which brings me to my more direct question: does anyone have more recent experience with ClearCase servers on VMware (or, failing that, any specific, relevant performance advice)?

user9517
Garen
  • VMWare is a company, not a product. What product are you intending to virtualise onto? – Chris Thorpe Feb 12 '11 at 03:05
  • If you go ahead with this, keep a close eye on the clocks in your VMs. The VMware Timekeeping guide is a must-read: http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf. I don't like the idea of having to sort out commits from the future and machines running in the past for a source/revision control system like ClearCase running on VMware. My personal experience: don't put Solaris x86 under VMware. – gm3dmo Feb 12 '11 at 08:36
  • @chris-thorpe VMWare ESX, presumably. – Garen Feb 14 '11 at 18:14

5 Answers


For a ClearCase registry server or license server, why not.
But for a ClearCase VOB server or view (storage) server, I think not:

All of our VOB servers are on Solaris 10, with zones and ZFS (for extra-large disk capacity).

VonC
  • Solaris 10 with ZFS seems to be a really popular configuration (even internally at IBM, AFAICT). Do you use any of the more recent ZFS features (e.g. deduplication, SSD cache devices, ...)? – Garen Feb 14 '11 at 18:18
  • @Garen: no, those are still being validated by our Unix system support team. – VonC Feb 14 '11 at 18:23

I have built two VMware ClearCase clients (RHEL 5.3 and RHEL 4.2). They have 2 GB RAM and a 2 GHz CPU reservation on an ESXi 4.1 hypervisor. IMHO they work OK: faster than the Sun Fire V240/Solaris 10u7 clients I had before. I am thinking of creating both view and VOB servers on ESXi, but using raw device mapping to speed things up; I do not expect serious performance bottlenecks. As for time sync, I solved the issue by installing VMware Tools: no more clock skew since.

Daniel Voina
  • Sounds like you had a successful upgrade, but I wonder about your statement of "work ok": are things working OK but with the view+vob servers not running on ESXi? – Garen Feb 14 '11 at 18:21
  • @Garen: "Work ok" is the subjective perception from a developer's perspective. The Sun box was extremely slow at accessing the views, and its IOPS numbers were ridiculously low. In terms of system stats (vmstat/iostat) I cannot really compare the two machines, as they have different ISAs and clock frequencies – Daniel Voina Feb 16 '11 at 11:46

I recently had some experience with VMware and ClearCase. One of my environments required a ClearCase client on a VMware machine, specifically to build code in snapshot views. Previous testing (two years ago) had shown that snapshot load times alone were almost 1.5x longer on a VMware machine.

However, recent testing was encouraging, and not much lag was seen. The config: the physical machine was a DL386 G6 with 26 GB RAM; the VMs had 8 GB RAM and a 160 GB HDD allocated.


I've already installed ClearCase VOB servers in VMware on Red Hat Linux and on Windows.

Check this: http://www.ibm.com/developerworks/rational/library/smart-virtualization-1/index.html

user986086

I've been testing a Solaris 10 x86 VM (4 cores, 16 GB of memory, on a 2.6 GHz physical machine) against our old V210s (1.3 GHz SPARC, 16 GB of memory) in a NAS environment. The VOB is a copy of production: a 1.2 GB database with a 9.5 GB source pool. Results so far:

- Database load on the VM is 2x faster than on the V210 (10.5 min vs. 22 min).
- cleartool find -version 'lbtype(label)' -print run ON the server is 2x faster.
- WARNING: the same command run from a client (Ubuntu/Solaris) is about 2x SLOWER.
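For anyone wanting to reproduce this kind of server-vs-client comparison, a minimal timing wrapper might look like the sketch below. The cleartool invocation in the usage comment is hypothetical (the VOB path and label are placeholders); the wrapper itself works for any command:

```shell
#!/bin/sh
# bench: run a command N times and print the total elapsed wall-clock seconds.
# Usage: bench <runs> <command> [args...]
bench() {
  runs=$1; shift
  start=$(date +%s)
  i=0
  while [ "$i" -lt "$runs" ]; do
    "$@" > /dev/null 2>&1
    i=$((i + 1))
  done
  end=$(date +%s)
  echo $((end - start))
}

# Hypothetical usage, run once on the server and once on a client
# (VOB path and label are placeholders):
#   bench 5 cleartool find /vobs/myvob -version 'lbtype(REL_1.0)' -print
bench 2 true   # trivial self-check
```

Running the same invocation on the server and on a client, then dividing the totals, gives the kind of 2x ratio quoted above without needing any profiling tools.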

It appears the RPC calls from the client to the VM ClearCase server are the issue. I have tcpdump captures loaded into Wireshark, and nothing stands out: about the same number of network transactions, but the x86 VM takes longer per call, and as a result the old V210 gets the job done faster when a client is interfaced to it.
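One way to get a first read on per-call overhead, without wading through full packet captures, is to measure raw TCP connect times to the ClearCase albd service (371 is the registered albd port). This is a sketch: the server hostname is a placeholder, and it only measures connection setup, not full RPC round trips.

```shell
#!/bin/sh
# probe: measure the TCP connect time to host:port in milliseconds.
# Uses bash's /dev/tcp redirection; needs GNU date (%N) and timeout.
probe() {
  host=$1; port=$2
  start=$(date +%s%N)
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    status=open
  else
    status=closed
  fi
  end=$(date +%s%N)
  echo "$host:$port $status $(( (end - start) / 1000000 ))ms"
}

# Hypothetical ClearCase server; 371 is the registered albd port.
probe ccserver.example.com 371
```

Comparing the same probe against the physical V210 and the VM would show whether the per-call latency difference is already visible at the TCP level, or only appears once ClearCase RPCs are involved.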

The point was to test NAS vs. SAN; I would use a physical machine as the VOB/view server(s) in a final solution. I wasn't expecting this performance hit. If I find the issue, I'll repost; I have an open case with IBM.

Curt
  • What VMNIC? Defaults are not usually the best performing, but usually the least effort/trouble. If you aren't using VMXNET3, you should look into it. – Aaron Copley Dec 13 '13 at 19:30
  • Good Point Aaron. My VMware admin said the same thing (try a different nic). Right now, it's using the E1000. I see the VMXNET3 you mentioned. Thanks for the hint! – Curt Dec 13 '13 at 23:26