5

We are currently running VMware ESXi with one Windows Server 2008 R2 Standard instance, one Windows 7 instance, and two Linux instances.

Recently I wanted to upgrade my server, because several things happened at once. I couldn't find an Intel 1 GbE multi-port adapter, so I got a D-Link adapter instead, but it is not supported by VMware. Another issue is that our hardware NAS is extremely slow, and its technical support is a dud at best: 20 MB/s from RAID 5, even with NIC teaming, and iSCSI as the only option really doesn't say much for its performance. Then I came across StarWind Virtual SAN, which runs only under Windows, so I plan to migrate from VMware to Hyper-V.

We have only one Windows Server 2008 R2 Standard license available, but as I read on Microsoft's site, a Windows Server 2008 R2 Standard license covers one host plus one VM. This Windows Server 2008 R2 instance is responsible for Citrix Remote Desktop.

I have successfully performed the migration, but a few questions remain before performing the live migration.

  1. Since I can run two Windows Server 2008 R2 Standard instances in total (host + VM), should I run Windows Server 2008 R2 Standard on top of a Windows Server 2008 R2 Standard host, or is a Windows Server 2012 R2 Hyper-V host better? Are there any new features in 2012 R2's Hyper-V that I should know about compared to its 2008 R2 counterpart?
  2. VMware's Ubuntu support has been excellent, and it is very fast. How is Hyper-V's performance? Has anyone ever compared Ubuntu Server on Hyper-V and on VMware?
  3. Is it better to let the VMs access iSCSI directly from their host, or just to allocate fixed VHDs to them? I.e., a 100 GB VHD + 2 TB over iSCSI, or just a 100 GB VHD and a 2 TB VHD? The iSCSI storage will be cached by about 16 GB of RAM cache on the host. I have to install StarWind on the Hyper-V host in any case, since I have another server that currently accesses the iSCSI storage too.

Thank you for helping.

prd
  • Really? I thought it was 1+1. Thank you for the heads up. – prd Jul 04 '16 at 13:30
  • In terms of iSCSI or VHD: use VHD. It gives you more options going forward, as it is more flexible. You can move storage devices and migrate the VHD live (in most cases). – Drifter104 Jul 04 '16 at 13:50
  • @yagmoth555 With Server 2008 R2 you get one physical and one virtual license, not two. That only changed later, with 2012. Many sources confirm this, like [TechNet](https://blogs.technet.microsoft.com/kevinremde/2012/05/17/can-i-run-hyper-v-on-windows-server-2008-r2-standard-so-many-questions-so-little-time-part-33/). – Peter Hahndorf Jul 04 '16 at 19:39
  • @PeterHahndorf My error, I mixed it up with 2012 licensing. I will erase my comment, thanks for the correction! – yagmoth555 Jul 04 '16 at 20:13

2 Answers

3

StarWind vSAN runs under VMware just fine, and you have VMware's own vSAN as an option there too. It shouldn't be an issue to live migrate your VMs.

BaronSamedi1958
  • StarWind can run under VMware ESXi? How? The installer is an EXE. And VMware vSAN is not free. – prd Jul 05 '16 at 13:58
  • StarWind can be installed inside Windows VMs. Basically, you attach a vmdk to the VM and StarWind presents it as an iSCSI target for the ESXi cluster. They also have pre-built hardware appliances, both Hyper-V and ESXi based: https://www.starwindsoftware.com/starwind-hyperconverged-appliance – Strepsils Jul 05 '16 at 14:05
  • "StarWind can run under VMware ESXi? How? The installer is an EXE." Provision a Windows or Hyper-V VM and install StarWind there. Alternatively, you can use their prebuilt VSA. – BaronSamedi1958 Jul 08 '16 at 13:37
  • Ah... it still runs under Windows either way. With my migration, it will run on the host. – prd Jul 09 '16 at 12:56
  • I have tested both VMware VSAN, since it runs natively and installs with a single click, and StarWind vSAN, since it is just much more cost-effective. What I noticed is that, despite running virtualized, StarWind vSAN outperforms VMware VSAN and is much easier to deploy properly and manage. VMware VSAN is a very powerful product, but its installation and configuration require some decent skills that one might not have, and the functionality provided might be overkill in some cases. StarWind is simple and fast. I am using it right now in my environment. Very happy so far. – Net Runner Jul 11 '16 at 15:23
3

OK, it's been a while. I was hoping someone had done some testing and could help me with this, but over the last few days I have been testing and benchmarking some configurations, trying to work it out myself. This may not be the perfect config, but it is comfortable enough for me.

  1. I went with a Windows Server 2012 R2 Hyper-V host plus a 2008 R2 guest, with StarWind running on the 2012 R2 host. 2012 R2's Hyper-V is faster than 2008 R2's; at the very least, I don't need to leave it overnight to finish an installation, unlike 2008 R2. Plus, I can use NIC teaming on the server, granting our iSCSI connection a full 3 Gbps of bandwidth with two D-Link ports plus one onboard. Our iSCSI runs at around 2 Gbps or more, close to 3 Gbps (see the throughput sketch at the end of this answer): a very significant speed upgrade for our other servers compared to the old 20 MB/s. Just lovely. More than that, StarWind can go much faster still; read below.

  2. Ubuntu support was not good on 2008 R2, but it is very good on 2012 R2 with Generation 2 VMs; another point in favor of 2012 R2 Hyper-V. Microsoft's Generation 2 VMs are a VERY significant update over the old Generation 1 ones. It doesn't feel that different from VMware.

  3. I went with the StarWind iSCSI route plus a minimal VHDX. The reason is that I could only achieve 50 MB/s with VHDX, but some 9 to 10 GB/s with StarWind (a rough disk test sketch appears after this list). With an 8 GB RAM cache plus a 100 GB L2 SSD cache, StarWind is screaming fast, and I really mean FAST. It also has server mirroring, so data integrity is covered there.

  4. Also, as for migration, I don't really care that much, since with iSCSI I can simply disconnect the drive from VM 1 and connect it to another VM on another server, or simply connect it to a physical server, and resume operation with virtually no time lost. No migration needed; simply switch the connection and voilà, done.
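For anyone who wants to reproduce a rough version of the disk numbers above, here is a minimal Python sketch of a crude sequential-write test (not the exact tool I used; the drive letters are placeholders for the VHDX-backed and iSCSI-backed volumes, and results will be inflated by the RAM and SSD caches unless the test size exceeds them):

```python
import os
import time

BLOCK = 4 * 1024 * 1024           # 4 MiB per write
TOTAL = 2 * 1024 * 1024 * 1024    # 2 GiB in total; raise this to defeat caches

def sequential_write_mb_s(path):
    """Write TOTAL bytes sequentially and return throughput in MB/s."""
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())      # push data to the device, not just the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return TOTAL / elapsed / (1024 * 1024)

# Placeholder paths: one file on the VHDX-backed volume, one on the
# iSCSI-mounted StarWind volume. Adjust the drive letters to your setup.
for label, path in [("VHDX", r"D:\bench.tmp"), ("iSCSI", r"E:\bench.tmp")]:
    print(f"{label}: {sequential_write_mb_s(path):.0f} MB/s")
```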

So, after running the benchmarks, I decided to go with a Hyper-V 2012 R2 host, a 2008 R2 guest, and all data on iSCSI.
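And a quick way to sanity-check the teamed NICs is a TCP blast between two machines, sketched below in Python (the port is arbitrary and the host name is a placeholder). Keep in mind that a NIC team typically hashes traffic per flow, so a single TCP stream rides one 1 GbE member; run several clients in parallel to approach the 3 Gbps aggregate.

```python
import socket
import sys
import time

PORT = 5001            # arbitrary free port, purely illustrative
CHUNK = 1024 * 1024    # 1 MiB per send
SECONDS = 10           # how long the client transmits

def server():
    """Accept one connection and report receive throughput in Mbit/s."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.perf_counter() - start
    print(f"received {total * 8 / elapsed / 1e6:.0f} Mbit/s")

def client(host):
    """Send zero-filled chunks to the server for SECONDS seconds."""
    buf = bytes(CHUNK)
    deadline = time.perf_counter() + SECONDS
    with socket.create_connection((host, PORT)) as s:
        while time.perf_counter() < deadline:
            s.sendall(buf)

if __name__ == "__main__":
    # usage: python netbench.py server | python netbench.py client <host>
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```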

prd
  • Looks like a great performance gain indeed. I would actually try MPIO instead of teaming; that may give even more performance in your case. – Strepsils Jul 15 '16 at 18:23