
Current situation:

  • 2 x ESX 3.5 (soon to be vSphere)
  • 2 x ESXi 3.5

All four servers are running standalone. The two ESX servers are beginning to run out of hard drive space, but are still doing very well on the processing and memory fronts. We're not running any external iSCSI or SAN storage for the servers at all.

The ESX servers are also running Ultra320 SCSI drives, which are getting very pricey for the capacity ($550 for 300GB!), so I want to shy away from just throwing more 'small' drives at the ESX servers, knowing those drives will only become rarer over time in the event of a failure.

What is probably the best solution? Right now I'm looking at the DroboPro, which allows for more growth over a longer period of time (which looks nice for budgets), or at a Dell PowerEdge 2950 with 2TB of storage (using 3 bays out of 6 in RAID 5) but still room to grow since it runs SATA, for around $3,400.
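As a rough sanity check on the capacity math, here is a back-of-envelope comparison of the two options (the 3 x 1TB SATA layout is an assumption consistent with "2TB using 3 of 6 bays in RAID 5"; the Ultra320 price is taken from the quote above):

```python
# Illustrative cost/capacity arithmetic only; drive sizes and prices are
# taken or assumed from the question, not vendor quotes.
ultra320_price_usd = 550
ultra320_gb = 300                 # $550 for a 300GB Ultra320 SCSI drive

sata_drives = 3                   # assumed: three 1TB SATA drives
sata_drive_gb = 1000

# RAID 5 usable capacity = (n - 1) * drive size (one drive's worth of parity)
raid5_usable_gb = (sata_drives - 1) * sata_drive_gb

print(f"Ultra320 cost: ${ultra320_price_usd / ultra320_gb:.2f}/GB")
print(f"RAID 5 usable capacity: {raid5_usable_gb} GB")
```

The (n - 1) factor is why RAID 5 is attractive on cheap SATA: you lose only one drive's capacity to parity regardless of how many bays you fill.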

I'm also looking at trying to get vCenter and VMotion, but would either of the above two solutions have any advantages over the other? By switching to external storage I'm hoping not to have to replace these servers until they are maxed out on RAM or the CPU load is finally too great.

Update

I will be going with either the PowerEdge bumped up to support SAS drives or a PowerVault with SAS, with the intention of ending up running our VMware hosts from the external storage. The DroboPro is nice but not a long-term solution. Here's hoping I get the PowerVault!

Thanks for the great answers everyone!

dragonmantank

4 Answers


I would suggest it may be time to consider an iSCSI SAN solution. The DroboPro is a genuinely affordable option for that. However, if the budget permits, I would also recommend SAS disks instead of SATA. The performance will be a lot better, especially when the SAN is going to host several, or maybe more than ten, guests.

I have an iSCSI SAN with SAS disks in place, but I am at the point where I need to upgrade the space too. So I upvoted this question and will be watching it closely to see if others have suggestions as well.

kentchen

If you have any plans at all to scale out your ESX cluster (and it sounds like you do, since you're considering VMotion and vCenter), you need to pay attention to your I/O channel. SATA is OK with one or two servers pounding on it, but if you ever plan to go past two it won't scale well without serious engineering.

Unfortunately, 'serious engineering' costs a lot of money, as do SAS-based arrays. In the long run, SAS will give you good performance for longer on an equivalent amount of disk. The SATA architecture doesn't handle massively random I/O as well as SCSI-based disks (of which SAS is one). You can compensate for this in array hardware with larger caches that help de-randomize the I/O, but the fundamental limit remains underneath. There is a reason the big array vendors suggest that SATA drives not be used in an 'online' capacity (ESX hosts, file servers) and instead suggest 'nearline' duty (backup-to-disk, email archive, that kind of thing).
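The random-I/O gap above comes down to spindle speed and seek time. A minimal sketch of the aggregate numbers, using commonly cited rule-of-thumb figures (roughly 80 IOPS for a 7,200rpm SATA spindle and 175 IOPS for a 15,000rpm SAS spindle; these are assumptions for illustration, not vendor specs):

```python
# Back-of-envelope random-read IOPS for a RAID set.
# Per-spindle figures below are rule-of-thumb assumptions, not measurements.
SATA_7200_IOPS = 80    # assumed typical 7.2k SATA spindle
SAS_15000_IOPS = 175   # assumed typical 15k SAS spindle

def aggregate_read_iops(spindles, per_spindle_iops):
    """Small random reads are spread across all spindles in the set,
    so aggregate read IOPS scales roughly linearly with spindle count."""
    return spindles * per_spindle_iops

sata_total = aggregate_read_iops(6, SATA_7200_IOPS)
sas_total = aggregate_read_iops(6, SAS_15000_IOPS)
print(f"6 x SATA: ~{sata_total} IOPS, 6 x SAS: ~{sas_total} IOPS")
```

With a dozen VMs issuing uncorrelated small reads, that per-spindle difference is what the array cache can only partially paper over.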

sysadmin1138

What I help run:

  • 4 ESX 3.5 servers
  • 3 EqualLogic SANs (from Dell)
  • 2 dedicated switches

Being able to VMotion machines from one server to another has helped more times than I can count. We even had to replace the motherboard on one of the servers, with no downtime for the VMs.

If you have more than one ESX server, seriously consider a good SAN (with iSCSI) and good disks (SAS). Spending a little extra on good equipment will save you a lot of heartbreak, and headaches, later on.

Joseph Kern
  • Are your EqualLogic SANs SATA-based? I've worked around a SATA-based EqualLogic SAN and found it very pleasant. I didn't get a chance to do any benchmarking, but it seemed fairly peppy even though it was SATA-based. VMotion is a beautiful thing. We did some RAM upgrades and added NICs on a 4-node VMware ESX 3 cluster for a customer last year and took zero downtime on any of the VMs. We'd already done the test cluster, so when we did the production cluster we didn't even schedule it with the admins of the VMs. It was really, really sweet, and nobody noticed anything! – Evan Anderson Jun 19 '09 at 16:38

If you're very budget-constrained then an iSCSI SAN box will be fine, but I'd be very tempted to go for a low-end FC-based SAN, something like an HP MSA2000fc G2; it'll blow the socks off most iSCSI solutions.

Chopper3
  • I looked at FC and agree it would definitely be the right choice, but not only would we need to get the SAN, we'd also have to purchase FC cards and switches. – dragonmantank Jun 19 '09 at 19:59