
We are looking to move our infrastructure from our office to a colocation facility.

Currently we run a rack-mount white-box server on commodity hardware, with ESXi 4 as the hypervisor powering nine VMs for internal development, a DC, Exchange, etc.

We are looking to use a SAN for storage, and have come up with a network diagram that uses the spare Ethernet port on the physical server to connect to a second server, which is proposed to host the SAN.

The question is: is an Ethernet port sufficient for this application? It is a gigabit Ethernet port. I have used Fibre Channel in the past for this, but not Ethernet.

DataCore (http://www.datacore.com/) has a method of providing a SAN over Ethernet.

The proposed physical architecture is as follows:

[Diagram: proposed physical architecture]

With the virtual machines looking a bit like this (obviously the connection between pfSense and eth1 would be removed if the top server were a SAN):

[Diagram: virtual machine layout]

Darbio

4 Answers


I've had good luck with iSCSI under moderate workloads, but whether a single gigabit connection can keep up with your environment isn't something we can determine for you.

The one glaring omission I noticed immediately in your plan is that, by using only a single port for storage, you have no option for failover or load balancing. And if you have only a single-head SAN (which seems to be the case), you have a single point of failure there as well.
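If you do end up with two storage NICs, a rough sketch of the ESXi 4-era setup looks like this. It is a sketch under assumptions: the vSwitch name, the uplinks (vmnic2/vmnic3), the vmkernel ports, the software iSCSI adapter name (vmhba33), and the naa device ID are all placeholders for whatever your environment actually uses.

    # Storage vSwitch with two uplinks and one vmkernel port per uplink
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A iSCSI1 vSwitch1
    esxcfg-vswitch -A iSCSI2 vSwitch1
    esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI1
    esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI2

    # Bind both vmkernel ports to the software iSCSI adapter (ESXi 4.x syntax);
    # each iSCSI port group must first be overridden to use a single active uplink
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33

    # Round-robin I/O across both paths for the SAN LUN
    esxcli nmp device setpolicy --device naa.<device-id> --psp VMW_PSP_RR

With that in place, the host can survive a NIC or cable failure and spread I/O across both links.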

EEAA
  • +1. I'd say iSCSI is good up to moderate workloads. I can max out our system at 1,200 IOPS or 175 MB/s (obvious trade-offs apply; 2x 1 GbE minimum). It chokes under heavy workloads, where FC shines. – Chris S Dec 13 '10 at 03:04
  • Would you suggest two gigabit Ethernet connections? The SAN is proposed to be RAID 5, and yes, there would only be one. Maybe something in the future would be a failover SAN. – Darbio Dec 13 '10 at 03:08
  • My preferred setup is 2x 1 Gb interfaces for storage and 2 for the "normal" VM traffic. If you have 4 interfaces available, then that's what I'd do. – EEAA Dec 13 '10 at 03:13

iSCSI is a method VMware certifies solutions for, so it's good enough in general; whether it's good enough in your exact use case we can't tell from here. VM workloads tend to be highly random, and storage capability scales with the number of disk spindles more than with raw capacity. If your SAN has more actual disks than the direct-attached storage you're using now, you shouldn't have anything to fear there.

Where you do have to worry is throughput: 3 Gb SAS is faster than 1 Gb Ethernet, and that's all there is to it. However, for highly randomized workloads you may not be pushing even 1 Gb over SAS. It all depends.
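To put rough numbers on the gap (back-of-envelope; real-world figures vary with protocol overhead and workload):

    1 Gb Ethernet: 1 Gb/s / 8 ≈ 125 MB/s raw, typically ~100-115 MB/s after TCP/iSCSI overhead
    3G SAS:        3 Gb/s line rate, 8b/10b encoded = 300 MB/s usable per lane

For random I/O, though, the bottleneck is usually spindle IOPS long before either link saturates.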

You'll probably be OK, but the only way to find out for certain is to try.

sysadmin1138
  • +1. If there are monitoring tools available, take a look and see what your workload is; knowing that will go a long way toward determining whether it's workable for your situation. – Chris S Dec 13 '10 at 03:07
  • I agree with most of what you are saying, but any decent SAN hardware that supports iSCSI will support multiple front-end ports and distribute I/O across them with multipathing, so iSCSI throughput is not limited to 1 Gb/s even if your iSCSI network is only gigabit Ethernet. – Helvick Dec 18 '10 at 07:48
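Following up on the monitoring suggestion above: if the current host is already running ESXi, esxtop will show the disk workload directly (the interval and sample count below are arbitrary examples):

    # Batch mode: 60 samples at 5-second intervals, captured for later analysis
    esxtop -b -d 5 -n 60 > workload.csv

    # Interactively, press 'd' for the disk adapter view or 'v' for per-VM disk
    # stats, and watch CMDS/s (IOPS), MBREAD/s, and MBWRTN/s to size the link.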

A SAN over Ethernet is very much doable, but whether it would succeed in your environment depends on your disk usage. If your workload is disk-intensive, a single gigabit link may not suffice.

Since you already have all the hardware, why don't you give it a try?
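One quick way to try it from inside a Linux guest is a dd pass against the proposed storage (a rough sketch; paths and sizes are placeholders, and note that dd only measures sequential throughput, not the random IOPS that dominate VM workloads; a tool like Iometer gives a more realistic picture):

    # Write 4 GB with direct I/O to bypass the guest page cache
    dd if=/dev/zero of=/mnt/san/testfile bs=1M count=4096 oflag=direct

    # Read it back the same way
    dd if=/mnt/san/testfile of=/dev/null bs=1M iflag=direct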

Sameer

I've used gigabit NICs on Oracle databases talking to SAN/NAS arrays over iSCSI for years with no performance problems. I've been quite surprised at how little traffic really flows: little enough that I'm having a tough time letting my vendor sell me more NICs.

I've also used VMware over NFS in a similar fashion, with more guests (spread across a few servers) than you are running, again with little trouble.

My current config with ESXi is running 17 guests (likely 18 later tonight) over NFS on a single redundant gigabit link.
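For anyone wanting to reproduce this, mounting an NFS export as an ESXi datastore is a one-liner from the console (a sketch; the filer address, export path, and datastore label are placeholders):

    # Add the NFS export as a datastore, then confirm it mounted
    esxcfg-nas -a -o 10.0.0.20 -s /vol/vmstore nfs-datastore
    esxcfg-nas -l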

Keith Stokes