
I've been here as a lurker a few times in the past and have found it nothing but helpful; now I have a question of my own.

I'm charged with creating a VM cluster solution and have been looking into the MD3000(i) series DAS/iSCSI storage. I currently have 2 PowerEdge 1950s that I can hook up to an MD3000 via PERC5 SAS HBAs. However, and this is the tricky part, I want to create a clustered or high-availability disk that is accessible over the network.

One way I can see doing this is to divide the MD3000 into a few LUNs, use one to create a clustered VM, and then connect another LUN as a pass-through disk to that VM, which can then "share" that disk via an iSCSI target. However, I do see a few pitfalls here: if the VM is active/passive, I only get the benefit of one HBA handling IO. Additionally, I am wary of the performance overhead that may be introduced by using a VM to manage the SAN disk.

Are these concerns justified? Can the VM even successfully fail over and still communicate with the pass-through disk?

Another option that seems far simpler is to just pick up an MD3000i instead and set it up as an iSCSI target using my 1950s to manage it. The only reason I am thinking of alternatives is that I am concerned the 1-gigabit ports on this unit will create a bottleneck.

I realize that if I'm looking for a super-high-performance SAN solution then the MD3000 series probably isn't the way to go, but I am looking for a reasonably priced solution to cluster 5-6 low/medium-utilization VMs (around 60 IOPS each, ~90% writes).
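For reference, here's my back-of-the-envelope math (the 8 KB I/O size is my own assumption, not a measurement):

```python
# Rough sizing math for the planned workload. Figures are assumptions,
# not benchmarks: 8 KB is a guess at a typical small random I/O size.
vm_count = 6
iops_per_vm = 60
io_size_kb = 8

total_iops = vm_count * iops_per_vm                # 360 IOPS aggregate
throughput_mb_s = total_iops * io_size_kb / 1024   # ~2.8 MB/s

gbe_budget_mb_s = 125 * 0.8   # 1 Gbit/s is ~125 MB/s raw; ~80% usable after overhead

print(f"Aggregate IOPS:  {total_iops}")
print(f"Throughput:      {throughput_mb_s:.1f} MB/s")
print(f"Single GbE port: ~{gbe_budget_mb_s:.0f} MB/s")
```

Even at larger I/O sizes this looks like a tiny fraction of a single gigabit port, which is part of why I'm asking whether my bottleneck worry is justified at all.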

I don't mind "out of the box" thinking to come up with a solution, but I do need to be able to support more original thinking with documentation.

Thanks in advance for any thoughts.

Adrian
  • From what I see, I think you're trying to create a NAS. For a total of ~350 IOPS, you'll be just fine using a 1950 to share iSCSI/NFS mounts. That's less throughput than a single 15k SAS drive, and fully capable of being handled by a single 1Gbit connection. – Hyppy May 13 '11 at 19:27
  • So from what I gather you are suggesting creating a iSCSI target on an MD3000i? Which method do you suspect offers better throughput, and why? Let's say I want to leverage this MD3000 further; what kind of IOPS am I looking at as the "breaking point"? – Adrian May 13 '11 at 20:23
  • @Hyppy: iSCSI is still considered a SAN. A NAS would be NFS or CIFS. Just so you know. – James May 13 '11 at 20:40
  • @James That's debatable, really. – Hyppy May 13 '11 at 20:44
  • Part of my confusion with your posts is your terminology; you don't "create a iSCSI target on an MD3000i". It IS an iSCSI target. @Hyppy: NFS/CIFS=NAS, iSCSI/FC=SAN. What's the debate about? – icky3000 May 13 '11 at 21:58

2 Answers


Another idea to save money: you could get a Norco DS-24E. They are really popular among DIY storage enthusiasts. In fact, IIRC you can even find guides for packing the server itself inside the enclosure.

Now, I don't really understand how you want clustered/HA storage when you only seem to be buying a single storage server. Is it the storage you want to be HA, or are you talking about clustering multiple VMware servers attached to this single storage server?

Edit: oops, disregard that. I reread your first sentence and see you have two storage servers attached to a single disk enclosure.

Are you installing VMware on these 2 PowerEdge 1950s, or are these two servers JUST storage servers for a separate group of VMware servers?

UPDATE

I'm going to take a stab in the dark and guess that what you are trying to describe is this: you will have a single disk enclosure, you want to connect two VMware servers to it, and you want those two VMware servers to be able to fail over to each other using the single disk enclosure. Am I right?

This is much easier than you may think, and very standard. You'll want the MD3000i so you get iSCSI and can share the same disks between multiple servers - this is necessary for clustering VMware servers. When you configure the disks in the MD3000i, you'll want to set them up with RAID so you have some disk protection. There are many ways to do the RAID, but a popular, standard way to start would be with all disks in a RAID 5 array except one disk assigned as a hot spare.
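To put rough numbers on a layout like that (the disk count, size, and per-disk IOPS below are assumed figures for illustration, not MD3000i specs):

```python
# Rough RAID 5 math for a hypothetical 15-disk shelf of 300 GB 15k SAS drives.
# All figures are illustrative assumptions, not MD3000i specifications.
disks_total = 15
hot_spares = 1
data_disks = disks_total - hot_spares        # 14 disks in the RAID 5 set
disk_size_gb = 300
disk_iops = 175                              # ballpark for a 15k SAS drive

usable_gb = (data_disks - 1) * disk_size_gb  # one disk's worth goes to parity

# RAID 5 turns each small host write into ~4 backend I/Os (read data,
# read parity, write data, write parity), so a 90%-write load costs a
# lot more at the disks than the host-side number suggests.
host_iops = 360                              # ~6 VMs x 60 IOPS
write_fraction = 0.90
backend_iops = (host_iops * (1 - write_fraction)
                + host_iops * write_fraction * 4)

print(f"Usable capacity: {usable_gb} GB")
print(f"Backend IOPS:    {backend_iops:.0f} needed, "
      f"~{data_disks * disk_iops} available")
```

Even with the RAID 5 write penalty on a write-heavy load, a shelf like that has plenty of headroom for what you described.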

Then you'll need to export an iSCSI LUN. You'll just need a single LUN to start with. You could use the whole RAID array as one big LUN, or you could use, say, half and save the rest for other LUNs in the future (you can always expand the original LUN later).

Now you install your VMware servers, connect one of them to the iSCSI LUN, and partition it as a VMFS datastore. Then connect the other VMware server to that same LUN and scan for the existing datastore. Now both of your VMware servers are using the same disk LUN and can run virtual machines at the same time, on this one LUN (active/active).
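If you ever want to script a check that both hosts really do see the same shared datastore, here's a rough sketch using VMware's pyVmomi Python bindings (the hostname and credentials are placeholders, and it assumes you connect to vCenter or to each host directly):

```python
# Sketch: list the VMFS datastores each host can see, to confirm both
# hosts share the same one. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()   # skip cert checks; lab use only
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

for dc in content.rootFolder.childEntity:      # datacenters
    for cr in dc.hostFolder.childEntity:       # clusters / compute resources
        for host in cr.host:
            vmfs = [ds.summary.name for ds in host.datastore
                    if ds.summary.type == "VMFS"]
            print(host.name, "sees VMFS datastores:", vmfs)

Disconnect(si)
```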

If one of the VMware servers goes down, you can always run all the virtual guests on the other VMware server. If you want the failover to happen automatically, you'll have to purchase vCenter.

NOTE: The only reason both servers can use this LUN at the same time is that the VMFS filesystem is "cluster-aware". If you connected two Linux machines or two Windows machines to a single LUN with a typical filesystem, they would instantly eat each other's data (unless it was mounted read-only, though there are still issues there). NTFS, ext3/4, FAT, XFS - all of these are NOT cluster-aware. You can do this with GFS or OCFS on Linux, or on Windows with NTFS combined with Cluster Services. But you don't need to worry about the Linux or Windows guest machines on your VMware servers, because the VMFS datastore takes care of that.
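Here's a toy illustration of the failure mode - pure make-believe Python, not a real filesystem, but it shows why two uncoordinated writers destroy each other's data:

```python
# Toy model: two hosts mount the same "disk" with a non-cluster-aware
# filesystem. Each caches its own copy of the free-block bitmap, so
# both allocate the same block and one write silently clobbers the other.
disk = {}                                  # block number -> contents

class NaiveHost:
    def __init__(self, name):
        self.name = name
        self.free_blocks = {0, 1, 2, 3}    # private cached copy, never shared

    def write_file(self, data):
        block = min(self.free_blocks)      # pick the "first free" block...
        self.free_blocks.remove(block)     # ...marked used only locally
        disk[block] = (self.name, data)

a, b = NaiveHost("hostA"), NaiveHost("hostB")
a.write_file("guest1 config")
b.write_file("guest2 config")   # also picks block 0: hostA's data is gone

print(disk)   # {0: ('hostB', 'guest2 config')}
```

A cluster-aware filesystem like VMFS takes an on-disk lock and re-reads the shared metadata before allocating, so both hosts agree on what's free.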

Phew, that was wordy.

James
  • I see that this model is a JBOD; if that's the case, how would you enable clustering on it? Would I be looking at purchasing additional RAID controllers for this kind of configuration? The unit itself also only seems to have one PSU and one SAS HBA; do they sell redundant models? – Adrian May 13 '11 at 20:58
  • It only has one SAS input because you can only connect one server to a SAS enclosure. This is true even of the MD3000. To connect multiple servers to a single disk enclosure, you will need iSCSI (or other, more expensive options). The MD3000i does iSCSI natively - it basically has a server built into it and probably uses SAS internally. I do not believe there are any options for iSCSI built directly into a Norco enclosure. As for disk redundancy, the RAID levels will depend on the SAS card in your server. – James May 13 '11 at 21:16
  • The reason I had selected an MD3000 was that it is more than a simple enclosure; it's actually a DAS that does allow multiple servers to cluster the disk: http://www.dell.com/downloads/global/power/ps2q07-20070373-LSI.pdf – Adrian May 13 '11 at 21:30
  • Scratch that about the MD3000; I see the MD3000 specs say it can connect to 4 servers, although I'm betting it cannot connect 4 servers to the same disks. **update:** that pdf you linked to combines the MD3000 with the SE600W. It's the SE600W that does the clustering. – James May 13 '11 at 21:33
  • Please reread the pdf; the SE600W is the combined solution utilizing both the servers (1950s) and the storage (MD3000) alongside the appropriate HBAs and OS. – Adrian May 13 '11 at 21:46
  • Ah, yeah, it looks like you are right. I would not expect this to work with VMware, though. I updated my answer with what I think you are looking for. *edit:* well, actually, maybe it would, but I would still use iSCSI; it's the more standard and expected setup for VMware. – James May 13 '11 at 21:54
  • As for throughput, I run 6 VMware servers over 1 Gbps Fibre Channel and don't (yet) have a problem running about 50 virtual guests across them. However, if I were to do it again, I would run the storage off a NAS using NFS. I know several other sysadmins who swear by one brand of NAS for their VMware clusters - I think it's NetApp. – James May 13 '11 at 22:10
  • If it's a NAS for VMware, it's not a NetApp. NetApp price/performance is horrific for VM loads (at least in every bake-off I've participated in). – Jim B May 14 '11 at 03:24
  • I'm not at all certain it was NetApp they said, so I would trust you over my memory. – James May 14 '11 at 03:34

I've reread your 3rd paragraph several times, but I'm still confused by it, so I won't comment on that part.

Dell used to sell a PowerEdge 1950/MD3000/optional MD1000/optional MD1000 as a NAS bundle with Microsoft Storage Server installed on the 1950. You could easily recreate that config with your existing 1950 and MD3000 by running the now freely available Microsoft iSCSI Target. Personally, I think the Microsoft iSCSI Target is handy for labs, but in a production environment, relying on the stability of Windows to serve my storage makes me uneasy. I ran a couple of these systems and they were OK. Obviously, you could use the same hardware and run any OS with your favorite iSCSI target or NFS gateway.

The MD3000i iSCSI option works too. I have a few of these. For the load you're talking about, they would be more than adequate. The MD3000i really couldn't be any easier to manage.

If you have some of this hardware already, it's certainly still very viable. If you don't, note that Dell itself isn't selling the MD3000i anymore - there's a new line that does similar stuff.

icky3000
  • TY for the detailed answer. Most of it makes sense to me except for the iSCSI target part. My confusion is in how exactly clustering would work with iSCSI target software. From my understanding, the initiator needs to identify a server as the iSCSI target host. If that is truly the case, how do I manage to identify both nodes in my cluster as the server for a single target? And even if I do that, is the initiator designed to "fail over" if one host is unavailable? Is that a feature available in the target software? – Adrian May 13 '11 at 20:53
  • Let's clarify the kind of clustering you are talking about. Are you talking about a VMware cluster, or...? – icky3000 May 13 '11 at 21:55