4

I'm setting up a VMware ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches, shared between the iSCSI SAN and the LAN/WAN. The S60 switches are extremely powerful: 1.25 GB of packet buffer and < 9 µs latency. But they are very expensive (online price ~$15k per switch; the actual quote was a little less).

I've been told that "by the book" you should have at least 2 dedicated switches for the SAN and 2 switches for the LAN/WAN (i.e., a redundant pair for each role). I know some of the pros and cons of each approach. What I'm wondering is: would it be more cost-effective to disjoin the SAN from the LAN using less expensive switches?

The answer to the question I linked highlights what I should be looking for in a SAN switch. What should I be looking for in a LAN/WAN switch, compared to a SAN switch?

Following on from that linked question, for the SAN:

  • How are latency and buffering measured? When you see 36 MB of packet buffer, is that shared across all ports or dedicated per port? In other words, on a 48-port switch would 36 MB work out to roughly 768 KB per port, or would each port get the full 36 MB? (See the worked example after this list.)
  • With 3 to 6 servers, how much buffer do you really need?
  • What else should I be looking at?
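
To make the shared-vs-per-port math concrete, here is a rough back-of-the-envelope sketch in Python. It uses the 36 MB figure from above but assumes a 48-port 1 GbE switch and a 3-into-1 incast burst; those assumptions are mine, not from any particular datasheet.

```python
# Rough back-of-the-envelope buffer math. Assumptions: a 48-port 1 GbE switch
# with a 36 MB shared packet buffer, and a worst case where three senders
# burst at a single receiver (incast) at line rate.

PORTS = 48
BUFFER_BYTES = 36 * 1024 * 1024      # 36 MB shared buffer (hypothetical switch)
LINE_RATE_BPS = 1_000_000_000        # 1 Gb/s per port

# If the buffer were divided evenly, each port's share would be:
per_port_share = BUFFER_BYTES / PORTS
print(f"Even share per port: {per_port_share / 1024:.0f} KB")   # ~768 KB

# How long can the whole buffer absorb a 3-into-1 incast burst?
# Three servers send at line rate toward one port that can only drain at
# line rate, so the excess accumulates at 2 Gb/s.
excess_bps = 2 * LINE_RATE_BPS
absorb_seconds = BUFFER_BYTES * 8 / excess_bps
print(f"Full 36 MB buffer absorbs a 3:1 incast burst for ~{absorb_seconds * 1000:.0f} ms")
```

Note that on many switches the buffer is allocated dynamically from a shared pool rather than carved statically per port, so the even-split figure is only for intuition; the datasheet (or the vendor) should say which model applies.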

Our application will make heavy use of HTML5 WebSockets (a high number of persistent connections). The amount of data being sent is small, and data sent between client <-> server isn't broadcast to other clients (it's not a chat/IM service). We will also be doing some database reporting (CSV export, sums, some joins).
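
For a rough sense of scale on the LAN side, here is a quick Python estimate of the aggregate WebSocket traffic. The connection count, message rate, and message sizes below are purely illustrative assumptions, not figures from our application:

```python
# Illustrative estimate of aggregate WebSocket traffic on the LAN side.
# All inputs are assumptions for the sake of the sketch.

connections = 10_000          # concurrent persistent connections (assumed)
msgs_per_sec_per_conn = 1     # small, infrequent messages (assumed)
payload_bytes = 200           # application payload per message (assumed)
overhead_bytes = 66           # rough Ethernet+IP+TCP+WebSocket framing overhead

bytes_per_sec = connections * msgs_per_sec_per_conn * (payload_bytes + overhead_bytes)
mbps = bytes_per_sec * 8 / 1_000_000
print(f"~{mbps:.1f} Mb/s aggregate")   # ~21 Mb/s: a small fraction of one 1 GbE port
```

Even with generous assumptions this is a small fraction of a single gigabit port, which is part of why I suspect raw throughput is not the real constraint on the LAN side.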

We are a small business on a budget. We could probably spend no more than $20k total on switches (whether that's 2 or 4).

Luke
  • possible duplicate of [Completely disjoint iSCSI networks vs dedicated switches and VLANs](http://serverfault.com/questions/363149/completely-disjoint-iscsi-networks-vs-dedicated-switches-and-vlans) – MDMarra May 30 '12 at 19:46
  • @MDMarra I also want to know what kind of switches I should be looking at for either method. Other threads don't cover that. It's a little more than "should they be dedicated" - "yes they should be". – Luke May 30 '12 at 20:14
  • "What kind of switches should I get for _____" is a shopping question, which is generally off-topic across all of Stack Exchange. – MDMarra May 30 '12 at 20:16
  • @MDMarra Not asking for brand or model specifically. More of what to look for in a switch. The question I linked covered SAN quite well. If I were to have a dedicated, what is sufficient for LAN that isn't for a SAN? Perhaps that should have been the title. – Luke May 30 '12 at 20:22
  • Clarified the question a little more. – Luke May 30 '12 at 20:48

3 Answers

5

As a best practice, yes, your SAN and LAN ought to be physically separate.

That said, like all things, it comes down to what problems you're trying to solve, your performance needs, your sensitivity to transient storage slowness (if you experience port or backplane contention), and the amount of money you have to throw at the project.

I know many businesses that run converged SAN and data networks with great success, and just as many that maintain two physically separate networks.

What's best for your situation depends on the above factors.

EEAA
  • So what makes more sense: Spending $10k on each switch (2 total) or $5k on each switch (4 total)? – Luke May 30 '12 at 19:43
  • @Luke - that's impossible for me to answer. You need to work with your vendor and go over your goals/requirements with them to see what would be the best solution. – EEAA May 30 '12 at 19:45
  • We haven't decided on a vendor yet. Right now it's between Dell and CDW (EMC + HP). I'm also considering Cisco. I haven't really gotten any firm answers from them. Looking more for experience with either approach. The biggest pro for dedicated switches I've heard is more redundancy. But beyond that, I'm not sure where I should be investing. – Luke May 30 '12 at 19:49
  • Dell PowerConnect 5224s are perfect for this, and cost-effective. The "iSCSI optimizations" they provide allow you to roll them out pretty quickly. – SpacemanSpiff May 30 '12 at 20:08
  • @SpacemanSpiff Interesting. The 5224s are ~$1,500 each. Would these be good for iSCSI with MySQL databases? I think I would probably invest a little more, but if these switches will work then I guess it doesn't matter too much. I was thinking I'd have to spend $5k-$10k per switch. – Luke May 30 '12 at 20:19
  • @Luke: I'm running a little bit of iSCSI over some Dell Powerconnect 5500-series switches and I'll second SpacemanSpiff's recommendation. They're extremely inexpensive and work pretty well. They're nothing fancy but they're also not overly expensive. It sounds like you've got a pretty small deployment there and I can't imagine you're going to have switch bottleneck problems before you bump into storage device I/O limits (assuming you're using spinning rust). – Evan Anderson May 30 '12 at 22:23
  • Luke, the advantage of a switch like the PowerConnect is that it will do what you need for iSCSI without a lot of fuss. It won't do advanced switching, but you don't need that. My recommendation is to get two PowerConnect switches, don't stack them, and run multipath I/O. That way if any one switch fails, MPIO will take over and you won't lose host <-> storage connectivity (see the sketch after these comments). – Jeremy Jun 01 '12 at 17:13
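
To illustrate the multipath point in the comment above, here is a toy Python sketch of the failover behaviour: two independent paths (one per unstacked switch), and I/O keeps flowing as long as at least one path is up. This is a conceptual illustration only, not how ESXi's multipathing is actually implemented.

```python
class Path:
    """One physical path: host NIC -> one switch -> one storage controller port."""
    def __init__(self, name):
        self.name = name
        self.up = True

def issue_io(paths, io):
    """Send the I/O down any healthy path; connectivity is lost only if every path fails."""
    for path in paths:
        if path.up:
            return f"{io} sent via {path.name}"
    raise RuntimeError("all paths down: host <-> storage connectivity lost")

paths = [Path("NIC1 -> switch-A -> SAN"), Path("NIC2 -> switch-B -> SAN")]
print(issue_io(paths, "IO-1"))   # healthy: uses the path through switch A
paths[0].up = False              # simulate switch A (or that whole fabric) failing
print(issue_io(paths, "IO-2"))   # transparently continues through switch B
```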
5

Best practice is to run them separately; however, in doing so you lose the benefits of a converged network. That matters most when you have a large environment and oodles of 10Gb ports.

However, your environment is a very small one, and I think Dell is trying to oversell you on network hardware and on their own iSCSI hardware.

You can purchase a single chassis switch with multiple heads (redundant supervisors) that is functionally equivalent to having 2 switches. You could also look at FC instead of iSCSI, and maybe compare NFS too, along with InfiniBand. You could even use something like InfiniBand-based I/O virtualization (e.g. Xsigo).

On the NAS/SAN side, I would not be so tied to Dell; I might instead go with a best-of-breed product line, including options such as NetApp and its competitors.

Questions I would ask:

  • How easy is it for me to find talent for this configuration?
  • How close to industry standard is this hardware?
  • What are the out-year costs of this solution going to be (TCO)?
  • How expandable is this solution?
  • Does this solution miss any nice-to-haves?
  • Is the vendor trying to oversell me on a specific solution?
  • Do I adequately understand the problem space, and do I know a reasonable number of alternative solutions?
  • Can I use one vendor's quote to get price concessions from another?
  • How remotely manageable and monitorable is this solution?
  • How well does the entire stack integrate?
  • What is the cost per minute of an outage, and how does that compare to the cost of extra hardware?
  • Can I mitigate risks another way?
  • Might I be better off going with a cloud stack from a vendor in a regulated environment, trading higher operating costs for less capital investment?
  • Where is my application-aware security?
  • How easy is it to secure this infrastructure?
  • Am I attempting to optimize the solution prematurely?
  • Have I performed sufficient performance analysis and benchmarking to know what my true performance requirements are?
  • How does this system fail over, and to whom (HA and vMotion, among others)?
  • Do I have single points of failure?
  • Have I received quotes for both integrated stacks and best-of-breed stacks, from at least 3 vendors apiece (6 vendors total)?
  • Can I go with a different model altogether, perhaps a blade enclosure with blades, or virtualized I/O over a higher-speed network (Xsigo)?
  • Can I use virtual switches (e.g. Cisco Nexus 1000V and its competitors) instead of physical switches?

One other thing I would add is that several vendors now sell pre-engineered solutions, such as the Cisco/VMware/NetApp FlexPod partnership, or a competing single-integrator solution such as HP's VirtualSystem. I'd make sure these vendors know what your goals are and get their own virtualization specialists working on a solution that meets your requirements.

You can use demo solutions from these vendors without buying anything (for a limited time), then make a selection based on whichever best meets your needs. Head-to-head competition - always a win :)

Brennan
  • Great set of questions! – 3molo May 30 '12 at 19:59
  • Some good questions indeed. I've asked some of them. The biggest questions are "Am I attempting to optimize the solution prematurely?" and whether the vendors I'm working with are trying to oversell what I actually need. – Luke May 30 '12 at 20:24
  • @Luke, that is a good comment, I added it to my list of questions :) – Brennan May 30 '12 at 20:56
1

Use Cisco 3750-X switches: buy two of them and VLAN the ports off for what you need. Two 48-port switches should do you fine. I just set up our iSCSI environment this way and it works fine. The SAN plugs into the switches and the servers plug into the switches. I think we needed 6 cables for each server because of all the heartbeats, but it works mint!
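
For concreteness, here is a rough Python sketch of how that port/VLAN split might be laid out across two 48-port switches. The VLAN IDs, port ranges, and iSCSI/LAN split are assumptions for illustration only, not details from my setup:

```python
# Sketch of the VLAN split described above on two 48-port switches.
# VLAN IDs, port ranges, and the iSCSI/LAN split are hypothetical.

SWITCHES = ["switch-1", "switch-2"]
VLANS = {"iSCSI": 100, "LAN": 200}   # assumed VLAN IDs

def port_plan(ports_per_switch=48, iscsi_ports=16):
    """Reserve the first block of ports for the iSCSI VLAN, the rest for the LAN VLAN."""
    plan = {}
    for switch in SWITCHES:
        plan[switch] = {
            f"iSCSI (VLAN {VLANS['iSCSI']})": (1, iscsi_ports),
            f"LAN (VLAN {VLANS['LAN']})": (iscsi_ports + 1, ports_per_switch),
        }
    return plan

# Each host keeps NICs on *both* switches (for both roles), so losing one
# switch does not take out either storage or LAN connectivity.
for switch, vlans in port_plan().items():
    for role, (first, last) in vlans.items():
        print(f"{switch}: {role}: ports {first}-{last}")
```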

Rob
  • 607
  • 3
  • 8
  • 16