
I've hit a bit of a wall with our network scale-out. As it stands right now:

[Network diagram: one central ProCurve 2910al with four edge 2910al switches attached, two via CX4 and two via fiber]

We have five ProCurve 2910al switches connected as above, but with 10GbE links (two CX4, two fiber). That fully populates the central switch; there will be no more 10GbE connections from that device. This group of switches is not stacked (the configs carry the no stack directive).

Sometime in the next two or three months I'll need to add a sixth switch, and I'm not sure how deep a hole I'm in. Ideally I'd replace the core switch with something more capable that has more 10GbE ports, but that's a major outage and requires special scheduling.

The two edge switches connected via fiber have dual-port 10GbE cards in them, so I could physically hang another switch off the far end of one of those. I don't know whether that would be a good or a bad idea, though.

Is that too many segments between end-points?

Some config excerpts:

Running configuration:

; J9147A Configuration Editor; Created on release #W.14.49

hostname "REDACTED-SW01"
time timezone 120
module 1 type J9147A
module 2 type J9008A
module 3 type J9149A
no stack

trunk B1 Trk3 Trunk
trunk B2 Trk4 Trunk
trunk A1 Trk11 Trunk
trunk A2 Trk12 Trunk

vlan 15
   name "VM-MGMT"
   untagged Trk2,Trk5,Trk7
   ip helper-address 10.1.10.4
   ip address 10.1.11.1 255.255.255.0
   tagged 37-40,Trk3-Trk4,Trk11-Trk12
   jumbo
   ip proxy-arp
   exit

  • Two questions: do you need 10GbE on all the endpoints? Also, how much will you grow soon? Will downtime outweigh the upgrade? – Zapto Oct 22 '12 at 17:56
  • @t1nt1n The expansion I'm anticipating is likely to be a bunch of VMWare nodes that'll need high throughput to resources (database and filers) kept on other switches, so I am leery of bandwidth. Also, that central switch already has a lot of 1GbE ports consumed. Network-down upgrades only can happen once a quarter, which limits the timing on the expansion. – Blue Warrior NFB Oct 22 '12 at 18:02
  • How about stacking a new core switch with the existing core switch? – joeqwerty Oct 22 '12 at 19:34
  • @joeqwerty That still qualifies as a major outage (I'm presuming stacking requires a reboot), but is an option I hadn't considered. – Blue Warrior NFB Oct 22 '12 at 19:37

1 Answer


I would just switch to having two core switches instead of one. The two core switches would be connected by 10GbE, leaving six available 10GbE ports on the core switches. That would support adding up to six more edge switches. Your network diameter only increases by one, which is the minimum possible.

If your switches support trunking, you could use a 2x10GbE link to trunk the two core switches together. That would leave two 10GbE ports free on each core switch. That still works for six switches, but you'll be back at a wall if you ever need another one.
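A minimal sketch of what that 2x10GbE inter-core trunk could look like on each core switch, reusing the syntax from the config excerpt above; the port IDs (A3-A4) and trunk name (Trk20) are placeholders, not taken from the actual configs:

; hypothetical 2x10GbE inter-core trunk; ports and trunk name are placeholders
trunk A3-A4 Trk20 Trunk

; carry the existing VLANs across the new trunk, e.g. VLAN 15
vlan 15
   tagged Trk20
   exit

Each VLAN that needs to span both cores would be tagged on the new trunk, the same way VLAN 15 is tagged on Trk3-Trk4 and Trk11-Trk12 in the excerpt above.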

David Schwartz
  • My switches do support trunking, I'm using it for distributing out to the edge switches. One of the CX4-connected switches does have a spare slot for another 2-port SFP+ module, so that's looking good. – Blue Warrior NFB Oct 23 '12 at 00:02