
We are planning to purchase new switches for our network. We have three buildings in the same area that are all connected by fiber. Our main building is very large and needs to be connected by fiber from north to south. North is where our core switches will reside.

My question is this: We will have three fiber connections coming into our core switches - 1 from each remote building and 1 from the south side of the plant. Can we use two switches in the north plant, each with 2 fiber ports, to accommodate this? This would leave 1 fiber port open. We would be trunking the two core switches by Ethernet ports. Will this work OK, or do we need to trunk by the fiber ports?

skinneejoe

4 Answers


You haven't provided any info about the interfaces you're going to use with the fibre links, so I'm guessing that they're 1Gbit.

If the switches you're going to use have 1Gbit copper interfaces as well, then it's no problem to trunk between them - this is the normal way of connecting switches that have several uplinks.

I need to warn you, however: if those core switches only have two fiber slots each, then what are you going to do if one of the switches goes down? Since you don't have more than 1 link to each building, I strongly suggest that you either get core switches with enough (SFP) ports to carry all the fiber links, or get more links between the buildings to build a ring topology - like this:

                                  HQN
                                 / | \
                                /  |  \
                               /   |   \
                              /    |    \
                             B1----B2---HQS

(Link between HQ North and Building 2 would actually be optional)

This would ensure that any single link can go down without interrupting the network. The next step would be designing STP etc, but that's out of scope for this question.
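For illustration, the copper trunk between the two core switches might look something like this in Cisco-style syntax (the interface name and VLAN handling here are placeholders - adapt them to your actual hardware and vendor):

    ! On each core switch: configure one copper port as an 802.1Q trunk
    interface GigabitEthernet0/24
     description Trunk to other core switch
     switchport trunk encapsulation dot1q
     switchport mode trunk

The same idea applies regardless of vendor: the inter-switch link must carry all VLANs that traverse both switches.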

pauska
  • I may be confused, but if I have a core switch with enough ports on it and it goes down, won't I lose all connectivity? Wouldn't it be wise in either config to have a backup switch available? – skinneejoe Mar 16 '12 at 14:28
  • Yes, like I said, you need TWO core switches, where both of them have enough ports to hold all of the 3 fibre links. – pauska Mar 16 '12 at 14:31

What you propose will work, but is not ideal from a redundancy perspective. You propose this:

    [diagram: proposed topology]

A better topology that would allow for link or core switch failure would be this (note that it only uses two fiber ports at the core, but adds a building-to-building link). Obviously, spanning tree of some sort must be enabled on all switches to prevent loops:

    [diagram: better topology with a building-to-building link]

If you can run two fibers from one of the switches, this would provide the best performance with the gear you have (one hop to the core), while still providing redundancy for all access switch links.

    [diagram: best topology with two fiber runs to the core]

Of course the best thing would be to have two switches at each location, but that is not usually done for campus access switches, since end-user machines only have one link anyway. In a datacenter access setup, servers have two or more ports and are connected to at least two separate switches.
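Since spanning tree decides which redundant link gets blocked, it's worth pinning the root bridge to the core rather than leaving it to default priorities. A Cisco-style sketch (the VLAN range is an example - adjust it to your actual VLANs):

    ! On the primary core switch: claim STP root for the user VLANs
    spanning-tree vlan 1-100 root primary
    ! On the second core switch: take over as root if the primary fails
    spanning-tree vlan 1-100 root secondary

Without this, an access switch could end up as root and traffic would take suboptimal paths around the ring.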

rmalayter

It will work in this setup; however, your Ethernet trunk could be a potential bottleneck.

HostBits

To answer the question: yes, you can do this. The copper interfaces should be at least the same speed as the fiber to avoid a serious bottleneck. I would also recommend a port channel / EtherChannel between the two core switches in the north. With a port channel you can bundle several copper ports together to increase the bandwidth between the switches. Alternatively, you could go with stackable switches and not worry about the uplink between them at all.
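As an illustration, bundling two copper ports into an EtherChannel might look like this on Cisco gear (the port numbers and channel-group ID are placeholders):

    ! On both core switches: bundle two copper ports with LACP
    interface range GigabitEthernet0/23 - 24
     channel-group 1 mode active
    ! Trunk settings go on the resulting logical interface
    interface Port-channel1
     switchport mode trunk

With `mode active` both ends negotiate via LACP, so a miscabled port is simply excluded from the bundle instead of creating a loop.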

Paul Ackerman
    Stackable switches have a major problem in a core or even aggregation/distribution scenario: they only protect you from power failure or a failure that causes a switch to "go dark". What they do not protect you from is software bugs that affect both switches (since all the switches are "dumb" except for the master). Stacks also usually don't let you keep things running during a switch software update, since the whole stack reloads. We have had many problems with this and Cisco 3750-series as well as Dell PowerConnect stacks, and now only use stacking at the "edge" of the network. – rmalayter Mar 16 '12 at 15:52
  • Yeah I suppose I've used them to simply increase capacity (though also not in the core). For redundancy, of course two separate switches in the core with a ring would be ideal. – Paul Ackerman Mar 16 '12 at 23:43
  • @rmalayter You can reload a stack member in a 3750 stack, but it doesn't help much if it's a software bug (which is active on the master switch). – pauska Mar 17 '12 at 17:38
  • @pauska you can't upgrade the software on 3750 stack members individually and keep the stack running. And as soon as you reload the master (which you must do at some point), the whole stack reloads. – rmalayter Mar 19 '12 at 17:13
  • @rmalayter Yes, of course. This is how a stack works. If you need to upgrade software individually then you don't need a stack :) – pauska Mar 19 '12 at 18:10
  • @pauska There are "stacking" techologies from other vendors where the control planes are separate, only the management plane is centralized. So you can in fact upgrade the software with zero downtime, and run mixed versions in the stack during an upgrade. I think Brocade, Arista Networks, and HP's new IRF thingy do this. But you still have one management point and can do cross-stack link aggregation. – rmalayter Mar 19 '12 at 20:50
  • @rmalayter why are you telling me this? You were the one who said "Stackable switches have a major problem in a core or even aggregation/distribution scenario: they only protect you from power failure or a failure that causes a switch to "go dark". I'm fully aware of there being different stacking technologies. – pauska Mar 20 '12 at 08:15
  • @pauska because vendors realize that "stacking" has a bad name with some network engineers, they all gave the "stacking with separate control planes" a different name. Arista calls it MLAG, Cisco vPC, HP calls it IRF, etc. Vendors reserve the term "stacking" for the old-school approach of "one master, all other switches are just dumb ports". The reason I bring it up is the same reason you participate here: I thought it would be useful to the community to have relevant information attached to the question and answer. – rmalayter Mar 26 '12 at 12:11