34

I realise this may be a stupid question for some, but it's something I've always wondered about.

Let's say we have two gigabit switches and all of the devices on the network are also gigabit.

If 10 computers connected to switch A need to transfer large amounts of data to a server on Switch B (at the same time), is the maximum transfer speed of each connection limited by the bandwidth of the connection between the two switches?

In other words, would each computer only be able to transfer at a speed of one gigabit divided by the 10 machines trying to use the "bridge" between switches?

If so, are there any workarounds so that every device can use its maximum speed from point to point?

Nick

9 Answers

54

Yes. Using single cables to "cascade" multiple Ethernet switches together does create bottlenecks. Whether or not those bottlenecks are actually causing poor performance, however, can only be determined by monitoring the traffic on those links. (You really should be monitoring your per-port traffic statistics. This is yet one more reason why that's a good idea.)
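
To illustrate what that monitoring means in practice, here's a minimal sketch of turning two samples of a port's octet counter (the per-port statistic a poller such as MRTG or Cacti collects over SNMP) into a utilization figure. The counter values and polling interval below are hypothetical, not taken from a real switch:

```python
# Sketch: estimate a port's utilization from two samples of its octet
# counter, taken a fixed interval apart. Counter values are hypothetical;
# a real poller would read them via SNMP (IF-MIB ifHCInOctets).

INTERVAL_S = 300                    # seconds between samples
LINK_BPS = 1_000_000_000            # gigabit port

sample_t0 = 123_456_789_000         # octets at time t0
sample_t1 = 151_606_789_000         # octets at time t0 + INTERVAL_S

bits_per_sec = (sample_t1 - sample_t0) * 8 / INTERVAL_S
utilization = bits_per_sec / LINK_BPS

print(f"~{bits_per_sec / 1e6:.0f} Mb/s ({utilization:.0%} of the link)")
```

If that figure sits near 100% during the busy parts of the day, the cascade link is your problem; if it never gets close, it isn't.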

An Ethernet switch has a finite, but typically very large, internal bandwidth within which to do its work. This is referred to as the switching fabric bandwidth, and it's quite large today even on very low-end gigabit Ethernet switches (a Dell PowerConnect 6248, for example, has a 184 Gbps switching fabric). A 48-port gigabit switch needs at most 48 × 1 Gbps × 2 (full duplex) = 96 Gbps of fabric to run every port at wire speed simultaneously, so with modern 24 and 48 port Ethernet switches the switch itself typically will not "block" frames flowing at full wire speed between devices connected to the same switch.

Invariably, though, you'll need more ports than a single switch can provide.

When you cascade (or, as some would say, "heap") switches with crossover cables, you're not extending the switching fabric from one switch into the other. You're certainly connecting the switches, and traffic will flow, but only at the bandwidth provided by the ports connecting the switches. If more traffic needs to flow from one switch to the other than that single cable can carry, frames will be dropped.

Stacking connectors are typically used to provide higher-speed switch-to-switch interconnects. In this way you can connect multiple switches with a much less restrictive switch-to-switch bandwidth limitation. (Using the Dell PowerConnect 6200 series again as an example, their stack connections are limited in length to under 0.5 meters, but operate at 40 Gbps.) This still doesn't extend the switching fabric, but it typically offers vastly improved performance compared to a single cascaded connection between switches.

There were some switches (Intel 500 Series 10/100 switches come to mind) that actually extended the switching fabric between switches via stack connectors, but I don't know of any that have such a capability today.

One option that other posters have mentioned is using link aggregation mechanisms to "bond" multiple ports together. This uses more ports on each switch, but can increase switch-to-switch bandwidth. Beware that different link aggregation protocols use different algorithms to "balance" traffic across the links in the aggregation group, and you need to monitor the traffic counters on the individual interfaces in the group to ensure that balancing is really occurring. (Typically some kind of hash of the source / destination addresses selects the link, so that all frames between a given source and destination traverse the same member interface. That keeps frames arriving in order and avoids having to queue or track individual traffic flows, but it also means a single flow can never go faster than one member link, and the links may not load up evenly.)
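
As a rough sketch of the mechanism (the hash inputs, MACs, IPs, and/or L4 ports, vary by vendor, and real switches compute this in hardware; CRC32 here is purely illustrative):

```python
# Sketch: hash-based member-link selection in a link aggregation group.
# Real switches hash in hardware and the inputs vary by vendor; CRC32
# over the MAC pair is just a stand-in for the idea.
import zlib

def choose_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Map a source/destination pair onto one member link."""
    return zlib.crc32(f"{src_mac}>{dst_mac}".encode()) % num_links

# All frames of a given src/dst pair land on the same link, so they
# arrive in order, but a single flow can never exceed one link's speed.
print(choose_link("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:01", 4))
print(choose_link("00:1a:2b:3c:4d:5f", "00:aa:bb:cc:dd:01", 4))
```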

All of this concern about port-to-port switching bandwidth is one argument for using chassis-based switches. All the linecards in, for example, a Cisco Catalyst 6513 switch, share the same switching fabric (though some line cards may, themselves, have an independent fabric). You can jam a lot of ports into that chassis and get more port-to-port bandwidth than you could in a cascaded or even stacked discrete switch configuration.

Evan Anderson
6

short answer: yes, it can be a bottleneck

slightly better answer: try port trunking to add more links between switches.

more personal answer: it's quite likely that you won't need it. It depends a lot on the kind of work your users do, but it's very seldom that you have many users pushing data around 100% of the time. More likely, each client's link will be idle something like 95% of the time, which would mean that the link shared by 10 users is idle around 60% of the time (0.95^10 ≈ 0.60), with two or more users actively contending for it less than 9% of the time (see the back-of-envelope below).
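
For what it's worth, the arithmetic behind that estimate, assuming each client is busy 5% of the time, independently (an assumption, not a measurement):

```python
# Back-of-envelope for the shared-link estimate above: 10 clients, each
# independently busy 5% of the time.
n, p = 10, 0.05

p_idle = (1 - p) ** n                                  # nobody sending
p_contend = 1 - p_idle - n * p * (1 - p) ** (n - 1)    # 2+ users at once

print(f"shared link idle:          {p_idle:.0%}")      # ~60%
print(f"two or more users at once: {p_contend:.1%}")   # ~8.6%
```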

Javier
  • +1. Good answer. In theory: yes, it could be a bottleneck. Reality: it probably isn't and probably won't become one. Before rushing to make changes (setting up link aggregation, etc.), you should monitor and measure the utilization of the link between the 2 switches. – joeqwerty Dec 30 '09 at 17:48
  • I take a little bit of issue with the phrase "it can be a bottleneck." It *is* a bottleneck. Whether or not it's creating a problem is an orthogonal concern. On any modern gigabit Ethernet switch the fabric exceeds 1 Gbps, so by definition cascading gigabit switches with crossover cables creates bottlenecks. – Evan Anderson Dec 30 '09 at 19:09
  • @Evan Anderson: yes, I see your point... but is it the worst bottleneck? And can it be called a bottleneck when it's still much wider than what you push through it? – Javier Dec 30 '09 at 19:56
  • @Evan: I see your point. Is it a bottleneck? Yes. Is it creating performance problems? That can only be determined through monitoring and measurement. – joeqwerty Dec 30 '09 at 20:13
4

If you use one of the 1 Gb/s ports to link the two switches then yes, all ten clients share that single 1 Gb/s link. After protocol overhead, your aggregate throughput will be around 0.8 Gb/s, i.e. roughly 80 Mb/s per client if all ten transfer at once.

If your switches support it, you can use a stacking module. This usually allows a much higher throughput rate, often close to the speed of the switch backplane.

If your switch supports it you can also use link aggregation.

There is, however, another issue here as well: if your server is connected on a 1 Gb port, it doesn't matter how you interconnect the switches, as your server will still only be able to send and receive data at 1 Gb/s.

Your best option would be to use a stacking module for your switches and put your server on a 10Gb link. This also assumes that your server will be able to handle that amount of data. Typical server RAID setups will only support sustained throughputs of around 700Mb/s over an extended period of time.

Francois Wolmarans
3

In the example you provided, where you have ten clients on switch A, a server on switch B, and all connections (client to switch, switch to switch, and server to switch) are 1 Gb, the bottlenecks are going to be wherever all traffic is funneled into one port. Unless your server has a connection faster than 1 Gb, it doesn't significantly matter what the switch-to-switch connection is, because the final connection from the switch to the server is still only 1 Gb.

The ideal configuration, in order of preference: one switch for all devices; if using multiple switches, use ports designed for switch-to-switch interconnects (where available) to get increased bandwidth; if interconnect ports aren't available, you can possibly bond multiple ports to increase the bandwidth between switches.

Jeff
2

If you are using managed switches (ones you can log into in some way) then perhaps you can combine multiple switch ports to get more bandwidth.

Many off-the-shelf gigabit switches have no restrictions between ports on the same switch. That is, if you have 10 switch ports, all of them can be in use at full speed without any problems.

If you use one of those ports to connect to another switch, then yes, communication between those two switches is slowed down. However, the computers which share a single switch won't slow down; only when the traffic crosses that single inter-switch cable will people begin to fight for bandwidth.

If you find that too limiting, you will have to use a managed switch on both ends and aggregate switch ports together to get 2x, 3x, 4x, whatever speed you need. Or buy a very high-end switch and use 10-gig between the switches. Chances are combining many 1-gig ports together will be cheaper.

Michael Graff
2

If, and only IF, both switches support a LAG/trunk (link aggregation) of multiple ports to create a single, wider logical connection, you can then bundle anywhere from 2 up to the maximum allowed number of ports.

Warning: you don't just connect the cables and you're set to go! You need to configure the ports on both sides first and only then connect them; otherwise you risk a near-certain broadcast storm (a switching loop) that can bring down both of your switches.
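
For the host side of the picture, here's a minimal sketch of bringing up an 802.3ad (LACP) bond on a Linux box with iproute2. The interface names are hypothetical, and the matching LAG on the switch side must be configured before the cables go in, for exactly the reason above; switch-side syntax varies by vendor, so check your documentation:

```python
#!/usr/bin/env python3
# Sketch: enslave two NICs into an 802.3ad (LACP) bond on Linux.
# Interface names (eth0, eth1) are hypothetical; requires root and
# the bonding kernel module. Commands are standard iproute2.
import subprocess

def sh(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

sh("modprobe bonding")
sh("ip link add bond0 type bond mode 802.3ad")   # LACP mode
for nic in ("eth0", "eth1"):
    sh(f"ip link set {nic} down")                # must be down to enslave
    sh(f"ip link set {nic} master bond0")
sh("ip link set bond0 up")
```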

thedp
1

This is a possible bottleneck. Some switches will allow you to aggregate bandwidth across multiple ports, e.g. 3x or 4x 1 Gbps. The switch OS will have a method for doing this, and it varies from switch to switch, as each vendor has their own way of doing it, sometimes under a different name (link aggregation, trunking, bonding, EtherChannel). Check the manuals for your make and model to see if this is supported.

Dave M
1

The answer is yes.

Possible workarounds include using multiple gigabit links between the switches or a faster link between the switches. Both options require support from the switches, and with aggregation of multiple links it can be problematic to get the load divided evenly between them.

af.
0

In other words, would each computer only be able to transfer at a speed of one gigabit divided by the 10 machines trying to use the "bridge" between switches?

Yes

What you have to ask yourself is how often that actually happens. In your particular network, is this a theoretical bottleneck that isn't causing any real problems, or a real bottleneck that is worth spending serious money on resolving?

Also, if all the computers are accessing the same server, then the connection to the server is going to be just as much of a bottleneck as the inter-switch connection.

If so, are there any workarounds so that every device can use its maximum speed from point to point?

There are solutions, but those solutions are going to cost you. Say goodbye to dirt-cheap unmanaged gigabit switches.

First, you can try to build a single switch that is effectively bigger. Many switch families have "stack" connectors that are faster than typical Ethernet interfaces, though they may still be a bottleneck in some cases. Going more upmarket, you have chassis switches which (for a price) can put a large number of ports on multiple linecards with a really fast interconnect down the back. Eventually, though, you reach a point where putting more ports on one switch just isn't a solution either, because you need too many ports or because you need the ports in different places and you don't want a mountain of cable.

Second, you can look at faster variants of Ethernet. 10 gigabit Ethernet is now widely available; 40 gigabit and 100 gigabit are also available, for a price.

Third, you can look at link aggregation. Link aggregation is a useful tool, but due to design limitations you are unlikely to see 100% utilisation of all ports in the aggregation group; see the sketch below.
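
To see why, here's a small, hedged simulation: hash ten made-up client MACs (all talking to one server) across a four-link group and look at how the flows land:

```python
# Sketch: flow distribution across a 4-link aggregation group when ten
# clients (random, hypothetical MACs) all talk to one server. Hash-based
# placement is a balls-in-bins process, so the spread is rarely even.
import random
import zlib
from collections import Counter

random.seed(1)                      # deterministic demo
SERVER = "02:00:00:00:00:01"

flows_per_link = Counter()
for _ in range(10):
    client = "02:" + ":".join(f"{random.randrange(256):02x}" for _ in range(5))
    flows_per_link[zlib.crc32(f"{client}>{SERVER}".encode()) % 4] += 1

print(dict(flows_per_link))         # e.g. some links get several flows, some none
```

And even with a perfectly even spread, a single large flow is pinned to one member link, so one client-to-server transfer still tops out at 1 Gb/s no matter how many links are in the group.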

If you need more than two switches, you can also start looking at non-tree topologies. Unfortunately, Ethernet wasn't really designed for this, so the solutions for supporting it are somewhat "bolt-on".

Peter Green