Will multiple switches slow transfer speed?

13

7

I want to create a link between a data server (it's more like a NAS) and around 300 computers.

Data transferred per day is around 2 GB per computer, and speed really matters.

If I use a single switch it would need 300 Ethernet cables, which may be too messy to maintain.

If I use a switch on every 50 computers would it slow down the connection speed?

Parth Parikh

Posted 2013-01-18T13:45:38.350

Reputation: 360

You have two answers below, but neither addresses the issues with having 300 clients plus network equipment on a single subnet. ARP and spanning tree updates are going to create a lot of overhead on this network. I would suggest segmenting these groups of 50 PCs into VLANs. I doubt you will get anywhere near gigabit speeds during real-world transfers with what you're planning... – Supercereal – 2013-01-18T19:18:12.713

Answers

16

If by 'transfer speed' you mean throughput: it should not matter much.

Every extra device will introduce some minor latency (after all, some processing is needed, even if it is only very minor). However, latency is not the same as throughput.

Compare it with a conversation via a satellite phone. There will be a 3-second lag before someone else can comment on what you said, but if one person just keeps talking, telling long (2 GB) stories, then the slowdown will be minimal.
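As a rough sanity check (assuming gigabit links and a generous 12-microsecond extra store-and-forward delay per switch hop; both figures are assumptions for illustration), the per-hop latency is a few millionths of the time it takes to move one of those 2 GB transfers:

    # Rough sketch: compare an assumed per-hop switch latency against the time
    # needed to move 2 GB over a gigabit link. All figures here are assumptions
    # for illustration, not measurements of any particular switch.

    FILE_SIZE_BITS = 2 * 8 * 10**9    # 2 GB (decimal) expressed in bits
    LINK_SPEED_BPS = 10**9            # assumed 1 Gbps links
    PER_HOP_LATENCY_S = 12e-6         # assumed extra latency per switch hop
    HOPS = 3                          # e.g. edge switch -> backplane -> edge switch

    transfer_time = FILE_SIZE_BITS / LINK_SPEED_BPS    # ~16 seconds
    extra_latency = HOPS * PER_HOP_LATENCY_S           # ~36 microseconds

    print(f"Transfer time for 2 GB:   {transfer_time:.1f} s")
    print(f"Extra latency ({HOPS} hops): {extra_latency * 1e6:.0f} us")
    print(f"Relative overhead:        {extra_latency / transfer_time:.8f}")

Even with generous assumptions, the added latency is lost in the noise next to the transfer time itself.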

Which means that I would test these setups:

     +-48 port switch ------ 40 computers
B    |
a    +-48 port switch ------ 40 computers
c    |
k    +-48 port switch ------ 40 computers
p    |
l    +-48 port switch ------ 40 computers
a    |
n    ...
e    |
     +-48 port switch ------ 40 computers

Many switches have a connection which allows you to turn several separate units into one giant switch. That makes management much easier. Make sure that the switches you buy have this feature.

Why 48-port switches?
It limits the number of devices (less space, fewer devices that can break down).

Why 40 computers per 48-port switch?
Future expandability (computers moving to different rooms and increasing local density, added devices such as printers, a free port for debugging, etc.).

Why not a single 300 port switch?
Good luck finding those...

[Edit] Apparently there are some. I looked up the model mentioned by David; it is about 25K US$... Use these kinds of switches if you absolutely need maximum performance.

If you already have switches without a backplane link, you could always do something like this, but that would mean traffic flows excessively toward whichever switch hosts your file server. That might overload that switch and will introduce much more latency than needed.

                 1 fileserver
40 computers     39 computers     ...  40 computers
   | | |               | | |              | | |
48 port switch   48 port switch   ...  48 port switch
    |        |     |         |             |       | 
    |        +-----+         +--        ---+       |   Disabled by 
    |                                              |   default
    +----------------------------------------------+

(The long roundabout cable is there in case a switch dies, which would cut off all the computers on it and beyond it from the switch with the fileserver. Switches with the spanning tree protocol can detect this and automatically enable the workaround link.)
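To put a rough number on how much traffic piles up next to the file server in that daisy-chained layout, here is a minimal sketch (the 2 GB per computer per day figure is from the question; the assumption that essentially all traffic goes to or from the file server is mine):

    # Minimal sketch of traffic concentration in the daisy-chained layout.
    # Assumption: every computer's ~2 GB/day goes to or from the file server,
    # so traffic from all the other switches has to cross the inter-switch
    # links nearest the switch that hosts the file server.

    TOTAL_COMPUTERS = 300
    ON_FILESERVER_SWITCH = 40         # rough figure; the diagram shows ~39
    GB_PER_COMPUTER_PER_DAY = 2

    remote = TOTAL_COMPUTERS - ON_FILESERVER_SWITCH
    gb_per_day_on_uplink = remote * GB_PER_COMPUTER_PER_DAY      # ~520 GB/day
    saturated_seconds = gb_per_day_on_uplink * 8 / 1.0           # gigabits / 1 Gbps

    print(f"Data crossing the links next to the file-server switch: "
          f"{gb_per_day_on_uplink} GB/day")
    print(f"That is about {saturated_seconds / 60:.0f} minutes of a fully "
          f"saturated 1 Gbps inter-switch link per day")

Averaged over a day that is survivable, but if many clients transfer at the same time those one or two inter-switch links become the choke point, which is exactly the problem the backplane and tiered layouts avoid.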

Lastly, there is always the classical tiered setup:

        Fileserver and other servers
                     |
                 CORE SWITCH
                /   |        \
               /    |         \
 48 port switch   switch  ...  48 port switch
      | | |       | | |                | | |
  40 computers    computers   ...  40 computers

This one has the advantage that you have one (very good) switch in the server room, and at least one link from that switch to each floor or each section.

Then you set up a local room with all the switches for that floor (if needed, with multiple switches tied together via a backplane link).

Hennes

Posted 2013-01-18T13:45:38.350

Reputation: 60 739

A Cisco 4510R+E can support up to 384 Gigabit ports. – David Schwartz – 2013-01-18T15:19:14.677

Note that the "backplane" in the first setup is roughly the same as the "core switch" in the last. One difference is the length of cables you have between them, the other is that the backplane can use faster ports since it doesn't need to drive long cables. – MSalters – 2013-01-18T15:51:57.340

Aye. I found the essential difference to be the 'local patch room per floor'. Might not be much different from a technical perspective, but depending on your building it might be a lot more practical. (esp. when the floors in my example are actually neighbouring buildings). – Hennes – 2013-01-18T15:54:33.907

Thank you for your prompt reply, this will be helpful to me. – Parth Parikh – 2013-01-18T17:14:49.517

Every extra device will introduce some minor latency – Ярослав Рахматуллин – 2013-04-10T20:24:18.657

6

Every extra step of switching is an extra delay. No matter how fast your core is, it's still processing. That said, at only 2 GB a day you won't notice it, and I'm sure that 300-port switches don't exist.

Now if you were using hubs, that would be a very different story.

Switches only send packets to the IP address tagged on the packet. Hubs bounce packets around every computer, and it's up to the computer to accept or reject.

If you're really concerned about speed, you should look at making your data store as efficient as possible. If it only has a single gigabit connection, you'll always be limited there. (300 gigabit connections to 1 gigabit source = trouble)
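A quick sketch of the arithmetic behind that last point (assuming gigabit NICs and the 2 GB per computer per day figure from the question):

    # Sketch: the data server's single NIC is the shared bottleneck.
    # Assumptions: 1 Gbps links everywhere, 2 GB per computer per day
    # (the figure from the question), decimal GB.

    CLIENTS = 300
    GB_PER_CLIENT_PER_DAY = 2
    SERVER_LINK_GBPS = 1

    total_gb_per_day = CLIENTS * GB_PER_CLIENT_PER_DAY             # 600 GB/day
    busy_seconds = total_gb_per_day * 8 / SERVER_LINK_GBPS         # gigabits / Gbps

    print(f"Aggregate traffic: {total_gb_per_day} GB/day")
    print(f"Server NIC fully busy for {busy_seconds / 3600:.1f} h/day")
    print(f"If all {CLIENTS} clients transfer at once, each gets roughly "
          f"{SERVER_LINK_GBPS * 1000 / CLIENTS:.1f} Mbps")

Spread evenly over a day the single link copes, but if the transfers cluster (everyone syncing at the same time), the server's own link sets the pace rather than the switches, which is what the edit below addresses.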

Edit: I should add a solution to the issue I identify here. What I have done is build a computer with two Intel NICs (Network Interface Cards) and enable the Teaming feature. This enables the two cards to work as one, essentially creating a 2-gigabit network interface.

LuckySpoon

Posted 2013-01-18T13:45:38.350

Reputation: 672

300-port switches definitely do exist, and a single switch will give you the best performance. It's just not cheap, and likely his requirements are very modest. – David Schwartz – 2013-01-18T15:16:54.680

Last I looked, network switching happened at layer 2, which is below IP. You're looking at Ethernet MAC addresses (or their equivalent), not IP addresses. – a CVn – 2013-01-18T17:36:52.733

This is true, IP addresses are just easier to explain. – LuckySpoon – 2013-01-18T22:36:50.193

3

If I use a switch on every 50 computers would it slow down the connection speed?

Your topology won't change the "connection speed", but the effective throughput would be affected.
Another consideration is the type of switch(es) you install.
An Ethernet switch can use either of two techniques for receiving and then transmitting the Ethernet frames:

  • store-and-forward (the entire frame is received & buffered before it is re-transmitted), or
  • cut-through (aka wire speed) (only the destination address has to be received & buffered before re-transmission is initiated).

For a full length Ethernet frame of 1542 bytes and 100Base-T, a store-and-forward switch would introduce a latency of about 123 microseconds, whereas a cut-through switch would introduce a latency of about 1.2 microseconds. For short frames (e.g. ARP packets and TCP Acks) the difference is of course much smaller.
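The per-switch figures above can be reproduced with a short calculation (the cut-through number assumes the switch needs roughly the preamble, start-of-frame delimiter and destination MAC, about 14 bytes, before it starts forwarding; exact behaviour varies by model):

    # Sketch of the per-switch latency figures quoted above.
    # Assumption: a cut-through switch starts forwarding once roughly the
    # preamble, SFD and destination MAC (~14 bytes) have arrived.

    FRAME_BYTES = 1542         # full-length Ethernet frame, as in the answer
    CUT_THROUGH_BYTES = 14     # assumed bytes buffered before forwarding begins
    LINK_BPS = 100e6           # 100Base-T

    store_and_forward_us = FRAME_BYTES * 8 / LINK_BPS * 1e6
    cut_through_us = CUT_THROUGH_BYTES * 8 / LINK_BPS * 1e6

    print(f"Store-and-forward latency per switch: {store_and_forward_us:.1f} us")  # ~123 us
    print(f"Cut-through latency per switch:       {cut_through_us:.1f} us")        # ~1.1 us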

As you add tiers of switches, you could be adding significant amounts of latency to the transmissions. Consider the case of one more layer than the ideal "flat" model (of just one (monster) switch):

                   |
                 Switch_A
                 /      \
                /        \
          Switch_B      Switch_C
            /               \ 
        Host_1            Host_200

For a full length Ethernet frame of 1542 bytes and 100Base-T, three store-and-forward switches would add latency of about 369 microseconds, whereas three cut-through switches would add latency of about 3.7 microseconds.
If Host_1 starts transmitting a full length Ethernet frame of 1542 bytes at 100Base-T with three store-and-forward switches in the path, then Host_200 receives the last byte about 492 microseconds later; that's an effective throughput of about 25 Mbps (compared to the actual wire speed of 100 Mbps).
With three cut-through switches in the path, then Host_200 receives the last byte about 127 microseconds later; that's an effective throughput of about 97 Mbps.
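Those end-to-end numbers follow from charging one full serialization delay per store-and-forward switch on top of the final link's own serialization time; a short sketch with the same frame size and link speed:

    # Sketch: effective throughput for a single full-length frame crossing
    # N switches. Store-and-forward re-serializes the whole frame at every
    # hop, so the frame effectively crosses (N + 1) links back to back.

    FRAME_BITS = 1542 * 8
    LINK_BPS = 100e6
    CUT_THROUGH_DELAY_S = 14 * 8 / LINK_BPS    # assumed ~1.1 us per cut-through hop

    def effective_mbps(switches, store_and_forward=True):
        wire_time = FRAME_BITS / LINK_BPS                      # ~123 us per link
        if store_and_forward:
            total = wire_time * (switches + 1)
        else:
            total = wire_time + switches * CUT_THROUGH_DELAY_S
        return FRAME_BITS / total / 1e6

    print(f"3 store-and-forward switches: {effective_mbps(3, True):.0f} Mbps")   # ~25
    print(f"3 cut-through switches:       {effective_mbps(3, False):.0f} Mbps")  # ~97

Note that this is the worst case for a single frame in isolation; back-to-back frames pipeline through the switches, so long bulk transfers recover most of the wire speed, while short exchanges feel the full per-hop delay.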

If you want the best throughput possible, then you need to use as few switches as possible (one monster switch is ideal) and use cut-through switches (to minimize the latency each switch introduces). Note that almost all low-cost switches are the slower (i.e. longer-latency) store-and-forward variety.

sawdust

Posted 2013-01-18T13:45:38.350

Reputation: 14 697