
I have some equipment that will be moved to a new datacenter soon.

At the current datacenter, the switches are mounted in the back of the racks, so their airflow is reversed compared to the rest of the equipment in the racks.

Since the new datacenter strictly follows the hot/cold aisle setup, I have been asked to move the switches to the front of the racks, which entails a lot more downtime and which I would like to avoid if possible.

The switches are standard Cisco Catalyst 2960(G).

Is it possible to reverse the airflow of the switches so that they can still be left at the back of the racks?
Do the fans and IOS support something like that, or would it be OK if I simply mounted the fans in reverse on the chassis?

Matt
Cha0s
  • Cisco switches usually have side-to-side airflow, with hot air exiting on the left to allow more room at the front for cable access. Using them in a hot aisle / cold aisle data centre usually requires a cabinet with ducts and possibly extra fans to direct the air to/from the aisles. Filler panels in the racks beside them can work as well. – Brian Jul 23 '15 at 23:36

2 Answers


Cisco 2960 switches pull in cool air from the sides and exhaust to the rear.
Depth-wise they are 1/3 to 1/2 of the rack depth (depending on the exact switch model and rack).

This leaves you very few options. If you mount them unmodified at the back of the rack, just about the entire switch sits in the hot zone. That is only OK if you have cold air flowing along the side of the rack so the switch can get sufficient cooling; unfortunately this is usually not the case in a strict hot/cold aisle setup.
If you mount them at the front, you will have to run most server cabling from the back to the front (assuming your servers have most of their wiring at the back), which makes for messy cabling.

Reversing the fans may be possible, but I have no idea whether it is electrically feasible or how the firmware on a 2960 would react. In most switches I have taken apart, the fans plug directly into a motherboard connector and are not reversible.
If it works, you could then mount the switches at the front of the rack with the ports facing backwards. This makes cabling awkward because you have to reach quite a long way into the rack to get at the RJ45s. It might be OK if you only need to (re-)patch them on very rare occasions. Be prepared to leave 1U above and below each switch unused, just to give yourself some working room.

Precisely because of these issues we nowadays do it completely differently in our bigger server rooms, and avoid the problem altogether:
- For each 42U rack we reserve the lower 30U for servers. (Not higher: it becomes too difficult to mount/unmount them.)
- The next 6U is for switches, ports at the front. The sides of each rack are covered with filler plates, except at the U's with switches, so there is some cold airflow to the side intakes of the switches.
- The top 6U is for patch panels, also with ports at the front. From the back of the patch panels we run 8 UTP cables (CAT7) to the back of each of the 30 server U's (alternating: 8 on the left side, 8 on the right side). That is 30 x 8 = 240 ports, which fits in 5x 48-port patch panels (see the quick calculation below). This cabling is a one-time fixed installation, with all cables made exactly to length and neatly placed in cable guides/trays in the rack.
- The top-most patch panel is reserved for backbone cabling to other racks (24x OM3 or OM4 fiber). We have another fiber patch panel (in some racks 2) mounted at the back in the top-most slot(s) for SAN cabling.
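As a quick sanity check of that port budget, here is a minimal sketch in plain Python. The numbers are simply the ones from the layout above (30 server U's, 8 cables per U, 48-port panels); adjust them for your own rack plan and panel size.

    import math

    # Per-rack cabling budget for the layout described above.
    server_units = 30          # lower 30U reserved for servers
    cables_per_unit = 8        # 8x CAT7 from the patch panels to the back of each U
    panel_ports = 48           # port count of one patch panel

    total_ports = server_units * cables_per_unit          # 30 * 8 = 240
    panels_needed = math.ceil(total_ports / panel_ports)  # 240 / 48 = 5

    print(f"{total_ports} ports -> {panels_needed} x {panel_ports}-port patch panels")
    # 240 ports -> 5 x 48-port patch panels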

We simply hook up all UTP ports (used or not) on the back of the server to the corresponding block of patch-panel ports. (In the rare case that a server needs more than 8 UTP connections, we take them from the U above it. Typically such servers are more than 1U anyway.)
All UTP patching is done front-side. Fiber-SAN stays at the back of the rack.

This way cable management becomes easy: you don't have to thread new cable through the rack every time you change something. The cabling (except for the short patches at the front) is static and can be made exactly to length. There is no excess length to stuff in a corner inside the rack itself, which also helps airflow.

It is so easy that you can talk anybody (like someone from the local FCM on-site who has access to the server room) through a re-wiring job on the phone if necessary:
"Find rack number 5. Big yellow number on the front door. Open that front door. About chest high and higher you'll see a bunch of cables in several colors. On the left and right there are numbers on the sides of the equipment. They go from 31 at the lowest piece of equipment that has cables in it, all the way up to 42 at the top. Find number 33 on the side and look for the cable in port 21 (it should be a blue cable). Pull it loose (press the little lip on the plug to unlock it) and plug it back in at height 35, port number 17. Thank you for your help, and don't forget to close the door of the rack on your way out."

The initial cost of setting up a rack is higher, but you recoup that very quickly in labor and downtime when you need to swap servers later on.
Of course that totally depends on how many changes you expect. In our case it is about 1 server replacement per 6 weeks in each rack, and we deal with about 300 racks in 35 server rooms at 21 locations all over Europe.
It really pays off in the long term if you don't need to physically travel to each site for small changes.
I get service techs from HP, Dell, etc., whom I simply direct over the phone as to where to place the new server. As soon as the cables are in and I can see the ILO or DRAC on the LAN, I can take it from there.
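For example, a quick reachability check like this is enough to tell you when the new ILO/DRAC answers on the network so you can take over remotely. It is just an illustration in plain Python; the management address and polling interval are hypothetical and would come from your own addressing plan.

    import socket
    import time

    # Hypothetical management address assigned to the new server's iLO/DRAC.
    MGMT_ADDRESS = "10.0.5.42"
    HTTPS_PORT = 443           # iLO/DRAC web interfaces listen on HTTPS

    def management_port_open(host, port, timeout=3):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Poll until the technician has cabled the server and the BMC answers.
    while not management_port_open(MGMT_ADDRESS, HTTPS_PORT):
        print(f"{MGMT_ADDRESS}:{HTTPS_PORT} not reachable yet, retrying...")
        time.sleep(30)

    print(f"Management controller at {MGMT_ADDRESS} is reachable; remote setup can start.")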

Tonny
  • A photo would be helpful. – ewwhite Aug 03 '15 at 16:06
  • Great post! I was designing the racks from scratch to accommodate the switches' position at the front (since apparently I cannot reverse the airflow of the current equipment and I don't want to do 'weird' stuff) and I came up with pretty much what you suggested :) So it gives me comfort to see that my design is already proven in production. The only thing I dislike is the messy cabling from the front of the rack to the back. But as you said, this cabling will be done once, so I hope there won't be any problems afterwards. Thanks! – Cha0s Aug 03 '15 at 17:17
  • @ewwhite Sorry: photos are strictly forbidden on our sites. If I have to take some for documentation purposes, a security guy comes with me to verify I only photograph what's needed, and every photo is digitally watermarked (by security) before I can take the SD card with me. Security is no joke here (military contracts, also banks and insurance companies). I'll see if I can dig up some samples; Rittal should have some. They demoed something similar at CeBIT a few years back. – Tonny Aug 03 '15 at 18:36
  • @Cha0s I absolutely hate messy cabling (my OCD talking, I guess), but in our racks only the loose patch cables at the front are somewhat messy. The internal cables (from the back of the patch panels to the back of the servers) are neatly bundled (1 bundle of 8 per U) and tied down, except for the last 40" (1 meter) from the rack post to the back of the server. (1 meter is long enough for a cable arm.) The ones that are not used are left hanging down (or pushed under the rack at the very bottom), tied with velcro straps. – Tonny Aug 03 '15 at 18:59
  • Yeap, that's what I am preparing too. Bundles of 8 cables per U will already be installed for all Us so when servers come and go the cabling will stay the same and will support pretty much any configuration needed with so many available cables per server! The only thing I know I will hate when the time comes, is the power cables for each server. Some server models (even of the same brand) have the PSUs on the left and some on the right :( – Cha0s Aug 03 '15 at 20:17
  • @Cha0s Don't get me started on power cabling :-) The worst thing is having 2 power feeds (left and right, UPS- and generator-fed) in the rack and having to cross the rack twice because of the cable arm. (Left feed to right-mounted cable arm, and at the back of the server from the cable arm across to the left PSU.) Most vendors these days are smart enough to put dual PSUs next to each other so you can attach the cable arm on the same side. But you still need to cross the rack once if you have separate left/right power feeds. – Tonny Aug 04 '15 at 09:46

With the strict hot/cold setup, it seems that even if the airflow could be reversed, leaving the switches at the back of the racks would still have a negative impact, since they would be drawing in air from the hot aisle of the datacenter.

Based on the diagrams of the switch that I can find online, it is not a full-depth switch, so when mounted at the back of the rack it would be nearly impossible for it to draw in cool air the way a server does. The hot aisle in a datacenter can be very warm, and I have had issues in the past with switches overheating when located too close to hot exhaust vents.

So even if it is possible to reverse the flow of the fans, it could still have a negative impact on performance.

Matt