12

I would like to open up a discussion on your experience with either using cable management arms or not.

It seems like a nice idea to ensure that you have enough cable slack to be able to pull a running server out of a rack without worrying about accidentally unplugging a cable, but how many times is this really done?

It seems like I'm still taking down a machine for maintenance if I need to get inside, so I'm not sure of the benefit.

It also seems to me that the cable management arms restrict the air flow coming out of the server and the rack as a whole.

I'd like some thoughts on what others are doing either with or without the cable management arms.

Richard West

18 Answers

23

Coming from a webhosting environment: we dealt with hundreds of servers, some of which were always moving based on contract changes.

I don't care for them and prefer velcro instead.

IMO, if you're going to pull a server from a rack to do something inside the case, it should be off. Hot-swappable drives are all accessible from the front.

It was one more thing I didn't need stuffed into the back of the rack.

It added to install time, and removal time.

It made it harder to replace a bad cable in a hurry.

It blocked access to the label on the cables near the jack.

It made it hard to move a server and its cables if, say, I wanted to move it higher up and shorten them.

It added to any heat problems we might have had.

MathewC
  • +1 for adding to the difficulty in removing a bad cable. – Matt Simmons Jun 15 '09 at 16:32
  • 1
    How often do you guys have bad cables? I've only dealt with server numbers in the mid double digits in various datacenters, so I don't have as much experience there, but I've never had to replace a bad cable on a running server. – Brian Knoblauch Jun 15 '09 at 17:14
  • 1
    +1, I do not like cable arms, I would not use them on production farms. BUT I would have to say that I've replaced many bad fans on live systems, thanks to the lowly cable arm. – Joseph Kern Jun 15 '09 at 18:08
  • +1 for Velcro, can't go wrong with it; reusable and a lot friendlier to the environment than cable ties – Beuy Jul 15 '10 at 00:46
15

The problem is one extra word in this sentence:

It seems like a nice idea to ensure that you have enough cable slack to be able to pull a running server out of a rack without worrying about accidentally unplugging a cable, but how many times is this really done?

Take the word "running" out of the sentence, and you'll see the light. Cable management arms make it easier to do ANY maintenance on a server, not just when it's running. Need to pop it open to add more memory, HBAs or network cards? Done. Less time during an outage.

If you're going after five nines, every second you can save during outages is crucial. Unplugging three or four network cables doesn't seem time-intensive, but watch what happens when you accidentally plug the wrong network cable into a port. Maintenance time skyrockets.

Brent Ozar
  • 6
    I can see this point of view, but if you're going after 5 9's, one server had better not matter, or you've already lost the fight – Matt Simmons Jun 15 '09 at 16:18
  • 1
    Yeah, but you know how it is - every bit helps. We had an outage where dual ACs, both completely independent and supposedly redundant, failed within a matter of minutes of each other. Sometimes you need every edge you can get. – Brent Ozar Jun 15 '09 at 20:56
  • @MattSimmons what if those include 2-10 LC fiber cables that need to have the 4 faces cleaned on each cable connection when you leave that cable banging into things and exposed to any floating dust while you are working in the rack? – Rowan Hawkins May 02 '20 at 19:10
10

I prefer to use the cable management arms, but I can see the other side of the argument. I have found that a neat and tidy rack is easier to deal with in a crisis situation, because it is easier to know what is where.

Reasons PRO

  • GREATLY reduced likelihood of accidental disconnection when working on other things in the rack. This is big for me ... it just sucks when you are debugging a problem with server A and accidentally knock out the power cord for server B.

  • Tidiness / Cleanliness - the rack just looks better with arms, and it is harder to keep the rack tidy without arms.

  • No need to disconnect (and reconnect) when doing maintenance. Even if the maintenance is done offline, not having to touch the connections makes it easier.

Reasons CON

  • Airflow / cooling - the arms can reduce airflow, particularly in dense racks.

  • Difficulty changing cables - I think this is overblown. When swapping a cable in a crisis you skip the cable management, get both ends plugged in, and make it pretty when things settle down.

  • More moving parts - can pinch fingers, catch on things, etc.

tomjedrz
  • on your con side, when do you have time after the event to make things pretty? – TechGuyTJ Jun 15 '09 at 19:25
  • TJ, that's what a maintenance window is for! :) – Greg Meehan Jun 15 '09 at 21:16
  • Generally immediately ... once the "any which way" cable has things running, I take the time to properly run another cable and swap them (which then takes only a second) at the next available moment. – tomjedrz Jun 16 '09 at 13:43
9

I do not. My argument is that they impede airflow, and that there are better 3rd party cable management solutions that accomplish the same thing.

I can count the times I've wanted to leave a server powered on while I was adding or removing hardware on 0 fingers, and that's their only^H^H^H^Hmain purpose.

Edit

I admit, they make it faster to pull hardware out of the rack, but in my opinion, it's not worth the hassle and heat.

Matt Simmons
  • 5
    It isn't just about pulling the servers while they are on. It is about being able to pull the server out without disconnecting everything. – tomjedrz Jun 16 '09 at 13:44
5

I inherited an environment that had no cable management arms, and we've slowly been managing to get them installed.

The reasons the previous admin used for not purchasing/using them were cited above: You would not be unracking a live server, they interfere with airflow, and you should be trying to reduce the amount of cable in the rack, not increase it to deal with the full span.

The problem shows up when you're maintaining a heterogeneous server room over a number of years instead of installing an entire rack of servers at once. We have three manufacturers of servers and usually 2-3 generations of each in production. We add or remove machines every three months.

  • We have equipment arriving and leaving constantly.
  • We don't have the opportunity to zip-tie things to lacing bars -- we don't have enough space to give up 1-2 U to them, and we don't want to "layer" things because we'll always end up digging the oldest cables out of the bottom layer.
  • We don't get to pick or focus on one vendor because I work for a university that receives grants (sometimes from hardware manufacturers) and relies on a public bidding process for large purchases.
  • We have three to four Cat5 spans to each server -- typically one for internal network, one for public network, one for KVM, and one for the ILO management port.
  • Some servers are also attached to fiber (and we run the fiber inside a small conduit to keep it from pinching), while others have an additional 2-4 cat5 cables running to teamed network interfaces.
  • Then there are two power supplies for each server.

I'd like to see anyone make a clean server cabinet with that many cables running to each server unless they use cable management arms of some sort.

In a "rush" environment, we're able to pull a server out without walking around to the back first. We know what cables are being plugged or unplugged because of their colors.

There are many reasons not to use cable management arms, but when you're working in a typical business environment and not an engineered environment, they're really worth it.

Karl Katzke
3

I am a fan of the cable management arms for two reasons:

  1. Looks cleaner
  2. Reduces stress on the cables, especially important if you have any fiber

Aaron Weiker
  • 1. Really? I don't think so. Take a look: http://royal.pingdom.com/2008/01/24/when-data-center-cabling-becomes-art/ – Joseph Kern Jun 15 '09 at 18:10
3

I, for one, absolutely, positively do NOT use cable arms, for these specific reasons:

  • Unless you cut your fiber or cat5 to order/fit, you're going to have slack left over. More often than not, I have seen that slack (no matter how well managed or tied off) get caught in those little nooks and crannies. When using dual fiber channels on HBAs, if you accidentally catch one in there, you won't know until a path fails over or you test the failover manually during a preventative-maintenance run.

  • Makes airflow suffer, yes. VERY bad for airflow. In a closely populated rack, this can trip thermistors on servers.

  • And finally, a personal reason: Dude, I get my fingertips jammed in there all the time no matter HOW careful I am. :)

Greg Meehan
  • 1
    agreed on the jamming fingers part :( I also get lotsa cuts from the cable mgmt arms, spilling blood all over the cables till i get some bandages :P – MrTimpi Jun 15 '09 at 20:37
2

Cable management arms have the huge advantage of avoiding random downtime caused by someone pulling a cable by accident. These accidents are really stupid, but they happen to the best of us. Cable management arms go a long way toward preventing them.

They allow much more rapid removal of servers from a rack, let you service servers very quickly, and keep your rack looking nice.

As for airflow, I don't think there is a real difference...

Antoine Benkemoun
2

Five years ago, before the prevalence of multi-core CPUs hit the datacenter, management arms were very nice for making sure a server could be pulled out of the rack for maintenance. I've had servers that needed to be pulled out but did not have sufficient slack in their cables to allow it. This required completely undressing the server in the back before I could pull it out far enough to open the case. The management arms ensure that the server installer (um, me) has allowed sufficient slack for the server to be pulled out, and they organize the cables in such a way that they don't get in each other's way.

Then came multi-core and our ESX cluster. The heat output from those newer servers (all 8 core) is such that the management arms do in fact get in the way of airflow. If we had space in the backs of our racks, I would be investing in fan-doors to help extract the hot air. As it is, I'm using the roof-mounted rack-fans to extract heat. So it is a good thing that our ESX servers are in the top 14U of the rack. If we had mounted cables there without the management arm and just lived with undressing the servers whenever we needed to pull one out, airflow would be significantly cleaner.

We've learned the hard way to include an empty rack-unit every third unit in order to allow air-flow.

sysadmin1138
2

Cable management arms are of the Devil. I inherited an environment that had them and had to pull them off for two reasons:

  • Air flow/Temp
  • Manageability

Cable management can be handled without the arms. One way I maintain my DC is by using lacer bars from cableorganizer.com. These bars allow me to build in slack for my cables and tie them down with Velcro.

I also believe in cutting my network cables to length. Yes, it takes time, but if you don't want any excess then you have to cut your cables to length. I am a cable Nazi, and cable arms only breed hidden, mangled messes. I keep extra pre-cut and terminated cables for the systems that are critical to my environment, so I don't have to spend time making a cable; I can just follow the path of the old cable and be done with it.

As for unplugging and plugging in cables, I label each and every one of my cables, on both ends. It is so worth it: at the end of the day you know that the cable you have in your hand is for ServerA because it says ServerA. The arms only get in the way when plugging in and unplugging cables because, in my experience, you can't unplug them once you have pulled the server out.

Finally, the temperatures of my racks decreased significantly when I pulled the arms off. The arms, especially on 1U servers, get so full of cables that the air has nowhere to go. This just pushes the air back into the rack and makes the servers work extra hard.

I think it is implied in the posts here that keeping our racks tidy and manageable is hard work. When the system we have employed is only slowing us down, it becomes a pain point and in turn wastes our time. Therefore you have to find a system that works for you and your team, and not deviate from it. To make any system work, you must work the system.

I hope that makes sense.

TechGuyTJ
  • Great link to the lacer bars. Thanks for that. Are you using a bar for each server? Do you have a photo you could share of this technique? – Richard West Jun 16 '09 at 16:09
1

I am Extremely PRO management arm use.

The complete system needs to be installed for it to work properly: there is the arm itself and a removable shelf that the arm rests on in its default position. Over time an unsupported arm will sag; the removable resting shelf prevents that. Dell arms get this right!

I work in a very large datacenter. Each system has a copper management connection, a copper KVM cable, anywhere from 4-10 10G fiber connections, and 2 power cords. We have 40 racks with various hardware.

With management arms, if you need more room to access a particular device, you can extend a different server to make it. Arms give you that flexibility. Need more room at the back? Half- or fully extend the two systems above the unit you need to install or access. No need to power them off; just extend them.

  • Management arms allow me to do any service on any system without touching any connector.

    Industry best practice:
    Clean all connecting faces on fiber connections when you disconnect them.
    This means cleaning 4 faces for every cable end you disconnect -- 2 on the cable
    and 2 on the device for duplex cables -- every time you disconnect it.
    
  • I don't have to worry about misrouted cables after a service.

  • I violate no bend radius rules for fiber and I have almost no issues with snags as I am extracting a system.

  • You can even use the mechanical arm to pull a partially extended system back into the rack without yanking on any of its connected cords. The arm attaches between inner and outer rail.

The only slight issue I have is if I need to replace a hot-swap power supply. In that case, you reposition the arm to the rear, remove the shelf, and then remove the supply. You still do not need to move any cables, even power cords. You can even hold the supply and push the system out of the rack, all without touching any of the other cables.

Rowan Hawkins
1

I am a fan of cable management arms. We get by with having about 4 lengths of cable in the rack. It makes management and maintenance easier. If we need to move a server in or out, everything just works. You undo the screws, slide it out, add your RAM or replace your fan or whatever, then close the top and slide it back in. This has virtually eliminated the accidental unplugging of power, network, or KVM cables from servers in racks where we are working.

At my university we have not seen temperature/airflow issues due to the management arms. We did have issues when we tried to fill a rack with 1U servers.

We do have fully vented doors on the back and front of the racks, and we have hot aisles and cool aisles in our data center. We only pump cool air into the aisles between the fronts of the servers, and the back sides of the servers exhaust toward the back sides of other servers one tile away. Those aisles are very warm, but that air just recirculates into the AC unit and gets cooled again. Adding solid plates over all the open gaps in the rack vastly improved airflow and temperature management as well.

Laura Thomas
1

We stopped using the arms right after we abandoned our KVM systems. It dawned on us that the bulk of cabling in the arms (and the risers) was running a console connection to each and every server. We also tore out all of the OEM power cords and replaced them with cords of various shorter lengths.

"Use Velcro." Roger that! I found out the hard way that the ends of snipped zipties are as sharp as razor wire.

  • And also, removing a ziptie requires a sharp blade, and using a sharp blade near your precious network (or fibre) cables can be a problem, especially when the zipties are really tight or in an awkward location. – Stefan Lasiewski Oct 11 '10 at 23:46
1

Dump them. Cable management arms are untidy, fiddly to install, and useless. How often do you need to open a running server? In my case, after many years of managing hundreds and hundreds of servers of all kinds, I do not recall ever needing to open a running machine in a rack. I learned very quickly to dump those arms as soon as I open the box.

John
0

On the racks I've set up, I didn't bother with the arms. There weren't enough situations where we needed to pull out a running server to warrant installing them. We just made sure the cables were neat and it was easy to unplug all the cables from a server if we needed to pull it out.

However, we took over management of another site that did have the arms set up, and I have to admit it was nice to have them when working on those servers. The rest of the cabling was a mess, so unplugging a server was a hassle; being able to slide out a server, open it up to check what was inside, and then slide it back in was easier than unplugging everything.

Ward - Reinstate Monica
0

We're about 90% blades these days but all of the pizza-boxes we have use arms because they all have at least three cat6 cables, two power leads and two fibres too - I'd hate to just have that lot hanging down.

If you're just dealing with a couple of copper-only cables, then I imagine they're more trouble than they're worth.

Chopper3
0

Some things ARE actually serviceable with the server running. For instance, fans can be replaced, some servers support hot-plug and on-the-fly reconfiguration of memory and PCIe cards, and other things like a remote access card (or a module on it, for instance a mirrored SD-card module) may be hot-pluggable, or the BBU of a RAID card may be at the end of its life and need to be replaced.

Many, many reasons not to have to disconnect and turn off the server before opening it.

-1

If a picture is worth a thousand words ... here's 9,000. And not a single cable management arm in sight.

Be warned: for every ziptie used, two will need to be cut.

Use Velcro.

Seth
Joseph Kern
  • 1
    Ok, now show me those 9,000 words in another 5 years. – Karl Katzke Aug 10 '09 at 16:59
  • 1
    I wonder how awesome it would be to manage an environment that homogenous. Throw a SAN, some fiber switches, and six or seven different models/generations of servers in a rack, then try to make it look like those 9000 words. – peelman Mar 31 '14 at 23:40
  • For anyone wondering, that blog entry contains a bunch of photos, which mostly seem to be sourced from Flickr. One of the first pictures does contain a note on Flickr: "... For people wondering why there is no slack in the wires and no cable management arms -- this is a purpose built scientific computing cluster. The smallest replaceable unit in this particular system is the 1U server itself. Nobody is going to waste time messing with a system while it sits in a rack. It's either installed and operating or it has been pulled from the rack entirely for replacement. ...". – Seth Feb 10 '22 at 10:36