9

We have a 24U rack in our lab, and over the next few weeks I'm going to completely redesign it as part of a scheduled major hardware upgrade.

I have a few questions about the layout of the server components:

  • UPSes. Right now all of our UPSes (non-rackmount) sit on a shelf near the top of the rack. People usually suggest putting UPSes at the bottom, but I'm worried about what happens if the server above them ever falls. I've never had a server fall, but never say never; a server dropping onto the rack frame is one thing, a server crushing other equipment is another.

  • Switches. People usually suggest putting them at the top of the rack, but I can see only one practical reason for that: when the rackmount KVM is opened, it blocks the switch's front panel, which is fine, since the switch is about the only piece of equipment that can safely be made inaccessible. On the other hand, all the cables come in from the bottom, so you have to run them up through the whole rack. If you change cables frequently (and in a lab/development setup you do), that can be a headache.

  • Patch panels: to use or not to use, and if so, how? The usual approach is to terminate all incoming cables on a patch panel and then patch from its RJ45 sockets to the equipment inside the rack. I agree that's useful for large installations, but we only have about 8 cables going in and out, so why not connect them directly to the switches?

  • Cable length. Should I use short cables that just reach the equipment, or longer ones that let me pull a server out without disconnecting anything? The first option will never turn into cable hell, but it means I can't slide a server out without powering it off. Remember, this is a development lab, but taking some of the equipment offline (e.g. the SAN) can take the whole lab down for up to an hour.

That's probably enough for one question. Thanks for any answers.

disserman

5 Answers

13

UPS location
Bottom. Really. All that lead acid outweighs a solid steel server any day. You want that at the bottom. Unlike servers, it'll get pulled out next to never. It'll provide stability at the bottom of the rack. Also, by not putting all that weight at the top, your rack is less likely to act like a metronome in the case of some heavy rocking.

Switches vs. Patch Panels
Depends on your config, but for a 24U I'd lean towards switches. Or if possible, external patch-panels that then feed cables into your rack.

Cable Management Arms, or, what length of cable do you need
For 1U servers, I've stopped using the arms. I'll undress the back of the server if I need to pull it out (label your cables!). For 2U servers I have no firm opinion, but for larger servers the arm makes sense.

sysadmin1138
  • +1 You really do want the UPSes at the bottom. I'm beginning to think all the cable management arms on my 1U servers aren't worth the hassle. – GregD Aug 13 '10 at 21:07
  • Putting a lot of weight at the top of the rack will make it easier for the rack to tip over and injure you... – Antoine Benkemoun Aug 13 '10 at 21:10
  • Thanks for the answer. Yes, "label everything" is probably the best advice I've ever gotten for working with hardware. I do this even for the cables in stock, always labeling the type and the length. Can you point me to an external patch panel? I've never seen one. – disserman Aug 13 '10 at 21:10
  • @disserman Bolt a short 2-post rack to the top of your 24U and mount your switches and patch panels there. – sysadmin1138 Aug 13 '10 at 21:30
7
  1. Heaviest things at the bottom. This is almost always your UPS. It also means that larger servers need to go at the bottom, and it makes installation MUCH easier: instead of having 3 guys hold a server up while a 4th screws it in, it usually only takes one person to help you get the first couple of screws in, and after that you're home free.
  2. Don't use cable management arms - they block airflow. I was a big fan of them until someone pointed that out. Use velcro or zipties to secure your power cables to your power supplies so that things don't jiggle out or get knocked out if someone is working on another system.
  3. For a single rack (or even two), mount a switch central to the other devices, facing the rear. Even if you have a 42U rack you won't fill up a single 48-port device unless you have iLO/DRAC/etc. devices in everything, and then you're going to spend more rack space on the patching equipment anyway.
  4. Longer cables with velcro will allow you enough slack to pull things out without disconnecting everything and still keep things on the neater side.
Devnull
  • Personally I've never found that cable management arms make anything less messy or easier to manage so I don't use them either. – joeqwerty Aug 13 '10 at 21:50
  • @joe the only benefit I've found is the ability to slide the server out while powered on. That said, it's still not risk-free, and the risks seem to outweigh the benefits by a good margin. There aren't that many instances where you need to slide out a server while powered on, either. – Chris Thorpe Sep 05 '10 at 11:44
3

Our data centre has a raised floor, with the void used for cold air; we also run the cabling in traywork under the floor. (I consider traywork to be the oft-neglected OSI Layer 0).

Although I'd put the UPS and heavy kit near the bottom, I usually leave the bottom 2-3U empty, both to make it easier to pass cables up and to allow cold air to rise in racks where kit doesn't blow front-to-back properly (e.g. side-to-side). Don't forget to use blanking plates, though, to keep that hot/cold aisle separation working properly if the rack has mesh doors.

In terms of cabling, if I'm expecting a lot of switch ports to be used, I'd consider running a panel-to-plug loom directly to the switch card, with each of the 24/48 ports numbered individually and the bundle labelled clearly, too. Then you can present that as a patch panel somewhere central and reduce the number of joins along the cable path.

I prefer cable management bars to be used 1:1 with patch panels; I'd usually place two 24-port panels together, with one cable management bar above and one below. Patch panels go at the top of the rack, as they're light.

All my labelling is as clear as I can make it, as I may not be on-site during an incident at 2am and want to reduce chances of problems after a random vendor engineer swaps hardware out.

Mitch Miller
1

Ziptie every cable where it shouldn't move, both at the server and at the rack. Use Velcro to bundle the extra cable length needed to run the server when it's pulled out.

The UPS goes at the bottom, and so do battery packs. Heavier servers toward the bottom, filler panels toward the top.

Given increasing server densities, I would add a switch to the rack. Depending on your requirements, VLANs or separate switches for the management connections might be appropriate. Color-code your network segments.

Cable management arms tend to get in the way. Go with Velcro.

Cables should be just long enough to run the server when it is pulled out; cables longer than that become a problem. If necessary, provide zones to take up the extra length: somewhere near the power distribution panel for power cords, and near the switch for data cables. Wider cabinets with wiring channels in the sides are also an option.

BillThor
0

I'm in agreement with the answers here so far. My only point of difference is in labeling cables: I don't and never have. I use cables color-coded for the particular connection/host type and use a spreadsheet to track what connects to what (a rough sketch of that idea is below). There's nothing I hate more than seeing a bunch of label tails hanging, cables mislabeled, running out of labels, labels coming off and littering the bottom of the rack, having to find the label maker, etc.

I know what's in my racks and I know how each device is connected. I don't need a label to tell me what's what. By the time I get to the label and read it I've already discovered what device/port it's connected to. Labels are useful for people who don't know what's in my racks... and those people will never have access to my racks... so no labels needed.
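
Purely as an illustration of the spreadsheet approach, here is a minimal sketch in Python; the file name, the column names, and the sample row are all invented for this example, and the real sheet can use whatever layout works for you.

    # Minimal sketch of the "spreadsheet instead of labels" idea.
    # The CSV file name and column layout below are hypothetical.
    import csv

    def load_connections(path="rack-cabling.csv"):
        """Read the tracking sheet (exported as CSV), e.g.:

        device,port,cable_color,far_end,far_port
        esx01,eth0,blue,sw1,gi0/1
        """
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def far_end(device, port, rows):
        # Answer "what is this cable plugged into?" without reading a label.
        for row in rows:
            if row["device"] == device and row["port"] == port:
                return row["far_end"], row["far_port"]
        return None

    # Example: where does esx01's eth0 terminate?
    # print(far_end("esx01", "eth0", load_connections()))

A reverse lookup (given a switch port, find which device is on the other end) is just as simple, and that's usually the question when something has to be unplugged.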

EDIT

I should state that I'm a one-man operation (1 sysadmin, 50 servers). If I worked at a big shop with lots of equipment and personnel I might have a different opinion on labeling cables. I do keep my servers, switches, etc. labeled for the purpose of remote-hands reboots and such.

joeqwerty