Cable management arms

This afternoon I racked three new 1U servers. These will be an extension of our current ESX cluster, and will facilitate our upgrade to vSphere.

They didn't come with cable management arms. The whole arm/no-arm debate has been raging in datacenter circles for years and years. We use them. Others don't. Vendors have to support both.

It could be that the arms are something of a minority thing, since getting them (at least from HP) requires a few extra steps. We'd like to have them for these servers since there are 9 cables attached to each one, and that's a lot of cables to undress/redress whenever you need to pull a server out for maintenance. Happily, we're no longer snaking old skool KVM cables into these racks (YAY!), which is one less monster cable to worry about.

We need to determine if it is worth the effort (and budget) to get the arms for these servers. We'll see how this turns out.

What's your opinion on these arms? How do they work (if at all) in your environment?


I tried using the cable arms that came with our 2U Dell servers. I never liked them because they would tend to sag down into the arm below them. Now we have KVM over Cat5, so I can get away with Velcro strips.

The HP 1U servers tend to not come with management arms anymore; the 2U and larger servers do. We strap the cables for the 1U servers to the chassis using Velcro. The one question we came up with was "How often do we actually pull these servers out?" The answer was VERY rarely. We're more likely to pull a server out only during a hardware failure requiring access to the inside of the box (rare) or when moving the server to another rack (even more rare).

I've got to agree with Dave. I can count the number of times that I have wanted to pull out a fully powered machine, open it up, and replace a part on zero hands.

If you've got a chassis like some of the deep storage arrays that are essentially pull-out drawers, then yes, you need cable arms if the chassis is designed for them, because you aren't going to power off an array of disks just to replace one that failed.

On the other hand, I get the feeling that cable management arms hearken back to a time when the failure of a single machine was unacceptable (you know, back when we used to brag about how many years of uptime we had), whereas now, uptime is (ideally) measured against the availability of the service, not any one server in a cluster or array.

Personally, I've never used them after the first time, because they built up too much heat (even when perforated) and they get in my way whenever I'm replacing cables or doing ad-hoc work. 3am in a beeping server room is no time to fight cable management arms.

No cable management arms here, but we run a lot of SPARC equipment. Most components there can be replaced from the front or back anyway, with a few exceptions.

Heat would definitely be an issue if I had the arms on my x86 servers. The V40z and V20z servers in the Sun racks fill in a lot of the tail end.

If I had more than 100 physical servers, I might care more.
If I had more than 100 physical x86 servers, they would be blades.

Of course I hate the blade solutions just as much...