Trends in racks

As alluded to in my last post, we're doing a bit of cleaning, and that cleaning has brought me into contact with some 10-year-old equipment. As always happens when cleaning, you run into things that make you go, "Oh yeah! That stuff!" When cleaning a filing cabinet you end up reading a bunch of papers you haven't laid eyes on in ages. When doing racks, you (or at least I) ponder how things have changed.

As I said in the last post, our racks don't have enough space between the back rail and the door. This was a common configuration in the 1998-2000 timeframe, but the industry has moved on, and the need for a couple more inches of back-of-rack space has been built into all modern racks. Another two inches would do our racks a world of good; four would be heaven.

As for power handling, we bought a pallet of in-rack power strips similar to these from APC. Only ours have 6 outlets, not 10 (one source of irritation). What I'd like to use is a certain kind of 0U unit that mounts in the rear of the rack, similar to these. Those 6-outlet strips were on the way out back in the day, but we bought a decade's supply, so we're still using them.
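To put a number on that irritation, here's a quick sketch of the outlet math. The server and PSU counts are my assumptions for illustration, not an inventory of our racks:

```python
import math

# If every power supply in the rack gets its own outlet, the strip
# count climbs fast with small strips. Counts below are assumed.

def strips_needed(servers, psus_per_server, outlets_per_strip):
    """Strips required to give every PSU in the rack its own outlet."""
    return math.ceil(servers * psus_per_server / outlets_per_strip)

# The old 7U, 3-PSU boxes: 4 per rack, 12 PSUs total.
print(strips_needed(4, 3, outlets_per_strip=6))    # 2
print(strips_needed(4, 3, outlets_per_strip=10))   # 2

# Modern densities (say 20 dual-PSU servers): the gap opens up.
print(strips_needed(20, 2, outlets_per_strip=6))   # 7
print(strips_needed(20, 2, outlets_per_strip=10))  # 4
```

At old densities the small strips cost us nothing; at modern densities they nearly double the number of strips a rack needs.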

These racks were designed with top-venting in mind. Back when they were purchased, you could still buy top-of-rack fan trays and 5% perforated back doors in order to implement a bottom-to-top cooling strategy. That was fine when we could only fit four ML530s into a rack. It is very not fine with modern servers that are giving off 300-500 watts of heat at any given moment and can be crammed in at 5x the density. We've already done a lot to fix this problem, including getting some perforated front and back doors, and consolidating our servers down to a lot fewer racks so we didn't have to buy as many doors. It's also very likely we'll be getting some HVAC reengineering attention, which should help our overall airflow problem.
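For the curious, here's the back-of-envelope heat math. The per-server wattage and the 20-servers-per-rack count are my guesses from the figures above, not measurements:

```python
# Back-of-envelope heat load for one rack. Wattages and server
# counts are assumptions pulled from the figures above.

def rack_heat_watts(servers, watts_each):
    """Total heat load for one rack, in watts."""
    return servers * watts_each

def cooling_cfm(watts, delta_t_f=20.0):
    """Airflow needed to carry the heat away, via the standard
    sensible-heat rule of thumb: CFM ~= 3.16 * W / dT(degF).
    Assumes a 20 degF temperature rise across the servers."""
    return 3.16 * watts / delta_t_f

old = rack_heat_watts(4, 400)    # four ML530s at a guessed ~400 W each
new = rack_heat_watts(20, 400)   # 5x density, midpoint of 300-500 W

print(f"old rack: {old} W, ~{cooling_cfm(old):.0f} CFM")
print(f"new rack: {new} W, ~{cooling_cfm(new):.0f} CFM")
# old rack: 1600 W, ~253 CFM
# new rack: 8000 W, ~1264 CFM
```

A fivefold jump in heat per rack is exactly the sort of thing a 5% perforated door and a top-of-rack fan tray were never designed for.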

These racks for the most part have a single 4-port KVM switch and a CRT/keyboard/mouse tray in them. We have three racks with larger KVMs that were added after the initial build-out. Back in the day, when the standard server was a 9U HP ML530, we could fit only four servers to a rack, and flat-panel monitors still cost four figures, this was just fine. However, those CRTs consume a LOT of rack space. I'd be more rabid about that, but our cooling and space issues mean those kinds of densities aren't required; we just need to put a bit of clear plexiglass in to block back-to-front recirculation. We've just replaced a bunch of those CRTs with LCDs that had been rendered redundant in desktop-land, which has improved things a titch.

Ripping out 9 servers, all of which were old enough to be from the era when servers were 7U or larger and had 3 power supplies, has given me a biiiiig pile o' power cables, every single one of which is 10 feet long. A 10' cable is enough to run through a cable management arm on a server in the very bottom of the rack and still reach a power strip mounted at the very top. Since we mount our power strips towards the middle of our racks, we don't need 10' cables. Heck, for the most part 6' cables are perfectly fine, with a few spots requiring 7' cables. This is something the server manufacturers have figured out, since they're all now shipping with 6-footers instead of the 10-footers these ML530s came with.
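Here's the rough geometry behind that, assuming a 42U rack at 1.75 inches per U; the slack allowances are guesses for illustration:

```python
# Rough model of the longest power-cable run in a 42U rack
# (1.75 inches per U). Slack allowances below are assumptions.

INCHES_PER_U = 1.75

def cable_needed(server_u, strip_u, arm_slack=24, routing_slack=6):
    """Inches of cable from a server at server_u to a strip at strip_u.

    arm_slack covers the loop through a cable management arm;
    routing_slack covers dressing the cable along the rail.
    """
    vertical = abs(server_u - strip_u) * INCHES_PER_U
    return vertical + arm_slack + routing_slack

worst = cable_needed(server_u=1, strip_u=42)  # strip at the very top
mid = cable_needed(server_u=1, strip_u=21)    # strip mid-rack

print(f"top-mounted strip: {worst:.0f} in (~{worst/12:.1f} ft)")
print(f"mid-mounted strip: {mid:.0f} in (~{mid/12:.1f} ft)")
# top-mounted strip: 102 in (~8.5 ft)  -- why these shipped with 10-footers
# mid-mounted strip: 65 in (~5.4 ft)   -- why 6' usually does it
```

Mount the strip mid-rack and the worst-case run drops by a third, which is exactly why the 6-footers work for us.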

One structural problem that isn't easy to fix without rebuilding the room is the location of our HVAC units. They're at the edges of the room, which was the standard for many years. The modern high-heat datacenter has caused the industry to figure out that placing these units in the middle of the room is more effective; better still is having a physically separate hot-air plenum. If we do manage to upgrade our UPS, which would allow us to add more heat-producers to the room, we'll have to pay much more attention to hot-spots than we do now. Moving HVAC units is not even close to cheap, and is highly disruptive.

Another thing I'd like to start doing is moving away from ye olde standard NEMA 5-15 style outlets on our power strips and onto IEC style outlets. I got to use these at my last job when we finished building a new datacenter, and they're really nice to work with. They don't project as much as 5-15s do, they're a lot easier to lock into place, and you can fit a lot more outlets per linear foot. This isn't going to happen any time soon, since it'll require buying new outlet strips, actually purchasing power cables, and re-running the power for existing servers.
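For a feel for the density difference, here's a rough comparison; the outlet spacings below are my assumptions for illustration, not vendor specs:

```python
# Outlet density, NEMA 5-15R vs IEC C13. The center-to-center
# pitches below are rough assumptions, not vendor specs.

PITCH_5_15R = 2.5  # inches, assumed
PITCH_C13 = 1.4    # inches, assumed

def outlets_per_foot(pitch_inches):
    return 12.0 / pitch_inches

print(f"5-15R: ~{outlets_per_foot(PITCH_5_15R):.1f} outlets per foot")
print(f"C13:   ~{outlets_per_foot(PITCH_C13):.1f} outlets per foot")
# 5-15R: ~4.8 outlets per foot
# C13:   ~8.6 outlets per foot
```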

2 Comments

Look at vertical PDUs instead... we're running into these same issues right now, and even with some nice new APC racks we got a year ago with vertical ones on each side, we're now having issues routing power with the racks being so dense.

So what we're after now is either two vertical PDUs or one high-density one with the plugs alternating circuits. That way we can use 5 ft power cables for servers.

So good to digest such an entertaining article that does not resort to base posturing to get its point across. Thanks for an enjoyable read.