November 2011 Archives


I've been spending a lot of time at our datacenter recently. Unlike at WWU, we colo at one of the large providers so I'm getting to interact with a datacenter vastly larger than the ones I've played with in the past. This is cool in many ways (this is a multi megawatt facility!) but there are some downsides.


I've known for years that datacenters can get very loud. When WWU picked up our first HP blade rack, the whine it produced was audible in the hallway outside the room. And that was with sound-proofing, mind. It was about then that I brought my shooting muffs to work for when I'd be in there for any length of time.

This facility? Worse. Two rows behind our racks are five racks full of dual power-supply servers with only one power cable each, which means five racks of servers doing their alarm beep continually for months (possibly years) on end. This is in addition to the usual hum of air-handlers and cooling fans in every rack.

It's loud in here. Loud enough that two people talking need to raise their voices, which puts it above 70dB. That's close to the level at which OSHA requires hearing protection. And for good reason.

I'm pretty sure my tinnitus has gotten a bit worse since I've been working here.

I haven't always been able to use my muffs when working, since talking to other people is difficult with them on. The facility does offer softies for hearing protection, but they're only so useful. A couple of my recent 8-hour stints have been with help, so there was much shouting back and forth as we worked. There will be more, longer visits in the near future, so I need to plan for that as well.

Hearing loss from long-term exposure to loud white noise, and blood loss from sharp bits of equipment. Two hazards of what we do.

Unexpected parallels

A friend of mine is going through some frustrating medical crap. And while reading her latest post about her experiences, she expressed a sentiment similar to this (wording changed to foil googling):

It drives me crazy. I get that these people have been there, and done that. But when you come there with something that isn't common, sometimes it just gets ignored.

Um... guilty.

As a technical support professional, albeit one who also wears many other hats these days, I'm guilty of just that. You may be too. Computers glitch; we all know that. We are also busy people who hate chasing after something that's just a transient glitch rather than a symptom of a deeper problem. So we wait for the glitch to turn into a pattern. Meanwhile, the person who reported it has seen that we won't help them, and doesn't tell us when things do start forming patterns.

Treating every problem like it's the reporter's most important problem in the world is a goal we strive for, but fail at far too often. Demands on our time are indeed heavy, and that does require some triage. Not all problems get our undivided attention.

I imagine doctors have similar pressures, though more intense since it's people's health on the line. Perhaps less time pressure and more insurance pressures, but still pressure.

So that one guy? The one who has the VPN connection that resets every 63 minutes regardless of where he is? And you've pushed it off because you don't want to dig around his personal computer? Just a reminder that to him, this is a critical problem. You should get on that.

When civility fails

This is a ServerFault moderation / meta post, so skip if you're not interested.

In which I make broad, over-reaching statements guaranteed to piss someone off.

AMD has released Interlagos, the server version of the Bulldozer CPU line it released over a month ago.

Bulldozer/Interlagos is AMD's attempt to grab more of the market from Intel. Currently, it's competing in the value sector, not on performance. The days when AMD CPUs were the virtualization kings have been gone for a couple of years now. AMD would like that crown back, thank you, and they're driving hard to get it.

That said, comparing performance between equivalently clocked AMD and Intel CPUs is hard. They're optimized for different tasks, which means that the smart Systems Engineer looking for the next CPU to base their environment on should pay attention. Workload matters! Those AMD CPUs may be damned cheap compared to Intel, but if you're doing the wrong things with them you'd be better off buying previous-gen Intel chips.

The most controversial thing AMD has done is to make two cores share a Floating Point Unit. They've also done quite a bit of optimization in their Arithmetic Logic Units, where integer math is handled. The reasoning is that most server usage these days consists of integer-heavy, highly parallelizable workloads; most database and simple web-serving workloads are entirely integer and parallel-friendly, and that's a large part of the webapp stack right there. The likes of Google Plus, StackExchange, and Reddit do far more integer work than floating-point, so something like Interlagos should be a good fit.

And the early benchmarks show that AMD does indeed have an edge on integer-heavy workloads over equivalent generation Intel parts. Intel still has an edge on compute-performance-per-watt, but AMD holds the edge on compute-performance-per-GHz. Pick which is more important to you.
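To make the integer-versus-floating-point distinction concrete, here's a toy micro-benchmark sketch. This is not a real CPU benchmark (Python interpreter overhead swamps everything), and the loop bodies and counts are invented for illustration; it just shows the two kinds of arithmetic the paragraphs above are talking about.

```python
import timeit

# Integer-heavy work: the kind of thing database and web stacks do constantly.
def integer_work(n=100_000):
    total = 0
    for i in range(n):
        total += (i * 31) % 97  # hashing-style integer math, no FPU involved
    return total

# Floating-point-heavy work: the kind render farms and HPC codes lean on.
def float_work(n=100_000):
    total = 0.0
    for i in range(n):
        total += (i * 0.5) ** 0.5  # square roots exercise the FPU
    return total

if __name__ == "__main__":
    int_t = timeit.timeit(integer_work, number=10)
    flt_t = timeit.timeit(float_work, number=10)
    print(f"integer: {int_t:.3f}s  float: {flt_t:.3f}s")
```

If your real workload looks like the first function, a Bulldozer module's two integer cores earn their keep; if it looks like the second, both of those cores are contending for the one shared FPU.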

Specialist workloads like render farms are edge cases, albeit big consumers, so engineering to handle those workloads is not worth the time. By staking out the middle of the market, AMD can drive innovation in the marketplace by forcing Intel to get creative in the middle. It's good for everyone.

Yes, but what about me, you cry.

I called this

From Slashdot: Schools Buy .xxx Domains In Trademark Panic.

I called this several years ago; March 2007, to be exact. It's less trademark defense and more an online-branding thing. These schools want their .edu domain to be the only brand with that name. Sharing it with the hot coed action on a .xxx is anathema to most US-based higher-ed institutions. They'd rather not, thank you, and $200/year is not a lot to pay to keep that from coming about.

An important observation

This tweet came out of the Grace Hopper 2011 conference going on right now in Portland.


That is a very interesting point, and I'm not sure how to fix it. If CompSci is being used here as shorthand for software engineering, it's a valid one: Software Engineering is very much a creative field, even an intuitive one at times, but that's not the perception it gets.

I don't know how to fix that.


As of today, I've visited ServerFault every day for a year! But then, you'd expect that of a Moderator.

There is no spiffy badge for this kind of 'attendance'.

On the surface, I say no. We spend more months with DST than without, so why not go all the way?

Well, we could. The standard timezones the US uses could all move over one so we would be on what we now call DST all year. It would work!

However, it gets really dark in the depths of winter, and having dawn and dusk offset so that large parts of the commute aren't in complete blackness has a lot going for it. In Bellingham, Washington, dawn and dusk on 12/21 are at about 7:45am and 4:30pm. With a permanent +1 timezone offset, the sun wouldn't rise up there until 8:45am. Not only is the drive to work almost entirely in the dark, but little Jacob and Emily out there on the street corner waiting for the school bus will be doing so in cold darkness.
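The sunrise arithmetic above is just clock-shifting, which a throwaway sketch makes explicit. The 7:45am Bellingham figure is from the post; the rest is illustrative.

```python
from datetime import datetime, timedelta

# Local standard-time sunrise in Bellingham, WA around 12/21 (per the post).
standard_sunrise = datetime(2011, 12, 21, 7, 45)

# A permanent +1 offset ("year-round DST") doesn't move the sun; the same
# solar event just happens an hour later on the clock.
permanent_dst_sunrise = standard_sunrise + timedelta(hours=1)

print(permanent_dst_sunrise.strftime("%I:%M%p"))  # 08:45AM
```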

We'd need a "Winter Time" offset to bring more daylight in the AM hours.

Turns out we already have that with the current system; we just call the larger part of the year the DST period rather than the smaller. I'd like to see the two reversed, but I understand why forcing a change just to make the paperwork look better isn't done. It's a lot of work, as we discovered when the US changed the DST rules a few years ago.

The shrinking desk

At Job #1 I had a cubicle. This was during the dot-com rise and fall, so much cubicle humor was had by all. There were Dilbert cartoons on the walls, because that's what you did back then. My boss' hair was not especially pointy, but our HR director had Catbert cartoons on his door so... well. Let's stop there.

I had a cubicle. It had tables on three sides, shelving units, and even a pair of rolling files. I used it all. I had two keyboard trays installed because at the time I did have two PCs in there (VMware Workstation was around back then, but memory limits made it less useful than it is now). I had the same space as anyone else without an actual office. The Ergonomics people came by every so often to chastise me about posture and nod approvingly at my keyboard setup.

At Job #2 I had a 1970s-vintage metal desk and a table, with some wall shelves over the table. The filing was restricted to what was built into the desk. I'd give it about 60% of the desk-level horizontal work surface I had at Job #1. I had a second computer, but it didn't get its own keyboard tray (that's what the KVM switch was for). Nor did I have a keyboard tray myself; I had an old-school desk, and keyboard trays don't work so well on those.

By the time I finished at #2 I was down to a single computer (lots of RAM and a quad-core processor, so VMware Workstation was how I was grooving that problem). The ergonomics had thrown both of my shoulders for a loop, and I had to move my workstation from the desk to the table, since the keyboarding surface there was lower and less aggravating. The one time an Ergonomics person came by, he frowned at my setup, recommended a trackball mouse, and asked if I could convince my boss to find actual cubicle parts.

This was during a major budget crunch so workstation upgrades of any kind were on hold. So, no luck there. Also, I expanded to fill all of my horizontal space.

Here at Job #3 I have very architectural, edgy-looking oak doors on metal pipes, for that industrial look that goes so well with our brick walls. I have two rolling files and no shelving whatsoever. My total horizontal space is 50% of Job #1's. And I have, er, by far the messiest desk of those of us here.

When I visited the StackOverflow offices two weeks ago for a moderator thingy, their sysadmins had even less space than I have. Probably... 30% of what I have right now. Lower Manhattan real-estate is expensive after all, but still. I don't know how George and Peter deal with that, maybe they have a bench-space somewhere they can expand into for dissections.

On my desk right now:

  • Three iPad boxes that I'm getting set up for our Sales people.
  • A probably dead KVM switch I don't have another home for.
  • A second keyboard for my computer for the use of people who find my ergonomic keyboard with the letters worn off too hard to type on.
  • Boxes for three different ExpressCard adapters.
  • Three notepads for meetings and suchlike.
  • Another laptop for those few Windows things that either don't VM well or need an isolated environment. Also for troubleshooting problems with the same model of laptop for other people.
  • Keyboard, monitor, mouse, and docking-station for my primary work laptop.
If I had shelves, a lot of those things would be up there rather than on my desk.

The Devs around here are pretty good about clean desks. Most have a few books back in a corner; all have at least one notepad for meeting notes. Only one has a significant amount of random crap on their desk, and I beat them out by quite a bit.

This is the point where I thank my lucky stars I don't work for a place with an actual clean desk policy.

How messy is your desk?