June 2008 Archives

It has been mentioned in many places, and I've done some of the mentioning, that since openSUSE is the foundation for SLED, it makes sense for Novell to distribute a Novell Client for Linux (NCL) for openSUSE. It turns out they're working on just that. And here is the Novell beta page. I'm soooo going to try this out, since I'm running openSUSE 10.3 on my work desktop and won't be moving to openSUSE 11 until I can run the client on it (oh, wait, I can).

It should also be mentioned that Ubuntu is a very frequently requested target for another NCL, but I have reason to believe that'll never happen. First, any Novell Client involves closed-source, third-party licensed code, which makes it hard to port to Linux in the first place (a relic of being based on code from the days when open source was just an ethical standpoint rather than a tangible market force). Second, Novell has proven to be rather light on developer resources in certain areas, and integration with non-SUSE Linux distros is minimal.
According to this documentation, storing NSS/NetWare metadata in xattrs is turned off by default. You turn it on for OES2 servers with the "nss /ListXattrNWMetadata" command. This allows Linux-level utilities (e.g. cp, tar) to access and copy the NSS metadata, and it also allows backup software that isn't SMS-enabled for OES2 to back up the NSS information.

This is handy, as HP DataProtector doesn't support NSS backup on Linux. I need to remember this.

Firefox 3 and IT laziness

I'm just now loading FF3. Like the IE8 team, Mozilla has gotten a lot more paranoid about bad SSL certs. They've gone beyond just coloring the toolbar orange, and are now fully blocking sites with bad certs.

This is a bad thing for us IT wonks. Every appliance and web-app comes with SSL functionality these days, and all too many of them use a self-signed cert. Unfortunately, FF3 blocks access to these sites by default, and you have to add an exception (a multi-click, "are you SURE you want to do this?" procedure) for each one. All but about 5 of our HP servers have iLO cards whose certificates are self-signed, so that's A LOT of sites that'll need to get added.

I'm sure there is a, "Don't be paranoid about SSL Certs," setting somewhere in about:config, but I haven't looked.

Like IE8, this will push the IT administration industry to be less lazy about SSL compliance. We need that, but it'll be 5 years before we really get there, as that's how long it'll take to phase out old "self-signed is good enough for internal use" software and widgets.
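When triaging which internal sites FF3 will balk at, it helps to remember what makes a cert self-signed in the first place: the issuer and the subject are the same entity. A quick sketch with openssl, generating a throwaway cert of the kind an iLO card ships with (the hostname is made up):

```shell
# Generate a throwaway self-signed cert, the kind appliances ship with
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=ilo.example.internal" \
  -keyout key.pem -out cert.pem

# Self-signed means issuer == subject; no browser-trusted CA in sight
openssl x509 -in cert.pem -noout -subject -issuer
```

Pointing `openssl s_client -connect host:443` at a live appliance and eyeballing the issuer line works the same way.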

A good article on trustees

Over on the Novell Cool Solutions site, Marcel Cox just posted an article about how Trustees are handled on the Novell filesystems (TFS and NSS). If you want to know the fundamentals of how ACLs are done on NSS volumes and how they relate to eDirectory, this is a good start.

Email Hygiene

A blog over on TechRepublic talks a bit about one way to reduce spam: in short, a global whitelist of actual people, managed by some trustable central authority. This attacks the "untrusted sender" vulnerability in SMTP. It goes a bit farther than SPF or SenderID in that it vouches for an actual person, not just a domain.

Dooooooomed to failure. Email is global, and there simply isn't a central trustable authority of any kind. The blog post mentions the FCC, which might be good for US-based email, but certainly not good for trusting email out of China or Russia.

It wouldn't stop much in the way of spam. Such a central repository is its own version of a spammer's dream mailing list, and also represents a treasure-trove of email From: lines likely to be trusted. It would only work when used in conjunction with something like SPF or SenderID to ensure that the person who is "joe.bob@mywork.biz" only sends mail from the mywork.biz mail-servers. It also wouldn't stop "gray-mail" mail-blasts from vendors, as the Sales department folk would just put their own mail address on the From: line of their mass mailings in order to get them past the "Real person" filters.
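For the curious, the SPF half of that combination is just a DNS TXT record published by the domain owner. A hypothetical record for the mywork.biz example above (the IP is a documentation address, not a real one):

```
; Hypothetical SPF record: only mywork.biz's own MX hosts and one listed
; server may put mywork.biz in MAIL FROM; everything else hard-fails (-all)
mywork.biz.  IN  TXT  "v=spf1 mx ip4:203.0.113.25 -all"
```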

Email hygiene is a hard problem. SMTP is the poster child for a protocol designed in a far more trusting time. The addresses on the To: and From: lines, as well as the RCPT TO: and MAIL FROM: addresses on the envelope, probably should be validated in some way, along with the IP address(es) of the servers involved in mail delivery. SMTP doesn't do this, and there is a very thriving industry providing just this sort of thing.

Shrinking data-centers

This is the 901st post of this blog. Huh.

ComputerWorld had a major article recently, "Your Next Data Center", subtitled, "Companies are outgrowing their data centers faster than they ever predicted. It's time to rethink and rebuild."

That is not the case with us. Ours is shrinking, and I'm not alone in this. I know other people who are experiencing the same thing.

The data-center we have right now was built sometime between 1999 and 2000. I'm not sure exactly when it was, as I wasn't here. I like to think they planned 20 years of growth into it, as that's how long the previous data-center lasted.

When I first started here in late 2003, the workhorse servers supporting the largest percentage of our Intel-based servers were HP ML530 G1's (here are the G2's, the same size as the G1's), with some older HP LH3 servers still in service. The freshly installed 6-node Novell NetWare cluster had 3 ML530's and 3 rack-dense DL380's. If I'm remembering right, at that time we had two other rack-dense servers. The rest were these 7U monsters, and we could cram 4 to a rack.

With the 7U ML530 as the primary machine, it would seem the planners of our data-center did not take 'rack dense' into consideration. This was certainly the case with the racks they decided to install, which use a very old-school bottom-to-top venting scheme; something I've spent considerable time and innovation trying to revise. They also heard stats like "20% growth in number of servers year-over-year," and planned enough floor space to handle it.

Right this moment we're poised to occupy a lot LESS rack-space than we once did. For this, I thank two major trends, and a third chronic one:
  1. Replacing the 7U monsters with 1U servers
  2. Virtualization
  3. No budget for massive server expansions
We're still consuming the same amount of power as we were 2 years ago, but the number of rack units drawing that power has shrunk. We still have most of those ML530's, but they've all been relegated to 2nd- or 3rd-line duties like test/deployment servers or single-function utility servers. They're all coming off of maintenance soon (they're like 5-7 years old now), so I'm not 100% sure what we're replacing them with. Probably more VM servers, if we can kick the money tree hard enough.
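The density win from trend #1 is easy to put numbers on. A back-of-envelope sketch, assuming the roughly 4-servers-per-rack packing mentioned above (cabling, PDUs, and clearance eat the rest of a standard 42U rack):

```shell
# Four 7U ML530 monsters versus the same four boxes as 1U machines
old_units=$((4 * 7))   # 28U of compute per rack
new_units=$((4 * 1))   # 4U for the same server count
echo "Rack units freed per rack: $((old_units - new_units))"
```

Same server count, same workloads, 24U of air where chassis used to be; which is exactly why the floor is emptying out while the power bill isn't.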

One thing we have been having growing pains over is power draw. The reason we're drawing the same power as we were 2 years ago is largely that we're coming close to the rated max for our UPS, and replacing the UPS is a major, major capital-request process nightmare. It would seem that upgrading our UPS triggers certain provisions in the local building code that would require us to bring the data-center up to latest code. The upgrades required to do that are prohibitive, and would most likely require us to relocate all of our gear during the construction process. Since I haven't heard any rumors of us starting the capital-request process, I'm guessing we're not due for another UPS any time soon. This... concerns me.

One side-effect of being power-limited is that our cooling capacity isn't anywhere NEAR stressed yet.

But when it comes to square footage, we have lots of empty space. We are not shoe-horning servers into every available rack-unit. We haven't resorted to housing servers in with the sys-admin staff.