August 2008 Archives

Storage update

Both the EVA4400 and the EVA6100 parts arrived late Wednesday. I got the EVA4400 partially unboxed that same day, and finished it up Thursday. Got CommandView upgraded so we could manage the EVA4400, and thus lost licensing for the EVA3000. The 10/26 expiry date for that license is no problem, as the EVA3000 will become an EVA6100 well before then. Next weekend, if the stars align right.

And today we schlepped the whole EVA4400 to the Bond Hall datacenter.

And now I'm pounding the crap out of it to make sure it won't melt under the load we intend to put on it. These are FATA disks, which we've never used before, so we need to figure out how they behave. We're not as concerned about the 6100, since that has FC disks, and those have been serving us just fine for years.
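For the curious, "pounding" here means sustained large I/O with caching out of the picture. A minimal sketch of the sort of thing involved, assuming a Linux test host and a made-up mount point (our real test plan is more involved than one command):

# write 10GB sequentially, bypassing the page cache so the array does the work
testbox:~ # dd if=/dev/zero of=/mnt/eva4400/stress.dat bs=1M count=10240 oflag=direct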

Also on the testing list, making sure MPIO works the way we expect it to.

Woot!

The EVAs are scheduled to be delivered today! This means that we are very probably going to be taking almost every IT system we have down starting late Friday 9/5 and going until we're done. We have a meeting in a few minutes to talk strategy.

There was some fear that the gear wouldn't get here in time for the 9/5 window. For the 9/12 window, one of the key, key people needed to handle the migration will be in Las Vegas for VMWorld, and he won't be back until 9/21, which also screws with the 9/19 window. The 9/19 window is our last choice anyway, since that weekend is move-in weekend and the outage would be vastly more noticeable with students around. Being able to make the 9/5 window is great! We need these so badly that if we hadn't gotten the gear in time, we'd probably have done it 9/12 even without said key player.

The one hitch is if HP can't do 9/5-6 for some reason. Fret. Fret.
It would seem, though I've yet to track down definitive proof of this, that Windows Server 2008 Clustering still has the Basic Partitioning dependency in it. This limits Windows LUNs to 2TB, among other annoyances, such as the fact that resizing one of those puppies requires a full copy onto a larger LUN rather than extending the one you already have. How... 1999.

Email sizes

The question has been raised internally that perhaps we need to reassess our email message-size limits. When we set our current limit, we set it to the apparent de facto standard for mail size limits, which is about 10 meg.
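For reference, that de facto standard is also what common MTAs ship with. A hypothetical example, assuming a Postfix relay (the parameter and its default are Postfix's; our Exchange limit lives elsewhere): in main.cf,

# Postfix ships with roughly a 10MB cap, which includes encoding overhead
message_size_limit = 10240000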

A 10 meg cap, perhaps, is not what it should be for an institution of higher ed where research is performed. We have certain researchers on campus who routinely play with datasets larger than 10MB, sometimes significantly larger. These researchers would like to distribute their datasets to other researchers electronically, and the easiest means of doing that by far is email. The primary reason we have mail servers serving the (for example) chem.wwu.edu domain is to give these folk much larger message-size limits. Otherwise, they would have their primary email in Exchange.

The routine answer I've heard for handling really large files is to use "alternate means" to send them. We don't have an FTP server for staff use, since we have a policy that forbids protocols that transmit passwords and the like in the clear. We could do something like Novell does with ftp.novell.com/incoming and create a drop-box that anyone with a WWU account can read, but that's sort of a blunt-force solution and by definition a half-duplex method. Our researchers would like a full-duplex method, and email is that.

So what are you all using for email size limits? Do you have any 'out of band' methods (other than snail mail) for handling larger data sizes?

IPv6 uptake

Not too long ago I asked the question of what our plans are for IPv6. While the telecom guys didn't actually laugh at me, it was clear the question was considered a bit silly. After all, we are the proud owners of a full Class B (140.160.0.0/16), so IPv4 address exhaustion is not something we're likely to run into very soon. Certainly not by 2014, when the internet at large is expected to be 'out' of IPv4 address space. Will IANA repossess our 'unused' spaces? Don't know. Probably not.

That said, a move to IPv6 on our part would require one of a few things, none of them internal processes:
  • A bill by the State Legislature mandating IPv6 uptake by all State agencies. We're not subject to the already existing Federal rule.
  • Enough of the general internet routing IPv6 that IPv4-over-IPv6 tunneling causes enough headaches that we have to move due to user revolt.
  • Some new widget, be it server tech or some kind of net-attached device, only supports IPv6 and we need to get it running.
Of course, if the powers that be here decided that it must be done, and our telecom people fail to talk them out of it, it could still happen.

My favorite compiz plugin

Screenshot

I love that plugin. Ad-hoc screen captures. I use it all the freaking time. It has managed to ingrain itself into my expectations of how computers should work, to the extent that I started swearing at XP the other day because I couldn't do it there. Such a simple thing, but my how effective it is at capturing a quick snippet of what I want. No more do I have to...
  1. PrtScrn the application I want
  2. Open Paint
  3. Paste the screencap
  4. Copy the section I need
  5. Create new canvas
  6. Paste the copied section
  7. Save to file
That's a lot of work. This widget? Much faster and easier.
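For the non-compiz crowd, ImageMagick's import utility is a rough X11 equivalent, and collapses that whole list down to one step. A sketch (the filename is arbitrary):

# click-drag a region of the screen; it lands straight in the file
import snippet.png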

Enabling autokey auth in NTP on SLES10

The NTP protocol permits the use of crypto to authenticate clients and servers to each other, as well as between time servers. By default, SLES10 is set up to allow the v3 method of using symmetric keys, but not the v4 method that uses public/private keys. If you want to use the v4 method, this is the tip for you.

Background

By default, SLES runs NTP inside a chroot jail, which is a more secure way of running it. This can be changed from the YaST NTP config screen if you wish. The chroot jail's root is at /var/lib/ntp/.
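For orientation, the jail mirrors a normal filesystem tree, so a path like /var/lib/ntp/etc/ looks like plain /etc/ from ntpd's point of view. The top of the jail looks something like this (exact contents may vary by service pack):

timehost:~ # ls /var/lib/ntp
dev  drift  etc  proc  var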

Additionally, ntp runs with an AppArmor profile loaded against it for added security.

Getting NTPv4 auth to work

There are 4 steps to get this to work.

  1. Copy the .rnd file to the chroot jail
  2. Run ntp-keygen
  3. Modify the AppArmor profile for /usr/sbin/ntpd to allow read access to the new files
  4. Modify the /etc/ntp.conf file to enable v4 auth.

Copy the .rnd file to the chroot jail

By default, there should be a .rnd file at /root/.rnd. If so, copy it to /var/lib/ntp/etc/.rnd. If there is no file there, one can be generated with openssl:

timehost:~ # openssl rand -out /var/lib/ntp/etc/.rnd 1024

Run ntp-keygen

Change directory to /var/lib/ntp/etc and execute the following command:

timehost:/var/lib/ntp/etc # ntp-keygen -T

This will drop a pair of files in the directory you run it from, so running it while in /var/lib/ntp/etc saves you the step of copying them over.
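If it worked, you end up with a pair of links pointing at timestamped key files, something like the below (ls -l output trimmed; hostname, key type, and timestamps are illustrative). The link-versus-target distinction matters for the AppArmor step below.

timehost:/var/lib/ntp/etc # ls -l ntpkey_*
ntpkey_cert_timehost -> ntpkey_RSA-MD5cert_timehost.3427638480
ntpkey_host_timehost -> ntpkey_RSAkey_timehost.3427638480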

Modify the AppArmor profile

This is done through YaST; a hand-edit equivalent is sketched after the list.

  1. Launch YaST
  2. Go to the "Novell AppArmor" section, and enter the "Edit Profile" tool.
  3. Select "/usr/sbin/ntpd" and click Next.
  4. Click the "Add Entry" button and select File.
  5. Browse to /var/lib/ntp/etc/.rnd, check the "Read" permissions check-box, and click OK.
  6. Repeat the previous two steps to add the two files created by ntp-keygen, named "ntpkey_cert_[hostname]" and "ntpkey_host_[hostname]".
    1. Note: AppArmor behavior changes between SP1 and SP2. In SP1 you can use the link files, in SP2 you need to specify the link targets.
  7. Click Done on the main Profile Dialog
  8. Agree to reload the AppArmor profile
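If you'd rather not click through YaST, the same effect can be had by hand. A sketch, assuming the profile lives at /etc/apparmor.d/usr.sbin.ntpd as it does on my SLES10 boxes: add read rules for the new files inside the profile's braces,

  # grant ntpd read access to the random seed and the generated key material
  /var/lib/ntp/etc/.rnd r,
  /var/lib/ntp/etc/ntpkey_* r,

then reload the profiles:

timehost:~ # rcapparmor reload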

Modify /etc/ntp.conf

The YaST tool for NTP doesn't allow for v4 configurations, so this has to be done on the command line. Open the /etc/ntp.conf file with your editor of choice, and insert the following lines before your "server" lines:

keysdir /var/lib/ntp/etc/
crypto randfile /var/lib/ntp/etc/.rnd

Then append the word "autokey" to the server and peer lines of your choice. At this point, you should be able to restart ntpd, and it will use authentication. This is a very basic NTPv4 configuration, but it should lay the groundwork for more complex configs.
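Pulled together, a minimal autokey client config ends up looking something like this (the server names are placeholders, and the driftfile line is just SLES10's stock default):

# /etc/ntp.conf -- minimal NTPv4 autokey client
keysdir /var/lib/ntp/etc/
crypto randfile /var/lib/ntp/etc/.rnd
server tick.example.edu autokey
server tock.example.edu autokey
driftfile /var/lib/ntp/drift/ntp.drift

After an rcntp restart, 'ntpq -c associations' should show those servers reaching with the auth column reading "ok".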

Virtualization and Fileservers

There are some workloads that fit well within VMs of any kind, and others that are very tricky. Fileservers are one workload that is not a good virtualization candidate. In some cases they qualify as highly transactional. In others, the memory required to do fileserving well makes them very expensive. When you can fit 40 web-servers on a VM host, but only 4 fileservers, the calculus is obvious.

This is on my mind since we're running into memory problems on our NetWare cluster. We've just plain outgrown the 32-bit memory space for file-cache. NetWare can use memory above the 4GB line (it does have PAE support), but memory access up there is markedly slower than below the line. Last I heard, the conventional wisdom was that 12GB is about the point where the extra memory starts earning you performance gains again. Eek!

So, I'm looking forward to 64-bit memory spaces and OES2. 6GB should do us for a few years. That said, 6GB of actually-used RAM in a virtual-host means that I could fit... two of them on a VM server with 16GB of RAM.

16GB of RAM in, say, an ESX cluster is enough to host 10 other servers, especially with memory deduplication. In the case of my hypothetical 6GB file-servers, though, 5.5GB of that RAM will be consumed by file-cache that is unique to each server, so memory de-dup gains very little.

In the end, how well a fileserver fits in a VM environment is based on how large a 'working set' your users have. If the working set is large enough, you'll see only small gains from virtualization. However, I realize fileserving on the scale we do it is somewhat rare, so for departmental fileservers VM can be a good-sized win. As always, know your environment.

In light of the budgetary woes we'll be having, I don't know what we'll do. Last I heard, the State is projected to have a $2.7 billion deficit for the 2009-2011 budget cycle (the fiscal year starts July 1). So it may very well be that the only way I'll get access to 64-bit memory spaces is in an ESX context. That may mean a 6-node cluster on 3 physical hosts. And that's assuming I can get new hardware at all. If it gets bad enough, I'll have to limp along until 2010 and play partitioning games to load-balance my data-loads across all 6 nodes. By 2011 all of our older hardware falls off cheap maintenance and we'll have to replace it, so worst case, that's when I can do my migration to 64-bit. Arg.
I just reported a bug in the beta that surprised me. I can't talk about the details, but it strikes me as the kind of bug that should have been reported shortly after the client released. Perhaps the client was just so buggy overall that this one got lost in the forest, but still. The Vista client has been out for some time now.

Having said the following rant several times over the past few days, I figure it's time to post it ;).

The problem we're running into is that the number of users of the Vista Client is a small, small sub-set of the overall users of the Novell Client, who are by now a minority of the overall users of Novell NCP file-servers. Novell spent years hyping 'clientless' approaches to file-serving through the CIFS stack on NetWare, and a lot of places bought into that. Because of this, the percentage of NCP-client Vista users in the overall Novell file-server market is rather small.

And small means you don't get a lot of testing done by people-who-are-not-us, which is how seemingly obvious bugs end up in the beta SP1 builds. I don't have any Vista workstations, so I've done exactly zero testing of the Vista Client; this particular bug was reported and troubleshot by someone who is not me (I just filed it). Even though we have beta builds of the Vista client as part of this beta, I'm not testing it. All things considered, I probably should.

Since we're wedded hard to the Novell Client, it's probably time for us to start devoting resources to the ecosystem in order to keep it alive.

Budget crunch update

The University President has just sent out an email describing what the university plans on doing in light of the Governor's memo. Unfortunately, there isn't a lot in there.
  • There WILL be a hiring freeze.... and it will manifest as newly vacant positions having to go before a Provost/Vice-Provost review before being filled.
  • Salary increases will still be given per negotiated contracts.... for classified people, which I'm not. We don't know what we Professional types will get, and won't know until the State passes the next budget during the coming winter.
  • Out-of-state travel WILL be restricted.... and this will manifest by having travel requests go before a Provost/Vice-Provost review before being approved. Like they always do.
Almost all of this comes down to, "we'll be looking even harder at expense requests, so be careful." We really won't know how bad it'll get until the Legislature convenes and starts working on bills. They open, if I remember right, just after the new year.

So, I may yet get to BrainShare 2009, but it may require my boss to talk a lot faster than normal. We just won't know until we get a feeling for how much pressure the Provosts and Vice-Provosts are under to contain costs.

On the plus side, the SAN upgrade is pretty clearly critical to the function of this university, so that'll get approved. Or there will be riots in the halls outside my office.

That's backwards

In this article is this phrase:
Between following comment threads, checking in with friends on Twitter, reading a few blogs with RSS feeds, and conquering a mountain of e-mail, we have plenty of conversations to keep track of.
Considering how my friends use these sorts of technology, it should read...
Between following comment threads, checking in with friends on Twitter, reading a few emails, and conquering a mountain of blogs with RSS feeds, we have plenty of conversations to keep track of.
Email is now the back-channel of social intercourse. Of course, mailing lists like bugtraq or knitty are still around, but are easily handled through very simple filtering in your email program. But, most of this sort of thing is done over http[s] somehow. I'll use email for unicast asynchronous conversations with someone that I want kept semi-private (i.e. not googleable by the general public). But for the most part, the only email I get is mailing list traffic and comment notifications for my blogs, tracked threads, and whatnot.
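For what it's worth, that "very simple filtering" really is simple. A hypothetical procmail recipe (the header match and folder name are made up for illustration):

# shunt bugtraq list traffic into its own folder, out of the inbox
:0:
* ^List-Id:.*bugtraq
lists/bugtraq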

Hiring (and travel) freeze, part 2

A longer article on the freeze.

In short, since WWU is not directly under the Governor, our President will have to announce any freezes. Which hasn't happened yet. On the other hand, we have a new President starting real soon. So. This will be interesting. I may get to BrainShare-09 yet, but I'm not counting on it.

Drat!

I may not get to go to Brainshare next year. But, we'll see when October rolls around.

Older, but still a goodie.

Bruce Schneier, whom I've met once, had an essay last year on the state of the art of password guessing. Not cracking, à la rainbow tables, but guessing. If you want to generate passwords that have to be cracked rather than guessed, this is the essay for you.

Outsourcing email

Those of you who've been paying attention know that WWU is in the process of trying to outsource our student email. This has been going on for close to two years now, and we're now in the process of trying to get Exchange Labs working. So when I saw this article on Computerworld about this topic I had to read it. The article is more about the Google service, but includes some parts about the other offerings out there.

We went with Exchange Labs in part because we already had everyone in Active Directory, and the migration should go a lot easier that way. Or so we thought. We've been having major issues getting things talking. In theory once the communications channel is set up it should just work. We shall see.

One thing in the article that caught my attention was this:
Drexel University earlier this year launched a pilot project in which it will give some of its 20,000 students a choice of four e-mail systems: its own Exchange-based enterprise e-mail, Gmail, Microsoft Windows Live Hotmail and Microsoft's Exchange Labs, which is a pilot program for online, Exchange-based hosted e-mail that was launched about six months ago and is based on what will be the Exchange 14.0 release.
If we can't get it really working for us, something like this might be a good idea until such time as we can get true automatic provisioning done. It'll be a manual process (this is me wincing), but it could get the product in the door while we work out the bugs. Eh, who knows if this is where we'll go.