February 2010 Archives

Solving budget problems

| No Comments
Two days ago we got to meet with my great-grand-boss, the Provost for Academic Affairs. She's a fresh transplant from Michigan who has been in public higher-ed for over 20 years. It was a good talk, and I am encouraged.

One of the take-away quotes from that meeting was, "This is one of the most micro-managing legislatures I've ever seen." She then went into details. One of the things she mentioned is a bill I was aware of but haven't mentioned yet: SB 6503, a plan to force furlough days on state employees such as myself.

One of the loonier provisions is a list of days to be considered by appropriate institutions for full closure:

For those agencies and institutions of higher education that do not have an approved compensation reduction plan by June 1, 2010, the agency or institution shall be closed on the following dates in addition to the legal holidays specified in RCW 1.16.050:

(a) Monday, June 14, 2010;
(b) Tuesday, July 6, 2010;
(c) Friday, August 6, 2010;
(d) Tuesday, September 7, 2010;
(e) Monday, October 11, 2010;
(f) Friday, November 12, 2010;
(g) Monday, December 27, 2010;
(h) Friday, January 14, 2011;
(i) Friday, February 18, 2011;
(j) Friday, March 11, 2011;
(k) Friday, April 15, 2011;
(l) Friday, May 27, 2011; and
(m) Friday, June 10, 2011.

In the immortal words of Bill Cosby, "Riiiiight." As it happens, certain parts of higher ed are exempted: not affected are "classroom instruction, operations not funded from state funds or tuition, campus police and security, emergency management and response, and student health care." So I would be furloughed, but not the teaching staff. And woe unto the faculty member with a problem logging in to Blackboard on a closure day, for they will be alone, with all the staff pointedly ignoring their phones.

As it happens, this is perhaps the stick to get people to develop an 'approved compensation reduction plan.' That would allow WWU to create its own ways of reducing payroll, be it through head-count reduction, hours reduction, or interspersed furlough days arranged to minimally impact the functioning of the University.

What's a furlough? "Voluntary and mandatory temporary layoffs," according to this bill. So if I'm on a furlough day, you can guarantee I'll be pretending I'm unemployed and will not respond to anything work-related. The one thing that could keep me in the money during such a 'shutdown' is clause S under the exemptions: "The minimal use of state employees on the specified closure dates as necessary to protect public assets, information technology systems, and maintain public safety." Right now that's unworkably vague, but it could mean that a small selection of tech staff would be present to keep the teaching function working.

This is another bill we're keeping a close eye on.

Migration headaches

| 4 Comments
Today marked the day we cut over the largest volume we have, the Fac/Staff shared volume: 1.9TB of 'disorganized file data' (a.k.a. a bog-standard file server) to migrate. This is the last of the major volumes to move, and that was intentional; by now we have the migration process down cold. Unfortunately, a wrench was thrown into the works. But before I get to the wrench, a description of how we migrated this puppy from NetWare.
  1. At M-18 days, we performed an initial sync of the data via robocopy.
  2. At M-17 days, when the first sync completed (it took about 29 hours) we performed a delta-sync.
  3. At M-16 days we performed another delta sync, 24 hours after the previous, so we could get a feel for how long a daily 'copy the changed files' job would take.
  4. M-16 days, create a daily copy-job (robocopy source dest /mir /r:1 /xo /log:e:somewhere)
  5. M-14 days, we perform the rights migration, and open up the new share to everyone with sufficient rights to change permissions on the volume. Inform these people to fix broken rights on the Microsoft share.
  6. M-12 days, after feedback from the techs, release guidance for how to re-organize directories to better work with Microsoft permissions.
  7. M-12 to M-1 day, Technicians reorganize data and repermission as needed, with our assistance.
  8. M-12 hours, we do a delta sync
  9. Migration: Change login scripts, kick off terminal delta-sync to get net-change.
  10. M+2 hours, 8am arrives, script is done, we are done. Yay! Start working problems as reported.
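The daily copy-job in step 4 is just a robocopy invocation. A sketch of how one might wrap it, with the flags annotated; the paths and log location here are invented for illustration, not our real ones:

```python
def build_robocopy_cmd(source, dest, log_path):
    """Compose the daily delta-sync command from step 4.

    /mir  mirror the tree: copy new and changed files, delete orphans
    /r:1  retry a locked file once, instead of the default one million times
    /xo   exclude files older than the copy already at the destination
    """
    return ["robocopy", source, dest, "/mir", "/r:1", "/xo", f"/log:{log_path}"]

# Hypothetical paths; the real job points at the NetWare-mirrored share.
cmd = build_robocopy_cmd(r"\\netware\facstaff", r"E:\facstaff", r"E:\logs\delta.log")
# On the Windows box doing the sync: subprocess.run(cmd)
```

The /r:1 choice matters: with the default retry count, a single locked PST can stall a sync for days.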
The problem occurred between steps 8 and 9. One department decided that migration-night was the perfect time to reorganize over 150GB of data. They would have struggled to find a worse time for it. The result of this is that the terminal delta-sync in step 9 will end up taking far, far longer than the 2 hours budgeted.

The problem is that when people started logging in at 8am, not all of their data was there. Some people worked right up until the M-12 hour mark reorganizing data and were surprised when it wasn't on the new system yet. These people sort alphabetically after the department that moved 150GB of data last night, so they hadn't been synced yet. They're seeing and working with old files while the new ones copy in.

The worry for me is PST and MDB files, which have a tendency to be open all day. The copy script will not be able to replace these open files, so their owners will in effect experience data loss because of this department. There is not much we can do about that. We can troll through the log file for files listed as failed-to-copy-due-to-lock and hand-copy them after clearing locks, in which case users will lose whatever they committed to those files during the morning. So these files? There WILL be data loss, guaranteed.
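Trolling the log for lock failures is scriptable. A rough sketch; the log-line shape here is assumed from robocopy's usual ERROR 32 (sharing violation) output, not copied from our actual logs:

```python
import re

# ERROR 32 is the Windows sharing-violation code robocopy logs for files
# it could not copy because something held them open (format assumed).
LOCKED = re.compile(r"ERROR 32 \(0x00000020\).*?(\S+\.(?:pst|mdb))", re.IGNORECASE)

def locked_files(log_lines):
    """Return the PST/MDB paths that failed to copy due to locks."""
    return [m.group(1) for line in log_lines if (m := LOCKED.search(line))]

sample = [
    r"2010/02/25 08:15:02 ERROR 32 (0x00000020) Copying File E:\shared\dept\mail.pst",
    r"New File  1.2 m  E:\shared\dept\notes.txt",
]
print(locked_files(sample))
```

Feed it the step-4 log and you get the hand-copy worklist for after the locks clear.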

The other problem we ran into: one department set up their rights to lock us godlike admins out of certain directories, something you can do on Microsoft filesystems since there is no equivalent to Novell's "Supervisor" trustee right. We didn't notice until step 9, when the log files filled up with 'access denied' errors and the 30-second retry each one triggers, which further delayed the terminal sync script. Obviously, those files will not get synced.

I hate hate hate it when this kind of thing happens.

Migrating from blogger, a guide

| 2 Comments
Migrating from Blogger took a few steps. The steps any new Movable Type installation has to go through aren't important, and you really don't need to know about what all I went through to get the theming right (learning CSS along the way) or a strange publishing fault fixed. Not relevant, and really well covered out there on the Internet.

No, what isn't well covered is how to move an FTP-hosted Blogger blog to MT. There are a few resources out there, but nothing automatic. Two sites started me on the path to what ultimately worked. However, neither covered how to import existing comments. That isn't easy to do, as I found out, but it can be done.

The steps are in abstract:
  1. Change your Blogger settings so you're not publishing post-pages.
  2. Save your Blogger template.
  3. Change your Blogger template radically.
  4. Change your archive publish location (or change the name of the archive file, either can work)
  5. Run a full-publish, which gets the committed files updated.
  6. On your blogger host run the handy perl script I'll be posting below the fold.
  7. Copy the resulting files to your Movable Type import directory.
  8. In the Movable Type interface for your blog, do an Import.
  9. Review entries to make sure things look right.
  10. Publish entries in batches to post to the site.
The perl script takes the files published by Blogger and massages them into a Movable Type export file. This is the step the other instructions don't have. Unfortunately for the masses, the FTP channel dies on May 1st (pushed back from March 26th), so these instructions have a timeout value.
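For the curious, the heart of the massage step is emitting Movable Type's import format. The real script is perl and also scrapes the Blogger-published archive pages; this Python sketch shows only the output side, with made-up field values:

```python
def mt_entry(title, author, date, body, comments=()):
    """Render one post in Movable Type's import format.

    Sections within an entry are separated by five dashes,
    whole entries by eight. Dates are MM/DD/YYYY HH:MM:SS AM/PM.
    """
    parts = [f"AUTHOR: {author}", f"TITLE: {title}", f"DATE: {date}", "-----",
             "BODY:", body, "-----"]
    for c_author, c_date, c_body in comments:
        parts += ["COMMENT:", f"AUTHOR: {c_author}", f"DATE: {c_date}",
                  c_body, "-----"]
    parts.append("--------")
    return "\n".join(parts)

# Invented example values, just to show the shape of one entry.
print(mt_entry("Migrated from Blogger", "sysadmin", "02/25/2010 08:00:00 AM",
               "<p>This is my blog.</p>",
               comments=[("A Reader", "02/25/2010 09:00:00 AM", "Welcome!")]))
```

Concatenate one of these per post and MT's Import tool will take the file whole, comments included.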

The other thing I learned along the way is that you really don't want your comment templates to be server-side-include files. Really, keep 'em static.

The script: blogger2mt.pl

The ".xxx" TLD on repeal

| No Comments
The .xxx top-level domain proposal, rejected back in March of 2007, is back again after a review panel determined it was incorrectly rejected. Read about it here. I still think it's a bad idea (as I said back in March 2007), but one that is ultimately unavoidable. As I said back then, once .xxx gets approved, we here at WWU are duty-bound to register wwu.xxx ourselves, lest some nefarious element register it and advertise Hot WWU Coeds on it. Most of the registrations in that domain will be such 'defensive' registrations, and will earn the registrar a lot of money just from that.

ICANN is deliberately opening up new top-level domains, because there has to be more than just .com/.org/.edu (and sometimes .net) for sites outside the country-specific domains (.co.uk, .hk, .ch). As each new TLD comes along, we dutifully register in it. And we'll keep doing that, because we have a brand to protect.

Spent money

| 2 Comments
It has been a week-plus since we spent a lot of money, and the question has been raised: what are we doing with that storage? Exactly?

It isn't fancy storage. In fact, it is the cheapest performant storage we could budget for. It's not SATA, but it is 7.2K RPM SAS. And there are 35TB of it. It's a server with direct-attached storage. Not a dedicated storage unit. Not fibre channel. An off-the-shelf server, a high-quality RAID card, a bunch of storage shelves, and a pair of network ports.

A final decision hasn't been made yet for how we're presenting this storage to consumers, but iSCSI of some kind is the 90% likely choice. Whether that's Linux (a.k.a. the free option) or something else (a.k.a. the pay option) remains to be seen. The whole point of this storage is to be cheap per GB.

We're also adding a pair of Fibre Channel drive enclosures to our EVA4400 to provide true high-speed, low-latency service at a much reduced cost versus the EVA6100. Yes, FC drives go end-of-life in a very short while, but the EVA4400 doesn't support SAS (yet). This is where our ESX cluster is likely to expand when the time comes, that kind of thing.

And a new tape library. It's an HP StorageWorks MSL4048. It is LTO4, fibre-attached, and has lots of slots. The native capacity of this guy is 37.5TB, a whole lot larger than the 7.9TB of our current SDLT320 unit. It only has two drives for now, which will limit flexibility somewhat, but it is upgradeable to four drives later when the money tree starts producing again. If we really need to, we can stack another MSL4048 on top of it for even more storage. Because it is drive-limited, we kind of have to stage all backups to disk and then copy from disk to tape; we won't be doing any backups directly to tape.
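That 37.5TB figure works out if you assume all 48 slots hold LTO4 media at 800GB native and count in binary terabytes:

```python
slots = 48            # the MSL4048 holds 48 tapes
lto4_native_gb = 800  # LTO4 native (uncompressed) capacity per tape
# 48 x 800 = 38,400 GB; divided by 1024 that's the quoted 37.5 "TB"
library_tib = slots * lto4_native_gb / 1024
print(library_tib)  # 37.5
```

With 2:1 compression the brochure number would roughly double, but native is the honest planning figure.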

When it'll get here is anyone's guess. Purchasing is currently digging out of an avalanche of last-minute orders just like ours, so they're w-a-y backed up down there.

Migrated from Blogger

| No Comments


This is my blog. It used to be on Blogger. Now it isn't. Why is that, you ask?

we are announcing today that we will no longer support FTP publishing in Blogger after March 26, 2010
Since that's how I'm publishing this blog, and I have no intention of hosting it on Google's servers, my lone choice was clear: get off of Blogger. So I've moved it to Movable Type from Six Apart. I chose it over WordPress for a few reasons:
  • It allows the static publishing of pages.
  • It doesn't require a database hit on every page-load.
  • It is far less vulnerable to runaway-process slowdowns on this shared server (I'm too cheap for a VPS, much less a co-hosted server).
  • Bloggers I respect also use it.

One problem with Movable Type is that there wasn't any built-in way to import a Blogger blog hosted over FTP. There are a few web how-tos out there for hacking around it, but they didn't quite fit. Also, none of them allowed importing comments or tags/labels. I have comments on this blog, and I wanna keep 'em. Soon all of my comments should be imported into the entries they belong to, linked to whatever URL the original comment had.

How I did that is for another post, for when I'm dead certain things actually worked. There is a bit of a deadline, though. While I'll certainly share the perl script I wrote to do the data-massage, March 26, 2010 is the date it becomes useless. It might be possible to use it to migrate off of blogspot.com by way of creative use of wget/curl, but migration from THERE will probably have actual plugin support in MT 5.0 before too long, making my script superfluous. Still, if it helps even one person, I'll feel good.

Spending money

| 2 Comments
Today we spent more money in one day than I've ever seen spent here. Why? Well-substantiated rumor had it that the Governor had a spending-freeze directive on her desk. Unlike last year's freeze, this one would be the sort passed down during the 2001-02 recession: nothing gets spent without OFM approval. Veterans of that era noted that such approval took a really long time, and only sometimes came. Office scuttlebutt was mum on whether or not consumable purchases like backup tapes would be covered.

We cut purchase orders today and rushed them through Purchasing, a department immensely snowed under, as is to be expected. I think the final signatures get signed tomorrow.

What are we getting? Three big things:
  1. A new LTO4 tape library. I try not to gush lovingly at the thought, but keep in mind I've been dealing with SDLT320 and old tapes. I'm trying not to let all that space go to my head. 2 drives, 40-50 slots, fibre attached. Made of love. No gushing, no gushing...
  2. Fast, cheap storage. Our EVA6100 is just too expensive to keep feeding. So we're getting 8TB of 15K fast storage. We needs it, precious.
  3. Really cheap storage. Since the storage-area-networking options all came in above our stated price-point, we're ending up with direct-attached. Depending on how we slice it, between 30-35TB of it. Probably software iSCSI, with all the faults inherent in that setup. We still need to dicker over software.
But... that's all we're getting for the next 15 months at least. Now when vendors cold call me I can say quite truthfully, "No money, talk to me in July 2011."

The last item is an email archiving system. We already know what we want; we're waiting on a determination of whether or not we can spend that already ear-marked money.

Unfortunately, I'll be finding out a week from Monday. I'll be out of the office all next week. Bad timing for it, but can't be avoided.

Budget crisis: a new bill

| 1 Comment
House Bill 3178 (read it here) was submitted Monday and sent to committee. My boss has been asked to provide opinion on some amendments the committee is considering, so we know this bill is under active consideration. This is something of a big deal. And this is why...

(4) For the 2009-2011 biennium, the following limitations are established upon information technology procurement:
(a) State agencies are not permitted to purchase or implement new information technology projects without securing prior authorization from the office of financial management. The office of financial management may only approve information technology projects that contribute towards an enterprise strategy or meet a critical, localized need of the requesting agency.
(b) State agencies are not permitted to purchase servers, virtualization, data storage, or related software through their operational funds or through a separate information technology budget item without securing prior authorization from the office of financial management. The office of financial management shall grant approval only if the purchase is consistent with the state's overall migration strategy to the state data center and critical to the operation of the agency.
(c) State agencies are not permitted to upgrade existing software without securing prior approval from the office of financial management. In reviewing requests from state agencies to upgrade software, the office of financial management shall grant approval only if the agency can demonstrate that upgrade of the software is critical to the operation of the agency.

In case your eyes glazed over at that, here are the bullet points:
  • No software upgrades without approval from Olympia (what about service-packs? Is that an 'upgrade'?).
  • No server or storage purchases without approval from Olympia.
  • The State will be greatly incentivising (stick-style) usage of the State's central storage services.
Holy kill-joy, Batman! If everything we in Technical Services spend has to be approved by the Office of Financial Management in Olympia, things'll get real slow. But there is more. Several new sections too big to quote here go into detail about other things:
  • A centralized State PC-replacement process with, "at a minimum, a replacement cycle of at least five years," under a master contract containing no more than three PC providers with no more than four models each. Which means no more than 12 PC models available at any given time.
  • All mobile phone contracts are to be centralized in OFM. Presumably this includes things like Blackberry Enterprise Server, though that's not stated in the bill yet.
  • The State shall develop a comprehensive data retention policy. That's OK, we probably need one anyway.
  • Establish a centralized tiered storage service for use by State agencies, and all storage purchases have to be approved by OFM.
  • Establish technology project standards for all K-12 school districts, mandated and overseen by OFM.
Apparently the plan is to centralize everything in order to leverage scale for cost savings. Or something. What is not yet clear is just how permissive OFM will be about the items that require its approval.

We will be keeping a close eye on this bill, yes sir.

OpenSUSE Survey

| No Comments
It's time for another openSUSE survey! If you're an openSUSE user (or even a user of SLES/SLED) it's a good idea to take this thing. They set development priorities based on these surveys, so if you have an area that needs buffing up this is the place to tell them. Or if you want to tell them 'works great!' this is where you do it too.

http://www.surveymonkey.com/s/6MJYV7T

Dealing with User 2.0

| 2 Comments
The SANS Diary had a post this morning with the same title as this post. The bulk of the article is about how user attitudes have changed over time, from the green-screen era to today where any given person has 1-2 computing devices on them at all times. The money quote for my purposes is this one:

User 2.0 has different expectations of their work environment. Social and work activities are blurred, different means of communications are used. Email is dated, IM, twitter, facebook, myspace, etc are the tools to use to communicate. There is also an expectation/desire to use own equipment. Own phone, own laptop, own applications. I can hear the cries of "over my dead body" from security person 0.1 through to 1.9 all the way over here in AU. But really, why not? when is the last time you told your plumber to only use the tools you provide? We already allow some of this to happen anyway. We hire consultants, who often bring their own tools and equipment, it generally makes them more productive. Likewise for User 2.0, if using Windows is their desire, then why force them to use a Mac? if they prefer Openoffice to Word, why should't they use it? if it makes them more productive the business will benefit.

Here in the office several of us have upgraded to User 2.0 from previous versions. Happily, our office is somewhat accommodating for this, and this is good. I may be an 80% Windows Administrator these days, but that isn't stopping me running Linux as the primary OS on my desktop. A couple of us have Macs, though they both manage non-Windows operating systems so that's to be expected ;). I have seen more than one iPod touch used to manage servers. Self-owned laptops are present in every meeting we have. See us use our own tools for increased productivity.

The SANS Diary entry closed with this challenge:

So here is you homework for the weekend. How will you deal with User 2.0? How are you going to protect your corporate data without saying "Nay" to things like facebook, IM, own equipment, own applications, own …….? How will you sort data leakage, remote access, licensing issues, malware in an environment where you maybe have no control or access over the endpoint? Do you treat everyone with their own equipment as strangers and place them of the "special" VLAN? How do you deal with the Mac users that insist their machines cannot be infected? Enjoy thinking about User 2.0, if you send in your suggestions I'll collate them and update the diary.


Being a University we've always had a culture that was supportive of the individual, that Academic Freedom thing rearing its head again. So we've had to be accommodating to this kind of user for quite some time. What's more, we put a Default-Deny firewall between us and the internet really late in the game. When I got here in 2003 I was shocked and appalled to learn that the only thing standing between my workstation and the Internet were a few router rules blocking key ports; two months later I was amazed at just how survivable that ended up being. What all this means is that end-user factors have been trumping or modifying security decisions for a very long time, so we have experience with these kinds of "2.0" users.

When it comes to end-user internet access? Anything goes. If we get a DMCA notice, we'll handle that when it arrives. What we don't do is block any sites of any kind. Want to surf hard-core porn on the job? Go ahead, we'll deal with it when we get the complaints.

Inbound is another story entirely, and we've finally got religion about that. Our externally facing firewall only allows access to specific servers on specific ports. While we may have a Class B IP block and therefore every device on our network has a 'routable' address, that does not mean you can get there from the outside.

As for Faculty/Staff computer config, there are some limits there. The simple expedient of budget pressure forces a certain homogeneity in hardware config, but software config is another matter and depends very largely on the department in question. We do not enforce central software there beyond anti-virus. End users can still use Netscape 4.71 if they really, really, really want to.

Our network controls are evolving. We've been using port-level security for some time, which keeps students from unplugging the ethernet cable from a lab machine and plugging it into their laptop. That doesn't cover conference rooms, where such multi-access is expected. We allow only one MAC address per end-port, which eliminates the use of hubs and switches to multiply a port (and also annoys VMware users). We have a 'Network Access Control' client installed, but all we're doing with it so far is monitoring; efforts to do more have hit a wall. Our WLAN requires a WWU login, and nodes there can't get everywhere on the wired side. Our Telecom group has worked up a LimboVLAN for exiling 'bad' devices, but it isn't in use because of a disagreement over what constitutes a 'bad' device.

However, given the choice, I can guarantee certain office managers would simply love to slam the bar down on non-work-related internet access. What's preventing them is professors and Academic Freedom: we could have people doing legitimate research that involves viewing hard-core porn, so that has to be allowed. The 'restrict everything' reflex is still alive and strong around here; it has just been waylaid by historic traditions of free access.

And finally, student workers. They are second-class citizens around here, there is no denying that. However, they are the very definition of 'User 2.0', and they're in our offices providing yet another counter-weight to 'restrict everything'. Our Helpdesk employs a lot of student workers, so we end up with a fair amount of that attitude in IT itself, which helps even more.

Universities. We're the future, man.

Free information, followup

| 2 Comments
As for the previous post, my information sharing has in large part been facilitated by my place of work. I work for a publicly funded institution of higher learning. Because of this, I have two biiiig things working in my favor:
  1. Academic freedom. This has been a tradition for longer than 'information wants to be free' has been a catch-phrase. While I'm on the business side rather than the academic side, some of that liberalism splashes over. Which means I can talk about what I do every day.
  2. I work for the state. In theory everything I do in any given day can be published by way of a Freedom of Information Act request, or as they're called here in Washington State a Public Records Request. Which means that even if I wanted to hide what I was doing, any inquisitive citizen could find it out anyway. So why bother hiding things?
If I were working for a firm that has significant trade secrets I'm pretty sure I couldn't blog about a lot of the break/fix stuff I've blogged about. Opinion, yes. Examples from my work life? Not so much.

I passed my 6-year blogaversary last month, and if there is one thing I've learned, it is that people appreciate examples. It's one thing to describe how to fix a problem, and quite another (more useful) thing to provide the context in which the problem arose. It's the examples that are hard to provide when you have to protect trade secrets.

So, yes. I'm creating free information, in significant part because I work somewhere that values free information.

Free information

| No Comments
Charles Stross had a nice piece this morning about that long-time hacker slogan, "Information wants to be free." It's a good read, so I'll wait while you go read it. It focuses on the two different meanings of free. One means "no cost," like those real-estate fliers you see at the grocery store. The other means "free to move," like files from the Amazon MP3 Store. Different, see.

Part of his point is that it is one thing to enable information to be free, and quite another to create free information. Information creation is the ultimate validation of this credo. In his case, he can work with his publishers to release novels in a non-DRMed format; something he has done once and will do again soon.

But he closes with a question:
What have you created and released lately?
That's a very good question. The quick answer is this blog. My experiences wrestling with technology have proven useful to others. The search keywords that drive people here have evolved over time, but they give a nice snapshot of what issues people are having and looking for answers to. For a long time that was news about the Novell client for Vista. Right this moment, the top trending keywords all include two of the terms 'cifs', 'Windows 7', 'NetWare', and 'OES', which strongly suggests people looking for how to connect Vista/Win7 to NetWare/OES. Comments I've received have also proven that what I've posted here has been useful to others.
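Spotting those two-term combinations is the kind of thing a small script over the search-referrer keywords can do. A sketch with an invented query list; the term set is from above, the log lines and matching approach (naive substring) are mine:

```python
from collections import Counter
from itertools import combinations

# The terms trending in this blog's referrer logs, per the post above.
TERMS = {"cifs", "windows 7", "netware", "oes"}

def trending_pairs(queries):
    """Count which pairs of watched terms co-occur in search queries.

    Naive substring matching: good enough for a snapshot, not for rigor.
    """
    pairs = Counter()
    for q in queries:
        hits = sorted(t for t in TERMS if t in q.lower())
        pairs.update(combinations(hits, 2))
    return pairs

# Hypothetical referrer queries, not pulled from real logs.
log = ["Windows 7 netware client", "OES cifs slow", "vista novell client"]
print(trending_pairs(log).most_common(2))
```

Run over a month of real referrers, the top pairs tell you what your readers are actually struggling with.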

But what about beyond that? I've written a couple of AppNotes for Novell over the years, covering topics the NetWare-using community didn't have adequate coverage of. Novell has always had a stake in 'community', which fosters this sort of information sharing.

I've also been active on ServerFault, a sort of peer-support community for system administrators. I don't get as good data about what my contributions there are being used for, but I do still get comments on accepted answers months after their original posting. I'm in the top 25 for reputation there, so that's something.

It doesn't look like a lot, but it is free information out there. In both senses of the word.

Budget plans

| No Comments
Washington State has a $2.6 billion deficit this year. In fact, the finance people point out that if something isn't done, the WA treasury will run dry some time in September and we'll have to rely on short-term loans. As this is not good, the Legislature is attempting to come up with some way to fill the hole.

As far as WWU is concerned, we know we'll be passed some kind of cut. We don't know the size, nor do we know what other strings may be attached to the money we do get. So we're planning for various sizes of cuts.

One thing that is definitely getting bandied about is the idea of 'sweeping' unused funds at end-of-year in order to reduce the deficit. As anyone who has ever worked in a department subject to a budget knows, having your money taken away for being good with your money runs counter to every bureaucratic instinct. I have yet to meet the IT department that considers itself fully funded. My old job did that: our fiscal year ended 12/15, which meant we bought a lot of stuff in October and November with funds we'd otherwise have to give back (a.k.a. "Christmas in October"). Since WWU's fiscal year starts 7/1, April and May will become 'use it or lose it' time.

Sweeping funds is a great way to reduce fiscal efficiency.

In the end, what this means is that the money tree is actually producing at the moment. We have a couple of crying needs that may actually get addressed this year. It's enough to completely fix our backup environment, OR do some other things. We still have to dicker over what exactly we'll fix. The backup environment needs to get at least somewhat better, that much I know. We have a raft of servers that fall off of cheap maintenance in May (i.e. they turn 5). We need storage that costs under $5/GB but is still fast enough for 'online' storage (i.e. not SATA). As always, the needs are many and the resources few.

At least we HAVE resources at the moment. It's a bad sign when you have to commiserate with your end-users over not being able to do cool stuff, or tell researchers they can't do particular research because you have nowhere to store their data. Baaaaaad. We haven't quite gotten there yet, but we can see it from where we are.
