Friday, February 05, 2010

Dealing with User 2.0

The SANS Diary had a post this morning with the same title as this post. The bulk of the article is about how user attitudes have changed over time, from the green-screen era to today, when any given person carries one or two computing devices at all times. The money quote for my purposes is this one:

User 2.0 has different expectations of their work environment. Social and work activities are blurred, different means of communications are used. Email is dated, IM, twitter, facebook, myspace, etc are the tools to use to communicate. There is also an expectation/desire to use own equipment. Own phone, own laptop, own applications. I can hear the cries of "over my dead body" from security person 0.1 through to 1.9 all the way over here in AU. But really, why not? when is the last time you told your plumber to only use the tools you provide? We already allow some of this to happen anyway. We hire consultants, who often bring their own tools and equipment, it generally makes them more productive. Likewise for User 2.0, if using Windows is their desire, then why force them to use a Mac? if they prefer Openoffice to Word, why should't they use it? if it makes them more productive the business will benefit.

Here in the office several of us have upgraded to User 2.0 from previous versions. Happily, our office is reasonably accommodating of this. I may be an 80% Windows administrator these days, but that isn't stopping me from running Linux as the primary OS on my desktop. A couple of us have Macs, though both of their owners manage non-Windows operating systems, so that's to be expected ;). I have seen more than one iPod Touch used to manage servers. Self-owned laptops are present in every meeting we have. See us use our own tools for increased productivity.

The SANS Diary entry closed with this challenge:

So here is you homework for the weekend. How will you deal with User 2.0? How are you going to protect your corporate data without saying "Nay" to things like facebook, IM, own equipment, own applications, own …….? How will you sort data leakage, remote access, licensing issues, malware in an environment where you maybe have no control or access over the endpoint? Do you treat everyone with their own equipment as strangers and place them of the "special" VLAN? How do you deal with the Mac users that insist their machines cannot be infected? Enjoy thinking about User 2.0, if you send in your suggestions I'll collate them and update the diary.


Being a University, we've always had a culture that was supportive of the individual, that Academic Freedom thing rearing its head again. So we've had to be accommodating of this kind of user for quite some time. What's more, we put a Default-Deny firewall between us and the internet really late in the game. When I got here in 2003 I was shocked and appalled to learn that the only thing standing between my workstation and the Internet was a few router rules blocking key ports; two months later I was amazed at just how survivable that ended up being. What all this means is that end-user factors have been trumping or modifying security decisions for a very long time, so we have experience with these kinds of "2.0" users.

When it comes to end-user internet access? Anything goes. If we get a DMCA notice, we'll handle that when it arrives. What we don't do is block any sites of any kind. Want to surf hard-core porn on the job? Go ahead, we'll deal with it when we get the complaints.

Inbound is another story entirely, and we've finally got religion about that. Our externally facing firewall only allows access to specific servers on specific ports. While we may have a Class B IP block and therefore every device on our network has a 'routable' address, that does not mean you can get there from the outside.

As for Faculty/Staff computer config, there are some limits there. The simple expedient of budget pressure forces a certain homogeneity in hardware config, but software config is another matter and depends very largely on the department in question. We do not enforce central software there beyond anti-virus. End users can still use Netscape 4.71 if they really, really, really want to.

Our network controls are evolving. We've been using port-level security for some time, which keeps students from unplugging the ethernet cable connected to a lab machine and plugging it into their laptop. That doesn't apply to conference rooms, where such multi-access is expected. We also only allow one MAC address per end-port, which eliminates the use of hubs and switches to multiply a port (and also annoys VMware users). We have a 'Network Access Control' client installed, but all we're doing with it so far is monitoring; efforts to do something more with it have hit a wall. Our WLAN requires a WWU login for use, and nodes there can't get everywhere on the wired side. Our Telecom group has worked up a LimboVLAN for exiling 'bad' devices, but it is not in use because of a disagreement over what constitutes a 'bad' device.

However, if given the choice, I can guarantee certain office managers would simply love to slam the bar down on non-work-related internet access. What's preventing them from doing so are professors and Academic Freedom. We could have people doing legitimate research that involves viewing hard-core porn, so that has to be allowed. So the 'restrict everything' reflex is still alive and strong around here; it has just been waylaid by historic traditions of free access.

And finally, student workers. They are second-class citizens around here; there is no denying that. However, they are the very definition of 'User 2.0', and they're in our offices providing yet another counter-weight to 'restrict everything'. Our Helpdesk has a lot of student workers, so we end up with a fair amount of that attitude in IT itself, which helps even more.

Universities. We're the future, man.



Wednesday, February 03, 2010

Free information, followup

As for the previous post, my information sharing has in large part been facilitated by my place of work. I work for a publicly funded institution of higher learning. Because of this, I have two biiiig things working in my favor:
  1. Academic freedom. This has been a tradition for longer than 'information wants to be free' has been a catch-phrase. While I'm on the business side rather than the academic side, some of that liberalism splashes over. Which means I can talk about what I do every day.
  2. I work for the state. In theory, everything I do in any given day can be published by way of a Freedom of Information Act request, or, as they're called here in Washington State, a Public Records Request. Which means that even if I wanted to hide what I was doing, any inquisitive citizen could find it out anyway. So why bother hiding things?
If I were working for a firm that has significant trade secrets I'm pretty sure I couldn't blog about a lot of the break/fix stuff I've blogged about. Opinion, yes. Examples from my work life? Not so much.

I passed my 6-year blogaversary last month, and if there is one thing I've learned, it is that people appreciate examples. It's one thing to describe how to fix a problem, and quite another (more useful) thing to provide the context in which a problem arose. It's the examples that are hard to provide when you have to protect trade secrets.

So, yes. I'm creating free information, in significant part because I work somewhere that values free information.



Wednesday, January 20, 2010

A fluff piece

Much has been made in certain circles about the lack of a right-side shift key on certain, typically Asian designed, keyboards. This got me thinking. So I took a look at my own keyboards. The one I'm typing on right now at work has obvious signs of wear, where the textured black plastic has been worn smooth and shiny. Also, some letters are missing. What can I learn by looking at the wear on my keyboard?
  • I use the left-shift key almost exclusively.
  • I use both thumbs on the space bar, with something of a preference for my right thumb.
  • The text on the M and C keys is completely erased, as is the entire left-hand home row, and the U and O keys.
  • The right Ctrl and Alt keys show almost no sign of use.
Now you know. And I'm a lefty. It shows.

Like many people my age, I learned to type on those old IBM clicky keyboards. I don't miss those keyboards, but it does mean I tend to use more force per key-press than I strictly need to. Especially if I'm on a roll with something and let my fingers do the driving. I don't think I could use one of those old keyboards any more; the noise would get to me. I make enough noise as it is, I don't need people two offices down to hear how fast I type.



Friday, January 08, 2010

Looking for a new laptop

I've been needing a new laptop for a while now. The left mouse button on the trackpad is getting a bit deaf, and the batteries for it just died a couple of weeks ago. I've been waiting for the Arrandale launch for some time, as I wanted both more processing power and less power usage in my new laptop. The specs for it are pretty simple:
  • No smaller than 15" screen. My eyes are beginning to get old.
  • Vertical screen resolution no smaller than 800px.
  • 2GB RAM minimum
  • Integrated or add-in graphics, either is fine (I won't be gaming with this thing)
  • A wireless card with good Linux support
  • 4 hours of independently benchmarked battery life
  • 7200 RPM disk options
  • Core i5-mobile
The above laptop doesn't quite exist. There are plenty in the 16+" category that all have primo graphics cards, 4-hour battery life if they're lucky, and cost over $1200. There are a few in the desktop-replacement category, which don't have such hot-shot graphics cards but are still pretty expensive. Then there are the 'ultra-portables', which never come in a screen size larger than 14".

Unfortunately for me, Intel released some ASUS laptops to benchmarkers a while back. Intel lifted the benchmark embargo earlier this week and the results were disappointing. More processing power, absolutely. Less juice sucked, not so much. Or more specifically, the processor performed the same amount of work for about the same power requirement, but performed that work faster. Since the work-per-battery-charge ratio is not changing, this new processor is not going to give us really performant laptops that can run for 6 hours. At least not without improved battery tech, that is.

So, very sadly, I just may have to put off this purchase until this summer. That's when the power-enhanced versions of these chips will drop. At that point, laptop makers will start making laptops I want in sufficient quantities to allow competition.



Thursday, December 31, 2009

Points of view

Per BoingBoing: http://www.wilmott.com/blogs/eman/index.cfm/2009/12/29/Trading-Places

Headlined, "I had a fantasy in which the Fed and the TSA (Transportation Security Administration) switched roles."

It's short, so go read it. Then come back.

Ah, the Fed is charged with the security of the Financial System, whereas the TSA is charged with the security of people moving around. Different goals. The Fed's goal is to ensure that a) money keeps moving around, and b) does so in a minimally fraudulent way. The TSA's goal is to ensure that 1) people don't die, and 2) they have the option to move around if they want to.

The TSA would really prefer it if people just stayed put, as it would make their job a lot easier.

The Fed would really prefer that money keep moving around, because without that we end up with things like negative growth.

I'd call these diametrically opposed goals. One needs to keep things moving around, the other would rather they didn't at all.

So if a terrorist bombs a flight, the TSA moves to restrict that activity by making things (hopefully) more inconvenient for future would-be bombers (and everyone else). If a bank fails, the Fed steps in to ensure that depositors get their money back, and works to maintain confidence by making sure people don't conclude the entire system is at fault.

It all depends on your definition of security. "Bad things don't happen," is rather nebulous, you see, so you need more focused definitions. "People don't die using this service," is one the TSA subscribes to. "Ensuring business as usual continues," is one the Fed subscribes to. People die from financial crises all the time (suicide in the face of mounting debts, retirement income coming in w-a-y under projection forcing skimped health-care, loss of a job forcing a person to live on the streets), and yet this isn't really on the Fed's radar. At the same time, the TSA is not at all afraid to change 'business as usual'; witness their dictum banning in-flight communications.

And yet both do take into account confidence in their respective systems. For the Fed, confidence is of paramount importance, for without confidence there is no lending, and without lending there is no wealth creation. The TSA also takes into account confidence, as its political existence is dependent upon it. For the Fed, even a little loss of confidence is a very bad thing. For the TSA, a little loss of confidence is a small price to pay to prevent future tragedy, and it can afford to wait until confidence is rebuilt due to lack of tragedy.

But it still doesn't help make flying any less of a pain in the neck.



Thursday, December 24, 2009

10 technologies to kill in 2010

Computerworld has the opinionated list.

  1. Fax Machines. I'm with him here. But they still linger on. Heck, I file my FSA claims via FAX simply because photocopying receipts and the signed claim form works! However... the copy machine in the office is fully capable of emailing me the scanned versions of whatever I place on the platen, which I could email to the FSA claim processor. If they supported that. Which they don't. Until more offices (and companies) start getting these integrated solutions into use and worked into their business processes, they'll stick around.
  2. 12v 'cigarette' plugs in vehicles. These are slowly changing. New cars are increasingly coming with 120v outlets. However, the sheer preponderance of this form-factor out there will guarantee support for many, many years to come. As more vehicles come with 48v systems instead of 12v systems, these new standard 120v outlets will be able to support more wattage.
  3. The 'www' in web-site addresses. Ah, mind-share. Old-schoolers like me still reflexively type 'http://www' before any web address, simply because I spent so many years typing it that it's hard to untrain my fingers. I'll get there in the end. And when geeks like me pick site addresses, the 'www' is kind of a default. But then, I'm not a web-dev so I don't think about things like this all day.
  4. Business Cards. Oh, they'll stick around. How else will I enter 'free lunch' contests? The Deli counter isn't going to get email any time soon, and dropping my card in a fish-bowl does the same thing easier. That said, the 100-pack of business cards I was issued in 2003 is still serving me strong, so I don't go through them much. One deli counter around my old job had a Handspring Visor set out so people could beam business cards to it as a way to enter the free lunch contest. Now THAT'S rocking it old-school!
  5. Movie Rental Stores. For people stuck on slow internet connections, they're still the only way to get video content. They still serve an under-served population. Like check-cashing stores.
  6. Home entertainment remotes. Word. I am in lust for an internet-updatable universal remote like the Logitech Harmony ones.
  7. Landline phones. I still have one, because until a few months ago cell reception at my house was spotty. Also, they're 'always on' even in a power outage. An extended power-outage will cause even cell phones to run out of juice, and then where will you be? Also, cell service still isn't everywhere yet. More with the serving of under-served populations.
  8. Music CDs. They're going the way of vinyl records. Soon to be a scorned format, but their utility for long-term media backup is not to be denied. What's really going away is the 'album' format! Kids These Days are going to remember the CD in much the same way I remember the 5.25" floppy disk.
  9. Satellite Radio. The long-haul industry is very much a fan of these services, as you can get the same station coast to coast. Some people like live talk radio, which you can't get on your iPod. Recorded talk radio? Sure, they're called "podcasts". It is no mystery why half of Sirius' channel lineup is talk or other non-music. Satellite radio is here for the long haul.
  10. Redundant registration. Word.



Tuesday, December 22, 2009

It's the little things

One thing that Microsoft's PowerShell does that I particularly like is that it aliases common unix commands. No longer do I have to chant to myself:
ls on linux, dir on dos.
ls on linux, dir on dos.
ls works in PowerShell. The command-line options are different, of course, but at least I don't have to retrain my fingers when doing a simple file listing. This is especially useful when I'm fast-switching between the two environments. Such as when I'm using, say, Perl to process a text file, and then PowerShell to do a large Exchange operation.
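
You can see the mapping for yourself; a quick look at a few of the built-in aliases (these are the standard ones that ship with PowerShell, nothing I've added):
# List a few of the built-in aliases that map unix muscle memory onto cmdlets.
Get-Alias ls, cat, cp, mv, rm, ps | Format-Table Name, Definition -AutoSize
# Name Definition
# ---- ----------
# ls   Get-ChildItem
# cat  Get-Content
# cp   Copy-Item
# mv   Move-Item
# rm   Remove-Item
# ps   Get-Process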



Monday, December 21, 2009

How I got into this in the first place

How did you get into sysadmin stuff?

The flip answer is, "A case of mono in 1998."

The full answer is that I intended to get into system administration right out of college. When I made the decision not to pursue graduate school, I chose to join the real world. There were several reasons for this, chief among them being that the field I was interested in involves a lot of math, and I was strictly a C student there. As for what I'd do in the real world, well... by this time I had learned something about myself, something taught ably by the CompSci program I was getting around to finishing.

Broken code makes me angry. Working on broken code all day makes me angry all around. Since programming involves working on code that is by definition always broken, it didn't seem like the right career for me.

I realized this in early 1996, at a time when I had friends who had skipped college altogether to work in internet startups, get paid in stock options, and otherwise make a lot of money. I didn't see much of them except online (those wacky startup death-marches). That wasn't a gravy train I could get on and survive sane. So, no programming career for me. SysAdmin it was!

I paid for my own Certified Novell Administrator certification (NW4.10, IIRC) that September while I was working temp jobs. One of the temp jobs went permanent in January of 1997, and I was hired at the bottom rung: Help Desk.

This wasn't all bad, as it happened. Our helpdesk had all of 4 people on it at the time, one dispatcher who half-timed with Solaris and Prime admin work, and three technicians. We pretty much did it all. Two of 'em handled server side stuff (NetWare exclusively) when server stuff needed handling, and all three of us dealt with desktop stuff.

Then I got mono in the summer of 1998. I was out for a week. When I came back, my boss didn't believe I was up for the full desktop rotation and grounded me to my desk to update documentation. Specifically, update our Windows 95 installation guides. What was supposed to take a week took about 6 hours. Then I was bored.

And there was this NetWare 3.11 to NetWare 4.11 upgrade project that had been languishing un-loved due to lack of time from the three of us. And here I was desk-bound, and bored. So I dug into it. By Thursday I had a full migration procedure mapped out, from server side to things that needed doing on the desktop. We did the first migration that August, and it worked pretty much like I documented. The rest of the NW3.x to NW4.11 migrations went as easily.

From there it was a slam-dunk that I'd get into NetWare sysadmin work. I got into Windows administration that December while I was attending my Windows NT Administration classes. On Monday of Week 2 (the advanced admin class, if I remember right) I got a call from my boss telling me that the current NT administrator had given two weeks' notice and announced he was going on two weeks of vacation, and that I'd be the new NT guy when I got back from class.

In his defense, he was a Solaris guy from way back and was actively running Linux at home and other places. He had, "I don't do Windows," in his cube for a while before management tapped him to become the NT-guy. When I got his servers after he left I found the Cygwin stack, circa 1998, on all of them. He had his preferences. And he left to do Solaris admin Somewhere Else. He really didn't want to do Windows.

So within 8 months of getting a fortuitous case of mononucleosis, I was a bona-fide sysadmin for two operating systems. Sometimes life works that way.



Thursday, December 10, 2009

Old hardware

Watching traffic on the opensuse-factory mailing list has brought home one of the maxims of Linuxdom that has been true for over a decade: people run Linux on some really old crap. And really, it makes sense. How much hardware do you really need for a router/firewall between your home network and the internet? Shoving packets is not a high-test application if you only have two interfaces. Death and fundamental hardware speed-limits are what kill these beasts off.

This is one feature that Linux shares with NetWare, because NetWare gets run on some really old crap too. It just works, and you don't need a lot of hardware for a file-server serving only 500 people. Once you get over a thousand users, or very large data-sets, the problem gets more interesting, but for general office-style documents... you don't need much. This is/was one of NetWare's attractions: not much hardware, and it runs for years.

On the factory mailing list people have been lamenting recent changes in the kernel and the wider environment that have been somewhat deleterious for really old crap boxes. The debate goes back and forth, but at the end of the day the fact remains that a lot of people throw Linux on hardware they'd otherwise dispose of for being too old. And until recently, it has just worked.

However, the diaspora of hardware over the last 15 years has caught up to Linux. Supporting everything sold in the last 15 years requires a HELL of a lot of drivers. And not only that, but really old drivers need to be revised to keep up with changes in the kernel, and that requires active maintainers with that ancient hardware around for testing. These requirements mean that more and more of these really old, or moderately old but niche, drivers are drifting into abandonware-land. Linux as an ecosystem just can't keep up anymore. The Linux community decries Windows for its obsession with 'backwards compatibility' and how that stifles innovation. And yet they have a 12 year old PII box under the desk happily pushing packets.

NetWare didn't have this problem, even though it's been around longer. The driver interfaces in the NetWare kernel changed a very few times over the last 20 years (such as the DRV to HAM conversion during the NetWare 4.x era, and the introduction of SMP later on) which allowed really old drivers to continue working without revision for a really long time. This is how a 1998 vintage server could be running in 2007, and running well.

However, Linux is not NetWare. NetWare is a special purpose operating system, no matter what Novell tried in the late 90's to make it a general purpose one (NetWare + Apache + MySQL + PHP = a LAMP server that is far more vulnerable to runaway thread based DoS). Linux is a general purpose operating system. This key difference between the two means that Linux got exposed to a lot more weird hardware than NetWare ever did. SCSI attached scanners made no sense on NetWare, but they did on Linux 10 years ago. Putting any kind of high-test graphics card into a NetWare server is a complete waste, but on Linux it'll give you those awesome wibbly-windows.

There comes a time when an open source project has to cut away the old stuff. Figuring this out is hard, especially when the really old crap is running under desks or in closets, entirely forgotten. It is for this reason that Smolt was born: to create a database of hardware that is running Linux, as a way to figure out driver development priorities, both for creating new, missing drivers and for keeping up old but still frequently used ones.

If you're running a Pentium 2-233 machine as your network's NTP server, you need to let the Linux community know about it so your platform maintains supportability. It is no longer good enough to assume that if it worked in Linux once, it'll always work in Linux.



Wednesday, December 09, 2009

Budget realities, 2010 version

Well, the Governor just presented her budget proposal for the coming year. And it's not good. WWU's budget is slated for a further 6.2% whack of state funds. We survived last year in large part due to the traditional budgetary techniques of cost shifting, reserve funds, and one-time funds. Those are pretty much gone now, so this will come out of bone.

Dire? Yes.



Wednesday, December 02, 2009

OpenSUSE deprecates SaX2

There is an ongoing thread on opensuse-factory right now about the announcement to ditch SaX2 from the distro. The reason for this is pretty well laid out in the mail message. Xorg now has vastly better auto-detection capabilities, so a tool that generates a static config isn't nearly as useful as it once was. Especially if it's on a laptop that can encounter any number of conference-room projectors.

This was always going to be the ultimate fate of Xwindows on Linux. Windows has been display-plug-n-play (and working plug-n-play at that) for many, many years now. The fact that Xwindows still needed a text file to work right was an anachronism. So long as it works, I'm glad to see it. As it happens, it doesn't work for me right now, but that's beside the point. Display properties, and especially display properties in an LCD world, should be plug-n-play.

As one list-member mentioned, the old way of doing it was a lot worse. What is the old way? Well... when I first started with Linux, in order to get my Xwindows working, which was the XFree86 version of Xwin by the way, I needed my monitor manual, the manual for my graphics card, a ruler, and a calculator. I kid you not. This was the only way to get truly accurate ModeLines, and it is the ModeLines that tell Xwindows what resolution, refresh rate, and bit-depth combinations it can get away with.
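
For the young 'uns, a ModeLine boils down to a pixel clock plus a set of horizontal and vertical timing numbers, and the arithmetic the calculator was for is basically the following. (The timing figures below are generic 1024x768-at-75Hz-ish numbers from memory, for illustration only; don't paste them into an xorg.conf.)
# Pixel clock (MHz) = total horizontal pixels * total vertical lines * refresh rate
$hTotal  = 1328   # visible 1024 plus blanking and sync porches (illustrative)
$vTotal  = 806    # visible 768 plus blanking and sync (illustrative)
$refresh = 75     # Hz
$hTotal * $vTotal * $refresh / 1e6   # roughly 80 MHz for this made-up mode
The manual and ruler came in because the porch and sync numbers had to stay inside what the monitor's electronics could actually sweep.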

Early tools had databases of graphics cards and monitors that you could sort through to pick defaults. But fine-tuning, or getting a resolution that the defaults didn't think was achievable, required hand hacking. In large part this was due to the analog nature of CRTs. Now that displays have all gone digital, the sheer breadth of display options is greatly reduced (in the CRT days you really could program an 800x800x32bit display into Xwin on a regular old 4:3 monitor, and it might even have looked good. You can't do that on an LCD.).

In addition, user-space display tools in both KDE and Gnome have advanced to the point that that's what most users will ever need. While I have done the gonzo-computing of a hand-edited xwindows config file, I do not miss it. I am glad that Xwindows has gotten to this point.

Unfortunately, it seems that auto-detection in Xwindows is about as reliable as it was in Windows ME. Which is to say, it works most of the time. But there are enough edge cases out there where it doesn't work right to make it feel iffy. It doesn't help that people tend to run Linux on their crappiest box in the house, boxes with old CRTs that don't report their information right. So I believe SaX2 still has some life in it, until the 8-year-old crap dies off in sufficient numbers that this tool, which dates from that era, won't be needed any more.



Thursday, November 19, 2009

A disturbance in the force.

A friend of mine's experiences with GIMP vs Photoshop are telling. Like many, she tried switching but found GIMP less than useful for any number of things. Such as no 'draw a hollow square' tool, among many, many others. When she poked the developers about this, the reply came back, in essence, "GIMP is not a Photoshop replacement, it's for photo manipulation."

Well, it seems that the Ubuntu distribution managers agree with my friend more than the GIMP developers, as they're dropping GIMP from the default install. Why? Well...
  • the general user doesn't use it
  • its user-interface is too complex
  • it's an application for professionals
  • desktop users just want to edit photos and they can do that in F-Spot
  • it's a photoshop replacement and photoshop isn't included by default in Windows...
  • it takes up room on the disc
If the most popular desktop linux on the planet calls GIMP a Photoshop replacement, then... it just might be a Photoshop replacement. No matter what the Devs think. It will be interesting to see what openSUSE and Fedora do in their next dev-cycles. If they keep GIMP, things will probably continue as usual. The same if a user revolt forces it back into the Ubuntu defaults. On the other hand, if Fedora and openSUSE follow suit, this will be a radical change in the GIMP community environment. They may start addressing the UI issues. Who knows.

As it stands, Adobe has nothing to fear from GIMP. Anyone well versed in Photoshop will find the UI conventions of GIMP wildly different, and the same applies to methods to solve certain image problems(*). Adobe only needs to fear the people who a) don't want to pay the umpty hundred bucks for Photoshop, b) aren't willing to pirate a copy, and c) are willing to tough it out and learn a completely new package with radically different UI metaphors. There aren't that many of those.

Me? I've never used Photoshop. Or if I have, it was back in the early 90's. I've been using GIMP all this time and that's what I know. I intend to keep it that way since I really don't want to start paying Adobe all that money. That said, I totally understand why people don't like it. I also miss simple tools like 'draw a square', and 'draw a hollow circle'.

(*) Some of these solution paths are patented by Adobe, so no one else can do it that way if they wanted to. This is what closed source software brings you.



Monday, November 09, 2009

The Firefox anniversary

Firefox turns 5 today. I'm sure you already knew that, what with it being covered industry-wide and all. This has caused me to look back on my own usage of Firefox over the years.

In the beginning, there was Mozilla. And I used it. And it was good. It had a nifty integrated html editor that I used on occasion. And I had used it for many a year.

I noticed the dev-work on Phoenix/Firebird and used it a bit at home on my Linux machine. Never did any serious browsing with it, but I did use it.

And then there was Firefox. When Mozilla announced that they were killing Mozilla-the-browser and replacing it with Firefox, I dutifully switched to that for day to day usage. I believe that was the 1.0.

And then there were the fights. Firefox did things differently than Mozilla did. I tried to take things in stride, but it was hard. Cookie handling was a big pet-peeve of mine (since remedied). The other one is still true.

I flirted with Opera briefly, but it was annoying in different ways. Sad.

And then there was the breakup. Which I blogged about here. You see, I'd learned about SeaMonkey, which is an open-source project aimed at bringing the defunct Mozilla-browser into the future. It had the experience I was used to, and worked with most of the Firefox extensions too! What's NOT to like? I was hooked and made the switch. Good-bye, Firefox! Won't miss you.

And then I moved to openSUSE as my primary desktop. This required a certain amount of Firefox usage simply because that was the 'built in' browser. Mostly I ignored it, since they had SeaMonkey as an option.

And then SeaMonkey started getting stale. The same UI for, like, 5 years gets old. And the little bits where it differed from the IE/FF experience were growing. So I started using FireFox on the side at work, as a way to do things like run my Google apps in a separate browser so I could do all of my other searching without directly associating my search terms with my Google account.

And then Firefox 3.5 came out. And it sucked less. I converted to FF3.5 at home, but still kept with SeaMonkey at work. It still involved some nose-holding in various spots, but I was determined to bull through. I got used to the popularity contest in the drop-down bar. I still miss the way typed in (or pasted in) URLs never show up in that list, but I got used to it.

And then SeaMonkey looked to be in PermaBeta for 2.0. Knowing I am a very small minority of web users by using SeaMonkey (0.58% of viewers of this blog, which is less than the 2.08% of you still using Mozilla), I had doubts about the long term prospects of SM. My Firefox usage ticked up again. And when Opera 10 came out, I gave it a real going over. For work stuff it didn't cut it, but it just might for home use.

And then SeaMonkey 2.0 actually released. Download it now! It integrated some of the more annoying-but-need-to-have features of Firefox (like the SSL handling) but kept the drop-down sort the way I like it. An MRU list.

And that brings me to today. At work Firefox is the browser I keep logged in to Google for various things, and still use SM for all of my other browsing. I find that handy.

And now you know.



Wednesday, September 30, 2009

I have a degree in this stuff

I have a CompSci degree. This qualified me for two things:
  • A career in academics
  • A career in programming
You'll note that Systems Administration is not on that list. My degree has helped my career by getting me past the "4 year degree in a related field" requirement of jobs like mine. An MIS degree would be more appropriate, but there were very few of those back when I graduated. It has indirectly helped me in troubleshooting, as I have a much better foundation in how the internals work than your average computer mechanic.

Anyway. Every so often I stumble across something that causes me to go Ooo! ooo! over the sheer computer science of it. Yesterday I stumbled across Barrelfish, and this paper. If I weren't sick today I'd have finished it, but even as far as I've gotten into it I can see the implications of what they're trying to do.

The core concept behind the Barrelfish operating system is to assume that each computing core does not share memory and has access to some kind of message-passing architecture. This has the side effect of having each computing core run its own kernel, which is why they're calling Barrelfish a 'multikernel operating system'. In essence, they're treating the insides of your computer like the distributed network that it is, and using already-existing distributed computing methods to improve it. The kind of multi-core we're doing now (SMP, ccNUMA) uses shared-memory techniques rather than message passing, and it seems that this doesn't scale as far as message passing does once core counts go higher.

They go into a lot more detail in the paper about why this is. A big one is the heterogeneity of CPU architectures out there in the marketplace, and they're not talking just AMD vs Intel vs CUDA; this is also Core vs Core2 vs Nehalem. This heterogeneity in the marketplace makes it very hard for a traditional operating system to be optimized for a specific platform.

A multikernel OS would use a discrete kernel for each microarchitecture. These kernels would communicate with each other using OS-standardized message-passing protocols. On top of these microkernels would sit the abstraction called an Operating System, upon which applications would run. Due to the modularity at the base of it, it would take much less effort to provide an optimized microkernel for a new microarchitecture.
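
As a rough user-space analogy (this is not how Barrelfish itself works, just an illustration of the shared-nothing, message-passing idea), PowerShell background jobs behave the same way: each job runs in its own process with its own state, and the only way results come back to the caller is as serialized messages:
# Each job runs in a separate process: no memory is shared with the caller.
$jobs = 1..4 | ForEach-Object {
    Start-Job -ArgumentList $_ -ScriptBlock {
        param($workerId)
        # Work happens against purely local state...
        "worker $workerId finished"
    }
}
# Results come back only as messages (serialized objects), never via shared state.
$jobs | Wait-Job | Receive-Job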

The use of message passing is very interesting to me. Back in college, parallel computing was my main focus. I ended up not pursuing that area of study in large part because I was a strictly C student in math, parallel computing was a largely academic endeavor when I graduated, and you needed to be at least a B student in math to hack it in grad school. It still fired my imagination, and there was squee when the Pentium Pro was released and you could do 2 CPU multiprocessing.

In my Databases class, we were tasked with creating a database-like thingy in code and writing a paper on it. It was up to us what we did with it. Having just finished my Parallel Computing class, I decided to investigate distributed databases. So I exercised the PVM extensions we had on our compilers thanks to that class. I then used the six Unix machines I had access to at the time to create a 6-node distributed database. I used statically defined tables and queries since I didn't have time to build a table parser or query processor, and I needed to get it working so I could do some tests on how optimization of table positioning impacted performance.

Looking back on it 14 years later (eek) I can see some serious faults in my implementation. But then, I've spent the last... 12 years working with a distributed database in the form of Novell's NDS and later eDirectory. At the time I was doing this project, Novell was actively developing the first version of NDS. They had some problems with their implementation too.

My results were decidedly inconclusive. There was a noise factor in my data that I was not able to isolate, and it drowned out what differences there were between my optimized and non-optimized runs (in hindsight I needed larger tables, by an order of magnitude or more). My analysis paper was largely an admission of failure. So when I got an A on the project I was confused enough that I went to the professor and asked how this was possible. His response?
"Once I realized you got it working at all, that's when you earned the A. At that point the paper didn't matter."
Dude. PVM is a message passing architecture, like most distributed systems. So yes, distributed systems are my thing. And they're talking about doing this on the motherboard! How cool is that?

Both Linux and Windows are adopting more message-passing architectures in their internal structures, as they scale better on highly parallel systems. In Linux this involved reducing the use of the Big Kernel Lock in anything possible, as invoking the BKL forces the kernel into single-threaded mode and that's not a good thing with, say, 16 cores. Windows 7 involves similar improvements. As more and more cores sneak into everyday computers, this becomes more of a problem.

An operating system working without the assumption of shared memory is a very different critter. Operating state has to be replicated to each core to facilitate correct functioning; you can't rely on a common memory address to handle this. It seems that the form of this state is key to performance, and is very sensitive to microarchitecture changes. What was good on a P4 may suck a lot on a Phenom II. The use of a per-core kernel allows the optimal structure to be used on each core, with changes replicated rather than shared, which improves performance. More importantly, it'll still be performant 5 years after release, assuming regular per-core kernel updates.

You'd also be able to use the 1.75GB of GDDR3 on your GeForce 295 as part of the operating system if you really wanted to! And some might.

I'd burble further, but I'm sick so not thinking straight. Definitely food for thought!



Friday, September 25, 2009

More thoughts on the Novell support change

Something struck me in the comments on the last post about this that I think needs repeating in a full post.

Novell spent quite a bit of time attempting to build up their 'community' forums for peer support, even going so far as to seed the community with supported 'sysops' who helped catalyze others into participating, creating a vibrant peer-support community. This made sense because it built goodwill and brand loyalty, and also reduced the cost-center known as 'support'. All those volunteers were taking the minor-issue load off of the call-in support! Money saved!

Fast forward several years. Novell bought SuSE and got heavily into Open Source. Gradually, as the OSS products started to take off commercially, the support contracts became the main money maker instead of product licenses. Just as suddenly, this vibrant goodwill-generating peer-support community is taking vital business away from the revenue-stream known as 'support'. Money lost!

Just a simple shift in the perception of where 'support' fits in the overall cost/revenue stream makes this move make complete sense.

Novell will absolutely be keeping the peer support forums going because they do provide a nice goodwill bonus to those too cheap to pay for support. However.... with 'general support' product-patches going behind a pay-wall, the utility of those forums decreases somewhat. Not all questions, or even most of them for that matter, require patches. But anyone who has called in for support knows the first question to be asked is, "are you on the latest code," and that applies to forum posts as well.

Being unable to get at the latest code for your product version means that the support forum volunteers will have to troubleshoot your problem based on code they may already be well past, or not have had recent experience with. This will necessarily degrade their accuracy, and therefore the quality of the peer support offered. This will actively hurt the utility of the peer-support forums. Unfortunately, this is as designed.

For users of Novell's actively developed but severely underdog products such as GroupWise, OES2, and Teaming+Conferencing, the added cost of paying for a maintenance/support contract can be used by internal advocates of Exchange, Windows, and SharePoint as evidence that it is time to jump ship. For users of Novell's industry-leading products such as Novell Identity Management, it will do exactly as designed and force those people into maintaining maintenance contracts.

The problem Novell is trying to address is the kind of company that only buys product licenses when they need to upgrade, and doesn't bother with maintenance unless they're very sure that a software upgrade will fall within the maintenance period. I know many past and present Novell shops who pay for their software this way. It has its disadvantages, because it requires convincing upper management to fork over big bucks every two to five years, and you have to justify Novell's existence every time. The requirement to have a maintenance contract in order for your highly skilled staff to get at TIDs and patches, something that used to be both free and very effective, is a major real-world added expense.

This is the kind of thing that can catalyze migration events. A certain percentage will pony up and pay for support every year, and grumble about it. Others, who have been lukewarm towards Novell for some time due to adherence to the underdog products, may take it as the sign needed to ditch these products and go for the industry leader instead.

This move will hurt their underdog-product market-share more than it will their mid-market and top-market products.

If you've read Novell financial statements in the past few years you will have noticed that they're making a lot more money on 'subscriptions' these days. This is intentional. They, like most of the industry right now, don't want you to buy your software in episodic bursts every couple years. They want you to put a yearly line-item in your budget that reads, "Send money to Novell," that you forget about because it is always there. These are the subscriptions, and they're the wave of the future!



Tuesday, September 08, 2009

DNS and AD Group Policy

This is aimed a bit more at local WWU users, but it is more widely applicable.

Now that we're moving to an environment where the health of Active Directory plays a much greater role, I've been taking a real close look at our DNS environment. As anyone who has ever received any training on AD knows, DNS is central to how AD works. AD uses DNS the way WinNT used WINS, the way IPX used SAPs, or NetWare uses SLP. Without it things break all over the place.

As I've stated in a previous post, our DNS environment is very fragmented. As we domain more and more machines, the 'univ.dir.wwu.edu' domain becomes the spot where the vast majority of computing resources are resolvable. Right now, the BIND servers are authoritative for the in-addr.arpa reverse-lookup domains, which is why the IP address I use for managing my AD environment resolves to something not in the domain. What's more, the BIND servers are the DNS servers we pass out to every client.

That said, we've done the work to make it work out. The BIND servers have delegation records to indicate that the AD DNS root domain of dir.wwu.edu is to be handled by the AD DNS servers. Windows clients are smart enough to notice this and do the DNS registration of their workstation names against the AD DNS servers and not the BIND servers. However, the in-addr.arpa domains are authoritative on the BIND servers, so the clients' attempts to register their reverse-lookup records all fail. Every client on our network has Event Log entries to this effect.
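
A quick way to sanity-check that kind of delegation from any domain-joined client is to chase the DC-locator SRV records through whichever DNS server the client normally uses (univ.dir.wwu.edu is our domain; substitute your own):
# Should return the _ldap SRV records for the domain controllers, which proves
# the BIND-to-AD delegation resolves from an ordinary client.
nslookup -type=SRV _ldap._tcp.dc._msdcs.univ.dir.wwu.edu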

Microsoft makes DNS client settings available as a target for management through Group Policy. This could be used to help ensure our environment stays safe, but it will require analysis before we do anything. Changes will not be made without a testing period. What can be done, and how can it help us?

Primary DNS Suffix
Probably the simplest setting of the lot. This would allow us to force all domained machines to consider univ.dir.wwu.edu to be their primary DNS domain and treat it accordingly for Dynamic DNS updates and resource lookups.

Dynamic Update
This forces/allows clients to register their names into the domain's DNS domain of univ.dir.wwu.edu. Most already do this, and this is desirable anyway. We're unlikely to deviate from default on this one.

DNS Suffix Search List
This specifies the DNS suffixes that will be applied to all lookup attempts that don't end in a period (that is, aren't fully qualified). This is one area we probably should use, but we don't know what to set. univ.dir.wwu.edu is at the top of the list for inclusion, but what else? wwu.edu seems logical, and admcs.wwu.edu is where a lot of central resources are located. But most of those are in univ.dir.wwu.edu now. So. Deserves thought.

Primary DNS Suffix Devolution
This determines whether to include the component parts of the primary DNS suffix in the DNS search list. If we set the primary DNS suffix to be univ.dir.wwu.edu, then the DNS resolver will also look in dir.wwu.edu and wwu.edu. I believe the default here is 'True'.

Register PTR Records
If the in-addr.arpa domains remain on the BIND servers, we should probably set this to False. At least so long as our BIND servers refuse dynamic updates that is.

Registration Refresh Interval
Determines how frequently to update Dynamic registrations. Deviation from default seems unlikely.

Replace Addresses in Conflicts
This setting governs how multiple registrations for the same IP (here meaning multiple A records pointing to the same IP) are handled. Since we're using insecure DNS updates at the moment, this setting deserves some research.

DNS Servers
If the Win/NW side of Tech Services wishes to open warfare with the Unix side of Tech Services, we'll set this to use the AD DNS servers for all domained machines. This setting overrides client-side DNS settings with the DNS servers defined in the Group Policy. No exceptions. A powerful tool. If we set this at all, it'll almost definitely be the BIND DNS servers. But I don't think we will. Also, it may be that Microsoft has removed this from the Server 2008 GPO, as it isn't listed on this page.

Register DNS Records with Connection-Specific DNS Suffix
If a machine has more than one network connection (very, very few non-VMware host machines will), this allows it to register those connections under their connection-specific DNS suffixes as well as the primary one. Due to the relative dearth of such configs, we're unlikely to change this from the default.

TTL Set in the A and PTR Records
Since we're likely to turn off PTR updates, this setting is redundant.

Update Security Level
As more and more stations are domained, there will come a time when we may wish to cut out the non-domained stations from updating into univ.dir.wwu.edu. If that time comes, we'll set this to 'secure only'. Until then, we won't touch it.

Update Top Level Domain Zones
This allows clients to update a TLD like .local. Since our tree is not rooted in a TLD, this doesn't apply to us.

Some of these can have wide-ranging effects, but they are helpful. I'm very interested in the search-list settings, since each of our desktop techs has tens of DNS domains to choose from depending on their duty area. Something here might greatly speed up resource resolution times.
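
Whatever we end up pushing, the quickest way to see what a client actually received is to read the effective values out of the registry. The path below is the standard TCP/IP parameters key; policy-applied values, if any, should land under HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient, though treat that second path as my recollection rather than gospel.
# Effective (non-policy) DNS client settings on a workstation.
# 'NV Domain' is where the primary DNS suffix is stored.
$tcpip = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
Get-ItemProperty -Path $tcpip |
    Select-Object Domain, 'NV Domain', SearchList, UseDomainNameDevolution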



Thursday, August 20, 2009

On databases and security

Charles Stross has a nice blog post up about the UK DNA database, database security, and the ever-dropping price of gene sequencing and replication. The UK has a government DNA database of anyone ever booked for anything by the police. Because of how these things work, lots of entities have access to it, for good reasons. As with the US No Fly List, being on it is seen as a black mark on your trustability. He posits some scenarios for injecting data into the DNA database through wireless and other methods.

Another thing he points out is that the gear required to reproduce DNA is really coming down in price. In the not-too-distant future, it is entirely possible that the organized criminal will be able to plant DNA at the scene of a crime. This could result in anything from pranks ("How'd the Prime Minister get to Edinburgh and back to London in time to jiz on a shop window?") to outright frame jobs.

Which is to say, once DNA reproduction gets into the hands of the criminal elements, it'll no longer be a good single-source biometric identifier. Presuming of course that the database backing it hasn't been perved.



Tuesday, August 11, 2009

Non-paid work hours

Ars Technica has an article up today about workers who put in a lot of unpaid hours thanks to their mobile devices. This isn't a new dynamic by any means; we saw a lot of this crop up when corporate web-mail started becoming ubiquitous, and before that with the few employees using remote desktop software (PCAnywhere, anyone?) to read email from home over corporate dialup. The BlackBerry introduced the phenomenon to the rest of the world, and the smartphone revolution is bringing it to the masses.

My old workplace was union, so it was in the process of figuring out how to compensate employees for after-hours call-outs shortly after we got web-mail working. There were a few state laws and similar rulings that directed how it should be handled, and ultimately they decided on no less than 2 hours of overtime pay for issues handled on the phone, and no less than 4 hours of overtime pay for issues requiring a site visit. Yet there was no payment for being officially on-call with a mandatory response time; actually responding to the call was seen as the payment. Even if being on-call meant not being able to go to a child's 3-hour dance recital.

Now that I'm an exempt employee, I don't get anything like overtime. If I spend 36 hours in a weekend shoving an upgrade into our systems through sheer force of will, I don't automatically get Monday off or a whonking big extra line-item on my next paycheck. It's between me and my manager how many hours I need to put in that week.

As for on-call, we don't have a formal on-call schedule. All of us agree we don't want one, and strive to make the informal one work for us all. No one wants to plan family vacations around an on-call schedule, or skip out of town sporting events for their kids just so they can be no more than an hour from the office just in case. It works for us, but all it'll take to force a formal policy is one bad apple.

For large corporations with national or global workforces, such gentleman's agreements aren't really doable. Therefore, I'm not at all surprised to see some lawsuits being spawned because of it. Yes, some industries come with on-call rotations baked in (systems administration being one of them). Others, such as tech writing, don't generally have much after-hours work, and yet I've seen second-hand such after-hours work (working on docs, conference calls, etc.) consume an additional 6 hours a day.

Paid/unpaid after-hours work gets even more exciting if there are serious timezone differences involved. East Coast workers with the home office on the West Coast will probably end up with quite a few 11pm conference calls. Reverse the locations, and the West Coast resident will likely end up with a lot of 5am conference calls. Companies that have drunk deeply from the off-shoring well have had to deal with this, but have had the benefit of different labor laws in their off-shored countries.

"Work" is now very flexible. Certain soulless employers will gleefully take advantage of that, which is where the lawsuits come from. In time, we may get better industry standard practice for this sort of thing, but it's still several years away. Until then, we're on our own.



Friday, August 07, 2009

Why aren't schools using open-source?

From http://blogs.techrepublic.com.com/opensource/?p=811

This article was primarily aimed at K-12, which is a much different environment than higher ed. For one, the budgets are a lot smaller per-pupil. However, some of the questions do apply to us as well.

As it happens, part of our mandate is to prepare our students for the Real World (tm). And until very recently, the Real World meant MS Office. We've been installing OpenOffice alongside MS Office on our lab images for some time, and according to our lab managers they've seen a significant increase in OO usage. I'm sure part of this is due to the big interface change Microsoft pushed with Office 2007, but this may also be reflective of a shift in mind-share on the part of our incoming students. Parallel installs just work; so long as you have the disk space and CPU power, they're very easy to set up.

Our choice of lab OS image has many complexities, not the least of which is a lack of certain applications. There are certain applications, of which Adobe Photoshop is but one, that don't have Linux versions yet. Because of this, Windows will remain.

We could do something like allow dual-boot workstations, or have a certain percentage of each lab as Linux stations. Hard drive sizes are big enough these days that we could dual-boot like that and still allow local-partition disk-imaging, and it would allow the student a choice in environments they can work in. Now that we're moving to a Windows environment, that actually better enables interoperability (samba). Novell's NCP client for Linux was iffy performance-wise, and we had political issues surrounding CIFS usage.

However... one of the obstacles in this is the lack of Linux workstation experience on the part of our lab managers. Running lab workstations is a constant cat and mouse game between students trying to do what they want, malware attempting to sneak in, and the manager attempting to keep a clean environment. You really want your lab-manager to be good at defensive desktop management, and that skill-set is very operating system dependent. Thus the reluctance regarding wide deployment of Linux in our labs.

Each professor can help urge OSS usage by not mandating file formats for homework submissions. The University as a whole can help urge it through retraining ITS staff in linux management, not just literacy. Certain faculty can promote it in their own classes, which some already do. But then, we have the budget flexibility to dual stack if we really want to.



Identity Management in .EDU land

We have a few challenges when it comes to an identity management system. As with any attempt to automate identity management, it is the exceptions that kill projects. This is an extension of the 80/20 rule: 80% of the cases will be dead easy to manage, and it's the special 20% where most of the business-rules meeting-time will be spent.

In our case, we have two major classes of users:
  • Students
  • Employees
And a few minor classes littered about, like Emeritus Professors. I don't know enough about those to talk knowledgeably.

The biggest problem we have is how to handle the overlaps. Student workers. Staff who take classes. We have a lot of student workers, but staff who take classes are another story. The existence of these types of people makes it impossible to treat the two big classes as mutually exclusive.

Banner handles this case pretty well from what I understand. The systems I manage, however, are another story. With eDirectory and the Novell Client, we had two big contexts named Students and Users. If your object was in one, that's the login script you ran. Active Directory was until recently Employee-only because of Exchange. We put the students in there (with no mailboxes, of course) two years ago, largely because we could and it made the student-employee problem easier to manage.

One of the thorniest questions we have right now is defining, "when is a student a student with a job, and when is a student an employee taking classes." Unfortunately, we do not have a handy business rule to solve that. A rule, for example, like this one:
If a STUDENT is taking less than M credit-hours of classes, and is employed in a job-class of C1-F9, then they shall be reclassed EMPLOYEE.
That would be nice. But we don't have it, because the manual exception-handling process this kicks off is not quite annoying enough to warrant the expense of deciding on an automatable threshold. Because this is a manual process, people rarely get moved back across the Student/Employee line in a timely way. If the migration process were automated, certain individuals would probably flop over the line every other quarter.
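Just to show the shape of the thing, here's a minimal sketch of what an automated pass over a nightly roster export might look like. To be clear: the file layout, field names, 12-credit threshold, and job-class pattern are all invented for illustration; nothing like this actually runs here.

#!/bin/bash
# Hypothetical sketch only: the roster path, field names, and the
# 12-credit threshold are invented; we have no such automated rule.
ROSTER=/var/idm/roster.csv        # assumed format: username,credit_hours,job_class
THRESHOLD=12                      # the "M credit-hours" knob from the rule above

while IFS=, read -r user credits jobclass; do
    # Under the threshold and holding a C1-F9 class job? Call them an employee.
    if [ "$credits" -lt "$THRESHOLD" ] && [[ "$jobclass" =~ ^[C-F][1-9]$ ]]; then
        echo "$user -> EMPLOYEE"
    else
        echo "$user -> STUDENT"
    fi
done < "$ROSTER"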

This is one nice example of the sorts of discussions you have to have when rolling out an identity management automation system. If we were given umpty thousand dollars to deploy Novell IDM in order to replace our home-built system, we'd have to start having these kinds of discussions again, even though we've had some kind of identity provisioning system since the early 90's. Because we DO have an existing one, some of the thornier questions of data-ownership and workflow are already solved. We'd just have to work through the current manual-intervention edge cases.

Labels: ,


Monday, August 03, 2009

The obsolescence of Word

Ars Technica had a nice opinion essay posted today called, "The prospects of Microsoft Word in the wiki-based world." In case you didn't catch it, the actual page name for the link is, "microsoft-word-1983---2008-rest-in-peace.ars". Clearly, they're predicting the death of Word as a major force.

And it isn't OpenOffice that's doing it, it's the cloud. Google Docs. MediaWiki. Anything with a RichEditor text interface. And for those things that just aren't usable in those interfaces, there are specialized tools that do that job better than Word does.

The second page of the essay goes into some detail about how the author was able to replace an old school file-server with a MediaWiki. MediaWiki, it seems, is an excellent document-management product. Most people already know how to use it (thank you Wikipedia), anything entered is indexed with the built in search tools, and there is integrated change-tracking. Contrast this with a standard File Server, where indexing is a recent add-on if it exists at all, change tracking is done at the application level if at all, and files just get lost and forgotten. MediaWiki just does it better.

I never expected, "MediaWiki is the Word killer," to be made as an argument, but there are some good points in there. I do very little editing in any word processor at work. I do much more spreadsheet work, as that's still a pretty solid data manipulation interface. Tech Services has a Wiki now, and we're slooowly increasing usage of it.

And yet, there are still some areas of my life where I use a stand-alone word processor. If I really, truly need better type-setting than can be provided by JavaScript and CSS-driven HTML, a stand-alone is the only way to go. If I'm actually going to print something off, perhaps because I have to fax it, I'm more likely to use a word processor. There are some cultural areas where solidly type-set documentation is still a must: wedding invitations, birth announcements, resumes, cover letters. And even these are going ever more electronic.

The last time I seriously job-searched (back in 2003) I spent hours polishing the formatting of my resume. Tweaking margins so the text would flow cleanly from one page to the next. Picking a distinctive yet readable font. Fine-tuning the spacing to help fit the text better. Inserting subtle graphic elements like horizontal lines. Inserting small graphics, such as my CNE logo. In the end I had a fine looking document! I even emailed it to HR when I applied. The cover letter got much the same treatment, but with less focus on detailed formatting.

If I were to start looking today, it is vastly more likely that I'd attach the document (a PDF by preference, to preserve formatting, but DOC is still doable) to an online job application submission system of some kind. Or worse yet, be presented a size-limited ASCII text-entry field I'd have to cut-and-paste my resume into. The same would go for the cover letter. One of these two still encourages finely tuned type-setting like I did in 2003. The other explicitly strips everything but line feeds out.

Even six years ago there was no actual paper involved.

So I'll close with this. If you need typesetting, which is distinct from text formatting, then you still need offline tools for processing words. This is because you're doing more than simple word processing; you're also processing the format of it all. But if all you're doing is bolding, highlighting, changing text sizes, and creating the odd table, then the online tools as they exist now are well and truly all you need. It has been a SysAdmin adage for years that most people could use WordPad instead of Word for most of what they do, and these days everything WordPad can do is now in your browser.

Labels:


Wednesday, July 29, 2009

One of the perks of working here

Even though WWU is located smack in the middle of Bellingham, WA, it abuts an arboretum. Sehome Hill Arboretum. It so happens that the shortest-by-distance pedestrian route between my office and campus takes me through there on the foot-paths. Here is the trail guide.

Even though today is going to get very hot for up here, I still walked up and back this morning to do some work on printing in the labs by way of Microsoft/PCounter. Along the way, I ran into this:
Tree arch in the Sehome Arboretum
The trail I was on crossed this road.

The trail itself is one of the main paths between the Birnam Wood dorms and campus. During regular session there is a fair amount of traffic on this trail. I think I was the first one down it today, as I ran into more than a few spider webs. Also? Even though this route is hillier than the slightly longer one, it's a good 5-7 degrees F cooler under the trees than by the main road.

It was a nice walk. I'll do it again tomorrow to head up to a meeting on campus. I believe that meeting is in a building that currently lacks AC. That could be a very sweaty meeting.

Labels:


Wednesday, July 15, 2009

Where DIY belongs

The question of, "When should you build it yourself and when should you get it off the shelf?" is one that varies from workplace to workplace. We heard several different variants of that when we were interviewing for the Vice Provost for IT last year. Some candidates only did home-brew when no off-the-shelf package was available; others looked at the total cost of both and chose from there. This is a nice proxy question for, "What is the role of open source in your environment," as it happens.

Backups are one area where duct tape and baling wire are to be discouraged most emphatically.

And now, a moment on tar. It is a very versatile tool, and is what a lot of unixy backup packages are built around. The main problem with backup and restore is not getting data to the backup medium, it is keeping track of what data is on which medium. Also, in these days of backup-to-disk, de-duplication is in the mix, and that's something tar can't do yet. So while you can build a tar-and-bash backup system from scratch without paying a cent, it will be lacking in certain very useful features.

Also? Tar doesn't work nearly as well on Windows.
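For illustration, the from-scratch tar-and-bash approach can be as small as the sketch below (paths and file names are invented placeholders). It also shows the weak point nicely: the which-data-is-on-which-medium bookkeeping is just a text file you have to maintain yourself.

#!/bin/bash
# Minimal tar-and-bash backup sketch. Paths and retention are invented
# placeholders; a real backup product tracks media, catalogs, and
# de-duplication for you, which this most certainly does not.
SOURCE=/srv/data
DEST=/backup
STAMP=$(date +%Y%m%d)
ARCHIVE="$DEST/data-$STAMP.tar.gz"

tar -czf "$ARCHIVE" "$SOURCE" 2>> "$DEST/backup-$STAMP.err"

# The "what data is on which medium" problem, solved badly: append the
# file list to a flat-file catalog and hope nobody loses it.
tar -tzf "$ARCHIVE" | sed "s/^/$STAMP /" >> "$DEST/catalog.txt"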

Your backup system is one area where you really do not want to invest a lot of developer creativity. You need it to be bullet proof, fault tolerant, able to handle a variety of data-types, and easy to maintain. Even the commercial packages fail some of these points some of the time, and home-brew systems fall apart much more often. The big backup boys have agents that allow backups of Oracle DBs, Linux filesystems, Exchange, and SharePoint all to the same backup system; a home-brew application would have to get very creative to do the same thing, and the problem gets even worse when it comes to restore.

Disaster Recovery is another area in which duct tape and baling wire are to be discouraged most emphatically.

There are battle-tested open-source packages out there that will help with this (DRBD, for one), depending on your environment. They're even widely used, so finding someone to replace the sysadmin who just had a run-in with a city bus is not that hard. Rsync can do a lot as well, so long as the scale is small. Most single systems can have something cobbled together.
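At the small end of the scale, the cobbled-together version usually amounts to a cron-driven rsync push to a warm standby. A minimal sketch, with the hostname, paths, and alert address all invented:

#!/bin/bash
# Shoe-string replication sketch: push the data set to a warm standby
# from cron. Hostname, paths, and the alert address are placeholders;
# the important part is alerting loudly on failure, because silent
# gaps are how DR plans die.
SRC=/srv/data/
DEST=standby.example.edu:/srv/data/

if ! rsync -aHAX --delete --numeric-ids "$SRC" "$DEST"; then
    echo "rsync to standby failed at $(date)" | \
        mail -s "DR replication FAILED" sysadmins@example.edu
fi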

Problems arise when you start talking about Windows, very complex installations, or situations where money is a major issue. If you throw enough money at a problem, most disaster recovery problems become a lot less complex. There is a lot of industry investment in DR infrastructure, so the tools are out there. Doing it on a shoe-string means that your disaster recovery also hangs by a shoe-string. If you're doing DR just to satisfy your auditors and don't plan on ever actually using it, that's one thing. But if you really expect to recover from a major disaster on that shoe-string, you'll be sorely surprised when that string snaps.

Business Continuity is an area where duct tape and baling wire should be flatly refused.

BC is in many ways DR with a much shorter recovery time. If you had problems getting your DR funded correctly, BC shouldn't even be on the timeline. Again, if it is just so you can check a box on some audit report, that's one thing. Expecting to run on such a rig is quite another.

And finally, if you do end up cobbling together backup, disaster recovery, or business continuity systems from their component parts, testing the system is even more important. In many cases testing DR/BC takes a production outage of some kind, which makes it hard to schedule tests. But testing is the only way to find out if your shoe-string can stand the load.

Labels: , ,


Wednesday, July 08, 2009

Google and Microsoft square off

As has been hard to avoid lately, Google has announced that it's releasing an actual operating system. And for hardware you can build yourself, not just on your phone. Some think this is the battle of the titans we've been expecting for years. I'm... not so sure of that.

Under the hood, the new Google OS (called Chrome, just to make it confusing) is Linux. They created "a new windowing system on top of a Linux kernel." This might be a replacement for X-Windows, or it could just be a replacement for Gnome/KDE.

To my mind, this isn't the battle of the titans some think it is. Linux on the net-top has been around for some time. Long enough for a separate distribution to gain some traction (hello, Moblin). What Google brings to the party that Moblin does not is its name. That alone will drive up adoption, regardless of how nice the user experience ends up being.

And finally, this is a distribution aimed at the cheapest (but admittedly fastest growing) segment of the PC market: sub-$500 laptops. Yes, Chrome could further chip away at the Microsoft desktop lock-in, but so far I have yet to see anything about Chrome that could actually do something significant about that. Chrome is far more likely to chip away at the Linux market-share than it is the Windows market-share, since it shares an ecosystem with Linux.

Microsoft is not quaking in its boots about this announcement. With the release of Android, it was pretty clear that a move like this was very likely. Microsoft itself has admitted that it needs to do better in the "slow but cheap" hardware space. They're already trying to compete in this space. Chrome will be another salvo from Google, but it won't make a hole below the water-line.

Labels: , ,


Monday, June 29, 2009

Changes are coming

Due to technical reasons I'll be getting to in a moment, this blog will be moving off of WWU's servers in the next few weeks. I have high confidence that the redirects I'll be putting in place will work and keep any existing links to the existing content still ultimately pointing at their formal home. In fact, those of you reading by way of the RSS or Atom feeds won't even notice. Images I link in will probably load a bit slower(+), and that's about it.

And now for the technical reasons. I've been keeping it under my hat since it has politics written all over it and I so don't go there on this blog. But WWU has decided (as of last September actually) that they're dropping the Novell contract and going full Microsoft to save money. And really, I've seen the financials. Much as it pains this red heart, the dollars speak volumes. It really is cheaper to go Microsoft, to the tune of around $83,000. In this era of budget deficits, that's most of an FTE. Speaking as the FTE most likely to get cut in this department, that makes it kind of personal.

Microsoft? The cheap option?

Yes, go fig. But that's how the pricing is laid out. We were deep enough into the blue beast already (Exchange, MS-SQL, SharePoint is embryonic but present and going to grow, there is Office on every Windows desktop) that going deeper wasn't much of an extra cost per year. To put it even more bluntly, "Novell did not provide enough value for the cost."

The question of what's happening to our SLES servers is still up for debate. We could get those support certificates from Microsoft directly. Or buy them retail from Novell. I don't know what we're doing there.

Which means that we're doing a migration project to replace the WUF 6-node NetWare cluster with something on Windows that does the same things. NetStorage is the hardest thing to replace (I know I'm going to miss it); the file-serving and printing are challenging but certainly manageable. The "myweb" service will continue, served by a LAMP server with the home directories Samba-mounted to it, so it will stay on Apache. It could have been done with IIS, but that would have been an ugly hack.
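For the curious, the Samba-mount half of that is the easy part. Something along these lines in /etc/fstab on the LAMP box would do it; the server name, share name, and credentials file here are placeholders, not our actual config:

# Hypothetical /etc/fstab entry on the LAMP server: mount the Windows
# home-directory share where Apache expects to find the web content.
# Server name, share name, and credentials file are placeholders.
//files.example.edu/homes  /home/myweb  cifs  credentials=/etc/myweb.cred,ro,_netdev  0  0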

As soon as we get hardware (7/1 is when the money becomes available) we'll be hitting the fast phase of the project. We hope to have it all in place by fall quarter. We'll still maintain the eDirectory replica servers for the rest of the Novell stuff on campus that is not supported (directly) by me. But for all intents and purposes, Technical Services will be out of the NetWare/OES business by October.

OH MY GOD! YOU'RE LEAVING! THAT'S WHY YOU'RE MOVING THE BLOG!

No, no. That's not the reason I'm moving this blog. Unfortunately for this blog, there was exactly one regular user of the SFTP service we provided(*). Me. So that's one service we're not migrating. It could be done with Cygwin's SSH server and some cunning scripting to synchronize the password database in Cygwin with AD, if I really wanted to. But... it's just me. Therefore, I need to find an alternate method for Blogger to push data at the blog.
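For the record, the cunning scripting is mostly Cygwin's own mkpasswd and mkgroup tools pointed at the domain. Roughly this, and I'm going from memory here, so treat it as a sketch rather than a recipe:

# Circa-2009 Cygwin recipe, from memory: regenerate Cygwin's passwd and
# group files from the local SAM plus the AD domain, then re-run from a
# scheduled task whenever accounts change.
mkpasswd -l -d > /etc/passwd
mkgroup  -l -d > /etc/group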

Couple that with some discreet hints from some fellow employees that just maybe, perhaps, a blog like mine really shouldn't be run from Western's servers, and you have another reason. Freedom of information and publish-or-perish academia notwithstanding, I am staff, not tenured faculty. Even with that disclaimer at the top of the blog page (which you RSS readers haven't seen since you subscribed) that says I don't speak for Western, what I say unavoidably reflects on the management of this University. I've kept this in mind from the start, which is why I don't talk about contentious issues the University is facing in any terms other than how they directly affect me. It's also why I haven't mentioned the dropping of the Novell contract until now, when it is effectively written in stone.

So. It's time to move off of Western's servers. The migration will probably happen close to the time we cut over MyWeb to the new servers. Which is fitting, really, as this was the first web-page on MyWeb. This'll also mean that this blog will no longer be served to you by a NetWare 6.5 server. Yep, for those that didn't know, this blog's web-server is Apache2 running on NetWare 6.5.

(+) Moving from a server with an effective load-average of 0.25 to one closer to 3.00 (multi-core, though) does make a difference. Also, our pipes are pretty clean relatively speaking.

(*) Largely because when we introduced this service, NetWare's openssh server relied on a function in libc that liked to get stuck and render the service unusable until a reboot. MyWeb was also affected by that. That was back in 2004-06. The service instability drove users away, I'm sure. NetStorage is more web-like anyway, which users like better.

Labels: ,


Friday, June 19, 2009

End of an era

From internal email:
As some of you may know, the WWU dial-up modem Student and Faculty/Staff pools are being discontinued on Monday, June 29, 2009.
That's right. We're getting out of the modem business. We still had a couple regular users. If I remember right, the modem gear we had was bought with student-tech-fee money a LONG time ago and was beginning to fail. And why replace modem gear in this age? We're helping the few modem users onto alternate methods.

Like many universities, we were the sole ISP in town for many years. That changed when the Internet became more than just the playground of universities and college kids, but it was true for a while. The last vestige of that goes away on the 29th.

Labels:


Wednesday, June 10, 2009

Outlook for everything

Back when I worked in a GroupWise shop, we'd get the occasional request from end users to see if Outlook could be used against the GroupWise back-end. Back in those days there was a MAPI plug-in for Outlook that allowed it to talk natively to GroupWise, and it went rather well. Then Microsoft made some changes, Novell made some changes, and the plug-in broke. GW admins still remember the plug-in, because it allowed a GroupWise back end to look exactly like an Exchange back end to the end users.

Through the grapevine I've heard tales of Exchange to GroupWise migrations almost coming to a halt when it came time to pry Outlook out of the hands of certain highly placed noisy executives. The MAPI plugin was very useful in quieting them down. Also, some PDA sync software (this WAS a while ago) only worked with Outlook, and using the MAPI plugin was a way to allow a sync between your PDA and GroupWise. I haven't checked if there is something equivalent in the modern GW8 era.

It seems like Google has figured this out. A plugin for Outlook that'll talk to Google Apps directly. It'll allow the die-hard Outlook users to still keep using the product they love.

Labels: ,


Friday, May 29, 2009

Behavior modification

First off, this isn't sparked by anything in the office. We're pretty mellow around here.

Praise in public, correct in private

Simple words, yet we've all run into violations of it. Such as the following from a hypothetical manager:
We've been having some problems lately with people using their cell-phones excessively during working hours. We need to try and do better.
Everyone knows that "we" actually means [person], and that the manager is too gutless to talk to [person] about their excessive cell-phone use, preferring instead to issue a semi-anonymous dictate from on high. In my opinion, this is probably the most common manifestation of failing to follow this rule. Less common are the outright public criticisms, as most people realize that isn't a winning strategy (unless the manager intends to rule by fear, which some do).

That said, there are some circumstances in which a public correction is called for, such as when the behavior requiring correction was highly public, highly sensitive in some way, or when failure to address the problem in public would significantly impact morale. In a sense, this is a failure on the part of the errant party to keep their transgressions reasonably private. Excessive local-call abuse? Pretty private, worthy of a managerial talk in person. Surfing porn during working hours? Pretty private until you get caught, then it gets public real fast (at least in America).

If your transgression was big enough that the manager has to save face, you will not get a private rebuke.

As I said, this office is pretty mellow and this hasn't come up. But I still hear stories of other places.

Labels:


Wednesday, May 27, 2009

A new addictive site

The site ServerFault has just gone public. This is a new project from the folks that brought you StackOverflow. It is a place for SysAdmin types to ping each other for questions, in a format that's very google-friendly. Handy when you're googling for a strange issue. It has a reputation management system that looks pretty good as well. Very nifty.

Labels:


Wednesday, May 13, 2009

Thinking of an Atom-based netbook?

Anandtech is running a review of an Atom-based motherboard from NVidia right now. What grabbed my attention was page 5 of the review, where Anand compares Atom performance against an old-school Pentium 4 processor. It was very interesting! This particular motherboard has a pair of Atom processors, but the review also included benchmarks from a single-CPU Atom motherboard of the same kind used in most netbooks.

If you're thinking about an Atom-based netbook, this should give you a fair idea about how it should perform.

Labels:


Tuesday, April 28, 2009

LinuxFest Northwest: gender imbalance

I went to LinuxFest Northwest this weekend. It was interesting! OpenSUSE, Ubuntu, Fedora, and FreeBSD were all there and passing out CDs. I learned stuff.

In one of the sessions I went to, "Participate or Die!", the presenter (a Fedora guy) was asked whether he is seeing any change in the gender imbalance at Linux events. He hemmed and hawed and ultimately said, 'not really'. I've been thinking about that myself, as I've noticed a similar thing at BrainShare.

Looking at what I've seen amongst my friends, the women ARE out there. They're just not well represented in the ranks of the code-monkeys. Among closed-source shops, I see women in many places.

I have known several women involved with technical writing and user-factors. In fact, I don't know any men involved in these roles. Amongst all but the largest and best funded of open-source projects, tech-writing is largely done by the programmers themselves or by the end-user community on a Wiki. Except for the Wiki, the same can be said for interface design. As the tech-writers I know lament, programmers do a half-assed job of doc, and write for other programmers rather than for somewhat clueless end-users. At the same time, the UI choices of some projects can be downright tragic for all but fellow code-monkeys. This is the reason the large pay-for-it closed-source software development firms employ dedicated Technical Writers with actual English degrees (you hope) to produce their doc.

I've also known some women involved with QA. In closed-source shops QA is largely done in house, and there may be an NDA-covered beta among trusted customers towards the end of the dev-cycle. In the land of small-project open-source, QA is again done by developers and maybe a small cadre of bug-finders. The fervent hope is that bug-finders will occasionally submit a patch to fix the bug as well.

I also know women involved with defining the spec for software. Generally this is for internal clients, but the same applies elsewhere. These are the women who meet with internal customers to figure out what they need, and write it up as a feature list that developers can code against. These women also frequently act as the liaison between the programmers and the requesting unit. In the land of open-source, the spec is typically generated in the head of a programmer who has a problem that needs solving, who then goes out to solve it, and ultimately publishes the problem-resolution as an open-source project in case anyone else wants that problem solved too.

All of this underlines one of the key problems of Linux that they've been trying to shed of late. For years Linux was made by coders, for coders, and it certainly looked and behaved like that. It has only been in recent years that a concerted effort has been made to try and make Linux desktops look and feel comprehensible to tech-shy non-nerds. Ubuntu's success comes in large part from these efforts, and Novell has spent a lot of time doing the same through user-factors studies.

Taking a step back, I see women in every step of the closed-source software development process, though they are underrepresented in the ranks of the code-monkeys. The open-source dev-process, on the other hand, is almost all about the code-monkey in all but the largest projects. Therefore it is no surprise that women are significantly absent at Linux-conventions.

I've read a lot of press about the struggles Computer Science departments have in attracting women to their programs. At the same time, Linux as an ecosystem has a hard time attracting women as active devs. Once more women start getting degreed, or stop getting scared off in the formative teen years (which is something Linux et al. can help with), we'll see more of them among the code-slingers.

Something else that might help would be to tweak the CompSci course offerings to perhaps partner with the English departments or Business departments to produce courses aimed at the hard problem of translating user requirements into geek, and geek into manuals. Because the Software Engineering process involves more than just writing code in teams, it involves:
  • Building the spec
  • Working to the spec
  • Testing the produced code against the spec
  • Writing the How-To manual against the deliverable code
  • Delivery
This is an interdisciplinary process involving more than just programmers. The programmers need to be familiar with each step, of course, but it would be better if other people gave those steps the same focus the programmers give to producing the code. Doing this in class might just inspire more non-programmers to participate on projects in such key areas as helping guide the UI, writing how-tos and man-pages, and creatively torturing software to make bugs squeal. And maybe even inspire some to give programming a try, some who never really looked at it before due to unfortunate cultural blinders. That would REALLY help.

Labels: ,


Friday, April 03, 2009

Open-sourcing eDirectory?

The topic of open-sourcing eDirectory comes up every so often. The answer is always the same: it can't be done. Novell NDS, and the eDirectory that followed it, use technology licensed from RSA, and RSA will not allow their code to be open-sourced. And that's it.

However... it isn't the RSA technology that allows eDirectory to scale as far as it does. To the best of my knowledge, that's pure Novell IP, based on close to 20 years of distributed directory experience. The RSA stuff is used in the password process, specifically the NDS Password, as well as in authenticating eDirectory servers to the tree and each other. The RSA code is a key part of the glue holding the directory together.

If Novell really wanted to, they could produce another directory that scales as far as eDirectory does. This directory would be fundamentally incompatible with eDir because it would have to be made without any RSA code, which eDirectory requires. This hypothetical open-source directory could scale as far as eDir does, but would have to use a security process that is also open-source.

This would take a lot of engineering on the part of Novell. The RSA stuff has been central to both NDS and eDir for all of those close-to-20 years, and the dependency tree is probably very large. The RSA code is even involved in the NCP protocol that eDir uses to talk with other eDir servers, so a new network protocol would probably have to be created from scratch. At the pace Novell is developing software these days, this project would probably take 2-3 years.

Since it would take a lot of developer time, I don't see Novell creating an open-source eDir-clone any time soon. Too much effort for what'll essentially be a good-will move with little revenue generating potential. That's capitalism for you.

Labels: , ,


Tuesday, March 31, 2009

When perfection is the standard

The disaster recovery infrastructure is an area where perfection is the standard, and anything less than perfection is a fault that needs fixing. It shares this distinction with other things like Air Traffic Control and sports officiating. In any area where perfection is the standard, any failure of any kind brings wincing. There are ways to manage around faults, but there really shouldn't be faults in the first place.

In ATC there are constant cross-checks and procedures to ensure that true life-safety faults only happen after a series of faults. In sports officiating, the advent of "instant replay" rules assists officials in seeing what actually happened from angles other than the ones they saw, all as a way to improve the results. In DR, any time a backup or replication process fails, it leaves an opening through which major data-loss can possibly occur. Each of these has its unavoidable, "Oh *****," moments. Which leads to frustration when it happens too often.

At my old job we had taken some paperwork steps towards documenting DR failures. We didn't have anything like a business-continuity process, but we did have tape backup. When backups failed, there was a form that needed to be filled out and filed, explaining why the fault happened and what can be done to help it not happen again. I filled out a lot of those forms.

Yeah, perfection is the standard for backups. We haven't come even remotely close to perfection for many, many months. Some of it is simple technology faults, like DataProtector and NetWare needing tweaking to talk to each other well or over-used tape drives giving up the ghost and requiring replacement. Some of it is people faults, like forgetting to change out the tapes on Friday so all the weekend fulls fail due to a lack of non-scratch media. Some of it is management process faults, like discovering the sole tape library fell off of support and no one noticed. Some of it is market-place faults, like discovering the sole tape library will be end-of-lifed by the vendor in 10 months. Some of these haven't happened yet, but they are areas that can fail.

If the stimulus fairy visits us, backup infrastructure is top of the list for spending.

Labels: ,


Friday, March 27, 2009

Computer labs in a ubiquitous computing world

Ars Technica has an article up called, When every student has a laptop, why run computer labs?

It's a good question. But before I go into it, I should mention something. What I do for WWU doesn't have a lot to do with our labs. The biggest interaction I have with them is for printing and maybe some Zen or GPO policies. I also know some of the people who support them, and I sit in meetings where other people gripe about them. So I'm speaking as someone who works around people who deal with them, not as someone who deals with them or has any decision-making power.

Why run computer labs?

In the beginning it was to provide computers to students who didn't have one.
Then, it was to provide on-campus computers to students who didn't have a laptop.

Now that almost every student has a computer, and most of those are laptops, it makes less sense. Centralized printers where they can print off assignments from their own hardware? Yes. 60-seat general computing labs? Um.

The point is made in the Ars Technica article that specialized software students generally wouldn't have, such as SPSS or the full Adobe Acrobat suite, is a good reason to have them. This is true. We have not only the general computing labs run by ATUS, but also special-purpose labs run by ATUS and the various colleges. We now have a lab with a large-format printer, something I guarantee no student has in their dorm or apartment, and a flat-bed scanner. One non-ATUS lab has VMWare Workstation installed on all the workstations. Some of the general computing labs are actual classrooms some of the time.

In our specific case, we have one software package in universal use that greatly encourages the existence of the general computing lab.

The Novell Client.

In order to get drive-map access to the NetWare cluster, you need that. This is not a package you want to inflict on a home machine without the victim knowing what they're in for. So we need to provide computers with the client installed so students can get at their files simply. WebDAV through NetStorage goes some of the way, but it can be tricky to set up.
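Tricky, but doable. On a Linux machine, for example, the davfs2 route looks roughly like this; the URL and mount point are placeholders, not our actual NetStorage address:

# Hypothetical example: mounting NetStorage over WebDAV from a Linux
# laptop with davfs2 installed. URL and mount point are placeholders.
sudo mount -t davfs https://files.example.edu/oneNet/NetStorage/ /mnt/netstorage
# ...work on the files as if they were local...
sudo umount /mnt/netstorage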

If we were a pure Windows network, it wouldn't be so bad. Both OS X and all the major Linuxes come with Samba pre-installed, which eases access to Windows networks. Printing isn't quite as convenient, but at least you can get at your files easily enough once you're inside the firewall.
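To make that concrete, getting at a Windows file share from a Linux laptop is a one-liner either way. These are illustrative examples only, with invented server and share names:

# Illustrative only; server and share names are invented.
smbclient -U jsmith //files.example.edu/students                            # interactive, FTP-like access
sudo mount -t cifs //files.example.edu/students /mnt/u -o username=jsmith   # or mount it outright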

In the end, except for our NCP dependencies, we could possibly close some of our GC labs to save money. However, we do track lab utilization, and those numbers may tell a different story. I know some students don't bother hauling their laptop to campus so long as they can use a lab machine for a quick social-networking fix. If we start closing labs, those students will start hauling their gear to campus and we'll save money. I still think we need to provide general-access printers at various spots, which is something Novell iPrint is rather good for. We also need to provide access to the special software packages that are needed for teaching, things like SPSS and MatLab.

The role of the computer lab has changed now that all but a few students have laptops. We still need them for specialized teaching functions, but general access to computing is no longer a primary function. The convenience factor of simple internet access drives some usage, and it may even be a majority. But the labs aren't going away any time soon. Their printers, even less so.

Labels: ,


Tuesday, March 24, 2009

Budgetary efficiency

As I've mentioned, there is a major budget shortfall coming real soon. In the past two weeks various entities have gone before the Board of Trustees discussing the effects of budget cuts on their units. The documents submitted can be found here. The Vice Provost of Information and Telecommunication, my grand-boss, also had a presentation, which you can view here. The especially nosy can even listen to the presentation here; it's the 12:45 file and starts about a minute-plus in.

There were some interesting bits in the presentation:
"In 2007, our central IT staff (excluding SciTech & Secretarial) totaled 73 persons. The average for our peer group (with greater than 10,000 student FTE) was 81 people. While a difference of 8 FTE may not seem great, it has a significant impact on our ability to support our users. This is compounded if we consider that student FTE grew a cumulative 16% in the past decade; faculty and staff FTE grew at a cumulative 14% while ITS staff declined 3% "
So, our supported environment grew, and we lost people. Right. Moving on...
"Similarly the budget numbers reveal the same trend. Western's 2007 operating budget for ITS was 6.65 million. The average for our peer institutions (with greater than 10,000 student FTE) was 8.17 million. Total budgets including recharge and student technology fees were 7.8 million for Western and 10.3 million for our peer group."
And we're under-resourced compared to our peer institutions. Right.

This can be spun a couple of ways. The spin being given right now, when we're being faced with a major budget cut, is that we're already running a very efficient operation, and cutting now would seriously affect provided services. A couple years ago when times were more flush, the spin was that we're under resourced compared to our peer institutions, and this is harming service robustness.

Both, as it happens, are true. We are running a very lean organization that gets a lot done for the dollars being spent on it. At the same time, that very same shoe-string attitude has overlooked certain business continuity concerns that worry those of us who would have to rebuild everything in the case of a major disaster. Like the facilities budget, we also run a "deferred maintenance" list of things we'd like to fix now but aren't critical enough to warrant emergency spending. Since every dollar is a dear dollar, major purchases such as disk arrays or tape libraries have to last a long, long time. We still have some HP ML530 servers in service, and that is a 9-year-old server (old enough that HP lists Banyan Vines drivers for it).

This is continually vexing to vendors who cold-call me. Even in more flush times, anything that cost more than $3000 required pushing to get, and anything that cost over $20,000 was pretty much out of the question. Storage arrays that even on academic discount cost north of $80,000 require exceptional financing and can take several years to get approved. In budget-constrained times such as these, anything that costs over $1000 has to go before a budget review process.

It is continually aggravating to work in an organization as under-resourced as we are. Our disaster recovery infrastructure is questionable, and business continuity is a luxury we just plain can't afford. Two years ago there was a push for some business continuity, but it ran smack into the shoe-string. The MSA1500 that I've railed about for so long was purchased as a BC device, but it is fundamentally unsuited to the task. Getting data onto it was a mish-mash of open source and hand-coded rigging. We've since abandoned that approach, as it looks like 2012 may be the earliest we can afford to think about it again.

As a co-worker once ranted, "They expect enterprise level service for a small-business budget."

You'd think this would be the gold-plated opening for open source software. It hasn't been. Our problem isn't so much software as it is hardware. If we can GET the hardware for business continuity, it'll probably be open source software that actually handles the data replication. Replacing Blackboard with Moodle would require new hardware, since we would have to dual-stack for two years in order to handle grade challenges for classes taught on Blackboard. Moodle would also require an additional FTE due to the amount of customization required to make it as Blackboard-like as possible. And these are only two examples.

It was very encouraging to see that the top level of our organization (that Vice Provost) is very aware of the problem.

Labels: ,

