May 2007 Archives

Web site statistics

We use Urchin 5.6 for our web site statistics. It works better for us than Google Analytics for a number of reasons, which is why it is somewhat irksome that a newer version of the Urchin software hasn't come out. I hear reports that Google, who bought Urchin a while back, is working on a new software-based version of their statistics product, but I haven't heard much beyond that.

I hope it comes out.

Google Analytics is unabashedly designed around advertising-related statistics. No surprise, since that's where the money is to be made. And for that, it works great.

What it doesn't do is tell me a few, very key things:
  • How many total bytes did this web-server serve in this time period? Network monitoring will give me traffic for the whole machine, but I want the figure for this web-server specifically.
  • What are the top 10 hit files?
  • What are the top 10 files generating traffic?
These are things I'm concerned about as a webmaster. This is stuff you can only get by parsing web-server logs.
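For the curious, here's the sort of thing I mean. This is a minimal sketch in Python, not our actual tooling (Urchin does the real work); the log file name, the "combined" log format, and the top-10 cutoff are all assumptions:

    import re
    from collections import Counter

    # Matches the request, status, and byte-count fields of a combined-format log line
    LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+)[^"]*" \d{3} (?P<bytes>\d+|-)')

    total_bytes = 0
    hits = Counter()      # hit counts per file
    traffic = Counter()   # bytes served per file

    with open("access_log") as log:
        for line in log:
            m = LINE.search(line)
            if not m:
                continue
            sent = 0 if m.group("bytes") == "-" else int(m.group("bytes"))
            total_bytes += sent
            hits[m.group("path")] += 1
            traffic[m.group("path")] += sent

    print("Total bytes served:", total_bytes)
    print("Top 10 by hits:", hits.most_common(10))
    print("Top 10 by traffic:", traffic.most_common(10))

None of that requires a JavaScript beacon on the page, which is exactly the point.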

Of the top 10 most-hit files on student MyWeb, 6 would be revealed by Google Analytics.
Of the top 10 files on student MyWeb by traffic generated, which together account for 81% of total data transfer, not a single one would be revealed by Google Analytics.

The top file last week for student MyWeb is an MP3 file generating 31% of total data-transfer traffic. After digging into the actual log files to see what was referring that traffic, I learned that there is a new Flash-based music search service out there. While Analytics would track the loading of the Flash file itself on those non-WWU servers, it won't track the transfer from my server. That Flash app definitely doesn't execute custom JavaScript.

Google Analytics and server-log parsing programs serve different market segments. Google, understandably, is only interested in the ad-driven segment. I just wish they'd get off their butts and release a new version of the log-parsing Urchin software.

I've been asked how I've been managing a NetWare network if my primary workstation is openSUSE. That's a good question. Right now I'm doing it through two methods:
  1. I have WinXP running in a Xen VM that I use for 90% of it. Network throughput blows chunks, though.
  2. NetStorage (MyFiles for you WWU people) supports WebDAV, and so does Nautilus.
[Screen capture: Nautilus browsing NetStorage over WebDAV]
As you can see, it just works. In fact, I used that very window to drop the screen-cap I made into my myweb folder for publication.
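There's no magic under the hood, either: WebDAV is just HTTP with a few extra verbs, so anything that can do an authenticated PUT can drop a file into MyWeb the same way Nautilus does. A rough sketch using the Python requests library; the URL, folder layout, and credentials are made up for illustration:

    import requests

    # Hypothetical NetStorage WebDAV URL; the real path depends on the server setup
    url = "https://myfiles.example.edu/NetStorage/MyWeb/screencap.png"

    with open("screencap.png", "rb") as f:
        resp = requests.put(url, data=f, auth=("username", "password"))

    resp.raise_for_status()   # anything other than a 2xx means the upload failed
    print("Uploaded, status", resp.status_code)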

That said, there IS a way to get the Novell Client for Linux 1.2 installed on openSUSE; I just haven't done it yet. You can find instructions in the Novell newsgroups, which Google handily archives. I should mention that NCL 1.2 will be getting service-packed when SLES10 SP1 comes out within the next month, so those instructions may not be valid once SP1 is published on download.novell.com.

A peeve

I've ranted previously about why I don't like Firefox. I use Seamonkey.

I'm also using openSUSE 10.2 for my work desktop.

OpenSUSE has compiled Seamonkey as a 64-bit package rather than 32-bit. This made Flash a rather dodgy thing until Adobe released v9 for Linux. Unfortunately, Adobe has yet to release a 64-bit version of Flash, so I'm stuck using nspluginwrapper to get Flash at all. And since Flash is on about, oh, 80% of commercial web pages, it gets loaded a lot.

Something, somewhere, causes nspluginwrapper to hang in such a way that it consumes 100% of a CPU. I have a dual core, so this is livable. It also happens often enough that I've modified my Seamonkey launcher to "nice" the Seamonkey process to as low a priority as I can get. I don't know what causes it to spike like that, but cnn.com seems to trigger it, and YouTube vids are very likely to trigger it too. I've taken to using Firefox, 32-bit on 10.2, to view that sort of thing if I have to.
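The launcher tweak is nothing fancy. This isn't my actual launcher, just a rough Python sketch of the idea, with the binary name and arguments assumed:

    #!/usr/bin/env python
    import os
    import sys

    # Ask for the weakest scheduling priority an unprivileged user can get
    os.nice(19)

    # Replace this process with Seamonkey, passing along any arguments
    os.execvp("seamonkey", ["seamonkey"] + sys.argv[1:])

That way, when nspluginwrapper goes into its 100%-CPU sulk, it at least loses every fight for the processor.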

Once Adobe gets off their butts and releases a 64-bit Flash plugin for Linux, I'll be a very happy camper.

The sky has not fallen

Today is the day we're flipping the switch and expiring passwords that haven't been changed in X days. There have been a metric ton of emails about this, and we've notified everyone we know who hasn't changed their password (and we can tell who they are) many, many times. Of the 21,000 or so accounts, I think I heard that 3,000 hadn't changed passwords yet. Those 3K people will have their passwords expired randomly over the next two weeks.
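The staggering itself isn't complicated. Here's a rough sketch of the idea in Python, not the script we're actually running; the account names and the exact window are made up:

    import random
    from datetime import datetime, timedelta

    def schedule_expirations(accounts, window_days=14):
        """Assign each account a random forced-expiration time within the window."""
        now = datetime.now()
        return {
            acct: now + timedelta(seconds=random.uniform(0, window_days * 86400))
            for acct in accounts
        }

    # Roughly 3,000 accounts still haven't changed their password
    remaining = ["user%04d" % i for i in range(3000)]
    schedule = schedule_expirations(remaining)
    for acct in sorted(schedule)[:5]:
        print(acct, "expires", schedule[acct])

The point of spreading them out, presumably, is that the Helpdesk doesn't get all 3,000 calls in one morning.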

This morning we haven't had a single call at either Helpdesk! Either these people don't log in as often as we thought (more likely), or they haven't had any issues with the password change screen.

Here is hoping it keeps up!

Editorial: patch Tuesday.

From Slashdot:

http://searchsecurity.techtarget.com/columnItem/0,294698,sid14_gci1254457,00.html
http://it.slashdot.org/article.pl?sid=07/05/10/1652200

Specifically, an opinion that Microsoft should get rid of their regularly scheduled patch releases and go to opportunistic ones. The argument stems from the damage the MS-DNS flaw has caused: Microsoft had a patch for it, so why didn't they release it sooner, or some such.

He closes with the statement:
The value of the predictability of the monthly schedule simply doesn't outweigh the danger to customers posed by the flaws that go unpatched for three or four weeks between cycles.
There is a problem with this. I bring to your attention a post on Bugtraq yesterday from iDefense about the Exchange 2000 IMAP vulnerability. I quote the key piece, which is section VIII:
VIII. DISCLOSURE TIMELINE

01/10/2007 Initial vendor notification
01/22/2007 Initial vendor response
05/08/2007 Coordinated public disclosure
Note the dates there. The disclosure was made to Microsoft in January, and it was May before the fix was released, nearly four months later. The time between 'initial vendor response' and 'coordinated public disclosure' was spent by Microsoft developing a fix, testing that fix, and integrating it into the patch-release pipeline. This is part of 'responsible disclosure': tell the vendor about a problem, and don't tell anyone else about it until the vendor has produced a patch.

Some people quibble about how long it takes MS to come up with a patch after disclosure (responsible or otherwise), but that's not quite relevant to this particular discussion. Because it DOES take a while for the Microsoft patch pipeline to produce production-quality code, a staged release schedule like the one they follow right now makes all the sense in the world. They can do short-cycle patches, but even then it STILL takes weeks to produce one.

I've been at this game long enough to have been around for the opportunistic patch schedule Microsoft followed before they started regulating when they released. And let me tell you, having a schedule for these things helps immensely. We know patches from MS come out on Tuesdays, so we've built a 'change management' window into our schedule on Tuesday nights expressly for that. This is pre-arranged with our users; we don't have to go to them to take their systems down, so long as we do it Tuesday night. (As a side note, our NetWare servers also benefit from this time window.)

Under the old regime we'd get a hot patch from Microsoft on a Wednesday morning, a patch fixing a problem that is being actively exploited. I'd go to my management, explain the situation, and have to convince them that the pain of not patching exceeds the pain of downtime in order to get a patching window approved. Or I'd have to wait for the next change-management window to get the code in, which may be too late.

One thing that is exceedingly clear these days is that when patches from MS are released, the black-hat community falls on them with glad cries to reverse-engineer them. Once they have the underlying flaw, which may even be disclosed by the reporting party on release day (as happened with the Exchange iCal flaw this time around), Bad Stuff can be coded up to exploit vulnerable systems, a new Metasploit plugin developed, all that fun stuff. In short, waiting a week or so after a patch is released is becoming more and more a vulnerability in and of itself.

Microsoft's claim that doing it on a release schedule increases the patch uptake rate is a very valid one. Because so many of those patches require downtime to apply, patch application has to be built into the IT management environment. Microsoft is getting better about no-reboot patches, Windows 2003 is better than Windows NT ever was, but there is still a ways to go. Until it becomes possible to patch a live system with no downtime, a static release schedule IS the main way to go. An opportunistic schedule practically guarantees that major IT systems (I'm ignoring home systems for this, that's a very different management regime) won't get the patch for several days to weeks after release. The black-hats have been forcing us to ever shorter lags between patch-release and "too bad, you're hacked".

Also, doing it opportunistically may very well mean MORE patches from Microsoft. This month's batch included 19 CVE numbers across 7 patches; clearly, some patches bundle more than one fix. I approve of this, since it means fewer patches I have to apply, and the risk of multiple patches stepping on each other is reduced.

Windows is a horrifically complex system. Microsoft has had a very long history with providing security patches, and they've had problems with:
  • Patch order
  • Service Packs removing patches already in place
  • Patches applied simultaneously stepping on each other
  • Patches not applying the way they were intended
  • Feature or bug regressions
  • Patches causing problems in seemingly unrelated programs
  • Patches changing 'undocumented behavior' exploited by legitimate 3rd party applications (and sometimes, other Microsoft applications)
So the extensive QA each patch candidate goes through has to be validated against all of the above list. That takes time. As I said at the beginning, if Microsoft is going to take that long to produce a patch in the first place, at least release the patches on a predictable schedule. It makes my life a lot easier.

Side note: This morning I booted to the openSUSE partition on my home laptop for the first time in a while. Once it got done parsing the list of updates, I had something like 79 packages to update including a kernel update. Just the Security-flagged patches took 20 minutes to apply and that didn't include the reboot. In contrast, this month's Windows patches took under 5 minutes to apply. But then, I don't have MS Office on my Windows partition.

Rumor mills

A rumor has surfaced that WWU has secured a steam whistle for use in VATech-like thingies. This particular whistle is well known to long-time Bellingham residents. Its past life was as the whistle the Georgia Pacific plant used to notify the area of chlorine leaks. As a result, this particular whistle is LOUD.

GP did have some leaks, so we know it works. Also, we have a steam plant on campus which means we can use it. This is a better deal than a civil defense siren, as this way we don't have to call the police to get the siren going. We just push the button!

How loud is it? Reportedly, it can be heard for "tens of miles".

Mmmm. Friday.

A relatively uneventful week. So. Random stuff.

When we next service-pack the NetWare cluster, it'll almost definitely be SP7, done during the Summer/Fall intersession. By then it should have been out for at least two months, and we should have a fair idea of how buggy it is. SP6 is very buggy, especially if NDPS is used.

Chances are real good we'll have a SAN DOWN event sometime during the above-mentioned intersession. We need to get the EVA3000 up to the latest code-rev we can, and that'll mean taking EVERYTHING down.

Financing for updating the EVA3000 with hardware capable of growing with us (we're near maxed on it right now) is still up in the air.

The password change hammer will be going down in a few short weeks. Expect high call volumes at your local helpdesk.

Exchange 2007 will be going in this summer, we think.

Our main MS SQL 2000 server will probably be migrated to SQL Server 2005 this summer. If we can get the SharePoint databases to convert.

Um, and let me be the first to say, 74683372655f49732d6e6f3d636f6465.

Microsoft permissions

It is clear our helpdesks are very used to Novell/NetWare style permissions. Over the years they've gotten used to calling up one of us, having us add someone to a group, and bam they're in! Now that we're doing a bit more with Microsoft permissions, they've run into one of the key differences between how Novell and Microsoft handle permissions.

In MS-land, on login you get an access token with all of your group memberships on it. If you add someone to a group, they have to log out and log in again to refresh that token before they can gain access to the resource.
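If you want to see what's actually baked into that token, here's a minimal sketch (assuming pywin32 is installed) that dumps the group SIDs it carries. The point: that list was fixed at logon, and it won't change until the next login.

    import pywintypes
    import win32api
    import win32security

    # Open the access token of the current process with query rights
    token = win32security.OpenProcessToken(
        win32api.GetCurrentProcess(), win32security.TOKEN_QUERY)

    # TokenGroups is the snapshot of group memberships taken at logon
    for sid, attrs in win32security.GetTokenInformation(token, win32security.TokenGroups):
        try:
            name, domain, _ = win32security.LookupAccountSid(None, sid)
            print("%s\\%s" % (domain, name))
        except pywintypes.error:
            print(win32security.ConvertSidToStringSid(sid))  # unresolvable SIDs

Add the user to a new AD group and run it again without logging out: the new group won't be there.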

In Novell-land, every time you access a resource, that resource queries the directory service to see if you're in the right groups for access. If you add someone to a group, it takes effect immediately.

Very different expectations! The MS way may ultimately be more efficient with computing resources, but it comes at the cost of user efficiency. This bites us most often with Exchange: for what we call 'shared mailboxes', we use AD groups to manage access. Many times I'll get a call from the helpdesk that a user can't get into a just-created shared mailbox, and this behavior is the reason. They're so used to the Novell way of doing it that it looks like an error.