September 2009 Archives

I have a degree in this stuff

| 1 Comment
I have a CompSci degree. This qualified me for two things:
  • A career in academics
  • A career in programming
You'll note that Systems Administration is not on that list. My degree has helped my career by getting me past the "4 year degree in a related field" requirement of jobs like mine. An MIS degree would be more appropriate, but there were very few of those back when I graduated. It has indirectly helped me in troubleshooting, as I have a much better foundation in how the internals work than your average computer mechanic.

Anyway. Every so often I stumble across something that causes me to go Ooo! ooo! over the sheer computer science of it. Yesterday I stumbled across Barrelfish, and this paper. If I weren't sick today I'd have finished it, but even as far as I've gotten into it I can see the implications of what they're trying to do.

The core concept behind the Barrelfish operating system is to assume that each computing core does not share memory and has access to some kind of message-passing facility. A side effect is that each core runs its own kernel, which is why they're calling Barrelfish a 'multikernel operating system'. In essence, they're treating the insides of your computer like the distributed network that it is, and using existing distributed-computing methods to improve it. The kinds of multi-core we're doing now (SMP, ccNUMA) use shared-memory techniques rather than message passing, and it seems that shared memory doesn't scale as well once core counts climb.

They go into a lot more detail in the paper about why this is. A big one is the heterogeneity of CPU architectures out there in the marketplace, and they're not just talking AMD vs Intel vs CUDA; this is also Core vs Core2 vs Nehalem. That heterogeneity makes it very hard for a traditional Operating System to be optimized for a specific platform.

A multikernel OS would use a discrete kernel for each microarchitecture. These kernels would communicate with each other using OS-standardized message-passing protocols. On top of these kernels would sit the abstraction called an Operating System, upon which applications would run. Due to the modularity at its base, it would take much less effort to provide an optimized kernel for a new microarchitecture.

The use of message passing is very interesting to me. Back in college, parallel computing was my main focus. I ended up not pursuing that area of study in large part because I was a strictly C student in math, parallel computing was a largely academic endeavor when I graduated, and you needed to be at least a B student in math to hack it in grad school. It still fired my imagination, and there was squee when the Pentium Pro was released and you could do 2 CPU multiprocessing.

In my Databases class, we were tasked with creating a database-like thingy in code and to write a paper on it. It was up to us what we did with it. Having just finished my Parallel Computing class, I decided to investigate distributed databases. So I exercised the PVM extensions we had on our compilers thanks to that class. I then used the six Unix machines I had access to at the time to create a 6-node distributed database. I used statically defined tables and queries since I didn't have time to build a table parser or query processor and needed to get it working so I could do some tests on how optimization of table positioning impacted performance.
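The original code is long gone, but the shape of it is easy to sketch. The real thing used PVM's C message-passing API; below is a rough modern analogue in Python's multiprocessing module, with made-up table fragments and a canned query standing in for my statically defined ones. Treat it as an illustration of the fan-out/gather pattern, not a reconstruction of the project.

from multiprocessing import Pipe, Process

# Statically defined table fragments, one per "node" (the real project had six).
FRAGMENTS = [
    [("alice", 301), ("bob", 415)],
    [("carol", 415), ("dave", 202)],
    [("erin", 301)],
]

def node(fragment, conn):
    query = conn.recv()                 # the query arrives as a message
    matches = [row for row in fragment if row[1] == query["dept"]]
    conn.send(matches)                  # ship the matching rows back
    conn.close()

if __name__ == "__main__":
    pipes, workers = [], []
    for fragment in FRAGMENTS:
        parent_end, child_end = Pipe()
        worker = Process(target=node, args=(fragment, child_end))
        worker.start()
        pipes.append(parent_end)
        workers.append(worker)

    # The "query processor" was static too: broadcast one canned query,
    # then gather and merge the partial results.
    for conn in pipes:
        conn.send({"dept": 415})
    results = [row for conn in pipes for row in conn.recv()]
    for worker in workers:
        worker.join()
    print(results)                      # [('bob', 415), ('carol', 415)]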

Looking back on it 14 years later (eek) I can see some serious faults in my implementation. But then, I've spent the last... 12 years working with a distributed database in the form of Novell's NDS and later eDirectory. At the time I was doing this project, Novell was actively developing the first version of NDS. They had some problems with their implementation too.

My results were decidedly inconclusive. There was a noise factor in my data that I was not able to isolate, and it drowned out whatever differences there were between my optimized and non-optimized runs (in hindsight I needed larger tables by an order of magnitude or more). My analysis paper was largely an admission of failure. So when I got an A on the project I was confused enough that I went to the professor and asked how this was possible. His response?
"Once I realized you got it working at all, that's when you earned the A. At that point the paper didn't matter."
Dude. PVM is a message passing architecture, like most distributed systems. So yes, distributed systems are my thing. And they're talking about doing this on the motherboard! How cool is that?

Both Linux and Windows are adopting more message-passing architectures in their internal structures, as those scale better on highly parallel systems. In Linux this has involved reducing use of the Big Kernel Lock wherever possible, since invoking the BKL forces the kernel into single-threaded mode, and that's not a good thing with, say, 16 cores. Windows 7 includes similar improvements. As more and more cores sneak into everyday computers, this becomes more of a problem.

An operating system working without the assumption of shared memory is a very different critter. Operating state has to be replicated to each core for correct functioning; you can't rely on a common memory address to handle this. It seems that the form of this state is key to performance, and is very sensitive to microarchitecture changes. What was good on a P4 may suck a lot on a Phenom II. The use of a per-core kernel allows the optimal structure to be used on each core, with changes replicated rather than shared, which improves performance. More importantly, it'll still be performant 5 years after release, assuming regular per-core kernel updates.
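To make the idea concrete, here's a toy sketch of that replicate-by-message pattern. It is not Barrelfish code, just my own illustration: each 'core' is a process holding a private copy of some kernel state, and an update made on one core reaches the others only as messages, never through a shared variable.

from multiprocessing import Process, Queue
import time

def core_kernel(core_id, inbox, all_inboxes):
    state = {}                          # this core's private replica of OS state
    while True:
        msg = inbox.get()               # message passing instead of shared memory
        if msg == "halt":
            break
        op, key, value, origin = msg
        if op == "set":
            state[key] = value          # apply the change to the local replica
            if origin == core_id:       # if we made the change, tell the other cores
                for other_id, box in all_inboxes.items():
                    if other_id != core_id:
                        box.put(("set", key, value, origin))
    print(f"core {core_id} final state: {state}")

if __name__ == "__main__":
    inboxes = {i: Queue() for i in range(4)}
    cores = [Process(target=core_kernel, args=(i, inboxes[i], inboxes))
             for i in range(4)]
    for c in cores:
        c.start()
    # "Core 0" changes a piece of kernel state; the change propagates as messages.
    inboxes[0].put(("set", "runqueue_len", 3, 0))
    time.sleep(0.5)
    for box in inboxes.values():
        box.put("halt")
    for c in cores:
        c.join()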

You'd also be able to use the 1.75GB of GDDR3 on your GeForce 295 as part of the operating system if you really wanted to! And some might.

I'd burble further, but I'm sick so not thinking straight. Definitely food for thought!

Browser usage on tech-blogs

Ars Technica just posted their August browser update. They also included their own browser breakdown. Ars Technica is a techie site, so it comes as no surprise whatsoever that Firefox dominates at 45% of browser-share. This made me think about my own readership.

Browser share piechart for September 09
As you can see, Firefox makes up even more of the browser-share here (50.34%). Interestingly on the low end, Opera is actually the #3 browser (4.46%), not Safari (3.43%). Looking at the version breakdown for those IE users, only 17% of them are on IE6. Yay!

ArsTechnica's Safari numbers are not at all surprising, since they cover a fair amount of Apple news and I don't.

So yeah, Tech blogs and sites don't have a lot of IE traffic. Or, so I believe. What are your numbers?

More thoughts on the Novell support change

Something struck me in the comments on the last post about this that I think needs repeating in a full post.

Novell spent quite a bit of time attempting to build up their 'community' forums for peer support, even going so far as to seed them with supported 'sysops' who helped catalyze others into participating, creating a vibrant peer-support community. This made sense because it built both goodwill and brand loyalty, and also reduced the cost-center known as 'support'. All those volunteers were taking the minor-issue load off of the call-in support! Money saved!

Fast forward several years. Novell bought SuSE and got heavily into Open Source. Gradually, as the OSS products started to take off commercially, the support contracts became the main money-maker instead of product licenses. Suddenly, that vibrant, goodwill-generating peer-support community is taking vital business away from the revenue stream known as 'support'. Money lost!

Just a simple shift in the perception of where 'support' fits in the overall cost/revenue stream makes this move make complete sense.

Novell will absolutely be keeping the peer support forums going because they do provide a nice goodwill bonus to those too cheap to pay for support. However.... with 'general support' product-patches going behind a pay-wall, the utility of those forums decreases somewhat. Not all questions, or even most of them for that matter, require patches. But anyone who has called in for support knows the first question to be asked is, "are you on the latest code," and that applies to forum posts as well.

Being unable to get at the latest code for your product version means that the support forum volunteers will have to troubleshoot your problem based on code they may already be well past, or not have had recent experience with. This will necessarily degrade their accuracy, and therefore the quality of the peer support offered. This will actively hurt the utility of the peer-support forums. Unfortunately, this is as designed.

For users of Novell's actively developed but severely underdog products such as GroupWise, OES2, and Teaming+Conferencing, the added cost of paying for a maintenance/support contract can be used by internal advocates of Exchange, Windows, and SharePoint as evidence that it is time to jump ship. For users of Novell's industry-leading products such as Novell Identity Management, it will do exactly as designed and force those people into maintaining maintenance contracts.

The problem Novell is trying to address is the kind of company that only buys product licenses when it needs to upgrade, and doesn't bother with maintenance unless it's very sure that a software upgrade will fall within the maintenance period. I know many past and present Novell shops who pay for their software this way. It has its disadvantages, because it requires convincing upper management to fork over big bucks every two to five years, and you have to justify Novell's existence every time. The requirement to have a maintenance contract in order for your highly skilled staff to get at TIDs and patches, something that used to be both free and very effective, is a major real-world added expense.

This is the kind of thing that can catalyze migration events. A certain percentage will pony up and pay for support every year, and grumble about it. Others, who have been lukewarm towards Novell for some time due to their adherence to the underdog products, may take it as the sign they needed to ditch those products and go for the industry leader instead.

This move will hurt their underdog-product market-share more than it will their mid-market and top-market products.

If you've read Novell financial statements in the past few years you will have noticed that they're making a lot more money on 'subscriptions' these days. This is intentional. They, like most of the industry right now, don't want you to buy your software in episodic bursts every couple years. They want you to put a yearly line-item in your budget that reads, "Send money to Novell," that you forget about because it is always there. These are the subscriptions, and they're the wave of the future!

Very handy but terrible plugin

| 1 Comment
Yes, this plugin is a terrible idea.

But then, so are appliances with built in self-signed SSL certificates you can't change. You take what you can get.
I first ran into this on Bucky's Blog. Specifically, Novell is changing what non-paying users can get out of Novell's support options. The details are still being hashed out, but they made the mistake of running afoul of one of the major no-no's of support: pay-for-patches, or at least the suggestion of it. They caught a lot of flak when they required a support contract to use the auto-update channels for their Linux products, but this goes even farther and puts support packs behind the maintenance-contract pay-wall.

So if you're a NetWare customer that hasn't paid maintenance in umpteen years since your server Just Works (TM), you'll now have to buy maintenance if you want to apply the latest Service Pack. Or if your server is throwing abends that can be fixed with a patch you learned about in the peer support forums, you'll need a contract to be able to access it. This was done intentionally, to pull these free-loaders into paid support, but it represents a potentially steep cost that can catalyze more migrations off of Novell products. This will hurt the shoe-string IT departments more than the big-bucks ones. And since that describes a goodly percentage of 'small businesses', this could be a major problem in the future.

What's causing some confusion is their intent to put some of the KB articles behind the pay-wall as well. As described by Novell's support-community coordinator:
FACT: Only about 8% of the TIDs in the knowledgebase will be closed off
for entitled customers. Those are the TIDS for the products under "General
Support" ( http://support.novell.com/lifecycle ). All other TIDS will
remain open to the general public. As products move from general support
to extended and self support, all TIDS will become public.
So the 20+ year history of NetWare TIDs will still be there, as NetWare is no longer on general support per se, but TIDs for currently-supported closed-source items like Novell Identity Manager and the entire ZENworks line are another story. One beef I have with this is that even if you do have a maintenance contract, anyone who could possibly search the KB for articles has to have:
  1. A novell.com login
  2. Their novell.com login associated with a maintenance contract
This doesn't always happen. I've had to add a few people to our contract so they can use the Customer Center to get license codes or register SLES machines against our support. But the large majority of our historic NetWare admins aren't on the contract because they haven't needed it. This move will force organizations such as ours to much more actively manage our Customer Center contract/username associations. That can be a lot of bother.

The end effect of all of this is that the value of 'peer support' is markedly reduced for currently-shipping products. Once upon a time Novell was a company that really encouraged peer support since it took load off of their support engineers, customers liked it since it was free, and it encouraged quite a lot of goodwill. Now they seem to have realized that this was a drain on the bottom line and are dismantling the system in favor of everyone paying for support. This destroys goodwill, as they're now learning in the support forums.

Printing habits

| 3 Comments
Some students are going to be in for a rude, rude surprise real soon. Today alone there is a student who has printed off 210 pages. Looking at their print history, they printed off 100 copies of two specific handouts (in batches of 50), and that's 40% of their entire quota for the quarter. Once they hit the ceiling, they'll have to pay to get more. This is different from last year!

We always got a few students who rammed their heads against the 500-page limit within two weeks of quarter start. I'm sure we'll get some this quarter too. There may be heated tempers at the Helpdesk as a result, but them's the breaks.

Quarter start: printing

Today is go-live for the new Microsoft/PCounter based printing system. It hasn't gone off perfectly, but most of the problems so far have been manageable. Also, it's only Monday. The true peak load for printing will be Wednesday between 11:00 and 12:00. Wednesday is when classes start.

So far the big problem is that some of the disk images used for the labs included printers they weren't supposed to, a side effect of how Microsoft does printing. All in all, it's a pretty small thing, but it does ruin the clean look. The window between when Summer session stopped and when all the images had to be applied (last Friday) was the same as we get every year, but this year included major changes we haven't seen since we converted from queue-based printing to NDPS printing back around 2002. So yeah, these kinds of QA things can get dropped under that kind of time pressure in a just-plain-new environment.

Also, the Library doesn't have their release stations up yet. They'll have them there by the end of the day, but the fact remains that they're on the old system until then. Due to the realities of accounting, each student was given only 50 pages this morning on the old system, which means that some users are already whacking their heads on the limit. They'll have to go to one of the ATUS labs to print, as those are all on the new system and the quotas there are much higher. If Libraries doesn't have it by tomorrow, something will have to give.

The end of the line for RAID?

| 1 Comment
Regarding this: http://www.enterprisestorageforum.com/technology/features/article.php/3839636

He has a point. Storage sizes are increasing faster than reliability figures, and the combination is a very bad thing for parity RAID. Size by itself means that large RAID sets will take a long time to rebuild. I ran into this directly with the MSA1500 I was working with a while back, where it would take a week (7 whole days!) to rework a 7TB disk-array. The same firmware also very strongly recommended against RAID5 LUNs on more than 7TB of disks due to the non-recoverable read error rate of the SATA drives being used. RAID6 increases the durability of parity RAID, but at the cost of increased overhead.
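For a sense of scale, here's a back-of-the-envelope calculation of why those warnings exist. It assumes the one-error-per-10^14-bits unrecoverable read error spec commonly quoted for desktop SATA drives of that era; the drives in that MSA1500 and the firmware's actual threshold may well differ.

# Odds of hitting at least one unrecoverable read error (URE) while reading
# a whole array back during a RAID5 rebuild, assuming the commonly quoted
# desktop-SATA spec of one URE per 1e14 bits read.
ure_rate = 1e-14                  # probability of a URE per bit read
array_tb = 7                      # data that must be read to rebuild, in TB
bits = array_tb * 1e12 * 8        # TB -> bytes -> bits (decimal TB)

expected_ures = bits * ure_rate
p_at_least_one = 1 - (1 - ure_rate) ** bits

print(f"expected UREs during rebuild: {expected_ures:.2f}")
print(f"chance of at least one URE:   {p_at_least_one:.0%}")
# With these assumptions: ~0.56 expected UREs, and roughly a 43% chance the
# rebuild trips over one. That's why the firmware warns you off.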

Unfortunately, there are no clear answers. What you need really depends on what you're using it for. For very high performance storage, where random I/O latency during high-speed transfers is your prime performance metric, lots of cheap-ass SATA drives in randomized RAID1 pairs will probably not be enough to keep up. Data-retention archives, where sequential write speeds are your prime metric, are more forgiving and can take a much different storage architecture, even though it may involve an order of magnitude more space than the first option here.

One comment deserves attention, though:
The fact is that 20 years ago, a large chunk of storage was a 300MB ESDI drive for $1500, but now a large drive is hard to find above $200.
Well, for large hard drives that may be true, but for medium-size drives I can show you many options that break the $200 barrier. 450GB FC drives? Over $200 by quite a lot. Anything enterprise-grade SSD? Over $200 by a lot, and the 'large drive' end of that market is an order of magnitude over that.

We're going to see some interesting storage architectures in the near future. That much is for sure.

It's the little things

| 1 Comment
My attention was drawn to something yesterday that I just hadn't registered before. Perhaps it's because I see it so often that I didn't twig to it being special in just that place.

Here are the Received: headers of a bugzilla message I got yesterday. It's just a sample:
Received: from ExchEdge2.cms.wwu.edu (140.160.248.208) by ExchHubCA1.univ.dir.wwu.edu (140.160.248.102) with Microsoft SMTP Server (TLS) id 8.1.393.1; Tue, 15 Sep 2009 13:58:10 -0700
Received: from mail97-va3-R.bigfish.com (216.32.180.112) by
ExchEdge2.cms.wwu.edu (140.160.248.208) with Microsoft SMTP Server (TLS) id 8.1.393.1; Tue, 15 Sep 2009 13:58:09 -0700
Received: from mail97-va3 (localhost.localdomain [127.0.0.1]) by mail97-va3-R.bigfish.com (Postfix) with ESMTP id 6EFC9AA0138 for me; Tue, 15 Sep 2009 20:58:09 +0000 (UTC)
Received: by mail97-va3 (MessageSwitch) id 12530482889694_15241; Tue, 15 Sep 2009 20:58:08 +0000 (UCT)
Received: from monroe.provo.novell.com (monroe.provo.novell.com [137.65.250.171]) by mail97-va3.bigfish.com (Postfix) with ESMTP id 5F7101A58056 for me; Tue, 15 Sep 2009 20:58:07 +0000 (UTC)
Received: from soval.provo.novell.com ([137.65.250.5]) by
monroe.provo.novell.com with ESMTP; Tue, 15 Sep 2009 14:57:58 -0600
Received: from bugzilla.novell.com (localhost [127.0.0.1]) by soval.provo.novell.com (Postfix) with ESMTP id A56EECC7CE for me; Tue, 15 Sep 2009 14:57:58 -0600 (MDT)
For those who haven't read these kinds of headers before, read from the bottom up. The mail flow is:
  1. Originating server was Bugzilla.novell.com, which mailed to...
  2. soval.provo.novell.com running Postfix, who forwarded it on to Novell's outbound mailer...
  3. monroe.provo.novell.com, who attempted to send to us and sent to the server listed in our MX record...
  4. mail97-va3.bigfish.com running Postfix, who forwarded it on to another mailer on the same machine...
  5. mail97-va3-r running something called MessageSwitch, who sent it on to the internal server we set up...
  6. exchedge2.cms.wwu.edu running Exchange 2007, who sent it on to the Client Access Server...
  7. exchhubca1.univ.dir.wwu.edu for 'terminal delivery'. Actually it went on to one of the Mailbox servers, but that doesn't leave a record in the SMTP headers.
Why is this unusual? Because steps 4 and 5 are at Microsoft's Hosted ForeFront mail security service. The perceptive will notice that step 4 indicates that the server is running Postfix.

Postfix. On a Microsoft server. Hur hur hur.

Keep in mind that Microsoft purchased the ForeFront product line lock, stock, and barrel. If that company was using non-MS products as part of its primary message flow, then Microsoft has probably kept that up. Future versions just might move to more explicitly MS-branded servers. Or not; you never know. Microsoft has been making placating noises towards Open Source lately. They may keep it.

Mac OS X and Windows 2008 clusters

It seems that all Mac OS X versions except for 10.4 (yes, including 10.6) don't like to talk to Windows Server 2008 failover clusters without special syntax. The reason for this boils down to two technology disagreements.

  1. OS X (except for 10.4) attempts to make smb/cifs connections by the resolved IP address of given names. So a connection string like smb://clu-share1.winclu.wwu.edu/share1/ will be translated into \\140.160.12.34\share1 when it attempts to talk to the server.
  2. Windows failover clustering requires the server name when connecting. Otherwise it tells you no-can-do. You can't use \\140.160.12.34\share1\ syntax, you MUST use a name.
For instance, the string "smb://msfs-class1.univ.dir.wwu.edu/class1" will cause the following packets to occur:
Packets showing fail
However, if you attempt to connect to a non-clustered share, perhaps a share on one of the cluster nodes rather than a cluster service, it works just fine.
Packets showing success
Funny, eh?

So what's a Mac owner, of which we have quite a lot, to do? The fix is pretty simple: append ":139" to the end of the server part of the connection string. In the above example, "smb://msfs-class1.univ.dir.wwu.edu:139/class1". For some reason, this forces the Mac to use a name when connecting to the remote system.
Packets showing success
Apparently, OS X 10.4 (Tiger) did this normally, but Apple changed it back to the non-working behavior with 10.5 (Leopard). And we've tested: 10.6 (Snow Leopard) is broken the same way.

Why this is so is up for debate. I'm personally fond of the idea that the Windows SMB stack isn't detailed enough to tell which IP address an incoming connection came in on and virtualize its answers accordingly. For stand-alone servers this is a simple thing: if you can talk to me at all, here are all of my shares. For conditional sharing, like with clusters where you can only get certain shares on certain IPs, the SMB stack apparently lacks a way to discriminate appropriately. Clearly name-based discrimination is in there, but not IP-based.

No word on if 2008 R2 behaves this way. Microsoft dropped R2 about... three weeks too late for us to go with it for this cluster.

This is going to be one of those FAQs the helpdesks are going to get real used to answering.

Lemonade

Two days ago (but it seems longer) the drive that holds my VM images started vomiting bad sectors. Even more unfortunately, one of the bad sectors took out the MFT clusters on my main Win XP management VM. So far that's the only data-loss, but it's a doozy. I said unto my manager, "Help, for I have no VM drive any more, and am woe." Meanwhile I evacuated what data I could. Being what passes for a Storage Administrator around here, finding the space was dead easy.

Yesterday bossman gave me a 500GB Western Digital drive and I got to work restoring service. This drive has Native Command Queueing, unlike the now-dead 320GB drive. I didn't expect that to make much of a difference, but it has. My Vista VMs (undamaged) run noticeably faster now. "iostat -x" shows await times markedly lower than they were before when running multiple VMs.

NCQ isn't the kind of feature that generally speeds up desktop performance, but in this case it does. Perhaps lots of VMs are a 'server' type load after all.

DNS and AD Group Policy

| 1 Comment
This is aimed a bit more at local WWU users, but it is more widely applicable.

Now that we're moving to an environment where the health of Active Directory plays a much greater role, I've been taking a real close look at our DNS environment. As anyone who has ever received any training on AD knows, DNS is central to how AD works. AD uses DNS the way WinNT used WINS, the way IPX used SAPs, and the way NetWare uses SLP. Without it, things break all over the place.

As I've stated in a previous post, our DNS environment is very fragmented. As we domain more and more machines, the 'univ.dir.wwu.edu' domain becomes the spot where the vast majority of computing resources are resolvable. Right now, the BIND servers are authoritative for the in-addr.arpa reverse-lookup domains, which is why the IP address I use for managing my AD environment resolves to something not in the domain. What's more, the BIND servers are the DNS servers we pass out to every client.

That said, we've done the work to make it work out. The BIND servers have delegation records to indicate that the AD DNS root domain of dir.wwu.edu is to be handled by the AD DNS servers. Windows clients are smart enough to notice this and register their workstation names against the AD DNS servers and not the BIND servers. However, the in-addr.arpa domains are authoritative on the BIND servers, so the clients' attempts to register their reverse-lookup records all fail. Every client on our network has Event Log entries to this effect.

Microsoft has DNS settings as a possible target for management through Group Policy. This could be used to help ensure our environment stays safe, but will require analysis before we do anything. Changes will not be made without a testing period. What can be done, and how can it help us?

Primary DNS Suffix
Probably the simplest setting of the lot. This would allow us to force all domained machines to consider univ.dir.wwu.edu to be their primary DNS domain and treat it accordingly for Dynamic DNS updates and resource lookups.

Dynamic Update
This forces/allows clients to register their names into the domain's DNS domain of univ.dir.wwu.edu. Most already do this, and this is desirable anyway. We're unlikely to deviate from default on this one.

DNS Suffix Search List
This specifies the DNS suffixes that will be applied to all lookup attempts that don't end in a period. This is one area we probably should use, but we don't know what to set. univ.dir.wwu.edu is at the top of the list for inclusion, but what else? wwu.edu seems logical, and admcs.wwu.edu is where a lot of central resources are located. But most of those are in univ.dir.wwu.edu now. So. Deserves thought.

Primary DNS Suffix Devolution
This determines whether to include the component parts of the primary DNS suffix in the DNS search list. If we set the primary DNS suffix to univ.dir.wwu.edu, then the DNS resolver will also look in dir.wwu.edu and wwu.edu. I believe the default here is 'True'.
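Taken together with the suffix search list above, the resolver behavior is easy to mock up. The little sketch below is my own illustration (with a hypothetical host name), not the actual Windows resolver logic; it just shows the lookup candidates an unqualified name would generate under a given primary suffix, devolution, and optional search list.

def candidate_fqdns(name, primary_suffix, search_list=None, devolve=True):
    """Build the list of FQDNs the resolver would try, in order."""
    if name.endswith("."):              # a trailing dot means fully qualified
        return [name.rstrip(".")]
    if search_list:                     # an explicit search list wins outright
        suffixes = list(search_list)
    else:
        suffixes = [primary_suffix]
        if devolve:                     # walk up the primary suffix: a.b.c.d -> b.c.d -> c.d
            parts = primary_suffix.split(".")
            while len(parts) > 2:
                parts = parts[1:]
                suffixes.append(".".join(parts))
    return [f"{name}.{s}" for s in suffixes]

print(candidate_fqdns("printserver1", "univ.dir.wwu.edu"))
# ['printserver1.univ.dir.wwu.edu', 'printserver1.dir.wwu.edu', 'printserver1.wwu.edu']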

Register PTR Records
If the in-addr.arpa domains remain on the BIND servers, we should probably set this to False. At least so long as our BIND servers refuse dynamic updates that is.

Registration Refresh Interval
Determines how frequently to update Dynamic registrations. Deviation from default seems unlikely.

Replace Addresses in Conflicts
This setting governs how multiple registrations for the same IP (here defined as multiple A records pointing to the same IP) are handled. Since we're using insecure DNS updates at the moment, this setting deserves some research.

DNS Servers
If the Win/NW side of Tech Services wishes to open warfare with the Unix side of Tech Services, we'll set this to use the AD DNS servers for all domained machines. This setting overrides client-side DNS settings with the DNS servers defined in the Group Policy. No exceptions. A powerful tool. If we set this at all, it'll almost definitely be the BIND DNS servers. But I don't think we will. Also, it may be that Microsoft has removed this from the Server 2008 GPO, as it isn't listed on this page.

Register DNS Records with Connection-Specific DNS Suffix
If a machine has more than one network connection (very, very few non-VMware host machines will), this allows it to register those connections against its primary DNS suffix. Due to the relative dearth of such configs, we're unlikely to change this from the default.

TTL Set in the A and PTR Records
Since we're likely to turn off PTR updates, this setting is redundant.

Update Security Level
As more and more stations are domained, there will come a time when we may wish to cut out the non-domained stations from updating into univ.dir.wwu.edu. If that time comes, we'll set this to 'secure only'. Until then, we won't touch it.

Update Top Level Domain Zones
This allows clients to update a TLD like .local. Since our tree is not rooted in a TLD, this doesn't apply to us.

Some of these can have wide-ranging effects, but they are helpful. I'm very interested in the search-list settings, since each of our desktop techs has tens of DNS domains to choose from depending on their duty area. Something here might greatly speed up resource resolution times.

Exchange Transport Rules, update

| 2 Comments
Remember this from a month ago? As threatened in that post I did go ahead and call Microsoft. To my great pleasure, they were able to reproduce this problem on their side. I've been getting periodic updates from them as they work through the problem. I went through a few cycles of this during the month:

MS Tech: Ahah! We have found the correct regex recipe. This is what it is.
Me: Let's try it out shall we?
MS Tech: Absolutely! Do you mind if we open up an Easy Assist session?
Me: Sure. [does so; opens the session, sends a few messages through, finds an edge case that the supplied regex doesn't handle]. Looks like we're not there yet in this edge case.
MS Tech: Indeed. Let me try some more things out in the lab and get back to you.

They've finally come up with a set of rules to match this text definition: "Match any X-SpamScore header with a signed integer value between 15 and 30".

Reading the KB article on this you'd think these ORed patterns would match:
^1(5|6|7|8|9)$
^2\d$
^30$
But you'd be wrong. The rule that actually works is:
(^1(5$|6$|7$|8$|9$))|(^2(\d$))|(^3(0$))
Except if ^-
Yes, that 'except if' is actually needed, even though the first rule should never match a negative value. You really need to have the $ inside the parens for the first statement, or it doesn't match right; this won't work: ^1(5|6|7|8|9)$. The same goes for the second statement with the \d$ construct. The last statement doesn't need the 0$ in parens, but it's there to match the pattern of the previous two statements of keeping the $ inside the parens.

Riiiiiight.
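For what it's worth, the three straightforward patterns from the KB do exactly what you'd expect in any conventional regex engine; it's Exchange's matcher that is the odd one out. A quick sanity check in Python's re module (obviously not the engine Exchange uses) shows them matching 15 through 30 and nothing else:

import re

patterns = [r"^1(5|6|7|8|9)$", r"^2\d$", r"^30$"]

def kb_style_match(value):
    # True if any of the three ORed KB-style patterns matches the value
    return any(re.match(p, value) for p in patterns)

for value in ["14", "15", "22", "30", "31", "-20"]:
    print(value, kb_style_match(value))
# 14 False, 15 True, 22 True, 30 True, 31 False, -20 False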

In the end, regexes in Exchange 2007 Transport Rules are still broken, but they can be made to work if you pound on them enough. We will not be using them because they are broken, and when Microsoft gets around to fixing them the hack-ass recipes we cook up will probably break at that time as well. A simple value list is what we're using right now, and it works well for 16-30. It doesn't scale as well for 31+, but there does seem to be a ceiling on what X-SpamScore can be set to.

Yesterday I ran into this:

http://blogs.msdn.com/clustering/archive/2009/03/02/9453288.aspx

On the surface it looks like NTFS behaving like OCFS. But Microsoft has a warning on this page:
In Windows Server® 2008 R2, the Cluster Shared Volumes feature included in failover clustering is only supported for use with the Hyper-V server role. The creation, reproduction, and storage of files on Cluster Shared Volumes that were not created for the Hyper-V role, including any user or application data stored under the ClusterStorage folder of the system drive on every node, are not supported and may result in unpredictable behavior, including data corruption or data loss on these shared volumes. Only files that are created for the Hyper-V role can be stored on Cluster Shared Volumes. An example of a file type that is created for the Hyper-V role is a Virtual Hard Disk (VHD) file.

Before installing any software utility that might access files stored on Cluster Shared Volumes (for example, an antivirus or backup solution), review the documentation or check with the vendor to verify that the application or utility is compatible with Cluster Shared Volumes.
So unlike OCFS2, this multi-mount NTFS is only for VMs and not for general file-serving. In theory you could use this in combination with Network Load Balancing to create a high-availability cluster with even higher uptime than failover clusters already provide. The devil is in the details, though, and Microsoft alludes to them.

A file system being used for Hyper-V isn't a complex locking environment. You'll have as many locks as there are VHD files, and they won't change often. Contrast this with a file-server, where you can have thousands of locks that change by the second. Additionally, unless you disable Opportunistic Locking you are at grave risk of corrupting files used by more than one user (Access databases!) if you are using the multi-mount NTFS.

Microsoft will have to push awareness of this type of file system into the SMB layer before it can be used for file-sharing. SMB has its own lock layer, and it would have to coordinate with the SMB layers on the other nodes for this to work right. That may never happen; we'll see.

Pushing a feature

One of the things I've missed since Novell went from SLE9 to SLE10 is the machine name in the title-bar for YaST. It used to look like this:

The old YaST titlebar

With that handy "@[machinename]" in it. These days it is much less informative.

The new YaST titlebar

If you're using SSH X-forwarding to manage remote servers, it is entirely possible you'll have multiple YaST windows open. How can you tell them apart? Back in the 9 days it was simple: the window told you. Since 10, the marker has gone away. This hasn't changed in 11.2 either. I would like this changed, so I put in a FATE request!

If you'd also like this changed, feel free to vote up feature 306852! You'll need a novell.com login to vote (the opensuse.org site uses the same auth back end so if you have one there you have one on OpenFATE).

Thank you!

NetWare and Snow Leopard

In case you hadn't heard, the early release of Snow Leopard has tripped up Novell a bit.

http://www.novell.com/products/openenterpriseserver/snowleopard.html

What's interesting is that they'll be releasing a fix for NetWare too, not just OES. This suggests that the breakage isn't something like deprecating older authentication protocols, but rather a change in how such protocols are handled. That way the amount of engineering required is a lot less than trying to get Diffie-Hellman into NetWare.