December 2009 Archives

Points of view

Per BoingBoing: http://www.wilmott.com/blogs/eman/index.cfm/2009/12/29/Trading-Places

Headlined, "I had a fantasy in which the Fed and the TSA (Transportation Security Administration) switched roles."

It's short, so go read it. Then come back.

Ah. The Fed is charged with the security of the financial system, while the TSA is charged with the security of people moving around. Different goals. The Fed's goal is to ensure that a) money keeps moving around, and b) it does so in a minimally fraudulent way. The TSA's goal is to ensure that 1) people don't die, and 2) they have the option to move around if they want to.

The TSA would really prefer it if people just stayed put, as it would make their job a lot easier.

The Fed would really prefer that money keep moving around, because when it doesn't we end up with things like negative growth.

I'd call these diametrically opposed goals. One needs to keep things moving around, the other would rather they didn't at all.

So if a terrorist bombs a flight, the TSA moves to restrict that activity by making things (hopefully) more inconvenient for future would-be bombers (and everyone else). If a bank fails, the Fed steps in to ensure that depositors get their money back, and takes steps to maintain confidence by reassuring people that the entire system is not at fault.

It all depends on your definition of security. "Bad things don't happen," is rather nebulous, you see, so you need more focused definitions. "People don't die using this service," is one the TSA subscribes to. "Ensuring business as usual continues," is one the Fed subscribes to. People die from financial crises all the time (suicide in the face of mounting debts, retirement income coming in w-a-y under projection forcing skimped health-care, loss of a job forcing a person to live on the streets), and yet this isn't really on the Fed's radar. At the same time, the TSA is not at all afraid to change 'business as usual'; witness their dictum banning in-flight communications.

And yet both do take into account confidence in their respective systems. For the Fed, confidence is of paramount importance, for without confidence there is no lending, and without lending, there is no wealth creation. The TSA also takes into account confidence, as their political existence is dependent upon it. For the Fed, even a little loss of confidence is a very bad thing. For the TSA, a little loss of confidence is a small price to pay to prevent future tragedy, and they can afford to wait until confidence is rebuilt by the lack of tragedy.

But it still doesn't help make flying any less of a pain in the neck.

Visual evidence of budget cuts

Want evidence?
Two calendars
The 2008 calendar you see on that wall is a gorgeous 4 color CMYK print. It is put out by Western's Imaging and Graphic Services department as a form of internal advertisement.

The 2009 calendar you see on that wall is a fairly simple 2-color process. 2-color requires fewer resources to produce. You get your budget savings wherever you can.

Bad tapes

It seems that HP Data Protector and BackupExec 10 have different opinions on what constitutes a bad tape. BackupExec seems to survive them better. This means that as we cycle old media into the new Data Protector environment we're getting the occasional bad tape. We've been averaging 3 bad tapes per 40 tape rotation.

While that may not sound like a lot, it really is. Our very large backups are extremely vulnerable to bad tapes, since all it takes is one bad tape to kill an entire backup session. When you're doing a backup of 1.3TB of data, you don't want those backups to fail.

Take that 1.3TB backup. We're backing up to SDLT320 media, so we're averaging somewhere between 180GB and 220GB a tape depending on what kinds of files are being backed up. So that's 7-8 tapes for this one backup. How likely is it that this 7 to 8 tape backup will include at least one of the 3 bad tapes?

When the first tape is picked the chance is 3 in 40 (7.5%).
When the second tape is picked, assuming the first tape was good, the chance is 3 in 39 (7.69%).
When the third tape is picked, presuming the first two were good, the chance is 3 in 38 (7.89%).
When the 7th tape is picked, presuming the first six were good, the chance has increased to 3 in 34 (8.82%)

8.82% doesn't sound like much. However, those per-tape chances compound across the whole set. To get the odds of hitting at least one bad tape, take the chance that every tape drawn is good and subtract it from one:

1 - (37/40)(36/39)(35/38)(34/37)(33/36)(32/35)(31/34) = 1 - 0.5522 = 0.4478, or about 44.8%

So with 3 bad tapes in a given 40 tape set, the chance of this one 7 tape backup pulling at least one of them is not far off a coin flip. For an 8 tape backup the probability rises to about 49.8%.
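
Here's a quick way to sanity-check those numbers, a throwaway Python sketch (nothing to do with the backup tooling itself). It walks the same draw-by-draw logic as the list above: multiply together the odds that each tape pulled is good, then subtract from one.

    from fractions import Fraction

    def p_at_least_one_bad(total=40, bad=3, drawn=7):
        # chance that every one of the 'drawn' tapes comes up good
        p_all_good = Fraction(1)
        for i in range(drawn):
            p_all_good *= Fraction(total - bad - i, total - i)  # i-th draw is a good tape
        return 1 - p_all_good

    print(float(p_at_least_one_bad(drawn=7)))  # ~0.448
    print(float(p_at_least_one_bad(drawn=8)))  # ~0.498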

The true probability is a different number, since these backups are taken concurrently with other backups. So when the 7th tape gets picked, the number of available tapes is much less than 34, and the number of bad tapes still waiting to be found may not be 3. Also, these backups are multiplexed, so the true tape set may be as high as 9 tapes for this backup if that one backup target is slow in sending data to the backup server.

So the true probability is not 44.8%; it changes on a week to week basis. However, 44.8% (or 49.8%) is a good baseline. Some weeks it'll be a lot more. Others, such as weeks where the bad tapes are found by other processes and the target server is streaming fast, less.

We have a couple more weeks until we've cycled through all of our short-retention media. At that point our error rate should drop a lot. Until then, it's like dodging artillery shells.

10 technologies to kill in 2010

Computerworld has the opinionated list.

  1. Fax Machines. I'm with him here. But they still linger on. Heck, I file my FSA claims via FAX simply because photocopying receipts and the signed claim form works! However... the copy machine in the office is fully capable of emailing me the scanned versions of whatever I place on the platen, which I could email to the FSA claim processor. If they supported that. Which they don't. Until more offices (and companies) start getting these integrated solutions into use and worked into their business processes, they'll stick around.
  2. 12v 'cigarette' plugs in vehicles. These are slowly changing. New cars are increasingly coming with 120v outlets. However, the sheer preponderance of this form-factor out there will guarantee support for many, many years to come. As more vehicles come with 48v systems instead of 12v systems, these new standard 120v outlets will be able to support more wattage.
  3. The 'www' in web-site addresses. Ah, mind-share. Old-schoolers like me still reflexively type 'http://www' before any web address, simply because I spent so many years typing it that it's hard to untrain my fingers. I'll get there in the end. And when geeks like me pick site addresses, the 'www' is kind of a default. But then, I'm not a web-dev so I don't think about things like this all day.
  4. Business Cards. Oh, they'll stick around. How else will I enter 'free lunch' contests? The Deli counter isn't going to get email any time soon, and dropping my card in a fish-bowl does the same thing easier. That said, the 100-pack of business cards I was issued in 2003 is still serving me strong, so I don't go through them much. One deli counter around my old job had a Handspring Visor set out so people could beam business cards to it as a way to enter the free lunch contest. Now THAT'S rocking it old-school!
  5. Movie Rental Stores. For people stuck on slow internet connections, they're still the only way to get video content. They still serve an under-served population. Like check-cashing stores.
  6. Home entertainment remotes. Word. I am in lust for an internet-updatable universal remote like the Logitech Harmony ones.
  7. Landline phones. I still have one, because until a few months ago cell reception at my house was spotty. Also, they're 'always on' even in a power outage. An extended power-outage will cause even cell phones to run out of juice, and then where will you be? Also, cell service still isn't everywhere yet. More with the serving of under-served populations.
  8. Music CDs. They're going the way of vinyl records. Soon to be a scorned format, but their utility for long term media backup is not to be denied. What's really going away is the 'album' format! Kids These Days are going to remember the CD in much the same way I remember the 5.25" floppy disk.
  9. Satellite Radio. The long-haul industry is very much a fan of these services, as you can get the same station coast to coast. Some people like live talk radio, which you can't get on your iPod. Recorded talk radio? Sure, they're called "podcasts". It is no mystery why half of Sirius' channel lineup is talk or other non-music. Satellite radio is here for the long haul.
  10. Redundant registration. Word.

That TCP Windowing fault

Here is the smoking gun, let me show you it (new window).

That's an entire TCP segment. Packet 339 there is the end of the TCP window as far as the NetWare side is concerned. Packet 340 is a delayed ACK, which is a normal TCP timeout. Then follows a somewhat confusing series of packets and the big delay in packet 345.

That pattern, the 200ms delay, and 5 packets later a delay measurable in full seconds, is common throughout the capture. They seem to happen on boundaries between TCP windows. Not all windows, but some windows. Looking through the captures, it seems to happen when the window has an odd number of packets in it. The Windows server is ACKing after every two packets, which is expected. It's when it has to throw a Delayed ACK into the mix, such as for the odd packet at the end of a 27 packet window, that we get our unstable state.

The same thing happened on a different server (NW65SP8) before I turned off "Receive Window Auto Tuning" on the Server 2008 server. After I turned that off, the SP8 server stopped doing that and started streaming at the expected high data-rates. The rates still aren't as good as they were when doing the same backup to the Server 2003 server, but at least it's a lot closer. 28 hours for this one backup versus 21, instead of over 5 days before I made the change.
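
For anyone chasing the same symptom, that auto-tuning knob lives in netsh on Server 2008. The general form, from an elevated prompt (test against your own stack before committing to it, of course):

    netsh interface tcp set global autotuninglevel=disabled
    netsh interface tcp show global

The second command just confirms the setting took.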

The packets you see are for an NW65 SP5 server after the update to the Windows server. Clearly there are some TCP/IP updates in the later NetWare service-packs that help it talk to Server 2008's TCP/IP stack.

It's the little things

One thing that Microsoft's Powershell does that I particularly like is that it aliases common unix commands. No longer do I have to chant to myself:
ls on linux, dir on dos.
ls on linux, dir on dos.
ls works in powershell. The command line options are different, of course, but at least I don't have to retrain my fingers when doing a simple file listing. This is especially useful when I'm fast-switching between the two environments, such as when I'm using, say, Perl to process a text file, and then Powershell to do a large Exchange operation.
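
If you're curious which unix-isms got wired up, PowerShell will tell you itself. A quick check like this (nothing exotic, just the stock alias list) shows ls, dir, and gci all pointing at the same cmdlet:

    Get-Alias ls
    Get-Alias | Where-Object { $_.Definition -eq "Get-ChildItem" }
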
How did you get into sysadmin stuff?

The flip answer is, "A case of mono in 1998."

The full answer is that I intended to get into system administration right out of college. When I made the decision to not pursue graduate school, I chose to join the real world. There were several reasons for this, chief among them being that the field I was interested in involves a lot of math and I was strictly a C student there. As for what I'd do in the real world, well... by this time I had learned something about myself, something taught ably by the CompSci program I was getting around to finishing.

Broken code makes me angry. Working on broken code all day makes me angry all around. Since programming involves working on code that is by definition always broken, it didn't seem like the right career for me.

I realized this in early 1996, which means I made this decision at a time when I had friends who had skipped college altogether to work in internet startups, get paid in stock options, and otherwise make a lot of money. I didn't see much of them except online (those wacky startup death-marches). That wasn't a gravy train I could get on and survive sane. So, no programming career for me. SysAdmin it was!

I paid for my own Certified Novell Administrator (NW4.10 IIRC) that September while I was working temp jobs. One of the temp jobs went permanent in January of 1997, and I was hired at the bottom rung: Help Desk.

This wasn't all bad, as it happened. Our helpdesk had all of 4 people on it at the time, one dispatcher who half-timed with Solaris and Prime admin work, and three technicians. We pretty much did it all. Two of 'em handled server side stuff (NetWare exclusively) when server stuff needed handling, and all three of us dealt with desktop stuff.

Then I got mono in the summer of 1998. I was out for a week. When I came back, my boss didn't believe I was up for the full desktop rotation and grounded me to my desk to update documentation. Specifically, update our Windows 95 installation guides. What was supposed to take a week took about 6 hours. Then I was bored.

And there was this NetWare 3.11 to NetWare 4.11 upgrade project that had been languishing un-loved due to lack of time from the three of us. And here I was desk-bound, and bored. So I dug into it. By Thursday I had a full migration procedure mapped out, from server side to things that needed doing on the desktop. We did the first migration that August, and it worked pretty much like I documented. The rest of the NW3.x to NW4.11 migrations went as easily.

From there it was a slam-dunk that I get into NetWare Sysadmin work. I got into Windows admin that December while I was attending my Windows NT Administration classes. On Monday of Week 2 (the advanced admin class if I remember right) I got a call from my boss telling me that the current NT administrator had given 2 weeks notice and announced he was going on 2 weeks of vacation, and I'd be the new NT guy when I got back from class.

In his defense, he was a Solaris guy from way back and was actively running Linux at home and other places. He had, "I don't do Windows," in his cube for a while before management tapped him to become the NT-guy. When I got his servers after he left I found the Cygwin stack, circa 1998, on all of them. He had his preferences. And he left to do Solaris admin Somewhere Else. He really didn't want to do Windows.

So within 8 months of getting a fortuitous case of mononucleosis, I was a bona-fide sysadmin for two operating systems. Sometimes life works that way.

Sniffing packets

When I first started this sysadmin gig 'round about 1997, Windows based packet sniffers were still in their infancy. In fact, the word 'sniffer' was (and probably still is) a trademarked term for the software and hardware package for, er, sniffing packets. Sniffer. So when I needed to figure out a problem on the network, I went to the Network Guys, who plugged their Sniffer into any available port on the 10baseT hub I needed analysis on and went to work. They told me what was wrong. Like a JetDirect card transmitting packets whenever it sensed a packet on the wire, thus bringing the network to its knees. Things like that.

Time passed and Sniffer was bought by Network Associates. Who then added a zero to the price, because that package really did have a lock on the market. The next rev then more than doubled the already inflated price. So when it came time to renew/upgrade, our Sniffer couldn't handle Fast Ethernet and the price was eye-watering. So. On came the free sniffers.

At first I was using Ether Boy, a now long lost packet sniffer. But eventually I found Ethereal (now WireShark), and I went to work. By the time I left my old job in 2003 I already had a rep for knowing WTF I was looking at, and the network guys didn't bat an eyelash when I asked for a span port. This ability was very handy when diagnosing slow Novell logins.

Fast forward to now. Right now I'm trying to figure out why the heck a certain NetWare server is so slow talking to the Data Protector media agent. It isn't obviously a TSA problem, but I've had problems with DP and NW talking to each other on the TCP level so that's where I'm looking now. Unfortunately for me, the desktop-grade GigE nic I have on the span isn't, shall we say, resourced enough to sniff a full GigE stream without at least a few buffer overruns. So I'm not getting ALL of the packets.

When I asked for the span port, the telecom guy said he greatly respected my ability to dig in to TCP issues. And said it in the voice of, "I think you're better at that kind of troubleshooting than we are." Which is a bit disconcerting to hear from your telecom router-gods. But there it is. What it means is that I can't very well ask for help interpreting these traces.

So far I've been able to determine that there is something hinky going on with network delays. There are some 200ms delays in there, which hint strongly at a failed protocol negotiation somewhere. But there are some rather longer delays, and it could be due to window size negotiation problems. Server 2008, the media-agent server, has a much newer TCP/IP stack than NetWare, so it is entirely possible that they just don't work well together. I don't understand that quite well enough to manually deconstruct what's going on, so that's what I'm googling on right now.
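
If you're following along in Wireshark, a couple of generic display filters (nothing specific to this trace) make the stalls and retransmission drama easy to spot:

    frame.time_delta > 0.2
    tcp.analysis.flags

The first shows every packet that arrived more than 200ms after the previous one; the second highlights whatever the TCP analysis engine thinks is abnormal (retransmissions, duplicate ACKs, zero windows, and the like).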

And why Saturday? Because of course the volume that's doing this is our single largest, and it is only on the weekend, when it's in the failed state, that I can pry the hood off and look. Who knows, I may resort to posting packets and crowd-sourcing the problem.

Update 12/23/09: Found it.

Learning new backup software

It has been no secret that we've been trying to migrate away from BackupExec (10d) to HP's Data Protector. Originally this was due to cost reasons, but we were either sold a bill of goods, or there was a fundamental misunderstanding somewhere along the way. Your choice as to which it really was. In short, the costs have been about the same or even a bit more than staying with BE. However, sunk costs are sunk, so once the switch was made DP became the cheaper course of the future.

Which brings us to the current state. We've finally pried loose funds to license the Scalar 100 we have for our tape backup solution, and we're in the process of getting that working with DP. As with all backup software, it behaves a bit differently than others.

And now, a digression.

It is my opinion that all backup software everywhere is fundamentally cantankerous, finicky, and locked in obscure traditions. The traditions I'm speaking of are sourced in the ancestral primary supported platforms, and the UI and rotation metaphors created 5, 10, 15, 20 years ago. The cantankerous and finicky parts come from a combination of supporting the cantankerous and finicky tape hardware and the process of getting data off of servers for backup.

I know there are people who love their backup solutions with an unholy passion. I do not understand this. Perhaps I just haven't worked with the right environment.

Back to my point.

Data Protector continues the tradition of cantankerous, finicky software locked into an obscure tradition. I am not surprised by this, but it is still disheartening. How it interacts with our tape robot is sub-optimal in many ways, which will ultimately require hand-editing a text file to configure timeout parameters in an optimal way. This reflects DP's origin as a UNIX-based backup product ported to Windows. It only got a usable GUI very recently, and until DP 6.10 came out it required rsh and rcp for its Unix deployment server. I kid you not. DP 6.11 at least supports ssh and scp.

It's also not working well with our NetWare backups. I've blogged about this one before, but didn't end up posting the solution to the last round of problems. It turned out to be an out of date driver on the part of the DP backup-to-disk server, as it wasn't ACKing packets fast enough. Updated the driver, the backup started flying. Now that we've got the Scalar in the mix, and backing up to a new server, some new problems have emerged. So far they look to be in the NetWare TSA stack rather than on the DP side (at least, that's what the symptoms look like. I still need to look at packets to be sure), which is unfortunate since 1: Novell isn't going to fix the TSAs on NetWare, and 2: We're getting rid of NetWare in the near future. But not near enough that we can just forget the backups until we migrate. Suck-up-and-deal appears to be our solution. (DP does have OES2 agents, by the way)

Our Windows backups are all looking decent, though. That's something, anyway. At least, when the Scalar isn't throwing monkey wrenches into DP's little world.

Old hardware

Watching traffic on the opensuse-factory mailing list has brought home one of the maxims of Linuxdom that has been true for over a decade: People run Linux on some really old crap. And really, it makes sense. How much hardware do you really need for a router/firewall between your home network and the internet? Shoving packets is not a high-test application if you only have two interfaces. Death and fundamental hardware speed-limits are what kill these beasts off.

This is one feature that Linux shares with NetWare, because NetWare gets run on some really old crap too: it just works, and you don't need a lot of hardware for a file-server for only 500 people. Once you get over a thousand users or very large data-sets the problem gets more interesting, but for general office-style documents... you don't need much. This is/was one of the attractions of NetWare: it doesn't need much hardware and it runs for years.

On the factory mailing list people have been lamenting recent changes in the kernel and the wider environment that have been somewhat deleterious for really old crap boxes. The debate goes back and forth, but at the end of the day the fact remains that a lot of people throw Linux on hardware they'd otherwise dispose of for being too old. And until recently, it has just worked.

However, the sheer diversity of hardware sold over the last 15 years has caught up to Linux. Supporting all of it requires a HELL of a lot of drivers. And not only that, but really old drivers need to be revised to keep up with changes in the kernel, and that requires active maintainers with that ancient hardware around for testing. These requirements mean that more and more of these really old, or moderately old but niche, drivers are drifting into abandonware-land. Linux as an ecosystem just can't keep up anymore. The Linux community decries Windows for its obsession with 'backwards compatibility' and how that stifles innovation. And yet they have a 12 year old PII box under the desk happily pushing packets.

NetWare didn't have this problem, even though it's been around longer. The driver interfaces in the NetWare kernel changed only a very few times over the last 20 years (such as the DSK to HAM conversion during the NetWare 4.x era, and the introduction of SMP later on), which allowed really old drivers to continue working without revision for a really long time. This is how a 1998-vintage server could be running in 2007, and running well.

However, Linux is not NetWare. NetWare is a special purpose operating system, no matter what Novell tried in the late 90's to make it a general purpose one (NetWare + Apache + MySQL + PHP = a LAMP server that is far more vulnerable to runaway thread based DoS). Linux is a general purpose operating system. This key difference between the two means that Linux got exposed to a lot more weird hardware than NetWare ever did. SCSI attached scanners made no sense on NetWare, but they did on Linux 10 years ago. Putting any kind of high-test graphics card into a NetWare server is a complete waste, but on Linux it'll give you those awesome wibbly-windows.

There comes a time when an open source project has to cut away the old stuff. Figuring this out is hard, especially when the really old crap is running under desks or in closets entirely forgotten. It is for this reason that Smolt was born. To create a database of hardware that is running Linux, as a way to figure out driver development priorities. Both in creating new, missing drivers, and keeping up old but still frequently used drivers.

If you're running a Pentium 2-233 machine as your network's NTP server, you need to let the Linux community know about it so your platform maintains supportability. It is no longer good enough to assume that if it worked in Linux once, it'll always work in Linux.

Budget realities, 2010 version

Well, the Governor just presented her budget proposal for the coming year. And it's not good. WWU's budget is slated for a further 6.2% whack of state funds. We survived last year in large part due to the traditional budgetary techniques of cost shifting, reserve funds, and one-time funds. Those are pretty much gone now, so this will come out of bone.

Dire? Yes.

How DTV and HD Radio mirror security

One of the complaints levied against the now completed transition of US television broadcasts to pure digital is that the range is reduced in a lot of cases. The same thing has been said about HD Radio, where the signal goes from crystal clear to nothing. Both are available in the Seattle market, but up here in Bellingham we only have DTV; I don't know of any HD Radio stations up here.

Which is sad, since I can occasionally pick up some of the Seattle stations if I'm north of town a ways. The Chuckanut mountains (yes, that's their name!) get in the way of line-of-sight while in town. However, there is no HD Radio to be had. In large part this is because the Canadians don't have an HD Radio standard approved, and that's where most of our radio comes from.

Which is a long way from security, but the reasons for this are similar to something near and dear to any security maven's heart: two-factor security.

With analog TV and radio signals, the human brain was very good at filtering content out of the noise. Noise is part and parcel of any analog RF system, even if you can't directly perceive it. Even listening to a very distant AM station, I can generally make out the content if I speak that language, or I already know the song. Those two things allow a much better hit-rate for predicting what sound will come next, which in turn enhances understanding. My assumptions about the communication method create a medium in which a large amount, perhaps a majority, of the consumed bandwidth is spent on what are essentially check-sums.

Consider listening to a news-reader read text off a page. Call it 80 words per minute, and if you assume 5 characters per word, that comes to 400 characters a minute. Add another 80-120 characters for various punctuation and white-space, assume 7-bit ASCII since special characters are generally hard to pronounce, and you have a bit-rate of between 56 and 61 bits per second. On a channel theoretically capable of orders of magnitude more than that. Those extra bits are insulation against noise. This is how you can understand said news-reader when your radio station is drowning in static.

TV is much the same. Back in the rabbit-ear era of my youth, we used to watch UHF stations through a fog of snow. It was just fine, we caught the meaning. It worked even better if the show was one we'd seen before, which helped fill in gaps.

Then along came the digital versions of these formats. And one thing was pretty clear, marginal signal meant a greatly reduced chance of getting anything at all. Instead of a slow fall-off of content, you had a sharp cliff where noise overcame the error correction in the signal processor hardware. However... so long as you were within the error correction thresholds, your listening/watching experience was crystal clear.

The 'something you are' part of the security triumvirate of have/are/know is a lot like the experience of the analog to digital conversion of TV and radio. The something you actually are is an analog thing, be it a fingerprint, the irides in your eyes, the shape of your face, a DNA sequence, or a voice. The biometric device encodes this into a digital format that is presumably unique per individual. As we've seen, analog to digital conversion is fundamentally noisy, so this encoding has to include a 'within acceptable equivalency thresholds' factor.

It is this noise factor that is the basis of a whole category of attacks on these sensors. It is not sufficient to ensure that the data is a precise match, for some of these, such as voice or face, can change on a day to day basis, and others, such as finger or iris prints, can be faked very convincingly. The latter is why the higher priced fingerprint sensors also do skin conductivity tests to ensure it is skin and not a gelatin imprint, among other 'live person' tests.

This makes the 'something you are' part of the triumvirate potentially the weakest. 'Something you know,' your password, is a very few bytes of information that has to be 100% correctly entered every time. 'Something you have' can be anything from a SecurID key-fob to a smart-chipped card, which also requires 100% correctness. There is a fuzz factor for things like SecurID that use time as part of the process, so this is not quite 100%. However, 'something you are' is potentially quite a lot of data at a much lower precision than the other two.

There is a LOT of effort going into developing algorithms that can perform the same distillation of content our brains do when listening to a news-reader on a distant AM station. You don't check the whole data returned by the finger reader, you just check (and store) the key identifiers inherent in all fingerprints, identifiers that are distilled from the whole data. The identifiers will get better over time as we gain a better understanding of what this kind of data looks like. No matter how good we get at that, they'll still have uncertainty values assigned to them due to the analog/digital conversion.

Account lockout policies

This is another area where Novell and Microsoft handle a feature in significantly different ways.

Since NDS was first released back at the dawn of the commercial internet (a.k.a. 1993), Novell's account lockout policies (known as Intruder Lockout) have been settable based on where the user's account exists in the tree. This is done per Organizational Unit or Organization. In this way, users in .finance.users.tree can have a different policy than .facilities.users.tree. This was the case in 1993, and it is still the case in 2009.

Microsoft only got a hierarchical tree with Active Directory in 2000, and they didn't get around to making account lockout policies granular until much later. For the most part, there is a single lockout policy for the entire domain with no exceptions. 'Administrator' is subjected to the same lockout as 'Joe User'. With Server 2008, Microsoft finally got some kind of granular policy capability in the form of "Fine Grained Password and Lockout Policies."

This is where our problem starts. You see, with the Novell system we'd set our account lockout policies to lock after 6 bad passwords in 30 minutes for most users. We kept our utility accounts in a spot where they weren't allowed to lock, but gave them really complex passwords to compensate (as they were all used programmatically in some form, this was easy to do). That way the account used by our single-signon process couldn't get locked out and crash the SSO system. This worked well for us.

Then the decision was made to move to a true blue solution, and we started to migrate policies to the AD side where possible. We set the lockout policy for everyone. And we started getting certain key utility accounts locked out on a regular basis. We then revised the GPOs driving the lockout policy, removing them from the Default Domain Policy and creating a new "ILO policy" that we applied individually to each user container. This solved the lockout problem!

Since all three of us went to class for this 7-9 years ago, we'd forgotten that AD lockout policies are monolithic and only work when specified in the Default Domain Policy. They do NOT work per-user the way they do in eDirectory. By doing it the way we did, no lockout policies were being applied anywhere. Googling on this gave me the page for the new Server 2008-era granular policies. Unfortunately for us, they require the domain to be brought to the 2008 functional level, which we can't do quite yet.

What's interesting is a certain Microsoft document that suggested a setting of 50 bad logins every 30 minutes as a way to avoid DoSing your needed accounts. That's way more than 6 every 30.

Getting the domain functional level raised just got more priority.

New linux kernels

I like reading kernel changelogs. There is usually at least one NEAT thing in there. This time (2.6.32) it's a memory de-duplication technology that will be of great benefit for VM environments.
Link
The result is a dramatic decrease in memory usage in virtualization environments. In a virtualization server, Red Hat found that thanks to KSM, KVM can run as many as 52 Windows XP VMs with 1 GB of RAM each on a server with just 16 GB of RAM. Because KSM works transparently to userspace apps, it can be adopted very easily, and provides huge memory savings for free to current production systems. It was originally developed for use with KVM, but it can be also used with any other virtualization system - or even in non virtualization workloads, for example applications that for some reason have several processes using lots of memory that could be shared.
So. Cool. And there is more:
To make easier a local configuration, a new build target has been added - make localmodconfig. It runs "lsmod" to find all the modules loaded on the current running system. It will read all the Makefiles to map which CONFIG enables a module. It will read the Kconfig files to find the dependencies and selects that may be needed to support a CONFIG. Finally, it reads the .config file and removes any module "=m" that is not needed to enable the currently loaded modules. With this tool, you can strip a distro .config of all the unuseful drivers that are not needed in our machine, and it will take much less time to build the kernel. There's an additional "make localyesconfig" target, in case you don't want to use modules and/or initrds.
If you want to build a monolithic kernel for some reason, they've just made that a LOT easier. I made one a long while back in the 2.4 era, and it took many tries to be sure I got the right drivers compiled in. Turn off module support and you've just made a kernel that's harder to rootkit. I don't see this improvement being as widely useful as the previous one, but it is still nifty.
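
For the curious, the workflow is about this simple (a sketch based on the changelog text above; run it from the top of the kernel source tree on the machine you're targeting, with your usual modules loaded):

    make localmodconfig    # pares .config down to the modules currently loaded
    make -j4               # then build as usual

Swap in "make localyesconfig" if you want those same drivers compiled in statically, which is the monolithic-kernel case.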

OpenSUSE deprecates SaX2

There is an ongoing thread on opensuse-factory right now about the announcement to ditch SaX2 from the distro. The reason for this is pretty well laid out in the mail message. Xorg now has vastly better auto-detection capabilities, so a tool that generates a static config isn't nearly as useful as it once was. Especially if it's on a laptop that can encounter any number of conference room projectors.

This is something of the ultimate fate of Xwindows on Linux. Windows has been display-plug-n-play (and working plug-n-play at that) for many, many years now. The fact that Xwindows still needed a text file to work right was an anachronism. So long as auto-detection works, I'm glad to see the change. As it happens, it doesn't work for me right now, but that's beside the point. Display properties, and especially display properties in an LCD world, should be plug-n-play.

As one list-member mentioned, the old way of doing it was a lot worse. What is the old way? Well... when I first started with Linux, in order to get my Xwindows working, which was the XFree86 version of Xwin by the way, I needed my monitor manual, the manual for my graphics card, a ruler, and a calculator. I kid you not. This was the only way to get truly accurate ModeLines, and it is the ModeLines that tell Xwindows what resolution, refresh rate, and bit-depth combinations it can get away with.
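
For flavor, this is the sort of thing we were calculating, the stock VESA 1024x768-at-60Hz mode (quoted from memory, so treat the exact numbers as illustrative rather than gospel): a pixel clock in MHz followed by the horizontal and vertical timing values that ruler-and-calculator exercise produced.

    Modeline "1024x768" 65.0  1024 1048 1184 1344  768 771 777 806 -hsync -vsync

Get one of those timings wrong on a CRT and you got a rolling, tearing mess, or worse.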

Early tools had databases of graphics cards and monitors that you could sort through to pick defaults. But fine tuning, or getting a resolution that the defaults didn't think was achievable, required hand hacking. In large part this was due to the analog nature of CRTs. Now that displays have all gone digital, the sheer breadth of display options is now greatly reduced (in the CRT days you really could program an 800x800x32bit display into Xwin on a regular old 4:3 monitor, and it might even have looked good. You can't do that on an LCD.).

In addition, user-space display tools in both KDE and Gnome have advanced to the point that they're all most users will ever need. While I have done the gonzo-computing of a hand-edited Xwindows config file, I do not miss it. I am glad that Xwindows has gotten to this point.

Unfortunately, it seems that auto-detect on Xwindows is about as reliable as Windows ME was at the same job. Which is to say, it works most of the time. But there are enough edge cases out there where it doesn't work right to make it feel iffy. It doesn't help that people tend to run Linux on their crappiest box in the house, boxes with old CRTs that don't report their information right. So I believe SaX2 still has some life in it, until the 8 year old crap dies off in sufficient numbers that this tool dating from that era won't be needed any more.