May 2009 Archives

Behavior modification

| No Comments
First off, this isn't sparked by anything in the office. We're pretty mellow around here.

Praise in public, correct in private

Simple words, yet we've all run into violations of it. Such as the following from a hypothetical manager:
We've been having some problems lately with people using their cell-phones excessively during working hours. We need to try and do better.
Everyone knows that "We" actually means [person], and that the manager is too gutless to talk to [person] about their excessive cell-phone use, preferring instead to issue a semi-anonymous dictate from on high. In my opinion, this is the most common manifestation of failing to follow the rule. Less common are outright public criticisms, as most people realize that isn't a winning strategy (unless the manager intends to rule by fear, which some do).

That said, there are some circumstances in which a public correction is called for: when the behavior requiring correction was highly public, highly sensitive in some way, or when failure to address it in public would significantly impact morale. In a sense, this is a failure on the part of the errant party to keep their transgressions reasonably private. Excessive local-call abuse? Pretty private, worthy of a managerial talk in person. Surfing porn during working hours? Pretty private until you get caught, then it gets public real fast (at least in America).

If your transgression was big enough that the manager has to save face, you will not get a private rebuke.

As I said, this office is pretty mellow and this hasn't come up. But I still hear stories of other places.

The security of biometrics

| 3 Comments
A question was asked recently about how secure those finger-print readers you find on laptops really are. As with all things security, it depends on what you're defending against. Biometrics have some fundamental problems that make them a bit less secure than passwords in many cases.

For an example of how to fool a fingerprint reader, here is a MythBusters clip where they do just that.

Biometrics measure something you are, which is one of the know/have/are triad of authentication. Two factor authentication has two of these three, which is why some banks are using secure tokens (have) in addition to passwords (know) for online banking. In the abstract, biometrics should be the most secure of the lot since you are the only you in existence.

In practice, however, it is a lot fuzzier. Fingerprints are shared by one in umpty million people, but can change on a day-to-day basis (band-aids, paper-cuts, table-saws). Voice prints change from day to day (colds) and year to year (age). Also, biometrics involve a lot more data than the few bytes of 7-bit ASCII that make up the normal western-alphabet password. Unlike passwords, which have to match exactly every time, biometric sensors have to allow for levels of uncertainty in measurement, lest they reject legitimate users with false negatives. It is that uncertainty that allows attacks against biometrics.
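That exact-match-versus-tolerance difference can be sketched in a few lines. This is a toy illustration, not a real matcher: actual systems compare extracted feature vectors, and the bit-strings and tolerance value here are invented for the example.

```python
# Sketch: why biometric matching must tolerate uncertainty, unlike passwords.

def hamming_distance(a: str, b: str) -> int:
    """Count the positions where two equal-length bit-strings differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

def password_match(stored: str, offered: str) -> bool:
    # Passwords must match exactly, every time.
    return stored == offered

def biometric_match(template: str, sample: str, tolerance: int = 3) -> bool:
    # A sensor never reads the same finger identically twice (paper-cuts,
    # dry skin), so the system accepts anything "close enough" to the
    # enrolled template. That slack is what an attacker exploits.
    return hamming_distance(template, sample) <= tolerance

enrolled = "1011001110101100"
todays_scan = "1011001010101101"   # two bits off: same finger, new paper-cut

print(password_match(enrolled, todays_scan))   # False: exact match fails
print(biometric_match(enrolled, todays_scan))  # True: within tolerance
```

A forged fingerprint doesn't need to be perfect; it only needs to land inside the tolerance window, which is exactly what the MythBusters exploit below relies on.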

Take the fingerprint reader in the MythBusters clip. As it turns out fingerprints are easy to replicate, which is why the high end readers attempt to determine if the fingerprint is actually attached to a person in some way. The ways of accomplishing this are typically pulse and skin conductivity; two things a xeroxed fingerprint couldn't have. The MythBusters defeated that particular lock by putting a thumb behind the paper which provided the pulse, and licked the paper which provided conductivity. Tada! Open door. No gelatin moulds needed.

Biometrics are problematic in another sense: you can't change them once they're compromised. You can replace the other legs of the authentication triad, but not biometrics. Because of this, I find them fundamentally unsuited for sole-source authentication; they really need to be used with something else in a two-factor setup.

Biometric systems of the future may end up using more than one biometric. Fingerprint AND iris scan AND face scan. That kind of thing, which would make it a lot harder to fuzz all the methods well enough at the same time to get through. It's tricky to do with laptops, though it may come with ever-increasing camera quality.

A new addictive site

| 1 Comment
The site ServerFault has just gone public. This is a new project from the folks that brought you StackOverflow. It is a place for SysAdmin types to ping each other for questions, in a format that's very google-friendly. Handy when you're googling for a strange issue. It has a reputation management system that looks pretty good as well. Very nifty.

Doing some cleanup

| No Comments
Every once in a while it's nice to try and take out the trash. I did some fiddling around with some custom scripts, got a list of trustee assignments on the cluster, and cross-checked them with our user groups to find groups that do not have any direct trustees assigned anywhere on the cluster. It was a sizeable list: about 30% of the groups in the groups context aren't there to manage file access (any more).

I then ran another one to give me a list of empty groups. There was some congruence between the two lists, but not as much as I thought. Unless the group was there for a documented reason (an "allows student workers to print to a departmental print-object" kind of thing), if it had no trustees, was empty, and otherwise seemed ignored, it got tossed.
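The cross-check itself boils down to set arithmetic. This sketch uses made-up group names standing in for the output of the real eDirectory queries; the actual scripts would be pulling this data from the tree.

```python
# Sketch of the cleanup cross-check, with hypothetical data standing in
# for the output of the real trustee- and membership-listing scripts.

# Groups defined in the groups context (invented names).
all_groups = {"FacStaff", "StudentPrint", "OldProject", "DeadServerVol"}

# Groups that appear as a trustee somewhere on the cluster.
groups_with_trustees = {"FacStaff", "StudentPrint"}

# Groups that still have members.
groups_with_members = {"FacStaff", "StudentPrint", "OldProject"}

# Groups holding no trustee assignments anywhere: first cleanup list.
no_trustees = all_groups - groups_with_trustees

# Groups that are both trustee-less and empty: prime deletion candidates,
# barring a documented reason to keep them around.
delete_candidates = no_trustees - groups_with_members

print(sorted(no_trustees))        # ['DeadServerVol', 'OldProject']
print(sorted(delete_candidates))  # ['DeadServerVol']
```

The gap between the two printed lists is the "some congruence, but not as much as I thought" part: plenty of groups had no trustees but still had members, or vice versa.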

Some of these groups had comments in them about what they gave access to. I like that! It allowed me to delete the serious-looking group that gave access to a directory on a server that's been dead for 6 years and has no modern equivalent.

Unfortunately, there are a whole bunch of groups that I can't determine if they're still good. That'll have to be done by the desktop folk who create most of the groups. While I have hopes, I do not have high hopes that it'll get done.

Printing costs

| 1 Comment
In the most recent ATUS News there is an article about printing costs. As it happens, the page-count data is data I've taken a solid look at. Printing costs have gotten a lot of attention lately as a possible cost-savings measure, and I suspect this article's existence is due in large part to that. Yeah, we go through a lot of paper.

Updated: My figures for Winter quarter.
Anandtech is running a review of an Atom-based motherboard from NVidia right now. What grabbed my attention was page 5 of the review, where Anand compares Atom performance against an old-school Pentium 4 processor. It was very interesting! This particular motherboard has a pair of Atom processors, but the review also included benchmarks from a single-CPU Atom motherboard of the kind used in most netbooks.

If you're thinking about an Atom-based netbook, this should give you a fair idea about how it should perform.

Explaining email

| 1 Comment
In the wake of this I've done a lot of diagramming of email flow for various decision makers. Once upon a time, when the internet was more trusting, SMTP was a simple protocol and diagramming was very simple. Mail goes from originator, through one or more message transfer agents (or none at all, delivering directly; it was a more trusting time back then), to its final home, where it gets delivered. SMTP was designed in an era when UUCP was still in widespread use.

Umpteen years later and you can't operate a receiving mail server without some kind of anti-spam product. Also, anti-spam has been around as an industry long enough that certain metaphors are now ingrained in the user experience. Almost all products have a way to review the "junk e-mail". Almost all products just silently drop virus-laden mail without a way to 'review' it. Most products contain the ability to whitelist and blacklist things, and some even let that happen at the user level.

What this has done is made the mail-flow diagram bloody complicated. As a lot of companies are realizing the need to have egress filters in addition to the now-standard ingress filters, mail can get mysteriously blocked at nearly every step of the mail handling process. The mail-flow diagram now looks like a flow-chart since actual decisions beyond "how do I get mail to domain.com" are made at many steps of the process.
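The "flow-chart" nature of it can be sketched as a chain of filter decisions. The filter names, thresholds, and verdict strings below are all invented for illustration; real products have far more stages than this.

```python
# Sketch of the modern mail-flow flow-chart: each hop is now a decision
# point where a message can be silently dropped, quarantined, or passed
# along. All names and thresholds here are hypothetical.

def egress_filters(msg: dict) -> str:
    # Outbound (egress) filtering is now common too, so mail can vanish
    # before it ever leaves the sending organization.
    if msg.get("spam_score", 0) >= 8:
        return "blocked_outbound"
    return "sent"

def ingress_filters(msg: dict) -> str:
    if msg.get("has_virus"):
        return "dropped"        # virus mail: silently discarded, no review
    if msg.get("spam_score", 0) >= 5:
        return "quarantined"    # user can review it in the junk folder
    return "delivered"

outbound = {"to": "user@domain.com", "spam_score": 2}
verdict = egress_filters(outbound)
if verdict == "sent":
    verdict = ingress_filters(outbound)
print(verdict)   # delivered
```

Every one of those branches is a place where a user's "my mail disappeared" ticket can originate, which is why the diagram now needs decision diamonds instead of arrows.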

The flow-charts are bad enough, but they can be explained comprehensibly if given enough time to describe the steps. What takes even more time is when deploying a new anti-spam product and deciding how much end-user control to add (do we allow user-level white lists? Access to the spam-quarantine?), what kinds of workflow changes will happen at the helpdesk (do we allow helpdesk people access to other people's spam-quarantine? Can HD people modify the system whitelist?), or overall architectural concerns (is there a native plugin that allows access to quarantine/whitelist without having to log in to a web-page? If the service is outsourced (postini, forefront) does it provide a SSO solution or is this going to be another UID/PW to manage?).

And I haven't even gotten to any kind of email archiving.

Rebuilds and nwadmin

| No Comments
Friday afternoon the Kala server, one of our three primary eDirectory replica servers, died. In an event I've never seen before, one hard drive of a mirrored pair failed in such a way that bad data got committed. This server had to be rebuilt.

Happily for me, this is a procedure I can do without having to look things up in the Novell KB. This is part of the reason the letters "CNE" follow my name. The procedure is pretty straight-forward and I've done it before.
  1. Remove dead server's objects from the tree
  2. Designate a new server as the Master for any replica this server was the master of (all of them, as it happened)
  3. Install server fresh
The details change somewhat over time, but that's the same workflow it has been since the NetWare 4 days. In my case I did hit the KB to see if there was a way to do step 2 in iMonitor. I couldn't find one, so I did it through DSREPAIR which works just fine.

As for the install... this server is an HP BL20P G3, which means I used the procedure I documented a while back (Novell, local copy). A few minor steps changed (the INSERT Linux I used then now correctly handles SmartArray cards), but otherwise that's what I did. Still works.

For a wonder, our SSL administrator still had the custom SSL certificate we created for this server three years ago. That saved me the step of creating a CSR and setting up all the Subject Alternate Names we needed.

And today I fired up NWADMIN for the first time in not nearly long enough, to associate the SLP scope to this server, since it was one of our two DAs. I could probably have done the same thing in iManager with "Other" attributes, but... why risk not getting all the right attributes associated when I have a tool with all the built-in rules? This is the one thing I still keep NWAdmin around for: SLP-on-NetWare management.

Password stealing

| No Comments
There has been some press lately about the University of California, Santa Barbara having assumed control of a Torpig botnet. They've put out a report, and it has been getting some attention. There is some good stuff in there, but I wanted to highlight one specific thing.

The Ars Technica review of it says it pretty directly:
The researchers noted, too, that nearly 40 percent of the credentials stolen by Torpig were from browser password managers, and not actual login sessions
The 'browser password managers' are the password managers built in to your browser-of-choice for ease of logging in to sites. I have personally never, ever used them, because the idea of saving my passwords like that gives me the creeps. Even if it is AES-encrypted. However, the way to attack those repositories is not by grabbing the file; it is through the browser itself. File-level security is only part of the game, even if it is the easiest part to secure.

This extends to other areas as well. I exceedingly rarely click the "remember this password" button in anything I'm on. This includes things like the Gnome keyring. That kind of thing is not a good idea in general.

The closest I get is a text-file on one of these (now with linux support!), and even that is a compromise between having to memorize a lot of cryptographically secure passwords (long AND complex) and the least wince-worthy memory-jogging method. I can still describe several attack methods that could compromise that file, not least a clipboard logger or keylogger, or even a simple file-sniffer running in the background that drives through any mounted USB sticks. But for long work passwords I'll use maybe four or five times a year, yet still have to know, it's a compromise.

There are still some passwords I'll never write down outside of a password field: the god passwords, any password I use on a daily or even weekly basis (I use those often enough for true memorization), and passwords used for any kind of financial transaction. For those kinds of high-value passwords, the convenience of memory prosthetics doesn't enter into it.

Windows 7 and NetWare CIFS

| 4 Comments
Now that RC1 is out we're trying things. Aaaaaand it doesn't work, even when set to the least restrictive Lan Man authentication level. Also? Windows 7 has a lot more NTLM tweakables in the policy settings that we don't understand. But one thing is clear, Windows 7 will not talk to NetWare CIFS out of the box. The Win7 workstation will need some kind of tweaks.

I may need to break out Wireshark and see what the heck is going on at the packet level.


Life on the bleeding edge, I tell you.

Update: Know what? It was a name-resolution issue. It seems that once you went to the resource with its FQDN rather than the short name, the short names started working. Kind of odd, but that's what did it. A bit of packet sniffing may illuminate why the short-name method didn't work at first (it should) which just might illuminate either a bug in Win7, or a simple feature of Windows name-resolution protocols.

The only change that needed to be made was to drop the LAN Manager Authentication Level to not offer NTLMv2 responses unless negotiated.
An article over on Ars Technica goes into some detail about a dispute between two Firefox extensions that has gotten nasty. It seems that the extension environment inside Firefox (and that other Mozilla browser, SeaMonkey) is not sandboxed to any significant degree. The developer of NoScript was able to write in a complete disabling of the AdBlock Pro extension.

In some ways this reminds me strongly of how NetWare works. NetWare uses cooperative multitasking, rather than the preemptive multitasking used in pretty much every other modern server-class OS. This is part of the reason NetWare can squeeze out the performance it does under high load. The Firefox extensions run as children of the main Firefox process, can freely interact with each other, and when they crash hard can take the whole environment down with them.

Another way it reminds me of NetWare is the seeming lack of memory protection. In NetWare, unless you specifically declare that a module is to run in a protected memory space, it runs in the Kernel's memory space. This means that one program can access the memory of another program, so long as they're in the same memory space. This stands in contrast to other operating systems, which provide a protected memory space for each process. It sounds to me like Firefox has a single memory-space equivalent, and all processes have access to it. This allows the extension-war described above to happen without resorting to outright hacking around security features.
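The shared-space problem can be sketched in miniature: two "extensions" loaded into the same process, where nothing stops one from reaching into the other and switching it off. The class and names are invented for illustration; this is not how Firefox extensions are actually written.

```python
# Sketch of a shared, unprotected "extension space": both plugins live in
# one process (one interpreter here), so each can see and mutate the
# other's state directly. All names are hypothetical.

class Extension:
    def __init__(self, name: str):
        self.name = name
        self.enabled = True

    def run(self) -> str:
        return f"{self.name} running" if self.enabled else f"{self.name} disabled"

# Both extensions loaded into the same shared space.
adblock = Extension("AdBlock")
noscript = Extension("NoScript")

# With no isolation barrier, one extension can simply reach over and
# disable the other -- the NoScript-vs-AdBlock fight in a nutshell.
adblock.enabled = False    # "NoScript" flips "AdBlock" off directly

print(adblock.run())    # AdBlock disabled
print(noscript.run())   # NoScript running
```

With per-extension memory protection, that assignment would be a fault instead of a feature; without it, it's just another line of code.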

Moving away from a cooperative multitasking model made Windows a lot more stable (Win3.11 to Win95), as did the introduction of true memory protection (Win98 to Win2K). Memory protection is a major security barrier, and is something that Firefox seems to lack. If a Firefox install is unlucky enough to have an evil extension added to it, all the data that passes through that browser could be copied to persons nefarious, much like the Browser Helper Objects in IE.

Does it seem odd to you that I'm talking about Operating System protection features being built into browsers? These days browsers are in large part operating systems on their own, albeit ones missing the key feature of having exclusive control of the hardware.

These problems appear to be on the verge of changing. Both Chrome and IE8 launch each browser tab in its own process, which adds a barrier between processes spawned in those tabs. That way, when a Flash game starts consuming all available CPU, you can kill the tab and keep the rest of your browser session running. Unfortunately, process separation still won't stop NoScript from killing AdBlock. For that, more work needs to be done.
