July 2010 Archives

XKCD, let's see how we did...

[Image: XKCD University Website, annotated]

The blue underlines are items that are on our front page; the gold underlines are items that are linked directly from the front page. I happen to know you can get to some of the other items directly from the front page, but they're not labeled in any way that would lead you to expect to get there from the top. I marked those in psychic white.

University front pages are a mishmash of competing goals:
  • Current students looking for information
  • Marketing: attracting new students
  • Marketing: keeping Alumni engaged and giving
  • Marketing: attracting community interest in campus events
This is why we have a large, hard-to-miss link on the right side of the page for logging in to MyWestern. That's our portal, where more of the right-side Venn diagram stuff can be found.

Being the WTF person

At both this job and my last one I have ended up becoming the WTF person. The WTF person is the person people go to when things are acting strangely, they can't figure it out, and need another set of eyes. Preferably a set of eyes with a reputation for pulling rabbits out of hats.

WTF people are the kind of people who end up in level 2 or 3 tech support, because that's who you want at that level: people who solve weird stuff.

At a place like ours, where the support relationships are largely informal (at least among people who dink around with servers), the concept of L2 or L3 support doesn't really exist. It manifests as phone calls or emails from people with strange questions, looking for leads in their own inquiries. Or, in the case of my immediate co-workers, a head poked around the door and an, "I'm lost, can you take a look?"

As I alluded to before, becoming the WTF person takes time. You have to make some awesome saves so people notice, and then keep cracking weird, hard-to-describe problems. It helps a lot to have a deep understanding of the technology you work with. I suspect being ebullient about how you found a problem, and describing it once it's resolved, helps too.

Once you get there, though, you do get passed some strange, strange things. I've been asked for advice on figuring out how something broke in a very specific way when the symptoms described... have no causal relationship I can think of. I also get passed weird questions in areas I don't know much about (MS Office, for one), but at least those can be deflected.

Honest to goodness bugs are perhaps the hardest to figure out. These are problems that take a few conditions to set up, and it isn't always clear that those conditions are in place. This skill got a lot of work back when I was working on the OES2 SP1 beta. On software that's already been through a beta-test and perhaps a service-pack or two, the bug conditions can be very arcane.

One-man IT shops tend to attract WTF people, simply due to the breadth and complexity of the environment. People who thrive in such environments definitely are WTF people. They do a little bit of everything, which sets them up to make connections that other people miss.

At the other end of the IT spectrum, highly specialized IT people in large organizations, you still find WTF people. They're perhaps not as common, but they do exist. And strange but awesome synchronicities can occur if WTF people from different specialties start hammering on a problem together. This kind of thing sometimes happens when I talk to L2/3 vendor-support.

I'm proud to see this happen, even if in the moment I'm also going WTF?? in my head.

Legal rubber hose usage increases

According to The Register, UK police have increased their use of the power that allows them to compel the disclosure of crypto keys. That fancy duress key you put on your TrueCrypt volume is only good for earning you jail time. I've mentioned this before, but crypto is vulnerable at the end-points. If the government can point a loaded law at you to force you to reveal your keys, the strength of your convictions, not your crypto, is what is being tested. Perhaps that 2-5 year prison term is worth it. Or maybe not.

I take heart that a majority of those served with the demand notice have refused. But we still don't quite know what'll happen to them.

This is harder to pull off in the US thanks to the 5th Amendment, but there is nothing stopping this kind of thing off our shores. Or heck, at our borders.

Death of the Desktop (in the home)

The death of the desktop computer has been predicted for years, and yeah, I can see why. At home, we have one desktop. I'm not counting the servers I run in headless mode, otherwise the count would be higher, and I'm never at the keyboard on those anyway. The desktop gets used for a very few things:

  • That's where the budget is kept
  • PC Gaming
  • The few applications for which a huge screen and a mouse are a real benefit.
We use it on average 1.5 times a month, unless someone buys a new game, at which point usage will be fairly constant for about two weeks until interest is lost and we're back to 1.5 times a month. Our laptops do darned near everything we need, as fast as we need it, in a way that is comfortable. Plus, we can take them places.

At work I'm not giving one up, because I consciously made the mobility/performance tradeoff in favor of performance. I've got over 36" of linear monitor, and I use it. I have quite a lot of memory in there, as well as a quad-core CPU that gets well used, plus two hard drives, because I have needs. This is not a desktop; it's a workstation.

While the desktop may be mostly dead in the home, where the only serious niche keeping it going is PC gaming (and even that's changing thanks to consoles), it's going strong in the workplace.

Crossing the line

This evening I finally crossed the 10K reputation mark on ServerFault. The question whose answer brought me over the top?

Explaining modem squeals.

Kinda fitting really, since the modem is what brought me into the greater community of computer geeks. Awww.

Genomics is not source-code

A lot of the squee I've heard about the sequencing of the human genome, and the ever-dropping cost of completely sequencing a single genome, has been in the nature of "we're figuring out how nature programs biology!" This is true to a point, but the reality of programming new life, or new functions of life, is far, far in the future. Yes, we've already created artificial life, but it wasn't done with full understanding of the source code we used; we took the code that governs the functions we wanted and fitted the pieces together.


Biology is in some ways like computers in that there is a (presumably) deterministic process that governs the rules of how it works. It exists in a fundamentally chaotic environment, which makes extracting that determinism pretty hard. But we're sure there is a causal chain for most anything, if only we look hard enough. For computers we know it all end to end (we wrote the things, so we should), and we're only now getting to levels of complexity where these systems can mimic non-deterministic behavior. But if we dig down into the failure analysis we can isolate the root and contributing causes of the failure chain. We want to do that with biology.


We are far, far away from doing that.


Biology, up until the genomics 'revolution', has in large part been about describing the function of things. Our ability to stick probes in places has improved over time, which in turn has increased our understanding of how biology interacts with the environment at large. We've even made large-scale changes to organisms to see how they behave under faulty conditions, just so we can better figure out how they work. Classic reverse engineering, in other words. You'd think having access to the source code would make it all go much easier. But... not really.


Let's take an example: a hand-held GPS unit. This relatively simple device should be easy to reverse engineer. It has a simple primary function: provide a precise location. It has some ancillary functions, such as providing accurate time and a map of the surroundings. Ok.


After detailed analysis of this device we can derive many things:

  • It uses radio waves of a specific wavelength set to receive signals.
  • Those signals are broadcast by a constellation of satellites, and it has to receive signal from no fewer than three of them before it can work out a position.
  • The time provided is very stable, though if it doesn't receive signals from the satellites it will drift at a mostly predictable rate.
  • Which bits receive the satellite signal, since the device doesn't work if they're removed.
  • Where the maps are stored, since removing that bit causes it to not have any.
  • A whole variety of ways to electrically break the gizmo.
  • How it seems to work electrically.

Additionally, we can infer a few more things:

  • The probable orbits of the satellites themselves.
  • The math used to generate position.
  • The existence of an authoritative time-source.

Nifty stuff. What does the equivalent of 'genomics' give us? It gives us the raw machine code that runs the device itself. Keep in mind that we also don't know what each instruction does, and we don't yet have high confidence in our ability to discriminate between instructions. Most importantly, we don't know the features of the instruction-set architecture. There is a LOT more work to do before we can make the top-level functional analysis meet up with the bottom-level instructional analysis. Once the two do join up, we should be able to understand how the device fundamentally works.


But in the meantime we have to reverse-engineer the ISA, the processor architecture itself, the signal-processing algorithms (which may be very different from what we inferred in the functional analysis), how the device tolerates transient variability in the environment, how it uses data storage, and other such interesting things. There is a LOT of work ahead.


Biology is a lot harder, in no small part because it has built up over billions of years and the same kinds of problems have been solved any number of ways. What's more, there is enough error tolerance in the system that you have to do a lot of correlational work before you can tell what's signal and what's noise. Environment also plays a key role, which is most vexing, as environment is fundamentally chaotic and cannot be 100% controlled for.


We're learning that a significant part of our genome is dedicated to surviving faulty instructions in our genetic code, parts we hadn't realized were there before. We're learning ever more interesting ways that faults can change the effect of code. We're learning that the mechanics we had presumed existed for implementing that code are in fact wrong in small but significant ways. The work continues.


We may have the machine-code of life, but it is not broken down into handy functions like CreateRetina(). Something like that would be source-code, and is far more useful to us systemizing hominids. We may get there, but we're not even close yet.

On a Wednesday in August in 1996, the WWU NDS tree was born. There were other trees, but this is the one that everyone else merged into. The one tree to rule them all. That was NetWare 4. It brought the directory, and it was glorious (when it worked right).

And now, most of 14 years later, it is done. The last replica servers were powered off today after a two year effort to disentangle WWU from NetWare.

I have some blog-header text to change.
Today I'm spending most of the day sheepdogging a vendor installing an application. The vendor is VPNing in, and such access is a key part of the product's support contract.

This is something I've noticed recently. Several of the server-based, off-the-shelf apps I've installed lately have required that the vendor have access to the server in some way. Some of it is so they can do the install. Some of it is so they can update the app so we don't have to. Some of it is just in case we ever call for support and need their help.

I have a theory for why this is. I have a sneaking suspicion that it's because this is how these vendors support installs in environments where the sysadmin is a desktop person who got handed a server and was told, "make it work." This kind of vendor hand-holding lowers the ongoing maintenance burden of the application on the client side of the equation, which can lead to more sales. But I'm not sure if that's it or not.

This is causing some grumbling in the ranks, since it means untrusted parties have to be allowed to log in to servers in the domain. Before this recent spate of applications, vendors demanding such access had their apps relegated to servers not in the domain at all. This doesn't work when the app requires domain access. Console access to servers is a sensitive thing for us, so we don't like to hand it out on demand to vendors.

Especially when we weren't involved in the purchase process to begin with. Many a time we've been told:

Client: We spent umpty thousand dollars on this app. Install it.
Us: *reads install document, cringes* They need Administrator access to the whole box and a tunnel into the inner Banner fortress. I don't want to.
Client: What part of umpty thousand dollars don't you understand? Make it work.
Us to Management: Insecure! Violates best practices!
Management to Us: It's too late to get a refund, and upper management was involved in the decision. Make it work.
Us: Wilco.
Or words to that effect.

Ahem.

How are y'all handling this kind of thing, presuming you're also seeing it and it isn't just me getting lucky?

Special users in .EDU-land

We have a user type that is pretty much unique to the higher-educational world.

The Emeritus professor.

I'm unclear on what, exactly, Emeritus professors get in the way of continued access to WWU resources, but I do know they can have things like email accounts. As you can probably guess, this population is not the most technically savvy bunch. They also represent a unique population that requires a fair amount of exception processing in several of our procedures.

We've been in a multi-year process of eliminating the 'cc.wwu.edu' domain from campus usage. Way back in the beginning, all WWU email came from cc.wwu.edu. Then the Microsoft Mail system came in and Faculty/Staff moved to a different domain, but students stayed on cc.wwu.edu. When we upgraded from MS-Mail to Exchange 5.5, Fac/Staff moved to @wwu.edu instead, and that's where we are today. Students were migrated to '@students.wwu.edu' coming up on two years ago as part of the Windows Live @ EDU program. The only people left on @cc.wwu.edu were those who opted out of Exchange, preferring the purer text-mode email interface of pine over telnet/ssh.

Getting people off of cc.wwu.edu has been a long process. The fact that most of our Emeritus were over there, and had been there since time began, caused a fair amount of work. The fact that some professors had published articles and books with their @cc.wwu.edu addresses in them caused a fair amount of pain as well. We worked through them (thank you, ATUS!) and we're almost ready to turn cc.wwu.edu off for good.

So of course today I dig up another Emeritus with a Contact in Exchange that forwards to a cc.wwu.edu address.

Another area where Emeritus caused some pain is when we turned off our modem dial-up service. Our only consistent users were a small handful of faculty and Emeritus.

"Accounts for Life" is a tricky service to provide. Yes indeedy.
This question came up, and it got a long response out of me.

Question:
Taking in mind that, being deterministic machines, today's computers are incapable of producing random sequences and all computer-generated "random" sequences are actually pseudo-random, aren't computer-generated random passwords insecure? Isn't it more secure to just press keys randomly to create a random password than to use a digital generation algorithm?
The idea behind this question seems sound: since deterministic processes return consistent results, password generators do not return truly random passwords and are therefore not good (or, in the words of this poster, insecure). Unfortunately, it shows a lack of awareness of just what constitutes a good password.

Good passwords come in many flavors. Passwords that humans never have to type (such as those attached to batch processes) can be much longer and more complex than the kind humans have to memorize. Also, if you're dealing with a character limit on passwords, a good password on such a system will look different from a good password on a system without a size limit.

Over the years we've learned what a good password looks like:
  • Long enough to make randomized guessing non-viable.
  • High enough in entropy that each character is largely independent of the other characters in the password.
  • Able to be memorized.
There are also system-specific variables for what constitutes a good password.
  • For systems with 8-character limits on passwords (old crypt-based *nix password systems), high entropy is required.
  • For systems with well-known methods of attacking passwords below a certain length (LanMan and NTLM, but not NTLMv2), greater length is required.
In brief, length increases the number of possible passwords that have to be brute-force checked, and entropy increases the number of passwords that have to be checked at a given length. Perfect entropy (i.e. complete randomness) of course maximizes the number of passwords that have to be brute-forced at any given length. Unfortunately, perfect entropy always fails the memorization test of a 'good password' for lengths much above 7.

Which is a long way of saying that perfect entropy is not required to have a good password. In fact, if your password needs to be used on a system with known attacks against short passwords (Windows), length is more important, since most normal humans can't memorize 16+ characters of fully random text drawn from all four character sets. Especially if password rotation is in force and they have to do it a couple of times a year.
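
To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the character-set sizes are approximations for illustration, not a recommendation):

    import math

    def entropy_bits(charset_size, length):
        # A fully random password of this length has charset_size ** length
        # possibilities; its entropy in bits is the log2 of that count.
        return length * math.log2(charset_size)

    # 8 random characters drawn from ~95 printable ASCII characters:
    print(round(entropy_bits(95, 8), 1))   # ~52.6 bits
    # 16 characters drawn only from the 26 lowercase letters:
    print(round(entropy_bits(26, 16), 1))  # ~75.2 bits

The longer password drawn from the smaller character set still presents the larger search space, which is why length wins when the system can handle it.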

And a final point about the determinism of generated passwords. A deterministic process can't introduce more entropy than it received as input; this is true. When generating a 40-character password, a password generator can use a smaller number of truly random bits (thank you, /dev/random) to generate it. The amount of entropy in the password will never exceed the number of bits pulled from the random source, so long as an attacker knows what algorithm generated the password. If an attacker doesn't know what algorithm was used, then the dependency of one character on another is unknown, so the password will have high apparent randomness. This is called pseudo-randomness, and it is how /dev/urandom and most hashing algorithms work.
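
As a minimal sketch of that idea (this uses Python's secrets module, which draws on the operating system's randomness pool; it's an illustration, not any particular generator discussed here):

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits

    def generate_password(length=16):
        # Each character choice consumes OS-provided randomness (the same pool
        # behind /dev/urandom), so the deterministic expansion step never has
        # to invent entropy it wasn't given.
        return ''.join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())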

Since perfect entropy is not required to generate a good password, deterministic password generators can produce perfectly fine passwords. So long as they have a good source of entropy to seed their processes with.

The perils of event-log tracking

Over half a year ago I mentioned that I was going to write up a script that sucks down Windows security log info and deposits the summarized data into a database. That script has been pretty much done for several months now, and the data is finally getting some attention on campus. And with attention comes...

...feature requests.

It's good to have them, but tricky to deal with. There is a quirk in how these logs are gathered and how Microsoft records the data.
  • Login events record the User and the IP address the request came from. The machine they're logging into is the Domain Controller in this case.
  • Account Lockout events record the machine that performed the lockout. This would be things like their workstation, or the Outlook Web Access servers.
Since we use DHCP on campus, the machine-to-IP association is not static. Our desktop support folks like having the machine name wherever possible, with machine name AND IP being preferred. This will prove tricky to provide. What doesn't help is that a lot of login events come from IPs that trace to our hardware load balancers, which generally means a login via LDAP by way of our single-sign-on product (CAS). That's not a domained machine, obviously.
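
To make the quirk concrete, here's roughly the shape of what the summarizer ends up with for each event type (the field names and values are hypothetical, not what the actual script uses):

    # Hypothetical summarized records, one per event type.
    login_event = {
        "user": "jdoe",
        "source_ip": "10.12.34.56",   # what the Domain Controller records
        "logged_by": "DC01",          # the DC that saw the login
        "count": 12,
    }

    lockout_event = {
        "user": "jdoe",
        "caller_machine": "OWA-FE2",  # the machine that tripped the lockout
        "count": 1,
    }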

There are a couple of ways to solve this problem, but none of them are terribly good.
  • During event-parse, use WMI to query the IP address to find out what it thinks it is. Should work fast for domained machines, but will be horribly slow for undomained machines.
  • Use a reverse IP lookup. Except that we use BIND for our reverse-DNS records, so we'll get a lot of useless dhcp134-bh.bh.wwu.edu-style names that won't correspond to the 'machine login' events I'm already tracking.
  • Do DB queries to find out which machine logged in with that IP address most recently and use that. But all those lookups will HORRIBLY slow down parsing, even if I keep a lookup table during the parsing run.
Slowing down parsing is not a good thing, since we chew through between 70K and 300K events every 15 minutes during busy periods. Even small efficiency dings can be horribly magnified in such an environment.

Nope, looks like the best solution here is to make the IP address a clickable link that leads to another query that'll populate with the most recent machines to hold that IP address. If any show up at all.
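
The query behind that link could be something like this (table and column names invented for illustration):

    # Hypothetical lookup behind the clickable IP link: which machines most
    # recently reported a 'machine login' from this address?
    RECENT_MACHINES_SQL = """
        SELECT machine_name, MAX(event_time) AS last_seen
        FROM machine_logins
        WHERE source_ip = ?
        GROUP BY machine_name
        ORDER BY last_seen DESC
        LIMIT 10
    """

    def recent_machines_for_ip(db_conn, ip):
        # Runs only when someone clicks the link, so parsing speed is unaffected.
        return db_conn.execute(RECENT_MACHINES_SQL, (ip,)).fetchall()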

I wonder what they'll ask for next?

Password policies in AD

One of the more annoying problems with password and account-lockout policies in Active Directory has been that they apply to every account universally. If you want to force your users to change passwords every 90 days, with account lockout after a certain number of bad login attempts, then the same policies apply to your 'Administrator' user. Account lockout was a really great way to DoS yourself in really critical ways.

In a way, that's what account lockout is all about. It's there to keep bad people from getting in, but it's also a way for bad people to prevent legitimate people from using their own accounts. You need to take the good with the bad.

Since we were a NetWare shop for y-e-a-r-s, we're very used to Intruder Lockout (ILO), and losing it during the move to Windows was seen as a loss of a key security feature. We had accounts that had to be exempted from lockout, which was dead easy in eDirectory but very difficult in AD.

Happily, Server 2008 introduces a way to do this. It's called "Fine-Grained Password Policy", and it is NOT group-policy based, which was somewhat surprising. Getting it requires raising the domain and forest functional levels to the 2008 level. What it allows is setting password policy based on group membership, with conflict resolution handled by a priority setting on the policy itself. Interestingly, the actual policies are created through ADSI Edit, so they're not beginner-friendly.

For instance, we can apply a 'lock out after 6 tries in 30 minutes' policy to the Domain Users group at a priority of 30, and a second 'never lock out ever' policy to the Domain Admins group at a priority of 20. That way 'Administrator' gets the never-lock policy, but Joe User gets the lock-after-6-in-30 policy. This works best if the password policy specifies that Domain Admins need to have very complex and long passwords, which makes a brute-force cracking attempt take an unreasonably long time.
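
As a toy model of the conflict resolution (not how AD implements it internally, just the rule the example above implies: of the policies that apply, the one with the lowest priority number wins):

    # Hypothetical (policy_name, priority, groups_it_applies_to) tuples.
    POLICIES = [
        ("lock-after-6-in-30", 30, {"Domain Users"}),
        ("never-lock-out",     20, {"Domain Admins"}),
    ]

    def effective_policy(user_groups):
        # Gather every policy that applies to any of the user's groups,
        # then pick the one with the lowest priority value.
        applicable = [p for p in POLICIES if p[2] & user_groups]
        return min(applicable, key=lambda p: p[1]) if applicable else None

    print(effective_policy({"Domain Users", "Domain Admins"}))  # never-lock-out wins
    print(effective_policy({"Domain Users"}))                   # lock-after-6-in-30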

We put this in place a few weeks ago, and it is working as we expected. SO GLAD to have this.