October 2011 Archives


Today I had an incident with biometrics that further convinces me that they are not the end-all be-all of security.

Today I had to head up to our datacenter to do some nebulous "things". Like you do. Since we colo at a large facility that has all of those security certifications, getting into it is something of a trial but a familiar one. Hand scans, double-layered man-trap, the whole deal.

Only, the hand-scanner on the cage our racks are in wouldn't read my hand today. It took 19 tries before it decided I was me. To get this far I had passed four other hand-scanners and only had to re-enter three times along the way. When I went back to the security station to see WTF, they had me re-scan my hand. Leaning over, I saw that I had managed to fill their live log of entry/exit events with red events, and they didn't seem fazed in the least.

I've had some trouble with this particular hand-scanner before, enough that I dread leaving the cage. For whatever reason, this particular scanner is far enough out of spec that the fuzzy results it returns are fuzzier than their system will accept. Since it's on a rack cage rather than on the higher-trafficked man-trap scanners, it hasn't been caught out and re-tuned (or something). But still, I hate that thing.

Conference and company t-shirts

T-shirts can be a problem. As anyone who has ever had to order a bunch of shirts for some charity function knows, predicting the sizes of people is hard. A few smalls, some mediums, a lot of larges, some XL and XXL, with a few XXXL for good measure. Or worse, just go with a lot of XLs since "that'll fit everyone".

I've known many techies over the years who wear a size 50 suit (if not larger), and they do gripe about shirt-size availability. With the advent of online registration and a "Shirt size" field on conference registration forms, the shirt-ordering process can move from the buggy statistical model to the 'give them what they asked for' model. No need to worry about getting 15 or 25 4XL shirts and having a lot of massive leftovers; just order what they asked for. Problem solved!
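The 'give them what they asked for' model really is just a tally. A quick Python sketch, with a made-up registration list standing in for real form data:

```python
from collections import Counter

# Hypothetical registration data: one requested shirt size per attendee,
# exactly as collected by the registration form.
registrations = [
    "M", "L", "L", "XL", "S", "XXL", "L", "M", "4XL",
    "XL", "L", "W-M", "W-L", "XXL", "L", "M",
]

# Tally the requested sizes and order exactly that,
# instead of guessing a distribution.
order = Counter(registrations)

for size, count in sorted(order.items()):
    print(f"{size}: {count}")
```

No statistical model, no leftovers beyond a small safety margin if you choose to add one.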

Unfortunately, this is a much harder problem for women. I know several women who have a bust size north of 50 inches, and they're also the type who go to technical conferences. The same XXL shirt on the 50-inch chest men does not fit nearly as well on a woman. The shoulders droop, there is a complete absence of shaping, and the shirt is likely to be far too long. If that shirt gets worn, it'll be worn at the conference and then binned.

But even offering women's sizes is no panacea. A great breakdown of why this is can be found here:


As an example taken straight from the wiki article, take a look at ThinkGeek's sizing chart:


The 50-inch chest woman is still out of luck, since the largest ThinkGeek offers is 42 inches.

American Apparel's size chart is somewhat better at 46 inches, but a 50-inch-chest woman attempting to wear a shirt like that will end up with an over-emphasized bust.

Larger shirts can be found, and it reflects well on the conference when the swag it provides is something attendees actually want to be seen wearing.

Batch jobs in PowerShell

Say you want to execute a series of commands on a bunch of Windows machines and your AV considers 'psexec' to be malware. What are you to do? Remote PowerShell can be used, and what's more, it can be done in parallel. Say you've got to run a malware cleanup script on 1200 computer-lab machines RIGHT NOW; you can't just put it in a scheduled task in a GPO and wait for the GPOs to apply.

$JobTracker = New-Object System.Collections.Hashtable
$Finished   = New-Object System.Collections.Hashtable

# Launch one background job per machine. -AsJob returns immediately,
# so all of the jobs run in parallel.
foreach ($Mcn in $MachineArray) {
    $JobTracker["$Mcn"] = Invoke-Command -AsJob -ComputerName "$Mcn" -ScriptBlock {
        $ProcList = Get-WmiObject Win32_Process
        foreach ($Proc in $ProcList) {
            if ($Proc.Name -eq "evil.exe") { $Proc.Terminate() }
        }
        Remove-Item C:\windows\system32\evil.exe
    }
}

# Poll until every job has finished, cleaning up as we go.
$JobCount  = $MachineArray.Count
$Completed = 0
while ($Completed -lt $JobCount) {
    foreach ($Mcn in $MachineArray) {
        if (($JobTracker["$Mcn"].State -eq "Completed") -and (-not $Finished.Contains("$Mcn"))) {
            Remove-Job -Id $JobTracker["$Mcn"].ID
            $Finished["$Mcn"] = $true
            $Completed++
        } elseif (($JobTracker["$Mcn"].State -eq "Failed") -and (-not $Finished.Contains("$Mcn"))) {
            $FailReason = $JobTracker["$Mcn"].ChildJobs[0].JobStateInfo.Reason
            Write-Warning "$Mcn failed: $FailReason"
            Remove-Job -Id $JobTracker["$Mcn"].ID
            $Finished["$Mcn"] = $true
            $Completed++
        }
    }
    Start-Sleep -Seconds 1
}

Heck, psexec can't be run in parallel like this, so this is even better!

What this does: each machine gets its own background job that kills any running evil.exe and deletes the file, while the loop at the bottom polls the job list once a second, cleaning up completed jobs and logging a warning for any machine that failed.

Strong passwords, an update

Five years ago I published the following article:

Strong passwords in a multiple authentication environment.

The key point I was driving at in that article still holds: a strong password on one system is not a strong password on another, and this can significantly compromise password security when multiple authentication systems are in use.

That article was full of Solaris 8 and NDS. Here in 2011 those are now really old. What has changed since then?

  • Old Samba versions that don't support NTLMv2 are now very rare.
  • Most modern Samba now includes Kerberos support.
  • LM/NTLM requiring Windows installs are now very few and far between.
  • Linux can now leverage both LDAP and AD for back-end authentication, and such hooks are common and pretty well documented.
  • Web-authentication systems (OpenID) are now much more common.
  • Application-level auth is much more common and the data it protects much more significant.
Because of all of these, password length is beginning to trump password complexity as the surest bet for an uncrackable password. Those 8-character limits of yore are now blessedly rare.

However, some things still haven't changed:

  • Older software with embedded authentication still can require older password protocols.
  • Some IDM systems force passwords to be command-line safe, which restricts the allowable special-characters that may be used.
  • Embedded devices, especially very expensive embedded devices, can require old password protocols long after they've been superseded. 
  • The cost of paying off the IT Debt built into some IDM systems can prevent newer authentication systems from being implemented due to simple resource costs, keeping older protocols around longer than is safe.
  • Arbitrarily short field-limits in databases that store passwords (16 characters should be good enough for everyone, obviously).
  • Developers who decide to write their own authentication systems from scratch rather than hook into something else that's been battle-tested.
So, even now, today, in 2011, the bad decisions of ten years ago, or the hard-to-update technology of ten years ago, can significantly hamper a unified password policy for a multi-authentication system. That hasn't changed. It's all well and good that Linux (which replaced your Solaris installs three years ago) can support 200 character passwords, but that doesn't matter if the custom-built ERP application has 10-character passwords baked into its core.

However, another trend has continued since 2006: web-apps have continued to eat client/server apps for lunch.

With web-apps the option of leveraging different authentication system, or at least providing an abstraction layer to hide the old cruft from view, is possible. Perhaps there is now a web-app in front of the custom-built ERP system, put there so everyone could stop maintaining all of those terminal programs and ease the VPN/home-computer problem (everyone has a browser). That web-app could very well use an alternate authentication source, such as LDAP, and use the LDAP database itself to store the (highly entropic and automatically rotated frequently) authentication tokens needed for the old system.
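As a sketch of what that back-end credential might look like (Python here purely for illustration; the 10-character limit and the generator function are my own assumptions, not any particular product's API):

```python
import secrets
import string

# Hypothetical back-end token generator: the web front-end authenticates
# users against LDAP, then logs into the legacy system on their behalf
# using a machine-generated credential stored in the directory. The old
# system caps passwords at 10 characters, so we pack as much entropy as
# we can into that budget and rotate the token frequently.
LEGACY_MAX_LEN = 10  # assumed limit baked into the old system
ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def new_legacy_token(length=LEGACY_MAX_LEN):
    """Return a random token that fits the legacy password field."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

token = new_legacy_token()
print(token)  # 10 chars of [a-zA-Z0-9]: roughly 59 bits of entropy
```

No human ever types this token, so it can be fully random and rotated on a schedule; the human-facing password lives in LDAP where the policy can be as long as you like.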

With a system like that, such an old system can still be protected by an enterprising user who has selected this:

valence NIMBOSE sequestrate absolution [953]
as their passphrase. Four dictionary words, no funny spellings, four character sets.
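For illustration, a toy generator in the same shape as that passphrase (Python; the short word list is a stand-in for a real dictionary file such as /usr/share/dict/words):

```python
import secrets

# Stand-in word list; a real generator would read a dictionary file
# or a diceware list with thousands of entries.
WORDS = ["valence", "nimbose", "sequestrate", "absolution",
         "granite", "obverse", "lanyard", "tessellate"]

def passphrase(n_words=4):
    """Four random words, one uppercased at random, plus a short number."""
    words = [secrets.choice(WORDS) for _ in range(n_words)]
    shout = secrets.randbelow(n_words)      # pick one word to capitalize
    words[shout] = words[shout].upper()
    return " ".join(words) + f" [{secrets.randbelow(1000)}]"

print(passphrase())
```

Easy to remember, easy to generate, and the randomness comes from the OS's CSPRNG rather than the user's imagination.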

Is that a good password? Consider this: The Oxford English Dictionary has over 600,000 words in it. A four-word uncased pass-phrase using random words requires 1.296x10^23 guesses to brute-force, since each position has 600K possibilities. An 8-character mixed-case password requires 5.346x10^13 guesses; add in numbers and you rise to 2.183x10^14.
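The arithmetic is easy to check (Python sketch):

```python
# Brute-force keyspace sizes for the cases above: a four-word uncased
# passphrase drawn from the full OED, versus short random passwords.
phrase_guesses = 600_000 ** 4   # four words, 600K choices per position
mixed_case_8   = 52 ** 8        # 8 chars, upper + lower case letters
alnum_8        = 62 ** 8        # add the ten digits

print(f"{phrase_guesses:.3e}")  # 1.296e+23
print(f"{mixed_case_8:.3e}")    # 5.346e+13
print(f"{alnum_8:.3e}")         # 2.183e+14
```

Nine orders of magnitude is the difference between "crackable this decade" and "not worth trying".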

With GPU-based password crackers available now, even fully randomized, salted 8-character passwords are not that good. This is the march of technology. As has been pointed out, adding ever more random ASCII to a password doesn't scale. Pass phrases? Good things!

But if you restrict the word-set only to words in common usage and exclude words only used in technical settings, the word-set drops precipitously to 20K or less. Even so, a four-word uncased passphrase requires 1.6x10^17 guesses. Throw in some irregular casing and the total goes up markedly.
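Running the same arithmetic on the restricted word-set (Python sketch; the doubling per word for casing assumes each word is either all-lowercase or ALL-CAPS, as in the example passphrase):

```python
# Guess counts for a four-word passphrase drawn from a smaller pool
# of ~20,000 common-usage words.
common_words = 20_000
uncased = common_words ** 4          # all-lowercase words

# Irregular casing: two variants per word doubles the candidates
# at each of the four positions.
cased = (2 * common_words) ** 4

print(f"{uncased:.1e}")  # 1.6e+17
print(f"{cased:.2e}")    # 2.56e+18
```

Even the pessimistic 20K-word case still beats the fully random 8-character alphanumeric password by three orders of magnitude.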

Which reminds me, I need to spend some time going over our own developed software for password-handling safety. You should too!

Old phrases and drinkware

A while back, this mug was gifted to me. Appropriate, really:


It has survived long use, both at work and at home. Unfortunately, it is dying:


The handle is about to break off. I'm not sure what triggered that, since I have other mugs three times as old as this one that are still going strong, but there it is. The beginning of a crack cascade that'll end with a handle-free mug.

I could replace it, but I'm not going to.

You see, the phrase RTFM was what we used to say on the pre-Google internet when someone was asking stupid questions. "READ THE FUCKING MANUAL", or when being polite, "read the free/fine manual". In those elder days, manuals were the most available repository of technical knowledge the average technical worker had. CompuServe or Usenet helped with strange edge cases, but really, the manual was where it was at. Want to know how to turn on the TCP stack on a NetWare 3.11 server? Read the bloody manual; it's right there in easy-to-follow steps. This phrase is also why the de facto repository of Usenet newsgroup FAQs was on a server called "rtfm" somewhere at mit.edu.

These days LMGTFY has replaced RTFM. There is even a web page to really drive home the point. The One True Repository Of All Technical Knowledge is no longer the vendor-supplied manual; it is random blog postings from people solving nearly the same problem, and the vendor knowledge-base (if there is one). Manuals are still helpful, especially in their Installation and Administration Guide formats, but they're mere pamphlets next to the manuals of old. If you've ever seen a complete printed manual-set for NetWare 4.11, you know of what I speak.

We'll see if LMGTFY makes it onto a ThinkGeek mug sometime in the next couple years.

Facing unreasonable requests

Over on ServerFault we had a question become rather hot lately. The key part:

We received an interesting "requirement" from a client today.

They want 100% uptime with off site fail over on a web application. From our web apps viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers etc.

However, from a networking issue I just can't seem to figure out how to make it work.
100% uptime? Anyone who has been in this business for a while knows that 100% either doesn't exist, or only exists when examining small timescales. Our eDiscovery hosting platform had 100% uptime... for September. For the quarter? No, we had a well announced major outage in August while we relocated some servers to a new rack and performed network-backbone upgrades.
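To put numbers on that: an uptime percentage only means something over a stated window. A quick Python sketch of the downtime budget each tier allows per 30-day month:

```python
# Downtime allowed per 30-day month at various uptime tiers.
MONTH_SECONDS = 30 * 24 * 3600

for pct in (99.0, 99.9, 99.99, 100.0):
    allowed = MONTH_SECONDS * (1 - pct / 100)
    print(f"{pct:>6}% uptime -> {allowed / 60:8.2f} minutes of downtime/month")
```

99.9% buys you about 43 minutes a month to work with; 100% buys you exactly zero, which is why nobody sane signs up for it over a long timescale.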

As it happens, we faced a similar requirement from a potential client a while back. They demanded a full refund of that month's fees if the service was unavailable to their people at any time someone tried to use it. I wasn't directly involved in those negotiations, but I saw that as a nice opening offer in a negotiation rather than an ultimatum. Unless you've got boilerplate service-contract language they're trying to amend, I believe 'initial position' is the best way to frame these sorts of "requirements".

This particular client had scoped their 100% uptime requirement (a single month), provided a service metric (any time they notice it is down), and named a penalty (we don't get paid). Clearly they had thought this out. I personally wanted to know what they thought of planned and announced service outages, and eventually the response came back as "same as unplanned". OK, that's an initial position.

At that point we could:

  • Dicker about the scale of the penalty (pro-rate for any hour/day/week an outage is noticed?)
  • Dicker about planned outages that they can clearly work around?
  • Dicker about using a third party to assess downtime rather than be purely defined by the client?
  • Dicker about downtime attributable to forces beyond our control (such as something screwy happening route-wise in the Internet core, as has happened a few times, or their own firewall going fritzy)?
Uptime requirements from paying customers are nothing new, but this is also the kind of thing that can show up in internal SLA negotiations at large organizations. As State budgets continue to shrink, IT charge-back schemes are becoming more and more common in areas that previously didn't have them, so these kinds of demands can arise from internal customers just as often.

The best defense is to have downtime concerns addressed in your boilerplate service contract, much like Amazon does. If an entity wants special treatment, they can work to have a special contract written up, but that's a lot of pushing boulders uphill, so only the specialest of snowflakes do this kind of thing. But if you don't have boilerplate, be prepared to dicker over their initial position.