
Security profiling: TSA


Being of a gender-nonconforming nature has revealed certain TSA truths to me.

Yes, they do profile.

It's a white-list, unlike the police profiling that gets people into trouble. There is a 'generic safe-traveler' that they compare everyone to. If you conform, you get the minimum screening everyone gets. If you don't conform, you get some extra attention. Some ways to earn extra attention:

  • Don't look like your government ID.
  • Wear your hair up, or in braids (they've seen those kung-fu movies too).
    • Yes, they put their gloved hands in your hair and feel around. Anyone with dreads knows this all too damn well.
  • Fly with a name other than the one on your government issued ID.
  • Have body-parts replaced with things, such as a prosthetic leg, or knee (if going through metal detectors).
  • Have junk when there shouldn't be junk (or so they think).
  • Have breasts when there shouldn't be breasts (or so they think).
  • Have breast prostheses instead of actual breasts (mastectomy patients love this).
  • And many more.

Here is an exercise you can try the next time you fly in the US. When you get to the other side of the scanner (this only works for the porno-scanners, not the metal-detectors), while you are waiting for your stuff to come out of the X-ray machine, look at the back of the scanner. Watch the procedure. Maybe put your shoes on slowly to catch it all. You'll notice something I've noticed:

There are always two officers back there, a man and a woman. When someone steps in to get scanned, they either have to hit a button to indicate the gender of the person being scanned, or are presented with a side-by-side of both genders and have to choose which to look at. They have a second, maybe two, to figure out which body baseline to apply to you, and those of us who are genderqueer confuse things. I fail the too-much-junk test all the time and get an enhanced patdown on my inner thighs.

Yes, but with PreCheck you can skip that.

This actually proves my point. By voluntarily submitting to enhanced screening, I can bypass the flight-day screening annoyances. It's admitting that I no longer fit the profile of 'generic safe traveler' and need to achieve 'specific safe traveler' status. That, or I can have my bits rearranged and conform that way. Whichever.

Encryption is hard


I've run into this workflow problem before, but it happened again so I'm sharing.

We have a standard.

No passwords in plain-text. If passwords need to be emailed, the email will be encrypted with S/MIME.

Awesome. I have certificates, and so do my coworkers. Should be awesome!

To: coworker
From: me
Subject: Anti-spam appliance password

[The content can't be displayed because the S/MIME control isn't available]

Standard followed, mischief managed.

To: me
From: coworker
Subject: RE: Anti-spam appliance password
Thanks! Worked great.

To: coworker
From: me
uid: admin1792
pw: 92*$&diq38yljq3


Encryption is hard. It would be awesome if a certain mail-client defaulted to replying-in-kind to encrypted emails. But it doesn't, and users have to remember to click the button. Which they never do.
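Mechanically, telling an encrypted message from a plaintext one is trivial; what's missing is a client that acts on it. As a sketch of what a "reply in kind" check could look for, here's a detector using Python's stdlib `email` module (the sample message and function are illustrative, not any particular mail client's logic):

```python
from email import message_from_string
from email.message import Message

def is_smime_encrypted(msg: Message) -> bool:
    """True if the message body is S/MIME enveloped (i.e., encrypted) data."""
    # Encrypted S/MIME bodies use application/pkcs7-mime with
    # smime-type=enveloped-data (RFC 8551); older clients used the
    # x-prefixed media type.
    if msg.get_content_type() in ("application/pkcs7-mime",
                                  "application/x-pkcs7-mime"):
        return msg.get_param("smime-type") == "enveloped-data"
    return False

raw = (
    "From: me\n"
    "To: coworker\n"
    "Subject: Anti-spam appliance password\n"
    'Content-Type: application/pkcs7-mime; smime-type=enveloped-data;'
    ' name="smime.p7m"\n'
    "\n"
    "MIIB...base64 envelope goes here...\n"
)
print(is_smime_encrypted(message_from_string(raw)))  # True
```

A client that ran a check like this on the message being replied to could default the reply to encrypted, or at least nag before sending plaintext.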

The number one piece of password advice is:

Only memorize a single complex password, use a password manager for everything else.

Gone is the time when you can plan on memorizing complex strings of characters using shift keys, letter substitution and all of that. The threats surrounding passwords, and the sheer number of things that require them, mean that human fragility is security's greatest enemy. The use of prosthetic memory is now required.

  • It could be a notebook you keep with you everywhere you go.
  • It could be a text file on a USB stick you carry around.
  • It could be a text file you keep in Dropbox and reference on all of your devices.
  • It could be an actual password manager like 1Password or LastPass that installs in all of your browsers.
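One nice side effect of prosthetic memory: since nothing has to be memorized, every stored password can be long and fully random. A minimal generator using Python's `secrets` module (the length and alphabet here are arbitrary choices for illustration, not any manager's defaults):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password; nobody has to memorize it, so go long."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```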

There are certain accounts that act as keys to other accounts. The first accounts you need to protect like Fort Knox are the email accounts that receive activation messages for everything else you use, since that vector can be used to gain access to those other accounts through the 'Forgotten Password' links.


The second set of accounts you need to protect like Fort Knox are the identity services other sites use so they don't have to bother with user account management: all those "Log in with Twitter/Facebook/Google/Yahoo/Wordpress" buttons you see everywhere.


The problem with prosthetic memory is that, to beat out memorization, it needs to be everywhere you ever need to log into anything. Your laptop, phone and tablet can all use the same manager, but that breaks down when you go to a friend's house and hop on their living-room machine to log into Hulu-Plus real quick, since you have an account, they don't, and they have the awesome AV setup.

It's a hard problem. Your brain is always there; it's hard to beat that for convenience. But it's time to offload that particular bit of memorization to something else; your digital life and reputation depend on it.

Fingerprinting your way to security


The NSA Raccoon is mostly right in this one:

For two reasons.

Reason 1: You (probably) don't own your phone

If you haven't rooted your phone, you don't really own it. While Apple says the fingerprint images are not uploaded to the borg collective, they're just one network hop away should the collective decide it really needs them. A phone is a data-network connected device, after all, and the wireless carrier's customizations make it a lot easier to get local access to the device if they want to.

Reason 2: Rooting won't help much either

Among the many disclosures since Snowden started his leaks is that law enforcement has a dirty-tricks bag deep enough to get into practically anything once they decide they need in. If it's got a data connection, and you haven't taken specific steps to keep others out, they'll still get in.

That said, if there is one thing we've learned from all of this, it's that there are massive databases at work here, and databases are only as good as the collected data and its indexes. Since fingerprint images aren't uploaded to the borg collective, they're not going into the secret national identity databases. Collecting that information will still take active effort.

Fingerprint readers are nothing new:


My work laptop has one, and they've been on business laptops for years. What's new is that they're now on a data-connected device the habitual users don't own or manage, one that's tightly integrated into the OS (unlike my laptop, which runs Linux; the reader has never worked for me).

What I'd like to know is where was the hue and cry over Android's face-lock ability?

That's another extremely useful biometric, and one that's arguably even more damaging than fingerprints: facial recognition, in all of its crappy implementations, has been used by law enforcement for a decade now. A good face-capture allows facial recognition to work against random video feeds of public places to track individuals, something the ever-expanding CCTV network enables.

Fingerprint readers tied to the borg collective will tie individuals to specific high value locations where fingerprint collection is deemed worth it.

Face-capture readers tied to the borg collective will tie individuals to low value public locations.

One of these is more likely to be deemed a serious invasion of privacy than the other, and it isn't the technology Apple just added.

This is a goal that many sysadmins aspire to. On a new person's first day on the job, they have a computing asset upon which they can work, and all of their accounts are accessible and configured so they can do their work. No friction start. Awesome.

How this worked at WWU when I was there:

  1. HR flagged a new employee record as Active.
  2. Nightly batch process notices the state-change for this user and kicks off a Create Accounts event.
  3. CreateAccounts kicks off system-specific account-create events, setting up group memberships based on account type (Student/Faculty/Staff).
    1. Active Directory
    2. Blackboard
    3. Banner
    4. If student: Live@EDU.
    5. Others.
  4. User shows up. Supervisor takes them to the Account Activation page.
  5. User Activates their account, setting a password and security questions.
  6. Automation propagates the password into all connected systems.
  7. User starts their day.

It worked. There was that activation step, but it was just the one, and once that was done they were all in.

This worked because the systems we were trying to connect all had either explicit single-sign-on support, or were sufficiently scriptable that we could write our own.
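The batch job at the heart of that flow is conceptually simple. Here's a hypothetical sketch in Python — the system names and record fields are made up for illustration, not WWU's actual code:

```python
# Which system-specific create events fire, by account type.
# (Illustrative; the real mapping was driven by the HR record.)
ACCOUNT_SYSTEMS = {
    "Student": ["ActiveDirectory", "Blackboard", "Banner", "Live@EDU"],
    "Faculty": ["ActiveDirectory", "Blackboard", "Banner"],
    "Staff":   ["ActiveDirectory", "Blackboard", "Banner"],
}

def create_accounts(user: dict) -> list[str]:
    """Fan out system-specific account-create events for one user."""
    return [f"{system}:{user['uid']}" for system in ACCOUNT_SYSTEMS[user["type"]]]

def nightly_batch(hr_records: list[dict]) -> dict[str, list[str]]:
    """Notice records newly flagged Active and kick off Create Accounts."""
    return {
        r["uid"]: create_accounts(r)
        for r in hr_records
        if r["status"] == "Active" and not r.get("provisioned")
    }

print(nightly_batch([{"uid": "jdoe", "type": "Staff", "status": "Active"}]))
# {'jdoe': ['ActiveDirectory:jdoe', 'Blackboard:jdoe', 'Banner:jdoe']}
```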

I'm now working for a scrappy startup that is very fond of web-applications, since it means we don't have to maintain the infrastructure ourselves. The above workflow... doesn't exist. It's not even possible. Very roughly, and with the details filed off:

  1. Invite user to join Google Apps domain.
  2. Wait for user to accept invite and log in.
  3. Send invites from 3 separate web-applications that we use daily.
  4. Wait for user to accept all the invites, create accounts, and suchlike.
  5. Add the new accounts to app-internal groups.
  6. Send invites from the 2 web-applications with short "email verification" windows when the user is in the office and can get them.
  7. Add the new accounts to other app-internal groups.

The side-effects of all of this are that the user has an account before their official start date, doesn't get all of their accounts until well after 8am, and admin users have to do things by hand. Of those 5 web-apps, only 2 have anything even remotely resembling an SSO hook.

There is an alternate workflow here, but it has its own problems. That workflow:

  1. Hard-create the new user in Google Apps and put them in the 2-factor authentication bypass group. Write down the password assigned to the user.
  2. Login as that user to Google Apps.
  3. Invite the user to the 5 web-applications.
  4. Accept the invites and create users.
  5. Add the new account to whatever groups inside those web-apps.
  6. New user shows up.
  7. Give user the username and password set in step 1.
  8. Give user the username and password for everything created in step 4.
  9. Walk the user through installing a Password Manager in their browser of choice.
  10. Walk the user through changing their passwords on everything set in steps 1 and 4.
  11. Walk the user through setting up 2-factor.
  12. Take user out of 2-factor bypass group.

This second flow is much more acceptable to an admin, since setup can be done in one sitting and final setup can be done once the user shows up. However, it does involve written-down passwords. In the case of a remote user, it'll involve passwords passed through IM, SMS, or maybe even email.

That one "Account Activation" page we had at WWU? Pipe-dream.

At some point we'll hit the inflection point between "Scrappy Startup" and "Maturing Market Leader" and the overhead of onboarding new users (and offboarding the old ones) will become onerous enough that we'll spend serious engineering resources coming up with ways to streamline the process. That may mean migrating to web-apps that have SSO hooks.

You know what would make my life a lot easier now?

If more web-apps supported either OpenID or Google Login.

It's one fewer authentication domain I have to manage, and that's always good.

So why DO VPN clients use UDP?

I've wondered for a while why IPSec VPNs use UDP, but hadn't taken the time to figure out why. I recently took the time.

The major reason comes down to one very big problem: NAT traversal.

When IPSec VPNs came out originally, I remember there being many problems with the NAT gateways most houses (and hotels) had. It eventually cleared up but I didn't pay attention; it wasn't a problem that was in my area of responsibility, so I didn't do any troubleshooting for it.

There are three problems IPSec VPNs encounter with NAT gateways. One is intrinsic to NAT, the other two are specific to some implementations of NAT.

  1. IPv4 IPSec traffic uses IP Protocol 50, which is neither TCP (proto 6) nor UDP (proto 17), and protocol 50 uses no ports on the packet. Therefore, a VPN concentrator can only support a single VPN client behind a given NAT gateway. This can be a problem if four people from the same company are staying in the same hotel for a conference.
  2. IPv4 IPSec traffic uses IP Protocol 50, which is neither TCP nor UDP. Some NAT gateways drop anything that isn't TCP or UDP, which is a problem for IPSec VPNs.
  3. NAT gateways rewrite certain headers and play games with packet checksums, which IPSec doesn't like. So if IPSec is going to tunnel via TCP or UDP, there will be issues.

These are some of the reasons SSL VPNs became popular.

This is where RFC 3715 comes in. It's titled "IPsec-Network Address Translation (NAT) Compatibility Requirements," oddly enough. It turns out that packet checksums are not required for IPv4 UDP packets, which makes UDP a natural choice for tunneling an IPSec VPN through a stupid NAT gateway. The VPN concentrator pulls the IPSec packet out of the UDP packet, and thanks to the cryptographic nature of IPSec it already has ways to detect packet corruption and will handle that (and any required retransmits) at the IPSec layer.
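The encapsulation itself is nothing fancy: prepend an 8-byte UDP header (NAT traversal uses port 4500, per RFC 3948) with the checksum field zeroed, which IPv4 explicitly allows. A sketch of just the header math, not a real ESP implementation:

```python
import struct

NAT_T_PORT = 4500  # UDP port for IPsec NAT traversal (RFC 3948)

def udp_encapsulate(esp_packet: bytes,
                    src_port: int = NAT_T_PORT,
                    dst_port: int = NAT_T_PORT) -> bytes:
    """Wrap an ESP packet in a UDP header with the checksum zeroed.

    A zero UDP checksum is legal in IPv4 (RFC 768), and NAT-T receivers
    ignore it anyway: NAT gateways rewrite addresses, which would break
    the checksum, and ESP carries its own integrity check.
    """
    length = 8 + len(esp_packet)              # UDP header is 8 bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + esp_packet

packet = udp_encapsulate(b"\x00" * 32)        # dummy ESP payload
print(struct.unpack("!HHHH", packet[:8]))     # (4500, 4500, 40, 0)
```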

The cloud will happen

Like many olde tyme sysadmins, I look at 'cloud' and shake my head. It's just virtualization the way we've always been doing it, but with yet another abstraction layer on top to automate deploying certain kinds of instances really fast.

However... it's still new to a lot of entities. The concept of an outsourced virtualization plant is very new. For entities that use compliance audits for certain kinds of vendors it is most definitely causing something of a quandary. How much data-assurance do you mandate for such suppliers? What kind of 3rd party audits do you mandate they pass? Lots of questions.

Over on 3 Geeks and a Law Blog, they recently covered this dynamic in a post titled The Inevitable Cloud as it relates to the legal field. In many ways, the Law field has information-handling requirements similar to the Health-Care field, though we don't have HIPAA. We handle highly sensitive information, and who had access to what, when, and what they did with it can be extremely relevant details (see: spoliation). Because of this, certain firms are very reluctant to go for cloud solutions.

Some of their concerns:

  • Who at the outsourcer has access to the data?
  • What controls exist to document what such people did with the data?
  • What guarantees are in place to ensure that any modification is both detectable and auditable?

For an entity like Amazon AWS (a.k.a. Faceless Megacorp), the first may not be answerable without lots of NDAs being signed. The second Amazon may not answer at all unless the contract is really big. The third? How about this nice third-party audit report we have...

The pet disaster for such compliance officers is a user with elevated access deciding to get curious and exploiting a maintenance-only access method to directly access data files or network streams. The ability of an entity to respond to such fears to their satisfaction means it can win some big contracts.

However, the costs of such systems are rather high; and as the 3 Geeks point out, not all revenue is profit-making. Firms that insist on end-to-end transport-mode IPSec and universally encrypted local storage all with end-user-only key storage are going to find fewer and fewer entities willing to play ball. A compromise will be made.

However, at the other end of the spectrum you have the 3-person law offices of the world, and there are a lot more of them out there. These are offices that don't have enough people to bother with a Compliance Officer. They may very well be using Dropbox to share files with each other (though possibly TrueCrypted), and are practically guaranteed to be using outsourced email of some kind. These are the firms that are going into the cloud first, pretty much by default. The rest of the market will follow along, though at a remove of some years.

Exciting times.

Microsoft is releasing an out-of-band patch to invalidate two of their Intermediate Certificate Authorities.

In essence, the Flame malware appears to have code signed by a valid MS certificate authority. As this particular malware is suspected to have been written by a "state actor" (a.k.a. cyber-warfare unit for a government), chances are good that this CA is not circulating in the general unseemly underbelly of the Internet. However, it does present a compromise of those certificates, so Microsoft is issuing revocation certificates for them.

The core problem here is that this CA is trusted by all Windows installs by default, and could be used to sign patches or other software. This has obvious problems in the form of valid-seeming software installs, but less obvious ones in that somewhere a state-actor has the ability to perform man-in-the-middle attacks for SSL using valid certificates.

The PKI system was designed around the idea that certificate authorities would occasionally get compromised and does contain mechanisms for handling that. However, those mechanisms are not frequently exercised so the process of issuing such a revocation is, shall we say, not as smooth as it really should be.

You really should apply this patch as soon as it arrives.

The cost of insecurity

This rant is a familiar one to a lot of desktop-facing IT professionals.

Today the person who handles our bank stuff came to me with a problem. The check-scanner wasn't working. I poked around but couldn't make it work, and advised her to talk to the bank since they provided the scanner, the scanner driver, the IE-plugins it worked with, and Windows didn't recognize it as a scanner.

So she did.

Their advice?

You need to turn off Windows Update. Every time it runs it changes things in IE, and you have to go through and do a bunch of things to make it work.

I gave her the look. Then I remembered myself and redirected the look to a harmless corner until I could speak again. Even getting a piece of that baleful stare caused her to cringe. Oops.

Like all right-thinking Windows admins, I'm a believer in leaving Windows Update turned on if I'm not doing something else to manage that particular risk. Our fleet of Windows machines is not big enough for me to bother with WSUS (we have a LOT of Mac users), and we most definitely do not do anything like blocking browsing at the border. So, I want those updates, thanks.

So I pulled a laptop out of the dead pile. That laptop will now be a dedicated machine for talking to the bank and nothing else. All because our freaking bank can't run securable software. Makes you question your trust in them, it does.

Judicial rubber-hoses

The other day a Colorado court ordered a defendant to produce the unencrypted contents of their own laptop. This is what I've called "rubber-hose cryptography," and previously we've heard of efforts in the UK to compel decryption. It has now happened here, and not at the US border. Unlike the UK, the decryption demand in Colorado is not based on a law that specifically says courts can demand this.

Wired article

The counter-argument is quite clearly the 5th amendment right guaranteeing the ability to not self-incriminate. If that decryption key only exists in your head, and disclosing it would incriminate you, then you don't have to yield the key.

This judge disagreed. I'm not a lawyer, so I can't tell what legal hairs were split to come to this decision. But the fact remains that this judgment stands. The only concession he appears to have made for the defendant is to preclude the prosecution from using the act of disclosure as a 'confession', but the data yielded by the disclosure is still admissible.
