Recently in security Category

Digital Doorknobs

Doorknobs are entering the Internet of (unsecured) Things.

However, they've been there for quite some time already. As anyone who has been in a modern hotel any time in the last 30 years knows, metal keys are very much a thing of the past. The hotel industry made this move for a lot of reasons, a big one being that a plastic card is a lot easier to replace than an actual key.

They've also been there for office-access for probably longer, as anyone who has ever had to wave their butt or purse at a scan-pad beside a door knows. Modern versions are beginning to get smartphone hookups, allowing an expensive (but employee-owned) smartphone with an app on it and Bluetooth enabled to replace that cheap company-owned prox-pass.

They're now moving into residences, and I'm not a fan of this trend. Most of my objection comes from being in Operations for as long as I have. The convenience argument for internet-enabling your doorknob is easy to make:

  • Need emergency maintenance when you're on vacation? Allow the maintenance crew in from your phone!
  • Assign digital keys to family members you can revoke when they piss you off!
  • Kid get their phone stolen? Revoke the stolen key and don't bother with a locksmith to change the locks!
  • Want the door to unlock just by walking up to it? Enable Bluetooth on your phone, and the door will unlock itself when you get close!

This is why these systems are selling.

Security

I'm actually mostly OK with the security model on these things. The internals I've looked at involve PKI and client-certificates. When a device like a phone gets a key, that signed client-cert is allowed to access a thingy. If that phone gets stolen, revoke the cert at the CA and the entire thing is toast. The conversation between device and mothership is done over a TLS connection using client-certificate authentication, which is actually more secure than most banks' website logins.
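That mutual-TLS model is straightforward to express in code. Here's a sketch of the client (device) side using Python's ssl module; the certificate paths are placeholders, and this illustrates the model rather than any vendor's actual implementation:

```python
# Sketch of a mutually-authenticated TLS client: the device presents its
# signed client cert, and trusts only the vendor's CA for the mothership.
# All paths are hypothetical placeholders.
import ssl

def make_client_context(cert_path=None, key_path=None, ca_path=None):
    """Build a TLS context for client-certificate authentication."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED   # refuse unverified motherships
    ctx.check_hostname = True
    if ca_path:
        ctx.load_verify_locations(cafile=ca_path)   # pin the vendor CA
    if cert_path:
        # Present our CA-signed client cert; revoking it at the CA
        # is what makes a stolen phone's key "toast".
        ctx.load_cert_chain(certfile=cert_path, keyfile=key_path)
    return ctx
```

Revocation then happens entirely server-side: the mothership checks the presented cert against its CRL and simply refuses the handshake.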

The handshake over Bluetooth is similarly cryptoed, making it less vulnerable to replay attacks.

Where we run into problems is the intersection of life-safety and the flaky nature of most residential internet connections. These things need to be able to let people in the door even when CenturyLink is doing that thing it does. If you err on the side of getting in the door, you end up caching valid certs on the lock-devices themselves, opening them up to offline attacks if you can jam their ability to phone home. If you err on the side of security, an internet outage is a denial-of-access attack.
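That tradeoff can be sketched as lock-side decision logic. Everything here is hypothetical, not any vendor's firmware: the lock caches its lists of valid and revoked certs, and fails open only within a bounded staleness window.

```python
# Hypothetical lock-side decision logic: cache credentials so an internet
# outage isn't a denial-of-access attack, but bound how stale a cached
# answer may be so jamming the phone-home link has limited value.
from datetime import datetime, timedelta

MAX_CACHE_AGE = timedelta(hours=24)  # illustrative staleness window

def should_unlock(cert_fingerprint, cached_valid, cached_revoked,
                  last_sync, now):
    if cert_fingerprint in cached_revoked:
        return False                 # revocation always wins
    if cert_fingerprint not in cached_valid:
        return False                 # never seen this key
    # Fail open only while the cache is fresh; beyond the window, fail
    # secure and insist on reaching the mothership.
    return (now - last_sync) <= MAX_CACHE_AGE
```

Picking MAX_CACHE_AGE is exactly the life-safety-versus-security knob described above; there's no value that makes both problems go away.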

The Real Objection

It comes down to the differences in the hardware and software replacement cycles, as well as certain rare but significant events like a change of ownership. The unpowered deadbolt in your front door could be 20 years old. It may be vulnerable to things like bump-keys, but you can give the pointy bits of metal (keys) to the next residents on your way to your new place and never have to worry about it. The replacement cycle on the whole deadbolt is probably the same as the replacement cycle of the owners, which is to say 'many years'. The pin settings inside the deadbolt may get changed more often, but the whole thing doesn't get changed much at all.

Contrast this with the modern software ecosystem, where if your security product hasn't had an update in 6 months it's considered horribly out of date. At the same time, due to the iterative nature of most SaaS providers and the APIs they maintain, an API version may get 5 years of support before getting shut down. Build a hardware fleet based on that API, and you have a hardware fleet that ages at the rate of software. Suddenly, that deadbolt needs a complete replacement every 5 years, and costs about 4x what the unpowered one did.

Most folks aren't used to that. In fact, they'll complain about it. A lot.

There is another argument to make about embedded systems (that smart deadbolt) and their ability to keep up with ever more computationally expensive cryptography. Not to mention changing radio specs like Bluetooth and WiFi that will render old doorknobs unable to speak to the newest iPhone. Which is to say, definitely expect Google and Apple to put out doorknobs in the not too distant future. Amazon is already trying.

All of this makes doorknob makers salivate, since it means more doorknobs will be sold per year. Also the analytics over how people use their doors? Priceless. Capitalism!

It also means that doorknob operators, like homeowners, are going to be in for a lot more maintenance work to keep them running. Work that simply wasn't there before. Losing a phone is a pretty clear case, but what happens when you sell your house?

You can't exactly 'turn over the keys' if they're 100% digital and locked into your Google or Apple identities. Doorknob makers are going to have to have voluntary ownership-transfer protocols.

Involuntary transfer protocols are going to be a big thing. If the old owners didn't transfer, you could be locked out of the house. That could mean a locksmith breaking in to your own house and replacing every deadbolt in the place with brand-new ones. Or it could mean arguing with Google over who owns your home and how to prove it.

Doing it wrong has nasty side-effects. If you've pissed off the wrong people on the internet, you could have griefers coming after your doorknob provider, and you could find yourself completely locked out of your house. The more paranoid will have to get Enterprise contracts and manage their doorknobs themselves so they have full control over the authentication and auth-bypass routes.

Personally, I don't like that added risk-exposure. I don't want my front door to be socially engineered out of my control. I'll be sticking with direct-interaction, token-based authentication methods (pointy bits of metal) instead of digitally mediated token-auth methods.

Internet of Patches

As a sysadmin, I've been saying fuckno to things like Smart TVs and fridges. I do that game professionally, and I know what it takes to keep a fleet of software up to date. It ain't easy. Keeping firmware updated in things like... non-Nest internet attached thermostats (yes, they exist), the PC embedded in the fridge, the hub that runs your smart lighting, the firmware in your BluRay player, internet-attached talking dog toys... It's hard. And it only takes one for Evil People to get inside your crunchy exterior and chow down on everything else.

You can probably trust a company like Schlage to treat their software like a security-critical component of a network. You probably can't say the same about the internet-attached talking dog toy, even though they're likely on the same subnet. The same subnet as all of your iPads, MacBooks, and phones. Segmenting the network separates the evil coming in on the, shall we say, vendor-supported side from the more routine evils faced by general web-browsing.

Not that segmenting is easy to do, unfortunately.
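The zone-assignment half of segmentation is at least easy to reason about. Here's a sketch using the stdlib ipaddress module; the subnet plan is invented for illustration, not a standard:

```python
# Classify devices into network zones by address. The subnets here are
# an illustrative plan, not anyone's actual home network layout.
import ipaddress

ZONES = {
    "trusted": ipaddress.ip_network("192.168.1.0/24"),   # laptops, phones
    "iot":     ipaddress.ip_network("192.168.20.0/24"),  # locks, dog toys
}

def zone_for(addr):
    """Return the zone name for an address, or 'unknown' if unzoned."""
    ip = ipaddress.ip_address(addr)
    for name, net in ZONES.items():
        if ip in net:
            return name
    return "unknown"
```

The hard part, of course, is the firewall rules between the zones and getting consumer gear to respect them, which is why most homes remain one flat subnet.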

Security profiling: TSA

Being of a gender-nonconforming nature has revealed certain TSA truths to me.

Yes, they do profile.

It's a white-list, unlike the police profiling that gets people into trouble. There is a 'generic safe-traveler' that they compare everyone to. If you conform, you get the minimum screening everyone gets. If you don't conform, you get some extra attention. Some ways to earn extra attention:

  • Don't look like your government ID.
  • Wear your hair up, or in braids (they've seen those kung-fu movies too)
    • Yes, they put their gloved hands in your hair and feel around. Anyone with dreads knows this all too damn well.
  • Fly with a name other than the one on your government issued ID.
  • Have body-parts replaced with things, such as a prosthetic leg, or knee (if going through metal detectors).
  • Have junk when there shouldn't be junk (or so they think).
  • Have breasts when there shouldn't be breasts (or so they think).
  • Have breast prostheses instead of actual breasts (mastectomy patients love this).
  • And many more.

Here is an exercise you can try the next time you fly in the US. When you get to the other side of the scanner (this only works for the porno-scanners, not the metal-detectors), while you are waiting for your stuff to come out of the X-ray machine, look at the back of the scanner. Watch the procedure. Maybe put your shoes on slow to catch it all. You'll notice something I've noticed:

There are always two officers back there, a man and a woman. When someone steps in to get scanned, they have to either hit a button to indicate the gender of the person being scanned, or are presented with a side-by-side with both genders and the officer has to choose which to look at. They have a second, maybe two, to figure out which body baseline to apply to you, and those of us who are genderqueer confuse things. I fail the too-much-junk test all the time and get an enhanced patdown in my inner thighs.

Yes, but with PreCheck you can skip that.

This actually proves my point. By voluntarily submitting to enhanced screening, I can bypass the flight-day screen annoyances. It's admitting that I no longer fit the profile of 'generic safe traveler' and need to achieve 'specific safe traveler' status. That, or I can have my bits rearranged and conform that way. Whichever.

Encryption is hard


I've run into this workflow problem before, but it happened again so I'm sharing.


We have a standard.

No passwords in plain-text. If passwords need to be emailed, the email will be encrypted with S/MIME.

Awesome. I have certificates, and so do my coworkers. Should be awesome!

To: coworker
From: me
Subject: Anti-spam appliance password

[The content can't be displayed because the S/MIME control isn't available]

Standard followed, mischief managed.

To: me
From: coworker
Subject: RE: Anti-spam appliance password
Thanks! Worked great.

To: coworker
From: me
uid: admin1792
pw: 92*$&diq38yljq3
https://172.2.245.11/login.cgi

Sigh.

Encryption is hard. It would be awesome if a certain mail-client defaulted to replying-in-kind to encrypted emails. But it doesn't, and users have to remember to click the button. Which they never do.
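Reply-in-kind isn't hard in principle: the client just has to look at what it's replying to. Here's a minimal sketch using the stdlib email parser; the decision policy is illustrative, not any real mail client's behavior:

```python
# Detect whether a message being replied to was S/MIME protected, so a
# client could default the reply to encrypted. Illustrative policy only.
from email import message_from_string

# MIME types used for S/MIME signed/enveloped payloads.
SMIME_TYPES = {"application/pkcs7-mime", "application/x-pkcs7-mime"}

def reply_should_be_encrypted(raw_message):
    """True if the original message's body is an S/MIME blob."""
    msg = message_from_string(raw_message)
    return msg.get_content_type() in SMIME_TYPES
```

Had the hypothetical client above gated its compose window on this check, the plaintext follow-up in the transcript would never have gone out.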

The number one piece of password advice is:

Only memorize a single complex password, use a password manager for everything else.

Gone are the days when you could plan on memorizing complex strings of characters using shift keys, letter substitution, and all of that. The threats surrounding passwords, and the sheer number of things that require them, mean that human fragility is security's greatest enemy. The use of prosthetic memory is now required.

  • It could be a notebook you keep with you everywhere you go.
  • It could be a text file on a USB stick you carry around.
  • It could be a text file you keep in Dropbox and reference on all of your devices.
  • It could be an actual password manager like 1Password or LastPass that installs in all of your browsers.
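Whichever prosthetic memory you pick, the passwords that go into it should be machine-generated rather than memorable. A minimal sketch using Python's secrets module:

```python
# Generate high-entropy passwords for a password manager to remember.
# The length and alphabet here are illustrative defaults, not a standard.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

The point of the exercise: once nothing needs to be memorable, every account can have a unique password, and one breached site no longer unlocks the rest.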

There are certain accounts that act as keys to other accounts. The first accounts you need to protect like Fort Knox are the email accounts that receive activation messages for everything else you use, since that vector can be used to gain access to those other accounts through the 'Forgotten Password' links.


The second set of accounts you need to protect like Fort Knox are the identity services used by other sites so they don't have to bother with user account management: all those "Log in with Twitter/Facebook/Google/Yahoo/Wordpress" buttons you see everywhere.


The problem with prosthetic memory is that, to beat out memorization, it needs to be everywhere you ever need to log into anything. Your laptop, phone, and tablet can all use the same manager, but that breaks down when you go to a friend's house and hop on their living-room machine to log into Hulu-Plus real quick, since you have an account, they don't, but they have the awesome AV setup.

It's a hard problem. Your brain is always there; it's hard to beat that for convenience. But it's time to offload that particular bit of memorization to something else: your digital life and reputation depend on it.

Fingerprinting your way to security

The NSA Raccoon is mostly right on this one, for two reasons.

Reason 1: You (probably) don't own your phone

If you haven't rooted your phone, you don't really own it. While Apple says the fingerprint images are not uploaded to the borg collective, it's just one network hop away should the collective decide that it really needs them. A phone is a data-network connected device, after all, and the wireless carrier's customizations make it a lot easier to get local access to the device if they want to.

Reason 2: Rooting won't help much either

Among the many disclosures since Snowden started his leaks is that law enforcement has a dirty-tricks bag deep enough to get into practically anything once they decide they need in. If it's got a data connection, and you haven't taken specific steps to keep others out, they'll still get in.


That said, if there is one thing we've learned from all of this, it's that there are massive databases at work here, and databases are only as good as the collected data and its indexes. Since fingerprint images aren't uploaded to the borg collective, they're not going to go into the secret national identity databases. Collecting that information will still take active effort.


Fingerprint readers are nothing new.


My work laptop has one, and they've been on business laptops for years. What's new is that they're now on a data-connected device that its habitual users don't own or manage, and one that's tightly integrated into the OS (unlike my laptop, which runs Linux; the reader has never worked for me).


What I'd like to know is where was the hue and cry over Android's face-lock ability?

That's another extremely useful biometric, and one that's arguably even more damaging than fingerprints: facial recognition, in all of its crappy implementations, has been used by law enforcement for a decade now. A good face-capture allows facial recognition to work against random video feeds of public places to track individuals, something the ever-expanding CCTV network enables.

Fingerprint readers tied to the borg collective will tie individuals to specific high value locations where fingerprint collection is deemed worth it.

Face-capture readers tied to the borg collective will tie individuals to low value public locations.

One of these is more likely to be deemed a serious invasion of privacy than the other, and it isn't the technology Apple just added.

This is a goal that many sysadmins aspire to. On a new person's first day on the job, they have a computing asset upon which they can work, and all of their accounts are accessible and configured so they can do their work. No friction start. Awesome.

How this worked at WWU when I was there:

  1. HR flagged a new employee record as Active.
  2. Nightly batch process notices the state-change for this user and kicks off a Create Accounts event.
  3. CreateAccounts kicks off system-specific account-create events, setting up group memberships based on account type (Student/Faculty/Staff).
    1. Active Directory
    2. Blackboard
    3. Banner
    4. If student: Live@EDU.
    5. Others.
  4. User shows up. Supervisor takes them to the Account Activation page.
  5. User Activates their account, setting a password and security questions.
  6. Automation propagates the password into all connected systems.
  7. User starts their day.

It worked. There was that activation step, but it was just the one, and once that was done they were all in.

This worked because the systems we were trying to connect all had either explicit single-sign-on support, or were sufficiently scriptable that we could write our own.
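That nightly batch boils down to: notice the HR state change, then fan out system-specific account-create events. Here's a toy sketch of that shape; the record fields and handler names are hypothetical, not the actual Banner integration:

```python
# Toy sketch of the nightly provisioning batch described above.
# Record fields and handler names are invented for illustration.

def provision(record, handlers):
    """Run every system-specific account-create handler for one person."""
    created = []
    for system, create in handlers.items():
        if system == "live_edu" and record["type"] != "Student":
            continue                      # Live@EDU was student-only
        create(record)
        created.append(system)
    return created

def nightly_batch(hr_records, known_active, handlers):
    """Provision everyone HR flagged Active since the last run."""
    results = {}
    for rec in hr_records:
        if rec["status"] == "Active" and rec["id"] not in known_active:
            results[rec["id"]] = provision(rec, handlers)
            known_active.add(rec["id"])   # remember: don't re-provision
    return results
```

The known_active set is what makes the batch idempotent: re-running it against the same HR extract creates nothing twice.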

I'm now working for a scrappy startup that is very fond of web-applications, since it means we don't have to maintain the infrastructure ourselves. The above workflow... doesn't exist. It's not even possible. Very roughly, and with the details filed off:

  1. Invite user to join Google Apps domain.
  2. Wait for user to accept invite and log in.
  3. Send invites from 3 separate web-applications that we use daily.
  4. Wait for user to accept all the invites, create accounts, and suchlike.
  5. Add the new accounts to app-internal groups.
  6. Send invites from the 2 web-applications with short "email verification" windows when the user is in the office and can get them.
  7. Add the new accounts to other app-internal groups.

The side-effect of all of this is that the user has an account before their official start-date, they don't get all of their accounts until well after 8am, and admin users have to do things by hand. Of those 5 web-apps, only 2 of them have anything even remotely looking like an SSO hook.

There is an alternate workflow here, but it has its own problems. That workflow:

  1. Hard-create the new user in Google Apps and put them in the 2-factor authentication bypass group. Write down the password assigned to the user.
  2. Login as that user to Google Apps.
  3. Invite the user to the 5 web-applications.
  4. Accept the invites and create users.
  5. Add the new account to whatever groups inside those web-apps.
  6. New user shows up.
  7. Give user the username and password set in step 1.
  8. Give user the username and password for everything created in step 4.
  9. Walk the user through installing a Password Manager in their browser of choice.
  10. Walk the user through changing their passwords on everything set in steps 1 and 4.
  11. Walk the user through setting up 2-factor.
  12. Take user out of 2-factor bypass group.

This second flow is much more acceptable to an admin, since setup can be done in one sitting and final setup can be done once the user shows up. However, it does involve written-down passwords. In the case of a remote user, it'll involve passwords passed through IM, SMS, or maybe even email.

That one "Account Activation" page we had at WWU? Pipe-dream.

At some point we'll hit the inflection point between "Scrappy Startup" and "Maturing Market Leader" and the overhead of onboarding new users (and offboarding the old ones) will become onerous enough that we'll spend serious engineering resources coming up with ways to streamline the process. That may mean migrating to web-apps that have SSO hooks.

You know what would make my life a lot easier now?

If more web-apps supported either OpenID or Google Login.

It's one fewer authentication domain I have to manage, and that's always good.

So why DO VPN clients use UDP?

I've wondered for a while why IPSec VPN clients tunnel over UDP, but hadn't taken the time to figure out why that is. I recently took the time.

The major reason comes down to one very big problem: NAT traversal.

When IPSec VPNs came out originally, I remember there being many problems with the NAT gateways most houses (and hotels) had. It eventually cleared up but I didn't pay attention; it wasn't a problem that was in my area of responsibility, so I didn't do any troubleshooting for it.

There are three problems IPSec VPNs encounter with NAT gateways. One is intrinsic to NAT, the other two are specific to some implementations of NAT.

  1. IPv4 IPSec traffic uses IP Protocol 50 (ESP), which is neither TCP (proto 6) nor UDP (proto 17), and protocol 50 uses no ports on the packet. Therefore, a VPN concentrator can only support a single VPN client behind a specific NAT gateway. This can be a problem if four people from the same company are staying in the same hotel for a conference.
  2. IPv4 IPSec traffic uses IP Protocol 50, which is neither TCP nor UDP. Some NAT gateways drop anything that isn't TCP or UDP, which will be a problem for IPSec VPNs.
  3. NAT gateways rewrite certain headers and play games with packet checksums, which IPSec doesn't like. So if IPSec is going to tunnel via TCP or UDP, there will be issues.

These are some of the reasons SSL VPNs became popular.

This is where RFC 3715 comes in. It's titled, "IPsec-Network Address Translation (NAT) Compatibility Requirements" oddly enough. It turns out that packet checksums are optional for IPv4 UDP packets, which makes them a natural choice to tunnel an IPSec VPN through a stupid NAT gateway. The VPN concentrator pulls the IPSec packet out of the UDP packet, and thanks to the cryptographic nature of IPSec it already has ways to detect packet corruption and will handle that (and any required retransmits) at the IPSec layer.
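The zero-checksum trick is visible right in the packet format: RFC 768 makes the IPv4 UDP checksum optional, with a transmitted value of zero meaning "no checksum computed." Here's a sketch of packing such a header (4500 is the port used for IPSec NAT traversal):

```python
# Pack an IPv4 UDP header with the checksum disabled. Per RFC 768, a
# zero checksum on IPv4 means "not computed", so NAT boxes that mangle
# packets can't break it. Sketch for illustration, not a full NAT-T stack.
import struct

def udp_header(src_port, dst_port, payload_len):
    """Return the 8-byte UDP header: src, dst, length, checksum."""
    length = 8 + payload_len   # UDP length covers header + payload
    checksum = 0               # 0 = checksum not computed (IPv4 only)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)
```

Note that this is an IPv4-only escape hatch; IPv6 makes the UDP checksum mandatory, so the same trick doesn't carry over.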


The cloud will happen

Like many olde tyme sysadmins, I look at 'cloud' and shake my head. It's just virtualization the way we've always been doing it, but with yet another abstraction layer on top to automate deploying certain kinds of instances really fast.

However... it's still new to a lot of entities. The concept of an outsourced virtualization plant is very new. For entities that use compliance audits for certain kinds of vendors it is most definitely causing something of a quandary. How much data-assurance do you mandate for such suppliers? What kind of 3rd party audits do you mandate they pass? Lots of questions.

Over on 3 Geeks and a Law Blog, they recently covered this dynamic in a post titled The Inevitable Cloud as it relates to the legal field. In many ways, the legal field shares information-handling requirements with the health-care field, though we don't have HIPAA. We handle highly sensitive information, and who had access to what, when, and what they did with it can be extremely relevant details (it's called spoliation). Because of this, certain firms are very reluctant to go for cloud solutions.

Some of their concerns:

  • Who at the outsourcer has access to the data?
  • What controls exist to document what such people did with the data?
  • What guarantees are in place to ensure that any modification is both detectable and auditable?

For an entity like Amazon AWS (a.k.a. Faceless Megacorp) the answer to the first may not be answerable without lots of NDAs being signed. The answers to the second may not even be given by Amazon unless the contract is really big. The answers to the third? How about this nice third-party audit report we have...

The pet disaster for such compliance officers is a user with elevated access deciding to get curious and exploiting a maintenance-only access method to directly access data files or network streams. An entity that can answer such fears to a compliance officer's satisfaction can win some big contracts.

However, the costs of such systems are rather high; and as the 3 Geeks point out, not all revenue is profit-making. Firms that insist on end-to-end transport-mode IPSec and universally encrypted local storage all with end-user-only key storage are going to find fewer and fewer entities willing to play ball. A compromise will be made.

However, at the other end of the spectrum you have the 3-person law offices of the world, and there are a lot more of them out there. These are offices that don't have enough people to bother with a Compliance Officer. They may very well be using Dropbox to share files with each other (though possibly TrueCrypted), and are practically guaranteed to be using outsourced email of some kind. These are the firms that are going into the cloud first, pretty much by default. The rest of the market will follow along, though at a remove of some years.

Exciting times.

Microsoft is releasing an out-of-band patch to invalidate two of their Intermediate Certificate Authorities.

http://technet.microsoft.com/en-us/security/advisory/2718704
http://isc.sans.edu/diary/Microsoft+Emergency+Bulletin+Unauthorized+Certificate+used+in+Flame+/13366

In essence, the Flame malware appears to have code signed by a valid MS certificate authority. As this particular malware is suspected to have been written by a "state actor" (a.k.a. cyber-warfare unit for a government), chances are good that this CA is not circulating in the general unseemly underbelly of the Internet. However, it does present a compromise of those certificates, so Microsoft is issuing revocation certificates for them.

The core problem here is that this CA is trusted by all Windows installs by default, and could be used to sign patches or other software. This has obvious problems in the form of valid-seeming software installs, but less obvious ones in that somewhere a state-actor has the ability to perform man-in-the-middle attacks for SSL using valid certificates.

The PKI system was designed around the idea that certificate authorities would occasionally get compromised and does contain mechanisms for handling that. However, those mechanisms are not frequently exercised so the process of issuing such a revocation is, shall we say, not as smooth as it really should be.

You really should apply this patch as soon as it arrives.