Recently in security Category

The number one piece of password advice is:

Only memorize a single complex password, use a password manager for everything else.

Gone is the time when you could plan on memorizing complex strings of characters built with shift keys, letter substitutions, and all of that. The threats surrounding passwords, and the sheer number of things that require them, mean that human fragility is security's greatest enemy. The use of prosthetic memory is now required.

It could be a notebook you keep with you everywhere you go.
It could be a text file on a USB stick you carry around.
It could be a text file you keep in Dropbox and reference on all of your devices.
It could be an actual password manager like 1Password or LastPass that installs in all of your browsers.
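
Whichever form it takes, the passwords going into that prosthetic memory should be machine-generated rather than invented. As a minimal sketch of what a manager does for you (not any particular product's algorithm), a few lines of Python can generate a random password per site:

    import secrets
    import string

    # What a password manager does for you: one long random password per site,
    # so the only thing left to memorize is the master password.
    ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

    def generate_password(length=20):
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    if __name__ == "__main__":
        for site in ("bank", "email", "streaming"):
            print(site, generate_password())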

There are certain accounts that act as keys to other accounts. The first accounts you need to protect like Fort Knox are the email accounts that receive activation messages for everything else you use, since that vector can be used to gain access to those other accounts through the 'Forgotten Password' links.


The second set of accounts you need to protect like Fort Knox are the identity services used by other sites so they don't have to bother with user account management; that would be all those "Log in with Twitter/Facebook/Google/Yahoo/Wordpress" buttons you see everywhere.


The problem with prosthetic memory is that, to beat out memorization, it needs to be everywhere you ever need to log into anything. Your laptop, phone, and tablet can all use the same manager, but the same isn't true when you go to a friend's house and get on their living-room machine to log into Hulu Plus real quick, since you have an account, they don't, and they have the awesome AV setup.

It's a hard problem. Your brain is always there, and it's hard to beat that for convenience. But it's time to offload that particular bit of memorization to something else; your digital life and reputation depend on it.

Fingerprinting your way to security


The NSA Raccoon is mostly right in this one:

For two reasons.

Reason 1: You (probably) don't own your phone

If you haven't rooted your phone, you don't really own it. While Apple says the fingerprint images are not uploaded to the borg collective, they're just one network hop away should the collective decide it really needs them. A phone is a data-network-connected device, after all, and the wireless carrier's customizations make it a lot easier to get local access to the device if they want to.

Reason 2: Rooting won't help much either

Among the many disclosures since Snowden started his leaks is that law enforcement has a dirty-tricks bag deep enough to get into practically anything once they decide they need in. If it's got a data connection, and you haven't taken specific steps to keep others out, they'll still get in.


That said, if there is one thing we've learned from all of this, it's that there are massive databases at work here, and databases are only as good as the collected data and its indexes. Since fingerprint images aren't uploaded to the borg collective, they're not going to end up in the secret national identity databases. They'll still need to take active effort to collect that information.


Fingerprint readers are nothing new.


My work laptop has one, and they've been on business laptops for years. What's new is that they're now on a data-connected device that its habitual users don't own or manage, and one that is tightly integrated into the OS (unlike my laptop, which runs Linux; the reader has never worked for me).


What I'd like to know is: where was the hue and cry over Android's face-unlock ability?

That's another extremely useful biometric, and one that's arguably even more damaging than fingerprints: facial recognition, in all of its crappy implementations, has been used by law enforcement for a decade now. A good face-capture allows facial recognition to work against random video feeds of public places to track individuals, something the ever-expanding CCTV network enables.

Fingerprint readers tied to the borg collective will tie individuals to specific high value locations where fingerprint collection is deemed worth it.

Face-capture readers tied to the borg collective will tie individuals to low value public locations.

One of these is more likely to be deemed a serious invasion of privacy than the other, and it isn't the technology Apple just added.

A friction-free first day is a goal that many sysadmins aspire to. On a new person's first day on the job, they have a computing asset they can work on, and all of their accounts are accessible and configured so they can do their work. A no-friction start. Awesome.

How this worked at WWU when I was there (a rough sketch of the batch step follows the list):

  1. HR flagged a new employee record as Active.
  2. Nightly batch process notices the state-change for this user and kicks off a Create Accounts event.
  3. CreateAccounts kicks off system-specific account-create events, setting up group memberships based on account type (Student/Faculty/Staff).
    1. Active Directory
    2. Blackboard
    3. Banner
    4. If student: Live@EDU.
    5. Others.
  4. User shows up. Supervisor takes them to the Account Activation page.
  5. User Activates their account, setting a password and security questions.
  6. Automation propagates the password into all connected systems.
  7. User starts their day.
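
A very rough sketch of what that nightly batch step might look like; the function and system names here are hypothetical illustrations, not WWU's actual code:

    # Hypothetical sketch of the nightly provisioning batch: find HR records
    # that flipped to Active since the last run, then fan out account-create
    # calls to each connected system.
    def groups_for(account_type):
        """Base group memberships by account type (Student/Faculty/Staff)."""
        return {
            "student": ["students", "labs"],
            "faculty": ["faculty", "instructors"],
            "staff": ["staff"],
        }[account_type]

    def nightly_create_accounts(newly_active, connectors):
        """newly_active: list of dicts with 'name' and 'account_type'.
        connectors: dict of system-name -> callable(person, groups)."""
        for person in newly_active:
            groups = groups_for(person["account_type"])
            for system, create_account in connectors.items():
                # Live@EDU accounts were only created for students.
                if system == "live_edu" and person["account_type"] != "student":
                    continue
                create_account(person, groups)

    if __name__ == "__main__":
        def fake_connector(system):
            return lambda person, groups: print(
                f"{system}: created {person['name']} in {groups}")

        connectors = {s: fake_connector(s)
                      for s in ("active_directory", "blackboard", "banner", "live_edu")}
        nightly_create_accounts(
            [{"name": "jdoe", "account_type": "student"},
             {"name": "asmith", "account_type": "staff"}],
            connectors)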

It worked. There was that activation step, but it was just the one, and once that was done they were all in.

This worked because the systems we were trying to connect all had either explicit single-sign-on support, or were sufficiently scriptable that we could write our own.

I'm now working for a scrappy startup that is very fond of web-applications, since it means we don't have to maintain the infrastructure ourselves. The above workflow... doesn't exist. It's not even possible. Very roughly, and with the details filed off:

  1. Invite user to join Google Apps domain.
  2. Wait for user to accept invite and log in.
  3. Send invites from 3 separate web-applications that we use daily.
  4. Wait for user to accept all the invites, create accounts, and suchlike.
  5. Add the new accounts to app-internal groups.
  6. Send invites from the 2 web-applications with short "email verification" windows when the user is in the office and can get them.
  7. Add the new accounts to other app-internal groups.

The side-effect of all of this is that the user has an account before their official start date, they don't get all of their accounts until well after 8am on day one, and admin users have to do things by hand. Of those 5 web-apps, only 2 have anything even remotely resembling an SSO hook.

There is an alternate workflow here, but it has its own problems. That workflow:

  1. Hard-create the new user in Google Apps and put them in the 2-factor authentication bypass group. Write down the password assigned to the user.
  2. Log in to Google Apps as that user.
  3. Invite the user to the 5 web-applications.
  4. Accept the invites and create users.
  5. Add the new account to whatever groups inside those web-apps.
  6. New user shows up.
  7. Give user the username and password set in step 1.
  8. Give user the username and password for everything created in step 4.
  9. Walk the user through installing a Password Manager in their browser of choice.
  10. Walk the user through changing their passwords on everything set in steps 1 and 4.
  11. Walk the user through setting up 2-factor.
  12. Take user out of 2-factor bypass group.

This second flow is much more acceptable to an admin, since most of the setup can be done in one sitting and the final steps once the user shows up. However, it does involve written-down passwords. In the case of a remote user, it'll involve passwords passed through IM, SMS, or maybe even email.

That one "Account Activation" page we had at WWU? Pipe-dream.

At some point we'll hit the inflection point between "Scrappy Startup" and "Maturing Market Leader" and the overhead of onboarding new users (and offboarding the old ones) will become onerous enough that we'll spend serious engineering resources coming up with ways to streamline the process. That may mean migrating to web-apps that have SSO hooks.

You know what would make my life a lot easier now?

If more web-apps supported either OpenID or Google Login.

It's one fewer authentication domain I have to manage, and that's always good.

So why DO VPN clients use UDP?

I've wondered for a while why IPSec VPNs do that, but hadn't taken the time to figure out why. I recently took the time.

The major reason comes down to one very big problem: NAT traversal.

When IPSec VPNs came out originally, I remember there being many problems with the NAT gateways most houses (and hotels) had. It eventually cleared up, but I didn't pay attention; it wasn't a problem in my area of responsibility, so I didn't do any troubleshooting for it.

There are three problems IPSec VPNs encounter with NAT gateways. One is intrinsic to NAT; the other two are specific to some implementations of NAT.

  1. IPv4 IPSec traffic uses IP Protocol 50, which is neither TCP (proto 6) nor UDP (proto 17), and protocol 50 packets carry no port numbers. Since NAT gateways rely on ports to tell flows apart, a VPN concentrator can only support a single VPN client behind a specific NAT gateway. This can be a problem if four people from the same company are staying in the same hotel for a conference.
  2. IPv4 IPSec traffic uses IP Protocol 50, which is neither TCP nor UDP. Some NAT gateways simply drop anything that isn't TCP or UDP, which is a problem for IPSec VPNs.
  3. NAT gateways rewrite certain headers and play games with packet checksums, which IPSec doesn't like. So if IPSec is going to tunnel via TCP or UDP, there will be issues.

These are some of the reasons SSL VPNs became popular.

This is where RFC 3715 comes in. It's titled "IPsec-Network Address Translation (NAT) Compatibility Requirements", oddly enough. It turns out that checksums are optional for IPv4 UDP packets, which makes UDP a natural choice for tunneling an IPSec VPN through a stupid NAT gateway. The VPN concentrator pulls the IPSec packet out of the UDP packet, and thanks to the cryptographic nature of IPSec it already has ways to detect packet corruption; corrupted packets are simply dropped, and any required retransmits are handled by whatever is tunneled inside.
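
To make the encapsulation concrete, here's a minimal sketch (illustrative only, not any vendor's implementation) of what UDP-encapsulated ESP looks like: the ESP packet that would normally ride directly on IP as protocol 50 gets prefixed with an ordinary UDP header on port 4500, the usual NAT-T convention, so the NAT gateway can track it like any other UDP flow.

    # Sketch: wrap an ESP payload in a UDP header on port 4500, so a NAT
    # gateway sees ordinary UDP instead of IP protocol 50.
    import struct

    def udp_encapsulate_esp(spi, seq, encrypted_payload, src_port=4500, dst_port=4500):
        """Return bytes: UDP header + ESP header (SPI, sequence) + payload.
        The UDP checksum is left at zero, which IPv4 permits."""
        esp = struct.pack("!II", spi, seq) + encrypted_payload
        udp_length = 8 + len(esp)              # UDP header is 8 bytes
        udp_header = struct.pack("!HHHH", src_port, dst_port, udp_length, 0)
        return udp_header + esp

    if __name__ == "__main__":
        packet = udp_encapsulate_esp(spi=0x1000, seq=1, encrypted_payload=b"\x00" * 32)
        print(len(packet), "bytes:", packet[:16].hex())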


The cloud will happen

Like many olde tyme sysadmins, I look at 'cloud' and shake my head. It's just virtualization the way we've always been doing it, but with yet another abstraction layer on top to automate deploying certain kinds of instances really fast.

However... it's still new to a lot of entities. The concept of an outsourced virtualization plant is very new. For entities that use compliance audits for certain kinds of vendors it is most definitely causing something of a quandary. How much data-assurance do you mandate for such suppliers? What kind of 3rd party audits do you mandate they pass? Lots of questions.

Over on 3 Geeks and a Law Blog, they recently covered this dynamic in a post titled The Inevitable Cloud as it relates to the legal field. In many ways, the Law field has information-handling requirements similar to those of the Health-Care field, though we don't have HIPAA. We handle highly sensitive information, and who had access to what, when, and what they did with it can be extremely relevant details (it's called spoliation). Because of this, certain firms are very reluctant to go for cloud solutions.

Some of their concerns:

  • Who at the outsourcer has access to the data?
  • What controls exist to document what such people did with the data?
  • What guarantees are in place to ensure that any modification is both detectable and auditable?

For an entity like Amazon AWS (a.k.a. Faceless Megacorp), the answer to the first may not be available without lots of NDAs being signed. The answers to the second may not even be given by Amazon unless the contract is really big. The answers to the third? How about this nice third-party audit report we have...

The pet disaster for such compliance officers is a user with elevated access deciding to get curious and exploiting a maintenance-only access method to directly access data files or network streams. The ability of a provider to respond to such fears satisfactorily means they can win some big contracts.

However, the costs of such systems are rather high, and as the 3 Geeks point out, not all revenue is profit-making. Firms that insist on end-to-end transport-mode IPSec and universally encrypted local storage, all with end-user-only key storage, are going to find fewer and fewer entities willing to play ball. A compromise will be made.




However, at the other end of the spectrum you have the 3-person law offices of the world, and there are a lot more of them out there. These are offices that don't have enough people to bother with a Compliance Officer. They may very well be using Dropbox to share files with each other (though possibly TrueCrypted), and are practically guaranteed to be using outsourced email of some kind. These are the firms going into the cloud first, pretty much by default. The rest of the market will follow along, though at a remove of some years.

Exciting times.

Microsoft is releasing an out-of-band patch to invalidate two of their Intermediate Certificate Authorities.

http://technet.microsoft.com/en-us/security/advisory/2718704
http://isc.sans.edu/diary/Microsoft+Emergency+Bulletin+Unauthorized+Certificate+used+in+Flame+/13366

In essence, the Flame malware appears to have code signed by a valid MS certificate authority. As this particular malware is suspected to have been written by a "state actor" (a.k.a. a cyber-warfare unit of a government), chances are good that this CA is not circulating in the general unseemly underbelly of the Internet. However, it does represent a compromise of those certificates, so Microsoft is issuing revocations for them.

The core problem here is that this CA is trusted by all Windows installs by default, and could be used to sign patches or other software. This has obvious problems in the form of valid-seeming software installs, but less obvious ones in that somewhere a state-actor has the ability to perform man-in-the-middle attacks for SSL using valid certificates.

The PKI system was designed around the idea that certificate authorities would occasionally get compromised, and it does contain mechanisms for handling that. However, those mechanisms are not frequently exercised, so the process of issuing such a revocation is, shall we say, not as smooth as it really should be.
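
The mechanism itself is mundane: the CA publishes a signed list of revoked serial numbers (a CRL), and clients check certificates against it. Here's a hedged sketch using Python's cryptography package, purely to illustrate the idea; the Windows fix itself works by pushing the bad intermediates into the Untrusted certificate store rather than anything like this:

    from cryptography import x509

    def is_revoked(cert_pem: bytes, crl_pem: bytes) -> bool:
        """True if the certificate's serial number appears in the CRL.
        A real check would also verify the CRL's signature and issuer."""
        cert = x509.load_pem_x509_certificate(cert_pem)
        crl = x509.load_pem_x509_crl(crl_pem)
        return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None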

You really should apply this patch as soon as it arrives.

The cost of insecurity

This rant is a familiar one to a lot of desktop-facing IT professionals.

Today the person who handles our bank stuff came to me with a problem: the check-scanner wasn't working. I poked around but couldn't make it work, and advised her to talk to the bank, since they provided the scanner, the scanner driver, and the IE plugins it worked with, and Windows didn't even recognize it as a scanner.

So she did.

Their advice?

You need to turn off Windows Update. Every time it runs it changes things in IE, and you have to go through and do a bunch of things to make it work.

I gave her the look. Then I remembered myself and redirected the look to a harmless corner until I could speak again. Even getting a piece of that baleful stare caused her to cringe. Oops.

Like all right-thinking Windows admins, I'm a believer in leaving Windows Updates turned on unless I'm doing something else to manage that particular risk. Our fleet of Windows machines is not big enough for me to bother with WSUS (we have a LOT of Mac users), and we most definitely do not do anything like blocking browsing at the border. So, I want those updates, thanks.

So I pulled a laptop out of the dead pile. That laptop will now be a dedicated machine for talking to the bank and nothing else. All because our freaking bank can't run securable software. Makes you question your trust in them, it does.

Judicial rubber-hoses

The other day a Colorado court ordered a defendant to produce the unencrypted contents of their own laptop. This is what I've called "rubber-hose cryptography", and previously we've heard of efforts in the UK to compel decryption. It has now happened here, and not at the US border. Unlike in the UK, the decryption demand in Colorado is not based on a law that specifically says courts can demand this.

Wired article

The counter-argument is quite clearly the 5th Amendment right against self-incrimination. If that decryption key exists only in your head, and disclosing it would incriminate you, then you don't have to yield the key.

This judge disagreed. I'm not a lawyer, so I can't tell what legal hairs were split to come to this decision. But the fact remains that this judgment stands. The only concession he appears to have made for the defendant is to preclude the prosecution from using the act of disclosure as a 'confession', but the data yielded by the disclosure is still admissible.

Biometrics

Today I had an incident with biometrics that further convinced me that they are not the end-all be-all of security.

Today I had to head up to our datacenter to do some nebulous "things". Like you do. Since we colo at a large facility that has all of those security certifications, getting into it is something of a trial but a familiar one. Hand scans, double-layered man-trap, the whole deal.

Only, the hand-scanner on the cage our racks are in wouldn't read my hand today. It took 19 tries before it decided I was me. To get this far I had to pass four other hand-scanners and only had to re-enter three times along the way. When I went back to the security station to see WTF, they had me re-scan my hand. Leaning over, I saw that I had managed to fill their live log of entry/exit events with red events, and they didn't seem fazed in the least.

I've had some trouble with this particular hand-scanner before, enough that I dread leaving the cage. For whatever reason, this particular scanner is far enough out of spec that the fuzzy results it returns are fuzzier than their system will accept. Since it's on a rack cage rather than the higher-trafficked man-trap scanners, it hasn't been caught out and re-tuned (or something). But still, I hate that thing.

Strong passwords, an update

Five years ago I published the following article:

Strong passwords in a multiple authentication environment.

The key thing I was driving at in that article is still very much true: a strong password on one system is not necessarily a strong one on another, and this can significantly compromise password security if multiple authentication systems are in use.

That article was full of Solaris 8 and NDS. Here in 2011 those are now really old. What has changed since then?

  • Old Samba versions that don't support NTLMv2 are now very rare.
  • Most modern Samba now includes Kerberos support.
  • Windows installs that require LM/NTLM are now few and far between.
  • Linux can now leverage both LDAP and AD for back-end authentication, and such hooks are common and pretty well documented.
  • Web-authentication systems (OpenID) are now much more common.
  • Application-level auth is much more common and the data it protects much more significant.
Because of all of these, password length is beginning to trump password complexity as the surest bet for an uncrackable password. Those 8-character limits of yore are now blessedly rare.

However, some things still haven't changed:

  • Older software with embedded authentication still can require older password protocols.
  • Some IDM systems force passwords to be command-line safe, which restricts the allowable special-characters that may be used.
  • Embedded devices, especially very expensive embedded devices, can require old password protocols long after they've been superseded. 
  • The IT debt built into some IDM systems can prevent newer authentication systems from being implemented due to simple resource costs, keeping older protocols around longer than is safe.
  • Arbitrarily short field-limits in databases that store passwords (16 characters should be good enough for everyone, obviously).
  • Developers who decide to write their own authentication systems from scratch rather than hook into something else that's been battle-tested.
So, even now, today, in 2011, the bad decisions of ten years ago, or the hard-to-update technology of ten years ago, can significantly hamper a unified password policy for a multi-authentication system. That hasn't changed. It's all well and good that Linux (which replaced your Solaris installs three years ago) can support 200 character passwords, but that doesn't matter if the custom-built ERP application has 10-character passwords baked into its core.

However, another trend has continued since 2006: web-apps have continued to eat client/server apps for lunch.

With web-apps, the option of leveraging a different authentication system, or at least providing an abstraction layer to hide the old cruft from view, becomes possible. Perhaps there is now a web-app in front of the custom-built ERP system, put there so everyone could stop maintaining all of those terminal programs and ease the VPN/home-computer problem (everyone has a browser). That web-app could very well use an alternate authentication source, such as LDAP, and use the LDAP database itself to store the (highly entropic and frequently, automatically rotated) authentication tokens needed for the old system.
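
As a hedged sketch of that rotation piece, using the ldap3 library with made-up server, DN, and attribute names: the front-end periodically generates a fresh credential for the legacy account, pushes it into the old system, and records it in LDAP so no human ever needs to know it.

    import secrets
    import string
    from ldap3 import Connection, Server, MODIFY_REPLACE

    def rotate_legacy_credential(conn, user_dn, set_legacy_password, length=10):
        """Generate a random credential (length capped by the old ERP system),
        push it into the legacy system, then record it in LDAP so the web
        front-end can authenticate on the user's behalf."""
        alphabet = string.ascii_letters + string.digits
        new_secret = "".join(secrets.choice(alphabet) for _ in range(length))
        set_legacy_password(new_secret)  # however the old system is scripted
        conn.modify(user_dn, {"legacyERPCredential": [(MODIFY_REPLACE, [new_secret])]})

    # Usage (server, DN, and attribute names are all hypothetical):
    # conn = Connection(Server("ldap.example.com"), "cn=svc-web,dc=example,dc=com",
    #                   "service-password", auto_bind=True)
    # rotate_legacy_credential(conn, "uid=jdoe,ou=people,dc=example,dc=com",
    #                          set_legacy_password=my_erp_password_setter)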

With a system like that, such an old system can still be protected by an enterprising user who has selected this:

valence NIMBOSE sequestrate absolution [953]
as their passphrase. Four dictionary words, no funny spellings, four character sets.

Is that a good password? Consider this: The Oxford English Dictionary has over 600,000 words in it. A four-word uncased pass-phrase using random words requires 1.296x10^23 guesses to brute-force, since each position has 600K possibilities. An 8-character mixed-case password requires 5.346x10^13 guesses; add in numbers and that rises to roughly 2.2x10^14.

With GPU-based password crackers available now, even fully randomized, salted 8-character passwords are not that good. This is the march of technology. As has been pointed out, adding ever more random ASCII to a password doesn't scale. Pass-phrases? Good things!

But if you restrict the word-set to words in common usage and exclude words only used in technical settings, it drops precipitously to 20K or less. Even so, a four-word uncased passphrase gives a password requiring 1.6x10^17 guesses. Throw in some irregular casing and the total goes up markedly.
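
The arithmetic behind those figures is easy to reproduce; here's the back-of-the-envelope version in Python:

    # Back-of-the-envelope guess counts from the paragraphs above:
    # search space = (symbols per position) ** (number of positions).
    searches = {
        "600K-word dictionary, 4 words": 600_000 ** 4,   # ~1.3e23
        "20K common words, 4 words":     20_000 ** 4,    # ~1.6e17
        "8 chars, mixed case":           52 ** 8,        # ~5.3e13
        "8 chars, mixed case + digits":  62 ** 8,        # ~2.2e14
    }
    for name, guesses in searches.items():
        print(f"{name:>30}: {float(guesses):.3e} guesses")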

Which reminds me, I need to spend some time going over our own in-house software for password-handling safety. You should too!
