Recently in crypto Category

Microsoft is releasing an out-of-band patch to invalidate two of their Intermediate Certificate Authorities.

In essence, the Flame malware appears to carry code signed by a valid MS certificate authority. As this particular malware is suspected to have been written by a "state actor" (a.k.a. a cyber-warfare unit for a government), chances are good that this CA is not circulating in the general unseemly underbelly of the Internet. However, it does represent a compromise of those certificates, so Microsoft is issuing revocation certificates for them.

The core problem here is that this CA is trusted by all Windows installs by default, and could be used to sign patches or other software. The obvious problem is valid-seeming software installs; the less obvious one is that somewhere a state actor has the ability to perform man-in-the-middle attacks on SSL using valid certificates.

The PKI system was designed around the idea that certificate authorities would occasionally get compromised and does contain mechanisms for handling that. However, those mechanisms are not frequently exercised so the process of issuing such a revocation is, shall we say, not as smooth as it really should be.

You really should apply this patch as soon as it arrives.

Judicial rubber-hoses

The other day a Colorado court ordered a defendant to produce the unencrypted contents of their own laptop. This is what I've called "rubber-hose cryptography," and previously we've heard of efforts in the UK to compel decryption. It has now happened here, and not at the US border. Unlike the UK, the decryption demand in Colorado is not based on a law that specifically says courts can demand this.

Wired article

The counter-argument is quite clearly the 5th amendment right guaranteeing the ability to not self-incriminate. If that decryption key only exists in your head, and disclosing it would incriminate you, then you don't have to yield the key.

This judge disagreed. I'm not a lawyer, so I can't tell what legal hairs were split to come to this decision. But the fact remains that this judgment stands. The only concession he appears to have made for the defendant is to preclude the prosecution from using the act of disclosure as a 'confession', but the data yielded by the disclosure is still admissible.

Larcenous information leakage

Or, losing company data through laptop theft. ServerFault had an interesting question on this topic crop up the other day. Most of the answers were focused on private industry, but this is a topic that affects us governmental/educational types as well. In different ways, of course.

Unlike a private business that has business methods and data that are intellectual property, we governmental types have to live with variations on the Freedom of Information Act. Here in Washington State, it's called a Public Records Request. Either way, it is entirely probable that a correctly worded PRR could retrieve any source-code we have. There are some regulations that limit what we can let out, such as FERPA (the Family Educational Rights and Privacy Act), but mere business process is open for citizen review.

Because of FERPA, we're quite paranoid about student data. That kind of information doesn't tend to wander onto laptops, but we still don't want to end up on a breach-disclosure list. We have policies about this.

That said, while our budget realities mean that very few people have work-supplied laptops, a lot of private laptops do end up in the office. These are laptops that generally do not connect to the wired Ethernet, they connect via the same wireless networks all of our students use. They can't get directly at our Banner data there, but they can get at pretty much everything else.

I believe I've mentioned before now that Higher Ed networks do not look like Corporate networks.

  • We do not have 'whole disk encryption' policies though those might be coming.
  • We're currently updating our email policies to make even more clear that University business conducted in private email (ahem, gmail) is still subject to Public Records Requests and archiving requirements.
  • For a while our use of Blackberries exploded, but the iPhone/Android revolution is rapidly reducing that. However, the number of people reading work-email over these devices has only gone up (see also, revised email policy).
  • Due to internal politics, putting USB-drive-blocking GPOs and similar restrictive technologies into place is exceedingly hard. The same holds true for blocking access to off-campus WebMail and social media sites.
In short, it's hard to keep our data from wandering.

There is a very good reason why our Security Audits are interesting reading. We're a kind of unholy cross between an ISP network and a corporate network.

The risk of email interception

Anyone who does email knows that it is really easy to intercept in-flight. Unless TLS is in use, the messages are transmitted in plain text, and the SMTP protocol is designed around the assumption that untrusted third parties may handle the messages between source and destination (a holdover from the UUCP days, as it happens). The appliance and cloud anti-spam industries are built around this very capability.
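To make concrete what "plain text" means here, this is roughly what a sniffer sitting between two mailers would capture for an unencrypted session. The hosts, addresses, and token are made-up, but the shape of the exchange is standard SMTP:

```
220 mx.example.com ESMTP
EHLO mailer.example.net
250 mx.example.com
MAIL FROM:<noreply@example.net>
250 2.1.0 Ok
RCPT TO:<user@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: Password reset

Your password reset token is 8f3a...
.
250 2.0.0 Ok: queued
QUIT
```

Everything there, headers and body included, crosses the wire readable by anyone in the path unless both ends negotiate TLS first.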

But how much of a risk is illicit interception? Or worse, monitoring? Everyone knows you don't send passwords or credit-card information in email, but we also send password reset messages in email. Some web-sites still send your password when asked for a 'reminder', so clearly some reset-system designers consider email secure-enough. Or maybe it's just convenience trumping security again.

To figure out how much of a risk it is we need to know 2 things:
  1. How can email be intercepted?
  2. How likely are these methods to be used?
Interception can be accomplished two ways:
  1. Catch the messages in-flight by way of a sniffer.
  2. Catch the messages in the mail-spools of the mailers handling the message.
There is another vector that is even more damaging, though. Catching the message in the final mailbox. That isn't interception, it's something else, but it really impacts the security of email so I'll be including it. Under the fold.

Legal rubber hose usage increases

According to The Register, the UK police have increased the exercise of the power that allows them to compel the revealing of crypto keys. That fancy duress key you put on your TrueCrypt volume is only good for earning you jail time. I've mentioned this before, but crypto is vulnerable at the end-points. If the Government can point a loaded law at you to force you to reveal your keys, the strength of your convictions, not your crypto, is what is being tested. Perhaps that 2-5 year prison term is worth it. Or maybe not.

I take heart that a majority of those served with the demand notice have refused. But we still don't quite know what'll happen to them.

This is harder to pull off in the US thanks to the 5th amendment, but there is nothing stopping this kind of thing off our shores. Or heck, at our borders.

Two years ago I posted an article that has been fairly popular, Encryption and Key Demands. The phrase 'duress key' seems to drive the traffic there, even though I'm not the one who coined the term. Anyway, a real-life example of that has shown up.

UK jails schizophrenic for refusal to decrypt files

If you don't fork over your decryption keys on demand, you get jail time just for that! As I said two years ago, this is a lot harder to pull off in the US due to that whole Bill of Rights thing. Harder, but not impossible.

Legal key recovery

Remember this? About the UK's new laws stating that failing to reveal decryption codes on-demand could result in jail sentences?

Well, it happened. We have yet to see what size of rubber hose is being used, but these two are being sized up.

XKCD gets it (unsurprisingly)

Today's XKCD:

As one crypto-wonk I spoke to years ago put it, this is called "rubber-hose cryptanalysis." Or put another way, the easiest way to crack crypto is to attack the end-points. Don't waste your time brute-forcing the ciphertext; kidnap the person who owns the password and beat them until they tell you what it is. Or grab the cleartext with a screen-scraper when the recipient decrypts it. Or sniff the crypto-password with a key-logger when it is typed. Or live-clone the box once the encrypted partition is mounted. Except for the beatings, US law enforcement has used all of these methods to circumvent encryption.

It is for this reason that the UK has passed a law making it illegal to withhold crypto-passwords when demanded by law enforcement. Failure to reveal the passphrase will result in jail time, even if the crime they're investigating carries a lower mandatory sentence. The cryptonerds that xkcd was lampooning have thought of this, which is where the concept of the duress key comes from: a key you give when you are under duress, which when used will either destroy your data instead of revealing it, or reveal an equivalent amount of innocent data.

The problem with a duress key like this is that law enforcement NEVER works on the live data if they can at all avoid it. Working on live data taints evidence-chains, which makes convictions harder. So if you set up a duress key for your TrueCrypt partition, the UK police seize the machine and demand the password, and you give the duress password, it will only scrub the copy of the data they were working on. Now they know you lied to them, and you are guaranteed to be asked firmly for the real password, if not thrown into jail for hampering a police investigation.

Enabling autokey auth in NTP on SLES10

The NTP protocol permits the use of crypto to authenticate clients and servers to each other, as well as between time servers. By default, SLES10 is set up to allow the v3 method of using symmetric keys, but not the v4 method that uses public/private keys. If you want to use the v4 method, this is the tip for you.


By default SLES runs NTP inside a chroot jail whose root is at /var/lib/ntp/; this is the more secure way of running NTP. If you wish, this can be changed from the YaST NTP config screen.

Additionally, ntpd runs with an AppArmor profile loaded against it for added security.

Getting NTPv4 auth to work

There are 4 steps to get this to work.

  1. Copy the .rnd file to the chroot jail
  2. Run ntp-keygen
  3. Modify the AppArmor profile for /usr/sbin/ntpd to allow read access to the new files
  4. Modify the /etc/ntp.conf file to enable v4 auth.

Copy the .rnd file to the chroot jail

By default, there should be a .rnd file at /root/.rnd. If so, copy it to /var/lib/ntp/etc/.rnd. If there is no file there, one can be generated through use of openssl.

timehost:~ # openssl rand -out /var/lib/ntp/etc/.rnd 1

Run ntp-keygen

Change directory to /var/lib/ntp/etc, and execute the following command:

timehost:~ # ntp-keygen -T

This will drop a pair of files in the directory you run it from, so running it while in /var/lib/ntp/etc saves you the step of copying them there.

Modify the AppArmor profile

This is done through YaST:

  1. Launch YaST
  2. Go to the "Novell AppArmor" section, and enter the "Edit Profile" tool.
  3. Select "/usr/sbin/ntpd" and click Next.
  4. Click the "Add Entry" button and select File.
  5. Browse to /var/lib/ntp/etc/.rnd and click the "Read" permissions check-box, and click OK
  6. Repeat the previous two steps to add the two files created by ntp-keygen, named "ntpkey_cert_[hostname]" and "ntpkey_host_[hostname]".
    1. Note: AppArmor behavior changes between SP1 and SP2. In SP1 you can use the link files, in SP2 you need to specify the link targets.
  7. Click Done on the main Profile Dialog
  8. Agree to reload the AppArmor profile
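For reference, the YaST steps above boil down to a couple of read rules inside the ntpd profile. A sketch of the equivalent hand edit, assuming the default SLES profile at /etc/apparmor.d/usr.sbin.ntpd and a machine named "timehost" (substitute your own hostname in the ntpkey filenames):

```
# Added inside the /usr/sbin/ntpd { ... } block of
# /etc/apparmor.d/usr.sbin.ntpd -- read access to the autokey material.
# Filenames assume hostname "timehost"; adjust to match yours.
/var/lib/ntp/etc/.rnd r,
/var/lib/ntp/etc/ntpkey_cert_timehost r,
/var/lib/ntp/etc/ntpkey_host_timehost r,
```

Remember the SP1/SP2 caveat from the list above: on SP2 these rules need to name the link targets, not the link files. Reload the profile afterwards for the change to take effect.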

Modify /etc/ntp.conf

The YaST tool for NTP doesn't allow for v4 configurations, so this has to be done on the command line. Open the /etc/ntp.conf file with your editor of choice, and insert the following lines before your "server" lines:

keysdir /var/lib/ntp/etc/
crypto randfile /var/lib/ntp/etc/.rnd

Then append the word "autokey" to the server and peer lines of your choice. At this point, you should be able to restart ntpd and it will use authentication. This is a very basic NTPv4 configuration, but it lays the groundwork for more complex configs.
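Put together, a minimal /etc/ntp.conf for this setup might look like the following sketch. The server names are placeholders, and the upstream server marked "autokey" must itself be running autokey for the association to authenticate:

```
# Minimal NTPv4 autokey sketch -- hostnames are placeholders
keysdir /var/lib/ntp/etc/
crypto randfile /var/lib/ntp/etc/.rnd

# Upstream time source, authenticated via autokey
server tick.example.edu autokey

# Unauthenticated fallback source
server tock.example.edu

driftfile /var/lib/ntp/drift/ntp.drift
```

After a restart, "ntpq -p" should show the autokey association reachable; if it never syncs, the AppArmor profile not allowing reads on the key files is the first thing to check.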

eDirectory certificate server changes

The new eDir 8.8 has introduced some changes into my environment, and from the looks of it some of them were there before I did the upgrade. Specifically to the CA in the tree. In googling around, I found this excerpt from the CA documentation:

With Certificate Server 3.2 and later, in order to completely backup the Certificate Authority, it is necessary to back up the CRL database and the Issued Certificates database. On NetWare, these files are located in the sys:system\certserv directory.

For other platforms, both of these databases are located in the same directory as the eDirectory dib files. The defaults for these locations are as follows:

  • Windows: c:\novell\nds\dibfiles

  • Linux/AIX/Solaris: /var/opt/novell/edirectory/data/dib

These defaults can be changed at the time that eDirectory is installed.

The files to back up for the CRL database are crl.db, crl.lck, crl.01 and the crl.rfl directory. The files to back up for the Issued Certificates database are cert.db, cert.lck, cert.01, and the cert.rfl directory.

I didn't know about that directory. I also didn't know that the CA is publishing a certificate-revocation-list to sys:apache2\htdocs\crl\. Time to twiddle the backup jobs.
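On the Linux boxes, twiddling the backup jobs amounts to adding those files to whatever runs nightly. A sketch as an /etc/cron.d fragment, using the default dib location from the docs above; the /backup destination and the 02:30 schedule are placeholders:

```
# Nightly tarball of the CA's CRL and Issued Certificates databases
# (file list per the Certificate Server docs; /backup is a placeholder)
30 2 * * * root tar czf /backup/edir-ca-$(date +\%F).tgz -C /var/opt/novell/edirectory/data/dib crl.db crl.lck crl.01 crl.rfl cert.db cert.lck cert.01 cert.rfl
```

Note the escaped percent sign in the date format; cron treats a bare % as a newline in the command field.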