October 2007 Archives

It's that time of year

It's Combined Campaign time! As WWU is officially a state agency, we get to give through the centralized web page for that.

This is a login and password I use once a year. This is a login and password I forget every year, since the 'usually your userID is' thing is wrong for me. So I have the needed info squirreled away somewhere.

This is EXACTLY the kind of thing that a distributed identity federation would fix. However, as anyone who has attempted to integrate umpteen bajillion different ID systems knows, that's a heck of a lot of work. So far as I'm aware there is no "State Employee ID Number" to index everyone on.

This one leaked through

Every so often, maybe a couple of times a year, something slips by the spam filters and also catches my attention. This one needed chasing.

I got a mail on a private account with the highly suspicious subject line of "YOU HAVE WON!!!!!!!!!!!!!!"

Rightie then. Time for a text-mode reader! PINE to the rescue! I drop into header mode so it won't render anything in there. When something leaks through, I like to read the full headers to see what the spam checkers thought of it on the way past. This one was somewhat unremarkable, but one thing did stand out: it passed SPF checks.

X-RC-DBID: 046c9cac-dc1e-47d7-acbb-d595ac2651b6
X-RC-ID: 20071025215619610
DomainKey-Signature: a=rsa-sha1;
/QlL/RWHQbX2i8KIAx0KA=; c=nofws; d=yousendit.com; q=dns; s=signed
Received: from localhost (unknown [])
by wa-smtp-02.yousendit.com (Postfix) with ESMTP id 6FA7B3550334
for ; Thu, 25 Oct 2007 14:56:15 -0700 (PDT)
From: Victor Kundala via YouSendIt
To: xxxxxxxxxxxxxxx,
Reply-To: victor_kundala5@yahoo.co.uk
Subject: YOU HAVE WON!!!!!!!!!!!!!!
MIME-Version: 1.0
Content-Type: multipart/alternative;
Message-Id: <20071025215615.6fa7b3550334@wa-smtp-02.yousendit.com>
Date: Thu, 25 Oct 2007 14:56:15 -0700 (PDT)

Huh. So I google up "yousendit" and find that it really is a legitimate service. The text of the email was the typical gark:

Hello from YouSendIt,

Hello from YouSendIt,

You have a file or files called Dear Winner.doc (1 file(s)) from
victor_kundala5@yahoo.co.uk waiting for download.

You can click on the following link to retrieve your File. The link will expire
in 14 Days .

Link: http://download.yousendit.com/05CE02D8475BB9F9

Do not reply to this automatically-generated email. If you have any questions,
please email us at paidsupport@yousendit.com.

File too big for email? Try YouSendIt at @ysi.base.url@

1919 S.Bascom Ave., 3rd Floor
Campbell, CA 95008

Really? So a little wget magic later and I have the file. I crack it open with strings and get this text:
Dear Winner
We happily announce to you today, the draw of the online UK National Lottery programme held on 20th of October 2007. Your e-mail address won you in the second category, your e-mail address attached to a ticket numbers: 4-33-34-38-39-49(bonus no.23).
You have therefore been approved to claim a total sum of
420,200 British pounds sterling. You are to contact our AFFILIATE COURIER COMPANY for delivery of your winning certificate and winning cheque.
You are to reply to this email address below: MR SOLOMON STONE INTERNATIONAL COURIER SYSTEMS EMAIL: solo_stone2004@yahoo.com Congratulations once more from all members and staffs of this programme.
Yours Truly,
Victor Kundala
It's a phish! And in homage to its 419 past, it even has a Nigerian-sounding name. Awwww.
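For the curious, that little bit of wget magic boils down to two commands; a sketch, with the link being the one from the mail (long dead by now, presumably):

```shell
# Fetch the payload without letting any mail client render it:
wget -q 'http://download.yousendit.com/05CE02D8475BB9F9' -O winner.doc

# Dump the printable text so Word never has to open the file:
strings winner.doc
```

By default strings prints runs of four or more printable characters, which is plenty to expose a lottery scam hiding in a .doc.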

Virtualization and security

I've known for a while now that virtualization as it exists in x86 space is not a security barrier. Heck, it was stated outright at BrainShare 2006 when Novell started pushing AppArmor. The Internet Storm Center people had an article on it a month ago. And now we have an opinion from the OpenBSD creator about it, which you can read here.

It sounds like the main reason virtualization isn't a security barrier is the CPU architecture. Intel is making advances here, witness the VT extensions, and as virtualization becomes more ubiquitous in the marketplace Intel will keep making its CPUs more virtualization-friendly. Which is to say, they're not very VM-friendly right now.

And as Theo stated in his thread, "if the actual hardware let us do more isolation than we do today, we would actually do it in our operating system." Process separation is its own form of 'virtualization', and is something that is handled in software right now. Anything in software can be subverted by software, so a hardware-enforceable boundary makes things stay where they are put.

Which is why I hold the opinion that you should group virtual machines with similar security requirements on the same physical hardware, and separate machines subject to different regulations and requirements. Put another way: do not host the internal time-card web-server VM on the same hardware as your public web server, even if they're on completely different networks. And do not host HIPAA-subject VMs on the same ESX cluster as your BlackBerry Enterprise Server VM.

Virtualization as it exists now in x86-space does provide a higher barrier to get over to fully subvert the hardware. Groups only interested in the physical resources of a server, such as CPU or disk, may not even need to subvert the hypervisor to get what they want; so no need to break out of jail. Groups intent on thievery of information may have to break out of jail to get what they want, and they'll invest in the technology to do just that.

Warez crews don't give a damn about virtualization; they just want an internet-facing server with lots of disk space they can subvert. That can be a VM or a physical server for all they care. They're not the threat, though the resource demands they place on a physical server may cause problems on unrelated VMs through simple resource starvation.

The real threat is the cabals looking to steal information for resale. They are the ones who will go to the effort of busting out of the VM jail, and they're a lot harder to detect since they don't cause huge bandwidth spikes the way the warez crews do. They've always been our worst enemy, and virtualization doesn't do much at all to prevent them gaining access. In fact, virtualization may ease their problem as we group secure and insecure information on the same physical hardware.

Large eDirectory installs

There was a nice post about a real install of a large eDir tree in the support forums recently:

Check it out.

Also? Novell has a real HTTP interface for the forums now.

Peer-to-peer sharing

One feature that has shown up in some applications and widgets lately has gained some traction internally: the concept of peer-to-peer sharing of disk space without going through all the pain of getting things approved and formally set up. The general idea goes like this.

I want to share U:\SharedStuff\ApacheGroup\ to five other users. U: is my home directory, which is actually map-rooted so I don't see the top level directory. So I go to a web page and tell it I want to share this directory, to these people, for this long. Go.

It struck me that this sort of thing can be engineered with NetWare and OES. The key components are eDirectory, NSS, and NetStorage.

The web server takes the request and translates $Path into a real path by referencing the HomeDirectory attribute of the user who requested the share. Then, using LDAP, it creates two objects:

A Group Object
  • Created and named dynamically
  • [AuxClass] Attribute with user-defined name
  • [AuxClass] Attribute with the creator
  • [AuxClass] Attribute with the expiry date
  • Since this is eDirectory, group memberships apply immediately rather than taking a logout/login cycle to refresh the access token like in MS networks.
A Storage Location Object
  • Created & named dynamically
  • Associated to the created group
  • Assigned to the specified users
  • This allows the share to show up in NetStorage
The web server sends a request to a file daemon that handles the actual trustee assignment.
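A sketch of what the web service might generate for one request. The auxiliary class and attribute names (wwuAdHocShare, wwuShareOwner, wwuShareExpiry) are invented here for illustration, as are the tree layout, server, and admin DN:

```shell
# Hypothetical LDIF for the dynamically-created group object. The aux class
# and attribute names are placeholders, not real eDirectory schema.
cat > share.ldif <<'EOF'
dn: cn=share-ApacheGroup,ou=AdHocShares,o=WWU
objectClass: groupOfNames
objectClass: wwuAdHocShare
cn: share-ApacheGroup
member: cn=user1,ou=Staff,o=WWU
member: cn=user2,ou=Staff,o=WWU
wwuShareOwner: cn=me,ou=Staff,o=WWU
wwuShareExpiry: 20071130000000Z
EOF

# Submit it over LDAP (server and bind DN are placeholders too):
ldapadd -x -H ldaps://edir.example.edu -D 'cn=admin,o=WWU' -W -f share.ldif
```

The Storage Location Object would be a second entry in the same LDIF, associated to this group.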

There is a small constellation of maintenance tasks that also need to be created: a janitor process to deal with expirations, a helpdesk view to track who has what shares, a historic view to see what shares got deleted recently that suddenly need to be back RIGHT NOW, and something to interface this with whatever disk or directory quota systems are in use.
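The janitor process could be a nightly cron job; a sketch, with an invented expiry attribute name (wwuShareExpiry) and placeholder server details:

```shell
# Hypothetical janitor pass: find shares whose expiry stamp has passed and
# queue them for deletion. Attribute name, server, and bind DN are all
# invented for illustration.
NOW=$(date -u +%Y%m%d%H%M%SZ)
ldapsearch -x -H ldaps://edir.example.edu -D 'cn=admin,o=WWU' -W \
    -b 'ou=AdHocShares,o=WWU' "(wwuShareExpiry<=${NOW})" dn \
  | awk '/^dn: /{print $2}' > expired-shares.txt
ldapdelete -x -H ldaps://edir.example.edu -D 'cn=admin,o=WWU' -W \
    -f expired-shares.txt
```

A "<=" filter like that only works if the attribute has an ordering matching rule defined; failing that, the janitor would read every share's expiry and compare locally.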

The use of NetStorage allows WebDAV to be used as an access method, which allows the shares to be seen. The really brave may be able to leverage DFS to create real directory structures reflecting the shares, so drive mappings can be used; unfortunately, I have no idea whether a DFS database that large is a good idea.

Users would love this. No need to go through management to get a directory set up on the shared space. You just set up and go. Great for ad-hoc groups, or small private gatherings.

Unfortunately, this sort of share model is one that a lot of sysadmins are already familiar with. If you've ever had a chance to examine the network of a small business with under 15 users, all of whom call themselves 'not that good with computers', you know what I'm talking about. This model of sharing is the one that Windows for Workgroups was designed for, and it is still the default mode for plain old WinXP. Excessive use of peer-to-peer sharing like that can lead to one unholy mess, especially if a key person leaves (or, in the Windows case, one hard drive crashes hard).

If left unchecked, you can get whole business processes designed with the assumption that [username] will never retire. That already happens to an alarming extent, but this would make the dependency even less visible to those of us charged with making it all work again when it breaks. You can have shared spaces that are business-critical to the company living 100% inside a user's self-managed space, and vulnerable to deletion when that employee is terminated.

This is all part of the balance we as system administrators have to keep between end-user functionality and data protection. Desktop techs fight a constant battle to get users to save data on the server where it is backed up, and Novell puts out things like iFolder to help that whole process become more invisible. We created shared directories to draw a big line between 'my stuff' and 'us stuff'.

That said, data-access habits are changing all the time. My own boss prefers to email a 150KB Excel spreadsheet to all of us, even though all of us have ready access to a shared directory set up just for that. SharePoint integrates with Office to make the web server look like a file server. We still have to adapt to the times.

User-directed sharing is something I can see as highly desirable among the student population, and among faculty as well. Among staff, I'm less sure it's a good idea outside of the 'trivial' personal use we're allowed.

Student email

Another piece on Slashdot today was about how GMail is increasing its limits, since some users are going past what is already there. What's more, it points out that the two other biggest freemail systems have gone past GMail in terms of storage. Well, they kind of have to, as GMail is something of the gold standard, and if you're going to compete you have to be better than them. No biggie.

But it does underline the sheer difficulty in providing email service these days. End users, thanks in large part to the work Google has done in GMail, expect the following in their mail service:
  • No significant mail quota
  • A fast, easy to use web interface
  • A fast, easy to use search function for mail inside the web client
  • Very effective spam filters
  • The ability to do everything you want without having to use a mail-client like Thunderbird
That's a lot to live up to, so it is no wonder that .EDUs are actively considering handing over their student email to the commercial services. It is possible to do all of the above, but it is very expensive. The big enterprise email products (Exchange, GroupWise, Lotus Notes) all fall down on at least one of the above points. It is possible to cobble together something out of open-source components, but as we've learned, the Achilles' heel of the OSS mail stack is spam. Plus, the OSS stack has a tendency to fall apart when scaling to the levels we're at. And we soooo can't afford to serve student email out of the enterprise systems, so rock, meet hard place.

As a side note, I know of at least one .EDU larger than us that serves student email out of Exchange. That's 50,000 accounts all told. So it can be done. But they're a private university, unlike WWU, which is publicly funded.

Yet there are still problems with 'outsourcing' student email to Google or Microsoft. First and foremost, if our internet connection bombs, students on campus are out of email. Second, data mining of usage patterns from this highly desirable demographic runs contrary to the spirit of .edu mail. Third, single sign-on may be hard to impossible to accomplish, forcing students to have *shudder* more than one password to manage. Fourth, it may not be possible to 'skin' the interface with our official WWU web standards. Er, brand.

In the end, we could up our student mail quota to 2GB and students STILL wouldn't use it. Good email service is so much more than sheer quota these days.

Xen in 10.3

Because I couldn't get a good video driver working for it, I haven't spent much time in the new Xen. I believe it is Xen 3.1. And yes, it IS a lot faster on the network. Wow. It used to be painful; now it is improved quite a bit. I just patched the Windows VM on my work machine, and the network transfer went really fast. Then I had to turn it off, since I needed my Linux desktop back.

And now I get to see how to convert a Xen disk image into a VMware disk image. I know it can be done, but I haven't dug up the script or whatever.
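For the record, if the Xen image is a plain raw disk image, qemu-img can do the conversion; a sketch, where the file names are just examples:

```shell
# Convert a raw Xen disk image into a VMware-format vmdk.
# qemu-img ships with qemu; file names here are examples.
qemu-img convert -f raw -O vmdk xenguest.img xenguest.vmdk
```

Sparse or qcow-based Xen images would need a different -f argument, but the idea is the same.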

openSUSE 10.3 is in

No problems, but one. The "nvidia in Xen" problem is back, and the prior solution isn't working yet. Unresolved symbols. This could be a work-stopper, or the thing that makes me move all the way to VMware.

NCL 2.0 beta and Xen, pt 2

I now have server-side and client-side captures. The server side shows what's really going on. It is clear that the jumbos I talked about earlier are being disassembled in the Xen network stack. The client traces look similar to this:

-> NCP to server
<- Ack
-> NCP to server
-> NCP to server
<- Ack
<- Ack
-> NCP to server
<- Ack

With variations on the order. The NCP to server packets are all jumbos. From the server side it looks a lot different. The same sequence from the server side:

-> NCP to server
-> NCP to server
-> NCP to server
<- Ack
-> NCP to server
-> NCP to server
-> NCP to server
<- Ack
-> NCP to server
-> NCP to server
-> NCP to server
<- Ack
-> NCP to server
-> NCP to server
-> NCP to server
<- Ack

What's more, the server sees a marked delay in packets between the <- Ack and the first -> NCP to server. On the client side the delays are between the -> NCP to server and the responding <- Ack. I interpret this to show a packet-disassembly delay in the Xen stack.

What I can't figure out is how the jumbos are getting onto the network stack at all. The configured network interfaces (except for loopback) in the Dom0 all have MTU values of 1500. I suspect NCL throughput for higher record sizes would improve markedly if it didn't force the Xen layer to disassemble the jumbos. Overriding the MTU on an interface is something that can only be done in kernel-space (I think), which would point to the novfs kernel module.
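Checking those MTU values is a quick loop over sysfs:

```shell
# Print the MTU of every network interface; anything above 1500 would be
# a candidate source for the jumbos.
for dev in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```

On the Dom0 in question, everything but lo reports 1500, which is what makes the jumbos so puzzling.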

The weakness of crypto

From Slashdot.

It is now legal in the UK for the government to force you to hand over your crypto keys, or face obstruction of justice charges.

This is what's known as "rubber-hose cryptanalysis": beating the holder of the crypto keys with a rubber hose until they relent and let you have them.

Banks are understandably wary of this law, as are privacy advocates. Failure to decrypt data on demand by the government will earn you jail time, regardless of what offenses they may be planning to charge you with. Even if nothing else sticks, if you refuse to decrypt you'll STILL do time.

NCL 2.0 beta and Xen

I found a good candidate for why the Novell Client for Linux 2.0-beta is so crappy in a Xen kernel. Take a look at this:

Frame 6 (4434 bytes on wire, 4434 bytes captured)

What the heck? What it should look like is this:

Frame 6 (1514 bytes on wire, 1514 bytes captured)

So I go to our network guys and ask, "Have we turned on jumbo frames anywhere?" No, we haven't. Anywhere. Which I pretty much knew, since we're not doing any iSCSI. So where the heck is that jumbo coming from? The only thing I can think of is that the sniffing layer I'm capturing at sits above the layer that grabs what actually hits the wire, and something between the sniffer and the wire is converting these jumbos down to normal 1514-byte Ethernet frames. That's where my lag is coming from.

This is a case where I'd like to span a port and get a sniff of what actually hits the wire so I can compare.