February 2009 Archives

[This is a short series I'm doing about this act. This is my opinion, and in no way represents the opinion or stance of WWU as a whole or in part, nor does it imply anything about our lobbying efforts. This is editorial.]

Part 4: Court cases to expect

The law itself is pretty non-specific, which means that it'll be up to the courts to determine the exact limits. There are several cases I can think of that'll need to be had early in order to flesh out who is responsible for what, and what the limits of enforceability are.
  • Does this apply to home users or not? Does that wireless router sitting next to your cable modem qualify as "a provider of an electronic communication service"? I'm guessing not, but I expect this very question to be asked as soon as a case gets to the point where a leeching neighbor sucked down some files he really shouldn't have, and the access-point owner can't prove who actually did it.
  • Does the ISP have to distinguish between individual adults on a contract? The test case for this one would be the room-mate problem. Say I have an apartment I share with 3 friends, none of whom are married (i.e. 4 separate legal entities). If I pay the internet bill, but the gas-bill-paying roomie has been sucking down kiddy porn, the ISP only has my billing info on file. Is this good enough, or do they also have to be able to audit each roomie's access patterns separately? This also applies to hotel rooms shared by adults.
  • The NAT gateway problem. In theory, once everyone is on IPv6, NAT isn't a serious concern anymore. Riiight. My ISP has one IP address for our connection, and yet... there are eight IP-consuming devices in my house. Who is responsible for auditing the true originating IP address? Resolution of this question will go a long way towards answering whether or not home installs also need to keep a two year audit trail of IP/UID.
  • Active vs. passive authentication barriers. Active authentication means you must log in; passive authentication means inferring identity from things like jack location and inductive logic. Cases here will determine where the 'good enough' line sits for identifying information.
  • Intranet vs. Internet. Is the need to keep an IP/User audit trail only for access off of the local network, or does it have to be kept for on-network access as well? A test case here at WWU would be if a group of students were swapping files they should not swap purely on our own WLAN. To my knowledge, we don't restrict access on the WLAN subnet itself, just off that network. If the students don't try to access off network resources, we'll never have an audit trail. Is this a problem under the act?
If this act does come before Congress, I expect the above issues to come up for debate. We'll see what ultimately passes, if it passes.

SLES11 will be out soon.

Novell has posted a preview release of SLES11. It says "RC4", which I suspect means we're within a month or so of release. This release is just in time for BrainShare 2009, were it actually happening. SLES11 would have been the major message of BS09.
[This is a short series I'm doing about this act. This is my opinion, and in no way represents the opinion or stance of WWU as a whole or in part, nor does it imply anything about our lobbying efforts. This is editorial.]

Part 3: Trade Shows and Conventions

Perhaps the most visible area to be impacted by this act would be trade shows and conventions. Conventions like DEFCON, TechEd, VMWorld, BrainShare, and the like all have wireless access as part of the show. It allows journalists to mail their impressions of the event to their editors, bloggers to blog, Flickr photo updates, IM-based event coordination among attendees, and a billion other things.

Wireless access at these events is not going away. It is far too critical to how these events function. Instead, they'll have to adapt. What'll most likely happen is that you'll get a uid/pw on the back of your convention badge that you'll need to enter to get access out of the local network. It'll just be a fact of life.

At conferences like DEFCON where network perversion is a game, you can guarantee that the auth mechanism will come under extremely heavy attack.

Unlike your local coffee shops or the hotel you're staying at for the convention, the big convention centers have wireless access as a core business need. They will solve this problem. The interesting court cases will be about who is the custodian of the user/IP audit data for these conferences.
[This is a short series I'm doing about this act. This is my opinion, and in no way represents the opinion or stance of WWU as a whole or in part, nor does it imply anything about our lobbying efforts. This is editorial.]

Part 2: Coffee Shop Access

Coffee Shop Wifi is axiomatic. IF Coffee THEN Wifi. It just works that way. Coffee shops were among the first public hotspots out there. It just goes with the business model.

Getting wifi in your independently owned coffee shop is very simple. Get a business internet connection, an off-the-shelf wifi router/AP of some kind, set it to open, and go. Done! Very little maintenance needed. Just the kind of low-margin value-add needed to keep butts in seats and swilling coffee while nibbling on high-margin baked goods.

The big boys of the coffee business, Starbucks, Dunn Brothers, and their ilk, tend to partner with the big wifi providers like T-Mobile. Their internet isn't free, but at least the home office doesn't have to track umpty thousand individual broadband connections and troubleshoot wireless equipment failures; they just pay the national company to do that.

The Internet SAFETY Act would force the free-access shops to sign with one of the big boys of wifi. Coffee shops are pretty clearly commercial interests, and they really do use the internet connection as a key value-add to keep customers. Your average independent coffee shop doesn't have the technical moxie to even try to handle the authentication problem. It would be far simpler to sign a contract with a T-Mobile and let them handle it. The internet wouldn't be free, but at least it wouldn't be absent, which would be worse.

The Internet SAFETY Act would be the final death of free wifi in coffee shops. One of the ways the independents distinguish themselves from the large chains is that their internet is free, where you have to pay for access at the Starbucks down the street. This act would remove that small-business incentive.
[This is a short series I'm doing about this act. This is my opinion, and in no way represents the opinion or stance of WWU as a whole or in part, nor does it imply anything about our lobbying efforts. This is editorial.]

Part 1: Hotels

In my first piece yesterday, I brought up Hotels as one area that would be affected by this act. The details can be obscure at first glance, but they are there. The Hospitality Industry as a whole would be affected by the need to authenticate end-users to IP addresses.

I recently stayed at a hotel in another part of the US that would have to change how it handles the internet connection in the rooms. As with so many hotels, they used wifi rather than a wire in the room. As with so many hotels, you had to click through an Acceptable Use Policy screen to get access to the internet from their wireless segment. Some hotels require a username and password to get past this screen, but in my experience this uid/pw combo is never individualized to the room; it is either generic to the hotel or rotated daily for all guests.

Were the Internet SAFETY Act to become law, this would have to change. Hotels that use wifi connections (and travelers very much prefer them, since it means one less cable to carry around) would have to find some way to associate an IP address with the credit card used to pay for the room. The obvious way would be to provision a unique userID and password when the room keys are generated at check-in. This technology exists, but is not in use because it is inconvenient to the guest, something the hospitality industry tries to avoid if at all possible. It would also require hotels to redesign how they offer internet service.
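
To make that concrete, here is a minimal sketch, in Python and purely for illustration, of the provisioning step just described: when the room keys are cut, generate a per-stay wifi login and record which guest and payment card it belongs to. Every name, field, and the credential scheme here is invented; this does not describe any real property-management system.

import secrets
from datetime import datetime

def provision_wifi_login(folio):
    """folio: the check-in record; here just a guest name, room, and card last-4."""
    username = "guest%d" % folio["room"]
    password = secrets.token_urlsafe(8)       # would be printed on the key-card sleeve
    audit_record = {                           # the piece that would be retained for two years
        "username": username,
        "guest": folio["guest_name"],
        "room": folio["room"],
        "card_last4": folio["card_last4"],     # ties the login back to the payment instrument
        "issued": datetime.now().isoformat(),
    }
    return username, password, audit_record

user, pw, record = provision_wifi_login(
    {"guest_name": "J. Doe", "room": 322, "card_last4": "1234"})

The interesting part is the audit record, not the password: that mapping of login to payer is what a hotel would be on the hook to keep.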

Yet even this has some problems. Take the case of a college football team bussing to another state for a Bowl game. That'll be a room block of anywhere from 15-30 rooms all on the same purchasing instrument, and yet there could be as many as 60 unique IP addresses requested by the various devices carried by the team members, staff, and other adults. Is it good enough that there are 60 addresses associated with one name? Would the hotel have to issue a uid/pw for each person staying in the room block? The courts will have to decide what constitutes 'good enough' identity-glue in cases like these.

It is also possible that Hotels will start 'partnering' with the big Wifi providers out there like T-Mobile, and just use that authentication method. Internet will no longer be complimentary, but at least the Hotels would be out of the ISP business. The audit requirement would be outsourced to the same companies that can provide WiFi at every Starbucks in the land.

The big caveat here is wired access. Port 34A-423 (SW08/02/32) is in room 322, and IP address 192.168.202.33 was last seen on that port. This kind of data is pretty simple to gather, and can be associated with a specific room. Hotels with a significant wired internet installation wouldn't have to go with an extensive uid/pw setup, as they can already isolate network access to a physical (paid-for) location. Hotels like these would be able to migrate to SAFETY Act-compliant complimentary internet more cheaply than their wireless brethren. But who uses wired internet anymore?

This bill would change the nature of business travel. In my opinion it would make it more expensive, as internet access would be encumbered with an audit requirement that is significantly more expensive than the base network infrastructure. Complimentary access would more and more be a benefit only at the most expensive hotels.

999 posts

According to blogger, this is post number 999 of this blog. Neat. That represents five years and one month of blogging activity here. The very first post here wasn't even a "tap tap tap, is this thing on?" post, which I'm very proud of. It was an outage report.

This blog was started as a way to test out the Myweb service. I was tasked with creating the 'web pages from home directory' project, and myweb was the result of that. Faculty have proven very fond of the service, as I see evidence of web design classes being taught every quarter. I also see signs of certain faculty publishing syllabi that way rather than through Blackboard. It's also a pretty good way to send really large files to people.

Since then, I've done a lot. Four BrainShares, several major migration projects, a lot of Novell stuff, more opinion stuff, blogger migrations, the introduction of actual tags in blogger, the move from Apache 1.3 to Apache 2.0 for this service, and lots of other stuff as well. I don't post as often as I did back then, so I expect it'll take longer to get to 2000. Who knows where it'll be by then.

I'm keeping it up, of course!

The Internet SAFETY Act

I'm sure this has made the rounds, but I've been out sick for the past week and thus not as caught up on my tech media as I normally would be. But a bill has been introduced to the US Congress that would:

SEC. 5. RETENTION OF RECORDS BY ELECTRONIC COMMUNICATION SERVICE PROVIDERS.

    Section 2703 of title 18, United States Code, is amended by adding at the end the following:
    `(h) Retention of Certain Records and Information- A provider of an electronic communication service or remote computing service shall retain for a period of at least two years all records or other information pertaining to the identity of a user of a temporarily assigned network address the service assigns to that user.'.
    At minimum this means keeping DHCP records for 2 years. What's a bit more unclear is whether or not an IP address alone is sufficient to meet the standard of 'identity of a user'. I don't think it is, though the courts will have to clarify this. This tells me that we'd have to retain records associating IP addresses with authenticated users.
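
    To make the record-keeping concrete, here is a minimal sketch, in Python and purely illustrative, of the join that standard seems to require: a DHCP lease only gets you to a MAC address, so it has to be correlated with an authentication record to reach a username. The record layouts below are invented and are not any particular DHCP or RADIUS server's log format.

    from datetime import datetime

    def user_for_ip(ip, when, leases, auths):
        """Return the authenticated user holding `ip` at time `when`, or None.
        leases: dicts with ip, mac, start, end.  auths: dicts with mac, user, time."""
        lease = next((l for l in leases
                      if l["ip"] == ip and l["start"] <= when <= l["end"]), None)
        if lease is None:
            return None          # no lease record: nothing to retain
        hits = [a for a in auths if a["mac"] == lease["mac"] and a["time"] <= when]
        if not hits:
            return None          # unauthenticated device: the hard case
        return max(hits, key=lambda a: a["time"])["user"]

    leases = [{"ip": "10.1.2.3", "mac": "aa:bb:cc:dd:ee:ff",
               "start": datetime(2009, 2, 1, 8, 0), "end": datetime(2009, 2, 1, 16, 0)}]
    auths = [{"mac": "aa:bb:cc:dd:ee:ff", "user": "jdoe",
              "time": datetime(2009, 2, 1, 8, 5)}]
    print(user_for_ip("10.1.2.3", datetime(2009, 2, 1, 12, 0), leases, auths))   # jdoe

    The point of the sketch is the second lookup: without an authentication event tied to the device, two years of DHCP logs still only name a MAC address.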

    For commercial ISPs this is an easier bar to pass, as you need a username and password or some such equivalent to get on their networks in the first place and be provisioned with an address. For entities like us who are sort-of ISPs for our students, and have very permissive usage policies for our faculty (sex-researchers have a legitimate business need to search for, you know, sex), it's a bit less cut and dried. What isn't yet clear, but is getting a lot of internet buzz, is whether or not home users fall under this requirement as well.

    Bills such as these make a fundamentally false assumption about the internet:
    The end points always require authentication prior to usage.
    So long as vendor-neutrality holds, anyone who can get on the network at all can pass traffic over it. The Internet's protocols have no header value signifying whether the originating node was authenticated or anonymous; they just don't care. Authentication is optional on the Internet, not mandatory.

    This bill would indirectly require mandatory authentication for network access. Yes, this is a trend in the business world these days (google term: NAC), but there are whole classes of network users out there that aren't even looking into this. The locally owned independent coffee shop with a commercial DSL line and free WiFi, or the Hotel with 200 guests sharing the same business Comcast line: these are the sorts of 'anonymous' network access where NAC solutions aren't likely to ever be in place.

    Ultimately, by the time I'm 50 I expect the Internet to have converted to a mandatory-auth scheme for access. However, we're not there yet, not even close. This bill needs to be fought.

    $8.3Bn

    A new budget forecast is out. It's now up to $8.3Bn for next biennium. Back when we thought it was just a $5.8Bn deficit, the governor handed the legislature a $33.5Bn budget. That now gets to be pared down to $31Bn somehow. Earlier, the $5.8Bn number was cited as 20% of the discretionary budget. Which would make $8.3Bn about 28.6% of the discretionary budget, of which Higher Ed is a part.
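
    Checking that arithmetic with a quick Python one-off:

    deficit_then, deficit_now = 5.8, 8.3       # $Bn
    discretionary = deficit_then / 0.20        # ~$29Bn, implied by "5.8 was 20%"
    print(round(discretionary, 1), round(deficit_now / discretionary, 3))   # 29.0 0.286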

    Ouch.

    But then, we kind of expected the forecasts to get worse. Alas.

    The Legislature doesn't have a lot of options. Cutting is the easiest thing to do from a procedural point of view, as any new taxes have a large process to go through before being approved, thanks to the Washington State initiative process. However, any time you try to cut any of the unprotected sacred cows, such as childhood healthcare or public education, there will be phone calls. Lots of phone calls.

    I expect the 2010 initiative process to include more than one attempt to roll back a cut made this year. We'll see.

    tsatest and incrementals

    Today I learned how to tell TSATEST to do an incremental backup. I also learned that the /path parameter requires the DOS namespace name. Example:

    tsatest /V=SHARE: /path=FACILI~1 /U=.username.for.backup /c=2

    That'll do an incremental (files with the Archive bit set) backup of that specific directory, on that specific volume.

    HP Data Protector has a client for NetWare (and OES2, but I'm not backing up any of those yet). This is proving to take a bit of TSA tuning to work out right. I haven't figured out exactly where the problem is, but I've worked around it.

    The following settings are what I've got running right now, and seems to work. I may tweak later:

    tsafs /readthreadsperjob=1
    tsafs /readaheadthrottle=1

    This seems to get around a contention issue I'm seeing with more aggressive settings, where the TSAFS memory will go to the max allowed by the /cachememorythreshold setting and sit there, not passing data to the DP client. This makes backups run really long. The above settings somehow prevent this from happening.

    If these prove stable, I may up the readaheadthrottle setting and see if the stall comes back. This is an EVA6100 after all, so I should be able to go up to at least 18, if not 32, for that setting.

    High availability

    64-bit OES provides some options for highly available file serving. Now that we've split the non-file services out of the main 6-node cluster, all that cluster is doing is NCP and a few trivial other things. What kinds of things could we do with this should we get a pile of money to do whatever we want?

    Disclaimer: Due to the budget crisis, it is very possible we will not be able to replace the cluster nodes when they turn 5 years old. It may be easier to justify eating the greatly increased support expenses. Won't know until we try and replace them. This is a pure fantasy exercise as a result.

    The stats of the 6-node cluster are impressive:
    • 12 P4 cores, with an average of 3GHz per core (36GHz).
    • A total of 24GB of RAM
    • About 7TB of active data
    The interesting thing is that you can get a similar server these days:
    • HP ProLiant DL580 (4 CPU sockets)
    • 4x Quad Core Xeon E7330 Processors (2.40GHz per core, 38.4GHz total)
    • 24 GB of RAM
    • The usual trimmings
    • Total cost: No more than $16,547 for us
    With OES2 running in 64-bit mode, this monolithic server could handle what six 32-bit nodes are handling right now. The above is just a server that matches the stats of the existing cluster. If I were to really replace the 6 node cluster with a single device I would make a few changes to the above. Such as moving to 32GB of RAM at minimum, and using a 2-socket server instead of a 4-socket server; 8 cores should be plenty for a pure file-server this big.

    A single server does have a few things to recommend it. By doing away with the virtual servers, all of the NCP volumes would be hosted on the same server. Right now each virtual-server/volume pair causes each attached client to open its own connection, so if I fail all the volumes over to the same cluster node, that node will legitimately have on the order of 15,000 concurrent connections. If I were to move all the volumes onto a single server, the concurrent connection count would drop to only ~2500, roughly one connection per user instead of six.

    Doing that would also make one of the chief annoyances of the Vista Client for Novell much less annoying. Due to name cache expiration, if you don't look at Windows Explorer or that file dialog in the Vista client once every 10 minutes, it'll take a freaking-long time to open that window when you do. This is because the Vista client has to enumerate/resolve the addresses of each mapped drive. Because of our cluster, each user gets no less than 6 drive mappings to 6 different virtual servers. Since it takes Vista 30-60 seconds per NCP mapping to figure out the address (it has to try Windows resolution methods before going to Novell resolution methods, and unlike WinXP there is no way to reverse that order), this means a 3-5 minute pause before Windows Explorer opens.

    By putting all of our volumes on the same server, it'd only pause 30-60 seconds. Still not great, but far better.

    However, putting everything on a single server is not what you'd call "highly available". OES2 is a lot more stable now, but it still isn't at the legendary stability of NetWare 3. Heck, NetWare 6.5 isn't at that legendary stability either. Rebooting for patches takes everything down for minutes at a time. Not viable.

    With a server this beefy it is quite doable to do a cluster-in-a-box by way of Xen. Lay a base of SLES10-Sp2 on it, run the Xen kernel, and create four VMs for NCS cluster nodes. Give each 64-bit VM 7.75GB of RAM for file-caching, and bam! Cluster-in-a-box, and highly available.
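
    For concreteness, here is a sketch of what one of those four domU definitions might look like in the classic xm-style config syntax (which is Python-parsed) that SLES10 SP2 uses. This is illustrative only: the VM name, bridge, MAC, and disk paths are all invented, and the one number taken from above is the 7.75GB of RAM.

    # Hypothetical /etc/xen/vm/ncs-node1 -- one of four NCS cluster-node guests.
    # Names, devices, and paths are invented for illustration.
    name       = "ncs-node1"
    memory     = 7936                  # 7.75GB, per the sizing above
    vcpus      = 2
    bootloader = "/usr/bin/pygrub"     # boot the guest's own SLES/OES2 kernel
    disk = [
        "phy:/dev/vg_cluster/ncs-node1-sys,xvda,w",     # guest system disk
        "phy:/dev/disk/by-id/scsi-SHARED-LUN,xvdb,w!",  # shared cluster LUN ('w!' = shareable)
    ]
    vif = [ "mac=00:16:3e:00:00:01,bridge=br0" ]
    on_reboot = "restart"
    on_crash  = "restart"

    Each of the four guests would get its own copy of a file like this, all pointing at the same shared LUNs that NCS arbitrates.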

    However, this is a pure fantasy solution, so chances are real good that if we had the money we would use VMware ESX instead of Xen for the VMs. The advantage there is that we don't have to keep the VM and host kernel versions in lock-step, which reduces downtime. There would be some performance degradation, and clock skew would be a problem, but at least uptime would be good; no need to perform a CLUSTER DOWN when updating kernels.

    Best case, we'd have two physical boxes so we can patch the VM host without having to take every VM down.

    But I still find it quite interesting that I could theoretically buy a single server with the same horsepower as the six servers driving our cluster right now.

    Budget jockeying

    Today in the Seattle PI was this headline:

    UW may face hundreds of job cuts

    In short, the University of Washington may have to lay off quite a bit of staff if the budget rumors going around are true. Rumors that say that the higher-ed budget cut would be 50% above what has already been proposed.

    As it happens, this is in line with an email that President Shepard sent last week. The higher-eds have dispatched lists of what they'll have to cut if the higher cut targets are passed. And they're doing their best to scare people. I don't know exactly what was in our package, but it was grim. On purpose.

    Monday, the same day the above news was released, WWU announced the creation of an Outplacement Center in HR. This is to assist those staff that are given a layoff to find new jobs. I checked, and for Salaried Exempt Professionals like myself, they are required to give me either a sliding scale of warning (based on years of service, in my case 5 months notice) or a check equivalent to the same amount of salary. So if there are layoffs in the non-Union ranks, these people will need the Outplacement Center.

    This is a tacit admission that our budget woes are deeply unlikely to be addressed without a reduction in force of some kind.

    Bad graphs

    One of my pet peeves about the Brocade performance graphs is the bad legend. Take this example:
    Performance graph showing GB/Gb inconsistency
    The line across the bottom is clearly labeled "Gb/sec". Following the rule of "big B is byte, little b is bit," that should mean gigabit, not gigabyte. The chart has port 13 shoving data at 45.9 of those units, and it is on a 2Gb port.

    This is wrong. Port 13 is a NetWare server where I'm running a TSATEST virtual backup against a locally mounted volume. The reported data-rate was 2203 MB/Minute, or 36.7 MB/Second, or 293 Mb/sec. The difference between the reported data rate of 36.7 and the Brocade rate of 45.9 probably has to do with caching, but I'm not 100% certain of that.
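
    Just to make the unit math explicit, a quick throwaway Python check of that conversion:

    # Reproduce the conversion above: the drive reports megabytes, the chart axis claims gigabits.
    mb_per_min = 2203                      # TSATEST-reported rate, MB/minute
    mb_per_sec = mb_per_min / 60           # ~36.7 MB/s
    mbit_per_sec = mb_per_sec * 8          # ~293.7 Mb/s, i.e. about 0.29 Gb/s
    print(round(mb_per_sec, 1), round(mbit_per_sec, 1))   # 36.7 293.7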

    More generally, I've seen these charts when I've been pushing data at 140MB/Second (a RAID0 volume, I was doing a speed test) which is over 1Gb/s. As it happens, that was the first time I'd ever seen a Fibre Channel port exceed 1Gb, so it was nifty to see. But the performance reported by the Brocade performance monitor was 140 Mb/s. Clearly, that legend on the bottom is mis-labeled.

    I can't figure out why that would be, though. The fastest ports we have in the system are those 4Gb ports, and their max data-rate would be 512 MB/s, a speed that wouldn't even reach "1.2" on their chart. I don't get that.

    Novell marketing

    Looks like they're continuing to follow their grassroots marketing strategy. Novell recently created a Novell channel on YouTube. They've always made many videos for BrainShare, and I see some vids from BrainShares past up there already. So if you were wandering the internet looking for the Novell PC/Mac spoofs, now you have an official channel to find them.

    The mystery of lsimpe.cdm

    Last night I turned on multi-path support for the main NetWare file cluster. This has been a long time coming. When we upgraded the EVA3000 to an EVA6100 it gained the ability to do active/active IO on the controllers, something that the new EVA4400 can also do.

    What's more, the two Windows backup-to-disk servers we've attached to the EVA4400 (and the MSA1500 for that matter) have the HP MPIO drivers installed, which are extensions of the Microsoft MPIO stack. Looking at the bandwidth chart on the fibre-channel fabric, I see that these Windows servers are also doing load balancing over both of the paths. This is nifty! Also, when I last updated the XCS code on the EVA4400, neither of those servers even noticed the controller reboots. EVEN NICER!

    I want to do the same thing with NetWare. On the surface, turning on MPIO support is dead easy:

    Startup.ncf file:
    SET MULTI-PATH SUPPORT = ON
    LOAD QL2X00.HAM SLOT=10001 /LUNS /ALLPATHS /PORTNAMES


    Tada. Reboot, present both paths in your zoning, and issue the "list failover devices" command on the console, and you'll get a list. In theory, should one path go away, IO will seamlessly move over to the other.

    But what it won't do is load-balance. Unfortunately, the documentation on NetWare's multi-path support is rather scanty, focusing more on configuring path failover priority. The fact that the QL2X00.HAM driver itself can do it all on its own without letting NetWare know (the "allpaths" and "portnames" options tell it not to do that and to let NetWare do the work) is a strong hint that NetWare's MPIO layer is fairly lightweight.

    On the support forums you'll get several references to the LSIMPE.CDM file, with interesting phrases like "that's the multipath driver" and "yeah, it isn't well documented." The data on the file itself is scanty, but suggestive:
    LSIMPE.CDM
    Loaded from [C:\NWSERVER\DRIVERS\] on Feb 4, 2009 3:32:13 am
    (Address Space = OS)
    LSI Multipath Enhancer
    Version 1.02.02 September 5, 2006
    Copyright 1989-2006 Novell, Inc.
    But the exact details of what it does remain unclear. One thing I do know, it doesn't do the load-balancing trick.

    More on DataProtector 6.10

    We've had DP6.10 installed for several weeks now and have some experience with it. Yesterday I configured a Linux Installation Server so I can push agents out to Linux hosts without having to go through the truly horrendous install process that DP6.00 forced you to do when not using an Installation Server. This process taught me that DataProtector grew up in the land of UNIX, not Linux.

    One of the new features of DP6.10 is that they now have a method for pushing backup agents to Linux/HP-UX/Solaris hosts over SSH. This is very civilized of them. It uses public-key authentication and the keychain tool to make it workable.

    The DP6.00 method involved tools that make me cringe, like rlogin/rsh. These are just like telnet in that the username and password are transmitted over the wire in the clear. For several years now we've had a policy in place that states that protocols requiring cleartext transmission of credentials like this are not to be used. We are not alone in this. I am very happy HP managed to get DP updated to a 21st-century security posture.

    Last Friday we also pointed DP at one of our larger volumes on the 6-node cluster. Backup rates from that volume blew our socks off! It pulled data at about 1600MB/Minute (a hair under 27MB/Second). For comparison, the SDLT 320's native transfer rate (the drive we have in our tape library, which DP isn't attached to yet) is 16MB/Second. Even at the 1:1.2 to 1:1.4 compression ratios typical of this sort of file data, the backup is pulling data faster than the tape drive could write it.
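
    As a sanity check on that comparison, a quick throwaway Python sketch of the arithmetic, using only the numbers above:

    # Observed DP backup rate vs. the tape drive's effective (compressed) rate.
    backup_mb_per_sec = 1600 / 60                   # ~26.7 MB/s, "a hair under 27"
    tape_native_mb_per_sec = 16.0                   # native rate quoted above
    for ratio in (1.2, 1.4):                        # typical compression for this file data
        effective = tape_native_mb_per_sec * ratio  # 19.2 and 22.4 MB/s
        print("1:%.1f compression -> tape %.1f MB/s vs backup %.1f MB/s"
              % (ratio, effective, backup_mb_per_sec))

    Even at 1:1.4 the drive tops out around 22 MB/s, so the disk-based backup really is outrunning what the tape could absorb.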

    The old backup software didn't even come close to these speeds, typically running in the 400MB/Min range (7MB/Sec). The difference is that the old software is using straight up TSA, where DP is using an agent. This is the difference an agent makes!

    XKCD gets it (unsurprisingly)

    Today's XKCD:

    [comic: the crypto-nerd fantasy of cluster-cracking the encryption, versus the reality of just beating the passphrase out of its owner]

    As one crypto-wonk I spoke to...um...10 years ago said, this is called "rubber-hose cryptanalysis". Or put another way, the easiest way to crack crypto is to attack the end points. Don't waste your time brute-forcing the ciphertext; kidnap the person who owns the password and beat them until they tell you what it is. Or grab the cleartext with a screen-scraper when the recipient decrypts it. Or sniff the crypto-password with a key-logger when the data is enciphered. Or live-clone the box once the encrypted partition is mounted. Except for the beatings, US law enforcement has used all of these methods to circumvent encryption.

    It is for this reason that the UK has passed a law making it illegal to withhold crypto-passwords when demanded by law enforcement. Failure to reveal the passphrase will result in jail time, even if the crime being investigated carries a lower mandatory sentence. The cryptonerds that xkcd was lampooning have thought of this, which is where the concept of the duress key comes from: a key you give up under duress that, when used, will either destroy your data instead of revealing it or reveal an equivalent amount of innocent data.

    The problem with a duress key like this is that law enforcement NEVER works on the live data if they can at all get away with it. Working on live data taints evidence chains, which makes convictions harder. So you set up a duress key for your TrueCrypt partition, the UK police nab the machine and demand the password, you give the duress password, and it only scrubs the copy of the data they were working on. Now they know you lied to them, and you are guaranteed to be asked firmly for the real password, if not thrown into jail for hampering a police investigation.