December 2010 Archives

IPv6 in the home

Comcast, the United States' largest ISP, has a vested interest in IPv6. They've been posting their progress on a web-page:

For those wondering when they'll get a v6-capable ISP, and I know many of my readers are technically inclined enough to desire such, it is coming. I haven't dropped by the site in a while since it doesn't update much, but they did present at IETF recently and posted their slides. One slide-deck in particular is interesting.

Down on page 5:
* Delegated prefix is minimally a /64 by default for trial (LAN-side)
That is a subnet being allocated to customers, not a single IPv6 address. Clearly, Comcast is leaving it up to customers to figure out if they want to use some kind of address-translation or just use straight-up public IPv6 addresses on their home network. This kind of thing is exactly what the authors of IPv6 had in mind, and not coincidentally is one of the biggest critiques of IPv6... the role of NAT.

NAT has been a network security 'best practice' for years. While it's the firewall that provides the robust network security, hiding the IP-space behind the firewall makes planning an attack on that IP-space take longer, a fact that has been enshrined in the PCI-DSS standards.

IPv6 NAT exists now, something that wasn't true in the years right after IPv6 was ratified. The question remains: what addresses do you use to replace the obsolete-in-v6 RFC1918 addresses we all know and love? The answer is one of two types of addresses:

  • If your home network is completely flat (you only have one subnet), then Link Local addresses (fe80::/10) will work just fine. These addresses are explicitly non-routed, so a NAT gateway at the edge of the network is required.
  • If your home network has multiple subnets, Unique Local addresses (fc00::/7) are what you want. Unlike Link Local, ULAs are routeable. NAT is optional, but your home ISP is vanishingly unlikely to agree to route those addresses.
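For the curious, these scopes are easy to check mechanically. A quick sketch using Python's `ipaddress` module (my illustration, not anything from the Comcast slides; the sample addresses are just examples):

```python
import ipaddress

# The two scopes discussed above, per RFC 4291 (link-local) and RFC 4193 (ULA).
LINK_LOCAL = ipaddress.ip_network("fe80::/10")
UNIQUE_LOCAL = ipaddress.ip_network("fc00::/7")

def scope_of(addr: str) -> str:
    """Classify an IPv6 address as link-local, unique-local, or global."""
    ip = ipaddress.ip_address(addr)
    if ip in LINK_LOCAL:
        return "link-local"
    if ip in UNIQUE_LOCAL:
        return "unique-local"
    return "global"

print(scope_of("fe80::1"))            # link-local
print(scope_of("fd12:3456:789a::1"))  # unique-local (fd00::/8 in practice)
print(scope_of("2001:db8::1"))        # global (well, the documentation range)
```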
However, running without any kind of address translation is not as insane as it sounds. Keep in mind a few points.

  • Comcast is handing out a /64 subnet to you, so your attacker already knows what your IP space looks like.
  • A /64 provides a mind-bogglingly huge number of potential addresses: 2^64 of them! That's vastly, vastly larger than the entire 2^32 IPv4 address space. While the nature of IPv6 autoprovisioning reduces the number of addresses actually in use, scanning the space is still wholly infeasible.
  • Unless you set up your own domain to provide it, Comcast will not be providing forward or reverse DNS lookups to your /64-worth of IP addresses. This greatly reduces the ability of attackers to recon your network.
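To put a number on that second point, here's the back-of-the-envelope arithmetic (the probes-per-second rate is my assumption, picked to be generous to the attacker):

```python
# Back-of-the-envelope: how long would a brute-force sweep of one /64 take?
hosts_in_64 = 2 ** 64
hosts_in_v4 = 2 ** 32

probes_per_second = 1_000_000  # a generously fast scanner (assumption)
seconds = hosts_in_64 / probes_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"A /64 holds {hosts_in_64:,} addresses, "
      f"{hosts_in_64 // hosts_in_v4:,}x the entire IPv4 Internet.")
print(f"At {probes_per_second:,} probes/sec, a full sweep takes ~{years:,.0f} years.")
```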
Running without a firewall is still just as insane as it sounds, though. Happily, you can do firewalling without having to NAT.

Running 'in the clear' will be an absolute boon to server-in-the-home folks. Get a domain from your registrar-of-choice, set up your DNS records to point to the server on your home /64, and away you go. Want to play with multiple web-servers? No problem! Everyone can use port 80 on their very own IPv6 address, no need to monkey with :8080, :8008, and so on. Of course, terms-of-service violations may get this behavior whacked, but I can guarantee it'll happen regardless of what the TOS forbids.
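For example, pointing hostnames at servers in your delegated /64 is just a matter of AAAA records. A hypothetical BIND-style zone fragment (the hostnames and the 2001:db8:: documentation prefix are stand-ins for your real ones):

```
; Hypothetical zone fragment; substitute your real delegated /64 prefix.
www   IN  AAAA  2001:db8:0:1::80
blog  IN  AAAA  2001:db8:0:1::81
```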

Whole suites of protocols and applications just plain work better when there isn't a NAT gateway getting in the way. Peer-to-peer networks of all kinds are a LOT easier to maintain without NAT. We're still years and years from a fully featured pure-v6 Internet experience, but I like the way it looks from here.

A PowerShell tip

Recently I had the pleasure of attempting to use 'netsh' output in PowerShell. The output was fine; the hard part was getting the output. The netsh command behaved differently than other CLI utilities I've used, notably dsget. With dsget, this works:

$List = dsget user -name "$FName"
But with netsh, this very much didn't:

$List = netsh dhcp server \\dhcpsrv scope $ScopeID show clients 1
The failure was pretty clear: "$ScopeID" was not being interpreted before execution, so netsh was receiving the literal string "$ScopeID". In order to make that work, I had to call out by way of the Invoke-Expression cmdlet.

$List=invoke-expression "netsh dhcp server \\dhcpsrv scope $ScopeID show clients 1"

Invoke-Expression is pretty robust in that it'll do variable substitution on the quoted string before execution. Very handy in scripts!

Bad-for-you office treats

'Tis the season for home-made goodies to show up in the office break room.

Or used to.

Sometime between when I started (7 years ago) and now, the culture in this office has changed. Technical Services shares a floor with Administrative Computing, which is where most of ITS's programmers hang out. SysAdmins and programmers: you'd think this would be prime munching grounds. But you'd be wrong.

Recently I brought in some salted caramel puff corn (a.k.a. "crack" and just as bad for you as it sounds, recipe). Back in 2005 when I brought a batch in it didn't last until lunch. This time around I brought home leftovers. Other sugar-laden goodies have also gone from rapacious consumption to guilty sneaking. You know someone is feeling guilty about eating something when they cut a pre-cut piece in half and eat the half-piece.

I don't know what exactly has caused this sea-change in munchie preference, but it certainly has changed.

A thought on the season

'Tis the season for new software
fa la la la la, la la la la
Update we now to our displeasure
fa la la la la, la la la la

Gone is now our dream of sleeping
fa la la la la, la la la la
For if not done there will be weeping
fa la la la la, la la la la

Shopping for a datacenter

A nice check-list for things you want to have in your new datacenter was posted today on ServerFault. Some good things in there, and all in one list!

However, there is one thing that is not quite right.

"It should have Halon fire suppression, not sprinklers."
Actually, it should have both FM200 (or equivalent) AND sprinklers, at least according to most fire-codes.

Sprinklers in the datacenter? Yep. One of the last things I did at my old job (11/2003 to be exact) was help move into a newly built datacenter, and I had quite a shock when I saw sprinkler heads over the server racks. I asked my boss about it (she had been neck-deep in the building of the place), and she replied that yes, it was indeed fire-code, and that it was new since the last time she had helped build a datacenter in the '80s.

Building and fire codes are nigh impossible to get at in their entirety online; it seems they're put behind pay-walls, so I can't link directly to them. Considering which governmental agency was going to be putting machines in that room, I consider that fire-code ruling to be highly trustworthy. FM200 protects all that very expensive gear in the datacenter. The sprinklers protect the room and building.

Those sprinklers should be dry-pipe, pre-action sprinklers. No sense tempting fate more than required.

OpenSUSE and rolling updates

I somehow missed this when it first came out, but there is a move afoot to create a rolling-update version of OpenSUSE. The announcement is here:

In the last couple of days the repo has been created and opensuse-factory has been awash in requests to add packages to it.

What IS this thing, anyway?

It's like Factory, but with the stable branches of the packages in use. For instance, if OpenSUSE 11.4 releases and a month later Gnome 3.0 drops, 11.4 will still be on whatever version it shipped with, but OpenSUSE Tumbleweed (the name of this new rolling-updates version) will get it. The same applies to kernel versions. 11.4 will likely have 2.6.37 in it, but 2.6.38 will drop pretty soon after 11.4 releases.

Is this suitable for production?

Depends on how much testing you want in your packages. The order of SUSE versions from least-tested to most-tested is:

  1. Factory (breaks regularly!)
  2. Factory-Tested
  3. Tumbleweed
  4. OpenSUSE releases
  5. SLES releases (long term support contracts available)
Factory-Tested is also pretty new. It's a version of Factory that passes certain very specific tests, so you're much less likely to brick your system by using it instead of Factory itself. It will still have bugs, just not as many blockers.

There are some use-cases where I'd be leery of even the OpenSUSE releases, just from code-quality and support considerations, and for those Tumbleweed would be nearly certain to be rejected. And yet, if your use-case needs cutting-edge (not bleeding-edge) packages, Tumbleweed is the version for you.

Right now it looks like which packages get updated in Tumbleweed will be determined by the project maintainers. For the Gnome example, they have Devel and Stable branches in their repo, and it is the Stable branch that gets included in Tumbleweed. Find a bug? It'll get reported to the repo-maintainer for that package. It may get fixed quicker there, or not. Tumbleweed users will help make the OpenSUSE releases more stable by providing testing.

Personally, I'll be sticking with the Releases version on my work desktop, since I need to maintain some stability there. I just might go Tumbleweed on my home laptop, though. 

The ebb and flow of student life

Looking at our bandwidth chart for the last day I can really tell it's finals week.
See that trail-off about 1am last night (Sunday)? That normally starts earlier and bottoms out faster when students aren't up doing finals-related things. I know from watching printing-activity reports for overnights that for the first part of finals week our printing activity between 1:30am and 6am is markedly higher than at any other time during the quarter. Bandwidth usage also increases during this time as they take YouTube breaks and whatnot whilst typing madly. By Friday the chart should be a lot flatter, as students who don't have end-of-week finals uproot and leave for home mid-week.

Back on printing, our morning peak starts earlier during finals. During most of the quarter, usage starts rising about 6:30am and doesn't really get going until 7:30-8:00. This time of the quarter, the rise starts at 6am and is a lot busier, earlier. The steady drumbeat of mid-terms means that there are usually a couple of people pulling all-nighters starting roughly the 3rd week of classes, but finals really focus everyone.

Bootstrapped authentication

This weekend my father-in-law had his email account hijacked. While I've handled this kind of aftermath at work from time to time, it really focuses the mind when it happens to family. It was probably an Evil-AP attack sometime last week rather than phishing; he failed to be sufficiently alarmed when his email client warned about a bad SSL certificate. Happily there wasn't much email in the account (he's a POP3 user) for the Evil People to plunder. Even so, every time he took a walk this weekend someone (many someones) would ask him about his trip to Cyprus and express mock gratitude that he made it home safe. Jokers, the lot of 'em.

Happily for my sleep schedule, he learned of it during a time of day when the only reason I should be getting phone calls is a death in the family or something at work having gone fatally grink. Which means he called his email support people first, and not me.

Even so, one of the things I told him, before I learned how empty his mailbox was, is that control of an email account, especially a private one, is a great way to bootstrap into ecommerce accounts. General internet consensus seems to have arrived at the idea that your email address should be your username on most sites. Most non-bank password-reset processes involve an email sent to that very account. Add in 3+ years' worth of payment receipts, and the Evil People have a nice list of ecommerce sites where that email address can likely be used to gain access. Many sites will send you the actual password in that email; others will send you a link that'll force a password change but still let you in. Either way, they're in.

Not in my father-in-law's case, big empty mailbox, but still.

It's for this reason that many of my friends have a specific email account just for paying for things on the Internet. It only gets checked via webmail, and infrequently. Therefore, that account has a smaller attack surface.

Which is to say that email-as-proxy-password is a crappy authentication environment. All three of the big email providers offer a way to use their authentication service itself, not just possession of an account there, as an actual auth service for other sites. For Google and Yahoo! it's OpenID; for Microsoft it's LiveID. Microsoft has had Passport for ages and even actively promoted it, but it still failed to gain much of a foothold. It's these sorts of systems that could break us out of the bad-password-reset-process trap!

ahm, no.

In this case the attacker actually had a legitimate login to one of the big three email account providers. Once the attacker had that, they could quite legitimately leverage it to log in to any OpenID/LiveID site without having to go through a password-reset process. I'm pretty sure this is why most sites that handle money aren't leveraging these systems.

Evil-AP and phishing allow attackers to use a legitimate password to log in to accounts. The only real defense is some kind of two-factor security. Two-factor can work fine for web logins (banks do it all the time), but POP and IMAP don't have native hooks that support it. Both protocols have Kerberized versions that can be used, and the Kerberos system can require two-factor; but this is a usability nightmare and will never be widely adopted. Alternately, the SSL wrapper around the POP/IMAP service can require a Client Certificate before it even establishes the POP/IMAP session, but Client Certificates have never seen wide adoption, and mail-client support is spotty at best. Also, certificate distribution and management is another usability nightmare.
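To illustrate that last option, the "no certificate, no session" policy is only a few lines in, say, Python's `ssl` module. This is a sketch, not a production mail wrapper; the certificate file names are hypothetical, so the loading lines are left commented out:

```python
import ssl

# Server-side TLS context that demands a client certificate before the
# POP3/IMAP conversation can even begin.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED  # reject clients with no certificate

# Hypothetical files: the server's own cert/key, and the private CA that
# signed the mail clients' certificates.
# context.load_cert_chain("pop3s-server.pem")
# context.load_verify_locations("homelab-ca.pem")

# Wrapping the listening socket with this context means clients without a
# valid certificate never get as far as the POP/IMAP banner.
print(context.verify_mode == ssl.CERT_REQUIRED)
```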

What could be done is for these sorts of per-site login engines to also ask for a cell number, which would only be used for notification of password reset/lockout events. It provides an out-of-band notification method. The downside of that method is that a large majority of people will leave the field blank, from the very real fear that entering a number would sign them up for SMS-spam. A more acceptable way would be an alternate email account that would also receive reset notices.

Usability makes this a tricky problem to solve.

Usage statistics

There is a not-at-all-surprising disconnect between what Google Analytics reports for this blog and what logfile analysis reports. In light of the FTC's push for an "opt out" button for tracking, I'm guessing the JavaScript method of website tracking is going to become less effective.

Operating system:

[charts: operating-system breakdown, Google Analytics vs. log analysis]

Interestingly, log analysis also breaks down the OS versions in use. I'm happy to note that the large majority of the Linux users are on SUSE variants. XP users still outnumber Vista/Win7 users.


[charts: browser breakdown (Internet Explorer, etc.), Google Analytics vs. log analysis]
The other/unknown category is likely the log-analysis engine's inability to figure out some agent strings; at a guess, it's really under-reporting all the Chrome users out there. Even so, there are significant differences between the two. To me this looks like Firefox users are much more likely to be using NoScript.

And finally, once browsers start scrambling the User Agent string, even that will not be useful for this kind of tracking.