March 2010 Archives

Know your I/O

We recently had one of our student-workers on the ADMCS helpdesk up and graduate. On his last day, he spent a good 30 minutes pumping me for storage information. We're a teaching institution, and I don't get asked that often, so I didn't mind. It did, however, get me thinking about how storage is managed. Now, there are better books (and blogs) on this than what I'm writing here, but this is how I think of it.

Know your I/O: Access Patterns

When planning the storage for a system, the one thing you need to know above all else is how that data is going to be accessed. This isn't WebDAV vs. SMB; this is more storage-specific. I'm talking about 95% reads / 5% writes, 1.2Gb/minute average transfer, highly latency sensitive. That kind of thing. You can make some assumptions based on the applications that'll be accessing the storage, but if you really need to know, the only way to find out is to measure.
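Measuring doesn't have to be elaborate, either. On a Linux box with the sysstat package installed, something like the following gives you the read/write mix, request sizes, and latencies over a day; the device name and the awk column positions are assumptions to check against your own iostat output:

    # Extended per-device stats, one sample a minute for 24 hours
    iostat -xk 60 1440 > /var/tmp/io-baseline.log

    # Rough read/write ratio for sda: sum the r/s ($4) and w/s ($5) columns
    awk '/^sda/ { r += $4; w += $5 }
         END { if (r + w > 0) printf "reads %.0f%%  writes %.0f%%\n", 100*r/(r+w), 100*w/(r+w) }' \
        /var/tmp/io-baseline.log

A day of samples like that answers the read/write and latency questions, and peak loading shows up on its own as long as the sampling window covers the busy periods.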


Once you know how that data is going to be accessed, you can build or provision its storage accordingly. How likely the dataset is to grow is also something you need to know, but that's a luxury we often don't get. And for the love of performance metrics, don't forget peak loading and behavior under fault conditions.


Here are several areas to be thinking of when looking at a storage request.

Spying on SSL

ArsTechnica had a nice write-up regarding a recently uncovered hardware device that facilitates man-in-the-middle attacks on SSL. The fact that this is possible is nothing new. The fact that it now exists in hardware seemingly is.

While the article focuses on the government spying angle, this exact same thing applies to corporate spying. For the sake of example, presume I'm working in an identical role at EvilCorp. EvilCorp does what quite a lot of corporate America does and restricts employee access to the internet. They take it a step further and attempt to stop 'information leaks' of proprietary documents. While systems to do this for SMTP-based email are commonly available, blocking webmail access is another issue. One option is to subscribe to a webmail block-list that updates your filtering appliance with sites to block users from. Another option is to allow them to access it, but make sure they're not selling you out to MoreEvilCorp.

To do this, you first need a mandatory HTTP proxy. Dead easy to implement in a modern network. Second, you need access to a Certificate Authority trusted by your peons. If you're running Active Directory (and really, what self-respecting EvilCorp wouldn't?) then you have a trusted CA built into your infrastructure. Third, you need a software package (or maybe a, say, hardware appliance) that'll generate a certificate for gmail.com signed by your own CA, talk SSL to the client, and then create an SSL session with the real gmail.com, allowing you to sniff their personal email free of that dreadful encryption.
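As a sketch, the certificate-forging step is about two openssl commands; the file names here are placeholders, and a real appliance would mint these on the fly for every site visited:

    # Make a key and a signing request claiming to be mail.google.com
    openssl req -new -nodes -newkey rsa:2048 \
      -keyout fake-gmail.key -out fake-gmail.csr \
      -subj "/CN=mail.google.com"

    # Sign it with the corporate CA every domain-joined desktop already trusts
    openssl x509 -req -in fake-gmail.csr \
      -CA corp-ca.crt -CAkey corp-ca.key -CAcreateserial \
      -days 365 -out fake-gmail.crt

Because the browser already trusts corp-ca.crt, the forged certificate throws no warning at all.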

This appliance sounds like an all-in-one box designed to do exactly what a lot of companies would really like to start doing. And what's good for corporate spying is good for the spooks (for whom it is more likely to be illegal).

New Laptop

I've been looking for a new laptop. I finally found one, Dell had a sale on their Studio 15 line. This is a Core i5 laptop with all the niftyness I wanted. I also purchased a new 320GB hard-drive for it. Why? So I could put the Dell supplied one, with its pre-built Win7 install, into a baggie for later use if I so chose, and so I could get openSUSE onto it without mucking about with resizing partitions and all that crapola.

I did what I could to make sure the components were Linux compatible (Intel wireless not Dell, that kind of thing), but some things just don't work out. This is a brand new laptop with a brand new processor/chipset/GPU architecture, so I planned on having at least one thing require several hours of hacking to get working. This is the price you pay for desktop-linux on brand spanking new hardware. I, at least, am willing to pay it.

And pay I am. I installed openSUSE simply enough, or at least it started that way. It got to the first reboot and gave me a black screen of nothingness. Watching POST, it was pretty clearly a kernel video-handling problem. Some quick googling identified the problem:

openSUSE 11.2 uses the 2.6.31 kernel. This laptop uses an Intel 4500MHD GPU, support for which was introduced in 2.6.32 and greatly refined in 2.6.33. What's more, it uses Kernel Mode Setting in the Direct Rendering Manager, support for which was also introduced in 2.6.32. All this means that 2.6.31 simply can't drive this GPU at anything like reasonable speeds.

OpenSUSE 11.3 (currently in a very buggy Milestone 3 release, soon to be M4) has a 2.6.33 series kernel. But I don't want to be buggy. So...

Time to compile a kernel!

Because I've done it before (a LOT), kernel compiles do not scare me. They take time, and generally require multiple runs to get right, so you just have to plan for that. So I booted to the openSUSE 11.2 Rescue System, followed these instructions, and got into my (still half-installed) file-system. I plugged it into wired ethernet, because that's hella easier to set up from the command line, and used YaST to grab the kernel-dev environment. Then I downloaded 2.6.33 from kernel.org. I had to grab a /proc/config.gz from a working x86_64 system, so I pulled the one from my 11.2 install here at work, threw it into the kernel-source directory, ran 'make oldconfig', and answered a bajillion questions. Then: make bzImage; make modules; make modules_install; make install; mkinitrd -k vmlinuz-2.6.33 -i initrd-2.6.33; a bit of YaST GRUB work to make certain GRUB was set up right; and reboot.
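For future reference, the whole thing boils down to roughly this sequence; the paths and file names are from memory, so treat them as approximations rather than a recipe:

    # Chrooted into the half-installed 11.2 system from the Rescue System,
    # wired network up, kernel-dev packages installed via YaST
    cd /usr/src
    tar xjf linux-2.6.33.tar.bz2
    cd linux-2.6.33

    # Seed the config from a known-good x86_64 install's /proc/config.gz,
    # then answer the questions about everything new since 2.6.31
    zcat config.gz > .config
    make oldconfig

    # Build, install modules and kernel, generate the initrd, then fix up GRUB
    make bzImage && make modules && make modules_install && make install
    mkinitrd -k vmlinuz-2.6.33 -i initrd-2.6.33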

Most of the way there. I had to add this line to /etc/modprobe.d/99-local:

options i915 modeset=1

Which got me enough graphics to finish the openSUSE install and get to a desktop. I don't yet know how stable it is; I haven't had time to battle-test it. I probably need updated X.org software for full stability. I did get it up long enough this morning to find out that the wireless driver needs attention; dmesg suggested it had trouble loading firmware. So that's tonight's task.

Update: Getting the wireless to work involved downloading firmware for the 6000 from here, dropping the files into /lib/firmware, and rebooting. Dead easy. Now, Suspend doesn't work for some reason. That might be intractable, though.
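For the record, the firmware step was roughly this; the tarball and .ucode file names are from memory, so verify them against what Intel actually ships for the 6000 series:

    # Unpack the iwlwifi-6000 firmware bundle and drop the microcode where
    # the kernel looks for it, then reboot so the driver can load it
    tar xzf iwlwifi-6000-ucode-*.tgz
    cp iwlwifi-6000-ucode-*/*.ucode /lib/firmware/
    reboot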

Trends in racks

As alluded to in my last post, we're doing a bit of cleaning. And this cleaning has brought me into contact with some 10 year old stuff. As always when doing cleaning, you run into stuff that makes you go, "Oh yeah! That stuff!" When cleaning a filing cabinet you end up reading a bunch of stuff that you haven't laid eyes on in ages. When doing racks, you (or at least I) ponder how things have changed.

As I said in the last post, our racks don't have enough space between the back rail and the door. This was a common config in the 1998-2000 timeframe, but the industry has moved on and the need for a couple more inches of back-of-rack space has been built into all modern racks. Another two inches would do our racks a world of good, four would be heaven.

As for power handling, we bought a pallet of in-rack power-strips similar to these from APC, only ours have 6 outlets, not 10 (one source of irritation). What I'd like to use is a certain kind of 0U unit that mounts in the rear of the rack, similar to these. Those 6-outlet strips were on the way out back in the day, but we bought a decade's supply, so we're still using them.

These racks were designed with top-venting in mind. Back when they were purchased, you could still buy top-of-rack fan trays and 5% perforated back doors in order to implement a bottom-to-top cooling strategy. That was fine when we could only fit four ML530s into a rack. It is very not fine with modern servers that put off 300-500 watts of heat at any given moment and can be crammed in at 5x the density. We've already done a lot to fix this problem, including getting some perforated front and back doors, and consolidating our servers down to a lot fewer racks so we didn't have to buy as many doors. It's also very likely we'll be getting some HVAC re-engineering attention, which should help our overall airflow problem.

These racks for the most part have a single 4-port KVM switch and a CRT/Keyboard/Mouse tray in them. We have three racks with larger KVMs in them that were added after the initial build-out. Back in the day, when the standard server was a 9U HP ML530 and we could only fit four servers to a rack and flat panel monitors were still four figures, this was just fine. However, those CRTs consume a LOT of rack space. I'd be more rabid about that, but our cooling and space issues mean those kinds of densities aren't required; we just need to put a bit of clear plexiglass in to block back-to-front recirculation. We've just replaced a bunch of those CRTs with LCDs that were just rendered redundant in desktop-land, which has improved things a titch.

Ripping out 9 servers, all of them old enough to be from the era when servers were 7U or larger and had 3 power supplies, has given me a biiiiig pile o' power cables. Every single one of them is 10 feet long. A 10' cable is long enough to run through the cable-management arm of a server at the very bottom of the rack and still reach a power strip mounted at the very top. Since we mount our power-strips towards the middle of our racks, we don't need 10' cables. Heck, for the most part 6' cables are perfectly fine, with a few spots requiring 7' cables. This is something the server manufacturers have figured out, since they're all now shipping 6-footers instead of the 10-footers these ML530s came with.

One structural problem that isn't easy to fix without rebuilding the room is the location of our HVAC units. They're at the edges of the room, which was the standard for very many years. The modern high-heat datacenter has caused the industry to figure out that placing these units in the middle of the room is more effective; even better is having a physically separate hot-air plenum. If we do manage to upgrade our UPS, which would allow us to add more heat-producers to the room, we'll have to pay much more attention to hot-spots than we do now. Moving HVAC units is not even close to cheap, and is highly disruptive.

Another thing I'd like to start doing is moving away from ye ole standard NEMA 5-15 style outlets on our power-strips, and onto IEC style outlets. I got to use these at my last job when we finished building a new datacenter, and they're really nice to work with. They don't project as much as 5-15's do, they're a lot easier to lock into place, and you can fit a lot more outlets per linear foot. This isn't going to happen any time soon, since it'll require buying new outlet-strips, actually purchasing power-cables, and re-running the power for existing servers.

A newbie mistake

Today I'm removing a series of servers from our racks. These are really old servers. As in, I wouldn't be surprised if one of them is 11 years old kind of old. There are a variety of reasons why they're still in our racks:
  • We have a hard time stopping use of old servers. There is a constant need for development and testing servers. Old crap serves this need well.
  • Due to our blade consolidation project several years ago, we had more rack-space than servers, so we just didn't remove stuff.
  • Due to our VMWare consolidation project (ongoing) we have more rack-space than we know what to do with, so old servers can sit in the racks for years before we need the space.
This particular 11-year-old server is old enough that it moved down to this building from the Bond Hall datacenter when Technical Services was exiled off campus in the 1999-2001 timeframe. While removing the rack rails for this server, I found a rather strange configuration.
[Image: BadNut.jpg]
This is looking down the inside of the rack's vertical post. Look closely at that clip-nut. Notice anything weird?

The clips on that nut are on the wrong side. They're facing away from the rack. Attempting to unscrew this post caused the nut to spin around and around, and required me to hold the nut with my fingers.

What's more, the order is wrong. It should be screw, server rail, rack post, nut. Instead, it was screw, rack post, server rail, nut. Happily the screw heads were large enough they didn't fit through the rack post holes.

There were a few other rails mounted like that one. One memorable rail was screw, rack post, nut, server rail. That only worked because the rail was threaded, but at least the nut was mounted correctly. Nuts that spin freely are not good.

As near as I can figure, this is what happens when someone moves from racks with round holes to racks with square holes without having a manual. This was several years before my time, so I have no idea who did it. I have suspicions, but I'm not going to bring it up. It was 10 years ago, and whoever it was has learned better since then.

More broadly, we need new racks. The ones we have don't have nearly enough back-of-rack space for everything that needs to cram in back there. Unfortunately, our large surplus of racks means that convincing the powers that be that we need new ones is very, very hard. Also, the power-strips we have for these racks are sooo 1990's. These racks were not designed for modern densities of 1-3U servers; they were designed for high densities of 5-9U servers. Because of their lack of back-of-rack space, we can't use those nifty modern power-strips that give you a display of the load on the strip.

When I went to put in the rails for the new tape library (YAY!), I found that those rails and our racks aren't really compatible. For whatever reason, if I put a rack nut in the hole, the rail's holes won't align with the nut. Since the rail's holes are threaded, there is very little tolerance for that 2mm difference of opinion about where the center of the rack-nut needs to be. I managed to hack something together that'll keep the rails in, but it's still just wrong. You work with what you've got.

Centralized IT

I've had quite a bit of experience with the process of centralizing IT. At my last job I was at ground zero: I was on the committee charged with rationalizing an IT job-family structure that was grounded in the early 1980's (key clue: the phrase "electronic data processing" was slathered across many job titles, a phrase not at all in vogue in the 1990's). That particular consolidation was driven by a directive from on high, above the CIO. So, as it were, it happened in spite of the grumbling.

WWU has gone through some of its own consolidations, but there are natural barriers to complete consolidation in the Higher Ed market; I'll get to those in a bit. The one thing acting as a serious barrier to consolidation in any organization is departments that are large enough to support their own multi-person IT shops. Departments with one or two people effectively doing the full IT stack (stand-alone sysadmins who also do desktop support, database maintenance, to-the-desk network wiring, and maybe a bit of app-dev on the side) are the most vulnerable to being consolidated into the central Borg.

Some departments are all too happy to join the central IT infrastructure, as they see it as a way to shed costs onto another business unit. Others are happy because their own IT people are so overworked that getting them help looks like a cost-free mercy; or, put another way, agreeing to consolidation is seen as a cost-free way to increase IT investment. Still others are happy to join because they want some nifty new technology their stick-in-the-mud IT people keep saying "no" to, and they view the central Borg as a way to get it.

The big reason departments don't want their IT people consolidated away from them is personalized service. These are people who know the business intimately, something those central-office folk don't. The cost of maintaining an independent IT infrastructure is seen as a perfectly valid business investment in operational efficiency. Any centralization initiative will have to deal with this concern.

The other big reason shows up less often, but is very hard to overcome without marching orders delivered from On High: distrust of central IT specifically. If the business unit that contains central IT is seen as less competent than the local IT people, the department will not consent to centralization. If the people in central IT are collectively viewed as a bunch of idiots, or run by idiots, the only way a department is centralizing is if a metaphorical gun is held to its head.

My last job handled all of the above and eventually came to an agreement. First and foremost, it was a fiat from On High that IT centralization would happen. All IT job titles started being paid out of the same budget. We then spent the next four years hammering out the management structure, which meant that for a long time a whole bunch of people had their salary paid by people with 0% influence on their work direction.

Many departments gleefully joined the central infrastructure, driven in large part by their own IT people. They'd been overworked, you see, and the idea of gaining access to a much wider talent pool, and a significantly deeper one as well, was hard to not take advantage of. These were the departments with 1-3 IT people. In almost every case the local IT people stayed in their areas as the local IT contact, which maintained the local knowledge they'd developed over the years.

There was one small department that was a holdout until the very end. An attempt to merge some 5 years earlier had gone horribly wrong, and institutional memory remembered that very clearly. It wasn't until that department got a new director that an agreement was reached. The one IT guy up there stayed up there after the merger and stopped doing server and desktop support in favor of department-specific app-dev work, what he was hired to do in the first place as it happened.

Then the arm-wrestling over the bigger departments took place. For the most part they kept near-complete control over their own IT staffs, but their top-level IT managers were regularly hauled back to the home IT office for 'management team meetings'. This ended up being a good move, since it reduced the barriers to communication at the very top level and ultimately led to some better efficiencies overall, especially in the helpdesk area as staff started to move between stacks after a while. Also, the departments that had been deeply skeptical of this whole centralized-IT thing started working with other IT managers and getting their concerns heard, which reduced some of the inherent distrust.

With Higher Ed, there is an additional factor or two that my previous job didn't face. First, the historic independence of individual Colleges. Second, universities are generally a lot less command-and-control than their .com or even .gov brethren. This means that centralization relies far more on direct diplomacy between IT business units than on direct commands from on high. Distrust in this environment is much harder to overcome, as coercion is not a readily available option.

Back in the day, WWU had 7 separate NDS trees. 7. That's a lot. Obviously, there wasn't much in the way of cross-departmental access of data. Over the course of around 5 years we consolidated down to a single 'WWU' NDS tree. Some departments happily stopped spending IT time on account-maintenance tasks and let central IT do it all. Some departments gave up their servers altogether. Time passed, and still more areas decided they really didn't need to bother keeping local replicas, and let central IT handle that problem.

In the end, handling IT in Higher Ed means dealing with a more heterogeneous environment than is otherwise cost-effective. I've mentioned before how network management on Higher Ed networks resembles ISPs more than it does corporate networks, and that unfortunately applies to things like server and storage purchases. Now that we're in the process of migrating off of NetWare and onto Windows, it means we're now in the process of wrangling over rules governing Active Directory management.

We wrangled over NDS control back in the 90's and early 00's, and now it's Microsoft's turn. As with the last round of NDS wrangling, some departments have gleefully turned control of their area (GPOs and file-server management, specifically) over to us in ITS. Others, specifically one with a large local IT presence, are really holding out for complete control of their area. They're clearly angling to just use us as an authentication provider and do the rest themselves, something that... well... negotiations are ongoing.

My crystal ball says we have somewhere between 5 and 10 years before the next wave of 'directory' upgrades forces another consolidation. That consolidation just might involve consolidating with a State agency of some kind. Perhaps the State will force us to use a directory rooted in the wa.gov DNS domain (wwu.univ.wa.gov perhaps), and our auth servers will be based in Olympia rather than on our local network. Don't know. What is true is that we'll be going through this again, probably within the next decade.

The Novell purchase offer

I haven't mentioned the purchase proposal from Elliott Associates before now, in large part because coverage is a lot better elsewhere. For those of you who haven't been paying attention: Elliott Associates, an investment fund, offered Novell a buy-out at $5.75/share. This is not the IBM purchase everyone has been expecting for the last 14 years. Until today, people had been theorizing that the motivation was to sell off the profitable bits and quietly phase out the non-profitable bits while pocketing Novell's large cash stash.

According to PR Newswire, Elliott has no plans to slice-n-dice and intends to own the company. They can still do a lot within that statement, like kill products surviving more on nostalgia and a historical userbase than on profitability. Small encouragement, at least.

The last provisions before we sail

When we got warning that the Governor would be putting a draconian spending freeze into place, our supreme masters informed us we had to spend a certain amount of money now or we would lose it. Additionally, we were told that funds in the next 12-24 months would be downright scarce, so order now while we still could. I've talked about this in a few previous posts, but the orders have started to arrive.
[Image: HP-Boxes.png]

We have a nice pile of HP boxes in the data-center right now, and they haven't all arrived yet. Most of the boxes in this picture are dedicated to storage in one way or another.

We haven't gotten the box with 200 LTO4 tapes in it yet, which should be a nice, big box. We did get the box with the labels for the tapes, though; that's the little one in the foreground. That box contained two folders of tape bar-codes, so it was w-a-y overkill. It also looks likely that HP managed to not ship us a monster box with 20+ individually boxed hard-drives! Talk about over-packaging, Batman.

We're not touching these boxes until they're all here and we're done with the Spring Break madness. So once the quarter starts (3/30) we'll have time to do things like install the new tape library, add a few shelves to our EVA4400, figure out what we're doing with a storage server we're building (OpenNAS is a strong contender), and integrate one or two new servers into our ESX cluster while we're at it.

And then... we wait. Perhaps until 2012.

Tragic password policies

I just completed an order with Newegg for some personal computing equipment. That part was OK. What wasn't OK was the "Verified by Visa" thingy that popped up during the ordering process. My primary credit cards aren't Visa, so I hadn't seen it before, despite shopping on sites with the Verified by Visa logo on 'em. Since I hadn't used it before, I had to set the durned thing up. Which meant picking a password.

My jaw dropped.

The posted 'password policy' stated 6-8 characters. And no matter what I threw at it, if I used my shift key it wouldn't take the password. I don't know about you, but complex password policies have been around long enough that my fingers automatically go for the shift key when entering passwords. NOT using it took mental effort. In fact, the password I ended up with is markedly less secure than the one I use for throw-away accounts on web-sites I don't care about.
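To put rough numbers on 'markedly less secure', compare the keyspace of an 8-character password drawn from lowercase letters and digits against one drawn from the full 94 printable ASCII characters; the character-set sizes are my assumption about what the form was actually accepting:

    # 8 characters from a 36-symbol set vs. a 94-symbol set
    echo $(( 36**8 ))   # 2,821,109,907,456      (~2.8 trillion)
    echo $(( 94**8 ))   # 6,095,689,385,410,816  (~6.1 quadrillion, over 2000x larger)

And that's before accounting for the policy capping the length at 8.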

That is not a way to run a bank.

I don't know what "Verified by Visa" really provides, but whatever it is, password security isn't it.

They've got a point

Yesterday on El Reg was a nice article about the sorry state of the stand-alone mail client. WebMail has captured what little email people do while not at work, and the in-application messaging features of certain large social networking sites are supplying most of the rest of the private asynchronous chat messaging people do. And yes, I'm seeing a lot less non-mail-list traffic in my private mailboxes than I was 10 years ago (of course, 10 years ago I was also still on Usenet. For the articles. Really!). Of the messages that aren't list traffic, the rest are the usual assortment of semi-legit come-ons and a very large percentage of status-update messages from various social networking sites.

Anyway, stand-alone email is not getting the developer attention it once was. The Register article pointed out on page 2 that Opera has a surprisingly good mail client hiding in it. And they're right, it's pretty darned good. I'm using it at home in preference to Thunderbird even. I keep Thunderbird around for those exceedingly rare cases when I need either GPG or S/MIME for something, a feature Opera hasn't gotten around to dealing with yet and probably never will. But for simple email management, the mail client in Opera really is quite good.
Highlight Week: Linux hacking from way back

I'm going over some of my older posts and am reposting some of the good stuff that's still relevant. I've been at this a while, so there is a good week's worth of good essays hiding in the archives.
Back in 2005 I posted a story of my first bit of serious Linux hacking. This was back in college, and involved a 1.2.x era kernel. I had a 1.2GB drive, but somehow both DOS and Linux were ignoring the partition table. I figured it out, and this is how I did it.

Linux hacking from way back.

Looking back on it, this would have been a prime opportunity for me to turn into a kernel-hacker. All of my C training was fresh, and back then the barrier to entry for kernel-hackers was a lot lower. But, I didn't.

This post is from back before Blogger supported comments or labels! Old times, man.

3rd party application headaches

A while back we managed to push through some new purchasing rules that required IT review of any IT technology purchases. This is needed, since end-user departments haven't the first clue what'll work with our existing infrastructure, and it helps us advise them of complications. For instance, if a product requires PHP on IIS for some reason, we really want to be able to let them know before they purchase that doing so will require a server purchase as well since we don't support that environment currently.

Unfortunately, a small number of things still slip through. Perhaps we didn't read the manuals enough. Perhaps a high enough manager expended sufficient political capital to Make It So. But complications can arise when we go to make the new thingy work.

A case in point:

For the last two weeks I've been attempting to get a certain package with email capabilities up and running. It has to fit within our Exchange system, which is a rather common environment. What isn't so common, it seems, is our insistence on secure protocols for authentication. While Exchange 2007 is perfectly willing to support naked POP3 and even naked SMTP-AUTH, we are not so forgiving. We wisely have a security standard in place that says all authentication traffic must be encrypted, and this prevents us from running POP3 and SMTP in a way that allows passwords in the clear.

This package has support for exactly one SSLed service: POP3-SSL. We don't support POP3, since our users were forever screwing themselves thanks to the default of "delete on retrieval" in most mail clients, which kind of pissed them off when they got to the office the next morning and their mailbox was empty.

Thanks to stunnel, I was able to tunnel the application's unencrypted IMAP to Exchange's IMAP-SSL port, so at least that channel is working.

Right now I'm trying to convince stunnel and the application to work together to get SMTP-TLS working. Sadly for me, I have to wait a couple of hours before the app attempts an SMTP check for me to see if it works.
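For the curious, the stunnel side of this is a client-mode config along these lines; the hostnames and local ports are placeholders rather than our real ones, and the smtp section is the part still being fought with:

    ; Accept cleartext from the app locally, speak SSL/TLS to Exchange
    client = yes

    [imap]
    accept  = 127.0.0.1:143
    connect = mail.example.edu:993

    [smtp]
    accept   = 127.0.0.1:25
    connect  = mail.example.edu:587
    protocol = smtp    ; have stunnel negotiate STARTTLS with the real server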

On the 'up' side, we're charging this department by the hour to get this set up. So the labor bill on this will be fairly high.

Highlight Week: Explaining LDAP

I'm going over some of my older posts and am reposting some of the good stuff that's still relevant. I've been at this a while, so there is a good week's worth of good essays hiding in the archives.
This one got some inbound links and attracted several readers. The question was asked on ServerFault, but I had to echo my reply on my blog since it was too juicy. I've been working with directories since 1996, when I first ran into Novell NDS, so the concepts behind LDAP are engraved on my bones, it seems like. So explaining it to someone else is an effort in... restraint of information. They don't need to know every little detail, just enough to get the concepts.

I still get wordy.

Explaining LDAP.

A different kind of dedication

Ars Technica has a nice article up about the technology and science behind Air Traffic Control:

http://arstechnica.com/science/news/2010/03/the-science-and-technology-of-air-traffic-control.ars

One of my friends actually is an air-traffic-controller. She works in one of the Area Control Centers mentioned in the article.

What the article doesn't go into at all is the human side of ATC work. These are people who are responsible for not killing people. Their training regimen is ridiculously stressful and includes hazing, for a very good reason. Failures result in death. Big failures result in mass death. Stress is reasonable. Much as we joke about SysAdmin devotion to duty, we merely hold a candle to the sense of duty of ATC controllers. Failures of any kind are incidents that require investigation. The control channel (radio) is recorded and is a public record subject to FOIA and subpoena, so the whole world can hear you say the wrong thing right before a plane did something destructive involving loss of life. The only way to survive stress like that is to have a sense of duty beyond all otherwise reasonable extents (or, less optimally, a god-like ego).

Some trivia I've picked up over the years:

  • If an airport is not inside a TRACON but has a tower, the ACC handles approach and tower handles landing.
  • If an airport is not inside a TRACON and also doesn't have a tower (middle of nowhere kind of airport), ACC handles both approach and landing. This becomes a major headache during Pheasant season in the Dakotas, when private aircraft from everywhere want to land at any available airstrip they can.
  • Pilots not doing what they are told are a source of major on-the-job stress for controllers.
  • ACCs also monitor and guide Military flights to a point.
  • ACCs have Military liaisons for national security reasons.
  • There are two major classes of aircraft flight rules, which involve vastly different routing rules: Instrument Flight Rules (IFR), and Visual Flight Rules (VFR). IFR planes are full members of the ATC system and have transponders, the right kind of radios, and the whole shtick. VFR planes are general aviation craft that ATC has only limited interaction with.
  • ATC on our southern border is a lot more exciting than on our northern border, thanks to drug-runners. I've heard rumors of them using UAVs for drug-drops, for instance. The F-16s get more work down there.
  • During thunderstorm season, when you can have a solid front of thunderstorms from Bismarck, ND to Tulsa, OK, aircraft get delayed because you can't fly planes through thunderstorms. North Dakota will get a LOT more traffic that way, as planes divert north of the storm systems.
  • Volcanic events in Alaska can mess up ATC in the northern US due to ash concentrations. Ash, pulverized rock, chews up jet engines and kills planes.

If you're a pilot you probably know all this. But many of us don't. ATC: I don't want that job, but I'm glad other people can and do.

Highlight Week: The OES Benchmark

I'm going over some of my older posts and am reposting some of the good stuff that's still relevant. I've been at this a while, so there is a good week's worth of good essays hiding in the archives.
Shortly after the release of Novell OES SP1, the version of Open Enterprise Server based on SUSE Linux 9, I ran a benchmark series to determine just how it would hold up in our environment. The results were pretty clear: not that good. I re-ran some of the tests with later versions and it got a lot better. SP2 improved things significantly, and things have gotten better still with OES2 (based on SLES10).

The long and short of it is that the 32-bit Linux kernel has some design constraints that simply prevented Novell from building a NetWare-equivalent system when it came to NCP performance. The 64-bit kernel that came with OES2 helped a lot. So did more intelligent assumptions about usage.

Our big problem was concurrency. Our cluster nodes regularly ran between 2000-6000 concurrent connections. Anyway, for details about what I found, read the series:

Benchmark Results Summary

It has pictures. Oooo!
Highlight Week: Overthrowing Blackboard

I'm going over some of my older posts and am reposting some of the good stuff that's still relevant. I've been at this a while, so there is a good week's worth of good essays hiding in the archives.
In 2008 the Western Front, our Campus newspaper, ran an article about the efforts of the Computer Science department to attempt to manipulate Moodle into something that could replace Blackboard. This sparked an essay on my part, and is the closest I've come to actual political advocacy in this blog. I try to avoid that, since it can get you canned. But it was on technical merits, so I felt somewhat safe.

For those of you who've never worked with education in a technical sense, Blackboard is a classroom groupware product. It has all the things you'd expect, like whiteboards, homework and testing tools, and the all-important grade-book. Blackboard also holds all the right patents, so it's the only really serious commercial classroom groupware product out there, for much the same reason that nobody is a serious direct for-profit competitor to Adobe Photoshop. A lot of cash-strapped .edus out there (and there are a lot) have striven to replace the very expensive Blackboard with the very open-source Moodle.

This essay turned into a good illumination of the hurdles facing our conversion from a closed-source critical-path enterprise application to an open-source critical-path enterprise application. Some of the things in the article have changed (we're running MySQL in a couple of places, and I know 'enterprise' support is available for Moodle now), but the main intent is still valid.

Overthrowing Blackboard
Highlight Week: Encryption and Key Demands

I'm going over some of my older posts and am reposting some of the good stuff that's still relevant. I've been at this a while, so there is a good week's worth of good essays hiding in the archives.
This post from 2007 has been a top search-engine magnet, even though I'm not the one who coined the term that's getting the hits: Encryption and Key Demands

The situation I talked about in 2007 made XKCD in 2009: XKCD Gets It (unsurprisingly).

And the UK law I talked about in 2007 has since produced a conviction: Encryption and key demands, in reality.