August 2012 Archives

I called that, sort of. Maybe

In my piece titled The Migration Threshold, back on March 31, 2006, I wrote:

I think Novell is doing some good in providing alternatives to Windows, but we're still a decade away from any serious chinking at the armor. Vista will provide a migration point for Windows shops, but that'll be soon enough that the Microsoft gravity-well won't have been diminished all that much.
It's 6 years later, not a decade, but Windows is definitely the minority desktop in my office right now. Only it isn't Linux that's providing the alternative, it's Apple. We are a Nimble Startup with a culture of providing the tools you need to do your job and not obsessing over standardizing those tools across people. So yes, we got to this state first.

But for the big corporates who consider the computing devices used by their employees to be part of the overall asset management system, it's less obvious. The iPad did more to disrupt the Microsoft desktop monopoly than Novell ever did, aided and abetted by Google, the major Telecom carriers, and Mozilla. Those big corps will still attempt to standardize, but the options provided to their workers will involve more than just Big Blue options.

Change-automation vs. LazyCoder

The lazy coder is someone who sees a need to write code, but doesn't because it's too much work. This describes a lot of sysadmins, as it happens. It also describes software engineers looking at an unfamiliar language. Part of being a lazy coder is definitely a disinclination to write something in a language they're not that familiar with; part of it is a plain disinclination to work.

It has been said in DevOps circles (though I can't hunt up the reference):
A good sysadmin can probably earn a living as a software engineer, though they choose not to.
A sentiment close to my heart, as it definitely applies to me. I have that CompSci degree (before software engineering degrees were common, CSci was the degree-of-choice for the enterprising dot-com boom programmer) that says I know how to code. And yet, when I hit the workplace I tacked as close to systems administration as I could. And like many sysadmins of my age cohort or older, I managed to avoid writing code for a very large part of my career.

I could do it as needed, as proven by a few rather complex scripts I cobbled together over that time. But I didn't go into full-time code-writing because of the side-effects on my quality of life. In my regular day-to-day life, problems came and went generally on the same day or within a couple of days of introduction. When I was heads-down in front of an IDE, a problem took weeks to smash, and I was angry most of the time. I didn't like being cranky that long, so I avoided long coding projects.

Problems are supposed to be resolved quickly, damnit.

Sysadmins also tend to be rather short on attention span, because there is always something new going on. Variety. It's what keeps some of us going. But being heads-down in front of a wall of text? The only thing that changes is which aggravating bit of code is aggravating me right now[1]. Not variety.

So you take someone with that particular background and throw them into a modern-age scaled system. Such a system has a few characteristics:

  • It's likely cloud-based[2], so hardware engineering is no longer on the table.
  • It's likely cloud-based[2], so deploying new machines can be done from a GUI, or an API. And probably won't involve actual OS install tasks, just OS config tasks.
  • There are likely to be a lot of the same kind of machine about.

And they have a problem. This problem becomes glaringly obvious when they're told to apply one specific change to triple digits of virtual machines. Even the laziest of lazy coders will think to themselves:

Guh, there has got to be a better way than just doing it all by hand. There's only one of me.
If they're a Windows admin and the class of machines are all in AD as they should be, they'll cheer and reach for a Group Policy Object. All done!

But if whatever needs changing isn't really doable via GPO, or requires a reboot to apply? Then... PowerShell starts looming[3].

If they're a *nix admin, the problem will definitely involve rolling some custom scripting.
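That custom scripting very often amounts to "loop over a host list, run the fix over ssh, note what failed." A minimal sketch in Python — the hostnames and the sed command are made-up placeholders, not anything from a real environment:

```python
"""Sketch of the roll-your-own approach: push one change to a list of
hosts over ssh and collect the failures for manual follow-up."""
import subprocess

HOSTS = ["web01", "web02", "db01"]  # in reality, triple digits of entries
FIX = "sudo sed -i 's/^MaxClients.*/MaxClients 256/' /etc/httpd/conf/httpd.conf"

def apply_fix(hosts, command, runner=subprocess.run):
    """Run `command` on each host via ssh; return the hosts where it failed."""
    failed = []
    for host in hosts:
        result = runner(["ssh", host, command])
        if result.returncode != 0:
            failed.append(host)
    return failed
```

It works, but it's imperative: it records nothing about the desired end state, and every new fix means another one-off loop like this one.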

Or maybe, instead, a configuration management engine like Puppet, CFEngine, Chef, or the like. Maybe the environment already has something like that, but the admin hasn't gone there since it's new to them and they didn't have time to learn the domain-specific language used by the management engine. Well, with triple digits of machines to update, learning that DSL is starting to look like a good idea.
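Underneath the DSL, what those engines boil down to is declaring a desired state and converging the machine to it, idempotently. A toy sketch of that idea in Python — the file format and the `ensure_line` helper are invented for illustration, not any real engine's API:

```python
"""Sketch of desired-state convergence: declare what a config file
should contain, and let the code make it so."""
from pathlib import Path

def ensure_line(path, key, value):
    """Guarantee `key = value` appears in the file, replacing any
    existing setting for `key`. Returns True if the file changed."""
    p = Path(path)
    lines = p.read_text().splitlines() if p.exists() else []
    desired = f"{key} = {value}"
    out, found = [], False
    for line in lines:
        if line.split("=")[0].strip() == key:
            out.append(desired)   # converge an existing setting
            found = True
        else:
            out.append(line)
    if not found:
        out.append(desired)       # setting was absent; add it
    changed = out != lines
    if changed:
        p.write_text("\n".join(out) + "\n")
    return changed
```

Running it a second time reports no change. That idempotence is what makes it safe to point at triple digits of machines on every run.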

Code-writing is getting hard to avoid, even for sysadmin hold-outs. Especially now that Microsoft is starting to Strongly Encourage systems engineers to use automation tools to manage their infrastructures.

This changing environment is forcing the lazy coder to overcome the migration threshold needed to actually bother learning a new programming language (or getting better at one they already kinda-sorta know). Sysadmins who really don't like to write code will move elsewhere, to jobs where hardware and OS install/config are still a large part of the job.

One of the key things that changes once the idea of a programmable environment starts really setting in is the workflow of applying a fix. For smaller infrastructures that do have some automation, I frequently see this cascade:

  1. Apply the fix.
  2. Automate the fix.

Figure out what you need to do, apply it to a few production systems to make sure it works, then put it into the automation once you're sure of the steps. Or worse, apply the fix everywhere by hand, and automate it so that new systems have it. However, for a fully programmable environment, this is backwards. It really should be:

  1. Automate the fix
  2. Apply the fix.

Because you'll get a much more consistent application of the fix this way. The older way will leave a few systems with slight differences of application; maybe config-files are ordered differently, or maybe the case used in a config file is different from the others. Small differences, but they can really add up. This transition is a very good thing to have happen.
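One way to see the consistency argument: configs applied by one automated source hash identically on every host, so the hand-applied stragglers stand out immediately. A hypothetical sketch (hostnames and config contents are made up):

```python
"""Sketch of drift detection: hash each host's config and flag the
hosts that differ from the most common version."""
import hashlib
from collections import Counter

def config_digest(text):
    """Stable fingerprint of a config file's contents."""
    return hashlib.sha256(text.encode()).hexdigest()

def find_drift(configs):
    """configs: {host: config_text}. Return the hosts whose config
    differs from the most common version across the fleet."""
    digests = {host: config_digest(text) for host, text in configs.items()}
    common_digest, _ = Counter(digests.values()).most_common(1)[0]
    return sorted(host for host, d in digests.items() if d != common_digest)
```

Note that a hand-applied fix with the settings merely in a different order still counts as drift — which is exactly the kind of small difference that adds up.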

The nice thing about lazy coders is that once they've learned the new thing they've been avoiding, they tend to stop being lazy about it. Once that Puppet DSL has been learned, the idea of amending an existing module to fix a problem becomes something they just do. They've passed the migration threshold, and are now in a new state.

This workflow-transition is beginning to happen in my workplace, and it cheers me.

[1]: As Obi-Wan said, "it all depends on your point of view." To an actual Software Engineer, this is not the same problem coming back to thwart me, it's all different problems. Variety! It's what keeps them going.
[2]: Or, if you're like that, a heavily virtualized environment that may or may not belong to the company you're working for. So there just might be some hardware engineering going on, but not as much as there used to be. Sixteen big boxes with a half TB of RAM each is a much easier physical fleet to maintain than the old infrastructure with 80 physical boxes of mostly different spec.
[3]: Though if they're a certain kind of Windows admin who has had to reach for programming in the past, they'll reach instead for VBScript; PowerShell being too new, they haven't bothered to learn it yet.

You have been hired by a company after a developer was fired for cause. Another developer looked at his stuff and found it seriously lacking. He failed to live up to his obligations and was let go. However, the project he was working on still needs to be completed. There isn't time to build it all afresh, so you have to make his non-working code work. You have one week.
This came up in the car yesterday as I was driving back to the office with one of our summer interns after a visit to our datacenter. Apparently he'd never had to deal with someone else's code before, which made this summer a rather eye-opening experience for him. He came into the middle of a product that many people had contributed to, that has over a year's complexity built into it, and that is very much not like the kind of thing you work on in CSci courses. An exercise like the one I outlined above would be rather useful for aspiring Software Engineers, since they'll be working on other people's broken crap for a good part of their careers.

It's also a lot of what sysadmins run into when starting a new job. The tendency to haaaaaate your predecessor is strong, because they did things so very differently than you would. Also, a significant part of our troubleshooting time is spent diagnosing problems in other people's code, both operating systems and applications, so this is a well-used skill for the likes of us. Even closed-source black-box binary blobs.

Definitely a skill that can be taught, and a good idea too. Once the concept of 'maintainability' has dawned, it definitely does change how you build your programs. Or automation scripts.

Charles Stross, SF author, posted a talk he gave at TNG's Big Tech Day in June of this year. In the talk he postulated what one aspect of computing would look like in 2032, with Moore's and Koomey's laws continuing for some time. He suggested that a device with the power of a modern Tegra 3-driven tablet would cost pennies to make. That's cheap enough that you could wall-paper a city with them.

He had a lot of good ideas in there, and not just the panopticon-society stuff you might expect. City planners would be keen on them, as it would allow them to much better instrument things the rest of us take for granted, like road expansion (a.k.a. pot-hole risks), sewer/water-main/storm-drain flow, and truly accurate traffic-flow measurement (including bikes). If you're a Public Works manager in a city that's occasionally subjected to 3cm-in-15-minutes downpours, you're really interested in how well your storm-drains are working.

So in true systems-engineer style my brain focused on the engineering details of managing such a network.

Gathering the data output of a billion sensors slathered liberally across a large city is not a small problem. It all has to get to a central point where decisions can be made on the gathered data, and that means a lot of aggregation somewhere along the way. Mesh networks are quite doable, especially at low data-rates, but that's still a lot of discrete inputs that have to be managed.

If processing nodes are that cheap, and you have a lot of them, then you don't really care when one or two die. Or ten. Or a hundred. You have enough for a lot of overlapping coverage, and if coverage gets light enough in one area that the artifacts are showing up in the decisioning aggregation tool, then you need to pay attention to it. Lightning strikes a street-lamp? It probably blew out all the embedded sensors in a certain radius around the strike, and you may see the City grind and overlay the street near there to replace the sensors, even if the street didn't actually need it yet. Some kid dismantles a microwave oven and manages to get the magnetron both working and pointed in the right direction to fry electronics? Who cares about the alley; that may not get fixed for years.
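The "pay attention when coverage gets light" rule could be as simple as counting live sensors per grid cell of the city and flagging the cells that fall below a redundancy threshold. A hypothetical sketch — the grid size and threshold are invented:

```python
"""Sketch of coverage monitoring for a dense sensor mesh: bucket
sensors into grid cells and flag cells with too few live units."""
from collections import Counter

def coverage_gaps(sensors, cell=100, min_live=3):
    """sensors: list of (x, y, alive) tuples, coordinates in metres.
    Return the grid cells with fewer than `min_live` working sensors."""
    live = Counter((x // cell, y // cell)
                   for x, y, alive in sensors if alive)
    cells = {(x // cell, y // cell) for x, y, _ in sensors}
    return sorted(c for c in cells if live[c] < min_live)
```

Cells that show up in the output are where the aggregation tool would raise an artifact flag — and where the City schedules a grind-and-overlay.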

There is one thing that such a network would NOT have: emergent AI.

Yes, it can have a beeeeeeelion nodes.

Yes, that is a lot of incomprehensible (to some) complexity.

Yes, the networking protocols likely involve dynamic relinking with neighbors, which sounds an awful lot like a neural network.

But it still isn't enough to bootstrap an AI we'd recognize as an AI. As a smart person I no longer remember said, "AI is just automation we can't comprehend." But that's for another post.

Implementation issues aside, Stross did ask for opinions on what could be done with such massively distributed computing. And this is not distributed like I'm doing in my day-job, this is processing sensor inputs and filtering them inwards to the big iron.

A computer in every doorknob.
We already have that in the form of the Hotel Doorknob, but in 2032 it'll be fancier. Imagine your cell-phone being your room-key, and all you have to do is walk up to the door and it unlocks the moment you touch the knob. The hallway network tracks the phone so it knows which side of the door you're on. When you walk out the door it remembers what RFID tags you're carrying (which could be a lot) to better improve the "is that you" detection.

The same thing could very easily be used in Corporate Headquarters buildings to track your every movement. Such systems already exist, but they'd be even cheaper to deploy and wouldn't require the employee to carry anything they didn't walk in the door with.

A computer in every aerodynamic bolt.
The use in airframe instrumentation is very interesting. Strain gauges for the entire airframe could definitely record to the flight recorders, which would yield very valuable information to crash investigators after an incident. Such systems could feed into avionics just as easily.

Reliable ambient voice-to-text.
We can do VTT with current technology, but it's buggy. Dragon NaturallySpeaking. Siri. However, what if you had 15 devices all listening in on the same conversation? You could take the output from that VTT processing and statistically compare it to derive a very good guess of what was actually said.
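That statistical comparison could be as simple as a per-word majority vote across the devices' transcripts. A toy sketch, assuming the transcripts arrive already word-aligned (which a real system would have to solve first):

```python
"""Sketch of combining many buggy voice-to-text outputs: take the
majority word at each position across all device transcripts."""
from collections import Counter

def vote_transcript(transcripts):
    """transcripts: list of equal-length word lists, one per device.
    Return the majority-vote word at each position."""
    return [Counter(words).most_common(1)[0][0]
            for words in zip(*transcripts)]
```

With 15 independent listeners, each device's errors tend to land in different places, so the vote converges on what was actually said.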
