June 2012 Archives

A post-email world?

Any time I see some pitch about a post-email world, I immediately look to see what kind of walled garden I'm being talked into. Because that's what they're doing.

Yes, email has been around 40+ years now.

Yes, it is based on some truly antiquated standards.

Yes, doing it right requires spending big money on spam filtering.

Yes, there is no inherent organization to email other than timeline.

And yet it still persists, no matter how many up-and-coming startups or well-entrenched, nigh-ubiquitous megacorps try to subvert it. It's still there. We all have at least one email address, probably a few. Why has email survived this long when so many other 40+ year-old technologies have been dead so long they only show up in CompSci History of Computing courses?

  • Email is simple, and helps people communicate.
  • Email is extensible, which allows introducing things like filtering, file-transfer, organization, and collaboration while still working the same way it always has.
  • Email is platform independent, which means nearly anything can do it; the quintessential open standard (see the sketch after this list).
  • Email is everywhere; it's the one messaging system nearly everyone has access to, so people and companies can assume such a messaging conduit exists.
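
To make the open-standard point concrete: the protocol is simple enough that a dozen lines of Python's standard library can speak it. A minimal sketch, using a hypothetical server at mail.example.com and placeholder addresses:

    #!/usr/bin/env python3
    # Minimal sketch: speaking SMTP (RFC 5321) from the standard library.
    # "mail.example.com" and both addresses are placeholder values.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "Still here after 40 years"
    msg.set_content("Extensibility (MIME, headers) rides on top of plain text.")

    # The same few verbs (EHLO, MAIL FROM, RCPT TO, DATA) work against any
    # compliant server, on any platform; that is the open-standard point.
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

Nothing in that sketch cares what OS, mail client, or provider is on the other end, which is exactly why walled gardens have such a hard time displacing it.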

It's like talking about a paper-free office: you can cut down on the paper, but actually getting rid of it will take a heck of a lot of work.

We may get to an SMTP-free existence at some point, but it's not likely to happen in the next 10 years.

Recently a teacher's aide in Michigan was fired for cause. The reason? She failed to supply her employer with her Facebook username and password after a complaint was filed by a parent. Reporting is a bit scanty on whether or not there was a formal inquiry into the event, but she was repeatedly asked for her credentials and she said no every time. So she got fired.

The real kicker is the following paragraph:

In a letter to Hester from the Lewis Cass ISD Special Education Director, he wrote: "...in the absence of you voluntarily granting Lewis Cass ISD administration access to you[r] Facebook page, we will assume the worst and act accordingly."

"Worst" in this case is that the image was worse than the complaining parent described. As this is a school district, there is a whole lotta worse it can get, depending on how far they want to go. They didn't end up pressing homicide or child-pornography charges, so that's something, but they still fired her.

The "assume the worst" part is the critically wrong aspect of this.

Having spent time mucking about in labor law in a previous life, I have some appreciation for how the still largely unionized educational workplace in the US works. When it comes to settling union grievances, which is an extra-judicial process up until the final step if things go that far, the rights of due process and 'innocent until proven guilty' are very much enshrined in the supporting legal statutes and precedent. And once it hits the courts (if it doesn't go into arbitration), the legal footing for those two principles is much stronger.

"Assume the worst" is guilty unless proven innocent, which is very wrong.

Wearing my info-security hat, I don't subject a user to disciplinary actions (or recommend such) until and unless strong evidence exists to support such an action. After an allegation of wrong-doing, the user suddenly ceasing to cooperate is actually a valid tactic on their part (just don't destroy evidence). Things get more complicated if evidence we need is held in non-company domains such as Gmail, but there are ways to get at that information without demanding account login data.

Of course, private-sector corporate employment agreements can abrogate a surprisingly large number of civil rights so long as the actions are entirely within the private domain. But this case was a public school district, and that's different. Public employees actually retain the civil rights private-sector workers are sometimes excluded from, a point some public-sector administrators forget.

That isn't going away

I've noticed a trend lately. It has been brewing for a while, but after $boss introduced us to a new site to log into every morning, I had to look into what the heck was going on. It was yet another site with the following color-scheme:

  • Blue title-bar
  • White background
  • Green highlights

I have seen far, far, far too many tech-sites with this scheme in the last two years. I'm very tired of it; some variety would be really nice about now. Since our next product is using the same palette, I asked our head web-designer about it. Is this just fashion, or is there science behind these colors?

Science!

This is the link he gave me.

It turns out that in the westernized world, colors all have associations and meaning. Obvious to any parent, pink is code for "girl toys". Apparently blue is code for "corporate" and "responsible", things the business-oriented tech site wants to be associated with.

In my hunting for this post I found a secondary trend: charcoal-gray (formal, professional). Microsoft and IBM are both using it, which is a recent change.

So, yes. The color of business-oriented tech-sites does seem to be subject to the same fashion trends that drive the color of men's suits. It'll be blue, gray, and black (formality, elegance), with occasional dalliances in brown (reliable, steadfast) from here on out.

Now I know.

When you break the click-wrap around a new piece of software, you agree to certain terms of service. Right there in the ToS is usually a paragraph or five taken from the lesson IBM learned when Compaq figured out how to make an IBM-compatible BIOS. Which is to say: Thou Shalt Not Reverse Engineer This Software. Perfectly understandable; they want to sue into oblivion anyone who tries to do that kind of thing (again).

Unfortunately, sometimes... you just have to do the reverse engineering. And sysadmins are kinda the people on the front line of that.

Here's how it works:

You have a piece of software. It was purchased and installed, and it came as a binary blob. It has a user interface, and maybe an API. It does work. Your users are happy. The black box hums the way it should hum. You are happy.

Then it breaks. It stops doing what it should. The manual and online docs are useless. Your users are unhappy. You are unhappy.

At this point, you have a choice. You can call up support and have THEM deal with it. This is the ideal option, since that's how this model of software works. However, what if the internal organization that bought this piece of software didn't buy the support contract because the cheap bastards blew their whole budget just getting it at all? What if there isn't a support contract, but the vendor does per-incident pricing and there is no budget for that?

The pressure is still on you to fix the damned thing.

So you blow right past the NO USER SERVICEABLE PARTS INSIDE -- VOIDS WARRANTY IF BROKEN seal and try to figure out how it works so you can fix it. Or prove to the financial powers that be that it's really in their best interests to pay for support in this case. Enter now the land of reverse engineering.

There are a variety of tools to use in your quest to figure out WTF. A small list:

  • Packet captures to figure out how it talks on the wire.
  • Utilities like strace to figure out what files it's looking at, and what rights it expects.
  • Debugging tools to tease out what system functions it calls, or to isolate where in the code the fault lies.

These will give you a good idea of how it interacts with its environment, which in turn gives you clues about how it runs internally. That alone can solve problems.
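
As a concrete example of the strace approach, here is a minimal sketch, assuming Linux with strace installed; ./mystery-daemon is a hypothetical stand-in for the black-box binary:

    #!/usr/bin/env python3
    # Minimal sketch: find which files a misbehaving binary expects.
    # Assumes Linux with strace installed; "./mystery-daemon" is a
    # hypothetical stand-in for the black-box application.
    import subprocess

    # Trace only file-related syscalls; strace writes its report to stderr.
    trace = subprocess.run(
        ["strace", "-f", "-e", "trace=open,openat,access,stat",
         "./mystery-daemon"],
        stderr=subprocess.PIPE,
        text=True,
    )

    # Failed opens are the usual smoking guns: a config file that is
    # missing (ENOENT) or a permission it silently expects (EACCES).
    for line in trace.stderr.splitlines():
        if "ENOENT" in line or "EACCES" in line:
            print(line)

The same pattern works for the other bullets: tcpdump or wireshark for the on-the-wire view, gdb or ltrace for the function-call view.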

We're hiring!

My employer is looking to hire a Software Engineer. (The position is now closed.) Interesting details:

  • 0-5 years experience (more is better, but we totally hire eager people just out of college)
  • Technologies that will make us sit up and take notice if they're in your history:
    • Ruby on Rails
    • ElasticSearch
    • Dynamic web-site engineering
  • PC, Mac, whichever platform works for you (so long as git works on it). Go for it.
  • Full self+1 health coverage

And you don't even have to move to the Washington, DC area! We do remote development too.

A couple more things to help decide if this is the job for you:

  • We have a major product in "coming soon!" status, so the pace of development is really kicking up.
  • We're in startup-mode again, but we have a profitable existing product. No worrying about the burn-rate!
  • We don't use the following words in our job-announcements:
  • We have beer in the office. Heck, we even have a keg.
  • Our ping-pong table is well used.

And finally, as we're a startup with a new "coming soon!" product looming, faster is better when it comes to applying. The same goes for your start-date, if hired.

If you do apply, drop a comment here. Comments are screened so I'll know it happened even if you don't see it. Candidates with internal advocates tend to fare better. That whole "networking" thingy.


I first committed to this proposal back when I was working at WWU and knew people over in the Libraries. It only recently came to beta and I, er, no longer work with people who work in Libraries.

So!

I know some of my readers do work in Higher Ed and probably know their own Library people. Even Library IT.

So please, ask them to check it out. It's early yet, but that just means they'll get to help create the culture.

http://libraries.stackexchange.com/

RSI

The bane of the tech industry, and I suffer from it. As it happens, RSI is one of the reasons I really shouldn't attempt to earn a living as a Software Engineer, as I re-learned this week. Those reaches for strange punctuation really start wearing me down. It didn't help that there was some work-related hammer-drilling Tuesday (vibrating power-tools, funtimes).

The combo that was really getting me was this one:  "]}'")

All right hand, all far reaches and shift-key work. As I was refactoring code I was running into that combo a lot. And... ow. Just, ow. I left work early yesterday because I was simply done typing for the day. And for people who do what we do, that's really kinda bad.

I've learned through other venues that I have about 2 hours of constant typing in me before I have to give it a multi-hour break. Intermittent typing, such as firing command lines off while troubleshooting, allows me to go much longer. But constant typing really wears, and this is after I've addressed most of the ergonomic issues.

Today I'm leaving code alone since I still need to recover. There are some non-typing, office-related things I need to do, which will allow me a light-duty day.

A change of direction for OpenSUSE

This morning on the opensuse-factory mailing-list it was pointed out that the ongoing problems getting version 12.2 to a state where it can even be reliably tested are a symptom of structural problems in how OpenSUSE as a whole handles package development. Stephan Kulow posted the following:

Hi,

It's time we realize delaying milestones is not a solution. Instead,
let's use the delay of 12.2 as a reason to challenge our current
development model and look at new ways. Rather than continue to delay
milestones, let's re-think how we work.

openSUSE has grown. We have many interdependent packages in Factory. The
problems are usually not in the packages touched, so the package updates
work. What's often missing though is the work to fix the other packages
that rely on the updated package. We need to do a better job making sure
bugs caused by updates of "random packages" generate a working system.
Very fortunately we have an increasing number of contributors that
update versions or fix bugs in packages, but lately, the end result has
been getting worse, not better. And IMO it's because we can't keep up in
the current model.

I don't remember a time during 12.2 development when we had less than
100 "red" packages in Factory. And we have packages that fail for almost
five months without anyone picking up a fix. Or packages that have
unsubmitted changes in their devel project for six months without anyone
caring to submit it (even ignoring newly introduced reminder mails).

So I would like to throw in some ideas to discuss (and you are welcome
to throw in yours as well - but please try to limit yourself to things
you have knowledge about - pretty please):

1. We need to have more people that do the integration work - this
  partly means fixing build failures and partly debugging and fixing
  bugs that have unknown origin.
  Those will get maintainer power of all of factory devel projects, so
  they can actually work on packages that current maintainers are unable
  to.
2. We should work way more in pure staging projects and less in develop
  projects. Having apache in Apache and apache modules in
  Apache:Modules and ruby and rubygems in different projects may have
  appeared like a clever plan when set up, but it's a nightmare when it
  comes to factory development - an apache or ruby update are a pure
  game of luck. The same of course applies to all libraries - they never
  can have all their dependencies in one project.
  But this needs some kind of tooling support - but I'm willing to
  invest there, so we can more easily pick "green stacks".
  My goal (a pretty radical change to now) is a no-tolerance
  strategy about packages breaking other packages.
3. As working more strictly will require more time, I would like to
  either ditch release schedules all together or release only once a
  year and then rebase Tumbleweed - as already discussed in the RC1
  thread.

Let's discuss things very openly - I think we learned enough about where
the current model works and where it doesn't so we can develop a new one
together.

Greetings, Stephan

The discussion is already going.

The current release-schedule is an 8-month schedule, in contrast to a certain other highly popular Linux distro that releases every 6. Before they went to the every-8 schedule, they had a "when it's good and ready" schedule, which led to criticisms that OpenSUSE was a habitually stale distro that never had the latest shiny new toys the other distros had. Considering the then-increasing popularity of that certain other distro, that was a real concern in the fight for relevancy.

Looks like the pain of a hard-ish deadline is now exceeding the pain of not having the new shiny as fast as that other distro.

One thing that does help, and Stephan pointed this out, is the advent of the Tumbleweed release of OpenSUSE. Tumbleweed is a continually integrated rolling release similar in style to Mint; unlike the regular release, it gets regular updates to the newest shiny, but only after they pass testing. Now that OpenSUSE has Tumbleweed, allowing the mainline release to revert to a "when it's good and ready" schedule is conceivable without threatening to throw the project out of relevance.

These are some major changes being suggested, and I'm sure they'll keep the mailing-list busy for a couple of weeks. But in the end, it should result in a more stable OpenSUSE release process.

I've been talking about this one for a while in the context of the new .xxx top-level-domain. Simply put, some organizations, such as my old employer, consider a certain string of characters to be their trademark regardless of what follows the dot in the name. This is why WWU had, at the time I left, somewhere in the mid-twenties of domains that all redirected to wwu.edu. The same will happen with .xxx.

And also with .sex, .adult and .porn.

Each new TLD means yet another domain to buy defensively.

For the extremely cash-strapped non-profit, now entering year 5 of shrinking or stagnating budgets, such forced trademark defense expenses are highly resented.

Microsoft is releasing an out-of-band patch to invalidate two of their Intermediate Certificate Authorities.

http://technet.microsoft.com/en-us/security/advisory/2718704
http://isc.sans.edu/diary/Microsoft+Emergency+Bulletin+Unauthorized+Certificate+used+in+Flame+/13366

In essence, the Flame malware appears to include code signed by a valid Microsoft certificate authority. As this particular malware is suspected to have been written by a "state actor" (a.k.a. a cyber-warfare unit of a government), chances are good that this CA is not circulating in the general unseemly underbelly of the Internet. However, it does represent a compromise of those certificates, so Microsoft is issuing revocation certificates for them.

The core problem here is that this CA is trusted by all Windows installs by default, and could be used to sign patches or other software. This has obvious problems in the form of valid-seeming software installs, but less obvious ones in that somewhere a state actor has the ability to perform man-in-the-middle attacks on SSL using valid certificates.
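
As a hypothetical illustration of the SSL angle, here is a minimal sketch that reports which CA issued the certificate a server hands you. The two issuer names are the ones reportedly listed in advisory 2718704 (verify against the advisory itself), and this only sees the leaf certificate's direct issuer, so treat it as a sketch rather than a vetting tool:

    #!/usr/bin/env python3
    # Minimal sketch: report which CA issued the certificate a server
    # presents. The issuer names below are the ones reportedly named in
    # advisory 2718704; verify against the advisory before relying on them.
    import socket
    import ssl

    DISTRUSTED = {
        "Microsoft Enforced Licensing Intermediate PCA",
        "Microsoft Enforced Licensing Registration Authority CA (SHA1)",
    }

    def issuer_cn(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # "issuer" is a tuple of RDN tuples: ((("commonName", "..."),), ...)
        for rdn in cert["issuer"]:
            for key, value in rdn:
                if key == "commonName":
                    return value
        return None

    cn = issuer_cn("www.example.com")  # placeholder host
    print("issuer:", cn)
    if cn in DISTRUSTED:
        print("WARNING: certificate was issued by a distrusted CA")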

The PKI system was designed around the idea that certificate authorities would occasionally get compromised, and it does contain mechanisms for handling that. However, those mechanisms are not frequently exercised, so the process of issuing such a revocation is, shall we say, not as smooth as it really should be.
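
For reference, the most common such mechanism is the certificate revocation list (CRL). A minimal sketch of consuming one, assuming the third-party "cryptography" package and using placeholder values for the URL and serial number:

    #!/usr/bin/env python3
    # Minimal sketch: check a certificate serial number against a published
    # CRL. Assumes the third-party "cryptography" package; the URL and the
    # serial number are placeholders, not Microsoft's actual revocation data.
    import urllib.request

    from cryptography import x509

    CRL_URL = "http://crl.example.com/revoked.crl"  # placeholder
    SUSPECT_SERIAL = 0x1234                         # placeholder

    der = urllib.request.urlopen(CRL_URL).read()
    crl = x509.load_der_x509_crl(der)

    if crl.get_revoked_certificate_by_serial_number(SUSPECT_SERIAL):
        print("Serial is on the revocation list.")
    else:
        print("Not listed here; OCSP and OS trust stores are separate checks.")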

You really should apply this patch as soon as it arrives.