Breath-taking honesty

An essay has been making the rounds about how much you (developer-type person) are worth as an hourly rate. It is breath-taking in its honesty. The author sets out a pay-rate scale based strictly on public reputation markets and evidence of community activity, with a premium on community contributions. To get top dollar, you will need to tick all of these boxes:

  • Have high-rated open-source libraries on GitHub.
  • Have a StackOverflow reputation over 20K.
  • Vendor-based code certifications, such as those from Oracle (Java) or Zend (PHP).
  • Evidence of mastery in multiple languages. So, Ruby AND Erlang, not Erlang and maybe Ruby if you have to.
  • Published talks at conferences.

If you don't have all those boxes ticked, you can still get paid. It just won't be enough to live on in most hot technical job-markets. The author is also very explicit in what they don't care about:

  • Cost-of-living. With fully remote work, location is elective. Want to make boatloads of cash? Move to the Montana prairie. You won't get more money by living in London.
  • Education. Masters, BA, BS, whatever. Don't care.
  • Past employment. Blah blah blah corporate blah.
  • Years of experience. I call bull on this one, since I'm dead certain that if this reviewer sees "10 years of experience in Blah" with no auditable public record to back it up, they're going to judge that more critically than they would someone with 2 years.

Before long we're going to get a startup somewhere that will take evidence of all of the first list of bullet-points and distil it down to a Klout-like score.

One unmentioned consequence of this list is that it creates a variety of career-suicide moves, should more companies start adopting this method of pricing developer talent:

  • Working on closed-source software.
  • Working for a company that doesn't contribute to open source projects.
  • Working for a company that doesn't pay to present at conferences.
  • Working for a company that doesn't pay for continuing education.
  • Working for a company that has strict corporate communications rules, which prevent personal blogging on technical topics.
  • Working for a company with employment contracts that prohibit technical contributions to anything, anywhere that isn't the company (often hidden as the no-moonlighting rule in the employment contract).

Career suicide, all of it. I'm glad the systems engineering market is not nearly as prone to these forces.

No, I'm talking about that fancy wristband some of you wear, the one that talks to a smartphone. That's a monitoring system, but for your body.

We're IT. We do monitoring systems, so let's take a look at this one!

Getting stuck in Siberia

I went on a bit of a Twitter rant recently.

Good question, since that's a very different problem than the one I was ranting about. How do you deal with that?


I hate to break it to you, but if you're in the position where your manager is actively avoiding you, it's on you to fix it. There are cases where it isn't up to you, such as when a lot of people are being avoided and it's affecting the manager's work performance, but that's a systemic problem. No, the case I'm talking about is when you are being avoided, and your fellow direct-reports are not. It's personal, not systemic.

No, it's not fair. But you still have to deal with it.

You have a question to ask yourself:

Do I want to change myself to keep the job, or do I want to change my manager by getting a new job?

Because this shunning is done by managers who would really rather fire you, but can't or won't for some reason. Perhaps they don't have firing authority. Perhaps the paperwork involved in firing someone is too much to bother with. Perhaps they're the conflict-avoidant type and pretending you don't exist is preferable to making you Very Angry by firing you.

You've been non-verbally invited to Go Away. You get to decide if that's what you want to do.

Going Away

Start job-hunting, and good riddance. They may even overlook job-hunt activities on the job, but don't push it.

Staying and Escalating

They can't/won't get rid of you, but you're still there. It's quite tempting to stick around and intimidate your way into their presence and force them to react. They're avoiding you for a reason, so hit those buttons harder. This is not the adult way to respond to the situation, but they started it.

I shouldn't have to say this, but that approach makes for a toxic work environment for everyone else, so... don't do that.

Staying and Reforming

Perhaps the job itself is otherwise awesome-sauce, or maybe getting another job will involve moving and you're not ready for that. Time to change yourself.

Step 1: Figure out why the manager is hiding from you.
Step 2: Stop doing that.
Step 3: See if your peace-offering is accepted.

Figure out why they're hiding

This is key to the whole thing. Maybe they see you as too aggressive. Maybe you keep saying no and they hate that. Maybe you never give an unqualified answer and they want definites. Maybe you always say, 'that will never work,' to anything put before you. Maybe you talk politics in the office and they don't agree with you. Maybe you don't go paintballing on weekends. Whatever it is...

Stop doing that.

It's not always easy to know why someone is avoiding you. That whole avoidance thing makes it hard. Sometimes you can get intelligence from coworkers about what the manager has been saying when you're not around, or what happens when your name comes up. Ask around; at the very least it shows you're aware of the problem.

And then... stop doing whatever it is. Calm down. Say yes more often. Start qualifying answers only in your head instead of out loud. Say, "I'll see what I can do" instead of "that'll never work." Stop talking politics in the office. Go paintballing on weekends. Whatever it is, start establishing a new set of behaviors.

And wait.

Maybe they'll notice and warm up. It'll be hard, but you probably need the practice to change your habits.

See if your peace-offering is accepted

After your new leaf is turned over, it might pay to draw their attention to it. This step definitely depends on the manager and the source of the problem, but demonstrating a new way of behaving before saying you've been behaving better can be the key to getting back into the communications stream. It also hangs a hat on the fact that you noticed you were in bad graces and made an effort to change.

What if it's not accepted?

Then learn to live in Siberia and work through proxies, or lump it and get another job.

If you're wondering why comments aren't working, as I was, and are on shared hosting, as I am, and get to looking at your error_log file and see something like this in it:

[Sun Oct 12 12:34:56 2014] [error] [client 192.0.2.5] 
ModSecurity: Access denied with code 406 (phase 2).
Match of "beginsWith http://%{SERVER_NAME}/" against "MATCHED_VAR" required.
[file "/etc/httpd/modsecurity.d/10_asl_rules.conf"] [line "1425"] [id "340503"] [rev "1"]
[msg "Remote File Injection attempt in ARGS (/cgi-bin/mt4/mt-comments.cgi)"]
[severity "CRITICAL"]
[hostname "example.com"]
[uri "/cgi-bin/mt/mt-comments.cgi"]
[unique_id "PIMENTOCAKE"]

It's not just you.

It seems that some webhosts have a mod_security rule in place that bans submitting anything through "mt-comments.cgi". As this is the main way MT submits comments, this kind of breaks things. Happily, working around a rule like this is dead easy.

  1. Rename your mt-comments.cgi file to something else
  2. Add "CommentScript ${renamed file}" to your mt-config.cgi file
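
Assuming you renamed the script to something like mt-cmts.cgi (the name is arbitrary; just pick something the mod_security rule won't match), the line in mt-config.cgi would look something like this:

    CommentScript mt-cmts.cgi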

And suddenly comments start working again!

Except for Google, since they're deprecating OpenID support.

"Over the next few decades demand in the top layer of the labor market may well centre on individuals with high abstract reasoning, creative, and interpersonal skills that are beyond most workers, including graduates."
-Economist, vol413/num8907, Oct 4, 2014, "Special Report: The Third Great Wave. Productivity: Technology isn't Working"

The rest of the Special Report lays out a convincing argument that people who have automation-creation as part of their primary job duties are in for quite a bit of growth, and that people in industries subject to automation are going to have a hard time of it. This has a direct impact on sysadminly career direction.

In the past decade, Systems Administration has been moving away from mechanics who deploy hardware, install software, and fix problems, and towards Engineers who can build automation for provisioning new computing instances and installing application frameworks, and who know how to troubleshoot problems with all of that. In many ways we're a specialized niche of Software Engineering now, and that means we can ride the rocket with them. If you want to continue to have a good job in the new industrial revolution, keep plugging along and don't become the dragon in the datacenter that people don't talk to.

Abstract Reasoning

Being able to comprehend how a complex system works is a prime example of abstract reasoning. Systems Administration is more than just knowing the arcana of init, grub, or WMI; we need to know how systems interact with each other. This is a skill that has been a pre-requisite for Senior Sysadmins for several decades now, so it isn't new. It's already on our skill-path. This is where Systems Engineers make their names, and sometimes become Systems Architects.

Creative

This has been less on our skill-path, but is definitely something we've been focusing on in the past decade or so. Building large automation systems, even with frameworks such as Puppet or Chef, takes a fair amount of both abstract reasoning and creativity. If you're good at this, you've got 'creative' down.

This has impacts for the lower rungs of the sysadmin skill-ladder. Brand-new sysadmins are going to be doing less racking-and-stacking and more parsing and patching of Ruby or Ruby-like DSLs.

Interpersonal Skills

This is where sysadmins tend to fall down. A lot of us got into this gig because we didn't have to talk to people who weren't other sysadmins. Technology made sense, people didn't.

This skill is more a reflection of the service-oriented economy, and sysadmins are only sort of part of that, but our role in product creation and maintenance is ever more social these days. If you're one of two sysadmin-types in a company with 15 software engineers, you're going to have to learn how to have a good relationship with software engineers. In olden days, only very senior sysadmins had to have the Speaker to Management skill; now even mid-levels need to be able to speak coherently to technical and non-technical management.

It is no coincidence that many of the tutorials at conferences like LISA are aimed at building business and social skills in sysadmins. It's worth your time to attend them, since your career advancement depends on it.


Yes, we're well positioned to do well in the new economy. We just have to make a few changes we've known about for a while now.

And if there isn't a stipend...

Sysadmin-types, we kind of have to have a phone. It's what the monitoring system makes vibrate when our attention is needed, and we also tend to be "always on-call", even if it's tier 4 emergency last resort on-call. But sometimes we're the kind of on-call where we have to pay attention any time an alert comes in, regardless of hour, and that's when things get real.

So what if you're in that kind of job, or applying for one, and it turns out that your employer doesn't provide a cell phone and doesn't provide reimbursement? Some Bring Your Own Device policies are written this way. Or maybe your employer moves to a BYOD policy and the company-paid phones are going away.

Can they do that?

Yes they can, but.

As with all labor laws, the rules vary based on where you are in the world. However, in August 2014 (a month and a half ago!) Schwan's Home Service, Inc. lost an appeal in the California appellate courts. This is important because California contains Silicon Valley, and what happens there tends to percolate out to the rest of the tech industry. The ruling held that employees who do company business on personal phones are entitled to reimbursement.

The ruling didn't provide a legal framework for how much reimbursement is required, just that some is.

This thing is so new that the ripples haven't been felt everywhere yet. No-reimbursement policies are not legal in California, that much is clear, but beyond that, not much is. For companies based outside California, such as those in tech hot-spots like Seattle, New York, or the DC area, this is merely a warning that the legal basis for such no-reimbursement policies is not firm. As the California-based companies revise policies in light of this ruling, accepted practice in the tech field will shift without legal action elsewhere.

My legal google-fu is too weak to tell if this thing can be appealed to the state Supreme Court, though it looks like it might have already toured through there.

Until then...

I strongly recommend against using your personal phone for both work and private life. Having two phones, even phones you pay for, provides an affirmative separation between your work identity, which is subject to corporate policies and liability, and your private identity. This is more expensive than just getting an unlimited voice/text plan with lots of data and dual-homing, but you face fewer risks that way. No-reimbursement BYOD policies are unfair to tech-workers in the same way that employers who require a uniform but don't provide a uniform allowance are unfair; for some of us, that phone is essential to our ability to do our jobs, and it should be expensed to the employer. Laws and precedent always take a while to catch up to business reality, and BYOD is getting caught up.

There is something that not many people seem to realize about how your personal email can get sucked into a lawsuit filed against your company. It all comes down to ediscovery...

When it comes to things to send alarming emails about, CPU, RAM, Swap, and Disk are the four everyone thinks of. If something seems slow, check one or all of those four to see if it really is slow. This sets up a causal chain...

It was slow, and CPU was high. Therefore, when CPU is high it is slow. QED.

We will now alarm on high CPU.

It may be true in that one case, but high CPU is not always a sign of something bad. In fact, high CPU is a perfectly normal occurrence in some systems.

  1. Render farms are supposed to run that high all the time.
  2. Build servers are supposed to be running that high a lot of the time.
  3. Databases chewing on long-running queries.
  4. Big-data analytics that can run for hours.
  5. QE systems grinding on builds.
  6. Test-environment systems being ground on by QE.

Of course, not all CPU checks are created equal. Percent-CPU is one thing, Load Average is another. If Percent-CPU is 100% and your load-average matches the number of cores in the system, you're probably fine. If Percent-CPU is 100% and your load-average is 6x the number of cores in the system, you're probably not fine. If your monitoring system only grabs Percent-CPU, you won't be able to tell what kind of 100% event it is.
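
A quick way to tell which kind of 100% event you're looking at is to normalize load average by core count. A minimal sketch in Python (Unix-only; the 1.5 cutoff is just an illustrative number, not a recommendation):

    # Compare the 5-minute load average to the core count to tell
    # "all cores busy" apart from "work is queueing behind the CPU".
    import os

    cores = os.cpu_count() or 1
    load1, load5, load15 = os.getloadavg()  # Unix-only
    per_core = load5 / cores

    if per_core <= 1.0:
        print(f"load/core = {per_core:.2f}: busy, but keeping up")
    elif per_core <= 1.5:  # arbitrary cutoff; tune for your systems
        print(f"load/core = {per_core:.2f}: worth a look")
    else:
        print(f"load/core = {per_core:.2f}: work is queueing; investigate")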

As a generic, apply-it-to-everything alarm, High-CPU is a really poor thing to pick. It's easy to monitor, which is why it gets selected for alarming. But, don't do that.

Cases where a High-CPU alarm won't actually tell you that something is going wrong:

  • Everything in the previous list.
  • If your app is single-threaded, the actual high-CPU event for that app on a multi-core system is going to be WELL below 100%. It may even be as low as 12.5% (one core pegged out of eight).
  • If it's a single web-server in a load-balanced pool of them, it won't be a BOTHER HUMANS RIGHT NOW event.
  • During routine patching. It should be snoozed during a maintenance window anyway, but sometimes that doesn't happen.
  • Initializing a big application. Some things normally chew lots of CPU when spinning up for the first time.

CPU/Load Average is something you probably should monitor, since there is value in retroactive and aggregate analysis. Analyzing CPU trends can tell you it's time to buy more hardware, or to turn up the max-instances value in your auto-scaling group. These are the kinds of things you look at in retrospect; they're not things that should be waking you up at 2:38am.

Only turn on CPU alarms if you know high CPU is an error condition worthy of waking up a human. Turning it on for everything just in case is a great way to train yourself to ignore high-CPU alarms, which means you'll miss the ones you actually care about. Human factors, they're part of everything.

Redundancy in the Cloud

Strange as it might be to contemplate, imagine what would happen if AWS went into receivership and was shut down to liquidate its assets. What would that mean for your infrastructure? Your project? Or even your startup?

It would be pretty bad.

Startups have been deploying preferentially on AWS or other Cloud services for some time now, in part due to venture-capitalist pressure to not have physical infrastructure to liquidate should the startup go *pop*, and to scale fast should a much-desired rocket-launch happen. If AWS shut down fully for, say, a week, the impact on pretty much everything would be tremendous.

Or what if it was Azure? Fully debilitating for those that are on it, but the wide impacts would be less.

Cloud vendors are big things. In the old physical days we used to deal with the all-our-eggs-in-one-basket problem by putting eggs in multiple places. If you're on AWS, Amazon is very big about making sure you deploy across multiple Availability Zones and helping you become multi-region in the process if that's important to you. See? More than one basket for your eggs. I have to presume Azure and the others are similar, since I haven't used them.

Do you put your product on multiple cloud-vendors as your more-than-one-basket approach?

It isn't as easy as it was with datacenters, that's for sure.

This approach can work if you treat the Cloud vendors as nothing but virtualization and block-storage vendors. The multiple-datacenter approach worked in large part because colos sell only a few things that impact the technology (power, space, network connectivity, physical access controls), though pricing and policies may differ wildly. Cloud vendors are not like that; they differentiate in areas that are technically relevant.

Do you deploy your own MySQL servers, or do you use RDS?
Do you deploy your own MongoDB servers, or do you use DynamoDB?
Do you deploy your own CDN, or do you use CloudFront?
Do you deploy your own Redis-backed queues, or do you use SQS?
Do you deploy your own Chef, or do you use OpsWorks?

The deeper down the hole of Managed Services you dive, and Amazon is very invested in pushing people to use them, the harder it is to take your toys and go elsewhere. Or run your toys on multiple Cloud infrastructures. Azure and the other vendors are building up their own managed service offerings because AWS is successfully differentiating from everyone else by having the widest offering. The end-game here is to have enough managed services offerings that virtual private servers don't need to be used at all.

Deploying your product on multiple cloud vendors requires either eschewing managed-services entirely, or accepting greater management overhead due to very significant differences in how certain parts of your stack are managed. Cloud vendors are very much Infrastructure-as-Code, and deploying on both AWS and Azure is like deploying the same application in Java and .NET; it takes a lot of work, the dialect differences can be insurmountable, and the expertise required means different people are going to be working on each environment which creates organizational challenges. Deploying on multiple cloud-vendors is far harder than deploying in multiple physical datacenters, and this is very much intentional.

It can be done, it just takes drive.

  • New features will be deployed on one infrastructure before the others, and the others will follow on as the integration teams figure out how to port it.
  • Some features may only ever live on one infrastructure as they're not deemed important enough to go to all of the effort to port to another infrastructure. Even if policy says everything must be multi-infrastructure, because that's how people work.
  • The extra overhead of running in multiple infrastructures is guaranteed to become a target during cost-cutting drives.

The ChannelRegister article's assertion that AWS is now in "too big to fail" territory, and thus would require governmental support to prevent wide-spread industry collapse, is a reasonable one. It just plain costs too much to plan for that kind of disaster in corporate disaster-response planning.

The alerting problem

4100 emails.

That's the approximate number of alert emails that got auto-deleted while I was away on vacation. That number will rise further before I officially come back from vacation, but it's still a big number. The sad part is, 98% of those emails are for:

  • Problems I don't care about.
  • Unsnoozable known issues.
  • Repeated alarms for the first two points (Puppet, I'm looking at you).

We've made great efforts in our attempt to cut down our monitoring fatigue problem, but we're not there yet. In part this is because the old, verbose monitoring system is still up and running, in part this is due to limitations in the alerting systems we have access to, and in part due to organizational habits that over-notify for alarms under the theory of, "if we tell everyone, someone will notice."

A couple weeks ago, PagerDuty had a nice blog-post about tackling alert fatigue, and had a lot of good points to consider. I want to spend some time on point 6:

Make sure the right people are getting alerts.

How many of you have a mailing list you dump random auto-generated crap like cron errors and backup failure notices to?

This pattern is very common in sysadmin teams, especially teams that began as one or a very few people. It just doesn't scale. Also, you learn to just ignore a bunch of things like backup "failures" for always-open files. You don't build an effective alerting system with the assumption that alerts can be ignored; if you find yourself telling new hires, "oh ignore those, they don't mean anything," you have a problem.

The failure mode of tell-everyone is that everyone can assume someone else saw it first and is working on it. And no one works on it.

I've seen exactly this failure mode many times. I've even perpetrated it, since I know certain coworkers are always on top of certain kinds of alerts so I can safely ignore actually-critical alerts. It breaks down if those people have a baby and are out of the office for four weeks. Or were on the Interstate for three hours and not checking mail at that moment.

When this happens and big stuff gets dropped, technical management gets kind of cranky. Which leads to hypervigilance and...

The failure mode of tell-everyone is that everyone will pile into the problem at the same time and make things worse.

I've seen this one too. A major-critical alarm is sent to a big distribution list, six admins immediately VPN in and start doing low-impact diagnostics. Diagnostics that aren't low impact if six people are doing them at the same time. Diagnostics that aren't meant to be run in parallel and can return non-deterministic results if run that way, which tells six admins different stories about what's actually wrong sending six admins into six different directions to solve not-actually-a-problem issues.

This is the Thundering Herd problem as it applies to sysadmins.

The usual fix for this is to build in a culture of, "I've got this," emails and to look for those messages before working on a problem.

The usual fix for this fails if admins do a little "verify the problem is actually a problem" work before sending the email and stomp on each other's toes in the process.

The usual fix for that is to build a culture of, "I'm looking into it," emails.

Which breaks down if a sysadmin is reasonably sure they're the only one who saw the alert and works on it anyway. Oops.


Really, these are all examples of telling the right people about the problem, but you really do need to go into more detail than "the right people". You need "the right person". You need an on-call schedule that will notify one or two of the Right People about problems. Build that with the expectation that whoever is in the hot seat will answer ALL alerts, build a rotation so no one is in the hot seat long enough to start ignoring alarms, and you have a far more reliable alerting system.

PagerDuty sells such a scheduling system. But what if you can't afford X-dollars a seat for something like that? You have some options. Here is one:

An on-call distribution-list and scheduler tasks
This recipe will provide an on-call rotation using nothing but free tools. It won't work with all environments. Scripting or API access to the email system is required.

Ingredients:

    • 1 on-call distribution list.
    • A list of names of people who can go into the DL.
    • A task scheduler such as cron or Windows Task Scheduler.
    • A database of who is supposed to be on-call when (can substitute a flat file if needed).
    • A scripting language that can talk to both email system management and database.

Instructions:

Build a script that can query the database (or flat file) to determine who is supposed to be on-call right now, and can update the distribution list with that name. PowerShell can do all of this for full MS-stack environments; for non-MS environments, more creativity may be needed (see the sketch after these instructions).

Populate the database (or flat-file) with the times and names of who is to be on-call.

Schedule execution of the script using a task scheduler.

Configure your alert-emailing system to send mail to the on-call distribution list.
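
Here is a minimal sketch in Python of what that script might look like. The file name, the CSV layout, and the set_oncall() function are all assumptions made for illustration; set_oncall() in particular is a placeholder for whatever management interface your email system actually exposes (on a full MS stack, the same logic would be a few lines of PowerShell against your mail system's cmdlets instead).

    # oncall_rotate.py -- look up who is on-call right now and point the
    # on-call distribution list at them.
    # Assumed schedule format, one shift per line: start,end,name
    #   2014-10-13T08:00,2014-10-20T08:00,jdoe
    import csv
    from datetime import datetime

    SCHEDULE_FILE = "oncall-schedule.csv"   # assumed location of the flat file
    ALWAYS_NOTIFY = ["oncall-manager"]      # names that stay on the list no matter what

    def current_oncall(now):
        """Return the name scheduled for 'now', or None if there's a hole."""
        with open(SCHEDULE_FILE, newline="") as fh:
            for row in csv.reader(fh):
                if len(row) != 3:
                    continue  # skip blank or malformed lines
                start, end, name = row
                if datetime.fromisoformat(start) <= now < datetime.fromisoformat(end):
                    return name.strip()
        return None

    def set_oncall(members):
        """Placeholder: replace the DL membership using whatever API or
        cmdlet your email system actually provides."""
        raise NotImplementedError

    if __name__ == "__main__":
        name = current_oncall(datetime.now())
        if name is not None:
            set_oncall([name] + ALWAYS_NOTIFY)

Run it from cron or Task Scheduler every few minutes and the distribution list follows the schedule.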

Nice and free! You don't get a GUI to manage the schedule and handling on-call shift swaps will be fully manual, but you at least are now sending alerts to people who know they need to respond to alarms. You can even build the watch-list so that it'll always include certain names that always want to know whenever something happens, such as managers. The thundering herd and circle-of-not-me problems are abated.

This system doesn't handle escalations at all, that's going to cost you either money or internal development time. You kind of do get what you pay for, after all.

How long should on-call shifts be?

That depends on your alert-frequency, how long it takes to remediate an alert, and the response time required.

Alert Frequency and Remediation:

  • Faster than once per 30 minutes:
    • They're a professional fire-fighter now. This is their full-time job, schedule them accordingly.
  • One every 30 minutes to an hour:
    • If remediation takes longer than 1 minute on average, the watch-stander can't do much of anything else but wait for alerts to show up. 8-12 hours is probably the longest shift you can expect reasonable performance from.
    • If remediation takes less than a minute, 16 hours is the longest you can expect, because this frequency ensures the watch-stander gets no sleep.
  • One every 1-2 hours:
    • If remediation takes longer than 10 minutes on average, the watch-stander probably can't sleep on their shift. 16 hours is probably the maximum shift length.
    • If remediation takes less than 10 minutes, sleep is more possible. However, if your watch-standers are the kind of people who don't fall asleep fast, you can't rely on that. 1 day for people who sleep at the drop of a hat, 16 hours for the rest of us.
  • One every 2-4 hours:
    • Sleep will be significantly disrupted by the watch. 2-4 days for people who sleep at the drop of a hat. 1 day for the rest of us.
  • One every 4-6 hours:
    • If remediation takes longer than an hour, 1 week for people who sleep at the drop of a hat. 2-4 days for the rest of us.
  • Slower than one every 6 hours:
    • 1 week

Response Time:

This is a fuzzy one, since it's about work/life balance. If all alerts need to be responded to within 5 minutes of their arrival, the watch-stander needs to be able to respond in 5 minutes. This means no driving, and nothing that requires not paying attention to the phone, such as kids' performances or after-work meetups. For a watch-stander who drives to work, the on-call shift can't overlap their commute.

For a 30-minute response, things are easier. Short drives are fine, and longer ones work so long as the watch-stander pulls over to check each alert when it arrives. Kids' performances are still problematic, and so are longer commutes.

And then there is the curve-ball known as "define 'response'". If response means acking the alert, that's one thing, and much less disruptive to off-hours life. If response is defined as "starts working on the problem," that's much more disruptive, since the watch-stander has to have a laptop and bandwidth at all times.

The answers here will determine what a reasonable on-call shift looks like. A week of 5 minute time-to-work is going to cause the watch-stander to be house-bound for that entire week and that sucks a lot; there better be on-call pay associated with a schedule like that or you're going to get turnover as sysadmins go work for someone less annoying.


It's more than just making sure the right people are getting alerts; it's building a system for notifying the Right People in such a way that alerts will get responded to and handled.

This will build a better alerting system overall.
