Recently in the Microsoft Category

What my CompSci degree got me

The "what use is a CompSci degree" meme has been going around again, so I thought I'd interrogate what mine got me.

First, a few notes on my career journey:

  1. Elected not to go to grad school. Didn't have the math for a master's or doctorate.
  2. Got a job in helpdesk, intending to get into Operations.
  3. Got promoted into sysadmin work.
  4. Did some major scripting as part of Y2K remediation, first big coding project after school.
  5. Got a new job, at WWU.
  6. Microsoft released PowerShell.
  7. Performed a few more acts of scripting. Knew I so totally wasn't a software engineer.
  8. Managed to change career tracks into Linux. Started learning Ruby as a survival mechanism.
  9. Today: I write code every day. Still don't consider myself a 'software engineer'.

Elapsed time: 20ish years.

As it happens, even though my career has been ops-focused I still got a lot out of that degree. Here are the big points.

Way back when I first got into Group Policies, which was just after Group Policies were released, one of the things we mooted about the BoF den was a simple thing we could do to tell users that they were on a managed station. What we came up with was pretty simple: manage the desktop background.

No, we didn't put an all-seeing-eye on it. That would be creepy, don't be silly. We used a logo of the company.

It made sense! A simple cue, and we'd save RAM (back in those days the desktop background took more than trivial RAM). We were happy.
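For the curious, the 'Desktop Wallpaper' policy boils down to a couple of per-user registry values. Here's a minimal PowerShell sketch of what the policy ends up setting (the share path is invented, and the value names are from memory, so verify against a real policy result):

    # Rough sketch of the per-user values the 'Desktop Wallpaper' policy writes.
    # The UNC path is hypothetical; verify value names against a real GPO result.
    $key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies\System'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    Set-ItemProperty -Path $key -Name 'Wallpaper'      -Value '\\corp\netlogon\company-logo.bmp'
    Set-ItemProperty -Path $key -Name 'WallpaperStyle' -Value '2'   # 2 = stretch, roughly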


It turns out, that's not how you build a happy user-base. By doing so, we explicitly told people, "everything you do can and will be used against you in an HR action." People don't like to be told they're being monitored.

You know who likes to be told they're being monitored like that? No one.

You know who we want to be monitored that way? Prisoners and people likely to become prisoners.

No one wants to be thought of as a prisoner, or likely to be one.

In fact, later GPO guides specifically discouraged doing things like managing the desktop background or theme. It could be done, but... why would you want to? The desktop theme has a very low impact on the system, and it's the single biggest thing the user can customize to their preferences. It costs the system almost nothing to let users have it, and it improves their experience by a great amount. Let them customize and don't worry about it.

But still manage their IE zones, certificate enrollment policies, software distribution methods, and event-log reporting.

They can make their jail-cell a pink polka-dot wonder, far better than bare cinder-block! It's still a cell, but without that camera in their face, they're happier about living in it.


It looks like consumer-focused big-data stuff is suffering from the same faults early GPO deployments did: it's being too obvious about the surveillance.

"Hello, Mister ${mispronounced last name}," said the sales-clerk I'd never met before. I sighed in resignation, vowing to factory-reset my cell-phone. Again. One of these days I'm just going to go cash-only.

Or another one I almost guarantee will happen:

TSA Customer Service
@sysadm1138 We noticed you were in DFW security line for 49 minutes. We would like some feedback about that, https://t.co/...

Er, wait. That's Big Brother. Sorry, dial slipped. Let's try again.

Victoria's Secret
@sysadm1138 We noticed you spent time in our DES MOINES, IA store. If you have time, please take a short survey about your visit. https://t.co/...

You've probably run into this one: you hit a random website, and then that site haunts your web-ads (for those of you who don't run AdBlock-Strict) for weeks.

They haven't figured out that a large percentage of us don't like being reminded we live in a panopticon. Give me my false illusion of anonymity and I'm happy!

It's all about the user-factors. What's good for the retailer is not always good for their consumers. Obviously. But the best kind of retailer-good things are the ones that aren't obviously not-good for the consumer.

User-factors, people!

It all began with a bit of Twitter snark:


[Image: SmallLAMPStack.png]

Utilities follow a progression. They begin as a small shell script that does exactly what I need it to do in this one instance. Then someone else wants to use it, so I open source it. 10 years of feature-creep pass, and then you can't use my admin suite without a database server, a web front end, and just maybe a worker-node or two. Sometimes bash just isn't enough you know? It happens.

Anyway...

Back when Microsoft was just pushing out their 2007 iteration of all of their enterprise software, they added PowerShell support to most things. This was loudly hailed by some of us, as it finally gave us easy scriptability into what had always been a black box with funny screws on it to prevent user tampering. One of the design principles they baked in was that they didn't bother building UI elements for things you'd only do a few times, or would do once a year.

That was a nice time to be a script-friendly Microsoft administrator, since most of the tools would give you their PowerShell equivalents on one of the Wizard pages, so you could learn by practical example a lot more easily than you could otherwise. That was a real nice way to learn some of the 'how to do a complex thing in PowerShell' bits. Of course, you still had to learn variable passing, control loops, and other basic programming stuff, but you could see right there what the one-liner was for that next -> next -> next -> finish wizard.
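Exchange 2007 was the classic example: the New Mailbox wizard displayed the shell command it was about to run right on its summary page. Roughly the sort of one-liner it would teach you, with the user, OU, and database names invented for illustration:

    # Roughly what the Exchange 2007 "New Mailbox" wizard shows at the end.
    # The user, OU, and database names here are hypothetical.
    New-Mailbox -Name 'Jane Doe' -Alias jdoe `
        -UserPrincipalName jdoe@example.com `
        -OrganizationalUnit 'example.com/Staff' `
        -Database 'MailboxDB01' `
        -Password (Read-Host -AsSecureString 'Initial password')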

[Image: SmallLAMPStack-2.png]

One thing that a GUI gives you is a much shallower on-ramp to functionality. You don't have to spend an hour or two feeling your way around a new syntax in order to do one simple thing; you just visually assemble your bits, hit next, then finish, then done. You usually have the advantage of a documented UI explaining what each bit means, a list of fields you have to fill out, and syntax checking on those fields, all of which tells you a lot about what kinds of data a task requires. If it spits out a blob of scripting at the end, even better.

An IDE, tab-completion, and other such syntactic magic help scripters build what they need; but it all relies upon on-the-fly programmatic interpretation of syntax in a script-builder. It's the CLI version of a GUI, so it doesn't have the stigma of 'graphical' ("if it can't be done through bash, I won't use it," said the Linux admin).

Neat GUIs and scriptability do not need to be diametrically opposed things; ideally a system should have both. A GUI to aid discoverability and teach a bit of scripting, and scripting for site-specific custom workflows. The two interface paradigms come from different places, but as Microsoft has shown, you can definitely make one tool support the other. More things should follow their example.

As I look around the industry with an eye towards further employment, I've noticed a difference of philosophy between startups and the more established players. One easy way to see this difference is on their job postings.

  • If it says RHEL and VMWare on it, they believe in support contracts.
  • If it says CentOS and OpenStack on it, they believe in community support.

For the same reason that tech startups almost never use Windows if they can get away with it, they steer clear of other technologies that come with license costs or mandatory support contracts. Why pay the extra support cost when you can get the same service by hiring extremely smart people and using products with a large peer-support community? Startups run lean, and all that extra cost is... cost.

And yet some companies find that they prefer to run with that extra cost. Some, like StackExchange, don't mind the extra licensing costs of their platform (Windows) because they're experts in it and can make it do exactly what they want it to do with a minimum of friction, which means the Minimum Viable Product gets kicked out the door sooner. A quicker MVP means quicker profitability, and that can pay for the added base-cost right there.

Other companies treat support contracts like insurance: something you carry just in case, as a hedge against disaster. Once you grow to a certain size, business continuity insurance investments start making a lot more sense. Running for the brass ring of market dominance without a net makes sense, but once you've grabbed it keeping it needs investment. Backup vendors love to quote statistics on the percentage of business that fail after a major data-loss incident (it's a high percentage), and once you have a business worth protecting it's good to start protecting it.

This is part of why I'm finding that the long established companies tend to use technologies that come with support. Once you've dominated your sector, keeping that dominance means a contract to have technology experts on call 24/7 from the people who wrote it.

"We may not have to call RedHat very often, but when we do they know it'll be a weird one."


So what happens when startups turn into market dominators? All that no-support Open Source stuff is still there...

They start investing in business continuity; the form it takes just differs from company to company.

  • Some may make the leap from CentOS to RHEL.
  • Some may contract for 3rd party support for their OSS technologies (such as with 10gen for MongoDB).
  • Some may implement more robust backup solutions.
  • Some may extend their existing high-availability systems to handle large-scale local failures (like datacenter or availability-zone outages).
  • Some may acquire actual Business Continuity Insurance.

Investors may drive adoption of some BC investment, or may actively discourage it. I don't know, I haven't been in those board meetings and can argue both ways on it.

Which one do I prefer?

Honestly, I can work for either style. Lean OSS means a steep learning curve and a strong incentive to become a deep-dive troubleshooter of the platform, which I like to be. Insured means someone has my back if I can't figure it out myself, and I'll learn from watching them solve the problem. I'm easy that way.

In the last month-ish I've had a chance to read about and use two new graphical shells that impact my life:

  • Windows 8 Metro, which I already have on my work-phone
  • Gnome 3

Before I go further I must point out a few things. First of all, as a technologist, change is something I must embrace. Nay, cheer on. Attitudes like, "If it ain't broke, don't fix it," are not attitudes I share, since 'broke' is an ever-moving target.

Secondly, I've lived through just enough change so far to be leery of going through more of it. That does give me some caution about change for change's sake.



As you can probably guess from the lead-up, I'm not a fan of those two interfaces.

They both go in a direction that the industry as a whole is going, and I'm not fond of that direction. This is entirely because I spend 8-12 hours a day earning a living using a graphical shell. "Commoditization" is a battle-cry for change right now, and that means building things for consumers.

The tablet in my backpack and the phones in my pocket are all small, touch-screen devices. Doing large-scale text-entry in any of those, such as writing blog posts, is a chore. Rapid task-switching is doable through a few screen-presses, though I don't get to nearly the window-count I do on my traditional-UI devices. They're great for swipe-and-tap games.

When it comes to how I interact with the desktop, the Windows 7 and Gnome 2 shells are not very different other than the chrome and are entirely keyboard+mouse driven. In fact, those shells are optimized for the keyboard+mouse interaction. Arm and wrist movements can be minimized, which extends the lifetime of various things in my body.

Windows 8 MetroUI brings the touch-screen metaphor to a screen that doesn't (yet) have a touch-screen. Swipe-and-tap will have to be done entirely with the mouse, which isn't a terribly natural movement (I've never been a user of mouse-gesture support in browsers). When I do get a touch-screen, I'll be forced to elevate my arm(s) from the keyboarding level to tap the screen a lot, which adds stress to muscles in my back that are already unhappy with me.

And screen-smudges. I'll either learn to ignore the grime, or I'll be wiping the screen down every hour or two.

And then there is the "One application has the top layer" metaphor, which is a change from the "One window has the top layer" metaphor we've been living with on non-Apple platforms for a very long time now. And I hate it on large screens. Apple has done this for years, which is a large part of why Gnome 3 has done it, and is likely why Microsoft is doing it for Metro.

As part of my daily job I'll have umpty Terminal windows open to various things and several Browser windows as well. I'll be reading stuff off of the browser window as reference for what I'm typing in the terminal windows. Or the RDP/VNC windows. Or the browser windows and the java-console windows. When an entire application's windows elevate to the top it can bury windows I want to read at the same time, which means that my window-placement decisions will have to take even more care than I already apply.

I do not appreciate having to do more work because the UI changed.

I may be missing something, but it appears that the Windows Metro UI has done away with the search-box for finding Start-button programs and control-panel items. If so, I object. If I'm on a desktop, I have a hardware keyboard so designing the UI to minimize keystrokes in favor of swiping is a false economy.

Gnome 3 at least has kept the search box.



In summation, it is my opinion that bringing touch-UI elements into a desktop UI is creating some bad compromises. I understand the desire to have a common UI metaphor across the full range from 4" to 30" screens, but input and interaction methodologies are different enough that some accommodation-to-form-factor needs to be taken.

Why would you use Windows?

This is a question from ServerFault that was there and then was no longer there because it's rampant flame-bait and got mod-hammered. But sometimes flame bait can make for good blog-posts, so here it is. Unattributed since the source no longer exists and I don't want to embarrass the asker.

As someone who has a good amount of experience with basic server setup exclusively on Linux, I'm wondering why anybody would want to use Windows.

I'm not asking this to make it into some snide comment, I just don't see any advantages.

The big things I think I would miss are:

  • SSH access. As far as I know, the only real way to remotely access a Windows service is via RDP or VNC or something similar, which is a lot more work if all I want to do is restart a service.
  • Open source software. From my experience, almost all open source server software is made for Linux. While some, like Apache, can also be run on Windows, a lot of the times it feels like it was added as an afterthought.
  • Easy configuration. I've never used Windows tools, but I love being able to apt-get install libapache2-mod-whatever. While package systems aren't technically part of Linux, most popular distributions use yum or aptitude or some packaging system which makes it a lot easier to handle updates.

Again, I've not used Windows extensively as a server, so please forgive me if some of these points are inaccurate.

A valid question. We had a thread much like this one on the LOPSA mailing list a while ago. And really, to a Linux admin, Windows looks like an expensive, opaque, and above all annoying way of doing what Linux can do in its sleep. That view is very much couched in the observer's biases.

The consensus of the web this year is that if you want to do large scale web-application infrastructures, Linux is where it is at in spades. During my job hunt there were exceedingly few job-postings for Linux admins that mentioned something other than Web or DB duties. Web, DB, load-balancing, routing, orchestration, caching layers, it's all there and very well documented.

So why WOULD you use Windows?

The number one reason I know of...

Because the application you're using requires it.

At WWU we had quite a number of off-the-shelf products that required a Windows server because they were .NET applications. FLOSS versions may exist, but that's not what our users wanted. They wanted the piece of software that they picked out, the one that's kinda standard in their industry, not some half-baked open-source project out of some other university.

Or for my current employer, a number of the key processing tasks we need to do are most accurately accomplished on Windows. The open source versions of these software packages get close enough, but part of what distinguishes us from our competitors is that we get closer than that.

The number two reason...

Because that's what you know.

This was why WWU was running Blackboard on Windows servers, even though it's a Tomcat application at the core. I'm pretty sure the reason for this is because what came before Blackboard was also running on Windows and our Windows admin inherited the new environment, not that the Linux admin said "Not it!" faster than the Windows admin. I know that admin found Linux confusingly opaque and convoluted.

The number three reason...

Because you don't have time/skill to maintain it yourself, and/or you're willing to pay someone else to do it for you.

If that application comes in a box, wrapped in an installation wizard, complete with phone-home abilities to pull updates and notify the vendor (and later you) of problems, then a lot of the effort in keeping that application going has been outsourced to the vendor. Few FLOSS-stack products can do that; they need some skilled time to keep 'em up. To an organization looking to fire-and-forget, this kind of software is really attractive.



Now on to some of the asker's specific concerns regarding remote access, scalability, and software installs. Below the fold.
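On the remote-access point specifically: Windows has had an answer to the "I just want to restart a service" case for a while now in WinRM and PowerShell remoting. A minimal sketch, assuming WinRM is enabled on the target (the server and service names are made up):

    # Restart a service on a remote box over WinRM -- no RDP session required.
    # Assumes Enable-PSRemoting has been run on the target; names are hypothetical.
    Invoke-Command -ComputerName app01.example.com -ScriptBlock {
        Restart-Service -Name Spooler
    }

    # Or drop into an interactive remote shell, much like ssh:
    Enter-PSSession -ComputerName app01.example.com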

An older problem

I deal with some large file-systems. Because of what we do, we get shipped archives with a lot of data in them. Hundreds of gigs sometimes. These are data provided by clients for processing, which we then do. Processing sometimes doubles, or even triples or more, the file-count in these filesystems depending on what our clients want done with their data.

One 10GB Outlook archive file can contain a huge number of emails. If a client desires these to be turned into .TIFF files for legal processes, that one 10GB .pst file can turn into hundreds of thousands of files, if not millions.

I've had cause to change some permissions at the top of some of these very large filesystems. By large, I mean larger in file-count than the big FacShare volume at WWU. As this is a Windows NTFS volume, it has to walk the entire file-system to apply a permissions change made at the top.

This isn't the exact problem I'm fixing, but it's much like what happens at companies where permissions get granted to specific users instead of to groups: that one user goes elsewhere, suddenly all the rights are broken, and it takes a day and a half to get the rights update processed (and heaven help you if it stops half-way for some reason).
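Incidentally, the group-not-user lesson in script form: grant the group once at the top of the tree with inheritance, and from then on it's group-membership changes instead of ACL walks. A minimal sketch, with the path and group name invented:

    # Grant Modify to a group at the top of a tree, inherited by all children.
    # Path and group name are hypothetical.
    $path = 'D:\ClientData'
    $acl  = Get-Acl -Path $path
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule -ArgumentList `
        'CORP\Processing-Team', 'Modify', 'ContainerInherit,ObjectInherit', 'None', 'Allow'
    $acl.AddAccessRule($rule)
    Set-Acl -Path $path -AclObject $acl

You still pay the inheritance walk once when that grant lands, but you never pay it again just because somebody changed jobs.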

Big file-systems take a long time to update rights inheritance. This has been a fact of life on Windows since the NT days. Nothing new here.

But... it doesn't have to be this way. I explain under the cut.

Compatibility mode fun

Last week and this week had me upgrading a certain piece of software that desperately needed it. We're moving from a version released about 2006 to one released this year. The 2006 version (v06 for this blog-post) was installed on a Server 2003 32-bit install, and was written at a time when 64-bit was still a fair piece off in the future for most offices. As you'd expect, v11 was written with 64-bit in mind; heck, it's a .NET 3.5 app, so it should work just peachy.

The problem comes with upgrading this package. Not only am I upgrading several major point revs, but I'm also installing to Server 2008 R2 which is unavoidably 64-bit. Happily, there is a migration path. I have to upgrade to v08 first to deal with a data-conversion routine no longer present in v11, and then move up to v11.

The thing that had me pulling my hair was how to get v06 to install to Server 2008 R2. The migration documentation was pretty clear on how to migrate v06 to a new server:

  1. Install the software to the new server
  2. Export this one reg-key in HKLM\Software on the old server
  3. Stop the services on the new server
  4. Import the .reg file on the new server
  5. Copy over the datafiles to the new server
  6. Start the service.
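In command terms, the data-moving steps boil down to roughly the following. Every name here (servers, vendor key, paths, service) is a placeholder, not the real thing:

    # On the old server: export the vendor's reg-key.
    reg export "HKLM\Software\VendorName" C:\temp\vendor.reg

    # On the new server: stop the service, import the key, copy the data, restart.
    Stop-Service -Name VendorService
    reg import C:\temp\vendor.reg
    robocopy \\oldserver\d$\VendorData D:\VendorData /E /COPYALL
    Start-Service -Name VendorService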
Straightforward! However, there was a problem.

v06 didn't install right to 2008 R2. Whatever it was using for installing the needed service wasn't creating everything it needed to create. Even after I nudged things along, it was pretty clear that all was not right. In the end I had to set the installer executable to run in "Server 2003 SP1" compatibility mode. THAT got it installed, yay.

But the reg-import clearly wasn't giving it the data it needed. It was only after I threw procmon at it to see where it was trying to pull registry data from that I noticed it was reading from somewhere new. Down there in HKLM is a new reg-key for 32-bit applications on 64-bit installs, called [HKLM\Software\Wow6432Node]. Since this was my first time going to the registry level for this kind of pure-32 application, I hadn't run into it before.

Sure enough, the install created the needed nodes under THAT reg-key instead of HKLM\Software like the upgrade doc said. A simple search-and-replace on the exported .REG file, changing "Software\$Vendor" to "Software\Wow6432Node\$Vendor", and a reimport got it all going again.
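For the record, the fix itself is tiny. A rough sketch of the rewrite-and-reimport, with $vendor standing in for the real vendor key name and the file paths made up:

    # Rewrite the exported .reg so the keys land under Wow6432Node, then re-import.
    # $vendor is a placeholder for the real vendor key name; paths are hypothetical.
    $vendor = 'VendorName'
    (Get-Content 'C:\temp\v06-export.reg') -replace "SOFTWARE\\$vendor", "SOFTWARE\Wow6432Node\$vendor" |
        Set-Content 'C:\temp\v06-export-wow64.reg' -Encoding Unicode
    reg import C:\temp\v06-export-wow64.reg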

Yay!

And the upgrade to v08 worked just fine, as did the upgrade to v11. Whew.

Robocopy limitations

Today I ran into a limitation of robocopy that just might have bitten me in the past and I never knew it.

I got pulled aside and asked how a robocopy of a large filesystem could yield no errors in the robocopy log, yet differ in file-count by 6 files. The user was able to identify a directory with a file that got missed, which helped. The files in the directory were named similar to this:

  1. ITWA-National Introduction.doc
  2. ITWA-National Overview.doc
  3. ITWANA~2.DOC
The destination directory had only the first two files, but most interestingly, the second file had the same size as the third file on the source. Hmmmm.

Looking at the short-names of the two directories I noticed something else. In the source directory...

ITWA-National Introduction.doc = ITWANA~1.DOC
ITWA-National Overview.doc = ITWANA~3.DOC
ITWANA~2.DOC

And on the destination directory...

ITWA-National Introduction.doc = ITWANA~1.DOC
ITWA-National Overview.doc = ITWANA~2.DOC

AHA! Robocopy isn't preserving the short-names! When it copied the second document to the destination it allowed the system to auto-generate the short-name, and as that was the second ITWANA document, it got ~2. So when it copied ITWANA~2.DOC in, it overwrote the already existing ITWANA~2.DOC file.
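If you want to see the damage for yourself, the 8.3 names are easy enough to dump from PowerShell via the old FileSystemObject COM interface. A quick sketch (the paths are hypothetical) that lists long-name/short-name pairs on both sides so they can be compared:

    # Dump long-name -> 8.3 short-name pairs for every file in a directory.
    # Uses the Scripting.FileSystemObject COM object; paths are hypothetical.
    function Get-ShortNames {
        param([string]$Path)
        $fso = New-Object -ComObject Scripting.FileSystemObject
        foreach ($file in $fso.GetFolder($Path).Files) {
            "{0,-40} -> {1}" -f $file.Name, $file.ShortName
        }
    }

    Get-ShortNames 'E:\Source\ClientDocs'
    Get-ShortNames 'F:\Dest\ClientDocs'

And if a specific short-name needs to be put back by hand, fsutil file setshortname can do it on a local NTFS volume from an elevated prompt, though that's hardly a bulk solution.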

We're now looking for a tool, preferably scriptable, that'll preserve short-names. I found a couple of 'em, but they're !free. Currently we're laundering things through the Windows backup facility since that preserves ALL attributes. If you know of such a tool, drop a comment.

The historic 'rdesktop' client hasn't been updated in a while, and critically lacks support for a major new feature Microsoft introduced in 2008: Network Level Authentication. NLA requires authentication before the RDP session will even connect. The rdesktop client hasn't been able to support this, so for the servers I wish to RDP to, I have to remember to turn that off.

Happily, the FreeRDP fork of rdesktop now has support for NLA. It's in the Git repo, not the stable branch, but it is there. The next stable version should have support for NLA.

Yay!