Recently in linux Category

Confusion on containers

My dayjob hasn't brought me into much contact with containers, which is becoming a problem. I mean, I've been doing virtualization for a decade and a half at this point so the concepts aren't new. But how containers work these days isn't something I'm all that up on.

So when the OpenSuSE factory mailing list exploded recently with news of what the next versions of what's now Leap would look like, I paid attention. To quote a weekly status update that caused a lot of debate:

we have mostly focused this week's meeting on how the ALP Desktop is going to look like and how we can dispel fears around flatpaks and containers when it comes to the ALP desktop.

They're moving away from a traditional RPM-based platform built on top of the SUSE Linux Enterprise Server (SLES) base, and are doing it because (among other reasons) the Python that ships with SLES is an old, officially unsupported version. Most Leap users want a more recent Python, which the OpenSUSE project can't provide due to a lack of volunteer support to rewrite all the OS-related Python bits.

What they're replacing it with is something called the Adaptable Linux Platform (ALP), which is built in part on the work done in OpenSUSE MicroOS. It will use Flatpak packages (or RPM wrappers around Flatpaks), which is a contentious decision, let me tell you. This debate is what made me look into Flatpak and related tech.

The "hell no" side of the debate is the old-guard sysadmins from my era, who have been doing vulnerability management for a long, long time and know that containers make that work problematic. Building a Configuration Management Database (CMDB) that lists all of your installed packages is a core competency for vulnerability management, because that's the list you consult when yet another OpenSSL vulnerability arrives and you need to see how bad it'll be this time. This work is made far harder when every container you have comes bundled with its own version of the OpenSSL libraries. We now have emerging tooling that will scan inside containers for bad packages, but that's only half the problem; the other half is convincing upstream container providers to actually patch on your vulnerability program's schedule. Many won't.
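
To give a concrete sense of that emerging tooling: scanners like Trivy or Syft (my example picks, not tools the OpenSUSE debate settled on, and the image name below is made up) will walk a container image and inventory what's baked into it.

    trivy image registry.example.com/some-app:latest   # CVE scan of the image contents
    syft registry.example.com/some-app:latest          # SBOM-style package inventory

That covers the "what's installed" half of the problem; the "will upstream patch on your schedule" half stays a people problem.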

On the "well, actually, that's kind of a good idea" side of the debate are the package maintainers. For a smaller project like OpenSUSE, package maintainers need to support packages for a few different distributions:

  • Tumbleweed, the rolling release that is kept up to date at all times
  • The current Leap release, which has base OS libraries getting older every year (and might be old enough to prevent compiling 'newest' without lots of patches)
  • Leap Micro (based on SLES Micro), which assumes a read-only file system, uses Flatpaks, is designed to be container-first, and isn't meant to be a desktop distribution
  • Any additional architectures beyond x86_64/amd64 they want to support (various ARM flavors are in demand, but OpenSUSE also has s390 support)

For some packages this is no big deal. For others that involve touching the kernel in any way, such as VirtualBox, each distribution can have rather different requirements. This adds to the load of supporting things, and consumes more volunteer time.

Flatpak lets you get away with making a single package that will support all the distributions. Quite the labor-saving device for a project strapped for volunteers. Of course, this leads to the usual 54-versions-of-the-OpenSSL-libraries problem, but the distributions can all still be made. That's valuable.
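
From the user side the single-package story looks roughly like this; the remote-add line is Flathub's standard setup command, and Firefox is just a convenient example app:

    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.mozilla.firefox

The same commands work whether the host is Tumbleweed, Leap, or something else entirely, which is exactly the appeal for a volunteer-strapped project.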

This also makes it easier to sandbox desktop utilities. I say sandbox instead of containerize because the key benefit you're getting from this isn't the containers, but the sandboxing that comes with them. I've spent time in the past attempting to build an AppArmor profile for Firefox (it mostly worked, but took too much maintenance to tune). These days you can use systemd user units to apply cgroups to processes and get even more control of what they're allowed to do, including restricting them to filesystem namespaces. This isn't a bad thing.
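
A minimal sketch of what that looks like, assuming a cgroup-v2 system; the resource limits work in user scopes today, while the filesystem-namespace options are service-unit properties and need a kernel that allows unprivileged user namespaces (firefox is just the example app from above; some-tool is a placeholder):

    # wrap a process in its own cgroup scope with resource limits
    systemd-run --user --scope -p MemoryMax=2G -p TasksMax=256 firefox

    # run something as a transient user service with filesystem restrictions
    systemd-run --user -p ProtectHome=read-only -p PrivateTmp=yes some-tool

It's not a full Flatpak-style sandbox, but it's the same basic idea: confine the blast radius of a desktop app without caring how it was packaged.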

Also in the not a bad thing camp is ALP's stance that most of the filesystem will be marked read-only. This improves security, because malicious processes that break out of their sandbox will have a harder time replacing system binaries. Having partitions like /usr mounted as read-only has shown up on hardening guides for a couple of decades at this point.
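
For anyone who hasn't seen those hardening guides, the old trick amounts to something like this (the device path is made up; adjust for your own fstab):

    # /etc/fstab entry keeping /usr read-only at boot
    #   /dev/mapper/system-usr  /usr  ext4  defaults,ro  0  2

    mount -o remount,rw /usr    # flip it writable just long enough to patch
    mount -o remount,ro /usr    # and back when the update finishes

ALP just takes that idea and applies it to most of the filesystem by default instead of leaving it as a hand-applied hardening step.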

The final thing is that the Open Build Service, which creates all the OpenSUSE packages, already has support for creating Flatpaks alongside traditional RPMs. This will make maintenance even easier.

What do I think of all of this?

I'm still making up my mind. I'm going to have to get over no longer having "tarball with some added scripts and metadata" style packages like I've used since the 1990s; that writing is on the wall. We'll still have some of that around, but major application packages are going to get shipped as functional filesystem images even for "base" Linux installs. It won't be all that space-efficient (a package needing modern Python will end up shipping an entire Python 3.10 interpreter in the Flatpak for a SLES server), but that's less important in the days of 1TB microSD cards.

Ubuntu has this idea in their Snaps (Flatpak and Snap are highly similar in goals and features), which have been controversial all by themselves. At DayJob we've had the vulnerability-management conversation regarding Snaps and our ability to introspect into them for managing CVEs, which isn't great yet, and decided not to go there at this time.

All in all, the industry is slowly shifting from treating the list of system packages as the definitive list of software installed on a server towards a far more nuanced view. NPM is famous for building vast software dependency trees, and we've had most of a decade for vulnerability-scanning tooling to catch up to that method. For all that OS package managers were among the first to build dependency resolvers (all that metadata on .deb and .rpm packages is there for a reason), the not-developing-in-C software industry as a whole has taken this approach to whole new levels, which in turn forced vulnerability-management tooling to adapt. We're about at the point where SLES Micro makes a good point: traditional methods for a minimal OS and container-blobs for everything else is just about supportable given today's tooling.

The only constant in this industry is change, and this is feeling like one of those occasional sea changes.

This weekend's project was replacing the home server/router. This isn't a high-spec machine for my internal build tooling; it's pretty much the router and file/print server. Given cloud, it's more of a packet-shoveller than anything else. When I built it, I was going for low power to maximize how long the UPS would last in a power outage, and to increase the chances that I'd get my old IP addresses when the power came back.

mdadm tells me the mirror-set was created Sun Sep 25 22:25:34 2011.
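
For the curious, that date comes straight out of the array metadata; a quick sketch, assuming the mirror is /dev/md0:

    mdadm --detail /dev/md0 | grep 'Creation Time'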

I hadn't quite realized it was that old. As it happened I blogged about it the day after the build, so consider this an update. The system this one replaced was also nine years old, so I guess my replacement cycle is 9 years. I never bothered to get UEFI booting working, which in the end doomed this box. The /boot partition couldn't take another kernel update as of Friday; kernels had just grown too big!

Rather than homebrew the home server, this time I went off-the-shelf and bought a System76 Meerkat. It's a fifth the size of the old server + Drobo box and has as much storage in SSD/NVMe. Also, it's vastly quieter. The fan in the old server had picked up a buzz; once in a while it needed a therapeutic tap to knock it out of vibration. My office is so quiet now (it couldn't live in the basement because the Internet enters the house in my office). Being 9 years newer means it probably draws half the power of the old one, so UPS endurance just shot up. Whee!

Over its 9 year life:

  • Glommed onto hardware from the old MythTV setup, which was the Drobo array. The Drobo is probably 13 or 14 years old by this point.
  • Upgraded one release at a time from OpenSUSE 11.4 all the way to 15.2, until /boot couldn't handle the size of the 15.2-era kernels. It just worked, no surprises; I could do a headless update every time.
  • Moved houses twice, no failures. Even after the move where it was powered off for three days.
  • When we left the land of Verizon FiOS five years ago, it became my internet gateway.
  • Did the work to get IPv6 working, with prefix delegation, which was trickier than I liked and still feels hacky. But it let me do IPv6 in the house without using the ISP router.
  • Two years ago put telegraf/influxdb/grafana on it to track internet gateway usage and a few other details, such as temperature. Which showed rather nicely how big the temperature swings in our house are over the winter.

The hard-drives (a pair of 160GB Western Digital Blacks):

  • 79,874 power-on hours.
  • 50 power-cycles
    • 10 of those were planned maintenance of various types, like moving, painting, and other things.
    • The rest were power-outages of various durations.
  • Zero reallocated sectors on either drive
  • Load_Cycle_Count of 5.4 million, which comes to about 68 load-cycles an hour. Clearly this is an important SMART metric.
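
All of those numbers come out of the SMART attribute table; a sketch of pulling them, assuming the first drive is /dev/sda (attribute names can vary a little between vendors):

    smartctl -A /dev/sda | grep -E 'Power_On_Hours|Power_Cycle_Count|Reallocated_Sector_Ct|Load_Cycle_Count'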

In a hero of the revolution moment, I managed to get enough of the DHCP state transferred between the old and new hardware that we're on the same IPv4 address and IPv6 prefix from before the migration!

Let's see if this one also goes nine years. Check back in 2029.

Systemd dependencies

There is a lot of hate around Systemd in unixy circles. Like, a lot. There are many reasons for this; a short list:

  • For some reason they felt the need to reimplement daemons that have existed for years. And are finding the same kinds of bugs those older daemons found and squashed over a decade ago.
    • I'm looking at you Time-sync and DNS resolver.
  • It takes away an init system that everyone knows and is well documented in both the official documentation sense, and the unofficial 'millions of blog-posts' sense. Blog posts like this one.
  • It has so many incomprehensible edge-cases that make reasoning about the system even harder.
  • The maintainers are steely-eyed fundamentalists who know exactly how they want everything.
  • Because it runs so many things in parallel, bugs we've never had to worry about are now impossible to ignore.

So much hate. Having spent the last few weeks doing a sysv -> systemd migration, I've found another reason for that hate. And it's one I'm familiar with because I've spent so many years in the Puppet ecosystem.

People love to hate on Puppet because of the wacky non-deterministic bugs. The order resources are declared in a module is not the order in which they are applied. Puppet uses a dependency model to determine the order of things, which leads to weird bugs where a thing has worked for two weeks but suddenly stops working because a new change somewhere altered the order of resource application. A large part of why people like Chef over Puppet is that Chef behaves like a scripting language, where the order of the file is the order things are done in.

Guess what? Systemd uses the Puppet model of dependency! This is why it's hard to reason about. And why I, someone who has been handling these kinds of problems for years, haven't spent much time shaking my tiny fist at an uncaring universe. There has been swearing, oh yes. But of a somewhat different sort.

The Puppet Model

Puppet has two kinds of dependency: strict ordering, and "do this if that other thing changed something." Which makes for four ways of linking resources.

  • require => Do this after that other thing.
  • before => Do this before that other thing.
  • subscribe => Do this after that other thing and, if it changed something, refresh this one.
  • notify => Do this before that other thing and, if this changed something, tell it to refresh.

This makes for some real power, while also making the system hard to reason about.

Thing is, systemd goes a step further.

The Systemd Model

Systemd also has dependencies, but it was also designed to run as much in parallel as possible. Puppet was written in Ruby, so it has strong single-threaded tendencies. Systemd is multi-threaded, and multi-threaded systems are harder to reason about in general. Add dependency ordering on top of multi-threaded issues and you get a sheer cliff of learning before you can have a hope of following along. Even better (worse), systemd has more ways of defining relationships.

  • Before= Ordering only: if both this unit and a named unit are being started, this one finishes starting before the named unit is started. On its own it doesn't cause anything else to be started.
  • After= The reverse ordering: this unit waits to start until the named units have finished starting up.
  • Requires= The named units will get started if this one is, and do so at the same time. Not only that, but if the named units are explicitly stopped, this one will be stopped as well. For Puppet-heads, this reads backwards from require.
  • BindsTo= Does everything Requires= does, but will also stop this unit if the named unit stops for any reason, not just explicit stops.
  • Wants= Like Requires=, but less picky. The named units will get started, but this unit doesn't care if they fail to start or fail later.
  • Requisite= Like Requires=, but fails immediately if the named units aren't already started. Think of a mount unit not starting unless its device unit is already active.
  • Conflicts= A negative dependency. Starting the named unit turns this unit off, and starting this unit turns the named unit off.

There are several more I'm not going into. This is a lot, and some of these are orthogonal to each other: ordering and requirement are separate axes. The documentation even says:

It is a common pattern to include a unit name in both the After= and Requires= options, in which case the unit listed will be started before the unit that is configured with these options.

Using both After= and Requires= means that the named units need to get all the way done starting (After=) before this unit is started. And if this unit is started, the named units need to get started as well (Requires=).

Hence, in many cases it is best to combine BindsTo= with After=.

Using both configures a hard dependency relationship. After= means the other unit needs to be all the way started before this one is started. BindsTo= makes it so that this unit is only ever in an active state when the unit named in both BindsTo= and After= is in an active state. If that other unit fails or goes inactive, this one will as well.

There is also a concept missing from Puppet, and that's when the dependency fires. After= and Before= are trailing-edge triggers: they fire on completion, which is the only way Puppet works. Most of the rest are leading-edge triggered, where the dependency is satisfied as soon as the named units start. This is how you get parallelism in an init system, and why the weirder dependencies are often combined with either Before= or After=.
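
To make the common combination concrete, here's a minimal sketch of the After= plus Requires= pattern from the first documentation quote. The unit names and the ExecStart path are made up for illustration; for Puppet-heads, this pair is the closest thing systemd has to a plain require =>.

    cat > /etc/systemd/system/report-worker.service <<'EOF'
    [Unit]
    Description=Hypothetical worker that needs the database up first
    Requires=postgresql.service
    After=postgresql.service
    # Swap Requires= for BindsTo= to also stop this unit whenever the
    # database stops or fails, per the second documentation quote.

    [Service]
    ExecStart=/usr/local/bin/report-worker
    EOF

    systemctl daemon-reload
    systemctl start report-worker.service   # pulls in postgresql and waits for it to finish starting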


Systemd hate will continue for the next 10 or so years; at least until most Linux engineers have been working with it long enough to stop grumbling about how nice the olden days were.

It also means that fewer people will be writing startup services due to the complexity of doing anything other than 'start this after this other thing' ordering.

What my CompSci degree got me

The "what use is a CompSci degree" meme has been going around again, so I thought I'd interrogate what mine got me.

First, a few notes on my career journey:

  1. Elected not to go to grad-school. Didn't have the math for a masters or doctorate.
  2. Got a job in helpdesk, intending to get into Operations.
  3. Got promoted into sysadmin work.
  4. Did some major scripting as part of Y2K remediation, first big coding project after school.
  5. Got a new job, at WWU.
  6. Microsoft released PowerShell.
  7. Performed a few more acts of scripting. Knew I so totally wasn't a software engineer.
  8. Managed to change career tracks into Linux. Started learning Ruby as a survival mechanism.
  9. Today: I write code every day. Still don't consider myself a 'software engineer'.

Elapsed time: 20ish years.

As it happens, even though my career has been ops-focused I still got a lot out of that degree. Here are the big points.

It all began with a bit of Twitter snark:


[image: SmallLAMPStack.png]

Utilities follow a progression. They begin as a small shell script that does exactly what I need it to do in this one instance. Then someone else wants to use it, so I open source it. 10 years of feature-creep pass, and then you can't use my admin suite without a database server, a web front end, and just maybe a worker-node or two. Sometimes bash just isn't enough you know? It happens.

Anyway...

Back when Microsoft was pushing out their 2007 iteration of all of their enterprise software, they added PowerShell support to most things. This was loudly hailed by some of us, as it finally gave us easy scriptability into what had always been a black box with funny screws on it to prevent user tampering. One of the design principles they baked in was that they didn't bother building UI elements for things you'd only do a few times, or would do once a year.

That was a nice time to be a script-friendly Microsoft administrator, since most of the tools would give you their PowerShell equivalents on one of the Wizard pages, so you could learn-by-practical-example a lot easier than you could otherwise. That was a really nice way to learn some of the 'how to do a complex thing in PowerShell' bits. Of course, you still had to learn variable passing, control loops, and other basic programming stuff, but you could see right there what the one-liner was for that next -> next -> next -> finish wizard.

[image: SmallLAMPStack-2.png]

One thing that a GUI gives you is a much shallower on-ramp to functionality. You don't have to spend an hour or two feeling your way around a new syntax in order to do one simple thing, you just visually assemble your bits, hit next, then finish, then done. You usually have the advantage of a documented UI explaining what each bit means, a list of fields you have to fill out, syntax checking on those fields, which gives you a lot of information about what kinds of data a task requires. If it spits out a blob of scripting at the end, even better.

An IDE, tab-completion, and other such syntactic magic help scripters build what they need; but it all relies upon on-the-fly programmatic interpretation of syntax in a script-builder. It's the CLI version of a GUI, so it doesn't have the stigma of 'graphical' ("if it can't be done through bash, I won't use it," said the Linux admin).

Neat GUIs and scriptability do not need to be diametrically opposed things, ideally a system should have both. A GUI to aid discoverability and teach a bit of scripting, and scripting for site-specific custom workflows. The two interface paradigms come from different places, but as Microsoft has shown you can definitely make one tool support the other. More things should take their example.

As I look around the industry with an eye towards further employment, I've noticed a difference of philosophy between startups and the more established players. One easy way to see this difference is on their job postings.

  • If it says RHEL and VMWare on it, they believe in support contracts.
  • If it says CentOS and OpenStack on it, they believe in community support.

For the same reason that tech startups almost never use Windows if they can get away with it, they steer clear of other technologies that come with license costs or mandatory support contracts. Why pay the extra support cost when you can get the same service by hiring extremely smart people and using products with a large peer-support community? Startups run lean, and all that extra cost is... cost.

And yet some companies find that they prefer to run with that extra cost. Some, like StackExchange, don't mind the extra licensing costs of their platform (Windows) because they're experts in it and can make it do exactly what they want it to do with a minimum of friction, which means the Minimum Viable Product gets kicked out the door sooner. A quicker MVP means quicker profitability, and that can pay for the added base-cost right there.

Other companies treat support contracts like insurance: something you carry just in case, as a hedge against disaster. Once you grow to a certain size, business continuity insurance investments start making a lot more sense. Running for the brass ring of market dominance without a net makes sense, but once you've grabbed it keeping it needs investment. Backup vendors love to quote statistics on the percentage of business that fail after a major data-loss incident (it's a high percentage), and once you have a business worth protecting it's good to start protecting it.

This is part of why I'm finding that the long established companies tend to use technologies that come with support. Once you've dominated your sector, keeping that dominance means a contract to have technology experts on call 24/7 from the people who wrote it.

"We may not have to call RedHat very often, but when we do they know it'll be a weird one."


So what happens when startups turn into market dominators? All that no-support Open Source stuff is still there...

They start investing in business continuity, just the form may be different from company to company.

  • Some may make the leap from CentOS to RHEL.
  • Some may contract for 3rd party support for their OSS technologies (such as with 10gen for MongoDB).
  • Some may implement more robust backup solutions.
  • Some may extend their existing high-availability systems to handle large-scale local failures (like datacenter or availability-zone outages).
  • Some may acquire actual Business Continuity Insurance.

Investors may drive adoption of some BC investment, or may actively discourage it. I don't know, I haven't been in those board meetings and can argue both ways on it.

Which one do I prefer?

Honestly, I can work for either style. Lean OSS means a steep learning curve and a strong incentive to become a deep-dive troubleshooter of the platform, which I like to be. Insured means someone has my back if I can't figure it out myself, and I'll learn from watching them solve the problem. I'm easy that way.

A change of direction for OpenSUSE

This morning on the opensuse-factory mailing-list it was pointed out that the ongoing problems getting version 12.2 to a state where it can even be reliably tested were a symptom of structural problems within how OpenSUSE as a whole handles package development. Stephan Kulow posted the following this morning:

Hi,

It's time we realize delaying milestones is not a solution. Instead,
let's use the delay of 12.2 as a reason to challenge our current
development model and look at new ways. Rather than continue to delay
milestones, let's re-think how we work.

openSUSE has grown. We have many interdependent packages in Factory. The
problems are usually not in the packages touched, so the package updates
work. What's often missing though is the work to fix the other packages
that rely on the updated package. We need to do a better job making sure
bugs caused by updates of "random packages" generate a working system.
Very fortunately we have an increasing number of contributors that
update versions or fix bugs in packages, but lately, the end result has
been getting worse, not better. And IMO it's because we can't keep up in
the current model.

I don't remember a time during 12.2 development when we had less than
100 "red" packages in Factory. And we have packages that fail for almost
five months without anyone picking up a fix. Or packages that have
unsubmitted changes in their devel project for six months without anyone
caring to submit it (even ignoring newly introduced reminder mails).

So I would like to throw in some ideas to discuss (and you are welcome
to throw in yours as well - but please try to limit yourself to things
you have knowledge about - pretty please):

1. We need to have more people that do the integration work - this
  partly means fixing build failures and partly debugging and fixing
  bugs that have unknown origin.
  Those will get maintainer power of all of factory devel projects, so
  they can actually work on packages that current maintainers are unable
  to.
2. We should work way more in pure staging projects and less in develop
  projects. Having apache in Apache and apache modules in
  Apache:Modules and ruby and rubygems in different projects may have
  appeared like a clever plan when set up, but it's a nightmare when it
  comes to factory development - an apache or ruby update are a pure
  game of luck. The same of course applies to all libraries - they never
  can have all their dependencies in one project.
  But this needs some kind of tooling support - but I'm willing to
  invest there, so we can more easily pick "green stacks".
  My goal (a pretty radical change to now) is a no-tolerance
  strategy about packages breaking other packages.
3. As working more strictly will require more time, I would like to
  either ditch release schedules all together or release only once a
  year and then rebase Tumbleweed - as already discussed in the RC1
  thread.

Let's discuss things very openly - I think we learned enough about where
the current model works and where it doesn't so we can develop a new one
together.

Greetings, Stephan

The discussion is already going.

The current release schedule is an 8-month schedule, in contrast to a certain other highly popular Linux distro that releases every 6. Before they went to the every-8 schedule, they had a "when it's good and ready" schedule. The "when it's good and ready" schedule led to criticisms that OpenSUSE was a habitually stale distro that never had the latest shiny new toys the other distros had; which, considering the then-increasing popularity of that certain other distro, was a real concern in the fight for relevancy.

Looks like the pain of a hard-ish deadline is now exceeding the pain of not having the new shiny as fast as that other distro.

One thing that does help, and Stephan pointed this out, is the advent of the Tumbleweed release of OpenSUSE. Tumbleweed is a continually-integrated release similar in style to Mint; unlike the regular release it does get regular updates to the newest-shiny, but only after they pass testing. Now that OpenSUSE has Tumbleweed, allowing the mainline release to revert to a "when it's good and ready" schedule is conceivable without threatening to throw the project out of relevance.

These are some major changes being suggested, and I'm sure that they'll keep the mailing-list busy for a couple of weeks. But in the end, it'll result in a more stable OpenSUSE release process.

Moving /tmp to a tmpfs

There is a move afoot in Linux-land to make /tmp a tmpfs file-system. For those of you who don't know what that is, tmpfs is in essence a ramdisk. OpenSUSE is considering the ramifications of this possible move. There are some good points to this move:

  • A lot of what goes on in /tmp are ephemeral files for any number of system and user processes, and by backing that with RAM instead of disk you make that go faster.
  • Since a lot of what goes on in /tmp are ephemeral files, by backing it with RAM you save writes on that swanky new SSD you have.
  • Since nothing in /tmp should be preserved across reboots, a ramdisk makes sense.

All of the above make a lot of sense for something like a desktop oriented distribution or use-case, much like the one I'm utilizing right this moment.

However, there are a couple of pretty large downsides to such a move:

  • Many programs use /tmp as a staging area for potentially-larger-than-ram files.
  • Some programs don't clean up after themselves, which leaves /tmp growing over time.

Working as I do in the eDiscovery industry, where transforming files from one type to another is a common event, the libraries we use to do that transformation can and do drop very big files in /tmp while they're working. All it takes is one bozo dropping a 20MB file with .txt at the end (containing a single SMTP email message with a MIME'd attachment) to generate an eight-thousand-page PDF file. Such a process behaves poorly when told "Out Of Space" for either RAM or /tmp.

And when such processes crash out for some reason, they can leave behind 8GB files in /tmp.

That would not be happy on a 2GB RAM system with a tmpfs style /tmp.

In our case, we need to continue to keep /tmp on disk. Now we have to start doing this on purpose, not just trusting that it comes that way out of the tin.
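
Doing it on purpose isn't much work on a systemd-based box where the tmpfs comes from tmp.mount; a sketch of the usual knobs (the TMPDIR path is just an example):

    # tell systemd not to mount a tmpfs on /tmp at all
    systemctl mask tmp.mount

    # or leave /tmp alone and point the big file-transformation jobs at disk
    export TMPDIR=/var/tmp

Either way the point stands: disk-backed scratch space now has to be a deliberate decision rather than the default.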

Do I think this is a good idea? It has merit on high-RAM workstations, but is a very poor choice for RAM-constrained environments such as VPSes and virtual machines.

Are the benefits good enough to merit coming that way out of the box? Perhaps, though I would really rather that the "server" option during installation default to disk-backed /tmp.

In the last month-ish I've had a chance to read about and use two new graphical shells that impact my life:

  • Windows 8 Metro, which I already have on my work-phone
  • Gnome 3

Before I go further I must point out a few things. First of all, as a technologist change is something I must embrace. Nay, cheer on. Attitudes like, "If it ain't broke, don't fix it," are not attitudes I share, since 'broke' is an ever moving target.

Secondly, I've lived through just enough change so far to be leery of going through more of it. This does give me some caution for change-for-change's-sake.



As you can probably guess from the lead-up, I'm not a fan of those two interfaces.

They both go in a direction that the industry as a whole is going, and I'm not fond of where that's headed. This is entirely because I spend 8-12 hours a day earning a living using a graphical shell. "Commoditization" is a battle-cry for change right now, and that means building things for consumers.

The tablet in my backpack and the phones in my pocket are all small, touch-screen devices. Doing large-scale text-entry in any of those, such as writing blog posts, is a chore. Rapid task-switching is doable through a few screen-presses, though I don't get to nearly the window-count I do on my traditional-UI devices. They're great for swipe-and-tap games.

When it comes to how I interact with the desktop, the Windows 7 and Gnome 2 shells are not very different other than the chrome and are entirely keyboard+mouse driven. In fact, those shells are optimized for the keyboard+mouse interaction. Arm and wrist movements can be minimized, which extends the lifetime of various things in my body.

Windows 8 MetroUI brings the touch-screen metaphor to a screen that doesn't (yet) have a touch-screen. Swipe-and-tap will have to be done entirely with the mouse, which isn't a terribly natural movement (I've never been a user of mouse-gesture support in browsers). When I do get a touch-screen, I'll be forced to elevate my arm(s) from the keyboarding level to tap the screen a lot, which adds stress to muscles in my back that are already unhappy with me.

And screen-smudges. I'll either learn to ignore the grime, or I'll be wiping the screen down every hour or two.

And then there is the "One application has the top layer" metaphor, which is a change from the "One window has the top layer" metaphor we've been living with on non-Apple platforms for a very long time now. And I hate it on large screens. Apple has done this for years, which is a large part of why Gnome 3 has done it, and is likely why Microsoft is doing it for Metro.

As part of my daily job I'll have umpty Terminal windows open to various things and several Browser windows as well. I'll be reading stuff off of the browser window as reference for what I'm typing in the terminal windows. Or the RDP/VNC windows. Or the browser windows and the java-console windows. When an entire application's windows elevate to the top it can bury windows I want to read at the same time, which means that my window-placement decisions will have to take even more care than I already apply.

I do not appreciate having to do more work because the UI changed.

I may be missing something, but it appears that the Windows Metro UI has done away with the search-box for finding Start-button programs and control-panel items. If so, I object. If I'm on a desktop, I have a hardware keyboard so designing the UI to minimize keystrokes in favor of swiping is a false economy.

Gnome 3 at least has kept the search box.



In summation, it is my opinion that bringing touch-UI elements into a desktop UI is creating some bad compromises. I understand the desire to have a common UI metaphor across the full range from 4" to 30" screens, but input and interaction methodologies are different enough that some accommodation-to-form-factor needs to be taken.

Since I had some time this weekend and actually had my work laptop home, I decided to take the plunge and get OpenSUSE 12.1 onto it. I've been having some instability issues that trace to the kernel, so getting the newer kernel seemed like a good idea.

Oops.

The upgrade failed to take for two key reasons:

  1. VMWare Workstation doesn't yet work with the 3.1 kernel.
  2. Gnome 3 and my video card don't get along in a multi-monitor environment.

The first is solvable with a community patch that patches the VMware kernel modules to work with 3.1. The downside is that every time I launch a VM I get the "This is running a newer kernel than is known to work" message, and that gets annoying. It's also rather fragile, since I'd have to re-patch every time a kernel update comes down the pipe (not to mention breaking any time VMware releases a new version).

The second is where I spent most of my trouble-shooting time. I've been hacking on X since the days when you had to hand-roll your own config files, so I do know what I'm looking at in there. What I found:

  • When X was correctly configured to handle multi-monitors the way I want, the Gnome shell wouldn't load.
  • When the Gnome shell was told to work right, GDM wouldn't work.

Given those two, and how badly I need more screen real-estate at work, I'm now putting 11.4 back on the laptop.