July 2009 Archives

Datacenter environment

We're having a major heat-wave. The Sea-Tac airport set an all-time heat record yesterday: 103 degrees. Bellingham too; the old record of 94, set in 2007, was surpassed by yesterday's reading of 96. Today is cooler, but still well above average for out here.

Much as I was tempted to show up for work today in a tank top, shorts, and flip-flops, I resisted. First of all, I had a meeting up on campus with some executive types above my level, so I had to keep up appearances. Also, flip-flops aren't that good for hiking the mile-plus to campus.

Of course, today is a day when I get to do a surprise server rebuild in the datacenter! I just spent the last hour standing on a vent tile setting up a server reformat. While I'm not wearing flip-flops, I am wearing shorts, and an hour over that chilled air left me cold. So I went for a walk around the building to warm up, which worked admirably.

Happily, since we have a datacenter in the building, the building itself has AC. Not all buildings here do. In fact, the building I had that meeting in has no AC at all, just some moving air.

We have enough AC in the datacenter that the room isn't any hotter today than it gets in mid-January. That's nice to have.

One of the perks of working here

Even though WWU is located smack in the middle of Bellingham, WA, it abuts an arboretum: the Sehome Hill Arboretum. It so happens that the shortest-by-distance pedestrian route between my office and campus takes me through it on the foot-paths. Here is the trail guide.

Even though today is going to get very hot for up here, I still walked up and back this morning to do some work on printing in the labs by way of Microsoft/PCounter. Along the way, I ran into this:
Tree arch in the Sehome Arboretum
The trail I was on crossed this road.

The trail itself is one of the main paths between the Birnam Wood dorms and campus. During regular session there is a fair amount of traffic on this trail. I think I was the first one down it today, as I ran into more than a few spider webs. Also? Even though this route is hillier than the slightly longer one, it's a good 5-7 degrees F cooler under the trees than by the main road.

It was a nice walk. I'll do it again tomorrow to head up to a meeting on campus. I believe that meeting is in a building that currently lacks AC. That could be a very sweaty meeting.

Service delivery in .EDU-land

Matt of standalone-sysadmin fame asked:
I take it from the terminology ("fall quarter") that you work at a university.

How often do you re-engineer your infrastructure, or roll out new servers? Do you align them to the school quarters? I'm interested in knowing how other people make decisions on roll-outs.
Until a couple weeks ago, this blog was hosted on a server named "myweb.facstaff.wwu.edu," which should give you a real good idea of where I work ;). So yes, a university. We're also on quarters, not semesters, so our school year is a bit different from schools that have three terms a year instead of four.

For critical systems, anything that requires disruptive downtime of more than a few hours gets kept to the times we're not teaching. We have on the order of 21,000 actual students kicking around (the FTE count is much smaller; we have a lot of part-timers), so outages get noticed. We have students actively printing and handing homework in to Blackboard at 4am, so 'dark of night' is only good for so many things.

The biggest window is the summer intersession, which this year runs from 8/25 at noon (when grades are due from faculty) to roughly 9/18 (when students start moving into the dorms); it's reserved for the big and disruptive projects. Things like completely migrating every file we have to new hardware, upgrading the ERP system we use (SCT Banner), replacing the router core, upgrading our SAN-based disk-arrays, or upgrading Blackboard. Winter break and spring break are the other times during the year when this kind of activity can take place.

Winter break has a couple weeks to work with, but we're generally rather short-staffed during that period, so we try not to do big stuff. Spring break is just a few days, so only things like a quick point-level upgrade to Blackboard can be done; nothing that requires extensive testing, validation, or data conversion. Summer intersession is where the big heavy lifting takes place, and we do try to work our various vacations around that part of the year.

But we can and do roll new stuff out during session. It's a lot easier if the new thing doesn't disrupt established workflows, or just adds functionality to something people are already using. Anything student-visible gets extra scrutiny, since the potential for massive amounts of work on the part of our helpdesk is a lot higher. A lot of our decisions take significant input from the question, "How much extra work will our helpdesk experience as a result of this change?"

Also, the work varies. Some years we have a lot going on in the summer. This year we only have the one major project. In years when we have a lot going on, we've started planning the summer project season as early as March. Some things, like the router core update and the Banner updates, are known about 18 months or more in advance due to budgeting requirements. Other things, like Blackboard updates and oddly enough this Novell -> Windows migration project, aren't really committed to until May or later.

As for determining when what gets updated or upgraded, that starts with the maintainers of the application, infrastructure, or hardware in question. Due to the budget cycle, big-ticket items are generally known about very far in advance of the actual project implementation stage. Everything eventually falls into the project-coordination sphere, which is a very large part of the Technical Services Manager's job (you too can be my new boss! But wouldn't THAT be awkward?). The TS Manager coordinates with the Academic Computing director and the Administrative Computing director, as well as the Vice Provost of course, to mutually set priorities and allocate resources.

p.s.: The Technical Services page for Organization Size is horribly, horribly wrong. We have more servers than that for both MS and Linux, fewer NetWare servers, by now fewer Unix servers, and way more disk space than that.
TWO Updates!

One for Visual Studio, and a second for Internet Explorer. Due to the relative lack of IE use on our servers (okay, downright zero), we'll probably not hustle this one out the door. Our developers, on the other hand, should pay attention.

Migrations

We'll be getting our hardware early next week. Still no ETA on the storage we need to manage things.

Also, Server 2008 R2 releases on the 14th, which lets us pick which OS we want to run; Microsoft just made it in under the wire for us to consider that rev for fall quarter. Server 2008 R2 has a lot of new stuff in it, such as even more PowerShell coverage (yes, it really is all that and a bag of chips) and PowerShell 2.0! "Printer Driver Isolation" should help minimize spooler faults, something I fear already even though I'm not running Windows Printing in production yet. Built-in MPIO gets improvements as well, something that is very nice to have.
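
For a taste of what PowerShell 2.0 buys us: it adds remoting, so one console can run a command across a pile of servers. A quick sketch, with made-up server names, assuming WinRM has been enabled on the targets (Enable-PSRemoting):

# PowerShell 2.0 remoting sketch. Server names are hypothetical.
$servers = "FS1", "FS2", "PRINT1"

# Run the same command on every server; results come back as objects
# tagged with the PSComputerName they came from.
Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-Service -Name Spooler
} | Select-Object PSComputerName, Status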

And yet... if we don't get it in time, we may not do it for the file serving cluster. Especially if pcounter doesn't work with the rebuilt printing paths for some reason. Won't know until we get stuff.

But once we get a delivery date for the storage pieces, we can start talking actual timelines beyond, "we hope to get it all done by fall start." Like, which weekends we'll need to block out to babysit the data migrations.

Fixing links and history

I just went through the 1072 past posts to this blog looking for links in posts to earlier posts. I do that a lot, it seems. It took a LONG time. I do wonder how many words I've committed to this blog in the five and a half years I've been doing it. There are some long essays back there! Also, I started back when Blogger didn't have:
  • Post-pages, the per-post link for direct linking to posts
  • Tags, or labels as they call it
  • Subjects, though it may have been there and I didn't elect to use it.
I seem to have covered, "the future of [netware|novell]" a lot (5/31/05, 11/9/05, 4/12/06 and that's just the posts with that as the title). There are a few other recurring themes as well. It's always interesting to look back like that.

Pcounter for AD is cool

Part of the migration project involves moving the print environment over to Windows. One thing is clear: pcounter makes Windows printing actually work. We haven't had a chance to do anything like scale testing to see how stable it is; that'll sadly have to wait until we get enough traffic (i.e. in production).

One of the ways that pcounter helps Windows printing actually work is by rationalizing the print-pooling setup. Under standard Windows printing, print pooling is not a load-balancing configuration: additional printers are only used when the first printer is busy with jobs, which means one printer gets the majority of the print jobs. PCounter does true round-robin to spread the wear-n-tear.
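
To make the difference concrete, here is a sketch of the two selection strategies. This illustrates the behavior, it is not pcounter's actual code, and the printer names are made up:

# Hypothetical pool of three lab printers.
$printers = @("LAB-PRN-1", "LAB-PRN-2", "LAB-PRN-3")

# Standard Windows pooling: take the first printer that isn't busy,
# so LAB-PRN-1 ends up doing most of the work.
function Select-FirstAvailable ($busyPrinters) {
    foreach ($p in $printers) {
        if ($busyPrinters -notcontains $p) { return $p }
    }
}

# Round-robin (the pcounter behavior): rotate through the list so the
# jobs, and the wear-n-tear, spread evenly across all three.
$script:rrIndex = 0
function Select-RoundRobin {
    $p = $printers[$script:rrIndex % $printers.Count]
    $script:rrIndex++
    return $p
}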

Pcounter does this by creating a new printer-port type called pcounter. This allows it to get creative with how it handles talking to printers.

PCounter ports display

Hitting Configure brings up the pcounter port config dialog, which you can also get at through the pcounter tools themselves.

PCounter port config dialog

There are several port types here: three ways to talk to printers directly (direct IP, LPR, and direct-attach) and two virtual methods. "Load Balance" does what you expect: you pick a number of already-configured pcounter ports to put into the list, and any jobs submitted to this new port print in a round-robin fashion to the printers in that list. The "OtherPrinter" option allows you to print through a single non-pcounter printer. This last option could be useful if you had some kind of print-to-PDF software on the server that you wanted to regulate through pcounter.

The other nifty thing we're getting into is setting up a web-based release station. We could have done this in the past with NetWare, but never got around to it. It has a lot of features we won't use (such as client-codes), but it does what we want and adds a nice feature: quota listing.

Web release web-page

That sort of quota list is a very nice thing. One thing that pcounter does NOT do on Windows, which it did do with eDirectory, is put the quota as an attribute on the user object. Instead it uses a custom database. Unfortunately, this means the quota is no longer LDAP-accessible, which will have some implications for some of our internal tools.

Cool stuff. I'm still working with ATUS to come up with a config everyone likes, so we'll see where we end up come fall start.

Digesting Novell financials

It's a perennial question: "Why would anyone use Novell any more?" It typically comes from people who only know Novell as "that NetWare company," or perhaps "the company we replaced with Exchange." These are the same people who are convinced Novell is a dying company that just doesn't know it yet.

Yeah, well. Wrong. Novell managed to turn the corner and wean themselves off of the NetWare cash-cow. Take the last quarterly statement, which you can read in full glory here. I'm going to excerpt some bits, but it'll get long. First off, their description of their market segments. I'll try to include relevant products where I know them.

We are organized into four business unit segments, which are Open Platform Solutions, Identity and Security Management, Systems and Resource Management, and Workgroup. Below is a brief update on the revenue results for the second quarter and first six months of fiscal 2009 for each of our business unit segments:


Within our Open Platform Solutions business unit segment, Linux and open source products remain an important growth business. We are using our Open Platform Solutions business segment as a platform for acquiring new customers to which we can sell our other complementary cross-platform identity and management products and services. Revenue from our Linux Platform Products category within our Open Platform Solutions business unit segment increased 25% in the second quarter of fiscal 2009 compared to the prior year period. This product revenue increase was partially offset by lower services revenue of 11%, such that total revenue from our Open Platform Solutions business unit segment increased 18% in the second quarter of fiscal 2009 compared to the prior year period.

Revenue from our Linux Platform Products category within our Open Platform Solutions business unit segment increased 24% in the first six months of fiscal 2009 compared to the prior year period. This product revenue increase was partially offset by lower services revenue of 17%, such that total revenue from our Open Platform Solutions business unit segment increased 15% in the first six months of fiscal 2009 compared to the prior year period.

[sysadmin1138: Products include: SLES/SLED]


Our Identity and Security Management business unit segment offers products that we believe deliver a complete, integrated solution in the areas of security, compliance, and governance issues. Within this segment, revenue from our Identity, Access and Compliance Management products increased 2% in the second quarter of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 45%, such that total revenue from our Identity and Security Management business unit segment decreased 16% in the second quarter of fiscal 2009 compared to the prior year period.

Revenue from our Identity, Access and Compliance Management products decreased 3% in the first six months of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 40%, such that total revenue from our Identity and Security Management business unit segment decreased 18% in the first six months of fiscal 2009 compared to the prior year period.

[sysadmin1138: Products include: IDM, Sentinel, ZenNAC, ZenEndPointSecurity]


Our Systems and Resource Management business unit segment strategy is to provide a complete “desktop to data center” offering, with virtualization for both Linux and mixed-source environments. Systems and Resource Management product revenue decreased 2% in the second quarter of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 10%, such that total revenue from our Systems and Resource Management business unit segment decreased 3% in the second quarter of fiscal 2009 compared to the prior year period. In the second quarter of fiscal 2009, total business unit segment revenue was higher by 8%, compared to the prior year period, as a result of our acquisitions of Managed Object Solutions, Inc. (“Managed Objects”) which we acquired on November 13, 2008 and PlateSpin Ltd. (“PlateSpin”) which we acquired on March 26, 2008.

Systems and Resource Management product revenue increased 3% in the first six months of fiscal 2009 compared to the prior year period. The total product revenue increase was partially offset by lower services revenue of 14% in the first six months of fiscal 2009 compared to the prior year period. Total revenue from our Systems and Resource Management business unit segment increased 1% in the first six months of fiscal 2009 compared to the prior year period. In the first six months of fiscal 2009 total business unit segment revenue was higher by 12% compared to the prior year period as a result of our Managed Objects and PlateSpin acquisitions.

[sysadmin1138: Products include: The rest of the ZEN suite, PlateSpin]


Our Workgroup business unit segment is an important source of cash flow and provides us with the potential opportunity to sell additional products and services. Our revenue from Workgroup products decreased 14% in the second quarter of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 39%, such that total revenue from our Workgroup business unit segment decreased 17% in the second quarter of fiscal 2009 compared to the prior year period.

Our revenue from Workgroup products decreased 12% in the first six months of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 39%, such that total revenue from our Workgroup business unit segment decreased 15% in the first six months of fiscal 2009 compared to the prior year period.

[sysadmin1138: Products include: Open Enterprise Server, GroupWise, Novell Teaming+Conferencing]

The reduction in 'services' revenue is, I believe, a reflection of companies' decreased willingness to pay Novell for consulting services. Novell has also changed how it advertises its consulting services, which seems to have had an impact. That's the economy for you. The raw numbers:


Three months ended April 30 (in thousands; columns are net revenue, gross profit, operating income/loss):

                                        ---------- 2009 ----------   ---------- 2008 ----------
                                        Net rev.   Gross pft  Op. inc.(loss)   Net rev.   Gross pft  Op. inc.(loss)
Open Platform Solutions                 $  44,112  $  34,756  $  21,451        $  37,516  $  26,702  $  12,191
Identity and Security Management           38,846     27,559     18,306           46,299     24,226     12,920
Systems and Resource Management            45,354     37,522     26,562           46,769     39,356     30,503
Workgroup                                  87,283     73,882     65,137          105,082     87,101     77,849
Common unallocated operating costs              -     (3,406)  (113,832)               -     (2,186)  (131,796)
Total per statements of operations      $ 215,595  $ 170,313  $  17,624        $ 235,666  $ 175,199  $   1,667

Six months ended April 30 (in thousands; same columns):

                                        ---------- 2009 ----------   ---------- 2008 ----------
                                        Net rev.   Gross pft  Op. inc.(loss)   Net rev.   Gross pft  Op. inc.(loss)
Open Platform Solutions                 $  85,574  $  68,525  $  40,921        $  74,315  $  52,491  $  24,059
Identity and Security Management           76,832     52,951     35,362           93,329     52,081     29,316
Systems and Resource Management            90,757     74,789     52,490           90,108     74,847     58,176
Workgroup                                 177,303    149,093    131,435          208,840    173,440    155,655
Common unallocated operating costs              -     (7,071)  (228,940)               -     (4,675)  (257,058)
Total per statements of operations      $ 430,466  $ 338,287  $  31,268        $ 466,592  $ 348,184  $  10,148

So, yes. Novell is making money, even in this economy. Not lots, but at least they're in the black. Their biggest growth area is Linux, which is making up for deficits in other areas of the company, especially the sinking 'Workgroup' area. Once upon a time, 'Workgroup' constituted over 90% of Novell's revenue.
Revenue from our Workgroup segment decreased in the first six months of fiscal 2009 compared to the prior year period primarily from lower combined OES and NetWare-related revenue of $13.7 million, lower services revenue of $10.5 million and lower Collaboration product revenue of $6.3 million. Invoicing for the combined OES and NetWare-related products decreased 25% in the first six months of fiscal 2009 compared to the prior year period. Product invoicing for the Workgroup segment decreased 21% in the first six months of fiscal 2009 compared to the prior year period.
Which is to say, companies dropping OES/NetWare constituted the large majority of the losses in the Workgroup segment. Yet that loss was almost wholly made up by gains in other areas. So yes, Novell has turned the corner.
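
To put numbers on that: $13.7 million + $10.5 million + $6.3 million = $30.5 million, which is nearly all of the Workgroup segment's six-month slide from $208.8 million to $177.3 million (a $31.5 million drop, per the table above).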

Another thing to note in the section about Linux:
The invoicing decrease in the first six months of 2009 reflects the results of the first quarter of fiscal 2009 when we did not sign any large deals, many of which have historically been fulfilled by SUSE Linux Enterprise Server (“SLES”) certificates delivered through Microsoft.
Which is pretty clear evidence that Microsoft is driving a lot of Novell's Operating System sales these days. That's quite a reversal, and a sign that Microsoft is officially more comfortable with this Linux thing.

Powershell and ODBC

One nice thing about PowerShell is that it can talk to databases without a predefined ODBC connection. That makes scripts a lot more portable! I approve. However, I had trouble finding out how to set up a connection and read data, so here is what I have.

##### Key variables
$SQLServerName="sqlserver"
$SQLDatabase="YourDatabaseInTheServer"

##### Start the database connection and set up environment
# A DSN-less connection string, so no predefined ODBC connection is needed.
$DbString="Driver={SQL Server};Server=$SQLServerName;Database=$SQLDatabase;"
$DBConnection=New-Object System.Data.Odbc.OdbcConnection
$DBCommand=New-Object System.Data.Odbc.OdbcCommand
$DBConnection.ConnectionString=$DbString
$DBConnection.Open()
$DBCommand.Connection=$DBConnection

##### INSERT a row: ExecuteNonQuery() returns the number of rows affected.
# $MBServer and $MBStore are set earlier in the larger script this came from.
$InsertStatement="INSERT INTO Mbox_DB (MBServer, MBStore) VALUES ('$MBServer', '$MBStore')"
$DBCommand.CommandText=$InsertStatement
$DBResult=$DBCommand.ExecuteNonQuery()

##### SELECT a single value: ExecuteScalar() returns the first column of the
##### first row. Note the single-quotes; string values in the WHERE clause need them.
$SelectStatement="SELECT MBDBID FROM Mbox_DB WHERE (MBServer='$MBServer') AND (MBStore='$MBStore')"
$DBCommand.CommandText=$SelectStatement
$DBResult=$DBCommand.ExecuteScalar()

##### SELECT a result-set: ExecuteReader() returns a DataReader, which loads
##### neatly into a DataTable for row-by-row handling.
$Date=Get-Date
$PurgeDate=$Date.AddMonths(-3)
$SelectStatement="SELECT Users FROM LastLogon WHERE (DateTime < '$PurgeDate')"
$DBCommand.CommandText=$SelectStatement
$DBResult=$DBCommand.ExecuteReader()
$UserTable=New-Object System.Data.DataTable
$UserTable.Load($DBResult)

##### Clean up when done.
$DBResult.Close()
$DBConnection.Close()

Yes, this is part of a larger script I'm writing. When that finishes, I'll probably post it too.
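
One caveat about building SQL strings by variable expansion, as above: the quoting is fiddly, and anything user-supplied can break the statement. ODBC also supports parameterized queries with positional "?" placeholders, which sidestep both problems. A minimal sketch against the same hypothetical Mbox_DB table:

# Parameterized version of the single-value SELECT. The "?" markers are
# filled in order; the parameter names are just labels as far as ODBC cares.
$DBCommand.CommandText="SELECT MBDBID FROM Mbox_DB WHERE MBServer = ? AND MBStore = ?"
[void]$DBCommand.Parameters.AddWithValue("@MBServer", $MBServer)
[void]$DBCommand.Parameters.AddWithValue("@MBStore", $MBStore)
$DBResult=$DBCommand.ExecuteScalar()
$DBCommand.Parameters.Clear()
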
VMWare Workstation performance

I have VMWare Workstation installed on my workstation. It is very handy. We have an ESX cluster, so I could theoretically export VMs I work up on my machine directly to the ESX hosts. I haven't done that yet, but it is possible.

Unfortunately, I've run into several performance problems since I installed it. The base system as it started, right after I switched from Xen:
  • OpenSUSE 10.2, 64-bit
  • The VM partition is running XFS
  • Intel dual core E6700 processor
  • 4GB RAM
  • 320GB SATA drives
The system as it exists now:
  • OpenSUSE 11.0 64-bit
  • The VM Partition is running XFS, with these fstab settings: logbufs=8,noatime,nodiratime,nobarrier
  • The XFS partition was reformatted with lazy-count=1, and log version 2
  • Intel quad core Q6700
  • 6GB RAM (would have been 8, but I somehow managed to break one of my RAM sockets)
  • 320GB SATA drives, with the 'deadline' scheduler set for the disk with the VM partition on it.
It still doesn't perform that great. I've done enough system monitoring to know that I'm I/O bound. I hear ext4 is supposed to be better at this than XFS, so I just might go there when openSUSE 11.2 drops. One of these would go a long way toward fixing this problem, but I don't think I'll be able to get the funding for it.

Where DIY belongs

The question of, "When should you build it yourself and when should you get it off the shelf?" is one that varies from workplace to workplace. We heard several different variants of it when we were interviewing for the Vice Provost for IT last year. Some candidates only did home-brew when no off-the-shelf package was available; others looked at the total cost of both and chose from there. This is a nice proxy question for, "What is the role of open source in your environment?", as it happens.

Backups are one area where duct tape and baling wire are to be discouraged most emphatically.

And now, a moment on tar. It is a very versatile tool, and it is what a lot of unixy backup packages are built around. The main problem with backup and restore is not getting data to the backup medium; it is keeping track of what data is on which medium. Also, in these days of backup-to-disk, de-duplication is in the mix, and that's something tar can't do yet. So while you can build a tar-and-bash backup system from scratch without paying a cent, it will be lacking in certain very useful features.

Also? Tar doesn't work nearly as well on Windows.

Your backup system is one area where you really do not want to invest a lot of developer creativity. You need it to be bulletproof, fault tolerant, able to handle a variety of data-types, and easy to maintain. Even the commercial packages fail some of these points some of the time, and home-brew systems fall apart much more often on these counts. The big backup boys have agents that allow backups of Oracle DBs, Linux filesystems, Exchange, and Sharepoint all to the same backup system; a home-brew application would have to get very creative to do the same thing, and the problem gets even worse when it comes to restore.

Disaster Recovery is another area in which duct tape and baling wire are to be discouraged most emphatically.

There are battle-tested open-source packages out there that will help with this (DRBD, for one), depending on your environment. They're even widely used, so finding someone to replace the sysadmin who just had a run-in with a city bus is not that hard. Rsync can do a lot as well, so long as the scale is small. Most single systems can have something cobbled together.

Problems arise when you start talking about Windows, very complex installations, or tight money. If you throw enough money at a problem, most disaster recovery problems become a lot less complex; there is a lot of industry investment in DR infrastructure, so the tools are out there. Doing it on a shoe-string means that your disaster recovery also hangs by a shoe-string. If you're doing DR just to satisfy your auditors and don't plan on ever actually using it, that's one thing. But if you really expect to recover from a major disaster on that shoe-string, you'll be sorely surprised when the string snaps.

Business Continuity is an area where duct tape and baling wire should be flatly refused.

BC is in many ways DR with a much shorter recovery time. If you had problems getting your DR funded correctly, BC shouldn't even be on the timeline. Again, if it is just so you can check a box on some audit report, that's one thing. Expecting to run on such a rig is quite another.

And finally, if you do end up cobbling together backup, disaster recovery, or business continuity systems from their component parts, testing the system is even more important. In many cases testing DR/BC takes a production outage of some kind, which makes it hard to schedule tests. But testing is the only way to find out if your shoe-string can stand the load.

Email reputation

One of the hot new things in anti-spam technology is something that's rather old. Yes, the Realtime Blackhole List is back. Only these RBLs aren't the old-school DNS servers of yesteryear; these RBLs are maintained by the big anti-spam vendors and are completely proprietary. The new name is "IP reputation," and that's what shows up on the marketing glossies.

The idea is that you deploy a network of sensors (say, every anti-spam appliance you ship, or software package installed) that relay spam/ham information back to home base. Home base then builds a profile of the behaviors of the incoming IP connections. Once certain completely proprietary thresholds are crossed, the anti-spam vendor publishes that particular IP address's reputation to their service. The installed base then queries the reputation service on every incoming TCP connection to see how to handle that connection.

The response varies from vendor to vendor, but includes:
  • Outright blocking. Do not accept traffic from this IP address. The connection is terminated before any SMTP commands can be issued. Do not pass EHLO. Do not collect 220-ESMTP.
  • Defer. Issue a 421 error message. Smart mailers will attempt redelivery later. Bots are generally too stupid to try this and just pass on to the next address on their list.
  • Throttle. Get very slow in accepting mail. Take a long time to issue 250-Ready statuses after SMTP commands.
The nice thing about IP reputation is that it is fast and cheap. Instead of having to lexically scan every incoming email for spamminess, you can just look at the source's reputation and block a very large percentage of messages. When we turned this on for our spam product a while back, the reputation filter blocked between 90% and 95% of all messages ultimately blocked as spam. Clean email is the single most expensive mail to pass, since it has to go through every single stage of the spam/ham test pipeline, and blocking things earlier in the pipeline is a good way to shed load.
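
The proprietary services hide their internals, but the mechanics descend from the old DNS-based RBLs, which are easy to illustrate: reverse the connecting IP's octets, append the list's zone, and do a DNS lookup. A listed address resolves; an unlisted one doesn't. A sketch, with a made-up zone name and a documentation-range address:

# Old-school DNSBL check, for illustration only. "rbl.example.net" is a
# placeholder zone, not a real reputation service.
$ip   = "192.0.2.15"
$zone = "rbl.example.net"

# 192.0.2.15 becomes 15.2.0.192.rbl.example.net
$octets = $ip.Split(".")
[array]::Reverse($octets)
$query = [string]::Join(".", $octets) + "." + $zone

try {
    # A successful resolution means the IP is on the list; the returned
    # address typically encodes the reason for the listing.
    $answer = [System.Net.Dns]::GetHostAddresses($query)
    Write-Output "LISTED: $ip -> $($answer[0])"
} catch {
    Write-Output "Not listed: $ip"
}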

Not all optimizations are without side effects, and this one wasn't. The former student email server, titan, got itself 'greylisted' due to spam quantities. Around 50% of the message traffic into Exchange from this system was ultimately blocked as spam, according to the old anti-spam appliances we had (we'd routed its mail through the 'outbound' queue on those appliances so it wouldn't be subject to reputation tests, but would still be scanned). As part of the migration of student email to OutlookLive.Edu, we set up forwards from the old cc.wwu.edu addresses to the new addresses. The spam-checkers on titan were of poor enough quality that enough spam got through to cause OutlookLive to start greylisting titan, causing mail to really back up on it.

That's not the only thing. Certain mailers managed by departments other than ITS here at WWU have managed to get themselves greylisted or outright blacklisted on these proprietary reputation lists. The one common denominator we've found is that certain UNIXy mailers do not apply their anti-spam processes to mail handled by a .forward, at least not without specific config telling them to scan that traffic. So if a person on one of these mailers has a .forward sending all mail into Exchange, the full spam-filled feed heads to Exchange and the reputation of that mailer gets dinged.

Which is a long way of saying that, ahem:

In this era of IP reputation, outbound spam filtering is now just as required as inbound.

Really. Go do it. It'll help prevent blacklistings, which suck for anyone subjected to them.

GroupWise@Home

Tired of Thunderbird's quirks? Want something else but don't like either Evolution or Outlook? You have another option.

GroupWise@Home!

As near as I can figure, this is a GroupWise client tweaked for use without a GroupWise server. This'll allow you to do IMAP/POP email and have all those other nifty GroupWise features like richly featured rules. I haven't tried it myself, but I am sorely tempted.

Google and Microsoft square off

As has been hard to avoid lately, Google has announced that it's releasing an actual operating system. And for hardware you can build yourself, not just on your phone. Some think this is the battle of the titans we've been expecting for years. I'm... not so sure of that.

Under the hood, the new Google OS (called Chrome, just to make things confusing) is Linux. They created "a new windowing system on top of a Linux kernel." This might be a replacement for X-Windows, or it could just be a replacement for Gnome/KDE.

To my mind, this isn't the battle of the titans some think it is. Linux on the net-top has been around for some time, long enough for a separate distribution to gain some traction (hello, Moblin). What Google brings to the party that Moblin does not is its name. That alone will drive up adoption, regardless of how nice the user experience ends up being.

And finally, this is a distribution aimed at the cheapest (but admittedly fastest growing) segment of the PC market: sub-$500 laptops. Yes, Chrome could further chip away at the Microsoft desktop lock-in, but so far I have yet to see anything about Chrome that could actually do something significant about that. Chrome is far more likely to chip away at the Linux market-share than it is the Windows market-share, since it shares an ecosystem with Linux.

Microsoft is not quaking in its boots over this announcement. With the release of Android, it was pretty clear that a move like this was very likely. Microsoft itself has admitted that it needs to do better in the "slow but cheap" hardware space, and it's already trying to compete there. Chrome will be another salvo from Google, but it won't make a hole below the waterline.