March 2006 Archives

The migration threshold

| 1 Comment
Novell is having a problem. We've known this for about 8 years now, but there we are. The future of operating systems at Novell is SLES and OES. Not NetWare. This, they tell us, doesn't matter much since OES looks so much like NetWare to end-users that it isn't much of a problem.

But this overlooks a sad fact of the marketplace. Take a look at this chart I drew up, 'leet drafting skills and all:

What you see here is a base state, call it NetWare. At the far end is the new state, call it OES-Linux. And on the side you have a Great Attractor called "Microsoft". Also marked is a threshold that I call the migration threshold. This is the threshold that once crossed, a migration to a new platform is possible. The 'energy' driving the leap to a new state is twofold, money, and mind-share.

The problem is that anyone who is going to leap platforms has to overcome the attraction of Microsoft, or end up on Microsoft. To put it in the context of the chart, you have to have enough energy to escape the pull of Microsoft and reach the new base-state desired (by us techies).

In the tech world here, a lot of NetWare shops are still NetWare because the pain of moving to anything else outweighs the pain of staying put. Now that Novell has signed the death-warrant of NetWare in the phrase, "no further development," the pain of staying put is now greater than the pain of moving. So people are moving.

And falling into the gravity well that is Microsoft. And as I learned recently, that may actually happen here too. Scary thought.

But as I said earlier, the 'energy' is quantified by some combination of money and mind-share. Money will be required in any move, be it moving to a new licensing scheme, a complete repurchase of your whole infrastructure, or retraining all of your sysadmins on this new linux thingy. Mind-share is more complicated. If the management mind-share is that Novell is a dead-end product, that doesn't provide much energy at all. If management thinks that Novell has a future, that provides more of a boost.

But it just is one of those things in the marketplace these days that you have to provide a solid reason why you are not on Microsoft. Not why you are on it, but why you are NOT on it. As Novell said in one of the keynotes last week, only 90% of Novell's desktops run their Desktop Linux product, and only 50% of them do so exclusively. While 50% is quite a lot, it isn't anywhere near the penetration rate of Windows.

I think Novell is doing some good in providing alternatives to Windows, but we're still a decade away from any serious chinks in the armor. Vista will provide a migration point for Windows shops, but it'll come soon enough that the Microsoft gravity-well won't have diminished all that much.

There is hope though. IBM was selling Microchannel well after it was clear that they had lost that battle. "You can't go wrong buying IBM" had fallen by the wayside. But it'll be several to many years before the "you can't go wrong buying Microsoft" goes away.

Tags: ,

Passing of an era

Today we're having the retirement party for one Bent Faber. He grew up in occupied Denmark, and started work at this campus back in September of 1965. That wasn't the beginning of IT at WWU, which dates to 1962, but he is the oldest and longest-serving IT employee we have. He has been eligible for retirement for a number of years now, and only recently decided that this was the year.

Over his long career he has been everything from the most knowledgeable techie on campus to the go-to person for student account complaints.

The retirement party was a veritable who's who of IT at Western over the last 40-odd years. Several people came out of retirement to attend. In one of the interesting twists of fate, the current Technical Services director was once a student employee working for the same department Bent was back in the '60s. That Tech Services director is now within a decade of retirement himself.

Thank you, Bent, for your long service.

Posting errors again

*sigh* Still having SSH issues. This looks like the same kind of issues I was having before the latest LibC seemed to have fixed things. The error-entry from the SSHD.LOG:

Failed to create identity for cn=user.ou=context.o=wwu on local server. rc: 116, errno 7, h_errno: 10053, clienterrno: 10053

The virtual circuit was terminated due to time-out or other failure. The application should close the socket as it is no longer usable.
RC: 116: Generic NCP Error
h_errno: 10053
clienterrno: 10053:The virtual circuit was terminated due to time-out or other failure. The application should close the socket as it is no longer usable.

Which looks kind of "there was a problem on the remote-end" like to me. Hmmm.

Tags: ,

Zen goodness

I found a way to get workstations to import without having them as members of our AD domain! The b-i-g problem we've had with Zen workstation importing has been our scads of DNS sub-domains. Everything has its own subdomain.

So throwing a "zenwsimport" entry in each won't work. And since we have no unified desktop configuration, we can't tell the desktop folk to "always set up as a look-up domain" and do it that way. I had somewhat solved this one earlier by putting "zenwsimport" into the Active Directory DNS domain which all domained PCs have in their resolve list. This got about 1100 machines at its height.
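For reference, the record itself is trivial. A sketch in BIND zone-file syntax (the target server name here is made up; the real one would be whatever middle-tier server handles the imports):

```
; Hypothetical sketch: the magic hostname the Zen workstation import
; service looks for, pointed at a (made-up) import server.
zenwsimport    IN    CNAME    zenimport.example.wwu.edu.
```

The trick is that this only helps machines that actually resolve names in that zone, which is the whole problem described above.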

But far from every machine on campus is domained. How do we pick up the stragglers?

Turns out the answer to that one is to create a force-run application object associated with our Users context (not students). That runs the "zwsimp -importserver" command on the local workstation, and in the last 40 minutes well over 100 machines have registered. We're now over 1200 machines imported! Yay! Unfortunately I don't have a really good idea of how many non-lab desktops we have out there, so I'm not sure what percentage of the total that 1200 represents.

And for those of you WWU paranoids reading this (I think I have a few) this doesn't do much of anything to your machines. It gives your workstation a presence in NDS, and allows some nifty tricks. The best trick is allowing your local SYSTEM account access to WUF-hosted files so automated processes running in the local Task Scheduler can update files on the cluster. Neat, eh?

Tags: ,

On the Novell strategy

| 1 Comment
I was speaking with a Novell rep at Thursday night's Midwest Region party, and in 30 seconds he explained the switch to Linux better than the CEOs did all week. The nutshell version is... what SUSE does now, feature-wise, is what NetWare 8 and 9 were slated to become. Novell realized that it would be far more cost-efficient to switch to Linux and get the features today, rather than spend lots of money and time developing those tools for a NetWare platform that wouldn't get there for another 5 or more years.

I can believe that. The direction Novell was taking NetWare was to be more and more of an application server. One thing that NetWare doesn't do all that well is application serving. It'll serve things like this blog and MyFiles/NetStorage just peachy. But the development environment is, in a word, unforgiving. Linux, Windows, and Solaris all provide more forgiving development and operation environments.

In order to get NetWare to where Novell wanted it to be, they would have had to make significant changes to the NetWare kernel, including things like an improved thread model, enhanced process-space protection, and better memory management. These are fundamental features of an operating system, and 5 years is a reasonable guess for developing them from where NetWare is today.

And Linux already has those, and just needs to have better file-and-print crammed into it. By the time OES2 comes out, Novell should have solved the file-and-print problem. 5 years of development solved in 2.

Bummer about the collateral damage, eh?

Tags: ,

New Novell Client for Linux

Release or not, I'm not sure. But the file exists:

The "315" bit is the build date from the looks of it. Probably Very Beta.

Tags: ,
This session went into depth on the state of Novell Cluster Services in the new world. It was interesting! I was also introduced to Novell's Business Continuity Cluster product. I had been peripherally aware of it, but I hadn't taken a real look at it. This is exactly what we'd want in order to set up the backup datacenter in Bond Hall. But we'll never get the money for it.

That said, BCC supports cluster-level failover. If Cluster 1 fails, activate Cluster 2. It doesn't handle the data replication, but all the big SAN vendors provide some way to do that. This is the kind of thing that we need. It also provides a failover in under 5 minutes if all the conditions are correct, which is very nice.

Another thing I brought out of this session is the concept of NSS-level RAID1. I knew I could do it, but it didn't occur to me that this could be a data-export method. So we set up an iSCSI target in Bond Hall, and then mirror all of the WUF pools over there. Writes will be slower due to the transit-time to the off-site location, but if Bad People blow up 32nd Street, where our datacenter is, all the data is live in Bond Hall! Interesting idea... will have to ponder that one. The NSS guys in the Tech Lab endorsed it, so long as our network is up to the task, something that I'm not 100% sure of.

Tags: ,
This session described the foundation behind Novell Cluster Services and the pure-linux Heartbeat 2 that comes in SLES10. And for me provided an introduction to Heartbeat 2. Among the features:
  • 16-node clusters. NCS supports 32-node clusters, but apparently storage vendors have a price-point at 16 where things get eye-crossingly expensive past 16 nodes. Ergo, Novell is capping support at 16 nodes.
  • HB2 has dependency tracking! Does Tomcat need to be up in order to launch Apache? HB2 can track that! Cooooooool.
  • Runs via EVMS, like NCS, and can support a number of filesystems like Reiser, OCFS2, and XFS.
  • Split Brain Detection like NCS has will be in SLES11. Not SLES10, since at feature-freeze the SBD code was unsupportably unstable.
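That dependency tracking is expressed as ordering constraints in Heartbeat 2's XML configuration (the CIB). As a rough sketch, and hedging that the exact attribute names shifted between HB2 builds, a "Tomcat must be up before Apache starts" rule looked something along these lines:

```
<!-- Hypothetical resource IDs. type="after" here means the apache
     resource starts after the tomcat resource it depends on. -->
<constraints>
  <rsc_order id="apache_after_tomcat" from="apache" type="after" to="tomcat"/>
</constraints>
```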
Tags: ,
This session lasted 22 minutes. Let me summarize:

Using separate servers to separate your stuff is the most effective way to separate things, but it is the most expensive.

Using VMs to separate virtual servers is a bad idea, since the VM can be escaped pretty easily. Even Xen. Even VMware. Medium cost, doesn't give you much.

AppArmor wraps around applications so they can't misbehave. It is cheap, and gives you a lot. Use it.

The end.

Unsatisfying in detail, but there you are.

Tags: ,

The Brainshare wireless network

From the NOC:
Something else you guys can try: force your wireless card to 802.11a rather
than b or g. The b and g frequencies are pretty saturated.
Something else that may interest many of you: we get reports regularly from
ELI concerning usage of the 100-meg pipe coming into the Salt Palace, and you
all are averaging over 70 meg of that a day. Amazing.

Tags: ,

Meet the Experts night

This is my favorite night of BrainShare. All the developers and other related geeks come out to mingle with us end-users. The Technology Lab is a lot like MTE night in miniature, but there you're not guaranteed to have the person who wrote it at the table to answer your questions. It is a lot of fun for the technically minded.

Among the things I learned:
  • The faces of the developers that have been working this LibC issue I've been fighting for most of the life of this blog.
  • Due to ReiserFS, SuSE actually runs eDirectory faster than NetWare.
  • The Server Consolidation Utility doesn't support linux-to-linux migrations yet, but that apparently was a very frequently requested feature this week. So the geeks will take this info back to their managers to Make It So.
  • You can use chocolate fountains for BBQ-sauce.
The one that really grabbed my attention was the second item. They had a rack of 8 Opteron servers, 8 processors each (4 dual-cores), running a 100-million-object eDirectory and pumping 30,000 queries a second. Apparently CNN, which uses eDir to drive a lot of what they do, has a 64-million-object eDir.

The thing that got me thinking hard was a chart they had of eDirectory performance on various processors. The top performer was the 4x Opteron, and it blew the nearest competition (4x single-core Opteron) out of the water. The bottom spot was a tie between NetWare 6.5 SP4 and Solaris SPARC.


Linux now outperforms NetWare for eDirectory. The reasons are partly OS. ReiserFS has much less metadata to fiddle with, so it is generally faster than NSS. Linux's multi-threading model is much more robust than NetWare's, which also greatly improves performance in this area. The other area of improvement is the fact that SLES has been compiled with Opteron optimizations, which is quite visible when compared against the Intel-64-on-SUSE test. NetWare will never be 64-bit, which cripples it right there.

And let me tell you, this has me thinking.

Tags: ,
Another very interesting session. It is here that they explained WHY it is that Novell does not recommend running NetWare and Linux in the same cluster for very long. It is intended to be a migration step, not an extended state of being. This is very true in light of the various problems living in that environment poses.

Once you've added a Linux node to the cluster, you can't add a NetWare node. Or you can, but it isn't supported. The session demo had some troubles that could be related to this problem.

Extending NSS Pools requires reboots to get the visibility settled down. In pure NetWare, a simple "CLUSTER SCAN FOR NEW DEVICES" command will take care of it. In pure Linux, the method is less clear but is apparently well supported. But when mixed... reboots.

When adding trustees to a NetWare hosted volume, Linux will not see the added trustees until a manual process is kicked off to pick them up. This is probably one of the bigger problems. If the trustees won't migrate, that presents a big problem.

Services created in Linux will not migrate to NetWare. However, services created on NetWare will migrate to Linux. Usually. Some debugging of the startup scripts may be needed.

The folk in this session made a statement that broke my head. Apparently with the advent of eDirectory 8.7, Novell has started to recommend placing, on all cluster nodes, replicas containing any object that might need access to the cluster. One of the guys in the session asked:
I have a 12 node cluster. Are you saying Novell wants me to place a replica with tens of thousands of users in it, on each node? And that works?
To which the cluster guys said,
With eDirectory 8.7, it shouldn't have a problem with that. So, yes, that's what we're saying.
Right. So I went down to the eDirectory booth in the tech-lab after the session and asked their opinion of it. Apparently eDir IS robust enough to handle replicas with 30,000 user objects in them and do it on 12 nodes. My quibble is that my cluster nodes do very good yo-yo imitations, and I have a problem with replica holders doing that.

The clusters guys, and the eDir guys, both said that the cluster nodes really should have a replica on them that contains the cluster-context in eDir. That I don't have as much of a problem with, since that particular partition has only 1200-odd objects in it, and doesn't get updated all that often. We'll see if we do this once I get back into the office.

And in other news, Storage Routers are some cool things. In a few years when the price has dropped, we may be able to do the 'business continuity cluster' that TPTB want us to put together.

Tags: ,
This session was surprisingly NetWare-heavy! Nice to see. And useful, since I HAVE a NetWare cluster right now, not a Linux cluster. The person giving the session was Jerry Levy from EMC, but it was NOT a sales-pitch; Novell apparently learned that lesson.

Rather than give an in-depth, I'll just give the key points I picked up. Some of it is 'well, duh,' but others are useful things I didn't know before.
  • IDE/ATA/SATA drives make good boot & OS drives. They're also good for single-user single-threaded applications due to their architecture.
  • SATA is a good choice for backup-to-disk applications, which are usually pretty 'single-user' and 'single-thread'
  • It isn't Disk Manager that causes Windows to stomp all over any media it sees, it is the use of SCSI Reservation!
  • Boot-from-SAN isn't a good fit for how we use our servers. Though for other environments it can be a good fit.
  • Novell is apparently recommending one NSS Pool per LUN. Um...
  • The device names "/dev/sda", "/dev/sdb", etc. can change without warning.
Interesting stuff. Then I had to leave for another session; this one was fit in during a dead spot.
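That last bullet about shifting device names is worth dwelling on. The usual defense, assuming a udev-based distribution, is to mount by a persistent identifier instead of the raw /dev/sdX name. A hedged fstab sketch (the by-id string and mount point are invented for illustration):

```
# Mount by the stable udev symlink rather than /dev/sdb1, which can
# renumber itself after a SAN rescan or a controller change. The by-id
# value below is made up; use whatever appears under /dev/disk/by-id/
# on the actual server.
/dev/disk/by-id/scsi-3600a0b8000123456-part1  /data  reiserfs  defaults  0 2
```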

Tags: ,

Wednesday Keynote

The biggest thing in the Wednesday keynote was a demo of SuSE Linux Desktop 10 beta (SLED10). And in a word, "wow". Novell pulled out all the stops when it came to getting the interface worked up and usable, and getting integration working between the various bits.

The interface takes a big cue from Apple and presumably Vista in relying on 3D acceleration to drive desktop chrome. Yes, chrome. Some of the nifty features:
  • Spotlight-like desktop search, complete with live update
  • Darwin-like desktop zoom by hot-key
  • 'Rotating cube' animation for different screens
  • Fast thumbnailing
  • Alt-Tab rotates a transparent version of the app in the background, and updates live
  • Scalable icons
  • Hot-key to 'tile' open applications for selection, also updates live
To me it looks a lot like Mac OS X. Clearly, this is a desktop that can only run well on a machine with a good 3D card, and this particular elderly Dell that I'm writing this blog-entry on is quite decidedly not butch enough. But this is a full desktop package, not just some fancy spinny things:
  • Novell Open Office has several new features
    • Translators for VB macros into StarBasic. Not 100%, but it can handle most simple macros reportedly.
    • Pivot-table support. Though they did not say if this included database tie-ins
  • A 'Foto' application that looks a lot like a similar app I've seen on OS X (name escapes me)
  • 'Basalisk,' a media-player module that works with iPods (a Nano was used in a live demo, no word on playlist support), and includes legal MP3 support. Built in partnership with Real Networks.
This is a very interesting product. It is very flashy, and Novell is clearly working on making it a usable Linux desktop. Things that I'll need to see before I get really excited are:
  • Novell Client. You can get it separate, but it still needs work
  • Simple database tie-in capability with both MS-SQL Server and Oracle. Not just MySQL.
Those are some needed steps, and unfortunately my Linux knowledge about ODBC-like things in OpenOffice is nonexistent. But once you can do custom graphs from closed-source databases, you are almost to the point of being able to compete with Microsoft.

All in all, this desktop is a major leap ahead from what NLD9 was. Novell is clearly focusing quite a bit of effort into turning Linux into an end-user-friendly operating system, and not just a geek-friendly one. They're doing usability studies (they had video of it) of their interfaces, which is something the average Open Source project can't really do. If Novell can turn a profit on this, it represents a major step forward for Open Source in the marketplace.

Tags: ,

Novell client for Vista

One of the things that I found out today was Novell's plans for a client for Vista. This was one of the prime questions I was sent to BrainShare to answer. And the answer is...
Novell will release a preview client for Vista 60-90 days after Vista releases. There will also be a Vista64 client. But there will never be an XP-64 client due to XP-64 missing key bits of the network stack.
So there you have it. As for how long it'll take until there is a release-quality client, that remains to be seen. But it took Novell quite a while to get an XP-compatible client. Here's hoping that it doesn't take that long with Vista. Rigorous testing by all of us, and reporting defects back to Novell, will help that process along.

[update 2/5/2007: the 'technology preview' client is out. See here]

[update 6/28/2007: the Public Beta client is out. See the beta page]

[update 7/27/2007: Jason Williams says that it should be out in mid-August]

[update 8/20/2007: The 1.0 client is out. Get it here.]

Tags: , ,

IO 104: File system roadmap

The presenter of this session was Richard Jones, whose sessions I've had the pleasure of being in before. A very good speaker, and I always take away something good about whatever the topic is. He is also one of the CoolBlog people, with his Novell blog here.

That said, the session was very good. This was a file-system primer for all the many and various file-systems you can find on SLES and OES. It also included some time on the various access protocols such as Samba, NCP Server, and NetATalk.

Samba, as it sits right now, is at something of a crossroads. The version shipping with SLES9 has issues when being pounded on by lots of connections, which would make it completely unsuitable for a server the entire campus maps to. Our WUF-cluster NetWare servers frequently have up to 7000 simultaneous connections, and Samba would melt under that kind of workload. Version 4 aims to fix a lot of that, but is still too buggy and didn't make the code-freeze date for SLES10. Expect it in SLES11, in 18-36 months. V4 can also act as a Domain Controller in an AD domain, which is a very interesting development.

NCP Server is faster on Linux than on NetWare for two primary reasons. One, when they ported it to Linux the dev-team rewrote 85% of the code to get rid of assumptions that were designed into the NCP-on-NetWare server back in 1989. Specifically, NCP-on-NetWare can run in 32 megs of RAM just fine; you can't get a server that scanty on memory any more, so they rewrote it to be more piggy about I/O resources. Second, they removed all hints of IPX from the code. These two things combined are why NCP-on-Linux is faster. And Richard indirectly mentioned my benchmark from January, which was neat.

In terms of futures, NCPServ on Linux will soon support Kerberos. This will, in theory, permit a single login for AD Domains and eDirectory stuff. Neat.

As for Apple.... the NetATalk in SLES9 only supports the version of AFP that came with OS9. At the time of the SLES9 code-freeze the OSX support was cruddy. That isn't the case for SLES10, so NetATalk v2 is included in it. And it includes full OSX support, including long-password support!

Richard also gave some hints as to the future of NSS on OES. Right now the goal is to maintain datastructure compatibility with the NetWare NSS, so there aren't going to be a lot of new features developed soon. There will be an X86-64 native code module built in order to support NSS on 64-bit Linux machines (it IS a kernel module), but the code structure on the file-system will still remain as is.

As for desires, and we're talking several years out, include the ability to do background rebuilds, do cluster parallel file serving, and getting rid of the 8TB volume size restriction.

The other neat thing in this session was the coverage of cluster parallel filesystems, and what they're good for. This is a fascinating topic, since they can provide a single datasource with multiple block-level access to the data. No need to shim it over NFS; it happens in-kernel over the I/O subsystem, not network I/O. This can provide serious performance gains over NFS in certain cases.
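For the curious, OCFS2 in SLES10 is the most accessible example of the idea. A hedged sketch of bringing one up, assuming the node membership is already described in /etc/ocfs2/cluster.conf (the cluster name, device, and mount point below are made up):

```
# Bring the O2CB cluster-membership layer online, then mount the shared
# LUN on *each* node; every node gets block-level access to the same data.
/etc/init.d/o2cb online mycluster
mkfs.ocfs2 -L shared /dev/sdc1      # format once, from a single node
mount -t ocfs2 /dev/sdc1 /shared
```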

Tags: ,

IO 202: Data Protection Concepts

It started off fairly basic, with a definition of the problem: data is growing faster than it can be thrown to tape.

No biggie there. We've known that for years.

Then he went into the various data protection methods out here, roughly:
  1. Tape backup
  2. Tape backup with off-site
  3. Remote archive of data (snapshots, that sort of thing)
  4. Remote archive of data with live servers
  5. Business continuity clusters
As you move down the tree, the cost goes up, but your restoration-of-service interval shortens. Here at WWU we're at step 2, looking to skip 3 and go direct to 4. We'll see how this goes.

But anyway, there were some other useful tidbits in this class. Hierarchical Storage Management has been around since the 80's. NetWare has had HSM hooks in it for as long as I can remember, and that's at least as far back as NW4.10. Possibly earlier. We keep looking at HSM as a way to cut down backup costs.

As it happens, Gartner has studied the issue and discovered a few things. For a general-purpose file-server like we run, HSM is a really poor fit. For specific applications with a predictable data-add and access rate, such as Blackboard, it is a really good fit. So HSM isn't going to save our bacon on the 2.7TB WUF cluster. Oh well.

Distributed File System is something that's also been around for a while, and has been something of an unsung hero. It also allows separate backup policies for sections of a directory tree. We don't use it because the NetWare DFS doesn't behave like the Microsoft DFS, which does what we want it to. Though I heard a solid rumor that Novell is going to fix that. That's on my 'to confirm' list for Meet The Experts night.

Also in the future, Novell's Archive and Versioning server will also use NSS Snapshot and Salvage data. It isn't there yet, but it will provide a one-stop-shop for recovering deleted files. Not quite useful for full system restores, but very useful for the "I deleted it last week by mistake" restores we currently fulfill through Salvage.

Tags: ,

TUT104: Introduction to App Armor

AppArmor is some seriously neat stuff. It takes the idea of SELinux and makes it user-friendly. It takes a profile of an application under normal usage and builds walls around that. If the application steps outside of normal usage, the kernel will prevent the activity. For things like PHP-BBS systems, this should be mandatory, considering all the problems those systems have had of late.

There are some caveats, though. While AppArmor will keep the protected process from accessing files it's not supposed to, it won't prevent it from accessing the files it already has access to. Though what it does with those files may or may not be affected.

Reportedly, the overhead for running Apache in an AppArmor profile is less than 2%. Not too shabby.
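To give a flavor of it: a profile is just a text file under /etc/apparmor.d/ listing what the confined binary may touch. This is a hand-trimmed sketch of the sort of thing the profiling tools generate for Apache; the paths are illustrative, not gospel:

```
# Hypothetical, trimmed-down profile for the SUSE Apache2 binary.
/usr/sbin/httpd2-prefork {
  #include <abstractions/base>
  #include <abstractions/nameservice>

  /etc/apache2/**        r,
  /srv/www/htdocs/**     r,
  /var/log/apache2/*     w,

  # Anything not granted above is denied by the kernel at runtime.
}
```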

Plus, AppArmor has been ported to Slackware!

Tags: , , ,

TUT121: Virtualization of NetWare

I saw NetWare boot! This is the first time since BS-01!

That said, it wasn't a very encouraging session for NetWare fanatics. Key points:
  • There will almost definitely not be another point-rev of NetWare. No NW7, no NW6.6
  • SP6 may, or may not, be the OES2 release.
  • Service Packs past SP6 will be developed towards a virtualized environment, not a 'bare metal' environment.
  • Xen will include virtual storage drivers, and 3rd parties will continue to offer drivers as usual.
  • 3rd parties dropping NetWare driver support will most likely force NetWare into Xen in the future
  • Due to how Xen works, certain keyboard combinations we all know and love will have to change.
When the presenter said that there would be no further versions of NetWare the room was dead silent. Not even paper shuffles or coughing. That was a sign.

Tags: , ,

SLES10 still in beta

Mr. Messman was not clear. Beta 8 is released. That's it. Not formally released. Just beta.

Tags: , ,

And for GroupWise

GroupWise Mobile Server, powered by Intellisync, is announced! This matches the mobile-sync feature Exchange has had for a while. Good for keeping up. Reportedly, this provides more mobile support than Exchange does. The geeks will tell us later.

Also, the Blackberry Enterprise Server will soon support GroupWise. Out of the box, if you will. Though it sounds like the plug-in is a paid add-on.

Tags: ,

I was right

SLES 10 is being announced/released at BrainShare. More information as we have it.

NetWare will be supported until 2015.

Tags: , ,

Sunday aft

Registration was a snap. I came by about 1pm and there was a scant crowd. The bag this year is an un-wheeled backpack in grey-tones. I'm unsure how a laptop is supposed to fit in it, but I got mine in anyway. It is a bit better than the BS-03 bag I brought with me.

Unfortunately, the lack of a way to get pictures off of my camera prevents me from posting pictures of the bag or anything else so far. The welcome reception is this evening at 5:30, with a theme of "black and white". No idea what that means. The Sponsor Party Tuesday night has a disco theme, and this child of the 80's fears.

And golly I wish wireless on linux worked better. Grr.

Tags: ,

Arrived, Brainshare goodies!

Things are already hopping. The NCCI party at the Shilo was fun as usual. And we even had a visit by Mike Morgan, the person who organizes Brainshare for Novell. To quote:

"It's kind of nice to go to a party that I didn't have to plan."

Glad to oblige.

Meanwhile, other Novell folk have been busy! On the Wiki today I noticed this gem, File System Primer, which just happens to be the topic of one of the sessions I'm going to. Take a read! It goes over a lot of file-systems on both NetWare and Linux (mostly Linux, since there are a LOT more options there). And at the very bottom is a bit about 'parallel cluster filesystems', which permit multiple access to the same files by different servers; very neat, and not a lot supports it yet. And a quote from the article, which is news to me:
The NetWare [traditional] File System is used in NetWare 3.x through 5.x as the default file system, and is supported in NetWare 6.x for compatibility. It is one of the fastest file systems on the planet, however it does not scale, nor is it journaled. An Open Source version of this file system is available on Linux to allow access to its file data. However, the OSS version lacks the identity management tie-ins so it has found little utility
I didn't know that Traditional Filesystem had hit the OSS market! Not that it has gained any traction, but still, it is there somewhere.

Also new is a BrainShare wiki. Not much on there now, but I suspect that there will be in due time.

Tags: ,

Nifty news

Apparently Delta has signed a deal with Bellingham International Airport (BLI). They'll be providing non-stop service to Salt Lake City starting real soon now. This is good news for the locals, since having direct service to somewhere other than Seattle opens up our options for plane travel out of here.

And next year I can go to Brainshare on a direct flight! Woo!!


Kernels and I/O schedulers

| 1 Comment
While troubleshooting a problem on the laptop yesterday, I spent some 'what does that mean' time figuring out what the various kernel options mean. The one I was working on was 'splash', more on that later, but the one that grabbed my curiosity was the enigmatic 'elevator'. After much googling, I turned up a page at Red Hat that explains what it means.

Choosing an I/O Scheduler for Red Hat® Enterprise Linux® 4 and the 2.6 Kernel

The 2.6 kernel introduced options for how I/O is queued up to the storage devices. Once I read this article, I was reminded of a comment made during Meet the Experts last year at Brainshare. The comment went like this:
"NetWare makes a great iSCSI target. That way you can take advantage of the NetWare elevators, which are very, very good."
Very interesting. The RedHat article goes into detail about what the four types of I/O schedulers are, and in general what type of access-patterns make the best use of them. For a bulk file-server, "cfq" or "completely fair queueing" is probably the best.

Another thing to note is that the optimization we're talking about here is re-ordering I/O requests before they get passed to the drivers. Optimizations done by the drivers and the hardware itself can't be tuned this way, of course. If you have a single LUN that your entire file-server serves out of, there won't be a lot of gain. But when you start adding in multiple LUNs like we have (in general, Student and FacStaff volumes are on different LUNs), it starts helping.
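For reference, the elevator on these 2.6 kernels is picked globally at boot with the `elevator=` kernel parameter, which is where I ran into it in the first place. Here's a small sketch that just pulls the value out of a sample boot command line; the command line shown is an invented example, not my actual GRUB config. Later 2.6 kernels also grow a runtime sysfs knob, noted in the comments.

```shell
# The I/O scheduler is chosen at boot, e.g. in the GRUB kernel line:
#   kernel /vmlinuz root=/dev/sda1 elevator=cfq
# This sketch pulls the elevator= value out of a sample command line.
CMDLINE="root=/dev/sda1 splash=silent elevator=cfq"
SCHED=$(echo "$CMDLINE" | tr ' ' '\n' | sed -n 's/^elevator=//p')
echo "scheduler: $SCHED"

# Later 2.6 kernels also expose a per-device runtime knob; the current
# choice shows in [brackets]:
#   cat /sys/block/sda/queue/scheduler
#   echo cfq > /sys/block/sda/queue/scheduler
```

On a running box you'd check /proc/cmdline instead of a hard-coded string, of course.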

I'd really like to see some benchmarking between a Linux 2.6 kernel and a NetWare 6.5 kernel for raw I/O throughput. But given the OS differences, a true apples-to-apples comparison would take a LOT of engineering to accomplish. Certainly beyond my ken.

Maybe this is a good question for Meet the Experts night in a week. Hmmmm.

Tags: ,

Linux laptop update

Unfortunately, while the drivers do load and I sometimes get data, the wireless card is far from stable. Unusable. This seems to be a side-effect of attempting to run a Windows driver in a Linux box, which is what 'ndiswrapper' does. Running a kernel with a 16K stack helps, but it is still unusably crashy. Darnit.

Happily, I learned that the office has a Cisco AiroNet 350 they'll let me borrow for the week. This is good, since this is an old card. And with Linux, old cards are well supported cards generally speaking. I'll find out how well it is supported tomorrow when it shows up. This is good.

And I'm liking this 2.6.x kernel! Last year I went with a 2.4 kernel since I understood it, and I couldn't get things like suspend to work. But 2.6 introduced that, and now "shutdown -z now" puts the laptop into a disk-suspend mode! W-a-y cool! Especially since booting this puppy takes too long to bother with if I have to shutdown/restart every time I want to use the laptop. It still isn't great, and it doesn't work when I'm in X-Windows, but it saves shutdown/reboot time.

Since I'm going to be spending time in text-land, I took the liberty of installing the elinks text-based web-browser. No need for X-Windows if I can do it all from console.

I also set up Evolution to hit both my private mail (SIMAP) and work (Exchange-OWA via the connector). Works like a champ so far. This'll help me keep in touch.

Tags: ,

Wireless in NLD

Last year I took my elderly laptop to Brainshare. I reformatted to Linux since I knew I could make it hardened enough to withstand the hordes of packet-scanners out there. It was Slackware, since Novell Linux Desktop wasn't quite out yet, and my linux skills were still firmly rooted in Slackware. I had a hard time getting my network card to even run in it, due to the problems Linux has with drivers in general. The only blog reference to that time was this entry.

I ended up borrowing an Orinoco card from another admin here, and it worked like a charm. It wasn't pretty, but it was solid.

Today, I managed to get NLD installed onto this laptop and also managed to kludge ndiswrapper into running the driver for my card! Yay! But it took a bit of hacking to get it working, since I don't have a Red Carpet subscription.

In short:
  1. Download latest ndiswrapper from sourceforge
  2. Install kernel-sources from source media
  3. Install compiler from source media
  4. Install gnu-make from source media
  5. Expand ndiswrapper somewhere (I used /usr/src/)
  6. From the ndiswrapper directory do "make install"
  7. Go to /lib/modules/2.6.5-7.244-default
  8. cd into 'extra'
  9. move the "ndiswrapper.ko" somewhere else
  10. ln -s ../misc/ndiswrapper.ko
    1. This is because the 'make install' puts the module in a different place than YaST does, and the kernel finds the YaST module first. Then pukes.
  11. modprobe ndiswrapper
  12. Joy! no errors!
  13. Locate the windows drivers for your card. You'll need the .inf and .sys files.
  14. "ndiswrapper -i <.inf file>"
  15. "ndiswrapper -l" to verify your driver loaded
  16. Insert your Wireless card
  17. Hope.
From there, you're on your own. Nor am I guaranteeing that this'll work every time. This worked on my NetGear wg511v2 card that I had such trouble with last year. Now to see if it is actually STABLE.
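The module-path collision in steps 9-10 is the non-obvious part, so here it is as a sketch. It runs against a throwaway temp tree so it's safe to try anywhere; on the real box the base directory is /lib/modules/2.6.5-7.244-default.

```shell
# Simulate the collision: 'make install' drops ndiswrapper.ko under
# misc/, YaST's older copy lives under extra/, and the kernel finds the
# extra/ copy first (then pukes). The fix parks the YaST copy and
# symlinks the fresh build into its place.
BASE=$(mktemp -d)                      # stand-in for /lib/modules/<ver>
mkdir -p "$BASE/extra" "$BASE/misc"
touch "$BASE/extra/ndiswrapper.ko"     # YaST-installed module
touch "$BASE/misc/ndiswrapper.ko"      # freshly built module
mv "$BASE/extra/ndiswrapper.ko" "$BASE/ndiswrapper.ko.yast"
( cd "$BASE/extra" && ln -s ../misc/ndiswrapper.ko . )
readlink "$BASE/extra/ndiswrapper.ko"  # now points at ../misc/ndiswrapper.ko
```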


Ahah! The announcement.

The links to the blogs were announced:

Now this page has links to the support forums, product wiki pages, product
support web pages, and product blog lists. I am hoping this will give our
customers a set of tools to use when trying to find, or publish information
about Novell products. Each media type has its advantage. If you have
the time and the inclination, please take the time to contribute. Just
like the forums, the value is determined by the contributions made.

For some products, the wiki and blog pages are bare, just waiting for
someone to contribute the first bit of information. For others (such as
ZENworks) there is already quite a bit of information but we're hoping for
more. Over time, these tools should help refine information that can be of
value to everyone.

Forums encourage ongoing conversation to address issues.

Wiki is a great place to post and refine a FAQ type of information with
contributions from the community.

Blogs are a good source of information from experts that would like to share
some general knowledge or experience.

Don't be shy about contributing!


Kim Groneman
Program Manager
Novell Product Support Forums
If you go there, notice the right-hand column. It says 'Blog'. And there is a 'Wiki' link too. Yes, even NetWare. One of the sessions I'm going to in a week is BOF177: FORUM, Communities, Sharing information. I suspect these will be brought up.

Tags: ,

More Novell blogs

This morning (March 13, for you archive viewers) I found some interesting activity in the "Recent Changes" section of the CoolSolutions Wiki. Check it out. Looks like they might be setting up some 'community' pages for blogs that talk about Novell stuff. It will be interesting to see who shows up there. I know I'm going to be in there before too long.

So far, they haven't linked the pages to anywhere else, but I expect that'll change. One of the pages is "Blogging about NetWare", which is a nice sign.

I met "Kgroneman" at Brainshare last year. He is the program director of the Support Forums and related activities.

MRO makes orbit

The Mars Reconnaissance Orbiter just came out of the Mars shadow, and telemetry says the orbiter is in orbit. Go NASA!

Some fun factoids:
  • The insertion burn required a delta-V of about 1000 meters/second, or about 2200 MPH.
  • The radio antenna is the largest sent to another planet, which will greatly increase the amount of data that can be sent from Mars.
  • The MRO was designed as a telecom sat for other Mars missions, such as landers. The Mars Global Surveyor does some of that, but the MRO was designed from the ground up for this role.
  • The MRO carries the most detailed optical camera ever flown to another planet, with resolution down to "kitchen table" size. Good stuff.
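As a sanity check on the first factoid, here's the delta-V conversion worked out (a one-liner, nothing official about it):

```shell
# 1000 m/s to MPH: 3600 seconds per hour, 1609.344 meters per mile.
MPH=$(awk 'BEGIN { printf "%.0f", 1000 * 3600 / 1609.344 }')
echo "${MPH} MPH"   # 2237 MPH, i.e. 'about 2200 MPH'
```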

Tags: ,

Novell & Blogs

| 1 Comment
Novell announced their "cool blogs" feature. You can find the announcement here, and the actual blogs here. They're just getting going, so the only blog-folk up right now seem to be ZEN folk. Mostly-useless as far as I'm concerned, but I have hopes for future relevancy as new topics are added in.

This could go well or badly. It all depends on how much latitude the higher-ups allow the bloggers, since they're blogging on "" and are thus part of Novell's overall PR. If they're allowed wide latitude, it could be very good. If the hammer falls on initiative, it could go badly. We shall see how this goes, and how active these guys are.


Coooool tool

Hamish Spires is a SysOp in the Novell Support Forums, and wrote a wonderful little tool. You can get the details here:

This wonderful little thing will take the output of another cool tool called SEG:

And spit out memory tuning recommendations for your NetWare box. So long as the server has more than 2GB of RAM, though; if it is at or under 2GB it doesn't spit out any recommendations, and IMHO it kinda should. But still, very useful.

NetWare 6.5 SP3 introduced a new way of managing memory in NetWare, and it has taken Novell a year to get it nominally right; it still needs work. From SP3 on, one of the most frequently posted questions in the support forums has been of this kind:

"I'm getting cache-allocator errors, what do I do?"

And that's attributable, according to Hamish, to Novell picking the wrong thing to optimize memory for. He provides the hand tuning you can do to get those month-long uptimes back. In my case the server.exe from SP5 helped a LOT. But these settings can be used to make the server's memory more stable. It'll also keep it from fragmenting nearly as badly as it would out of the box.

Perhaps SP6 will include the 'correct' tuning parameters.

Tags: ,

Feed oddities

Blogger is being weird again. It isn't always posting full posts like it should, instead giving the 255-character short versions. I'm not sure why.

Playing with Linux again

| 1 Comment
One of the things you can do with reiserfs is put the journal onto a different device. Why this would be a good idea isn't explained anywhere in the documentation that I can find, but you can. I also remember from a DBA of my acquaintance that it is a Very Good Idea to put your transaction-logs on different drive-spindles than your database. These two ideas go together, from the sounds of it.

mkreiserfs -j /dev/cciss/c0d0p5 --format 3.6 /dev/cciss/c0d1p1

That'll make the partition known as /dev/cciss/c0d0p5 the transaction-log for the partition at /dev/cciss/c0d1p1. The transaction-partition is a whopping 37MB. Note in the above example that the two /dev entries are for different physical devices. And in this case, they actually are.

The question has been raised about how reiserfs and nss handle 'very large directories'. I'd like to test that, but I don't really have a good test suite for it. The utility iozone is just for testing throughput, not for large directory sizes. This needs thought.
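Lacking a proper tool, a crude probe is easy to sketch in shell: fill one directory with a pile of empty files and time a listing. N is kept tiny here so it runs fast; for a real reiserfs-vs-NSS comparison you'd push it into the hundreds of thousands on each filesystem under test.

```shell
# Crude large-directory probe. 'ls -f' skips the sort, so timing it
# (time ls -f "$DIR" > /dev/null) isolates the readdir cost.
DIR=$(mktemp -d)
N=1000                        # bump to 100000+ for a real test
i=0
while [ "$i" -lt "$N" ]; do
    touch "$DIR/file$i"
    i=$(( i + 1 ))
done
COUNT=$(ls -f "$DIR" | grep -c '^file')
echo "created $COUNT files"
rm -rf "$DIR"
```

It measures creation and listing only, not the lookup-in-a-huge-directory case, so it's a starting point rather than a benchmark.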

Putting ACLs into MyWeb

At the moment this blog is restricted to on-campus access only. I figured I would explain how I did that in case any of you who read this want to know the trick as well. The trick is the use of what apache calls "htaccess" files. The Apache docs on the critters are here, but this is how I set this up for WWU.

The problem is that ".htaccess" is the standard name for these files, and that's a very unixy name; Windows doesn't like creating file names that begin with a dot. So I configured MyWeb to accept a second file name the same way, "ht.acl". MyWeb will use both, so if you manage to actually create an .htaccess file it'll still honor it.

This is the "ht.acl" file in my blog directory:

Redirect permanent /~riedesg/sysadmin1138/rss.xml
deny from all
ErrorDocument 403 /~riedesg/noblogfornow.html
allow from
allow from

  1. Redirect permanent This directive redirects attempts to get the non-existant "rss.xml" file in the blog-directory to the feedburner feed. This has been around for some time.
  2. deny from all This says that all access is denied unless specifically allowed.
  3. ErrorDocument 403 This specifies the HTML page to serve when 403 errors are thrown, such as when off-campus users attempt to hit this blog. This page contains the explanation for the temporary outage.
  4. allow from This specifies that the WWU academic sub-net is to be permitted in.
  5. allow from This specifies that the WWU ResTek sub-net is to be permitted in. I THINK I have all of it.
When I return access to normal, everything from the "deny from all" line down will be removed. Since this is placed in the /~riedesg/sysadmin1138/ directory, anything in the archives, such as /~riedesg/sysadmin1138/2006/02/anything.html, will also be covered, since htaccess files are cumulative per-directory down the tree.
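For the record, here's the shape of the file with everything spelled out. The feed URL and both subnets below are invented placeholders, not the real WWU values. One Apache detail worth knowing: the deny-then-allow pattern works because Apache's default "Order" is deny,allow (allow rules are evaluated last and win); it's written out explicitly in this sketch.

```shell
# Write a sketch of the ht.acl described above. The redirect target and
# the two subnets are hypothetical, for illustration only.
DIR=$(mktemp -d)
cat > "$DIR/ht.acl" <<'EOF'
Redirect permanent /~riedesg/sysadmin1138/rss.xml http://feeds.example.com/sysadmin1138
Order deny,allow
deny from all
ErrorDocument 403 /~riedesg/noblogfornow.html
allow from 140.160.0.0/16
allow from 10.99.0.0/16
EOF
grep -c '^allow from' "$DIR/ht.acl"   # two allow rules
```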

Myweb going normal

Since LibC finally seems strong enough to handle bouncing cluster-nodes, this morning I'm moving my Apache loglevel from debug back to warn. It had been at debug since that's the log-level you have to be at for mod_edir to tell you where it botched something. But I haven't had to look at the error-logs once since the most recent LibC from Novell, and it has been over a month since I got it. Is this thing closed?

So I'm going to tempt fate!

Now if we ever DO get slashdotted, we'll be able to stand for longer.

I still believe that our myweb server is beefy enough to withstand a slashdotting, so long as the item being slashdotted isn't that skateboard video.

Searching for an extension

One of the things that I consistently hate about Firefox is the sort it uses for the URL drop-down bar. Mozilla, IE, and Opera all use a most-recently-used sort. Firefox sorts by age: newest addition at the top, longest-used at the bottom. So for me, the sites I actually use end up at the bottom and stay there for months, and between them and the top sit a couple of weeks' worth of random typed-in URLs. If I bookmark a most-recently-visited URL in that particular drop-down list, it'll stay in the drop-down list until I manage to not visit it for the requisite expire time.

I had the list reset when I was away from my work computer for a week of training, so it has been nice. I'll get another reset with Brainshare.

But I want the sort Mozilla (now SeaMonkey) used. That method makes the most sense to me: the stuff I use the most stays at the top instead of drifting to the bottom of a potentially long scroll. If'n you've heard of a way to change the sort method through about:config trickery or an extension, let me know!

Novell News

Novell announced a while back their "Novell Open Audio" service for distributing pod-casts. The first one didn't interest me, but the second one covered Brainshare (not much new stuff) and Open Enterprise Server. The OES was where the good stuff was. A list of things covered in the pod-cast:
  • 64-bit computing is coming, but has no firm release dates.
  • There will be a 64-bit version of Client32. Novell is heading to Redmond to do some product testing on Vista of their development builds.
  • SLES10 will be released soon (Carnac says: Brainshare)
  • XEN is a big component of the future of OES and SLES (Carnac says: Lots of XEN demos in Keynotes during Brainshare)
  • XEN supports SMP
  • XEN isn't Virtual Machines, it's a hypervisor. I'll have to look up what the difference is. Apparently a hypervisor allows more efficient use of resources.
  • Novell is saying that NSS blows Reiser out of the water when it comes to handling millions of files on large volumes, such as the kind we have in our WUF cluster.
Interesting things.

ISC entry this morning

| 1 Comment
Tom Liston of the Internet Storm Center posted a follow-up of their earlier 'packetslinger' article. It would seem that the previous article was one of their top articles in terms of controversy sparked. And Liston takes time to point out where the controversy lay.

If you look at the Slashdot article it is clear that the top thread is "is port-scanning illegal?". And in the words of Liston:
The legality of port scanning is an unsettled matter. The legality of breaking someone else's machine or causing monetary damage isn't. The problem is this: there's no difference between the two when it happens... and then it's too late.
So, port scanning by itself isn't illegal, but it becomes so when the port-scan actually does damage. Again:
I've been there, I've done that, and I've got the "I tipped over a system using Nmap" t-shirt to prove it.
And I have that shirt too, as it happens; it was an nmap service-scan, and it knocked over something on a NetWare server. Today's article is much less sensationalist than the original, which is a nice thing. It addresses the issues involved without raising the specter of jack-booted fascists knocking on your door in the depth of night for having the temerity to port-scan a friend's PC without them knowing it.

The original assignment told students to perform their scans over the internet. I've since learned that in class the professor said that they should not do the scans from inside the WWU perimeter in order to keep things fair. This is a good policy. Our stuff gets scanned from the internet multiple times an hour, and with stuff more probing than a simple NMAP scan with service-scanning turned on. By originating the scans on that side, the incoming traffic looks like the normal crud we deal with on a daily basis and is therefore much less likely to crash systems. So in our case simple port-scans from the outside have a very low chance of causing damage.

But, that doesn't change the notification requirement. All this setup does is minimize the chance of damage (and coincidentally, the chances of outright detection). Port-scanning in general is risky, though it is a lower risk activity than an outright vulnerability scan. Even so, before such actions are commenced it is required that you gain permission in order to mitigate any potential civil-penalties that might ensue in the case of an unfortunate crash.

The professor has now limited his students to specific ranges of IPs that the professor has pre-cleared. This is all to the good. And as this comment points out, the student hadn't started scanning. The situation is handled.

Going campus-only
I'm taking this blog campus-only for the time being. This is only temporary! Should be back in a week or two. And definitely in time for Brainshare.

A very interesting article


It is long, but it goes into the case-law surrounding the words "access" and "authorization", two words that appear in pretty much every computer-security law on the books. Their interpretation is very tricky, and as of 2003 very inconsistent. A very recommended read!