Tuesday, January 05, 2010

Desktop virtualization

Virtualizing the desktop has been all the rage lately. Last year, when we were still wondering how the Stimulus Fairy would bless us, we worked up a few proposals to do just that. Specifically: what would it take to convert all of our labs to a VM-based environment?

The executive summary of our findings: it costs about the same amount of money as the regular PC replacement and imaging cycle, but saves some labor compared to the existing environment.

Verdict: No cost savings, so not worth it. Labor savings not sufficient to commit.

Every dollar we saved on hardware in the labs was spent in the VM environment. Replacing $900 PCs with $400 thin clients (not their real prices) looks cheap, but when you're spending $500/seat on ESX licensing, storage, and servers, it isn't actually cheaper. The price realities may have changed in the last 12 months, but the simple fact remains that the Stimulus Fairy bequeathed her bounty upon the salary budget to prevent layoffs rather than upon swank new IT infrastructure.

The labor savings came in the form of a unified hardware environment that minimizes the number of 'images' needing to be worked up. That minimizes the time spent updating all the images to, for instance, install a new version of SPSS. Or, in our case, to integrate the changes needed to cut over from Novell printing to Microsoft printing.

This is fairly standard for us. WWU finds it far easier to commit people resources to a project than financial ones. I've joked in the past that $5 in salary is equivalent to $1 of cash outlay when doing cost comparisons. Our time-management practices generally don't allow hour-by-hour accounting for changed business practices.

Labels: ,


Thursday, September 10, 2009

Lemonade

Two days ago (but it seems longer) the drive that holds my VM images started vomiting bad sectors. Even more unfortunately, one of the bad sectors took out the MFT clusters on my main Win XP management VM. So far that's the only data-loss, but it's a doozy. I said unto my manager, "Help, for I have no VM drive any more, and am woe." Meanwhile I evacuated what data I could. Being what passes for a Storage Administrator around here, finding the space was dead easy.

Yesterday bossman gave me a 500GB Western Digital drive and I got to work restoring service. This drive has Native Command Queueing, unlike the now-dead 320GB drive. I didn't expect that to make much of a difference, but it has. My Vista VMs (undamaged) run noticeably faster now. "iostat -x" shows await times markedly lower than they were before when running multiple VMs.

NCQ isn't the kind of feature that generally speeds up desktop performance, but in this case it does. Perhaps lots of VMs are a 'server' type load after all.
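
For anyone wanting to check the same thing on their own box, something like this will do it (the device name is an assumption; point it at whatever disk holds your VMs):

# NCQ-capable drives behind libata usually report a queue depth of 31; a value of 1 means no NCQ
cat /sys/block/sdb/device/queue_depth
hdparm -I /dev/sdb | grep -i queue
# then watch the await column for that disk while a few VMs are busy
iostat -x 5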

Labels: ,


Friday, August 28, 2009

Fabric merges

When doing a fabric merge with Brocade gear and they say the zone configuration needs to be exactly the same on both switches, they mean it. The merge process does no parsing; it just compares the zone configs. If the metaphorical diff returns anything, it doesn't merge. So if one zone has the order of two nodes swapped but is otherwise identical, it won't merge.

Yes, this is very conservative. And I'm glad for it, since failure here would have brought down our ESX cluster, and that's a very wince-worthy collection of highly visible services. But it took a lot of hand-editing to get the config exactly right on the switch I'm trying to merge into the fabric.
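
A sanity check that would have saved me some grief is to dump the zone database from both switches and diff them before attempting the merge. A rough sketch (the switch names are made up, and it assumes your FOS version lets you run a command non-interactively over ssh; otherwise capture the cfgshow output from an interactive session):

ssh admin@existing-switch cfgshow > existing-switch.cfg
ssh admin@new-switch cfgshow > new-switch.cfg
diff existing-switch.cfg new-switch.cfg

If the diff prints anything at all, the fabrics won't merge cleanly.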

Labels: , ,


Monday, August 17, 2009

SANS Virtualization

Mr. Tom Liston of ISC Diary fame is at the SANS Virtualization Summit right now. He has been tweeting it. I wish I were there, but there is zero chance of me convincing my boss to send me, even if this were a year in which out-of-state travel was allowed.

Mostly just quotes so far, but there have been a few interesting ones.

"When your server is a file, network access equals physical access" - Michael Berman, Catbird

From earlier: "You can tell how entrenched virtualization has become when the VM admin has become the popular IT scapegoat" - Gene Kim

On VMsprawl: "The 'deploy all you want, we'll right-click and make more' mentality." - Herb Goodfellow, Guident

I expect to see more as the week progresses.

Labels: ,


Friday, July 17, 2009

The continual performance tweaking of VMWare

I have VMWare Workstation installed on my workstation. It is very handy. We have an ESX cluster so I could theoretically export VMs I work up on my machine directly to the ESX hosts. I haven't done that yet, but it is possible.

Unfortunately, I've run into several performance problems since I installed it. The base system as it started, right after I switched from Xen:
  • OpenSUSE 10.2, 64-bit
  • The VM partition is running XFS
  • Intel dual core E6700 processor
  • 4GB RAM
  • 320GB SATA drives
The system as it exists now:
  • OpenSUSE 11.0 64-bit
  • The VM Partition is running XFS, with these fstab settings: logbufs=8,noatime,nodiratime,nobarrier
  • The XFS partition was reformatted with lazy-count=1, and log version 2
  • Intel quad core Q6700
  • 6GB RAM (would have been 8, but I somehow managed to break one of my RAM sockets)
  • 320GB SATA drives, with the 'deadline' scheduler set for the disk with the VM partition on it.
It still doesn't perform that great. I've done enough system monitoring to know that I'm I/O-bound. I hear ext4 is supposed to be better at this than XFS, so I just might go there when openSUSE 11.2 drops. One of these would go a long way toward fixing this problem, but I don't think I'll be able to get the funding for it.
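
For reference, the scheduler and mount-option changes above boil down to something like this (the device and mount point are assumptions; adjust for your own layout):

# put the disk holding the VM partition on the deadline elevator
echo deadline > /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/scheduler     # the active scheduler shows up in [brackets]

# matching /etc/fstab entry for the XFS VM partition
/dev/sdb1   /vmware   xfs   logbufs=8,noatime,nodiratime,nobarrier   0 0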

Labels:


Wednesday, February 11, 2009

High availability

64-bit OES provides some options for highly available file serving. Now that we've split the non-file services out of the main 6-node cluster, all that cluster is doing is NCP and a few other trivial things. What kinds of things could we do with this should we get a pile of money to do whatever we want?

Disclaimer: Due to the budget crisis, it is very possible we will not be able to replace the cluster nodes when they turn 5 years old. It may be easier to justify eating the greatly increased support expenses. We won't know until we try to replace them. This is a pure fantasy exercise as a result.

The stats of the 6-node cluster are impressive:
  • 12 P4 cores, with an average of 3GHz per core (36GHz).
  • A total of 24GB of RAM
  • About 7TB of active data
The interesting thing is that you can get a similar server these days:
  • HP ProLiant DL580 (4 CPU sockets)
  • 4x Quad Core Xeon E7330 Processors (2.40GHz per core, 38.4GHz total)
  • 24 GB of RAM
  • The usual trimmings
  • Total cost: No more than $16,547 for us
With OES2 running in 64-bit mode, this monolithic server could handle what six 32-bit nodes are handling right now. The above is just a server that matches the stats of the existing cluster. If I were really to replace the 6-node cluster with a single device, I would make a few changes to the above, such as moving to at least 32GB of RAM and using a 2-socket server instead of a 4-socket one; 8 cores should be plenty for a pure file-server this big.

A single server does have a few things to recommend it. By doing away with the virtual servers, all of the NCP volumes would be hosted on the same server. Right now each virtual-server/volume pair causes a new connection to each cluster node, so if I fail all the volumes over to the same cluster node, that node will legitimately have on the order of 15,000 concurrent connections. If I were to move all the volumes to a single server, the concurrent connection count would drop to only ~2500.

Doing that would also make one of the chief annoyances of the Vista Client for Novell much less annoying. Due to name-cache expiration, if you don't look at Windows Explorer or a file dialog in the Vista client once every 10 minutes, it'll take a freaking-long time to open that window when you do. This is because the Vista client has to enumerate and resolve the addresses of each mapped drive. Because of our cluster, each user gets no fewer than 6 drive mappings to 6 different virtual servers. Since it takes Vista 30-60 seconds per NCP mapping to figure out the address (it has to try Windows resolution methods before going to Novell resolution methods, and unlike WinXP there is no way to reverse that order), this means a 3-5 minute pause before Windows Explorer opens.

By putting all of our volumes on the same server, it'd only pause 30-60 seconds. Still not great, but far better.

However, putting everything on a single server is not what you call "highly available". OES2 is a lot more stable now, but it still isn't at the legendary stability of NetWare 3. Heck, NetWare 6.5 isn't at that legendary stability either. Rebooting for patches takes everything down for minutes at a time. Not viable.

With a server this beefy it is quite doable to build a cluster-in-a-box by way of Xen. Lay a base of SLES10 SP2 on it, run the Xen kernel, and create four VMs for NCS cluster nodes. Give each 64-bit VM 7.75GB of RAM for file-caching, and bam! Cluster-in-a-box, and highly available.
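
To give an idea of what one of those nodes might look like, here's a rough sketch of a domU definition (the names, device paths, and bridge are all made up; the 'w!' mode is what lets the shared NSS LUN be attached to more than one VM at once):

# /etc/xen/vm/ncs-node1 -- hypothetical NCS node for the cluster-in-a-box
name="ncs-node1"
memory=7936        # 7.75GB, mostly for file-cache
vcpus=2
disk=[ 'phy:/dev/system/ncs1-root,xvda,w',
       'phy:/dev/disk/by-id/scsi-shared-nss-lun,xvdb,w!' ]
vif=[ 'bridge=xenbr0' ]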

However, this is a pure fantasy solution, so chances are real good that if we had the money we would use VMWare ESX instead of Xen for the VMs. The advantage there is that we don't have to keep the VM and host kernel versions in lock-step, which reduces downtime. There would be some performance degradation, and clock skew would be a problem, but at least uptime would be good; no need to perform a CLUSTER DOWN when updating kernels.

Best case, we'd have two physical boxes so we can patch the VM host without having to take every VM down.

But I still find it quite interesting that I could theoretically buy a single server with the same horsepower as the six servers driving our cluster right now.

Labels: , , , , , , ,


Friday, September 19, 2008

Monitoring ESX datacenter volume stats

A long while back I mentioned I had a perl script that we use to track certain disk space details on my NetWare and Windows servers. That goes into a database, and it can make for some pretty charts. A short while back I got asked if I could do something like that for the ESX datacenter volumes.

A lot of googling later, I found out how to turn on the SNMP daemon on an ESX host, and a script or two to publish the data I need over SNMP. It took some doing, but it ended up being pretty easy: one new perl script, the right config for snmpd on the ESX host, setting the ESX host's security policy to permit SNMP traffic, and pointing my gathering script at the host.

The perl script that gathers the local information is very basic:
#!/usr/bin/perl -w

use strict;

# Arguments: the VMFS volume's UUID and a friendly name to report it under.
my $vmfsvolume = "/vmfs/volumes/$ARGV[0]";
my $vmfsfriendly = $ARGV[1];
my $capRaw = 0;
my $capBlock = 0;
my $blocksize = 0;
my $freeRaw = 0;
my $freeBlock = 0;
my $freespace = "";
my $totalspace = "";

# Ask vmkfstools for the capacity line of this VMFS volume.
open(Y, "/usr/sbin/vmkfstools -P $vmfsvolume|") or die "can't run vmkfstools: $!";
while (<Y>) {
  if (/Capacity ([0-9]*).*\(([0-9]*).* ([0-9]*)\), ([0-9]*).*\(([0-9]*).*avail/) {
    # Capture the capacity and free-space figures from the "Capacity" line.
    $capRaw = $1;
    $capBlock = $2;
    $blocksize = $3;
    $freeRaw = $4;
    $freeBlock = $5;
    $freespace = $freeBlock;
    $totalspace = $capBlock;
    $blocksize = $blocksize/1024;
    #print ("1 = $1\n2 = $2\n3 = $3\n4 = $4\n5 = $5\n");
    print ("$vmfsfriendly\n$totalspace\n$freespace\n$blocksize\n");
  }
}
close(Y);


Then append the following lines (in my case) to the /etc/snmp/snmpd.conf file:

exec .1.3.6.1.4.1.6876.99999.2.0 vmfsspace /root/bin/vmfsspace.specific 48cb2cbc-61468d50-ed1f-001cc447a19d Disk1

exec .1.3.6.1.4.1.6876.99999.2.1 vmfsspace /root/bin/vmfsspace.specific 48cb2cbc-7aa208e8-be6b-001cc447a19d Disk2


The first parameter after exec is the OID to publish. The script returns an array of values, one element per line, that are assigned to .0, .1, .2 and on up. I'm publishing the details I'm interested in, which may be different than yours. That's the 'print' line in the script.

The script itself lives in /root/bin/ since I didn't know where better to put it. It has to have execute rights for Other, though.
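
Running it by hand is a quick way to check that it works; the first argument is the VMFS volume's UUID (more on where that comes from below) and the second is the friendly name:

/root/bin/vmfsspace.specific 48cb2cbc-61468d50-ed1f-001cc447a19d Disk1

It should print four lines — the friendly name, total space, free space, and block size — which is exactly what gets published over SNMP.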

The big unique-ID-looking number is just that, a UUID. It is the UUID assigned to the VMFS volume. The VMFS volumes are multi-mounted between the ESX hosts in that particular cluster, so you don't have to worry about chasing the node that has it mounted. You can find the number you want by logging in to the ESX host over SSH and doing a long directory listing of the /vmfs/volumes folder. The friendly name of your VMFS volume is symlinked to the UUID. The UUID is what goes into the snmpd.conf file.

The last parameter ("Disk1" and "Disk2" above) is the friendly name of the volume to publish over SNMP. As you can see, I'm very creative.
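
After restarting snmpd on the host, you can sanity-check the whole thing from the monitoring side with a plain snmpwalk of the base OID (the community string and hostname here are made-up examples):

snmpwalk -v 2c -c public esx-host-01 .1.3.6.1.4.1.6876.99999.2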

These values are queried by my script and dropped into the database. Since the ESX datacenter volumes only get space consumed when we provision a new VM or take a snapshot, the graph is pretty chunky rather than curvy like the graph I linked to earlier. If VMware ever changes how the vmkfstools command returns data, this script will break. But until then, it should serve me well.

Labels: , , ,


Wednesday, September 10, 2008

That darned budget

This is where I whine about not having enough money.

It has been a common complaint amongst my co-workers that WWU wants enterprise-level service on a SOHO budget, especially for the Win/Novell environments. Our Solaris stuff is tied in closely to our ERP product, SCT Banner, and that gets a big budget every 5 years to replace it. We really need the same kind of thing for the Win/Novell side of the house, such as the disk-array replacement project we're doing right now.

The new EVAs are being paid for by Student Tech Fee, and not out of a general budget request. This is not how these devices should be funded, since the scope of this array is much wider than just student-related features. Unfortunately, STF is the only way we could get them funded, and we desperately need the new arrays. Without the new arrays, student service would be significantly impacted over the next fiscal year.

The problem is that the EVA3000 contains between 40-45% directly student-related storage. The other 55-60% is Fac/Staff storage. And yet, the EVA3000 was paid for by STF funds in 2003. Huh.

The summer of 2007 saw a Banner Upgrade Project, when the servers that support SCT Banner were upgraded. This was a quarter million dollar project and it happens every 5 years. They also got a disk-array upgrade to a pair of StorageTek (SUN, remember) arrays, DR replicated between our building and the DR site in Bond Hall. I believe they're using Solaris-level replication rather than Array-level replication.

The disk-array upgrade we're doing now got through the President's office just before the boom came down on big expensive purchases. It then languished in the Purchasing department due to summer-vacation-related understaffing. I hate to think how late it would have gone had it been subjected to the added paperwork we now have to go through for any purchase over $1000. Under no circumstances could we have done it before Fall quarter, which would have been bad, since we were too short on storage to deal with the expected growth for Fall quarter.

Now that we're going deep into the land of VMWare ESX, centralized storage arrays are line-of-business infrastructure. Without the STF-funded arrays, we'd be stuck with "departmental" and "entry-level" arrays such as the much-maligned MSA1500, or building our own iSCSI SAN from component parts (a DL385 with 2x 4-channel SmartArray controller cards and 8x MSA70 drive enclosures, running NetWare or Linux as an iSCSI target, with bonded GigE ports for throughput). Which would blow chunks. As it is, we're still stuck using SATA drives for certain 'online' uses, such as a pair of volumes on our NetWare cluster that are low-usage but big consumers of space. Such systems are not designed for the workloads we'd have to subject them to, and are very poor performers when doing things like LUN expansions.

The EVA is exactly what we need to do what we're already doing for high-availability computing, yet it is always treated as an exceptional budget request when it comes time to do anything big with it. Since these things are hella expensive, the budgetary powers-that-be balk at approving them and like to defer them for a year or two. We asked for a replacement EVA in time for the last academic year, but the general-budget request got denied. For this year we went, IIRC, with both general-fund and STF proposals. The general fund got denied, but STF approved it. This needs to change.

By October, every person between me and Governor Gregoire will be new. My boss is retiring in October, my grandboss was replaced last year, my great-grandboss has also been replaced in the last year, and the University President stepped down on September 1st. Perhaps the new people will have a broader perspective on things and might permit the budget priorities to be realigned to the point that our disk arrays are classified as the critical line-of-business investments they are.

Labels: , , , , , , , , , , , ,


Wednesday, August 27, 2008

Woot!

The EVAs are scheduled to be delivered today! This means we are very probably going to be taking almost every IT system we have down starting late Friday 9/5 and going until we're done. We have a meeting in a few minutes to talk strategy.

There was some fear that the gear wouldn't get here in time for the 9/5 window. During the 9/12 window, one of the key, key people needed to handle the migration will be in Las Vegas for VMWorld, and he won't be back until 9/21, which also screws with the 9/19 window. The 9/19 window is our last choice anyway, since that weekend is move-in weekend and the outage would be vastly more noticeable with students around. Being able to make the 9/5 window is great! We need these so badly that if we hadn't gotten the gear in time, we'd probably have done it 9/12 even without said key player.

The one hitch is if HP can't do 9/5-6 for some reason. Fret. Fret.

Labels: , , ,


Thursday, August 14, 2008

Virtualization and Fileservers

There are some workloads that fit well within a VM of any kind, and others that are very tricky. Fileservers are one workload that is not a good candidate for virtualization. In some cases they qualify as highly transactional. In others, the memory required to do fileserving well makes them very expensive. When you can fit 40 web-servers on a VM host but only 4 fileservers, the calculus is obvious.

This is on my mind since we're running into memory problems on our NetWare cluster. We've just plain outgrown the 32-bit memory space for file-cache. NetWare can use memory above the 4GB line (it does have PAE support), but memory access above the line is markedly slower than it is below it. Last I heard, the conventional wisdom is that 12GB is about the point where it starts earning you performance gains again. Eek!

So, I'm looking forward to 64-bit memory spaces and OES2. 6GB should do us for a few years. That said, 6GB of actually-used RAM in a virtual-host means that I could fit... two of them on a VM server with 16GB of RAM.

16GB of RAM in, say, an ESX cluster is enough to host 10 other servers, especially with memory deduplication. In the case of my hypothetical 6GB file-servers, 5.5GB of that RAM will be consumed by file-cache that is unique to that server, so there is very little to gain from memory de-dup.

In the end, how well a fileserver fits in a VM environment is based on how large a 'working set' your users have. If the working set is large enough, it can mean that you'll see only small gains from virtualization. However, I realize fileserving on the scale we do it is somewhat rare, so for departmental fileservers VM can be a good-sized win. As always, know your environment.

In light of the budgetary woes we'll be having, I don't know what we'll do. Last I heard, the State is projected to have a $2.7 billion deficit for the 2009-2011 budget cycle (the fiscal year starts July 1). So it may very well be that the only way I'll get access to 64-bit memory spaces is in an ESX context. That may mean a 6-node cluster on 3 physical hosts. And that's assuming I can get new hardware at all. If it gets bad enough I'll have to limp along until 2010 and play partitioning games to load-balance my data across all 6 nodes. By 2011 all of our older hardware falls off of cheap maintenance and we'll have to replace it, so worst-case that's when I can do my migration to 64-bit. Arg.

Labels: , , , ,


Wednesday, June 11, 2008

Shrinking data-centers

This is the 901st post of this blog. Huh.

ComputerWorld had a major article recently, "Your Next Data Center", subtitled, "Companies are outgrowing their data centers faster than they ever predicted. It's time to rethink and rebuild."

That is not the case with us. Ours is shrinking, and I'm not alone in this. I know other people who are experiencing the same thing.

The data-center we have right now was built sometime between 1999 and 2000. I'm not sure exactly when it was, as I wasn't here. I like to think they planned 20 years of growth into it, as that's how long the previous data-center lasted.

When I first started here in late 2003, the workhorse servers supporting the largest share of our Intel-based services were HP ML530 G1's (here are the G2's, the same size as the G1's), with some older HP LH3 servers still in service. The freshly installed 6-node Novell NetWare cluster had 3 ML530's and 3 rack-dense DL380's. If I'm remembering right, at that time we had two other rack-dense servers. The rest were these 7U monsters, and we could cram 4 to a rack.

With the 7U ML530's as the primary machine, it would seem that the planners of our data-center did not take 'rack dense' into consideration. This was certainly the case with the racks they decided to install, since they planned around a very old-school bottom-to-top venting scheme; something I've spent considerable time and innovation trying to revise. They also heard stats like "20% growth in the number of servers year-over-year" and planned enough floor space to handle it.

Right this moment we're poised to occupy a lot LESS rack-space than we once did. For this, I thank two major trends, and a third chronic one:
  1. Replacing the 7U monsters with 1U servers
  2. Virtualization
  3. No budget for massive server expansions
We're still consuming the same amount of power as we were 2 years ago, but the number of rack units drawing that power has shrunk. We still have most of those ML530's, but they've all been relegated to 2nd or 3rd line duties like test/deployment servers or single-function utility servers. They're all coming off of maintenance soon (they're like 5-7 years old now), so I'm not 100% sure what we're replacing them with. Probably more VM servers, if we can kick the money tree hard enough.

One thing we have been having growing pains over is power draw. The reason we're drawing the same as we were 2 years ago is largely that we're coming close to the rated max for our UPS, and replacing the UPS is a major, major capital-request nightmare. It would seem that upgrading our UPS triggers certain provisions in the local building code that would require us to bring the data-center up to the latest code. The upgrades required to do that are prohibitive, and most likely would require us to relocate all of our gear during the construction process. Since I haven't heard any rumors of us starting the capital-request process, I'm guessing we're not due for another UPS any time soon. This... concerns me.

One side-effect of being power-limited is that our cooling capacity isn't anywhere NEAR stressed yet.

But when it comes to square footage, we have lots of empty space. We are not shoe-horning servers into every available rack-unit. We haven't resorted to housing servers in with the sysadmin staff.

Labels: ,


Thursday, May 29, 2008

OES2 and SLES10-SP2

Per Novell:

Updating OES2

OES2 systems should NOT be updated to SLES10 SP2 at this time!
Very true. And most especially true if you're running virtualized NetWare! The paravirtualization components in NW65SP7 are designed around the version of Xen that's in SLES10-SP1, and SP2 contains a much newer version of Xen (trying to play catch-up to VMWare means a fast dev cycle, after all). So, expect problems if you do it.

Also, the OES2 install does contain some kernel packages, such as those relating to NSS.

OES2 systems need to wait until either Novell gives the all-clear for SP2 deployments on OES2 FCS, or OES2 SP1 ships. OES2 SP1 is built around SLES10 SP2.

Labels: , , , , ,


Wednesday, May 14, 2008

NetWare and Xen

Here is something I didn't really know about in virtualized NetWare:

Guidelines for using NSS in a virtual environment

Towards the bottom of this document, you get this:

Configuring Write Barrier Behavior for NetWare in a Guest Environment

Write barriers are needed for controlling I/O behavior when writing to SATA and ATA/IDE devices and disk images via the Xen I/O drivers from a guest NetWare server. This is not an issue when NetWare is handling the I/O directly on a physical server.

The XenBlk Barriers parameter for the SET command controls the behavior of XenBlk Disk I/O when NetWare is running in a virtual environment. The setting appears in the Disk category when you issue the SET command in the NetWare server console.

Valid settings for the XenBlk Barriers parameter are integer values from 0 (turn off write barriers) to 255, with a default value of 16. A non-zero value specifies the depth of the driver queue, and also controls how often a write barrier is inserted into the I/O stream. A value of 0 turns off XenBlk Barriers.

A value of 0 (no barriers) is the best setting to use when the virtual disks assigned to the guest server’s virtual machine are based on physical SCSI, Fibre Channel, or iSCSI disks (or partitions on those physical disk types) on the host server. In this configuration, disk I/O is handled so that data is not exposed to corruption in the event of power failure or host crash, so the XenBlk Barriers are not needed. If the write barriers are set to zero, disk I/O performance is noticeably improved.

Other disk types such as SATA and ATA/IDE can leave disk I/O exposed to corruption in the event of power failure or a host crash, and should use a non-zero setting for the XenBlk Barriers parameter. Non-zero settings should also be used for XenBlk Barriers when writing to Xen LVM-backed disk images and Xen file-backed disk images, regardless of the physical disk type used to store the disk images.

Nice stuff there! The "xenblk barriers" can also have an impact on the performance of your virtualized NetWare server. If your I/O stream runs the server out of cache, performance can really suffer if barriers are non-zero. If it fits in cache, the server can reorder the I/O stream to the disks to the point that you don't notice the performance hit.

So, keep in mind where your disk files are! If you're using one huge XFS partition and hosting all the disks for your VM-NW systems on that, then you'll need barriers. If you're presenting a SAN LUN directly to the VM, then you'll need to "SET XENBLK BARRIERS = 0", as they're set to 16 by default. This'll give you better performance.

Labels: , , , , , ,


Monday, May 05, 2008

Linux @ Home

My laptop at home dual-boots between openSUSE and WinXP. There are a few reasons why I don't boot the Linux side very often, some of them work-related. And, what the heck, here are the two big ones.

1: Wireless driver problems
I have an Intel 3945 WLAN card. It works just fine in Linux; it's well supported. What throws it for a loop, however, are the sleep and hibernate states. It can go one, two, four, maybe five cycles through sleep before it will require a reboot in order to find the home wireless again. That is, if it doesn't lock the laptop up hard. Since my usage patterns are heavily dependent upon sleep mode, this is a major, major disincentive to keep the Linux side booted.

I understand the 2.6.25 kernel is a lot better about this particular driver. Thus, I eagerly await the release of openSUSE 11.0. This driver is currently the ipw3945 driver, and will eventually turn into the iwl3945 driver once it comes down the pipe. What little I've read about it suggests that the iwl driver is more stable through power states.

2: NetWare remote console
I use rconip for remote console to NetWare. Back when Novell first created the IP-based rconsole, they released rconj alongside ConsoleOne to provide it. As that was written in Java, it was mind-bogglingly slow. This little .exe file was vastly faster, and I've come to use it extensively. Unless I get Wine working, this tool will have to stay on my Windows XP partition. It works great, and I haven't found a good Linux-based replacement yet.

Time has moved on. Hardware has gotten faster, and the 'Java penalty' has shrunk markedly. RconJ is actually usable now, but I still don't use it. Plus, it would require me to install ConsoleOne onto my laptop. It's 32-bit, so that's actually possible, but I really don't want to do that.

The remote console in Novell Remote Monitor (that service out on :8009) is a nice utility, but it also requires Java. I'm still biased against Java, and Java-on-Linux still seems fairly unstable to me. I don't trust it yet. It also doesn't scale well. When I'm service-packing, it is a LOT nicer to have 6 rconip windows up than 6 browser-based NRM Java consoles open. Plus, rconip will allow me access to the server console if DS is locked, something that NRM can't do and that is invaluable in an emergency.

Once the wireless driver problems are fixed, I'll boot the Linux side much more often. Remote X over SSH actually makes some of my remote management a touch easier than it is in WinXP. And if I really, really need to use Windows, my work XP VM is accessible over rdesktop. There are a few other non-work reasons why I don't boot Linux very often, but I'll not go into those here.

So, oddly, NetWare is partly responsible for keeping me in Windows at home. But only partly.

Labels: , , , ,


Monday, March 17, 2008

Today at Brainshare

Monday. Opening day. I had trouble getting to sleep last night due to a poor choice of bed-time reading (don't read action, don't read action, don't read action). And had to get up at 6am body time in order to get breakfast before the morning keynote. There be zombies.

Breakfast was uninspired. As per usual, the hashbrowns had cooled to a gelid mass before I found everything and got a seat.

The Monday keynotes are always the CxO talks about strategy and where we're going. Today a mess of press releases from Novell gives a good idea of what the talks were about. Hovsepian was first, of course, and was actually funny. He gave some interesting tidbits of knowledge.
  • Novell's group of partners is growing, adding a couple hundred new ones since last year. This shows the Novell 'ecosystem' is strong.
  • 8700 new customers last year
  • Novell press mentions are now only 5% negative.
Jeff Jaffe came on to give the big wow-wow speech about Novell's "Fossa" project, which I'm too lazy to link to right now. The big concern is agility. He also identified several "megatrends" in the industry:
  • High Capacity Computing
  • Policy Engines
  • Orchestration
  • Convergence
  • Mobility
I'm not sure what 'Convergence' is, but the others I can take a stab at. Note the lack of 'virtualization' in this list. That's soooo 2007. The big problem is now managing the virtualization, thus Orchestration. And Policy Engines.

Another thing he mentioned several times in association with Fossa and agility, is mergers and acquisitions. This is not something us Higher Ed types ever have to deal with, but it is an area in .COM land that requires a certain amount of IT agility to accommodate successfully. He mentioned this several times, which suggests that this strategy is aimed squarely at for-profit industry.

Also, SAP has apparently selected SLES as their primary platform for the SMB products.

Pat Hume from SAP also spoke. But as we're on Banner, and it'll take a sub-megaton nuclear strike to get us off of it, I didn't pay attention and used the time to send some emails.

Oh, and Honeywell? They're here because they have hardware that works with IDM. That way the same ID you use for your desktop login can be tied to the RFID card in your pocket that gets you into the datacenter. Spiffy.

ATT375 Advanced Tips & Tricks for Troubleshooting eDir 8.8
A nice session. Hard to summarize. That said, they needed more time, as the laptops running VMWare weren't fast enough for us to get through many of the exercises. They also showed us some nifty iMonitor tricks. And where the high-yield shoot-your-foot-off weapons are kept.

BUS202 Migrating a NetWare Cluster to OES2
Not a good session. The presenter had a short slide deck, and didn't really present anything new to me other than areas where other people have made major mistakes. And to PLAN on having one of the Linux migrations go all lost-data on you. He recommended SAN snapshots. It shortly digressed into "Migrating a NetWare Cluster to Linux HA", which is a different session altogether. So I left.

TUT215 Integrating Macintosh with Novell
A very good session. The CIO of Novell Canada was presenting it, and he is a skilled speaker. Apparently Novell has written a new AFP stack from scratch for OES2 SP1, since NETATALK is comparatively dog slow. And, it seems, the AFP stack is currently outperforming the NCP stack on OES2 SP1. Whoa! Also, the Banzai GroupWise client for Mac is apparently gorgeous. He also spent quite a long time (18 minutes) on the Kanaka client from Condrey Consulting. The guy who wrote that client was in the back of the room and answered some questions.

Labels: , , , , , ,


Thursday, October 25, 2007

Virtualization and security

I've known for a while now that virtualization as it exists in x86 space is not a security barrier. Heck, it was stated outright at BrainShare 2006 when Novell started pushing AppArmor. The Internet Storm Center people had an article on it a month ago. And now we have an opinion from the OpenBSD creator about it, which you can read here.

It sounds like the main reason virtualization isn't a security barrier is the CPU architecture. Intel is making advances here; witness the existence of the VT extensions. And as virtualization becomes more ubiquitous in the marketplace, Intel will keep making their CPUs more virtualization-friendly. Which is to say that they're not very VM-friendly now.

And as Theo stated in his thread, "if the actual hardware let us do more isolation than we do today, we would actually do it in our operating system." Process separation is its own form of 'virtualization', and is something that is handled in software right now. Anything in software can be subverted by software, so having a hardware-enforceable boundary makes things stay where they are put.

Which is why I hold the opinion that you should group virtual machines with similar security requirements on the same physical hardware, but separate machines subject to different regulations and requirements. Put another way, do not host the internal time-card web-server VM on the same hardware as your public web-server, even if they're on completely different networks. Or, do not host HIPAA-subject VMs on the same ESX cluster as your BlackBerry Enterprise Server VM.

Virtualization as it exists now in x86-space does provide a higher barrier to get over to fully subvert the hardware. Groups only interested in the physical resources of a server, such as CPU or disk, may not even need to subvert the hypervisor to get what they want; so no need to break out of jail. Groups intent on thievery of information may have to break out of jail to get what they want, and they'll invest in the technology to do just that.

Warez crews don't give a damn about virtualization; they just want an internet-facing server with lots of disk space they can subvert. That can be a VM or a physical server for all they care. They're not the threat, though the resource demands they can place on a physical server may cause problems on unrelated VMs due to simple resource starvation.

The real threat is the cabals looking to steal information for resale. They are the ones who will go to the effort to bust out of the VM jail. They're a lot harder to detect since they don't cause huge bandwidth spikes the way the warez crews do. They've always been our worst enemy, and virtualization doesn't do much at all to prevent them from gaining access. In fact, virtualization may ease their job as we group secure and insecure information on the same physical hardware.

Labels: , ,


Tuesday, October 09, 2007

Xen in 10.3

Because I couldn't get a good video driver working for it, I haven't spent much time in the new Xen. I believe it is Xen 3.1. And yes, it IS a lot faster on the network. Wow. It used to be painful; now it is improved quite a bit. I just patched the Windows VM on my work machine, and the network transfer went really fast. Then I had to turn it off since I needed my Linux desktop back.

And now I get to see how to convert a Xen disk-image into a vmware disk image. I know it can be done, but I haven't dug up the script or whatever.
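
If it turns out there isn't a dedicated script, qemu-img can probably do the heavy lifting. Something like this (the paths are made up, and it assumes the Xen image is a plain raw disk image):

qemu-img convert -f raw -O vmdk /var/lib/xen/images/winxp/disk0 winxp.vmdk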

Labels: ,


Friday, October 05, 2007

openSUSE 10.3 is in

No problems, but one. The "nvidia in Xen" problem is back. The prior solution isn't working yet. Unresolved symbols. This could be a work-stopper, or the thing that makes me move all the way to VMWare.

Labels: , ,


Tuesday, October 02, 2007

NCL 2.0 beta and Xen, pt 2

I now have server-side and client-side captures. The server side shows what's really going on. It is clear that those jumbos I talked about earlier are being disassembled in the Xen network stack. The client traces look similar to this:

-> NCP to server
<- Ack
-> NCP to server
-> NCP to server
<- Ack
<- Ack
-> NCP to server
<- Ack

With variations on the order. The NCP to server packets are all jumbos. From the server side it looks a lot different. The same sequence from the server side:

-> NCP to server
-> NCP to server
-> NCP to server
<- Ack
-> NCP to server
-> NCP to server
-> NCP to server
<- Ack
-> NCP to server
-> NCP to server
-> NCP to server
<- Ack
-> NCP to server
-> NCP to server
-> NCP to server
<- Ack

What's more, the server sees a marked delay between the <- Ack and the first -> NCP to server. On the client side the delays are between the -> NCP to server and the responding <- Ack. I interpret this as showing a packet-disassembly delay in the Xen stack.

What I can't figure out is how the jumbos are getting onto the network stack at all. The configured network interfaces (except for loopback) in the Dom0 all have MTU values of 1500. I suspect NCL throughput for higher record sizes would improve markedly if it didn't force the Xen layer to disassemble the jumbos. Overriding the MTU on an interface is something that can only be done in kernel-space (I think), which would point to the novfs kernel module.

Labels: , ,


Monday, October 01, 2007

NCL 2.0 beta and Xen

I found a good candidate for why the Novell Client for Linux 2.0-beta is so crappy in a Xen kernel. Take a look at this:

Frame 6 (4434 bytes on wire, 4434 bytes captured)

What the heck? What it should look like is this:

Frame 6 (1514 bytes on wire, 1514 bytes captured)

So I went to our network guys and asked, "Have we turned on jumbo frames anywhere?" No, we haven't. Anywhere. Which I pretty much knew, since we're not doing any iSCSI. So where the heck is that jumbo coming from? The only thing I can think of is that the sniffing layer I'm capturing at is above the layer that grabs what actually hits the wire, and something between the sniffer and the wire is converting these jumbos to the normal 1514-byte ethernet frame size, and that's where my lag is coming from.

This is a case where I'd like to span a port and get a sniff of what actually hits the wire so I can compare.
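
Short of a span port, one thing worth trying is capturing inside the Dom0 at two different points. The default Xen bridge setup renames the physical NIC to peth0 and leaves Dom0's own traffic on the virtual eth0, so comparing the two should show where the jumbos get chopped up (interface names assume that default Xen bridge naming; 524 is the NCP-over-IP port):

tcpdump -i eth0 -s 0 -w above-bridge.pcap tcp port 524
tcpdump -i peth0 -s 0 -w near-wire.pcap tcp port 524

If the 4000-byte frames show up on eth0 but only 1514-byte frames show up on peth0, the disassembly is happening somewhere in the bridge path.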

Labels: , ,


Monday, September 24, 2007

Virtualization and Security

It's been a few days for it.

Two BrainShares ago, when I first heard about AppArmor, the guy giving the demo was very, very clear that virtualization is not a security barrier. Especially AppArmor. This may seem a bit contradictory, considering what AppArmor is supposed to do. What he meant was that you should not rely on AppArmor to provide separation between two applications with very different security postures. Physical separation is best.

That extends to full virtualization products like VMWare or XenSource. On Saturday the Internet Storm Center had a nice diary entry on this very topic. To summarize: malware already detects virtual machines and changes its behavior accordingly. Last Friday, VMWare released a patch for ESX Server that fixes some very interesting security problems. The patch also links to CVE-2007-4496, which is well worth a read. In short, an administrative user in a guest OS can corrupt memory or possibly execute code in the host OS. These are the kinds of vulnerabilities that I'm worried about.

Any time you run on shared hardware, the possibility exists of 'leaking' across instances. Virtualization on x86 is still primitive enough that the barriers between guest OS instances aren't nearly as high as they are on, say, IBM mainframes, which have been doing this sort of thing since the 1960's. I fully expect Intel (and AMD, if they can keep up) to make the x86 CPU ever more virtualization-friendly. But until we get robust hardware enforcement of separation between guest OS instances, we'll have to do the heavy lifting in software.

Which means that a good best practice is to restrict the guests that run on a specific virtualization host or cluster to servers with similar security postures. Do not mix the general web-server with the credit-card processing server (PCI). Or the credit-card processing server (PCI) with the web interface to your medical records (HIPAA). Or the bugzilla web-server for internal development (trade secrets) with the general support web-server.

Yes, this does reduce the 'payback' for using virtualization technology in the first place. However, it is a better posture. Considering the rate of adoption of VM technology in the industry, I'm pretty sure the black-hat crowd is actively working on ways to subvert VM hosts through the guests.

Labels: ,


Tuesday, September 18, 2007

OES2: clustering

I made a cluster inside Xen! Two NetWare VMs inside a Xen container. I had to use a SAN LUN as the shared device since I couldn't make it work against just a single file. Not sure what's up with that. But it's a cluster, and the volume moves between the two just fine.

Another thing about speeds, now that I have some data to play with. I copied a bunch of user directory data over to the shared LUN. It's a piddly 10GB LUN, so it filled up quickly. That's OK; it should give me some idea of transfer times. Doing a TSATEST backup from one cluster node to the other (i.e. inside the Xen bridge) gave me speeds on the order of 1000MB/min. Doing a TSATEST backup from a server in our production tree to the cluster node (i.e. over the LAN) gave me speeds of about 350MB/min. Not so good.

For comparison, doing a TSATEST backup from the same host, only drawing data from one of the USER volumes on the EVA (highly fragmented, but much faster, storage), gives a rate of 550MB/min.

I also discovered the VAST DIFFERENCES between our production eDirectory tree, which has been in existence since 1995 if the creation timestamp on the tree object is to be believed, and the brand-new eDir 8.8 tree the OES2 cluster is living in. We have a heckova lot more attributes and classes in the prod tree than in the new one. Whoa. It made for some interesting challenges when importing users into it.

Labels: , , , ,


Thursday, September 13, 2007

OES2: virtualization

I have the beta up and running: a pair of OES2-NW servers running in Xen on SLES10 SP1. And it loads just spiffy. I haven't done any performance testing on it; it's kind of hard to really interpret results at this point anyway.

What I HAVE been spending time on is seeing if it is possible to get a cluster set up. Clusters, of course, rely on shared storage. And if it works the way I need it to work, I need multiple Xen machines talking to the same LUNs. It may be doable, but I'm having a hard time figuring it out. The documentation on Xen isn't what you'd call complete. Novell has some in the SLES10 SP1 documentation, but the stuff in the OES2 documentation is... decidedly overview-oriented. This is the most annoying thing, as I can't just put my nose to a manual and find it.

So, I'm looking for a Xen manual. It has to be around somewhere. My google-fu failed me today.

Labels: , , ,


Friday, July 20, 2007

SUSE driver pack for windows

Novell released the "SUSE Linux Enterprise Virtual Machine Driver Pack" today. You can find it on the downloads site. A word of warning, though, from the Documentation:

1.9 Avoiding Problems with the Drivers

[...]
  • Upgrading the Linux* kernel of the virtual machine host without upgrading the driver pack at the same time.
So, you can't run it on openSUSE (different kernel), and since SLES10 SP1 has already had a kernel update, you can't use it THERE without a subscription. So no freebie.

But, the fact that they've released it is great. Also, they list Windows XP drivers as part of the download. Yay!

Labels: , ,


Monday, July 09, 2007

More fun OES2 tricks

I had an idea while I was googling around a bit ago. This may not work the way I expect as I'm not 100% on the technologies involved. But it sounds feasible.

Let's say you want to create a cluster mirror of a 2-node cluster for disaster-recovery purposes. This will need at least four servers to set up. You have shared storage for both cluster pairs. So far so good.

Create the four servers as OES2-Linux servers. Set up the shared storage as needed so everything can see what it should in each site. Use DRBD to create new block devices that'll be mirrored between the cluster pairs. Then set up NetWare-in-VM on each server, using the DRBD block devices as the cluster disk devices. You could even put SYS: on the DRBD block devices if you want a true cluster clone. That way, when disk I/O happens on the clustered resources it gets replicated asynchronously to the DR site; unlike software RAID1, the I/O is considered committed when it hits local storage, whereas SW RAID1 only considers writes committed when all mirrored LUNs report the commit.
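
For the DRBD piece, the idea is one resource per clustered device, running in asynchronous mode so the local commit doesn't wait on the DR site. A minimal sketch, assuming DRBD 8.x-style configuration; the hostnames, devices, and addresses are all made up:

# /etc/drbd.conf fragment
resource clustervol0 {
  protocol A;                     # async: a write is committed once it hits local disk
  on primary-site-node {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.10.1.10:7788;
    meta-disk internal;
  }
  on dr-site-node {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.20.1.10:7788;
    meta-disk internal;
  }
}

The NetWare VM on each cluster pair would then be handed /dev/drbd0 as its cluster disk device.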

Then, if the primary site ever dies, you can bring up an exact replica of the primary cluster on the secondary cluster pair. Key details, like how to get the same network in both locations, I leave as an exercise for the Cisco engineers. But still, an interesting idea.

Labels: , , , ,


Tuesday, July 03, 2007

OES2: pushed several months

A new post up on Cool Blogs shows where OES2 is sitting:

http://www.novell.com/coolblogs/?p=921

To quote from one of the comments by the author:
There will be a public beta. It might take couple of months more for a public beta.
This blows my schedule. From the sounds of it, they're looking at a Christmas or possibly BrainShare 2008 release. We'll have to put NetWare inside ESX Server instead of a Xen paravirtualization. Due to this delay, and the presumed SP1 schedule, chances are now much worse that we'll make the summer intersession 2008 migration window.

Crap.

Labels: , , ,


Wednesday, June 13, 2007

Still waiting

Any day now OES2 will come out.

Any day now.

Any day now I'll get a paravirtualizable NetWare and will be able to run it through its paces.

Any day now I'll get to try and figure out how Xen virtualization of NetWare interacts with an HP MSA1500cs.

But not today.

Labels: , , , ,


Tuesday, March 20, 2007

TUT211: NetWare virtualization

  • Xen 3.0.4+ is the codebase. They wanted 3.0.5, but Xensource didn't get the release out in time for that.
  • Server.EXE contains the bulk of the paravirtualization code.
  • New loader, XNLOADER.SYS replaces NWLOADER.SYS, if used in Xen.
  • New console driver. The old method, writing directly to video memory, won't work in a paravirtualized VM.
  • New PSM: XENMP.PSM. Permits SMP in Xen.
  • So far, no "P2V" equivalent application, though they promise something by the time OES2 ships.
  • Improved VM management screens.

Labels: , , , ,


Friday, March 16, 2007

Xen, cdroms, and tricks

One of the things I managed to figure out today is how to get a DVD drive visible in a Xen DomU, and how to tell the DomU that the media has been changed. First off, configuring your VM to have an optical drive in the first place:
disk = [ 'phy:/dev/sdb5,ioemu:hda,w','phy:/dev/sr0,hdc:cdrom,r']
The second entry in the disk list is the one that attaches the physical drive to the DomU. That'll give you a CD device in your DomU. Unless you have a disc in the drive when the DomU is started, you won't see anything. Here is where the next bit comes in.

Unknown to me until now, there is a key-combination that allows you to manage the devices in a DomU.

[ctrl]+[alt]+[2]

That will take you to the HVM management screen. Type 'help' for what commands you can issue here. To tell the DomU that the optical device is ejected:
eject hdc
Where "hdc" is the device you configured in your VM config file.

Then change your media, and at the same screen, issue the command:
change hdc /dev/sr0
This tells the DomU that the optical device has new media, and to scan it.

To get back to the graphical screen:
[ctrl]+[alt]+[1]
This screen works similarly to the NetWare debugger, in that all processing in the VM stops while you're in it. The eject command lets the VM run just enough to process the eject, but not enough to run all the other processes. So be aware that time sync will get screwed up if you stay in this screen too long.

Labels:


Monday, February 12, 2007

Novell, Microsoft, and Xen

Novell put out a press release today.

It turns out that Intel has worked out some drivers for use by Windows inside a Xen paravirtualized container. This is distinct from the 'full virtualization' possible only in conjunction with things like the Intel VT instructions. I expected this to maybe be ready in time for SLES10 SP1, if we were lucky.

This is of great interest to me. I'm running Windows on Xen right now, in full-virtualization mode. Network performance is decidedly poor, though the rest of it works reasonably well. I'd like to run it paravirtualized if at all possible, as that runs faster. Unfortunately, the drivers mentioned in the PR aren't generally available yet.

Labels: , ,


Monday, January 29, 2007

An incompatibility

I've been working on Zen Asset Management 7.5 the past few days. In the process I discovered a rather significant incompatibility with the client. Well, significant for me since it'll make client testing harder.

When run on a Windows XP virtual machine running on the Xen 3.0.3 that comes with openSUSE 10.2, it causes the clock in the VM to slow w-a-y down. On the order of 1 guest tick for every 30 ticks on the host machine. This makes it unusable in a rather significant way.

It's also completely unfixable! Running Windows in a full VM in Xen on openSUSE 10.2 is an unsupported operation. I have the CPU for it, and it runs pretty well in every other way. But something the inventory process does causes some Xen emulation to go 'poink' in a bad way. It is so bad that even after the VM is powered off and the BIOS is putting the virtual machine to rest, it STILL takes a very long time for the VM to unload. No idea where to report this one.

In general, the product looks interesting. Getting it rolled out everywhere will take a lot of work. Plus, for some reason it isn't accepting my license code. But that's something that can be fixed with a call in to Novell.

Labels: , ,


Tuesday, November 21, 2006

Virtual Machines are not a security barrier

Several of the sessions I attended at BrainShare this year were on AppArmor. The project lead for that product presented several times, and several times he repeated this mantra: a virtual machine is not a security barrier. This is true for full-virtualization products such as VMWare, and for paravirtualization such as Xen.

Yesterday's SANS diary had an entry about VM detection on the part of malware. As you can well imagine, spyware and adware researchers make great use of products like VMWare to analyze their prey. VMWare has handy snapshotting abilities, which makes reverting your VM to a pre-infection state easy. Unfortunately for the researchers, "3 out of 12 malware specimens recently captured in our honeypot refused to run in VMware." The bad-ware authors are also aware of this and program their stuff not to run.

What's more insidious is that there are cases where the malware doesn't use the VMware detection to not run, but to infect the HOST machine instead. While this may not affect something like ESX Server, which is a custom OS, other products like Xen in full-virtualization mode or VMWare Server running on Windows or Linux would be exposed this way. Figuring out that your malware process is running in a virtual machine is easy and scriptable, and breaking out of the VM is just as scriptable.

Virtual Machines are not a security barrier, nor do they make you significantly safer. They're just different.

Labels: , , ,

