Recently in hp Category

HP has been transitioning away from the cciss Linux kernel driver for a while now, but there hasn't been much information about what it all means. On the name alone the module needed a rename (one possible expansion of cciss: Compaq Command Interface for SCSI-3 Support), and it's a driver that has been in the Linux ecosystem a really long time (since at least the 2.2 kernel era). A lot has changed in the kernel since then.

HP has finally released a PDF describing the whole cciss vs. hpsa thing.

Read it here: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02677069/c02677069.pdf

The key differences:
  • HPSA is a SCSI driver, not a block driver like CCISS
  • This means the devices move from /dev/cciss/cXdYpZ names to standard /dev/sdX names
  • Device node (major/minor) numbers will change
  • Kernel names are handed out in discovery order, so adding a controller can change which disk ends up as /dev/sda and which as /dev/sdb. Use persistent udev names (disk ID, partition UUID, that kind of thing) to avoid pain; there's a quick sketch of what I mean after this list.
  • For newer kernels (2.6.36+), cciss and hpsa can load at the same time if the system contains hardware that needs both drivers.
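To illustrate the udev-name point: the by-id symlinks are stable across reboots and controller shuffles, so mount by those instead of the raw kernel name. The disk ID and mount point below are made up for illustration:

ls -l /dev/disk/by-id/
# shows symlinks like (ID invented):
# scsi-3600508b1001c0ffee1234567890abcde -> ../../sda
# scsi-3600508b1001c0ffee1234567890abcde-part1 -> ../../sda1

# /etc/fstab entry using the persistent name instead of /dev/sda1:
/dev/disk/by-id/scsi-3600508b1001c0ffee1234567890abcde-part1  /data  ext3  defaults  0 2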

Data Protector deduplication

When last I looked at HP Data Protector deduplication, I was not impressed. The client-side requirements were a resource-hungry joke, and the whole thing was seriously compromised by Microsoft failover clusters.

I found a use-case for it last week. We have a server in a remote office across a slow WAN link. Backing that up has always been a trial, but I had the right resources to try and use dedupe and at least get the data off-site.

Sometime between then and now (v6.11) HP changed things.

The 'enhincrdb' directory I railed against is empty. Having just finished a full and an incremental backup, I see that the amount of data transferred over the wire is exactly the same for both, but the amount of data actually stashed away is markedly different. Apparently the processing to figure out what needs to be in the backup has been moved from the client to the backup server, which makes this useless over slow links.

It means that enhanced incremental backups will take just as long as the fulls do, and we don't have time for that on our larger servers. We're still going to stick with an old-fashioned Full/Incremental rotation.

It's an improvement in that it doesn't require such horrible client-side resources. However, this implementation still has enough quirks that I just do not see the use-case.

Reduced packaging in IT

I've talked about this before, and I'm sure I'll do it again. We do need to reduce some of the excessive packaging on the things we get. I can completely understand the need to swaddle a $57,000 storage controller in enough packaging to survive a 3 meter drop. What I don't understand is shipping the 24 hard drives that go with that storage controller in individual boxes. It wouldn't take much engineering to come up with a 6-pack foam holder for hard-drives. It would seriously reduce bulk, which makes it easier and cheaper to ship, and there is less material used in the whole process. But I guess that extra SKU is too much effort.

Today I turned this:
[image: HP-BoxesA.jpg]

Into this:

[image: HP-BoxesB.jpg]

The big box at the top of the stack contained 24 individual hard-drive boxes. Each box had:
  • 1 hard-drive.
  • 1 anti-static bag requiring a knife to open.
  • 2 foam end-pieces to hold the drive in place in the box.
  • 1 piece of paper of some kind, white.
  • 1 cardboard box, requiring a knife to open.
When I was done slotting all of those in, I had a large pile of cardboard boxes, a big jumble of green foam bits, a slippery pile of anti-static bags, and a neat pile of paper. The paper and cardboard can easily be recycled. The anti-static bags and foam bits... not so much. Although the foam bits were marked type 4 plastic (LDPE), which means they're at least theoretically recyclable, right?

Right?

I'd still like to use less of it.

Tape is dead, Long live Disk!

Except if you're using HP Data Protector.

Much as I'd like to jump on the backup-to-disk de-dup bandwagon, I can't. Can't afford it. It all comes down to the cost-per-GB of storage in the backup system.

With tape, Data Protector charges licenses for the following items:
  • Per tape-drive over 2
  • Per tape library with a capacity between 50 and 250 slots
  • Per tape library that exceeds 250 slots
  • Per media pool with more than some-big-number of tapes
With disk, DP charges licenses for the following:
  • Per TB in the backup-to-disk system
Obviously, the Disk side is much easier to license. In our environment we had something like 500 SDLT320 tapes, and our library had 6 drives and 45 slots. We only had to license the 4 extra tape drives.

Then our library started crapping out, and we outgrew it anyway. Prime time to figure out what the future holds for our backup environment. TO DISK!

HOLY CRAP that's expensive.

HP licenses their B2D space by the terabyte. After you do the math it comes down to about $5/GB. Without de-duplication, a normal retention rotation easily keeps 10 or more copies of every bit of data subject to backup. Which means that for every 1GB of data on primary storage, 10GB sits in the B2D system, and that works out to a whopping $50 per primary GB. So... about that de-duplication system...

Too bad it doesn't work for non-file data, and kinda sorta explicitly doesn't work for clustered systems. Since 70% or so of our backup data is sourced from clustered file-servers or is non-file data (Exchange, SQL backups), this means the gains from HP's de-dup technology are pretty minor. Looks like we're stuck doing standard backups at $50/GB (or more).

So, about that 'dead' tape technology! We've already shelled out for the tape-drive licenses, so that's a sunk cost. The library we want doesn't have enough slots to force us into that license. All that's left is the media cost. Math math math, and the amortized cost of the entire library and media set comes to about $0.25/GB. Niiice. Factor in the same 10x multiplier and each 1GB of primary data costs about $2.50 to protect, a far, far cry from $50.

We still keep SOME backup-to-disk space. This is needed since these LTO4 drives are HUNGRY critters, and the only way to feed them fast enough to prevent shoe-shining is to back everything up to disk, then copy the jobs to tape directly from disk. So long as we have a week's worth of free space, we're good. This is a sunk cost too, happily.

So. To-disk backups may be the greatest thing since the invention of the tape-changing robot, but our software isn't letting us take advantage of it. 

One of the increasingly annoying things IT shops have to put up with is web-based administration portals that use self-signed SSL certificates. Browsers are getting louder about this setup, and for good reason. Which is why I try to get these pages signed with a real certificate whenever the product allows it.

HP's Command View EVA administration portal annoyingly overwrites the custom SSL files when it does an upgrade. So you'll have to do this every time you apply a patch or otherwise update your CV install.
  1. Generate an SSL certificate with the correct data.
  2. Extract the certificate into base-64 form (a.k.a. PEM format) as separate 'certificate' and 'private key' files (one way to do this with openssl is sketched below).
  3. On your Command View server, overwrite the %ProgramFiles%\Hewlett-Packard\sanworks\Element Manager for StorageWorks HSV\server.cert file with the 'certificate' file.
  4. Overwrite the %ProgramFiles%\Hewlett-Packard\sanworks\Element Manager for StorageWorks HSV\server.pkey file with the 'private key' file.
  5. Restart the CommandView service.
At that point, CV should be using your generated certificates. Keep these copied somewhere else on the server so you can quickly copy them back in when you update Command View.
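For step 2, assuming your CA handed the signed certificate back as a PKCS#12 (.pfx) bundle (the filename below is made up), openssl can split it into the two PEM files Command View wants:

openssl pkcs12 -in commandview.pfx -clcerts -nokeys -out server.cert
openssl pkcs12 -in commandview.pfx -nocerts -nodes -out server.pkey

You may need to trim the "Bag Attributes" lines openssl puts at the top of each file so only the BEGIN/END blocks remain; I haven't tested how picky Command View is about extra text in those files.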

HP Data Protector has a client for NetWare (and for OES2, but I'm not backing up any of those yet). It's proving to take a bit of TSA tuning to get working right. I haven't figured out exactly where the problem is, but I've worked around it.

The following settings are what I've got running right now, and they seem to work. I may tweak them later:

tsafs /readthreadsperjob=1
tsafs /readaheadthrottle=1

This seems to get around a contention issue I'm seeing with more aggressive settings, where TSAFS memory use will grow to the maximum allowed by the /cachememorythreshold setting and then sit there, not passing data to the DP client. That makes backups run really long. The above settings somehow prevent it from happening.

If these prove stable, I may bump up the readaheadthrottle setting and see whether the stall comes back. This is an EVA6100 after all, so I should be able to go to at least 18, if not 32, for that setting.

High availability

64-bit OES provides some options for highly available file serving. Now that we've split the non-file services out of the main 6-node cluster, all that cluster is doing is NCP and a few other trivial things. What could we do with this if we got a pile of money to do whatever we want?

Disclaimer: Due to the budget crisis, it is very possible we will not be able to replace the cluster nodes when they turn 5 years old. It may be easier to justify eating the greatly increased support expenses. Won't know until we try and replace them. This is a pure fantasy exercise as a result.

The stats of the 6-node cluster are impressive:
  • 12 P4 cores, with an average of 3GHz per core (36GHz).
  • A total of 24GB of RAM
  • About 7TB of active data
The interesting thing is that you can get a similar server these days:
  • HP ProLiant DL580 (4 CPU sockets)
  • 4x Quad Core Xeon E7330 Processors (2.40GHz per core, 38.4GHz total)
  • 24 GB of RAM
  • The usual trimmings
  • Total cost: No more than $16,547 for us
With OES2 running in 64-bit mode, this monolithic server could handle what six 32-bit nodes are handling right now. The above is just a server that matches the stats of the existing cluster; if I were really replacing the 6-node cluster with a single device I would make a few changes, such as moving to 32GB of RAM at minimum and using a 2-socket server instead of a 4-socket one. Eight cores should be plenty for a pure file-server this size.

A single server does have a few things to recommend it. By doing away with the cluster's virtual servers, all of the NCP volumes would be hosted by the same server. Right now each virtual-server/volume pair costs every user a separate connection, so if I fail all of the volumes over to one cluster node, that node legitimately ends up with on the order of 15,000 concurrent connections. Host all of the volumes on a single real server instead and the concurrent connection count drops to only ~2,500.

Doing that would also make one of the chief annoyances of the Novell Client for Vista much less annoying. Due to name-cache expiration, if you don't look at Windows Explorer or a file dialog in the Vista client once every 10 minutes, it takes a freaking long time to open that window when you do. This is because the Vista client has to re-enumerate and resolve the address of each mapped drive. Because of our cluster, each user gets no less than 6 drive mappings to 6 different virtual servers. Since it takes Vista 30-60 seconds per NCP mapping to figure out the address (it has to try the Windows resolution methods before falling back to the Novell ones, and unlike WinXP there is no way to reverse that order), this means a 3-5 minute pause before Windows Explorer opens.

By putting all of our volumes on the same server, it'd only pause 30-60 seconds. Still not great, but far better.

However, putting everything on a single server is not what you'd call "highly available". OES2 is a lot more stable now, but it still isn't at the legendary stability of NetWare 3. Heck, NetWare 6.5 isn't at that legendary stability either. Rebooting for patches takes everything down for minutes at a time. Not viable.

With a server this beefy it is quite doable to build a cluster-in-a-box by way of Xen. Lay a base of SLES10 SP2 on it, run the Xen kernel, and create four VMs as NCS cluster nodes. Give each 64-bit VM 7.75GB of RAM for file caching, and bam! Cluster-in-a-box, and highly available.
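Purely as a sketch, the domU definition for one of those four nodes would look something like this. The names, LVM volumes, and bridge are invented, and on SLES10 the real config would come out of YaST/vm-install rather than being hand-written:

name   = "ncs-node1"
memory = 7936        # 7.75GB; four of these fit in 32GB with a little left for dom0
vcpus  = 2
disk   = [ 'phy:/dev/system/ncs-node1-root,xvda,w',
           'phy:/dev/mpath/shared-nss-lun,xvdb,w!' ]   # w! = shared-writable, for the cluster's shared storage
vif    = [ 'bridge=br0' ]
bootloader = "/usr/bin/pygrub"

Each VM then runs 64-bit OES2 with Novell Cluster Services, just like a physical cluster node would.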

However, this is a pure fantasy solution, so chances are real good that if we had the money we would use VMware ESX instead of Xen as the hypervisor. The advantage there is that we don't have to keep the VM and host kernel versions in lock-step, which reduces downtime. There would be some performance degradation, and clock skew would be a problem, but at least uptime would be good; no need to perform a CLUSTER DOWN when updating kernels.

Best case, we'd have two physical boxes so we can patch the VM host without having to take every VM down.

But I still find it quite interesting that I could theoretically buy a single server with the same horsepower as the six servers driving our cluster right now.

More on DataProtector 6.10

We've had DP6.10 installed for several weeks now and have some experience with it. Yesterday I configured a Linux Installation Server so I can push agents out to Linux hosts without having to go through the truly horrendous install process that DP6.00 forced you to do when not using an Installation Server. This process taught me that DataProtector grew up in the land of UNIX, not Linux.

One of the new features of DP6.10 is a method for pushing backup agents to Linux/HP-UX/Solaris hosts over SSH. This is very civilized of them. It uses public-key authentication and the keychain tool to make it workable.
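I didn't write down the exact sequence from the install guide, but the general shape is plain old SSH public-key setup plus keychain on the Cell Manager, something like this (the client hostname is made up):

ssh-keygen -t rsa
ssh-copy-id root@linuxclient.example.com
eval `keychain --eval --agents ssh id_rsa`

After that, the agent push from the Cell Manager rides over the key-authenticated SSH session instead of rsh.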

The DP6.00 method involved tools that make me cringe, like rlogin/rsh. These are just like telnet in that the username and password are transmitted over the wire in the clear. For several years now we've had a policy in place that protocols requiring cleartext transmission of credentials like this are not to be used. We are not alone in this. I am very happy HP managed to bring DP up to a 21st-century security posture.

Last Friday we also pointed DP at one of our larger volumes on the 6-node cluster. Backup rates from that volume blew our socks off! It pulled data at about 1600MB/Minute (a hair under 27MB/Second). For comparison, SDLT320's native transfer rate (the drive we have in our tape library, which DP isn't attached to yet) is 16MB/Second. Even with the 1.2:1 to 1.4:1 compression ratios typical of this sort of file data, that backup stream comes in faster than the tape could write it.

The old backup software didn't even come close to these speeds, typically running in the 400MB/Min range (7MB/Sec). The difference is that the old software is using straight up TSA, where DP is using an agent. This is the difference an agent makes!

DataProtector 6.00 vs 6.10

A new version of HP DataProtector is out. One of the nicest new features is that they've greatly optimized the object/session copy speeds.

No matter what you do for a copy, DataProtector will have to read all of one Disk Media (50GB by default) to do the copy. So if you multiplex 6 backups into one Disk Writer device, it'll have to look through the entire media for the slices it needs. If you're doing a session copy, it'll copy the whole session. But object copies have to be demuxed.

DP6.00 did not handle this well. Consistently, each Data Reader device consumed 100% of one CPU for a speed of about 300 MB/Minute. This blows serious chunks, and is completely unworkable for any data-migration policy framework that takes the initial backup to disk, then spools the backup to tape during daytime hours.

DP6.10 does this a lot better. CPU usage is much lower; it no longer pegs one CPU at 100%. Copy throughput now runs between 10-40% of GigE speeds (750 to 3,000 MB/Minute), which is vastly more reasonable. DP6.10, unlike DP6.00, can actually be used for data migration policies.

The price of storage

I've had cause to do the math lately, which I'll spare you :). But as of the best numbers I have, the cost of 1GB of space on the EVA6100 is about $16.22. Probably more, since this 6100 was created out of the carcass of an EVA3000, and I don't know what percentage of parts from the old 3000 are still in the 6100 and thus can't apportion the costs right.

For the EVA4400, which we have filled with FATA drives, the cost is about $3.03/GB.

Suddenly, the case for Dynamic Storage Technology (formerly known as Shadow Volumes) in OES can be made in economic terms. Yowza.

The above numbers do not include backup rotation costs. Those costs can vary from $3/GB to $15/GB depending on what you're doing with the data in the backup rotation.

Why is the cost of the EVA6100 so much greater than the EVA4400?
  1. The EVA6100 uses 10K RPM 300GB FibreChannel disks, whereas the EVA4400 uses 7.2K RPM 1TB (or is it 450GB?) FATA drives. The cost-per-gig on FC is vastly higher than it is on fibre-ATA.
  2. Most of the drives in the EVA6100 were purchased back when 300GB FC drives cost over $2000 each.
  3. The EVA6100 controller and cabinets just plain cost more than the EVA4400, because it can expand farther.
To put it into a bit of perspective, let's take the example of a 1TB volume of "unorganized file data", the seemingly official term for "file-server". If you place that 1TB of data on the EVA6100, it consumes $16,609.28 worth of storage. So what if 70% of that data hasn't been modified in a year (not unreasonable), and is put on the EVA4400 instead? You'd have 307GB on the 6100 and 717GB on the 4400, and at the per-GB prices above the storage cost drops to roughly $7,150. That's real money.