Both the EVA4400 and the EVA6100 parts arrived late Wednesday. I got the EVA4400 partially unboxed that evening, and finished it up Thursday. I also got CommandView upgraded so we could manage the EVA4400, and in doing so lost licensing for the EVA3000. The 10/26 expiry date for that license is no problem, as the EVA3000 will become an EVA6100 well before then. Next weekend, if the stars align right.
And today we schlepped the whole EVA4400 to the Bond Hall datacenter.
And now I'm pounding the crap out of it to make sure it won't melt under the load we intend to put on it. These are FATA disks, which we've never used before, so we need to figure out how they behave. We're not as concerned about the 6100, since that has FC disks, and those have been serving us just fine for years.
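For the pounding itself, nothing fancy: mostly sustained, large sequential writes to see what the FATA spindles do once they get busy. A minimal sketch of the sort of thing I'm running, with a made-up mount point and illustrative sizes:

```python
import os
import time

# Hypothetical mount point for a test LUN on the EVA4400.
TEST_PATH = "/mnt/eva4400-test/stress.dat"
CHUNK_BYTES = 1024 * 1024                # write in 1MB chunks
TOTAL_BYTES = 8 * 1024 * 1024 * 1024     # 8GB per pass, well past any cache

buf = os.urandom(CHUNK_BYTES)
start = time.time()
written = 0
with open(TEST_PATH, "wb") as f:
    while written < TOTAL_BYTES:
        f.write(buf)
        written += CHUNK_BYTES
    f.flush()
    os.fsync(f.fileno())                 # make sure it actually hit the array
elapsed = time.time() - start
print("%.1f MB/s sustained write" % (written / elapsed / (1024 * 1024)))
```

Several of those running in parallel, against several LUNs at once, gets the controllers nice and warm.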
Also on the testing list, making sure MPIO works the way we expect it to.
The EVAs are scheduled to be delivered today! This means that we are very probably going to be taking almost every IT system we have down starting late Friday 9/5 and going until we're done. We have a meeting in a few minutes to talk strategy.
There was some fear that the gear wouldn't get here in time for the 9/5 window. The 9/12 window has one of the key, key people needed to handle the migration in Las Vegas for VMWorld, and he won't be back until 9/21, which also screws with the 9/19 window. The 9/19 window is our last choice anyway, since that weekend is move-in weekend and the outage would be vastly more noticeable with students around. Being able to make the 9/5 window is great! We need these so badly that if the gear hadn't arrived in time, we'd probably have done it 9/12 even without said key player.
The one hitch is if HP can't do 9/5-6 for some reason. Fret. Fret.
It would seem, though I've yet to track down definitive proof of this, that Windows Server 2008 Clustering still has the Basic Partitioning dependency in it. This limits Windows LUNs to 2TB, among other annoyances, such as the fact that resizing one of those puppies requires a full copy onto a larger LUN rather than extending the one you already have. How... 1999.
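For reference, the 2TB ceiling falls straight out of the MBR format that Basic disks use: partition sizes are stored as 32-bit sector counts, and at 512 bytes per sector the math works out like so:

```python
# MBR partition tables store sizes as 32-bit sector counts.
max_sectors = 2 ** 32
sector_bytes = 512
limit = max_sectors * sector_bytes
print(limit)                 # 2199023255552 bytes
print(limit / 1024.0 ** 4)   # 2.0 TiB
```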
The question has been raised internally that perhaps we need to reassess our email message-size limits. When we set the current limit, we set it to the apparent de facto standard for mail size limits, which is about 10MB.
This, perhaps, is not what it should be for an institution of higher ed where research is performed. We have researchers on campus who routinely work with datasets larger than 10MB, sometimes significantly larger, and who would like to distribute these datasets electronically to other researchers; the easiest means of doing that, by far, is email. The primary reason we run mail servers for domains like chem.wwu.edu is to give these folks much larger message-size limits. Otherwise, they'd have their primary email in Exchange.
The routine answer I've heard for handling really large files is to use "alternate means" to send them. We don't have an FTP server for staff use, since we have a policy that forbids protocols that transmit passwords in the clear. We could do something like Novell does with ftp.novell.com/incoming and create a drop-box that anyone with a WWU account can read, but that's sort of a blunt-force solution and by definition only half of a half-duplex method. Our researchers would like a full-duplex method, and email represents that.
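One wrinkle worth remembering if we do raise the limit: attachments get base64-encoded in transit, so a 10MB limit doesn't actually pass a 10MB dataset. A quick back-of-the-envelope check (sizes illustrative; MIME line-breaks add a little more on top):

```python
import base64

attachment = b"x" * (10 * 1024 * 1024)   # a hypothetical 10MB dataset
on_the_wire = len(base64.b64encode(attachment))
print(on_the_wire / (1024.0 * 1024))     # ~13.3MB after encoding
```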
So what are you all using for email size limits? Do you have any 'out of band' methods (other than snail mail) for handling larger data sizes?
There are some workloads that fit well within a VM of any kind, and others that are very tricky. Fileservers are one workload that makes a poor VM candidate. In some cases they qualify as highly transactional. In others, the memory required to do fileserving well makes them very expensive. When you can fit 40 web servers on a VM host but only 4 fileservers, the calculus is obvious.
This is on my mind since we're running into memory problems on our NetWare cluster. We've just plain outgrown the 32-bit memory space for file-cache. NetWare can use memory above the 4GB line (it does have PAE support), but memory access up there is markedly slower than it is below the line. Last I heard, the conventional wisdom is that 12GB is about the point where the extra cache starts earning you performance gains again. Eek!
So I'm looking forward to 64-bit memory spaces and OES2. 6GB should do us for a few years. That said, 6GB of actually-used RAM in a virtual host means that I could fit... two of them on a VM server with 16GB of RAM.
16GB of RAM in, say, an ESX cluster is enough to host 10 other servers, especially with memory deduplication. In the case of my hypothetical 6GB file-servers, though, 5.5GB of that RAM will be consumed by file-cache unique to that server, so memory de-dup gains you very little.
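Back-of-the-envelope, with my assumptions made explicit (the de-dup percentages here are guesses for illustration, not measurements):

```python
HOST_RAM_MB = 16 * 1024

# Hypothetical file-server guest: 6GB footprint, 5.5GB of it file-cache
# unique to the guest. Assume de-dup reclaims half of the other 0.5GB.
fileserver_mb = 5632 + 256            # ~5.75GB effective per guest
print(HOST_RAM_MB // fileserver_mb)   # -> 2 file-servers per host

# Hypothetical general-purpose guest: 2GB footprint; assume de-dup
# reclaims 20% of it (shared OS pages, common binaries).
generic_mb = int(2048 * 0.8)          # ~1.6GB effective per guest
print(HOST_RAM_MB // generic_mb)      # -> 10 guests per host
```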
In the end, how well a fileserver fits in a VM environment comes down to how large a 'working set' your users have. If the working set is large enough, the gains from virtualization can be small. However, I realize fileserving on the scale we do it is somewhat rare, so for departmental fileservers VM can be a good-sized win. As always, know your environment.
In light of the budgetary woes we'll be having, I don't know what we'll do. Last I heard, the State is projected to have a $2.7 billion deficit for the 2009-2011 budget cycle (the fiscal year starts July 1). So it may very well be that the only way I'll get access to 64-bit memory spaces is in an ESX context. That may mean a 6-node cluster on 3 physical hosts. And that's assuming I can get new hardware at all. If it gets bad enough, I'll have to limp along until 2010 and play partitioning games to load-balance my data-loads across all 6 nodes. By 2011 all of our older hardware falls off cheap maintenance and will have to be replaced, so worst case that's when I can do my migration to 64-bit. Arg.
I just reported a bug in the beta that surprised me. I can't talk details about it, but it strikes me as the kind of bug that should have been reported shortly after the client released. Perhaps the client was just so buggy overall that this one got lost in the forest, but still. The Vista client has been out for some time now.
Having said the following rant several times over the past few days, I figure it's time to post it ;).
The problem we're running into is that the users of the Vista Client are a small, small subset of the overall users of the Novell Client, who are by now a minority of the overall users of Novell NCP file-servers. Novell spent years hyping 'clientless' approaches to file-serving through the CIFS stack on NetWare, and a lot of places bought into that. Because of this, the percentage of NCP-client Vista users in the overall Novell file-server market is rather small.
And small means you don't get a lot of testing done by people-who-are-not-us, which is how seemingly obvious bugs survive into the beta SP1 builds. I don't have any Vista workstations, so I've done exactly zero testing of the Vista Client; this particular bug was reported and troubleshot by someone who is not me (I just filed it). Even though we have beta builds of the Vista client as part of this beta, I'm not testing it. All things considered, I probably should.
Since we're wedded hard to the Novell Client, it's probably time for us to start devoting resources to the ecosystem in order to keep it alive.