MSA, mirroring, and the NetWare cluster

Mirroring WUF is probably the easiest thing we'll do once we get the fibre interconnect between the local SAN and the BCC SAN. Setting up software RAID devices in NetWare is fairly simple, and the mirrored devices will be integrated into the cluster immediately. I used a method like this to migrate the SOFTWARE volume from direct-attached storage on FACSRV2 onto the SAN.

That said, there are some design considerations to take into account. SAN best-practices documents from HP, Novell, and Microsoft all say it is better to create many small LUNs than a few big ones; this is the practice we use in the Exchange cluster. The reasoning is that the operating system can queue I/O operations across many LUNs rather than stacking them all up behind a few, which ultimately makes I/O on the SAN device more efficient. I propose that we follow this practice when partitioning out the MSA.

The LUNs we create on the MSA for WUF will have a 64K stripe size, which is the stripe size that best supports file-server loads. For comparison, the stripe size in the EVA is an unmodifiable 128K.

MSA guidelines strongly recommend against RAID5 arrays larger than 14 drives, which limits us to drive arrays of 6.5TB or smaller. Each drive array we create costs us one drive for parity, and I'd also like to designate one drive per shelf as a hot-spare. That leaves us with 22 drives for use as storage.

Right now WUF has just over 6TB allocated to it, which is nearly the maximum for a single drive array.
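To make the numbers concrete, here's a rough capacity sketch. The ~500GB drive size is my assumption, inferred from the 6.5TB ceiling on a 14-drive RAID5 array (13 data drives after parity); the 22-drive figure is the storage-drive count from above, after parity and hot-spares are set aside.

```python
# Hypothetical capacity check for the MSA layout described above.
# DRIVE_TB (~500 GB per drive) is an assumption, not a confirmed spec.
DRIVE_TB = 0.5

def raid5_usable_tb(n_drives, drive_tb=DRIVE_TB):
    # RAID5 spends one drive's worth of capacity on parity
    return (n_drives - 1) * drive_tb

max_array = raid5_usable_tb(14)   # largest recommended array: 6.5 TB
storage = 22 * DRIVE_TB          # 22 storage drives left after parity/spares
headroom = max_array - 6.0       # WUF currently has just over 6 TB allocated

print(max_array, storage, round(headroom, 1))  # → 6.5 11.0 0.5
```

Under those assumptions, WUF's current 6TB allocation leaves only about half a terabyte of headroom in a single maximum-size array, which is why the array-size ceiling matters here.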
