Recently in shadowvolumes Category

The price of storage

I've had cause to do the math lately, which I'll spare you :). But as of the best numbers I have, the cost of 1GB of space on the EVA6100 is about $16.22. Probably more, since this 6100 was created out of the carcass of an EVA3000, and I don't know what percentage of parts from the old 3000 are still in the 6100, so I can't apportion the costs right.

For the EVA4400, which we have filled with FATA drives, the cost is $3.03.

Suddenly, the case for Dynamic Storage Technology (formerly known as Shadow Volumes) in OES can be made in economic terms. Yowza.

The above numbers do not include backup rotation costs. Those costs can vary from $3/GB to $15/GB depending on what you're doing with the data in the backup rotation.

Why is the cost of the EVA6100 so much greater than the EVA4400?
  1. The EVA6100 uses 10K RPM 300GB FibreChannel disks, where the EVA4400 uses 7.2K RPM 1TB (or is it 450GB?) FATA drives. The cost-per-gig on FC is vastly higher than it is on fibre-ATA.
  2. Most of the drives in the EVA6100 were purchased back when 300GB FC drives cost over $2000 each.
  3. The EVA6100 controller and cabinets just plain cost more than the EVA4400, because it can expand farther.
To put it into a bit of perspective, let's take the example of a 1TB volume of "unorganized file data", the seemingly official term for "file-server". If you place that 1TB of data on the EVA6100, that data consumes $16,609.28 worth of storage. So what if 70% of that data hasn't been modified in a year (not unreasonable), and is put on the EVA4400 instead? You'd have 307GB on the 6100 and 717GB on the 4400. Your storage cost now drops to $5,909.75. That's real money.
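A quick way to sanity-check that kind of math is to plug the per-GB figures into a small cost function. This is strictly a back-of-the-envelope sketch; the function name and the 70/30 split are illustrative, and real totals depend on exactly which gigabytes land on which array:

```python
def tiered_cost(fast_gb, slow_gb, fast_per_gb=16.22, slow_per_gb=3.03):
    """Total cost of a volume split across a fast tier (EVA6100 pricing)
    and a slow tier (EVA4400 pricing), using the per-GB figures above."""
    return fast_gb * fast_per_gb + slow_gb * slow_per_gb

# 1 TB (1024 GB) entirely on the EVA6100 -- matches the $16,609.28 above
all_fast = tiered_cost(1024, 0)

# Same volume with 70% of the data demoted to the EVA4400
tiered = tiered_cost(307, 717)
savings = all_fast - tiered
```

Swapping in your own per-GB numbers (including backup-rotation costs) makes the break-even point for a DST deployment easy to argue to the people holding the budget.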

Disk-space over time

I've mentioned before that I do SNMP-based queries against NetWare and drop the resulting disk-usage data into a database. The current incarnation of this database went in August of 2004, so I have just over 4 years of data in it now. You can see some real trends in how we manage data in the charts.
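The collection side of this is simple. Here's a minimal sketch of the poll-and-store loop, with a stand-in `poll_volume()` where the real SNMP query would go; the SQLite schema, server names, and function names are illustrative, not my actual setup:

```python
import sqlite3
from datetime import date

def poll_volume(server, volume):
    """Stand-in for the real SNMP query against the server's disk-usage
    statistics. Returns one usage sample (hypothetical fixed value here)."""
    return {"server": server, "volume": volume, "used_kb": 123_456_789}

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE disk_usage (
    sampled TEXT, server TEXT, volume TEXT, used_kb INTEGER)""")

# Poll each student home-directory volume and record today's sample
for vol in ("STU1", "STU2", "STU3"):
    row = poll_volume("netware1", vol)
    db.execute("INSERT INTO disk_usage VALUES (?,?,?,?)",
               (date.today().isoformat(), row["server"],
                row["volume"], row["used_kb"]))
db.commit()

# Trend charts like the ones in this post then fall out of simple SQL
trend = db.execute("""SELECT sampled, SUM(used_kb) FROM disk_usage
                      GROUP BY sampled ORDER BY sampled""").fetchall()
```

Run that from cron once a day for four years and you get charts like the ones below.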

To show you what I'm talking about, I'm going to post a chart based on the student-home-directory data. We have three home-directory volumes for students, with between 7000-8000 home directories on each. We load-balance by number of directories rather than by size. The chart:

Chart showing student home directory disk space usage, carved up by quarter.

As you can see, I've marked up our quarters. Winter/Spring is one segment on this chart since Spring Break is hard to isolate at these scales. We JUST started Winter 2008, so the last dot on the chart is data from this week. If you squint (or zoom in, like I can) you can see that last dot is elevated from the dot before it, reflecting this week's classes.

There are several sudden jumps on the chart. Fall 2005. Spring 2005. Spring 2007 was a big one. Fall 2007 just as large. These reflect the student delete process. Once a student hasn't been registered for classes for a specified period of time (I don't know what it is offhand, but I think 2 terms), their account goes on the 'ineligible' list and gets purged. We do the purge once a quarter except for Summer. The Fall purge is generally the biggest in terms of numbers, but not always. Sometimes the number of students purged is so small it doesn't show on this chart.

We do get some growth over the summer, which is to be expected. The only time when classes are not in session is generally from the last half of August to the first half of September. Our printing volumes are also w-a-y down during that time.

Because the Winter purge is so tiny, Winter quarter tends to see the biggest net-gain in used disk-space. Fall quarter's net-gain sometimes comes out a wash due to the size of that purge. Yet if you look at the slopes of the lines for Fall, correcting for the purge of course, you see it matches Winter/Spring.

Somewhere in here, and I can't remember where, we increased the default student directory-quota from 200MB to 500MB. We've found Directory Quotas to be a much better method of managing student directory sizes than User Quotas. If I remember my architectures right, directory quotas are only possible because of how NSS is designed.

If you take a look at the "Last Modified Times" chart in the Volume Inventory for one of the student home-directory volumes you get another interesting picture:
Chart showing the Last Modified Times for one student volume.
We have a big whack of data aged 12 months or newer. That said, we have non-trivial amounts of data aged 12 months or older. This represents where we'd get big savings when we move to OES2 and can use Dynamic Storage Technology (formerly known as 'shadowvolumes'). Because these are students, and students only stick around for so long, we don't have a lot of stuff in the "older than 2 years" column, which is very much present on the Faculty/Staff volumes.

Being the 'slow, cheap' storage device is a role well suited to the MSA1500 that has been plaguing me. If for some reason we fail to scare up funding to replace our EVA3000 with another EVA less filled-to-capacity, this could buy a couple of years of life on the EVA3000. Unfortunately, we can't go to OES2 until Novell ships an eDirectory-enabled AFP server for Linux, currently scheduled for late 2008 at the earliest.

Anyway, here is some insight into some of our storage challenges! Hope it has been interesting.
Two days ago Novell posted an AppNote on Dynamic Storage Technology, formerly known as 'shadow volumes'.

Setting up Dynamic Storage Technology with Open Enterprise Server 2

One thing I noticed right at the top of the article is a little blurb that reads:
This article was written for Novell Open Enterprise Server 2. Sign up here to be notified when the Novell Open Enterprise Server 2 open beta becomes available.
Which tells me that the public beta is probably pretty near, and that the OES2 release will probably not be "end of Q3" like Jason Williams indicated a while back. I could be wrong, of course. As soon as I get the public beta code there is some serious testing I need to do.

Anyway, back to the article. This is a click-by-click guide for setting up DST, with screenshots of the new iManager 2.7. Unsurprisingly, Novell has re-themed the iManager interface. There is a gotcha on step 17, where you have to edit a local config file on the OES server to get it going; that one would probably trip up most people trying to set up DST solely from the UI.

This is a very good article describing it all. I recommend it!
I had me an idea yesterday. One of those ideas that I'm not sure is a good one, but wow does it make a certain kind of sense.

We, like all too many schools, run Blackboard as the groupware product supporting our classrooms. There is an open-source product out there that can also do this, but we're not running it. That's not what this post is about.

First, a wee bit of architecture. Roughly speaking, Blackboard is separated into three bits: the web server, the content server, and the database. The web server is the classic Application Server that students and teachers interface with. The web server then talks with both the content server and the database server. The content server is the ultimate home of everything like turned-in homework. The database server glues this all together.

Due to policies, we have to keep courses in Blackboard for a certain number of quarters just in case a student challenges a grade. They may not be available to everyone, but those courses are still in the system. And so is all of the homework and assorted files associated with that class. Because of this, it is not unusual for us to have 2 years (6-7 quarters) of classes living on the content server, of which all but one quarter is essentially dead storage.

One of the problems we've had is that when it comes time to actually delete a course, it doesn't always clean up the Content associated with that course. Quite annoying.

This is a case where Dynamic Storage Technology would be great. Right now our Blackboard Content servers are a pair of Windows servers in a Windows Cluster. It struck me yesterday that this function could be fulfilled by a pair of OES2 servers in a Novell Cluster Services setup (or Heartbeat, but I don't know how to set THAT up), using Samba and DST to manage the storage. That way, stuff accessed within the past, oh, 3 months would be on the fast EVA storage, and stuff older than 3 months would be exiled to the slow MSA storage. As the file-serving is done by way of web-servers rather than direct access, the performance hit from using Samba won't be noticeable, since the concurrency is well below the limit where that becomes a problem. Additionally, since all the files are owned by the same user, I could use a non-NSS filesystem for even faster performance.


The problem here is that OES2 isn't out yet. Such a fantastical idea may be doable in the 2008 intersession window, but we may have other upgrades to handle there. But still, it IS an interesting idea.

Dynamic Storage Technology

Novell Connection Magazine has an article up right now that describes DST, formerly known as Shadow Volumes. I've talked about them before, both last year around this time (6/15/07 and 6/26/07) and back at BrainShare (TUT205). So, I've been following this.

As said previously, this'll not work for NetWare, just OES-Linux. From what I understand you can host migration volumes on NetWare, but the server presenting the unified view of the storage has to be OES-linux.

Anyway, on with the article.
I've gone over this at some length in the past. But as with the previous posts, here are my notes from the session.

  • New fstype = shadowfs, provides a linux-level view of a shadow filesystem. By default, linux doesn't see the unified view. Useful for some backup apps, or things like web-servers.
  • File systems participating in DST need to be visible on the same OES server. Could be NFS-mounted, might possibly be NCP-mounted in the future. Not yet.
  • Migration policy can be set by user.
  • Migrations are batched, not done on-demand.
  • Can be used to silently migrate a volume to new hardware
    • Set new volume as Primary, and old as Shadow
    • As users hit data, it gets migrated to Primary from Shadow during nightly migrations.
    • Over time, most of a file-system can be migrated this way.
  • Directory quotas do NOT replicate over shadow. The shadow quota may be different from the Primary quota, and directory quotas are NOT shadow-aware. This is because directory quotas are a function of the file-system, and DST is a function of NCPserv and the client.
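That silent-migration trick can be modeled as a toy loop, with the new volume as Primary and the old one as Shadow. The names and the batch function below are purely illustrative; this is a sketch of the mechanism described in the notes, not Novell's code:

```python
# Everything starts on the old SAN (the Shadow side)
primary = {}                                  # files on the new hardware
shadow = {f"file{i}": i for i in range(5)}    # files still on the old SAN
touched = set()                               # accesses logged during the day

def access(name):
    """Serve a file from wherever it lives, recording the access."""
    touched.add(name)
    return primary.get(name, shadow.get(name))

def nightly_migration():
    """Batch job: move everything touched today from Shadow to Primary."""
    for name in touched & shadow.keys():
        primary[name] = shadow.pop(name)
    touched.clear()

access("file1")
access("file3")
nightly_migration()
# file1 and file3 are now on Primary; untouched files remain on Shadow
```

Run this for enough nights and the active working set ends up on the new hardware without any user-visible migration event, which is exactly the appeal.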
The last NOA before BrainShare: Dynamic Storage Technology

This is the official name for what has been known as Shadow Volumes. I've spoken about them many times in the past. I first heard about Shadow Volumes (now Dynamic Storage Technology) at BrainShare last year, but I didn't blog about it. Since then, there have been a few more posts.

June 15, 2006
June 26, 2006
September 13, 2006
November 30, 2006

Yeah, this is exciting stuff. The podcast had more details, here are my notes:

Jason Williams -- Product Manager for OES
  • OES2 will include Dynamic Storage Technology
  • "We recon about 80% of of that stuff [on very large NCP volumes] could be turned to stale. It's stuff that hasn't been touched in maybe 30 days or more"
  • "We have one customer out there with maybe 450 plus terabytes of data, and that's just the unstructured stuff. It doesn't even account for their databases."
  • Redirection to the shadow volume is done similar to DFS, with a pointer the client understands and then follows.
    • This avoids the migrate/demigrate problem for traditional HSM
  • This is linux-only. Not NetWare.
  • Works for NCP-clients right now, trying to get Samba working... not done yet.
  • Can set policies for what gets migrated: ModifyDate, AccessDate, FileType, etc.
  • Managed through NRM
  • Can do stacked policies, a global policy, and policies for specific volumes
  • Applies not just to NSS, but to ext3, reiser, xfs, and such.
  • Requires an exclusive lock on a file before it can be migrated.
  • This is a service on top of a file-system, not a feature of a file-system.
  • Monday morning keynote demo! Right there!
  • There will be a table in the Technology Lab
For more details, listen to the pod-cast.


Novell Open Audio had a podcast last week about Open Enterprise Server 2 (it's official, that's the name). It was quite long, and full of nice information. Probably the best bit was about Shadow Volumes, which I've mentioned before. That just keeps getting better and better! I highly recommend listening to the pod-cast.

I've known for a while that it allows policy based migration of data to older/slower/cheaper media. Unlike traditional HSM technologies, Shadow Volumes are based on the last-modified date rather than the last-accessed date. Also, policies can include file types as well, so you can migrate your large multi-media files to media that handles long contiguous reads better. Or just migrate files larger than 50MB to that faster media. Whatever.
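A policy check along those lines might look like this toy function. The thresholds, extension list, and function name are made up for illustration (real DST policies are configured through the management interface, not code):

```python
from datetime import datetime, timedelta

def should_shadow(mtime, name, size_bytes, *, now,
                  max_age=timedelta(days=365),
                  media_exts=(".avi", ".mpg", ".mp3"),
                  big_file=50 * 2**20):
    """Hypothetical DST-style policy: shadow a file if it is stale by
    last-modified date, is a large-media file type, or exceeds 50 MB."""
    return (now - mtime > max_age                 # stale by last-modified date
            or name.lower().endswith(media_exts)  # multi-media file type
            or size_bytes > big_file)             # bigger-than-50MB rule

now = datetime(2007, 6, 1)
stale = should_shadow(datetime(2005, 1, 1), "notes.txt", 10_000, now=now)
fresh = should_shadow(datetime(2007, 5, 1), "notes.txt", 10_000, now=now)
```

Note the use of last-modified rather than last-accessed time, which is the distinction from traditional HSM called out above.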

One scenario mentioned in the pod-cast was about data migrations of extremely large data. One Novell cluster mentioned in the pod-cast had 420TB of data in it. Ooo! Migrating THAT to a new SAN would take weeks. How it works is this:
  1. Set up the new server
  2. Configure the volume on the new server to use the old SAN as the migrate (i.e. slow) media
  3. Do the server migration itself
  4. As users modify data on the old SAN, it gets tagged for migration to the new SAN. In a week/month/whatever most of the active data is on the new SAN, and the older SAN gets less and less data.
  5. When the time comes to decommission the old SAN (assuming that's what you want to do) the total data migration is a lot easier.
Freaking cool.

Unfortunately, this is an OES2-Linux product only. It can use NetWare volumes as migration targets, but NetWare won't do the policy based decisions. Darn.

Also mentioned is that OES2 will include SP7 for NetWare 6.5, which will introduce Xen paravirtualization to NetWare. IMHO, this is spiffy if your hardware vendor has stopped significant NetWare support (*cough*dell*cough*) and you still need to use it. For us we'll probably stick with 'bare metal' installs for the time being, at least until we get proof that running a Xen-virtualized NetWare instance on a 64-bit server runs faster than the same NetWare running bare-metal on a 64-bit server (in 32-bit emulation mode).

It also sounds like they've spent serious time getting NSS and NCP faster. This is badly needed, as I showed earlier. As file I/O is much more CPU-bound on Linux than on NetWare, any improvements they can make will be appreciated.

Also, they hope (but are not promising) to give out a public beta of OES2 to all BrainShare attendees. I predict another round of benchmarking come early April.

OES2 is currently slated for release in the late-May early-June timeframe. This is nifty, as that's the start of Summer for us. Though, we're not migrating right away unless we are blown away by the differences.

Shadow volumes

I mentioned a few months ago something called Shadow Volumes. I just noticed something today in ncpcon on the test server that grabbed my eye something fierce:

change volume
create shadow_volume
create volume
purge volume
remove volume
enable login
disable login
Note the 'create shadow_volume' command. Perhaps Novell has slipped Shadow Volumes into a post-SP2 update? Doing help on the 'create shadow_volume' command gives this output:

BENCHTEST-LIN:help create shadow_volume

NAME: create shadow_volume - Create NCP shadow volume

create shadow_volume ncp_volume_name path

Use this command to create an association between an NCP volume
and a NCP shadow volume. This command only adds the NCP shadow
volume mount information to "/etc/opt/novell/ncpserv.conf".

This command can be added to a cluster load script.

You can run ncpcon console commands without entering NCPCON by
prefacing the command with ncpcon.

create shadow_volume vol1 /home/shadows/vol1
and "help shadow"
BENCHTEST-LIN:help shadow

NAME: shadow - Perform Shadow Volume operations on a NCP Volume - (null)

shadow volumename operation [options]

You can run ncpcon console commands without entering NCPCON by
prefacing the command with ncpcon.

operation=[lp][ls][mp][ms] - (lp) List primary files
(ls) List shadow files
(mp) Move files to primary
(ms) Move files to shadow

pattern="searchPattern" - File pattern to match against

owner="username.context" - Username and Context

uid=uidValue - User ID

time=[m][a][c] - (m) Last Time Modified (a) Last Time Accessed
(c) Last Time Changed

range=[time period] - See Time period

size=[size differential] = See Size differential

output="filename" - Output all results to the specified filename

time period=[a][b][c][d][e][f][g][h][i][j]
(a) Within Last Day
(b) 1 Day - 1 Week
(c) 1 Week - 2 Weeks
(d) 2 Weeks - 1 Month
(e) 1 Month - 2 Months
(f) 2 Months - 4 Months
(g) 4 Months - 6 Months
(h) 6 Months - 1 Year
(i) 1 Year - 2 Years
(j) More Than 2 Years

size differential=[a][b][c][d][e][f][g][h][i][j][k]
(a) Less than 1KB
(b) 1 KB - 4 KB
(c) 4 KB - 16 KB
(d) 16 KB - 64 KB
(e) 64 KB - 256 KB
(f) 256 KB - 1 MB
(g) 1 MB - 4 MB
(h) 4 MB - 16 MB
(i) 16 MB - 64 MB
(j) 64 MB - 256 MB
(k) More than 256 MB


Yes, 'EXAMPLE:' is blank in the HELP. Hmmmmmm. I don't see any documentation updates, but those commands are indeed present. Richard Jones mentioned that shadow volumes are an OES2 feature, and to try it out in the beta. Perhaps there is an OES2 beta in the near future? Who knows.

