March 2007 Archives

Why cache is good

One of my post-BrainShare tasks is to re-run some OES performance benchmarks. I did a benchmark series back in September, and the results weren't terribly encouraging. I learned at BrainShare that a mid-December NCPSERV patch fixed a lot of performance issues and that I should rerun my tests. Okay, I can do that.

One test I did underlines the need to tune your cache correctly. Using the same iozone tool I've used in the past, I ran the throughput test with multiple threads. Three tests:

20 threads, each working against its own 100MB file (2GB working set)
40 threads, each working against its own 100MB file (4GB working set)
20 threads, each working against its own 200MB file (4GB working set)
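For reference, a throughput run like the 20x100M case could be kicked off with something along these lines. The record size, file paths, and exact set of -i test selectors are assumptions for illustration, not the exact command line I used:

iozone -t 20 -s 100m -r 64k -i 0 -i 1 -i 2 -i 3 -i 5 -i 8 \
    -F /media/nss/DATA/iozone/f{1..20}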

The server I'm playing with is the same one I used in September. It is running OES SP2, patched as of a few days ago, with 4GB of RAM and two 2.8GHz P4 CPUs. The data volume is on the EVA 3000, on a RAID0 partition; I'm testing OES throughput, not the parity performance of my array. Due to PCI memory reservations, effective memory is 3.2GB. Anyway, the very good table:
                     20x100M        40x100M        20x200M
Initial write      12727.29193    12282.03964    12348.50116
Rewrite            11469.85657    10892.61572    11036.0224
Read               17299.73822    11653.8652     12590.91534
Re-read            15487.54584    13218.80331    11825.04736
Reverse Read       17340.01892     2226.158993    1603.999649
Stride read        16405.58679     1200.556759    1507.770897
Random read        17039.8241      1671.739376    1749.024651
Mixed workload     10984.80847     6207.907829    6852.934509
Random write        7289.342926    6792.321884    6894.767334
The 2GB dataset fit inside of memory. You can see the performance boost that provides on each of the Read tests. It is especially significant on the tests designed to bust read-ahead optimization such as Reverse Read, Stride Read, and Random Read. The Mixed Workload test showed it as well.

One thing that has me scratching my head is why Stride Read is so horrible with the 4GB data-sets. By my measure about 2.8GB of RAM should be available for caching, so most of the dataset should fit into cache and therefore turn in the fast numbers. Clearly, something else is happening.

Anyway, that is why you want a high cache-hit percentage on your NSS cache. It is also why 64-bit memory support will help you if your users are working against very large data sets, and we're getting to the level where 64-bit will help; it will help even though OES NCP doesn't scale quite as far as we'd like it to. Whether it scales far enough is the overall question I'm trying to answer here.

The ".xxx" top-level domain returns

I read a post on Ars Technica that the top-level domain ".xxx" is back under consideration. From the article:
To find an explanation for resilience of the .xxx TLD proposal, one need only follow the money. The only organizations advocating the creation of the .xxx TLD at this point are the domain registrars, who would be able to generate considerable profits by selling .xxx domain names.
Which is very true. They won't be reaping the big bucks by signing up the seamy side of the internet; they'll be making the big bucks from reputation-protection buys from the likes of us. Higher Education domains like ours, WWU.EDU, are PRIME PICKINGS for .XXX. You can fully imagine that "WWU.XXX" would be full of ***Hot CoEd Action*** if we don't get it first. So you can further guarantee that our Telecom group will diligently pick up "WWU.XXX" in order to maintain our reputation. So whoever is granted the authority to register ".XXX" domains will be getting our money, and that of most other .EDU domains as well.

Back in the early years of the internet I heard of a proposal to have pornographers use "xxx" instead of the then-ubiquitous "www" at the start of URLs. It had the added advantages of not requiring regulatory approval and being opt-in. It never went anywhere, but I thought it served the need better than a new top-level domain. Back then I didn't even consider the idea that "fbi.xxx" would be an attractive target for pornographers, heh.

Just a handy reminder

Novell has changed the patch process for brand-new OES SP2 installs slightly. See TID3045794.

rug act [activation code] [email address]
Activate your patches
rug sub oes
Subscribe to the OES channel
rug ref
Refresh the OES channel
rug pin patch-11371
Install the rug patch, so this doesn't take an age
rcrcd restart
Restart the red-carpet daemon, so the patch takes
rug ref
Refresh the OES channel
rug pl
Make sure you see patches
rug pin --entire-channel oes
Install all the patches in the OES channel
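Strung together, the whole sequence looks like this (the activation code and e-mail address below are placeholders, not real values):

rug act XXXX-XXXX-XXXX admin@example.edu
rug sub oes
rug ref
rug pin patch-11371
rcrcd restart
rug ref
rug pl
rug pin --entire-channel oes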

Now you know.

BrainShare done

I'm back at work. BrainShare was a blast, as usual. Learned a lot. I spent most of the day dumping what I learned, and will be spending the rest of the week working on things I learned about last week, like a new benchmark series on OES with the mid-December NCP patch. I want to see if that changed anything.

Also, next week when classes are back in session, I need to analyze our I/O patterns on WUF to better design a test for OES. I need to know FOR SURE whether OES-Linux is up to the task of handling 5,000 concurrent connections the way we do it. The last series suggested it is, but I need more details.

Finding me

I'll be at Meet the Experts tonight. I'm likely to be found near the Support Forum SysOps, wherever they'll be.
There is a book on this: "Novell Cluster Services for NetWare and Linux". This session was about OES, not OES2. Again, my notes:
  • On linux, cluster nodes are added through YaST
  • 120 bytes of meta-data per file on NSS
  • iPrint volumes could go on non-NSS volumes
  • ext3 on OES2 is indexed; it is not indexed on OES1, which is a problem for larger directories.
  • Novell Server Consolidation and Migration Tool can migrate Netware to Linux
  • While running in mixed mode, you cannot extend or create NSS pools. Reboots all around to make this take.
  • In mixed mode, trustee modifications do NOT transfer to the other OS. Migrate your NetWare volumes to OES-Linux, and leave them there!
    • In OES-Linux, trustees are kept in a file, not in the file-system.
  • In mixed mode, cluster load/unload scripts are kept in /etc/opt/novell/ncs/
    • When out of mixed mode, scripts are promoted into edir
  • Cluster licenses are not checked in OES-linux, but still 'count' come audit time. So have them.
  • The 'cluster convert' command ends mixed-mode operation
  • Clustering inside VMWare ESX server: only 2-node Microsoft clusters are supported. All others are not.

OES 210: OES, architectural overview

This sounds basic, but it builds on IO102. Again, my notes:
  • Probable beta in the next few weeks
  • OES2 will not install on SLES10, only on SLES10 sp1
    • This was done for Product Certification reasons, as was the fact that OES is an 'add on' to SLES
  • Most of OES2 is still 32-bit code. Parts with kernel interaction will be 64-bit.
  • Shipping on DVD media, though the OES add-on will be CD.
  • It will use Novell Customer Center for updates
  • http://www.novell.com/products/openenterpriseserver/partners for AV and Backup partners
  • CASA is a new authentication package that stores credentials; it also exists on the client
  • NLDAP has been ported to openLDAP, in that the openLDAP community has accepted the patches submitted by Novell.
  • The kernel in OES2 will be 2.6.16
  • SMS allows backing up of Xen VMs
  • eDir 8.8 comes with OES2, no word on eDir 8.7
  • Pure-FTPd is eDir-integrated
  • iManager 2.7 comes with JRE1.5
    • iManager WILL be ported to NetWare, which means OES2-NetWare will also come with JRE1.5
  • Samba new 'passdb' option = NDS_ldapsam
    • Allows use of the Universal Password as a Samba password. Nifty. (A config sketch follows this list.)
  • Tomcat 5 now, separate OES instance from the default SLES10 instance.
  • New migration framework, script based from the looks of it.
  • LAS, light auditing framework, new audit API
    • NSS is instrumented to use it.
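Relating to the Samba passdb note above, here is a minimal smb.conf sketch of how that backend might be wired up. Only the backend name (NDS_ldapsam) comes from the session; the LDAP URI, suffix, and admin DN are made-up examples, and the exact option syntax is an assumption based on the standard ldapsam backend:

[global]
    # eDirectory-backed passdb, per the session note; values below are illustrative
    passdb backend = NDS_ldapsam:ldaps://edir.example.edu
    ldap suffix = o=Example
    ldap admin dn = cn=admin,o=Example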

TUT211: NetWare virtualization

  • Xen 3.0.4+ is the codebase. They wanted 3.0.5, but Xensource didn't get the release out in time for that.
  • Server.EXE contains the bulk of the paravirtualization code.
  • New loader, XNLOADER.SYS replaces NWLOADER.SYS, if used in Xen.
  • New console driver. The old method, writing directly to video memory, won't work in a paravirtualized VM.
  • New PSM: XENMP.PSM. Permits SMP in Xen.
  • So far, no "P2V" equivalent application, though they promise something by OES2-ship.
  • Improved VM management screens.

TUT205: Dynamic Storage Technology

I've gone over this at some length in the past. But as with the previous sessions, here are my notes from this one.

  • New fstype = shadowfs, provides a linux-level view of a shadow filesystem. By default, linux doesn't see the unified view. Useful for some backup apps, or things like web-servers.
  • File-systems participating in DST need to be mounted on the same OES server. They could be NFS-mounted, and might possibly be NCP-mounted in the future. Not yet.
  • Migration policy can be set by user.
  • Migrations are batched, not done on-demand.
  • Can be used to silently migrate a volume to new hardware
    • Set new volume as Primary, and old as Shadow
    • As users hit data, it gets migrated to Primary from Shadow during nightly migrations.
    • Over time, most of a file-system can be migrated this way.
  • Directory quotas do NOT replicate over shadow. The shadow quota may be different than Primary quota, and directory quotas are NOT shadow-aware. This is because directory quotas are a function of the file-system, and DST is a function of NCPserv and the client.

TUT212: Novell Storage Services

I'm what you'd call good at NSS. But NSS on OES2 is another critter. This session took us through the updates. From my notes:

  • Three times the NSS source tree has been accidentally deleted by developers. It has been restored from Salvage each time. Go Salvage.
  • When mounting NSS on OES-Linux, mount it with the long namespace. Saves time. I did not catch the fstab option to make this work, though.
  • You can create NSS pools that are not NetWare compatible
  • NSS & LUM
    • NSS = 128-bit, Unix = 32-bit. LUM handles the translations.
    • Users need to be LUM-enabled for this to work
    • NCP-Serv can fake it for non-LUM users, but it is slower access.
      • OES1 = rights and owners set through POSIX attributes
      • OES2 = rights and owners set through extended attributes
    • If Samba, then LUM.
      • Trustees are enforced, GUID is ignored.
  • Beasts = inodes!
  • /proc/slabinfo -> lsa_inode_cache = # of inodes/files in cache (see the command example after this list)
  • On NetWare, memory over the 4GB line is treated as a RAMdisk for files over 128K in size.
  • 32-bit vs 64-bit linux & NSS
    • 32-bit linux: 1GB max kernel memory, makes for tricky caching
    • 64-bit linux: All memory can be kernel memory
  • NSS patch in mid-December allowed meta-data caching in user-memory, greatly speeding up meta-data reads on 32-bit systems with large numbers of files.
  • nss /HighMemoryCacheType= [private|linux|none]
    • Sets the use of User memory in 32-bit OES
    • None = Use the same algorithm as OES-FCS, which is to try and cache everything in Kernel-mode memory. Only option on 64-bit linux since it doesn't have to use USER memory at all.
    • linux = integrate caching into the regular linux caches. This can be a problem on dual use file-server/app-server system, as memory hungry applications can cause the file-system cache to purge completely.
    • private = set up a separate user-mode cache in memory outside of the linux cache. Best for dedicated file-servers.
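A couple of command-line illustrations of the notes above. The /proc/slabinfo check is straightforward; the NSS console usage is from memory and should be treated as an assumption:

grep lsa_inode_cache /proc/slabinfo
Rough count of NSS beasts (inodes/files) currently held in cache
nsscon
Opens the NSS console on OES-Linux; at its prompt, enter the following
nss /HighMemoryCacheType=private
Sets the private user-mode cache, the option described above for dedicated file-servers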

What's new in OES2

A good many things are new in OES2. The high-points:
  • 64-bit support (woo!)
  • iFolder 3.6
  • Dynamic Storage Technology (f.k.a. Shadow Volumes)
  • eDir integrated DHCP/DNS & FTP
  • Major Samba improvements
  • DFS support, including linking to sub-directories
    • Make a link to, for example, DATA3:/shared/, rather than making a new volume just for "shared"
  • NetWare in a VM, with improved VM management
  • Xen 3.0.4+ support
    • They wanted 3.0.5, but XenSource didn't make the cut-off date. So OES2 will ship with a heavily patched 3.0.4.
Also...
  • Service packs for OES will be synchronized with SLES
  • OES is going to be an add-on product on top of SLES, choose 'add on product' during install and use the OES CD's.
  • The 'Volume Location Database' for DFS is clusterable now
  • iManager 2.7 now has support for managing file-system trustees
  • OES3 will only have support for NetWare inside of a VM. This is a move that was pushed by the hardware vendors, NOT Novell. The hardware vendors have notified Novell that they'll be discontinuing driver support for NetWare after OES2.
The new Novell Client will be released near OES. This will be 4.91SP4:
  • It has 802.1x support
  • New client for SLED10
  • No DLU for Vista; that will come from Zen
There was a slide that is guaranteed to annoy the Slashdot crowd.

Novell slide says Risk
Note the bottom line:
  • Reduced risk of deployment
This line implies that Novell believes Microsoft's IP is a risk to Linux. However, this one line was the only bullet point that was not verbally referred to in the keynote (that I remember). I suspect that this is the slide that Microsoft provided.

Another thing to note about Mr. Mundie's discussion was OSS. He referred to OSS as, "the university model of development," which further implies that it isn't good for industry. It was a subtle thing, but clearly more of the Microsoft line on this whole deal.

TUT212: Novell Storage Services

Not a new topic, but it contained the updates to NSS that'll be there in OES2.

By far the biggest thing is a 64-bit version of OES. Big big big. How big? Very big.

Remember those benchmarks I ran? The ones that compare the ability of OES to keep up with NetWare? And how I learned that on OES NCP operations are CPU bound w-a-y more than on NetWare? That may be going away on 64-bit platforms.

You see, 64-bit linux allows the Kernel to have all addressable memory as kernel memory. 32-bit linux was limited to the bottom 1GB of RAM. If NSS is allowed to store all of its cache in kernel memory, it'll behave exactly like 32-bit NetWare has done since NSS was introduced with NetWare 5.0. I have very high hopes that 64-bit OES will solve the performance problems I've had with OES.

Monday keynote

Ron Hovsepian is a better speaker than Messman was, and was much better about hiding the fact that he was using a teleprompter. All in all, the session wasn't terribly informative, but then the Monday session is generally aimed at press releases rather than gee-whiz. That comes Friday.

That said, there was some good stuff in this session:
  • OES2 public beta will be 'soon'. It will not be released at BrainShare
  • AD / eDir federation will be in OES2
  • SLES10 SP1 is out
  • A new certification: Novell Certified Engineer (NCE), a migration of the old Certified Novell Engineer (CNE) to the new Linux regime. (I have to look in to that)
  • Virtualization managers are coming soon. Possibly in Zen for Linux 7.2, releasing "after Q2".
  • NetWare SP7 will be OES2
Oh, and there was a Microsoft guy on stage. Whoa. I'll post that picture later.

Update: The picture.

Sungard @ BrainShare

TUT281 is a very interesting session.
"This session presents an integration solution to manage higher education student identity information using Banner and IDM 3.5. We discuss the objects and attributes that can be synchronized as well as showing how to implement custom business policies as part of the integration solution."
Could it be that they have a driver for Banner SCT now? That would change some politics. Last time we looked we needed the 'CSV' driver to make IDM work with Banner, and that is ironically the most expensive driver to buy. Hmmmmmmmm.

On my way

I'm heading to Salt Lake City for BrainShare today. I'm in the Bellingham airport right now, on their wireless. FREE wireless, mind. Not this sprint/verizon/alcatel wireless you get in the big airports that costs $10/hour or whatnot. Civilization!

Anyway.

The thing that rocks HARD is that Delta has a direct flight from BLI to SLC. This is even better because Novell contracts with Delta for attendee deals, as SLC is a Delta hub. So not only do I not have to do a cross-terminal transfer at SeaTac like last year, I get to fly cheaper. Or rather, WWU gets it cheaper. Also, I'll be coming home Friday night since I don't have to take a red-eye.

Xen, cdroms, and tricks

One of the things I managed to figure out today is how to get a DVD drive visible in a Xen DomU, and how to tell the DomU that the media has been changed. First off, configuring your VM to have an optical drive in the first place.
disk = [ 'phy:/dev/sdb5,ioemu:hda,w','phy:/dev/sr0,hdc:cdrom,r']
The second entry is the one that attaches the physical optical drive to the DomU, and it gives you a CD device inside the DomU. Unless you have a disc in the drive when the DomU is started, though, you won't see anything.
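For context, here is roughly where that disk line sits in a complete HVM guest definition. Only the disk line comes from my config; every other value is a generic assumption about a SLES10/Xen 3.0.x HVM guest:

name = 'oes-guest'
builder = 'hvm'
memory = 1024
kernel = '/usr/lib/xen/boot/hvmloader'
device_model = '/usr/lib/xen/bin/qemu-dm'
disk = [ 'phy:/dev/sdb5,ioemu:hda,w','phy:/dev/sr0,hdc:cdrom,r']
vif = [ 'type=ioemu, bridge=xenbr0' ]
vnc = 1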

Unknown to me until now, there is a key-combination that allows you to manage the devices in a DomU.

[ctrl]+[alt]+[2]

That will take you to the HVM management screen. Type 'help' to see what commands you can issue there. To tell the DomU that the optical device has been ejected:
eject hdc
Where "hdc" is the device you configured in your VM config file.

Then change your media, and at the same screen, issue the command:
change hdc /dev/sr0
This tells the DomU that the optical device has new media, and to scan it.

To get back to the graphical screen:
[ctrl]+[alt]+[1]
This screen works similarly to the NetWare debugger, in that all processing in the VM stops while you're in it. The eject command runs just enough processing in the VM to handle the eject, but not enough to run all the other processes. So be aware that time-sync will get skewed if you stay in this screen too long.

ZEN Pulsar has a name

http://www.novell.com/coolblogs/?p=792

and

http://www.novell.com/news/press/item.jsp?id=1306

It is, wait for it...

Novell ZENworks Configuration Management!

One nifty feature, long rumored:
With native integration for both Microsoft Active Directory and Novell eDirectory
I've heard that ZEN Pulsar would have its own internal database for things like application objects and images. I guessed this was due to eventual AD support, but, hey, there it is!

We'll hear a lot more about it next week at BrainShare, I'm sure. One thing is clear: this version of Zen is Windows-desktop-only for the time being. Linux will come later; again, we'll learn more next week. ZEN Asset Management is still a separate product, as is Patch Management. No surprise there, as both products are repackaged third-party apps.

The last NOA before BrainShare: Dynamic Storage Technology

This is the official name for what has been known as Shadow Volumes. I've spoken about them many times in the past. I first heard about Shadow Volumes (now Dynamic Storage Technology) at BrainShare last year, but I didn't blog about it. Since then, there have been a few more posts.

June 15, 2006
June 26, 2006
September 13, 2006
November 30, 2006

Yeah, this is exciting stuff. The podcast had more details, here are my notes:

Jason Williams -- Product Manager for OES
  • OES2 will include Dynamic Storage Technology
  • "We recon about 80% of of that stuff [on very large NCP volumes] could be turned to stale. It's stuff that hasn't been touched in maybe 30 days or more"
  • "We have one customer out there with maybe 450 plus terabytes of data, and that's just the unstructured stuff. It doesn't even account for their databases."
  • Redirection to the shadow volume is done similar to DFS, with a pointer the client understands and then follows.
    • This avoids the migrate/demigrate problem for traditional HSM
  • This is linux-only. Not NetWare.
  • Works for NCP-clients right now, trying to get Samba working... not done yet.
  • Can set policies for what to migrate: ModifyDate, AccessDate, FileType, etc.
  • Managed through NRM
  • Can do stacked policies, a global policy, and policies for specific volumes
  • Applies not just to NSS, but to ext3, reiser, xfs, and such.
  • Requires an exclusive lock on a file before it can be migrated.
  • This is a service on top of a file-system, not a feature of a file-system.
  • Monday morning keynote demo! Right there!
  • There will be a table in the Technology Lab
For more details, listen to the pod-cast.

Displays

It is no secret that it is an LCD market these days. There are some very good reasons for this, and I have two of them sitting on my work desk. I'm a sucker for a good monitor, so when I built my very first computer (an AMD K7), my one splurge with that system was a high-end monitor. It was a ViewSonic 17" Professional Series, and cost $800. The theory was that monitors didn't change that much, and it would probably last long enough to put two PCs to bed. Which it has, and it will likely outlast a third.

Today on AnandTech is a quote:
CRT holdouts still aren't going to get the high refresh rates and extremely fast response times that they're used to, but it is getting nearly impossible to find any quality CRTs these days - all of the best CRTs were made several years ago, and eventually even those are going to wear out.
Which is true. You can't find the very good CRTs anymore. The monitor I bought is still running just peachy. It is small by modern standards, but it has been rock solid. Great color performance, great frequency response too. It also comes out of suspend mode pretty fast.

One of the people in a forum I follow works in a hospital. That is one area where high-end CRTs are in great demand, as very high refresh rates are common on imaging equipment; LCDs can't keep up. Since these monitors are used daily, and used hard, finding replacements is getting harder every year.

The desktop system at home is too old to keep up. I'll be building a gaming-oriented rig soon, as the laptops have taken over 90% of the non-gaming activity in the house. That will probably have an LCD larger than 17", as all new video cards these days come with only DVI outputs and the screen real estate is needed. This old warrior will likely be consigned to being the head for the linux router.

Spam stats!

Yummy stats! These are from the anti-spam appliance in front of Exchange, for the last 24 hours.



Processed:         168,802
Spam:               85,166 (50%)
Suspected Spam:        544 (<1%)
Attacks:             4,837 (3%)
Blocked:                 0 (0%)
Allowed:                 0 (0%)
Viruses:                43 (<1%)
Suspected Virus:        31 (<1%)
Worms:               3,730 (2%)
Unscannable:         1,772 (1%)

And now, definitions:
Processed: The number of messages processed. This is unexploded, so that mail sent to 42 people still counts as just 1.
Spam: The number of Spam messages with a confidence of 90% or higher.
Suspected Spam: The number of Spam messages with a user defined confidence of (in this case) 70% or higher.
Attacks: An aggregate statistic, but in this case they're all Directory Harvest Attack messages. A directory-harvest-attack message is one of those messages sent to 20 people at a site with generated names, in an effort to see which addresses don't generate a bounce message.
Allowed/Blocked: We don't use this feature.
Viruses: Viruses that are not mass mailers.
Suspected Viruses: Heuristically detected viruses. Good for picking up permutations of common viruses.
Worms: Viruses that are mass mailers.
Unscannable: Messages that are unscannable for whatever reason.

Like my boss, you may be looking at that 50% number and wondering what happened. It is commonly reported in the press that "90% of all email is now spam," so where is the other 40% going? I looked into where the press get their numbers, and most of them get them from MessageLabs, which reports its numbers on the Threat Watch. Today, the spam rate there is "48.43%", so the 50% we're seeing is well within reason. Looking at their historical data, the spam rate waxes and wanes on a day-to-day and week-to-week basis.

Brainshare, and the bad chairs

The Salt Palace had an expansion recently. I can't tell where the new space was added, but it has been under construction for some time now. I think the new space is why they're able to offer a LearningLab again this year. I hope they fixed one of my bigger pet peeves about BrainShare: the bad chairs.

Specifically, any session outside of the ballrooms has the bad chairs. These chairs are standard convention chairs: stackable, easy to deploy in large numbers, easy to store. They have a square back-rest, and the top comes to about mid-back on me. You can find an example here. Unfortunately, as they age, the back-rest panel flexes in and the top bar presses into my spine. This makes long sessions an agony 7 times out of 10, thanks to the chairs.

The ones in the ballrooms are different. They're the elongated oval style. Another example.

Monday: 3 ballroom, 1 bad-chair
Tuesday: 2 ballroom, 3 bad-chair (ow)
Wednesday: 2 bad-chair, including both 2-hour sessions. OwOwOw.
Thursday: 2 ballroom, 3 bad-chair (ow)
Friday: 1 ballroom, 1 bad-chair

Maybe I should bring my own custom seat-back. That's a really good idea.

A comment on my editorial about outsourcing student email asked whether we had considered GroupWise for the students.

There are a number of reasons why that isn't workable, though product price is not one of them. I'm pretty sure we have enough licenses to deploy GroupWise to the students, as the bundle we purchased to get them into NetWare also included licensing for two other products (Zen for Desktops and GroupWise). So, GW is a free option. Plus, it would give students a calendaring option they don't have right now, and the ability (if we enable it) to share folders between users.

Now for the downsides, and it isn't any of the five points I outlined previously.

First and foremost, SPAM. The native anti-spam inside GroupWise was a simple blacklist the last time I looked, which is effectively worthless in the modern era of SPAM. So we'd STILL need an add-on anti-spam product. The open-source stuff we're doing now would work just as effectively with GroupWise as it does with Postfix, so that would provide zero improvement over the status quo. As I mentioned before, commercial anti-spam products that cost even $2 a head are still more expensive than we can afford. And finally, SPAM is what is driving the current decision process, not new/better/more features.

Second is the server. This is going to be a POA with 18,000 users in it. No matter what we do, we'll need a new server to do student email, and that's a non-trivial cost. We can't install GW onto the Solaris system that is the current student email server, so we'll need something else (probably running SLES10) to provide that functionality. We could theoretically put it into the NetWare cluster, but that is not a long-term viable solution considering Novell's stated goals. This server would have to run all the GroupWise agents on one box: one POA, one MTA, GWIA, and WebAccess. That may not be doable with as many users as we have right now.

Third is scaling. We have 18,000 active student accounts. Even in a WebAccess/GWIA-only environment, that'll make for a monstrously huge POA database. The mail volume on the current student email server is on the order of 350GB, roughly equivalent to our Exchange mail-store size. Reindexing that thing will take a lot of time. We could split this into multiple POAs, but then you create the "how do we automate load-balancing between the POAs" problem.

Fourth is people. By going to a GW solution, we liberate significant sysadmin time on the unix side of the org, but consume nearly 100% of a person on the Novell/MS side of the org. That person is most likely me, as the other mail administrator on that side is adamantly (and vociferously) against GroupWise in general due to bad experiences at a previous job. I don't want to be an email administrator like that.

In short, GroupWise doesn't solve the identified problem, namely SPAM. It adds a bunch of 'nice-to-have' features. Outsourcing gives us a LOT, such as a greatly increased mail quota, much better anti-SPAM, and the possibility of keeping a WWU-branded mail account after graduation.
In this age, there is not much point in a school going halfway with an email system...either offer something reasonably close to the state-of-the-art or outsource it to someone who does. If you do neither, it won't get used. Even mandating the use of the school email doesn't work. You end up with professors collecting their students' gmail/hotmail/etc addresses at the beginning of the semester and having a TA type all those addresses into a mailing list.
-paeanblack (191171)
A good point. Our Fac/Staff side is done to corporate standards, and is pretty good. We use Exchange, and pay for some (rather good) anti-SPAM appliances. The quality of email provided to our Fac/Staff is state of the art. The student side is another matter. The primary mailer right now is the venerable Postfix, with anti-spam provided by other open-source products.

In both cases, though, mail quota doesn't come even remotely close to the "gmail standard". I THINK student quota is 100MB these days, and I could be quite wrong. We have students mailing (*sigh*) 10MB PowerPoint files around, so that quota can get chewed up right quick. Students get POP and IMAP support, though from what I hear the SPAM problem is the main complaint, and there is some grumbling that SquirrelMail isn't the best interface to use.
You give them a campus e-mail address. It's the *official* address. Delivery to that mailbox for all official college correspondence is guaranteed. THEN, if you opt to forward it off-campus to gmail or wherever, that's your own business, and you're responsible for the failings of such at your own peril.
Dredd13 (14750)
This is what we do. The official address is the @cc.wwu.edu address. Students can then forward that mail to somewhere else if they so wish (and a lot do). We haven't accepted an off-campus 'official address' because of the inability to guarantee delivery of things like billing and assignments.
I don't understand the problem with having a universal campus-hosted e-mail service. They have servers accessible to the outside world, so why not throw in an e-mail server? If you make it simple (ie: SquirrelMail seems to be a popular campus e-mail hosting app, probably cause of it's cost and simplicity), I wouldn't think size would be an issue, as long as you set the proper quotas per e-mail/user.
-Anonymous Coward
The problem with this is funding. We use SquirrelMail. Unfortunately, the spam problem is bad enough that we need to spend money, not just admin time, to fix the problem to the end user's satisfaction. Spending money on 18,000 accounts is not cheap by any stretch. Spending on that front is largely tied to student tech fees, which students are understandably loath to increase more than they have to. I don't know what success we've had getting fees approved for things like commercial anti-SPAM products.
All students will be forced onto the system by the end of the semester, but it doesn't support POP or IMAP. Because of that limitation, the only freely available mail client it supports is Windows Live Desktop, which is only available on Windows
-Topic head
This is a problem that has been brought up. A sizable percentage of our student population has PowerBooks as their primary computer, and a Windows-only solution isn't workable. Our Computer Science department is, understandably, a den of anti-Microsoft sentiment (which is why the cs.wwu.edu domain receives mail independent of the central services). This is one of the reasons why we NEED something like POP, or better yet IMAP, support in whatever we go with. Web-only portals like gmail can work, but some students really like just dropping all their mail into a single mail client that has links to all of their email accounts.
I agree, switching to gmail for university email doesn't sound that bad. Especially if it would raise the storage limit from 20 MB to >2GB. I don't really care though, I almost never use my university email as I have all of my class email sent to my Yahoo/SBC account.
-assassinator42 (844848)
Before the current Windows Live vs. Google debate started, there were murmurings of looking at converting to a gmail setup. We got hung up on several of the points mentioned in my previous post: no SSO, no easy account create/delete, no password sync.
My University [dailynorthwestern.com] is switching to Google. One of my concerns is that I really like my desktop clients (alpine and thunderbird) and prefer IMAP. While gmail is an excellent web-client, I don't really use my gmail account that much, because it doesn't offer IMAP & POP is both "flaky" and limiting.
-Anonymous Coward
IMAP is something of a sore point with us techs. We prefer it to POP. Neither service offers IMAP yet, which is one of the reasons we haven't leapt in with glad cries.
You're forgetting about something, though. Microsoft give huge discounts and tons of free stuff to colleges, therefore the colleges have raging boners for Microsoft.
-Anonymous Coward
Heh. Us more than most, since we're close enough to Redmond that a number of our alumni work for Microsoft and can donate software from the Company Store. That's how we paid for MS Office the last time around. The IRS has changed some rules to make that more expensive, but it is still a lot cheaper than regular alumni appeals. This is how we were able to afford to import all students into Active Directory.

However... while Microsoft is 'the cheap option' a lot of the time, recent licensing changes at Microsoft have made it much more expensive for us and our Alumni arm-twisters. We're still wondering how we're going to pay for Exchange 2007. Vista... oof. Not going there yet. Like ALL institutions, we've budgeted a certain level of money for software, and Microsoft is making themselves more expensive. So, the raging boner is going flaccid.

Besides, we've been a NetWare shop for a long time. Hah!
Our boss dismissed the idea of outsourcing to Google or anybody else based SOLELY upon the fact that they reserved the right to advertise in the future to our students. We don't view our students as a commodity to be sold, so that kinda killed the whole "outsource the email" idea.
-Sorthum (123064)
Yeah, that's giving us pause too. Neither outright states that they won't advertise to students. Both admit they'll be using usage data to improve their advertising targeting in general.

Outsourcing student e-mail

I saw on Slashdot today a piece about a University migrating their student email to Windows Live.

There have been high-level discussions about doing the same here at WWU, only we're still trying to figure out whether Windows Live or a Google program makes the most sense. No decision has been made, though Windows Live would integrate much better into our environment due to the presence of student accounts in Active Directory. The Google offering has better 'hearts and minds' support among us techs, but the Microsoft offering would require less work from us to get running.

Last I heard, neither offering supported IMAP. GMail doesn't support IMAP, so I doubt any Google offering would. No idea if Windows Live (general access) even does.

There are a number of reasons why outsourcing email is attractive, and right there at the top is SPAM. We can't afford any commercial product to do student anti-spam, as they all charge per head, and even $2/head gets pretty spendy when you have to cover 18,000 student accounts. Currently, student e-mail anti-SPAM is all open-source, and I still hear that the SPAM problem is pretty bad. The most senior of our unix admins spends about half his day dealing with nothing but SPAM-related problems, so outsourcing would save us that expense as well.

The number two reason is price. Both the Google offering and Microsoft offering are free. Both have promised that they won't put advertising in their web portals for active students, but the usage data may be used to tailor advertising programs targeted (elsewhere) at the high-profit college-age population. Both offerings permit the student to maintain the address after graduation, though in that case they would get advertising in their web portals.

There are a number of problems that outsourcing introduces.
  • Identity synchronization. MS is easiest, Google will require some custom code.
  • Password synchronization. Do we even want to do it? If so, how? If not, why not?
  • Account enable/disable. How do we deactivate accounts?
  • Single sign-on. Is it possible to integrate whichever we use into CAS? Can we integrate it into the WWU Portal?
  • Web interface skinning. Will they permit skinning with the WWU style, or will they force their own?
The answers to all of the above are not in yet, which is why a decision hasn't been made on which way we're going. But the decision to outsource at all is all but made at this point.

Update 1 10/13/2007
Update 2 8/1/2008

Another pet-peeve

"I need some space on the SANS".

No. You don't. You need some space on the SAN. Different. We don't have multiple Storage Area Networks, nor do I think you're talking about a certain well known security agency.

Thank you. Please move along.