Friday, September 18, 2009

It's the little things

My attention was drawn to something yesterday that I just hadn't registered before. Perhaps because I see it so often I didn't twig to it being special in just that place.

Here are the Received: headers of a bugzilla message I got yesterday. It's just a sample. I've bolded the header names for readability:
Received: from ExchEdge2.cms.wwu.edu (140.160.248.208) by ExchHubCA1.univ.dir.wwu.edu (140.160.248.102) with Microsoft SMTP Server (TLS) id 8.1.393.1; Tue, 15 Sep 2009 13:58:10 -0700
Received: from mail97-va3-R.bigfish.com (216.32.180.112) by
ExchEdge2.cms.wwu.edu (140.160.248.208) with Microsoft SMTP Server (TLS) id 8.1.393.1; Tue, 15 Sep 2009 13:58:09 -0700
Received: from mail97-va3 (localhost.localdomain [127.0.0.1]) by mail97-va3-R.bigfish.com (Postfix) with ESMTP id 6EFC9AA0138 for me; Tue, 15 Sep 2009 20:58:09 +0000 (UTC)
Received: by mail97-va3 (MessageSwitch) id 12530482889694_15241; Tue, 15 Sep 2009 20:58:08 +0000 (UCT)
Received: from monroe.provo.novell.com (monroe.provo.novell.com [137.65.250.171]) by mail97-va3.bigfish.com (Postfix) with ESMTP id 5F7101A58056 for me; Tue, 15 Sep 2009 20:58:07 +0000 (UTC)
Received: from soval.provo.novell.com ([137.65.250.5]) by
monroe.provo.novell.com with ESMTP; Tue, 15 Sep 2009 14:57:58 -0600
Received: from bugzilla.novell.com (localhost [127.0.0.1]) by soval.provo.novell.com (Postfix) with ESMTP id A56EECC7CE for me; Tue, 15 Sep 2009 14:57:58 -0600 (MDT)
For those who haven't read these kinds of headers before, read from the bottom up. The mail flow is (a quick script for walking the headers this way follows the list):
  1. Originating server was Bugzilla.novell.com, which mailed to...
  2. soval.provo.novell.com running Postfix, who forwarded it on to Novell's outbound mailer...
  3. monroe.provo.novell.com, who attempted to send to us and sent to the server listed in our MX record...
  4. mail97-va3.bigfish.com running Postfix, who forwarded it on to another mailer on the same machine...
  5. mail97-va3 running something called MessageSwitch, who handed it back to Postfix on the same box (mail97-va3-R), which sent it on to the internal server we set up...
  6. exchedge2.cms.wwu.edu running Exchange 2007, who sent it on to the Client Access Server...
  7. exchhubca1.univ.dir.wwu.edu for 'terminal delivery'. Actually it went on to one of the Mailbox servers, but that doesn't leave a record in the SMTP headers.
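For the curious, here is a minimal PowerShell sketch of that bottom-up walk: it unfolds wrapped header lines, keeps only the Received: headers, and prints them oldest hop first. The headers.txt file name is just an assumption for illustration; paste your own headers in.

# Hypothetical file holding a message's raw headers, pasted from the mail client
$raw = (Get-Content .\headers.txt) -join "`n"

# Unfold wrapped header lines (continuations start with whitespace), keep Received: only
$received = @(($raw -replace "`n\s+", ' ') -split "`n" | Where-Object { $_ -like 'Received:*' })

# The bottom-most Received: header is the earliest hop, so print in reverse order
[array]::Reverse($received)
for ($i = 0; $i -lt $received.Count; $i++) {
    '{0}. {1}' -f ($i + 1), $received[$i]
}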
Why is this unusual? Because steps 4 and 5 are at Microsoft's Hosted ForeFront mail security service. The perceptive will notice that step 4 indicates that the server is running Postfix.

Postfix. On a Microsoft server. Hur hur hur.

Keep in mind that Microsoft purchased the ForeFront product line lock, stock, and barrel. If that company had been using non-MS products as part of their primary message flow, Microsoft probably just kept that running. The next versions just might move to more explicitly MS-branded servers. Or not, you never know. Microsoft has been making placating noises towards Open Source lately. They may keep it.

Labels: , , ,


Tuesday, September 08, 2009

Exchange Transport Rules, update

Remember this from a month ago? As threatened in that post I did go ahead and call Microsoft. To my great pleasure, they were able to reproduce this problem on their side. I've been getting periodic updates from them as they work through the problem. I went through a few cycles of this during the month:

MS Tech: Ahah! We have found the correct regex recipe. This is what it is.
Me: Let's try it out shall we?
MS Tech: Absolutely! Do you mind if we open up an Easy Assist session?
Me: Sure. [does so, sends a few messages through, finds an edge case that the supplied regex doesn't handle]. Looks like we're not there yet on this edge case.
MS Tech: Indeed. Let me try some more things out in the lab and get back to you.

They've finally come up with a set of rules to match this text definition: "Match any X-SpamScore header with a signed integer value between 15 and 30".

Reading the KB article on this you'd think these ORed patterns would match:
^1(5|6|7|8|9)$
^2\d$
^30$
But you'd be wrong. The rule that actually works is:
(^1(5$|6$|7$|8$|9$))|(^2(\d$))|(^3(0$))
Except if ^-
Yes, that 'except if' is actually needed, even though the first rule should never match a negative value. You really need to have the $ inside the parens for the first statement, or it doesn't match right; this won't work: ^1(5|6|7|8|9)$. The same goes for the second statement and its \d$ construct. The last statement doesn't need the 0$ in parens, but it's there to match the pattern of the previous two statements of keeping the $ inside the paren.
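As a sanity check, here is a quick sketch that runs the working recipe and its 'except if' exclusion against some made-up scores using PowerShell's -match operator. Big caveat: -match uses the .NET regex engine, not whatever engine Exchange 2007 transport rules use, which is exactly why the simpler documented patterns also work here but not in the actual rule.

$workingRule = '(^1(5$|6$|7$|8$|9$))|(^2(\d$))|(^3(0$))'
$exception   = '^-'
$samples     = '-20', '5', '14', '15', '19', '22', '30', '31', '66'

foreach ($score in $samples) {
    # the rule fires only when the main pattern matches and the exception does not
    $fires = ($score -match $workingRule) -and ($score -notmatch $exception)
    '{0,4} -> {1}' -f $score, $fires
}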

Riiiiiight.

In the end, regexes in Exchange 2007 Transport Rules are still broken, but they can be made to work if you pound on them enough. We will not be using them because they are broken, and when Microsoft gets around to fixing them the hack-ass recipes we cook up will probably break at that time as well. A simple value list is what we're using right now, and it works well for 16-30. It doesn't scale as well for 31+, but there does seem to be a ceiling on what X-SpamScore can be set to.

Labels: , ,


Wednesday, August 05, 2009

Exchange transport-rules

Exchange 2007 supports a limited set of Regular Expressions in its transport-rules. The Microsoft technet page describing them is here. Unfortunately, I believe I've stumbled into a bug. We've recently migrated our AntiSpam to ForeFront, and part of what ForeFront does is header markup. There is a Spamminess number in the header:
X-SpamScore: 66
That ranges from deeply negative to over a hundred. With this we can structure transport-rules to handle spammy email. In theory, the following trio of regexes should catch anything with a score of 15 or higher:
^1(5|6|7|8|9)$
^(2|3|4|5|6|7|8|9)\d$
^\d\d\d$
Those of you who speak Unix regex are quirking an eyebrow at that, I know. Like I said, Microsoft didn't do the full Unix regex treatment. Per their documentation, the "\d" pattern "matches any single numeric digit," parentheses "act as grouping delimiters," and "the pipe ( | ) character performs an OR function."

Unfortunately, for reasons that do not match the documentation the above trio of regexes is returning true on this:
X-SpamScore: 5
It's the second recipe that's doing it, and it looks to be the combination of paren and \d that's the problem. For instance, the following rule:
^\d(6|7)$
returns true for any single numeric value, but returns false for "56". Where this rule:
^5(6|7)
only returns true for 56 and 57. To me this says there is some kind of interaction going on between the \d and the () constructs that's causing the behavior to change. I'll be calling Microsoft to see if this is working as designed and just documented incorrectly, or a true bug.
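For what it's worth, the same two patterns behave exactly as documented when run through PowerShell's -match operator (which uses the .NET regex engine, presumably not what the transport-rule agent uses). A small sketch with made-up values:

$values = '5', '16', '17', '56', '57', '58'

foreach ($v in $values) {
    $a = $v -match '^\d(6|7)$'   # the pattern Exchange matched against ANY single digit
    $b = $v -match '^5(6|7)'     # the pattern Exchange handled correctly
    '{0,3}   ^\d(6|7)$ -> {1,-5}   ^5(6|7) -> {2}' -f $v, $a, $b
}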

Labels: ,


Friday, June 26, 2009

X500 addresses and LegacyExchangeDN

We missed a step or something in decommissioning our Exchange 2003 servers. As a result, we have a whole lot of... stuff going 'unresolvable' due to how Outlook and Exchange work. There is an attribute on users and groups called LegacyExchangeDN. Several processes store this value as the DN of the object. If that object was created in Exchange 2003 (or earlier), it's set to a location that no longer exists.

The fix is to add an X500 address to the object. That way, when the resolver attempts to resolve that DN it'll turn up the real object. So how do you add an X500 address to over 5000 objects? Powershell!

$TargetList = get-distributiongroup grp.*

foreach ($target in $TargetList) {
    $DN     = $target.SamAccountName
    $Leg    = $target.LegacyExchangeDN
    $Email  = $target.EmailAddresses
    $Has500 = 0

    # only touch groups whose LegacyExchangeDN is still the old-style value
    if ($Leg -eq "/O=WWU/OU=WWU/cn=Recipients/cn=$DN") {
        # check whether an X500 proxy address for that DN is already present
        foreach ($addy in $Email) {
            if ($addy -eq [Microsoft.Exchange.Data.CustomProxyAddress]("X500:" + $Leg)) {
                $Has500 = 1
            }
        }
        if ($Has500 -eq 0) {
            # add the old DN as an X500 address so the resolver can still find the object
            $Email += [Microsoft.Exchange.Data.CustomProxyAddress]("X500:" + $Leg)
            $target.EmailAddresses = $Email
            set-distributiongroup -instance $target
            write-host "$DN had X500 added" -foregroundcolor cyan
        } else {
            write-host "$DN already had X500 address" -foregroundcolor red
        }
    } else {
        write-host "$DN has correct LegacyExchangeDN" -foregroundcolor yellow
    }
}


It's customized for our environment, but it should be obvious where you'd need to change things to get it working for you. When doing users, use "get-mailbox" and "set-mailbox" instead of "get-distributiongroup" and "set-distributiongroup". It's surprisingly fast.

My contribution to the community!

Labels: ,


ForeFront and spam

ForeFront has an option to set a custom X-header to indicate spam. The other options are subject-line markup and quarantine on the ForeFront servers. What they never document is what they set the header to. As it happens, if the message is spam it gets set like this:
X-WWU-JunkIt: This message appears to be spam.
Very basic. And not documented. Now that we know what it looks like we can create a Transport Rule that'll direct such mail to the junk folder in Outlook. Handy!

Labels: , ,


Wednesday, June 10, 2009

Outlook for everything

Back when I worked in a GroupWise shop, we'd get the occasional request from end users to see if Outlook could be used against the GroupWise back-end. Back in those days there was a MAPI plug-in for Outlook that allowed it to talk natively to GroupWise, and it worked rather well. Then Microsoft made some changes, Novell made some changes, and the plug-in broke. GW admins still remember the plug-in, because it allowed a GroupWise back end to look exactly like an Exchange back end to the end users.

Through the grapevine I've heard tales of Exchange to GroupWise migrations almost coming to a halt when it came time to pry Outlook out of the hands of certain highly placed noisy executives. The MAPI plugin was very useful in quieting them down. Also, some PDA sync software (this WAS a while ago) only worked with Outlook, and using the MAPI plugin was a way to allow a sync between your PDA and GroupWise. I haven't checked if there is something equivalent in the modern GW8 era.

It seems like Google has figured this out. A plugin for Outlook that'll talk to Google Apps directly. It'll allow the die-hard Outlook users to still keep using the product they love.

Labels: ,


Tuesday, May 12, 2009

Explaining email

In the wake of this I've done a lot of diagramming of email flow for various decision makers. Once upon a time, when the internet was more trusting, SMTP was a simple protocol and diagramming was very simple. Mail goes from originator, through one or more message transfer agents (or none at all, delivering directly; it was a more trusting time back then), to its final home, where it gets delivered. SMTP was designed in an era when UUCP was still in widespread use.

Umpteen years later and you can't operate a receiving mail server without some kind of anti-spam product. Also, anti-spam has been around as an industry long enough that certain metaphors are now ingrained in the user experience. Almost all products have a way to review the "junk e-mail". Almost all products just silently drop virus-laden mail without a way to 'review' it. Most products contain the ability to whitelist and blacklist things, and some even let that happen at the user level.

What this has done is made the mail-flow diagram bloody complicated. As a lot of companies are realizing the need to have egress filters in addition to the now-standard ingress filters, mail can get mysteriously blocked at nearly every step of the mail handling process. The mail-flow diagram now looks like a flow-chart since actual decisions beyond "how do I get mail to domain.com" are made at many steps of the process.

The flow-charts are bad enough, but they can be explained comprehensibly if given enough time to describe the steps. What takes even more time is deploying a new anti-spam product and deciding how much end-user control to add (do we allow user-level whitelists? Access to the spam-quarantine?), what kinds of workflow changes will happen at the helpdesk (do we allow helpdesk people access to other people's spam-quarantine? Can HD people modify the system whitelist?), and overall architectural concerns (is there a native plugin that allows access to quarantine/whitelist without having to log in to a web-page? If the service is outsourced (Postini, ForeFront), does it provide an SSO solution or is this going to be another UID/PW to manage?).

And I haven't even gotten to any kind of email archiving.

Labels: ,


Thursday, April 30, 2009

Conflicting email priorities

As mentioned in the Western Front, we're finally migrating students to the new hosted Exchange system Microsoft runs. They've since changed the name from Exchange Labs to OutlookLive. It has taken us about two quarters longer than we intended to start the migration process, but it is finally under way.

Unfortunately for us, we got hit with a problem related to conflicting mail priorities. But first, a bit of background.

ATUS was getting a lot of complaints from students that the current email system (sendmail, with SquirrelMail) was getting snowed under with spam. The open-source tools we used for filtering out spam were not nearly as effective as the very expensive software in front of the Faculty/Staff Exchange system. Or much more importantly, were vastly less effective than the experience Gmail and Hotmail give. Something had to change.

That choice was either to pay between $20K and $50K for an anti-spam system that actually worked, or outsource our email for free to either Google or Microsoft. $20K.... or free. The choice was dead simple. Long story short, we picked Microsoft's offering.

Then came the problem of managing the migration. That took its own time, as the Microsoft service wasn't quite ready for the .EDU regulatory environment. We ran into FERPA related problems that required us to get legal opinions from our own staff and the Registrar relating to what constitutes published information, which required us to design systems to accommodate that. Microsoft's stuff didn't make that easy. Since then, they've rolled out new controls that ease this. Plus, as the article mentioned, we had to engineer the migration process itself.

Now we're migrating users! But there was another curveball we didn't see, but should have. The server that student email was on has been WWU's smart-host for a very long time. It also had the previously mentioned crappy anti-spam. Being the smart-host, it was the server that all of our internal mail blasts (such as campus notifications of the type Virginia Tech taught us to be aware of) relayed through. These mail blasts are deemed critical, so this smart-host was put onto the OutlookLive safe-senders list.

Did I mention that we're forwarding all mail sent to the old .cc.wwu.edu address to the new students.wwu.edu address? The perceptive just figured it out. Once a student is migrated, the spam stream heading for their now old cc.wwu.edu address gets forwarded on to OutlookLive by way of a server that bypasses the spam checker. Some students are now dealing with hundreds of spam messages in their inbox a day.

The obvious fix is to take the old mail server off of the bypass list. This can't be done because right now critical emails are being sent via the old mail server that have to deliver. The next obvious fix, turn off forwarding for students that request it, won't work either since the ERP system has all the old cc.wwu.edu addresses hard-coded in right now and the forwards are how messages from said system get to the students.wwu.edu addresses. So we geeks are now trying to set up a brand new smart-host, and are in the process of finding all the stuff that was relaying through the old server and attempting to change settings to relay through the new smart-host.

Some of these settings require service restarts of critical systems, such as Blackboard, that we don't normally do during the middle of a quarter. Some are dead simple, such as changing a single .ini entry. Still others require our developers to compile new code with the new address built in, and publish the updated code to the production web servers.

Of course, the primary sysadmin for the old mail-server was called for Federal jury-duty last week and has been in Seattle all this time. I think he comes back Monday. His grep-fu is strong enough to tell us what all relays through the old server. I don't have a login on that server so I can't try it out myself.

Changing smart-hosts is a lot of work. Once we get the key systems working through the new smart-host (Exchange 2007, as it happens), we can tell Microsoft to de-list the old mail-server from the bypass list. This hopefully will cut down the spam flow to the students to only one or two a day at most. And it will allow us to do our own authorized spamming of students through a channel that doesn't include a spam checker. Valuable!

Labels: , ,


Friday, March 20, 2009

Storage that makes you think

Anandtech has a nice article up right now that compares SAS, SATA, and SSD drives in a database environment. Go read it. I'll wait.

While the bulk of the article is about how much the SSD drives blow the pants off of rotational magnetic media, the charts show how SAS performs versus SATA. As they said at the end of the article:
Our testing also shows that choosing the "cheaper but more SATA spindles" strategy only makes sense for applications that perform mostly sequential accesses. Once random access comes into play, you need two to three times more SATA drives - and there are limits to how far you can improve performance by adding spindles.
Which matches my experience. SATA is great for sequential loads, but is bottom of the pack when it comes to random I/O. For a real-world example, take this MSA1500cs we have. It has SATA drives in it.

If you have a single disk group with 14 1TB drives in it, this gives a theoretical maximum capacity of 12.7TB (that storage industry TB vs OS TB problem again). Since you can only have LUNs as large as 2TB due to the 32-bit signed integer problem, this would mean this disk group would have to be carved into 7 LUNs. So how do you go about getting maximum performance from this set up?
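(A quick aside before getting to that: the capacity math above, worked through as a back-of-the-envelope sketch. The only assumption is that the 2TB LUN ceiling is treated as 2 binary TB for the division.)

$driveCount = 14
$driveBytes = 1e12                              # a "1TB" drive, in vendor-decimal bytes
$groupTiB   = ($driveCount * $driveBytes) / [math]::Pow(2, 40)
$lunCeiling = 2                                 # the 2TB LUN limit
$lunCount   = [math]::Ceiling($groupTiB / $lunCeiling)

'{0:N1} TB raw as the OS counts it, carved into {1} LUNs of {2}TB or less' -f $groupTiB, $lunCount, $lunCeiling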

You'll have to configure a logical volume on your server such that each LUN appends to the logical volume in order, and then make sure your I/O writes (or reads) sequentially across the logical volume. Since all 7 LUNs are on the same physical disks, any out-of-order arrangement of LUN on that spanned logical volume would result in semi-random I/O and throughput would drop. Striping the logical volume just ensures that every other access requires a significant drive-arm move, and would seriously drop throughput. It is for this reason that HP doesn't recommend using SATA drives in 'online' applications.

Another thing in the article that piqued my interest is there on page 11. This is where they did a test of various data/log volume combinations between SAS and SSD. The conclusion they draw is interesting, but I want to talk about it:
Transactional logs are written in a sequential and synchronous manner. Since SAS disks are capable of delivering very respectable sequential data rates, it is not surprising that replacing the SAS "log disks" with SSDs does not boost performance at all.
This is true, to a point. If you have only one transaction log, this is very true. If you put multiple transaction logs on the same disk, though, SSD becomes the much better choice. They did not try this configuration. I would have liked to have seen a test like this one:
  • Three Data volumes running on SAS drives
  • One Log volume running on an SSD with all three database logs on it
I'm willing to bet that the performance of the above would match, if not exceed, running three separate log volumes running on SAS.

The most transactional database in my area is probably Exchange. If we were able to move the Logs to SSD's, we very possibly could improve performance of those databases significantly. I can't prove it, but I suspect we may have some performance issues in that database.

And finally, it does raise the question of file-system journals. If I were to go out and buy a high quality 16GB SSD for my work-rig, I could use that as an external journal for my SATA-based filesystems. As it is an SSD, running multiple journals on it should be no biggie. Plus, offloading the journal-writes should make the I/O on the SATA drives just a bit more sequential and should improve speeds. But would it even be perceptible? I just don't know.

Labels: , ,


Thursday, March 19, 2009

Evolution and Exchange 2007

The question came up today, so I googled around. Turns out the MAPI plugin for Evolution is out there and can be installed on most OpenSUSE builds.

http://download.opensuse.org/repositories/GNOME://Evolution://mapi/

However, it required Samba 4. Presumably for the RPC interface. So if you're not willing to upgrade your Samba, then you can't use it. Still, nice to see it out there!

Labels: ,


Monday, March 16, 2009

Death of cc.wwu.edu

Part of the process of moving the students over to Exchange Labs is decommissioning the cc.wwu.edu domain for email. Students have been there for a loooong time, and once upon a time faculty/staff mail was there as well. We've since moved to wwu.edu for our fac/staff domain.

Next week we're turning off cc.wwu.edu for fac/staff. The students still over there will be moved slowly over to the hosted solution. The Fac/Staff users will be moved to Exchange, period.

This has created some heated feelings as there are professors who've published books and have "@cc.wwu.edu" printed in the books. I'm not sure how we're handling that, but... that's not my email system. Email addresses do tend to get stale after a while, and that's just a fact of the internet.

However, one of the guys in the office here was one of the very first people to get an email address at cc.wwu.edu way back in the dark and misty reaches of a more trusting internet. I don't know how long he had that address, but it very well could have been over 20 years. He's letting it go with a tear in his eye, but not a big one. He's one of the unlucky schmucks with his first name as his username, and it's in every. single. solitary. mail-list known to God and man. His @cc.wwu.edu account has been nothing but a spam trap for years now.

Labels: ,


Tuesday, March 10, 2009

But what about GroupWise

Today I picked up my dead tree version of NetworkWorld, and saw an item on the cover:

Looking to exchange Exchange?
Joel Snyder tested six alternatives to Microsoft's Exchange 2007. Our findings: the Exchange alternatives are adequate for midsized networks, but Exchange offers the most comprehensive set of features and management hooks for networks of all sizes. Page 22
(online version)

GroupWise was NOT in this test. This surprised me greatly, as the Big Three mailers have always been Exchange, Notes, and GroupWise. Notes was also left out of this test. The online version already has a few comments regarding GroupWise, and Joel Snyder replied with this:
By Joel Snyder on Tue, 03/10/2009 - 10:09am.

Sorry, Groupwise fans, but Novell just didn't show up on our radar in the mid-size email business.

When you're looking at this space, Microsoft and Lotus together own 96% of the on-site mail service in businesses (the numbers that IDC, Ferris, and Radicati offer all vary a lot, but no one seems to give the non-MS/non-Lotus camp more than 10% total for everyone). Slicing up the remaining piece is a pretty difficult task, with lots of little players. While Groupwise used to be a major mover-and-shaker, there is no obvious "#3" in this business anymore.

It's clear that we've got a pile of Groupwise fans (why am I remembering the Windows vs. Netware war of about 10 years ago???) here, so maybe we should take a quick look at Groupwise and see how it stacks up.

Considering Novell has spent quite a lot of effort trying to convince people that they're the number three behind the MS/IBM duopoly, this is somewhat concerning. I have no idea what the real market-share numbers are for the mid-size enterprise groupware market.

Labels: ,


Saturday, January 31, 2009

An open-source project I can really get behind

Link grabbed on Slashdot:

http://www.computerworld.com.au/article/274883/openchange_kde_bring_exchange_compatibility_linux


A project is now in the works to bring true Exchange compatibility to a non-Windows desktop. The Exchange Connector for Evolution and others of that ilk have done so using the WebDAV or RPC-over-HTTP interface. This project is different in that it is trying to emulate how Outlook interacts with the Exchange system; using RPC over SMB among other protocols.

This would bring the full power of the Exchange groupware environment to anything that can run KDE, from the sounds of it. And that's really nifty. It uses Samba 4 code to a great extent, which makes sense as there are a lot of neat new things in that codebase. This would be a big step for Linux in the enterprise! I also saw mention of Novell input to the project, which is nice to see.

Labels: ,


Tuesday, January 20, 2009

Inept phishers

Over the weekend (Saturday, in fact) we had a phish attempted against us. As this is still a relatively new experience for us, it got the notice of the higher-ups. When I got in, I got the task of grepping logs to see if anyone replied to it.

While doing that I noticed something about the email. It had no clickable links in it, and the From: address (there was no Reply-To: address) was @wwu.edu, and was an illegal address.

In short, anyone replying to it would get a bounce message, and there was no way for the phishers to get the data they wanted.

More broadly, we've noticed a decided increase in phishing attempts against .edu looking for username/password combinations. The phishers then use that information to log in to webmail portals and send spam messages the hard way: copy-paste into new emails. This has the added benefit (for them) of coming from our legitimate mailers, from a legitimate address, and thus bypasses spam reputation checks, SPF records, and other blacklists. It doesn't have the volume of botnet spam, but it's much more likely to get past spam-checkers. At last check, about 50% of incoming mail connections to @wwu.edu are terminated due to IP-reputation failures.

Labels: ,


Thursday, January 08, 2009

Learning new tricks

This morning we got a standard form for an account status change, moving from Employee to Student. We do these all the time, though usually it's the other direction. We processed it like we always do, and moved it on.

Three hours pass, and the user goes to the ATUS helpdesk. They can't get into email, and would like to know why.

The reason is because we remove the Exchange mailbox when moving to Student. The Student email system is different. So far, perfectly normal.

It turns out that what they really wanted was an account on the shell-server, which they didn't already have. They were a staff member taking some classes, and needed the account. Not an ex-employee now taking classes. Someone, somewhere, filled out the wrong form and sent it along to us. So we needed to put their accounts back where they were.

We moved the account back to the Employee side of the virtual fence, but couldn't get the mailbox back. The mailbox wasn't showing in the Disconnected Mailboxes list like it should. Under Exchange 2003 the fix for that was pretty simple, right-click on the Mail-store and select, "Run Clean up process." That would force any recently deleted mailboxes to show up in the disconnected list.

Thing is, that isn't there in Exchange 2007. I went up to the mailbox stores in the Exchange Management Console, and there was no option to force the clean-up process to run. I suspected this was one of those things consigned to the command-line thanks to Microsoft's 80/20 rule for Exchange management utilities (the 80% of the things you do every day and don't think about are in the GUI, the 20% you do once a month and need RIGHT NOW has to be googled and run through the PowerShell cmd).

I was right. It took some googling, but I found it.

Clean-MailboxDatabase -Identity "exchmailboxsrv1\First Storage Group\TwinklyVampireDatabase"
(Not real names)

That will force the clean-up process. After running this, the missing mailbox showed up, and we were able to reconnect it to their user object with no loss of mail.
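For anyone following along, the reconnect itself was probably something like the sketch below: list what the store now considers disconnected, then re-attach the right one. The database name is as fake as the one above, and the identity/user values are placeholders.

$db = "exchmailboxsrv1\First Storage Group\TwinklyVampireDatabase"

# list mailboxes the store now considers disconnected
Get-MailboxStatistics -Database $db |
    Where-Object { $_.DisconnectDate -ne $null } |
    Format-Table DisplayName, DisconnectDate, MailboxGuid

# re-attach the orphaned mailbox to its user object (placeholder names)
Connect-Mailbox -Database $db -Identity "Twinkly Vampire" -User "UNIV\twinklyvampire"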

Labels:


Friday, October 17, 2008

Cool things with powershell

Now that we're on Exchange 2007, we've had to figure out PowerShell. When I went to the Exch2007 class, it was pretty clear that Microsoft had redesigned their GUI tools under the 80/20 rule. 80 percent of the functionality that'll get used on a daily basis is in the GUI, and the 20 that gets used rarely or only by automation is on the command-line.

Which means that the once a year you go do something oddball you're hitting google to try and figure out the ruddy command-line options.

Any way, I digress. I've been writing a pair of powershell scripts to do some internal tasks (one of which is to create Resources the way we want them created), and have run into a few snags. The first snag is that a script that looks like this:
new-distributiongroup -Name $groupName -Type security -Yadda True
Add-ADpermission -Identity $resourceName -user $groupname -ExtendedRights "Send-as"


Won't work. That's because "new-distributiongroup" returns before the new distribution group can be acted upon by PowerShell. So I had to introduce a loop to make sure it was gettable before I tried setting the permission. This is what vexed me. The loop I came up with is kludgy, but it does what I need it to.
$groupExists="False"
new-distributiongroup -Name $groupName -Type security -Yadda True
do {
sleep -seconds 1
$groupExists = get-ADpermission -Identity $groupName -blah blah |fw Isvalid
} while (!$groupExists)
Add-ADpermission -Identity $resourceName -user $groupname -ExtendedRights

While it works, when the script runs that loop creates a sea of StdErr output I don't care to know about. I'm waiting until it stops returning an error. Sometimes it takes only two seconds for the group to exist, other times it can take as long as 10. I still need to trap for it.

Today I finally figured out how to quash stderr so I don't see it. A very simple modification. It's in the test. Instead of "|fw IsValid", I use "2>1 |fw IsValid". This quashes StdErr, and still populates $groupExists. The script run looks a lot cleaner too.
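An alternative that avoids stream redirection entirely (a sketch, not what the script actually uses): let the cmdlet suppress its own errors and loop until the group can actually be retrieved.

do {
    # wait for the new group to become retrievable; SilentlyContinue keeps the
    # "couldn't find the group" errors off the console while we wait
    Start-Sleep -Seconds 1
    $grp = Get-DistributionGroup -Identity $groupName -ErrorAction SilentlyContinue
} while ($grp -eq $null)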

The other thing I learned the hard way is that if you're doing multiple sets of mailbox or AD permissions, doing them too fast can cause the updates to collide. So I've taken to putting the above loop in to verify the previous permission mod has taken effect before I throw another one in. Annoying, but can be worked around.

Labels: ,


Sunday, September 14, 2008

EVA6100 upgrade a success

Friday night four HP techs arrived to put together the EVA6100 from a pile of parts and the existing EVA3000. It took them 5 hours to get it to the point where we could power on and see if all of our data was still there (it was, yay), and a few more hours after that, on our behalf, to put everything back together.

There was only one major hitch for the night, which meant I got to bed around 6am Saturday morning instead of 4am.

For EVA, and probably all storage systems, you present hosts to them and selectively present LUNs to those hosts. These host-settings need to have an OS configured for them, since each operating system has its own quirks for how it likes to see its storage. While the EVA6100 has a setting for 'vmware', the EVA3000 did not. Therefore, we had to use a 'custom' OS setting and a 16 digit hex string we copied off of some HP knowledge-base article. When we migrated to the EVA6100 it kept these custom settings.

Which, it would seem, don't work for the EVA6100. It caused ESX to whine in such a way that no VMs would load. It got very worrying for a while there, but thanks to an article on vmware's support site and some intuition we got it all back without data loss. I'll probably post what happened and what we did to fix it in another blog post.

The only service that didn't come up right was secure IMAP for Exchange. I don't know why it decided to not load. My only theory is that our startup sequence wasn't right. Rebooting the HubCA servers got it back.

Labels: , , , ,


Wednesday, September 10, 2008

That darned budget

This is where I whine about not having enough money.

It has been a common complaint amongst my co-workers that WWU wants enterprise level service for a SOHO budget. Especially for the Win/Novell environments. Our Solaris stuff is tied in closely to our ERP product, SCT Banner, and that gets big budget every 5 years to replace. We really need the same kind of thing for the Win/Novell side of the house, such as this disk-array replacement project we're doing right now.

The new EVAs are being paid for by Student Tech Fee, and not out of a general budget request. This is not how these devices should be funded, since the scope of this array is much wider than just student-related features. Unfortunately, STF is the only way we could get them funded, and we desperately need the new arrays. Without the new arrays, student service would be significantly impacted over the next fiscal year.

The problem is that the EVA3000 contains between 40-45% directly student-related storage. The other 55-60% is Fac/Staff storage. And yet, the EVA3000 was paid for by STF funds in 2003. Huh.

The summer of 2007 saw a Banner Upgrade Project, when the servers that support SCT Banner were upgraded. This was a quarter million dollar project and it happens every 5 years. They also got a disk-array upgrade to a pair of StorageTek (SUN, remember) arrays, DR replicated between our building and the DR site in Bond Hall. I believe they're using Solaris-level replication rather than Array-level replication.

The disk-array upgrade we're doing now got through the President's office just before the boom went down on big expensive purchases. It languished in the Purchasing department due to summer-vacation related under-staffing. I hate to think how late it would have gone had it been subjected to the added paperwork we now have to go through for any purchase over $1000. Under no circumstances could we have done it before Fall quarter. Which would have been bad, since we were too short to deal with the expected growth of storage for Fall quarter.

Now that we're going deep into the land of VMWare ESX, centralized storage-arrays are line of business. Without the STF funded arrays, we'd be stuck with "Departmental" and "Entry-level" arrays such as the much maligned MSA1500, or building our own iSCSI SAN from component parts (a DL385, with 2x 4-channel SmartArray controller cards, 8x MSA70 drive enclosures, running NetWare or Linux as an iSCSI target, with bonded GigE ports for throughput). Which would blow chunks. As it is, we're still stuck using SATA drives for certain 'online' uses, such as a pair of volumes on our NetWare cluster that are low usage but big consumers of space. Such systems are not designed for the workloads we'd have to subject them to, and are very poor performers when doing things like LUN expansions.

The EVA is exactly what we need to do what we're already doing for high-availability computing, yet is always treated as an exceptional budget request when it comes time to do anything big with it. Since these things are hella expensive, the budgetary powers-that-be balk at approving them and like to defer them for a year or two. We asked for a replacement EVA in time for last year's academic year, but the general-budget request got denied. For this year we went, IIRC, both with general-fund and STF proposals. The general fund got denied, but STF approved it. This needs to change.

By October, every person between me and Governor Gregoire will be new. My boss is retiring in October. My grandboss was replaced last year, my great-grandboss also has been replaced in the last year, and the University President stepped down on September 1st. Perhaps the new people will have a broader perspective on things and might permit the budget priorities to be realigned to the point that our disk-arrays are classified as the critical line-of-business investments they are.

Labels: , , , , , , , , , , , ,


Monday, August 25, 2008

Email sizes

The question has been raised internally that perhaps we need to reassess what we've set for email message-size limits. When we set our current limit, we set it to the apparent de facto standard for mail size limits, which is about 10 meg.

This, perhaps, is not what it should be for an institution of higher-ed where research is performed. We have certain researchers on campus who routinely play with datasets larger than 10MB, sometimes significantly larger. These researchers would like to electronically distribute those datasets to other researchers, and the easiest means of doing that by far is email. The primary reason we have mail-servers serving (for example) the chem.wwu.edu domain is so these folk can have much larger message size limits. Otherwise, these folk would have their primary email in Exchange.

The routine answer I've heard for handling really large file sizes is to use, "alternate means," to send the file. We don't have a FTP server for staff use, since we have a policy that forbids the use of unauthenticated protocols for transmitting passwords and things. We could do something like Novell does with ftp.novell.com/incoming and create a drop-box that anyone with a WWU account can read, but that's sort of a blunt-force solution and by definition half of a half-duplex method. Our researchers would like a full duplex method, and email represents that.

So what are you all using for email size limits? Do you have any 'out of band' methods (other than snail mail) for handling larger data sizes?

Labels: , ,


Friday, August 01, 2008

Outsourcing email

Those of you who've been paying attention know that WWU is in the process of trying to outsource our student email. This has been going on for close to two years now, and we're now in the process of trying to get Exchange Labs working. So when I saw this article on Computerworld about this topic I had to read it. The article is more about the Google service, but includes some parts about the other offerings out there.

We went with Exchange Labs in part because we already had everyone in Active Directory, and the migration should go a lot easier that way. Or so we thought. We've been having major issues getting things talking. In theory once the communications channel is set up it should just work. We shall see.

One thing in the article that caught my attention was this:
Drexel University earlier this year launched a pilot project in which it will give some of its 20,000 students a choice of four e-mail systems: its own Exchange-based enterprise e-mail, Gmail, Microsoft Windows Live Hotmail and Microsoft's Exchange Labs, which is a pilot program for online, Exchange-based hosted e-mail that was launched about six months ago and is based on what will be the Exchange 14.0 release.
If we can't get it really working for us, something like this might be a good idea until such time as we can get true automatic provisioning done. It'll be a manual process (this is me wincing), but could get the product in the door while we work out bugs. Eh, who knows if this is where we'll go.

Labels:


Tuesday, July 15, 2008

Your spam-checker ate my email

This is a question I get a fair amount. This is understandable, as the spam-checker is the software whose entire job is to eat email. So naturally that's the first place people think to check when mail gets sent but not received.

I also hate dealing with this kind of question. The spam appliances we use have a search feature, which is critical for figuring out if some email is being eaten incorrectly. Unfortunately, the search feature is devilishly slow. I swear, it is grepping hundreds of files tens of megabytes in size and post-processing the output. It generally takes 5 minutes to answer a question every time I hit the 'search' button. And just like google, it can take a few tries to phrase my search terms correctly to get what I want.

Right now we have a complaint that all email sent to us by a certain domain never arrives. This is false, as on the day in question we received and passed on to Exchange about 20 messages from the domain. As it happens the Edge server is having a problem with it, and that needs attention. But I had to do about 30 minutes of waiting for search results to really determine this.

Labels: ,


Monday, July 14, 2008

An exchange 2007 problem

While I was on vacation we had a few more instances of email going into a black hole. This is not good. I had suspected this was happening, but proof accumulated while I was broiling in the mid-west.

After doing a lot of message tracing in Exch2007, I noticed one trend. When an email to a group hits the Hub server, it attempts to dereference the group into a list of mailboxes to deliver to. It uses Global Catalogs for this function. When the GC used was one in our empty root rather than the child domain that everything lives in, this one group didn't return any people. The tracking code was, "dereferenced, 0 recipients". Which is a fail-by-success.

After a LOT of digging, I threw an LDAP browser at the GC's. What I noticed is that the GC entry for this one group was subtly different on the empty-root GC and the child-domain GC. Specifically, the object had no "member" attributes.

It turns out the problem was that the group in question was set to a Global group, rather than a Universal group. Ahah! Global groups apparently don't publish Member info globally, just in the domain itself. Universal groups are just that, Universal, and publish enterprise wide. Right. Gotcha.

Exch2003 did not manifest this, as it stayed in the domain pretty solidly. I don't know how many of our groups are still Global groups, but this one is going to take some clean-up to fix.
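If you need to hunt these down, a quick sketch using plain ADSI (no Exchange snap-in needed). The assumptions: group scope lives in the groupType bit field (0x2 = global, 0x8 = universal), and "mail-enabled" here just means the group has a mail attribute.

# find mail-enabled groups that are still Global scope (groupType bit 0x2 set);
# 1.2.840.113556.1.4.803 is the LDAP bitwise-AND matching rule
$searcher = [adsisearcher]'(&(objectCategory=group)(mail=*)(groupType:1.2.840.113556.1.4.803:=2))'
$searcher.PageSize = 500
$searcher.FindAll() | ForEach-Object { $_.Properties['name'][0] }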

Labels: , ,


Monday, May 05, 2008

Back-scatter spam

There was a recent slashdot post on this. We've had a fair amount of this sort of spam. And the victims are at pretty high levels of our organization, too. Last week the person who is responsible for us even having a Blackberry Enterprise Server asked us to figure out a way to prevent these emails from being forwarded to their blackberry. When a spam campaign is rolling, that person can get a bounce-message every 5-15 minutes for up to 8 hours, into the wee hours of the night. And that's just the mails that get PAST our anti-spam appliance. We set up some forwarding filters, but we haven't heard back about how effective they are.

This is a hard thing to guard against. You can't use the reputation of the sender IP address, since they're all legitimate mailers being abused by the spam campaign and are returning delivery service notices per spec. So the spam filtering has to be by content, which is a bit less effective. In one case, of the 950-odd DSN's we received for a specific person during a specific spam campaign, only 15 made it to the inbox. But that 15 was enough above what they normally saw (about 3 a day) that they complained.

Backscatter is a problem. However, our affected users have so far been sophisticated enough users of email to realize that this was more likely forgery than something wrong with their computer. So, we haven't been asked to "track down those responsible." This is a relief for us, as we've been asked that in the past when forged spams have come to the attention of higher level executives.

If it becomes a more wide-spread problem, we will be told to Do Something by the powers that be. Unfortunately, there isn't a lot that can be done. Blocking these sorts of DSNs is doable, but that's an expensive thing to manage in terms of people time. In 6-12 months we can expect the big anti-spam vendors to include options to just block DSN's uniformly, but until that time comes (and we have the budget for the added expenses) we'd have to do it through dumb keyword filters. Not a good solution. And it would also cause legitimate bounce messages to fail to arrive.

Labels: , ,


Friday, April 11, 2008

On email, what comes in it

A friend recently posted the following:
80-90% of ALL email is directory harvesting attacks. 60-70% of the rest is spam or phishing. 1-5% of email is legit. Really makes you think about the invisible hand of email security, doesn't it?
Those of us on the front lines of email security (which isn't quite me, I'm more of a field commander than a front line researcher) suspected as much. And yes, most people, nay, the vast majority, don't realize exactly what the signal-to-noise ratio is for email. Or even suspect the magnitude. I suspect that the statistic of, "80% of email is crap," is well known, but I don't think people even realize that the number is closer to, "95% of email is crap."

Looking at statistics on the mail filter in front of Exchange, it looks like 5.9% of incoming messages for the last 7 days are clean. That is a LOT of messages getting dropped on the floor. This comes to just shy of 40,000 legitimate mail messages a day. For comparison, the number of mail messages coming in from Titian (the student email system, and unpublished backup MTA) has a 'clean' rate of 42.5%, or 2800ish legit messages a day.

People expect their email to be legitimate. Directory-harvesting attacks do constitute the majority of discrete emails; these are the messages you receive that have weird subjects, come from people you don't know, but don't have anything in the body. They're looking to see which addresses result in 'no person by that name here' messages and which ones seemingly deliver. This is also why people unfortunate enough to have usernames or emails like "fred@" or "cindy@" have the worst spam problems in any organization.

As I've mentioned many times, we're actively considering migrating student email to one of the free email services offered by Google or Microsoft. This is because historically student email has had a budget of "free", and our current strategy is not working. The way it is not working is because the email filters aren't robust enough to meet expectation. Couple that with the expectation of effectively unlimited mail quota (thank you Google) and student email is no longer a "free" service. We can either spend $30,000 or more on an effective commercial anti-spam product, or we can give our email to the free services in exchange for valuable demographic data.

It's very hard to argue with economics like that.

One thing that you haven't seen yet in this article is viruses. In the last 7 days, our border email filter saw that 0.108% of incoming messages contain viruses. This is a weensy bit misleading, since the filter will drop connections with bad reputations before even accepting mail, and that may very well cut down the number of reported viruses. But the fact remains that viruses in email are not the threat they once were. All the action these days is on subverted and outright evil web-sites, and social engineering (a form of virus of the mind).

This is another example of how expectation and reality differ. After years of being told, and in many cases living through the after-effects of it, people know that viruses come in email. The fact that the threat is so much more based on social engineering hasn't penetrated as far, so products aimed at the consumer call themselves anti-virus when in fact most of the engineering in them was pointed at spam filtering.

Anti-virus for email is ubiquitous enough these days that it is clear that the malware authors out there don't bother with email vectors for self-propagating software any more. That's not where the money is. The threat had moved on from cleverly disguised .exe files to cunningly wrought (in their minds) emails enticing the gullible to hit a web site that will infest them through the browser. These are the emails that border filters try to keep out, and it is a fundamentally harder problem than .exe files were.

The big commercial vendors get the success rate they do for email cleaning in part because they deploy large networks of sensors all across the internet. Each device or software-install a customer turns on can potentially be a sensor. The sensors report back to the mother database, and proprietary and patented methods are used to distill out anti-spam recipes/definitions/modules for publishing to subscribed devices and software. There is nothing saying that an open-source product can't do this, but the mother-database is a big cost that someone has to pay for and is a very key part of this spam fighting strategy. Bayesian filtering only goes so far.

And yet, people expect email to just be clean. Especially at work. That is a heavy expectation to meet.

Labels: , , ,


Tuesday, February 05, 2008

Exchange vs Groupwise

A post on CoolSolutions today quoted another blog about why GroupWise makes sense over Exchange. This is some of the same stuff I've seen over the years. A faaaaavorite theme is to point to mass mailer worms taking out Exchange, leaving everyone else up and running.

On 1/7/07 I wrote about just this sort of thing. A quote:
The days of viruses and other crud scaring people off of Exchange are long gone. Now the fight has to be taken up on, unfortunately, features and mind-share. In the absence of a scare like Melissa provided, migrations from Exchange to something else will be driven by migration events. Microsoft may be providing just that threshold in the future, as they've said that they will be integrating Exchange in with SharePoint to create the End All Be All of groupware applications. Companies that aren't comfortable with that, or haven't deployed SharePoint for whatever reason may see that as an excuse to jump the Microsoft ship for something else. Unfortunately, it'll be executives looking for an excuse rather than executives seeing much better features in, say, GroupWise.
Which, 13 months later, is still mostly true. Mass mailer worms are no longer the scourge they used to be, and are well handled by commercial AV packages. Mass mailer worms even look different these days, preferring to infest and send mail independent of the mail client directly to the internet, thus neatly bypassing the poor meltable Exchange servers. The fear of mass mailers is FUD leftovers from years ago, not a current threat or reason to get off of the dominant platform.

The other thing I mentioned 13 months ago was 'migration events'. We're coming up on one, in the form of Exchange 2007. As the other blog mentioned, the hardware requirements for Exchange 2007 are a bit higher than for 2003. Speaking as an administrator with a sizable Exchange deployment, the requirement of 64-bit OS is something of a non issue since I'd be using one anyway. For a small office with only 200 users, though, forking out for Windows Server 2003 64 would be expensive.

Another point mentioned is that GroupWise can run on anything, and Exchange (especially Exch2007) won't. Again, as a mail admin for a largish Exchange system that doesn't matter to me since I'll be using newer servers to keep up with the load anyway. Again, for small offices who upgrade their servers whenever the old one completely bakes off, this is a bigger concern.

The other migration point is the Public Folders that Microsoft dropped in Exchange 2007. Or rather, made a lot harder to manage. Their users roasted their account managers hotly enough that Exchange 2007 SP1 reintroduces Public Folder management. We make some use of Public Folders, but I can see an office that makes extensive use of them looking at Exchange 2007 as not a simple plonk-in upgrade that Exchange 2003 was from Exch 2000. GroupWise doesn't have a similar concept to Public Folders (Resources might be, but only sort of), so this doesn't help GW much, but is the sort of event that makes an organization really think about what they're moving to.

As for productivity, we haven't had problems. Our Exchange has about 4300 accounts in it right now. This is supported by three administrators and a lot of automation. That said, during summer vacation season when I'm the only one of us three here I can go whole days without touching anything Exchange. It just works. This is a claim I frequently hear from GroupWise shops, so... Microsoft can do it too eh?

Another thing on CoolSolutions lately has been a few pieces on marketing GroupWise. In short, it makes more sense for Novell to pitch GroupWise as the #2 player than it is to pitch it as fundamentally better than Exchange. This has some good points. There are some markets that GroupWise is a better fit than Exchange, and the small, infrequently upgraded office is one of them. As are organizations looking really closely at Linux. GroupWise can very well be the #1 mail product in the Linux space, so long as Novell can convince people that paying for email services in Linux is a good idea.

I close out my previous post 13 months ago with a paragraph that still stands:
So, Exchange will be with us a long time. What'll start making the throne wobble is if non-Windows desktops start showing up in great numbers in the workplace. THEN we could see some non-MS groupware application threaten Exchange the way that Mac (and Linux) are threatening the desktop.

Labels: , ,


Thursday, March 08, 2007

Spam stats!

Yummy stats! These are from the anti-spam appliance in front of Exchange, for the last 24 hours.



Processed: 168,802
Spam: 85,166 (50%)
Suspected Spam: 544 (<1%)
Attacks: 4,837 (3%)
Blocked: 0 (0%)
Allowed: 0 (0%)
Viruses: 43 (<1%)
Suspected Virus: 31 (<1%)
Worms: 3,730 (2%)
Unscannable: 1,772 (1%)

And now, definitions:
Processed: The number of messages processed. This is unexploded, so that mail sent to 42 people still counts as just 1.
Spam: The number of Spam messages with a confidence of 90% or higher.
Suspected Spam: The number of Spam messages with a user defined confidence of (in this case) 70% or higher.
Attacks: An aggregate statistic, but in this case they're all Directory Harvest Attack messages. A directory-harvest-attack message is one of those messages sent to 20 people at a site with generated names, in an effort to see which addresses don't generate a bounce message.
Allowed/Blocked: We don't use this feature.
Viruses: Viruses that are not mass mailers.
Suspected Viruses: Heuristically detected viruses. Good for picking up permutations of common viri.
Worms: Viruses that are mass mailers.
Unscannable: Messages that are unscannable for whatever reason.

Like my boss, you may be looking at that 50% number and wonder what happened. It is commonly reported in the press that, "90% of all email is now spam," so where are the other 40% going? I looked into where the press were getting their numbers, and most of them get them from MessageLabs. They report their numbers on the Threat Watch. Today, the Spam rate is, "48.43%", so the 50% we're seeing is well within reason. Looking at their historical data the spam rate waxes and wanes on a day to day and week to week basis.

Labels: ,


Saturday, March 03, 2007

Outsourcing student e-mail

I saw on Slashdot today a piece about a University migrating their student email to Windows Live.

There have been high-level discussions about doing the same here at WWU, only we're still trying to figure out if Windows Live or a Google program makes the most sense. No decision has been made, though Windows Live would integrate much better into our environment due to the presence of student accounts in Active Directory. The Google offering has better, 'hearts and mind,' support among us techs, but the Microsoft offering would require less work from us techs to get running.

Last I heard, neither offering supported IMAP. GMail doesn't support IMAP, so I doubt any Google offer would. No idea if Windows Live (general access) even does.

There are a number of reasons why outsourcing email is attractive, and right there at the top is SPAM. We can't afford any commercial product to do student anti-spam, as they all charge per-head and even $2/head gets pretty spendy when you have to cover 18,000 student accounts. Currently, student e-mail anti-SPAM is all open-source and I still hear that the SPAM problem is pretty bad. The most senior of our unix admins spends about half his day dealing with nothing but SPAM related problems, so outsourcing would save us that expense as well.

The number two reason is price. Both the Google offering and Microsoft offering are free. Both have promised that they won't put advertising in their web portals for active students, but the usage data may be used to tailor advertising programs targeted (elsewhere) at the high-profit college-age population. Both offerings permit the student to maintain the address after graduation, though in that case they would get advertising in their web portals.

There are a number of problems that outsourcing introduces.
  • Identity synchronization. MS is easiest; Google would require some custom code (a rough sketch of that sort of glue follows this list).
  • Password synchronization. Do we even want to do it? If so, how? If not, why not?
  • Account enable/disable. How do we deactivate accounts?
  • Single sign-on. Is it possible to integrate whichever we use into CAS? Can we integrate it into the WWU Portal?
  • Web interface skinning. Will they permit skinning with the WWU style, or will they force their own?
The answers to all of the above are not in yet, which is why a decision hasn't been made on which way we're going. But the decision to outsource at all is all but made at this point.
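
To make the identity-synchronization item a little more concrete, here is a minimal sketch of the sort of custom glue it implies: pull the student accounts out of Active Directory over LDAP and spit out a flat provisioning file for the hosted provider to import. The domain controller name, service account, OU, and CSV layout are all invented, and neither vendor has told us what their provisioning interface actually wants, so treat this as a shape rather than a recipe. It assumes the third-party ldap3 library.

    import csv
    from ldap3 import Server, Connection, ALL   # third-party library, assumed

    # Hypothetical DC, service account, and OU -- placeholders only.
    server = Server("dc1.univ.dir.wwu.edu", get_info=ALL)
    conn = Connection(server, user="WWU\\svc-provision",
                      password="not-the-real-one", auto_bind=True)

    conn.search(
        search_base="OU=Students,DC=univ,DC=dir,DC=wwu,DC=edu",
        search_filter="(&(objectClass=user)(mail=*))",
        attributes=["sAMAccountName", "mail", "displayName"],
    )

    # Whatever flat file the hosted provider wants to import; CSV is a guess.
    with open("student-provision.csv", "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["username", "address", "display_name"])
        for entry in conn.entries:
            writer.writerow([entry.sAMAccountName.value,
                             entry.mail.value,
                             entry.displayName.value])

Password synchronization and account deactivation would need the same sort of treatment, which is a big part of why those questions are still open.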

Update 1 10/13/2007
Update 2 8/1/2008

Labels: ,


Friday, January 12, 2007

MORE SPAM!

On days like this, I really think I should pick up this T-Shirt. I've been tempted by it for a while. Just sayin'.

That said, now that the thingy has been in place for more than 24 hours I have some interesting data to play with. The appliance has handled 'only' 230,000 emails in the 24-hour period defined as 9am to 9am today. That's about a fifth of previous estimates, which makes me wonder what we were counting before.

What's also interesting is how few viruses have been detected. It looks like the era of the mass-mailer worm is largely over. Of those 230K-odd mails, only 240 contained viruses (roughly one message in a thousand). Most of them were mass-mailers, of course, but this is not the way things were even three years ago. This appliance is an anti-spam appliance that also does anti-virus, not the other way around like some other appliances I can think of.

Labels: ,


Monday, January 01, 2007

Dethroning Exchange

A lot of talk has gone into how to overthrow the Windows lock on the Desktop market. The server market is more fluid, but Windows STILL dominates that space as well. Linux and OSX are both making real strides there, though Apple's ad campaign focusing on "Windows is for Work, Mac is for Fun" doesn't exactly improve Mac adoption in the workplace.

There aren't any clear threats to Exchange. The other two big players in the arena, GroupWise and Lotus Notes, have both been there a long time. Both benefited from what I call 'the Melissa years defections.' I know for a fact that OldJob stayed with GroupWise precisely because we were still up when Melissa and company nuked most of the Exchange shops in the area.

Melissa introduced the era of the mass-mail worm. The clean up efforts from those worms drove billions of dollars of investment into Exchange recovery tools, Exchange anti-virus tools, and other related technologies. Thanks to that burst of innovation, this is a largely solved problem (given a sufficient investment in 3rd party defensive tools). WWU hasn't had a mass-mail-worm-related Exchange outage since I started here three years ago.

What's also helping is that the mass-mail worm is slowly dying by the side of the road in favor of much more lucrative mails. The current SPAM problem is turning into a sort of global denial-of-service attack against SMTP in general, not just Exchange. Trojan emails that contain images that exploit Windows image handling, not just Outlook's, affect even Pegasus users.

The best defence against the current crudware infecting e-mail these days is to use a non-Windows desktop. If that's not in the cards (it isn't for WWU), then the field opens up much more dramatically. Most larger shops are looking seriously into anti-spam appliances as a load-shedding technology to help their mail-transfer-agent (whatever it is) keep up with legitimate load. Some minority players in the MTA market can only use appliances, and don't have the option of hooked-in anti-spam software.

The days of viruses and other crud scaring people off of Exchange are long gone. Now the fight has to be taken up on, unfortunately, features and mind-share. In the absence of a scare like Melissa provided, migrations from Exchange to something else will be driven by migration events. Microsoft may be providing just such a threshold in the future, as they've said they will be integrating Exchange with SharePoint to create the End All Be All of groupware applications. Companies that aren't comfortable with that, or haven't deployed SharePoint for whatever reason, may see that as an excuse to jump the Microsoft ship for something else. Unfortunately, it'll be executives looking for an excuse rather than executives seeing much better features in, say, GroupWise.

Exchange isn't as dominant as Windows-on-Desktop is, but its market-share isn't exactly declining the way Windows desktop ownership is (really! It is declining! Minuscule amounts, but it is there!). New deployments of Notes or GroupWise, which are a different thing from migrations, are due largely to geeks or management familiar with either technology requesting it specifically. The default is still Exchange when it comes to a big-boy groupware application. That'll take real time to change.

So, Exchange will be with us a long time. What'll start making the throne wobble is if non-Windows desktops start showing up in great numbers in the workplace. THEN we could see some non-MS groupware application threaten Exchange the way that Mac (and Linux) are threatening the desktop.

Labels: ,


Thursday, December 21, 2006

2GB Exchange mailboxes? Owie.

http://slashdot.org/article.pl?sid=06/12/21/1655243

MS Fights GMail with 2GB Exchange Mailboxes

Yeesh. OldJob was on GroupWise, and we didn't have mail quotas in place. The largest mailbox I saw (not including archives) was about 900MB. These days that'd probably translate to a 2.5GB mailbox. So yeah, they can get that big.

When I started here the standard Exchange mailbox settings were set to start complaining when the 30MB line was crossed. We've upped it to 46MB since then. We manage our large users by having a higher tier quota group with much higher limits. That group is currently set to start warning at 200MB. Our largest mailbox right now is 233MB.

The problem with mailboxes that large is, of course, backing them all up. The article goes on to say that Exchange 2007 will have features that help mitigate that. What I suspect that means is replication to another site, rather than the mail-archive features some folks use backup/recovery for.

Setting the max quota to 2GB will result in a LOT more people using email as a filing cabinet. Right now the total size of our Exchange system is around 310GB, which is a direct result of those mail quotas I mentioned above. Additionally, we're backing up around 100GB of .PST files on the Novell cluster; this of course does not include the PST files located on PCs. Taking the brakes off the mail quotas would expand our mail store significantly faster than it's expanding now. Those folk who legitimately deal with huge files would be less inclined to delete redundant copies of Monster Attachments.

One of the more annoying problems with just taking the brakes off is how long it takes to sanity-check a bad mail database. The last time we did a round of that, the data files were in the 28-30GB range, and it took about eight hours per mail-store to clean the database files. Exchange could handle stores that size no problem, but the repair meant an extensive downtime. Two servers and four large mail-stores meant that once we started the repair process it was a minimum of 16 hours before everything was back up.
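
The back-of-the-envelope math behind that 16-hour figure is simple, and it gets ugly fast as quotas grow. A rough sketch, with the repair throughput inferred from our own experience (roughly 30GB cleaned in eight hours) rather than anything Microsoft publishes:

    import math

    # Inferred from our last round: a ~30GB store took ~8 hours to clean.
    REPAIR_GB_PER_HOUR = 3.7

    def repair_window_hours(store_sizes_gb, servers):
        """Stores on different servers repair in parallel; stores on the
        same server have to go one after another."""
        per_store = sorted((size / REPAIR_GB_PER_HOUR for size in store_sizes_gb),
                           reverse=True)
        stores_per_server = math.ceil(len(per_store) / servers)
        # Worst case: the largest stores end up queued on one server.
        return sum(per_store[:stores_per_server])

    # Our last round: four ~30GB stores across two servers -> about 16 hours.
    print(repair_window_hours([30, 30, 29, 28], servers=2))
    # If 2GB quotas balloon each store to, say, 150GB, the same outage is ~80 hours.
    print(repair_window_hours([150, 150, 150, 150], servers=2))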

It'll be interesting to see the Exchange 2007 guidance for designing enterprises with that much storage.

Labels: ,


Tuesday, October 31, 2006

Spam numbers

The following came out in the "Academic Technology News" yesterday:
To put the new spam filtering in perspective, consider the following: WWU errs on the side of caution to ensure that we do not filter any legitimate email; even with this 'cautionary' configuration more than 80% of all inbound email to campus is filtered out of our email system as known spam, compared to around 65% with our previous solution. In terms of numbers, that means that the staggering number of 1.3 million spam emails are filtered from your incoming mail each day.

Number of emails received: 1.6 million
Number of messages filtered: 1.3 million
Number of messages delivered: 0.3 million
There you have it. 1.6 million messages a day! Our Exchange system has around 6,000 email accounts, so the 0.3 million delivered messages work out to roughly 50 legitimate messages per mailbox per day.



Labels: ,


Thursday, February 09, 2006

Tracking storage

My storage-tracker has a year and a half's worth of data in it. Last night we processed the winter-quarter student deletes. It did drop the student data a bit, but not anywhere near the level that the Fall delete did.

Chart here

The Fall delete happened about 11/20/05, and you can see the big hunk of space liberated by that action. Another thing of note is that the slope of the 'wuf-students' line is steeper than the Wuf-FacStaff line, so student data is growing faster than non-student data. But, thanks to regular user purges, overall growth is closer to normal.

The big anomaly of this summer on the facstaff line is due to the migration of three Shared volumes to one monolithic Shared volume. We were running in parallel for a while. What I found interesting is that the line on either side of the big bump matches up well with the existing growth pattern.

The Exchange line is pretty short as well. We're a bunch of quota-scrooges around here when it comes to mail, so mail growth is kept in check far better than file-storage growth. The mid-December blip is due to a realign of Exchange drives that the monitor didn't catch in time. The realign also got rid of an annoying data artifact we had: about 20GB of space on one Exchange drive was flagged (in error) as bad clusters, and thus reported as 'used data'.
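
For the curious, the tracker behind that chart doesn't need to be anything fancy. A minimal sketch of the idea, assuming all you want is one row per volume per night appended to a CSV; the volume names, paths, and output file here are placeholders, not our real layout:

    import csv
    import shutil
    from datetime import date

    # Placeholder volume names and UNC paths -- not the real ones.
    VOLUMES = {
        "wuf-students": r"\\filer\students",
        "wuf-facstaff": r"\\filer\facstaff",
    }
    LOGFILE = "storage-history.csv"

    with open(LOGFILE, "a", newline="") as handle:
        writer = csv.writer(handle)
        for name, path in VOLUMES.items():
            usage = shutil.disk_usage(path)    # total, used, free in bytes
            writer.writerow([date.today().isoformat(), name,
                             usage.used // 2**30, usage.total // 2**30])

Run something like that nightly from a scheduler and a year and a half of history is just a CSV you can chart.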

Labels: , ,


Wednesday, October 12, 2005

exchange issues

We have 'em. We know.

Short version:

The transaction logs for the two EVS (Exchange Virtual Server) systems attempted to cohabitate, resulting in log-file pollution. Cleaning it up is taking a ruddy long time.

Labels:


Tuesday, December 07, 2004

Exchange front-end thing

As I talked about recently, we've been having some oddities lately. We found out it wasn't logfiles that were killing us (though that was part of it); it was that the priv1.stm file had grown to a bit over 13GB for no known reason. I grabbed a copy offline and ran BinText on it, which revealed that it is chock full of virus mails going back as far as September.

I don't know why they're parking there. GroupShield probably has something to do with it, but I couldn't tell you what. System Manager doesn't show any mailboxes of that kind of size on that server (only System boxes exist there anyway), and the mail queues don't have anything that large. September is long enough ago that it should have been purged by queue clean-up and mailstore cleanups. No go.

Odd.
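
BinText is essentially a strings extractor with a GUI. If you want to poke at a suspicious .stm copy the same way without it, a few lines of Python will do the job; this is a sketch of the general technique, not what I actually ran, and the header list at the bottom is just what I'd grep for first.

    import mmap
    import re
    import sys

    MIN_LEN = 8   # ignore short accidental runs of printable bytes

    def printable_strings(path, min_len=MIN_LEN):
        """Yield runs of printable ASCII at least min_len bytes long."""
        pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
        with open(path, "rb") as handle:
            # mmap so a multi-gigabyte .stm copy doesn't have to fit in RAM.
            data = mmap.mmap(handle.fileno(), 0, access=mmap.ACCESS_READ)
            for match in pattern.finditer(data):
                yield match.group().decode("ascii")

    if __name__ == "__main__":
        for line in printable_strings(sys.argv[1]):
            # Only show the bits that look like mail headers.
            if line.startswith(("Subject:", "From:", "Received:", "Date:")):
                print(line)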

Labels:


Tuesday, November 23, 2004

E-mail delays recently

Stuff inbound to the Exchange cluster had some delays recently. Saturday night, one of the two front-end servers that accept traffic from the internet got plugged with logfiles. All in-bound e-mail then arrived on the server and sat there. As in, not delivered.

We discovered it yesterday afternoon. There wasn't enough space on the volume to commit the transaction logs, so we had to take steps to get things back into working order. Around 10:30pm last night, the second node came back online and the backlog started hitting the queue.

Users these days assume that e-mail should transfer more or less instantly. Or failing that, within a very few minutes. SMTP wasn't designed for that. It's a best-effort thing. And in this case, mail that arrived early Saturday finally got here late Monday. It happens. Yes, the US Postal Service could have gotten some of this here earlier, but 80% of it was spam anyway.

Labels:


Thursday, October 21, 2004

Exchange features

Today we finally solved a mystery that had been plaguing us since we started moving some accounts from the old Exch2000 to the new Exch2003 servers. Suddenly some users were no longer able to "Send as" other users, even when they had full rights to the other user's box. This was sub-optimal.

I won't go into the troubleshooting steps because it's embarrassing. But what we discovered is that in order to "send as" without having "on behalf of" show up in the From line, you need to grant the relevant group the "Send As" right in the object's security settings (that is, on the directory object itself). The Exchange mailbox rights have no bearing here at all, which is what threw us.

I can understand why this is the case. It is my experience that the higher up in an organization you get, the less direct interaction you have with your own mail and calendar. There is a 'people filter' in place to prioritize what you need to even notice. In order to facilitate that, groupware provides the ability to allow other users into your mailbox. No biggie. What is also needed is that even if they have 'full' rights to everything, they can't send mail AS you and thereby steal your identity. This is why there is a separate right for this feature.

For things like group mailboxes (e.g. "alumni", or "NewStudents") this is something we needed to know, since we do that far more often than the executive kind. Now we'll get to see how many of our 'group' accounts have been using "send as" all along.

Labels:


Friday, September 10, 2004

Happy news on the Exchange front

GroupShield is in, SpamKiller is running, public folders replicated. We're good for live testing!

And the backup agents are in! It is very, very nice to see a backup roll in at 650 megs/minute in real time (assuming those are megabytes, that's roughly 87 megabits per second, which is most of a 100Mb pipe). Woo! Now once we get the gig ports in, we can really fly!

Labels:


Thursday, August 19, 2004

Exchange Project

Delayed on account of bad cables. Two of the three short-wave fibre cables connecting the future storage servers to the fibre-channel SAN ended up bad. On one I can see light, so the return leg is broken. The other gave me the amber light on the switch, which usually means either only one channel is working or the pairs are swapped; as this is a prebuilt cable, swapped pairs are unlikely.

We had a spare cable available, which greatly helped in proving the broken-cable problem. But it also means that only two of the three future storage servers are able to be clustered.

I have to assume that installation killed the cables. And we didn't do the install. Aie. Fragile stuff, fibre.

Labels:

