More security fun in .edu-land

Two things.

FrSIRT announces a vulnerability in BackupExec Remote Agent that currently (as of this posting) has no patch. This will be a problem! Mark my words.

And next, from a SANS mailing I get:
Editor's Note (Pescatore): There has been a flood of universities acknowledging data compromises and .edu domains are one of the largest sources of computers compromised with malicious software. While the amount of attention universities pay to security has been rising in the past few years, it has mostly been to react to potential lawsuits due to illegal file sharing and the like - universities need to pay way more attention to how their own sys admins manage their own servers.
Hi, that's me. As I covered a couple of days ago, we have some challenges that corps don't have. For one, we have no firewall, just router filtering rules. And today I learned more about our security posture campus-wide.

It seems the buildings have some pretty restrictive filters on them at the router level, but our servers don't have much at all. This seems to be driven by a need to be good netizens rather than a need to prevent security intrusions. End-user systems are hideously hard to patch, spyware is rampant, and it doesn't take much to turn a WinXP machine that someone isn't paying attention to into a botnet drone.
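That split in filtering policy can be sketched as a toy model. Everything below is illustrative: the subnets, the allowed ports, and the decision logic are made up for the example and do not reflect WWU's actual router rules.

```python
from ipaddress import ip_address, ip_network

# Hypothetical policy: building (end-user) subnets get restrictive
# inbound filters at the router; server subnets are left mostly open.
BUILDING_NETS = [ip_network("10.10.0.0/16")]   # made-up end-user space
SERVER_NETS = [ip_network("10.20.0.0/24")]     # made-up datacenter space
BUILDING_ALLOWED_PORTS = {80, 443}             # nearly everything else blocked

def inbound_allowed(dst_ip: str, dst_port: int) -> bool:
    """Rough sketch of a router ACL decision for inbound traffic."""
    addr = ip_address(dst_ip)
    if any(addr in net for net in BUILDING_NETS):
        return dst_port in BUILDING_ALLOWED_PORTS
    if any(addr in net for net in SERVER_NETS):
        return True  # servers: little network-level protection to speak of
    return False

print(inbound_allowed("10.10.4.7", 445))   # building host, SMB: blocked
print(inbound_allowed("10.20.0.15", 445))  # server, SMB: wide open
```

The point of the sketch is the asymmetry: the end-user space is default-deny, while the server space is default-allow, which is exactly backwards from what a firewalled corporate datacenter would do.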

Servers, on the other hand, are professionally managed. We pay attention to those. Security is a priority very near the top! Therefore, we don't have to be as strict (from a network point of view) with them as we do end-user systems.

Because of the firewall-free nature of the vast majority of our datacenter (more on that later), any application we buy that runs on a server has to be able to run in a hostile network. This has caused real problems. A lot of programs assume that the datacenter is a 'secure' environment and that hackers will be rattling door-knobs very infrequently. BackupExec comes to mind here. Add to that independent purchase authority, and you get departments buying off-the-shelf apps without considering their network security requirements in the context of the WWU environment.

Every single server I've had to unhack since 1/1/2005 was compromised due to one of the following:
  • Non-Microsoft patches that got missed (Veritas)
  • Microsoft patches that didn't get applied correctly as part of our standard update procedure. This is the classic, "the dialog said 'applied', but it really wasn't," problem.
  • Zero-day exploits (Veritas, others) where the vulnerability is not formally acknowledged by the vendor
This is the point where I say that Microsoft is no longer the bad boy of the bunch. Their patching process and built-in tools are rich enough that they're no longer the #1 vector of attack. Yes, these are all on Windows, but it isn't Windows getting hacked most of the time. It's the apps that sit on it. We have now hit the point where we expect Windows Update-like updating from all of our apps, and so we forget to check vendors weekly.

Heck, weekly is too long! Take this new Remote Agent exploit. When the last Remote Agent vulnerability was patched in June, exploits started less than 6 days after the patch was made available. We took 9 days to apply it, since it requires reboots. Too long!

We now have to have a vendor and a patch-source for each and every program installed on a server. And even that isn't enough. Take HP. They just announced several bugs in their Server Management products, but I saw it on Bugtraq, not in any notice from HP. They offer a wide enough variety of programs that it is difficult to determine whether the broken bits are bits I actually installed on my servers, or whether I'm safe.
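The "patch-source for every program" requirement is really an inventory problem. Here's a minimal sketch of what that tracking could look like; the product names and advisory channels below are illustrative stand-ins, not real feeds.

```python
# A minimal sketch of a per-program patch-source inventory.
# Products and channels are illustrative, not actual vendor feeds.
PATCH_SOURCES = {
    "Windows Server 2003": "Windows Update / WSUS",
    "BackupExec Remote Agent": "vendor support site (checked manually)",
    "HP Server Management": None,  # no reliable notice channel found
}

def unmonitored(inventory: dict) -> list:
    """Return the products with no known advisory or patch feed."""
    return [name for name, source in inventory.items() if source is None]

print(unmonitored(PATCH_SOURCES))  # these are the ones that bite you
```

Anything in the `unmonitored` list is exactly the HP/Bugtraq situation: a product on your servers whose vulnerabilities you will only hear about second-hand, if at all.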

We have a regular Tuesday-night downtime arranged so we can get the MS patches in. For things like the Veritas Remote Agent, we'd have to apply a patch 'out of cycle', and that's tough. It took only 6 days for the last Remote Agent bug to lead to hacked servers. For something like this, where there may already be a metasploit widget created, we need to apply the patch ASAP after it releases. So a weekly patch application interval is no longer good enough; we need to be able to do it in 24 hours.
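The back-of-the-envelope math from the June incident makes the case on its own. The release date below is illustrative; only the day counts (exploit at day 6, patch applied at day 9) come from the post.

```python
from datetime import date, timedelta

# Exposure math using the June incident's day counts:
# patch out, exploits ~6 days later, patch applied on day 9.
patch_released = date(2005, 6, 1)  # illustrative date, not the real one
exploit_seen = patch_released + timedelta(days=6)
patch_applied = patch_released + timedelta(days=9)

exposed_days = (patch_applied - exploit_seen).days
print(exposed_days)  # 3 days of known-exploitable exposure

# With a 24-hour out-of-cycle window instead:
fast_applied = patch_released + timedelta(days=1)
print(max((fast_applied - exploit_seen).days, 0))  # 0 days exposed
```

Three days of sitting exploitable versus zero: that's the whole argument for an out-of-cycle patch process in two subtractions.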

Presuming we are even aware the patch exists in the first place. From the same e-mail:
Editor's Note (Paller): Sadly many of the people who bought BrightStor packages have no idea the vulnerability exists. Computer Associates, like other larger vendors, sold through resellers to customers who never bothered to register. Those organizations, large and small, are at extreme risk and are completely unaware of the risk.
Which is precisely the problem. Heck, we're registered with Veritas and HP, but we were not notified of the recent problems. We had to find them out for ourselves. This is why auto-patching products that come with patch-feeds charge such extortionist amounts of money. It is ALMOST worth it to pay 'em.

Really, we're like an ISP that has far more official responsibility over the machines on our network. A traditional ISP has terms of service and a pretty 'hey, whatever,' attitude, and then hardens the crap out of its own internal servers. We have to run business apps in an ISP environment. And if one of our workstations gets hacked and becomes a drone that participates in a DoS, we get sued, not the owner of the PC (which is... us, unlike an ISP).

A final case in point and then I'll sign off. We recently reviewed a Point-of-Sale application that an organization on campus will be using. It took about 45 seconds after the presentation began before we identified the glaring hole in their security setup. Sadly, this product was already purchased, and apparently a lot of other higher eds use it too. We just get to try to minimize the hole as best we can, without actually fixing it.

1 Comment

Boy, it's too bad that they didn't know about Novell's POS solution: http://www.novell.com/products/linuxpointofservice/