Friday, February 05, 2010

Dealing with User 2.0

The SANS Diary had a post this morning with the same title as this post. The bulk of the article is about how user attitudes have changed over time, from the green-screen era to today, when any given person carries one or two computing devices at all times. The money quote for my purposes is this one:

User 2.0 has different expectations of their work environment. Social and work activities are blurred, different means of communications are used. Email is dated, IM, twitter, facebook, myspace, etc are the tools to use to communicate. There is also an expectation/desire to use own equipment. Own phone, own laptop, own applications. I can hear the cries of "over my dead body" from security person 0.1 through to 1.9 all the way over here in AU. But really, why not? when is the last time you told your plumber to only use the tools you provide? We already allow some of this to happen anyway. We hire consultants, who often bring their own tools and equipment, it generally makes them more productive. Likewise for User 2.0, if using Windows is their desire, then why force them to use a Mac? if they prefer Openoffice to Word, why should't they use it? if it makes them more productive the business will benefit.

Here in the office several of us have upgraded to User 2.0 from previous versions. Happily, our office is somewhat accommodating of this. I may be an 80% Windows administrator these days, but that isn't stopping me from running Linux as the primary OS on my desktop. A couple of us have Macs, though they both manage non-Windows operating systems, so that's to be expected ;). I have seen more than one iPod Touch used to manage servers. Self-owned laptops are present in every meeting we have. We use our own tools for increased productivity.

The SANS Diary entry closed with this challenge:

So here is you homework for the weekend. How will you deal with User 2.0? How are you going to protect your corporate data without saying "Nay" to things like facebook, IM, own equipment, own applications, own …….? How will you sort data leakage, remote access, licensing issues, malware in an environment where you maybe have no control or access over the endpoint? Do you treat everyone with their own equipment as strangers and place them of the "special" VLAN? How do you deal with the Mac users that insist their machines cannot be infected? Enjoy thinking about User 2.0, if you send in your suggestions I'll collate them and update the diary.

Being a University, we've always had a culture that was supportive of the individual, that Academic Freedom thing rearing its head again. So we've had to be accommodating to this kind of user for quite some time. What's more, we put a default-deny firewall between us and the internet really late in the game. When I got here in 2003 I was shocked and appalled to learn that the only things standing between my workstation and the Internet were a few router rules blocking key ports; two months later I was amazed at just how survivable that ended up being. What all this means is that end-user factors have been trumping or modifying security decisions for a very long time, so we have experience with these kinds of "2.0" users.

When it comes to end-user internet access? Anything goes. If we get a DMCA notice, we'll handle that when it arrives. What we don't do is block any sites of any kind. Want to surf hard-core porn on the job? Go ahead, we'll deal with it when we get the complaints.

Inbound is another story entirely, and we've finally got religion about that. Our externally facing firewall only allows access to specific servers on specific ports. While we may have a Class B IP block and therefore every device on our network has a 'routable' address, that does not mean you can get there from the outside.

As for Faculty/Staff computer config, there are some limits there. The simple expedient of budget pressure forces a certain homogeneity in hardware config, but software config is another matter and depends very largely on the department in question. We do not enforce central software there beyond anti-virus. End users can still use Netscape 4.71 if they really, really, really want to.

Our network controls are evolving. We've been using port-level security for some time, which keeps students from unplugging the ethernet cable from a lab machine and plugging it into their laptop. That doesn't apply to conference rooms, where such multi-access is expected. We also allow only one MAC address per end-port, which eliminates the use of hubs and switches to multiply a port (and also annoys VMware users). We have a 'Network Access Control' client installed, but all we're doing with it so far is monitoring; efforts to do something more with it have hit a wall. Our WLAN requires a WWU login for use, and nodes there can't get everywhere on the wired side. Our Telecom group has worked up a LimboVLAN for exiling 'bad' devices, but it is not in use because of a disagreement over what constitutes a 'bad' device.

However, if given the choice, I can guarantee certain office managers would simply love to slam the bar down on non-work-related internet access. What's preventing them from doing so are professors and Academic Freedom. We could have people doing legitimate research that involves viewing hard-core porn, so that has to be allowed. So the 'restrict everything' reflex is still alive and strong around here; it has just been waylaid by historic traditions of free access.

And finally, student workers. They are second-class citizens around here; there is no denying that. However, they are the very definition of 'User 2.0', and they're in our offices providing yet another counterweight to 'restrict everything'. Our Helpdesk has a lot of student workers, so we end up with a fair amount of that attitude in IT itself, which helps even more.

Universities. We're the future, man.


Thursday, January 28, 2010

Evolving best-practice

As of this morning, everyone's home directory is now on the Microsoft cluster. The next Herculean task is to sort out the shared volume. And this, this is the point where past-practice runs smack into both best-practice and common-practice.

You see, since we've been a NetWare shop since, uh, I don't know when, we have certain habits ingrained into our thinking. I've already commented on some of it, but that thinking will haunt us for some time to come.

The first item I've touched on already, and that's how you set permissions at the top of a share/volume. In the Land of NetWare, practically no one has any rights to the very top level of the volume. This runs contrary to both the Microsoft and POSIX/Unix ways of doing it, since both environments require a user to have at least read rights to that top level for anything to work at all. NetWare got around this problem by creating traverse rights based on rights granted lower down the directory structure: granting a right four directories deep gave an implicit 'read' to the top of the volume. Microsoft and POSIX both don't do this weirdo 'implicit' thing.

The second item is the fact that Microsoft Windows allows you to declare a share pretty much anywhere, where NetWare limited the 'share' to the volume itself. This changed a bit when Novell introduced CIFS to NetWare, which brought the ability to declare a share anywhere; NCP networking, however, still worked root-of-volume only. Novell also allowed a 'map root' to pretend a share exists anywhere, but that isn't conceptually the same thing. The side effect of being able to declare a share anywhere is that, if you're not careful, Windows networks suffer share proliferation to a very great extent.

In our case, past-practice has been to restrict who gets access to top-level directories, greatly limit who can create top-level directories, and generally grow more permissive/specific rights-wise the deeper you get in a directory tree. Top level is zilch, first tier of directories is probably read-only, second tier is read/write. Also, we have one (1) shared volume upon which everyone resides for ease of sharing.

Now, common-practice among Microsoft networks is something I'm not that familiar with. What I do know is that shares proliferate, and many, perhaps most, networks have the shares as the logical equivalent of what we use top-level directories for. Where we may have a structure like this, \\cluster-facshare\facshare\HumRes, Microsoft networks tend to develop structures like \\cluster-facshare\humres instead. Microsoft networks rely a lot on browsing to find resources. It is common for people to browse to \\cluster-facshare\ and look at the list of shares to get what they want. We don't do that.

One thing that really gets in the way of this model is Apple OSX. You see, the Samba version on OSX machines can't browse cluster shares. If we had 'real' servers instead of virtual servers, this sort of browse-to-the-resource trick would work. But since we have a non-trivial number of Macs all over the place, we have to pay attention to the fact that all a Mac sees when it browses to \\cluster-facshare\ is a whole lot of nothing. We're already running into this, and we only have our user directories migrated so far; we have to train our Mac users to enter the full share path as well. For this reason, we really need to stick to the top-level-directory model as much as possible, instead of the more commonly encountered MS model of shares. Maybe a future Mac Samba version will fix this. But 10.6 hasn't fixed it, so we're stuck for another year or two. Or maybe until Apple shoves Samba 4 into OSX.

Since we're on a fundamentally new architecture, and can't use common-practice, our sense of best-practice is still evolving. We come up with ideas. We're trying them out. Time will tell just how far our heads are up our butts, since we can't tell from here just yet. So far we're making extensive use of advanced NTFS permissions (those permissions beyond just read, modify, full-control) in order to do what we need to do. Since this is a deviation from how the Windows industry does things, it is pretty easy for someone who is not completely familiar with how we do things to mess things up out of ignorance. We're doing it this way due to past-practice and all those Macs.
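For the curious, the tiered model can be expressed with icacls. This is a sketch only; the drive letter, paths, and group names are all hypothetical, and it assumes a Vista/2008-era icacls:

```powershell
# Top of the share: the broad population gets read on this folder only
# (no inheritance flags means no inheritance), which satisfies the
# NTFS need for read at the top without exposing anything below.
icacls E:\facshare /grant 'UNIV\FacStaff:RX'

# First tier: the department group gets read-only, inherited downward.
icacls E:\facshare\HumRes /grant 'UNIV\HumRes:(OI)(CI)RX'

# Second tier: read/write where the day-to-day files actually live.
icacls E:\facshare\HumRes\Shared /grant 'UNIV\HumRes:(OI)(CI)M'
```

The key difference from a stock Windows setup is that the inheritance flags, not separate shares, are doing the compartmentalizing.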

In 10 years I'm pretty sure we'll look a lot more like a classic Windows network than we do now. 10 years is long enough for even end-users to change how they think, and is long enough for industry-practice to erode our sense of specialness more into a compliant shape.

In the meantime, as the phone ringing off the hook today foretold, there is a LOT of learning, decision-making, and mind-changing to go through.


Thursday, January 21, 2010

Migrating knowledge bases

This morning we moved the main class volume from NetWare to Windows. We knew we were going to have problems with this since some departments hadn't migrated key groups into AD yet, so the rights-migration script we wrote just plain missed bits. Those have been fixed all morning.

However, it is becoming abundantly clear that we're going to have to retrain a large portion of campus Desktop IT in just what it means to be dealing with Windows networking. We'd thought we'd done a lot of it, but it turns out we were wrong. It doesn't help that some departments had delegated 'access control' rights to professors to set up creative permissioning schemes; this morning the very heated calls were coming in from the professors, not the IT people.

There are two things that are tripping people up. One has been tripping people up on the Exchange side since forever, but the second one is new.
  1. In AD, you have to log out and back in again for new group memberships to take effect.
  2. NTFS permissions do not grant the pass-through right that NSS permissions do. So if you grant a group rights to \Science\biology\BIOL1234, members of that group will NOT be able to pass through Science and Biology to get to BIOL1234.
We have a few spots here and there where for one reason or another rights were set at the 2nd level directories instead of the top level dirs. Arrangements like that just won't work in NTFS without busting out the advanced permissions.

One area where we haven't had problems yet, but I'm pretty certain we will, is places where rights are granted and then removed. With NSS that could be done in two ways: an Inherited Rights Filter, or a direct trustee grant with no permissions. With NTFS the only way to do it is to block rights inheritance, copy the rights you want, and remove the ones you don't. That sounds simple, but here is the case I'm worried about:


Take a directory tree like \HumRes\JobReview\VPIT\JohnSmith:
  • At 'HumRes', the group is granted 'Read', and the HR director is granted Modify directly on their user (bad practice, I know, but it's real-world).
  • At 'JobReview', the group is granted 'Modify'.
  • At 'VPIT', inheritance is blocked and the rights are copied.
  • At 'JohnSmith', the HR user AngieSmith is granted the Deny right due to a conflict of interest.

Time passes. The old director retires, the new director comes in. IT Person gets informed that the new director can't see everything, even though they have Modify to the entire \HumRes tree. That IT person will come to us and ask, "WTH?" and we will reply with, "Inheritance is blocked at that level; you will need to explicitly grant Modify for the new director on that directory."
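What that looks like in icacls terms, as a sketch with hypothetical paths and group names, and assuming a Server 2008-era icacls that has the /inheritance switch:

```powershell
# Block inheritance at VPIT; :d converts the inherited ACEs into
# explicit copies before cutting off the flow from above.
icacls E:\HumRes\JobReview\VPIT /inheritance:d

# Prune the copied ACEs that shouldn't be here (hypothetical group):
icacls E:\HumRes\JobReview\VPIT /remove 'UNIV\HumRes-ReadOnly'

# ...time passes. The new director inherits nothing through the block
# and has to be granted rights here explicitly:
icacls E:\HumRes\JobReview\VPIT /grant 'UNIV\NewDirector:(OI)(CI)M'
```

That last line is the step nobody remembers until the heated phone call arrives.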

So this is a bit of a sleeper issue.

Meanwhile, we're dealing with a community of users who know in their bones that granting access to 'JohnSmith' means they can browse down from \HumRes to that directory just on that access-grant alone. Convincing them that it doesn't work that way, and working with them to rearrange directory structures to accommodate that lack will take time. Lots of time.


Wednesday, December 09, 2009

How DTV and HD Radio mirror security

One of the complaints levied against the now-completed transition of US television broadcasts to pure digital is that reception range is reduced in a lot of cases. The same thing has been said about HD Radio, where the signal goes from crystal clear to nothing. Both complaints apply in the Seattle market, but up here in Bellingham we only have DTV; I don't know of any HD Radio stations up here.

Which is sad, since I can occasionally pick up some of the Seattle stations if I'm north of town a ways. The Chuckanut Mountains (yes, that's their name!) get in the way of line-of-sight while in town. However, there is no HD Radio to be had, in large part because the Canadians don't have an approved HD Radio standard, and that's where most of our radio comes from.

Which is a long way from security, but the reasons for this are similar to something near and dear to any security maven's heart: two-factor security.

With analog TV and radio signals, the human brain is very good at filtering content out of the noise. Noise is part and parcel of any analog RF system, even if you can't directly perceive it. Even listening to a very distant AM station, I can generally make out the content if I speak the language or already know the song. Those two things allow a much better hit rate for predicting what sound will come next, which in turn enhances understanding. My assumptions about the communication method create a medium in which a large amount, perhaps a majority, of the consumed bandwidth is spent on what are essentially checksums.

Consider listening to a news-reader read text off a page. Call it 80 words per minute, and if you assume 5 characters per word, that comes to 400 characters a minute. Add another 80-120 characters for various punctuation and white-space, assume 7-bit ASCII since special characters are generally hard to pronounce, and you have a bit-rate of roughly 56 to 61 bits per second, on a channel theoretically capable of orders of magnitude more than that. Those extra bits are insulation against noise. This is how you can understand said news-reader when your radio station is drowning in static.

TV is much the same. Back in the rabbit-ear era of my youth, we used to watch UHF stations through a fog of snow. It was just fine, we caught the meaning. It worked even better if the show was one we'd seen before, which helped fill in gaps.

Then along came the digital versions of these formats. And one thing was pretty clear, marginal signal meant a greatly reduced chance of getting anything at all. Instead of a slow fall-off of content, you had a sharp cliff where noise overcame the error correction in the signal processor hardware. However... so long as you were within the error correction thresholds, your listening/watching experience was crystal clear.

The 'something you are' part of the security triumvirate of have/are/know is a lot like the analog-to-digital conversion of TV and radio. The something you actually are is an analog thing, be it a fingerprint, the irises of your eyes, the shape of your face, a DNA sequence, or a voice. The biometric device encodes this into a digital format that is presumably unique per individual. As we've seen, analog-to-digital conversion is a fundamentally noisy process, so this encoding has to include a 'within acceptable equivalency thresholds' factor.

It is this noise factor that is the basis of a whole category of attacks on these sensors. It is not sufficient to ensure that the data is a precise match, for some traits, such as voice or face, can change on a day-to-day basis, and others, such as fingerprints or iris prints, can be faked very convincingly. The latter is why the higher-priced fingerprint sensors also do skin-conductivity tests, among other 'live person' tests, to ensure it is skin and not a gelatin imprint.

This makes 'something you are' potentially the weakest part of the triumvirate. 'Something you know,' your password, is a very few bytes of information that has to be 100% correctly entered every time. 'Something you have' can be anything from a SecurID key-fob to a smart-chipped card, which also requires 100% correctness (there is a fuzz factor for things like SecurID that use time as part of the process, so this is not quite 100%). 'Something you are', however, is potentially quite a lot of data at a much lower precision than the other two.

There is a LOT of effort going into developing algorithms that can perform the same distillation of content our brains do when listening to a news-reader on a distant AM station. You don't check the whole data blob returned by the finger reader; you check (and store) the key identifiers inherent in all fingerprints, identifiers distilled from the whole data. The identifiers will get better over time as we gain a better understanding of what this kind of data looks like. But no matter how good we get at that, they'll still have uncertainty values assigned to them due to the analog-to-digital conversion.


Monday, December 07, 2009

Account lockout policies

This is another area where how Novell and Microsoft handle a feature differ significantly.

Since NDS was first released back at the dawn of the commercial internet (a.k.a. 1993), Novell's account lockout policies (known as Intruder Lockout) have been settable based on where the user's account exists in the tree, per Organizational Unit or Organization. In this way, users in .finance.users.tree could have a different policy than .facilities.users.tree. This was the case in 1993, and it is still the case in 2009.

Microsoft only got a hierarchical tree with Active Directory in 2000, and even then they didn't get around to making account lockout policies granular. For the most part there is a single lockout policy for the entire domain, with no exceptions; 'Administrator' is subjected to the same lockout as 'Joe User'. With Server 2008, Microsoft finally delivered some kind of granular policy capability in the form of "Fine-Grained Password and Lockout Policies."

This is where our problem starts. You see, with the Novell system we'd set our account lockout policies to lock after 6 bad passwords in 30 minutes for most users. We kept our utility accounts in a spot where they weren't allowed to lock, but gave them really complex passwords to compensate (as they were all used programmatically in some form, this was easy to do). That way the account used by our single-sign-on process couldn't get locked out and crash the SSO system. This worked well for us.

Then the decision was made to move to a true-blue Microsoft solution, and we started to migrate policies to the AD side where possible. We set the lockout policy for everyone. And certain key utility accounts started getting locked out on a regular basis. So we revised the GPOs driving the lockout policy, removing them from the Default Domain Policy and creating a new "ILO policy" that we applied individually to each user container. This solved the lockout problem!

Since all three of us went to class for this 7-9 years ago, we'd forgotten that AD lockout policies are monolithic and only work when specified in the Default Domain Policy. They do NOT work per-container the way they do in eDirectory. By doing it the way we did, no lockout policies were being applied anywhere. Googling on this gave me the page for the new Server 2008-era granular policies. Unfortunately for us, that requires the domain to be brought to the 2008 functional level, which we can't do quite yet.
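When we do get there, the fix should look something like this. A sketch using the Server 2008 R2 Active Directory module (all names hypothetical); a Fine-Grained Password Policy object carries its own lockout settings and gets pointed at a global group of utility accounts:

```powershell
Import-Module ActiveDirectory

# LockoutThreshold 0 means 'never lock out'; compensate with a long
# minimum password length, as we did on the eDirectory side.
New-ADFineGrainedPasswordPolicy -Name 'UtilityNoLockout' -Precedence 10 `
    -LockoutThreshold 0 -ComplexityEnabled $true -MinPasswordLength 20

# PSOs apply only to users and global groups; attach this one to the
# group holding the utility/service accounts.
Add-ADFineGrainedPasswordPolicySubject -Identity 'UtilityNoLockout' `
    -Subjects 'UtilityAccounts'
```

The Precedence value settles conflicts when an account falls under more than one PSO; lower wins.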

What's interesting is a certain Microsoft document that suggests a setting of 50 bad logins every 30 minutes as a way to avoid DoSing your needed accounts. That's way more than 6 every 30.

Getting the domain functional level raised just got more priority.


Tuesday, November 24, 2009

Encryption and key demands, in reality

Two years ago I posted an article that has been fairly popular, Encryption and Key Demands. The phrase 'duress key' seems to drive the traffic there, even though I'm not the one who coined the term. Anyway, a real-life example of that has shown up.

UK jails schizophrenic for refusal to decrypt files

Don't fork over your decryption keys on demand, and you get jail time just for that! As I said two years ago, this is a lot harder to pull off in the US due to that whole Bill of Rights thing. Harder, but not impossible.


Tuesday, November 17, 2009

Restrictive internet policies

A friend of mine griped today:
In a stroke of utter WTF-ness... my workplace has blocked access to LinkedIn.
It's not so WTF for me, as I can see why it was blocked. LinkedIn is seen as a tool for people looking to change jobs, so if you're blocking Monster and Dice, then LinkedIn is right up there with them. The fact that it is also a useful way to network for business is beside the point. From earlier gripes, this particular workplace is on a crusade to block all social-networking sites. I only saw this post because of email-to-post gateways, and they haven't blocked Gmail yet.

It is situations like these that give rise to the scenario I described back in June: I Want my SSH. Additionally, a lot of social networking sites are publishing apps for the various app-driven smartphones out there. For users willing to invest a bit of money into it, corporate firewalls are no longer the barrier to slacking they once were.


Monday, November 16, 2009

Packet size and latency

The event-log parser I'm working on has run into a serious performance wall. Parsing 60 minutes' worth of security events takes 90 minutes. The bulk of that time is consumed by the 'get the event logs' part of the process; the 'parse the event logs' portion takes 5% of that time. Looking at packets, I can see why.

I'm using PowerShell 2 for this script, as it has the very lovely Get-WinEvent cmdlet. It is lovely because I can give it filter parameters, so it'll only give me the events I'm interested in and not all the rest. In practice, this reduces the number of events I'm parsing by 40%. Also nice is that it returns a static list of events, not a pointer list into the ring-buffer that is the usual Windows event log, so $Event[12345].TimeCreated will stay static.

The reason the performance is so bad is that each event is individually delivered via RPC calls. Looking at packets, I see that the average packet size is around 200 bytes. Happily, the interval between an RPC response and the next RPC request is a fraction of a millisecond, and the RPC response times are about half a millisecond, so at least the network is crazy-responsive. But that packet size and the serial nature of the requests mean that overall throughput is really bad.

If there were a way to phrase the Get-WinEvent command to populate only the attributes I'm interested in and not the others (such as .Message, the free-text message portion of the event, which is quite large and noticeably laggy to retrieve), it could go a LOT faster. Since I don't have PowerShell installed on my domain controllers right now, I can't see how much running it directly on them would improve things. I suspect it would improve a lot, since it should be able to use direct API methods to extract event-log data rather than RPC-based methods.
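For reference, this is roughly the shape of the filtered call; the DC name and time window are made up, and the filter hashtable is what does the server-side winnowing:

```powershell
# Only security-log logon/logoff events from the last hour come back;
# the filtering happens on the remote side, not after delivery.
$filter = @{
    LogName   = 'Security'
    Id        = 4624, 4634        # logon / logoff
    StartTime = (Get-Date).AddHours(-1)
}
$events = Get-WinEvent -ComputerName 'dc01' -FilterHashtable $filter

# Each matching event still arrives as a full record, and the large
# .Message text is rendered per-event on access; there is no switch
# to ask for only a subset of the properties.
```

The filter trims the event count, but not the per-event cost, which is exactly the wall described above.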

As it is, we may have to resort to log shipping. With over a GB of security event log generated per DC, per day, that log server is going to need very large logs. It has to be at least Vista or Windows 2008 to receive shipped logs in the first place. Sadly for us, we don't really have any spare hardware or VM space for a new server like that.

And finally, yes, there are 3rd-party tools that do a lot of this. We need something both free and scalable, and that's a tricky combination. Windows generates a LOT of login/logout data in a network our size; keeping up is a challenge.


Thursday, November 12, 2009


Over the years I've heard variations on this complaint:
"I don't need a secure password since everything I work on can be seen with a freedom-of-information-act filing anyway."
In the run-up to the internal lobbying effort that allowed us to start password aging and put password complexity rules into place, we ran L0phtcrack against our Windows domain passwords. The results were astounding. A crushingly large percentage of passwords were still set to ones well known to be used by the helpdesk during password resets; users had never gone back and changed their password after having it reset by said helpdesk. An unsurprising but still disheartening number was the percentage of passwords set to either "password" or "p@$$w04D". These results are what convinced upper management to push password complexity policies onto the unwilling masses.

But that doesn't address the complaint above; it merely shows the effects of this attitude. While it may be true that you work on nothing confidential, you still have one thing near and dear to your heart that you do care about: your identity. Especially with the advent of web-based enterprise email, this is a very important thing. While it is trivial to impersonate an email address, an email carries far more weight when it is delivered from our servers. What's more, the ability to reply to legitimate email as you is something you don't want others to have. And finally, I don't know a single person who fails to have at least some personal correspondence in their work mailbox, even if it only exists in the trash folder. That information may still be retrievable by an FOIA filing, but the generation of new information is not, and generating new information in your name is exactly what a compromised password allows.

We mean that. We don't allow managers to have departed employees' passwords, for the same reason. Happily, these sorts of gripes are becoming ever less common as the lessons of phishing come home to more and more people. But this gripe is particular to the public sector, so many of you may not have heard it before.


Thursday, November 05, 2009

Audit logging

When I first arrived here we used to get this question four or five times a year:
Can you check to see who was logged in to server X at 2:34pm yesterday?
Back in 2003, "Server X" was 98% likely to be a NetWare server. Novell hadn't come out with Nsure Audit yet, so the only such logging available was the NetWare 4.11-era text-mode audit logging. Which, to put it politely, didn't even come close to scaling to our levels of access. Logs like that take a lot of space. A LOT of it.

Fast-forward a few years, and we're now doing a heck of a lot more Microsoft networking. The domain controllers have security auditing turned on by default. While a day's worth of logs is smaller than the Novell logs would have been (not sure about Nsure Audit log sizes; never got a chance to use them), it's still very large. A gig a day is not unreasonable, and sometimes it's more.

One thing that MS auditing doesn't give us is the 'lockout address'. On the old system, when a student walks up to the helpdesk and asks, "why am I locked out?", the helpdesk staff can look and see what IP address did the locking. We can't do that right now on the Microsoft side. I'm attempting to fix this, which requires writing a log parser for Windows.

Happily, this is doable with PowerShell. Unhappily, it means 1.8 million events to chug through when I parse said log. Even more unhappily, the key data I want (IP address, username) is not in a straight-up field and requires parsing the Message text. Any time you parse text like that, you become vulnerable to text-format changes. It's not the ideal solution, but it's what we have.
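The parsing itself looks roughly like this. A sketch that assumes the default English message text of the 4740 (account locked out) event; if Microsoft rewords that message, the regexes break, which is exactly the fragility complained about above:

```powershell
$filter = @{ LogName = 'Security'; Id = 4740 }
Get-WinEvent -FilterHashtable $filter | ForEach-Object {
    $user = $machine = $null
    # The locked account's name follows the 'Account That Was Locked
    # Out' block; the locking machine is the 'Caller Computer Name'.
    if ($_.Message -match 'Account That Was Locked Out:[\s\S]*?Account Name:\s+(\S+)') {
        $user = $Matches[1]
    }
    if ($_.Message -match 'Caller Computer Name:\s+(\S+)') {
        $machine = $Matches[1]
    }
    New-Object PSObject -Property @{
        Time = $_.TimeCreated; User = $user; Machine = $machine
    }
}
```

The caller computer name then has to be resolved to an IP separately, since the event records the NetBIOS name of the machine that sent the bad passwords.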

Once this is done we'll even have a lockout history, which we didn't have before. So we'll be able to spot patterns, like an account locking out 7 minutes after its owner turns on their Mac, repeatedly. But first I have to finish writing it.


Thursday, October 29, 2009

A matter of policy

This has been a long-standing policy in Technical Services, dating to the previous VP-IT and endorsed by the current one. It concerns email like this, generally from a manager of some kind:
"[Person X] no longer works here. Please change their password and give it to [Person Y] so they can handle email. And please set an out-of-office rule notifying people of [Person X's] absence."
To which we politely decline. What we will do is set the out-of-office rule; that's just fine. We'll also either provide a PST extract of Person X's mailbox, or, if there really is no other way (the person was the Coordinator of the Z's for 20+ years and handled all the communications themselves before retiring/dying), we'll grant read access to the mailbox to another person, effectively turning the Person X account into a group account, but one lacking send-as rights.
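In Exchange Management Shell terms the two options look roughly like this (Exchange 2007-era cmdlets; the identities and path are hypothetical):

```powershell
# Option 1: hand the manager a PST extract of the departed user's mail.
Export-Mailbox -Identity 'PersonX' -PSTFolderPath 'D:\Exports'

# Option 2: grant access to the live mailbox. FullAccess opens the
# mailbox contents; Send-As is a separate AD right that is deliberately
# NOT granted, so Person Y can read the mail but cannot send as Person X.
Add-MailboxPermission -Identity 'PersonX' -User 'PersonY' -AccessRights FullAccess
```

Keeping Send-As out of the grant is the whole point of the policy: access to the data without the ability to impersonate.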

What we categorically will not do is change a password for an inactive user and give the login to someone else. It comes down to identity theft. If we give Person Y the login info for Person X, Person Y can send email impersonating Person X. And that is wrong on a number of levels.

We resist giving access to the mailbox as well, since a non-trivial proportion of end-users give their work email as the email address for web-registration pages all over the internet. And thus that's where the "password reminder" emails get sent. Having access to someone else's mailbox is a good way to start the process of hacking an identity.

Yes, we do occasionally get a high-level manager pushing us on this. But once we explain our rationale, so far they've backed down. There is a reason we say no when we say no.


Wednesday, October 28, 2009

Filesystem drop-boxes on NTFS

We have a need to provide dropboxes on our file servers. Some professors don't find that Blackboard's dropbox functionality meets their needs, so they rock it 1990s-style. On NetWare/OES, this is a simple thing. Take this directory structure:


And a group called PHYS-1234.CLASSES.WWU

Under NetWare, you set an Inherited Rights Filter or explicitly remove inherited rights, grant the PHYS-1234.CLASSES.WWU group the "C" (Create) trustee right to the directory, and grant the professor's user object full rights to it. This allows students to copy files into the directory but not see anything. On the day the assignment is due, the professor revokes the class group's rights, and ta-da: a classic dropbox. Dead simple, and we've probably been doing it this way since 1996, if not earlier.

It's not so simple on Windows.

First of all, Windows has different rights for directories and files. They use the same bits, but the bits mean different things for files and directories. For instance, the same bit means "Create Files" on a directory, allowing a user to create files in it (analogous to the "C" NSS trustee right), and "Write Data" on a file, granting the ability to modify the file's contents (analogous to the "M" NSS trustee right). So this bit on a directory grants Create, but on a file it grants Modify. Right.

To create a dropbox on NTFS, several things need to happen:
  • Inherited rights need to be copied to the directory, and inheritance blocked. (There is no Inherited Rights Filter on NTFS.)
  • Extraneous rights need to be deleted from the directory. (Again with the no IRFs.)
  • The class group needs to be granted the 'Read' rights suite to "This Folder Only", as well as "Create files".
    • Traverse Folder
    • List Folder
    • Read Attributes
    • Read Extended Attributes
    • Read Permissions
  • "CREATOR OWNER" (a.k.a. S-1-3-0) needs to be granted the 'Read' rights suite to "Subfolders and files only"
The key thing to remember here is that "Subfolders and files only" is an inheritance setting, where "This Folder Only" is a direct rights grant. Files created in this directory will get the rights defined under CREATOR OWNER. If the professor wishes to remove student visibility to their own files, they'll have to take ownership of each file. I have found that Windows Explorer really, really likes being able to see files it just wrote, and this rights configuration allows that.

This series of actions will create a dropbox into which students can copy their files, and still see them, but then can't do anything with them. This is because Delete is a separate right that is not being granted, and the users are not getting the "Write Data" right either. Once a file is in the directory, it is stuck as far as that user is concerned. If a user attempts to save over the invisible file of another user (perhaps the file names are predictable), they'll get Access Denied, since they don't have Write Data or Delete on that invisible file.

If you're scripting this, and for this kind of operation I strongly recommend it, use icacls. It'd look something like this:

icacls PHYS-1234 /inheritance:d
icacls PHYS-1234 /remove CAS-Section
icacls PHYS-1234 /grant Classes-PHYS-1234:(rx,wd)
icacls PHYS-1234 /grant ProfessorSmith:(M)
icacls PHYS-1234 /grant *S-1-3-0:(oi)(ci)(rx)

(rx,wd) = Read-Execute & Write-Data
(M) = The "Modify" simple right. Essentially Read/Write without access-control.
(oi) = Object-Inherit, a.k.a. Files
(ci) = Container-Inherit, a.k.a. Directories
(rx) = Read-Execute
*S-1-3-0 = The SID of "CREATOR OWNER". An explicit grant to this SID works better than using the name, in my experience.
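If you set these up every quarter, generating the icacls lines from the section list saves typing. A minimal sketch in Python that just emits the commands above (the group and account naming conventions, and the "CAS-Section" inherited group, are the illustrative names from this example, not a universal convention):

```python
# Sketch: emit the icacls command lines above for any course section.
# Group names, the professor account, and the "CAS-Section" inherited
# group are illustrative assumptions.
def dropbox_commands(section, professor, inherited_group="CAS-Section"):
    """Return the icacls lines that turn `section` into a dropbox."""
    return [
        f"icacls {section} /inheritance:d",
        f"icacls {section} /remove {inherited_group}",
        f"icacls {section} /grant Classes-{section}:(rx,wd)",
        f"icacls {section} /grant {professor}:(M)",
        f"icacls {section} /grant *S-1-3-0:(oi)(ci)(rx)",
    ]

for line in dropbox_commands("PHYS-1234", "ProfessorSmith"):
    print(line)
```

Feed the output to cmd.exe on the file server, or wrap each line in subprocess.run if you'd rather execute directly.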

This hasn't been battle tested yet, but it seems to work from my pounding on it.

Labels: ,

Friday, October 23, 2009

Insecure applications

Anyone who deals with network security has run into this problem:

Department/powerful-user buys an application for a lot of money. They would like it to work, please. The application's requirements state, "disable all security systems so our crappy-app can work unencumbered." Crappy-app runs into network security problems and dies. Department/PU contacts IT and asks to have network security disabled so their expensive crappy-app can run correctly.

What happens next is a very good test of management's commitment to network security. Will management say:
  • Hmm, that's a lot of money. IT, make an exception for this app.
  • Hmm, that's a lot of money. We'll have to make it work somehow.
  • That's a really insecure app, too bad you spent a lot of money. It will not be installed. Let this be an object lesson to you all.
We just got a request for something like this. Apparently the application's requirements include disabling the Windows Firewall. We've turned it on by GPO, so it will always be on. This is the secure way to live. Whether we get told to make an exception, make it work somehow, or say no remains to be seen.


Thursday, September 24, 2009

Very handy but terrible plugin

Yes, this plugin is a terrible idea.

But then, so are appliances with built in self-signed SSL certificates you can't change. You take what you can get.

Labels: ,

Friday, September 18, 2009

It's the little things

My attention was drawn to something yesterday that I just hadn't registered before. Perhaps because I see it so often I didn't twig to it being special in just that place.

Here are the Received: headers of a bugzilla message I got yesterday. It's just a sample. I've bolded the header names for readability:
Received: from ( by ( with Microsoft SMTP Server (TLS) id 8.1.393.1; Tue, 15 Sep 2009 13:58:10 -0700
Received: from ( by ( with Microsoft SMTP Server (TLS) id 8.1.393.1; Tue, 15 Sep 2009 13:58:09 -0700
Received: from mail97-va3 (localhost.localdomain []) by (Postfix) with ESMTP id 6EFC9AA0138 for me; Tue, 15 Sep 2009 20:58:09 +0000 (UTC)
Received: by mail97-va3 (MessageSwitch) id 12530482889694_15241; Tue, 15 Sep 2009 20:58:08 +0000 (UCT)
Received: from ( []) by (Postfix) with ESMTP id 5F7101A58056 for me; Tue, 15 Sep 2009 20:58:07 +0000 (UTC)
Received: from ([]) by with ESMTP; Tue, 15 Sep 2009 14:57:58 -0600
Received: from (localhost []) by (Postfix) with ESMTP id A56EECC7CE for me; Tue, 15 Sep 2009 14:57:58 -0600 (MDT)
For those who haven't read these kinds of headers before, read from the bottom up. The mail flow is:
  1. Originating server was, which mailed to...
  2. running Postfix, who forwarded it on to Novell's outbound mailer...
  3., who attempted to send to us and sent to the server listed in our MX record...
  4. running Postfix, who forwarded it on to another mailer on the same machine...
  5. mail97-ca3-r running something called MessageSwitch, who sent it on to the internal server we set up...
  6. running Exchange 2007, who sent it on to the Client Access Server...
  7. for 'terminal delivery'. Actually it went on to one of the Mailbox servers, but that doesn't leave a record in the SMTP headers.
Why is this unusual? Because steps 4 and 5 are at Microsoft's Hosted ForeFront mail security service. The perceptive will notice that step 4 indicates that the server is running Postfix.
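If you read these headers often, the bottom-up pass can be automated: Python's stdlib email parser returns Received headers in top-down order (nearest hop first), so reversing the list yields the mail-flow order. A sketch over a made-up two-hop message, not the Bugzilla mail above:

```python
from email import message_from_string

# A made-up two-hop message; real headers carry far more detail.
raw = (
    "Received: from edge.example.edu by mail.example.edu;"
    " Tue, 15 Sep 2009 13:58:10 -0700\n"
    "Received: from origin.example.com by edge.example.edu;"
    " Tue, 15 Sep 2009 13:58:09 -0700\n"
    "Subject: test\n"
    "\n"
    "body\n"
)
msg = message_from_string(raw)
# get_all returns headers top-down (last hop first); reverse for mail-flow order.
for hop in reversed(msg.get_all("Received", [])):
    print(hop)
```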

Postfix. On a Microsoft server. Hur hur hur.

Keep in mind that Microsoft purchased the ForeFront product line lock, stock, and barrel. If that company had been using non-MS products as part of their primary message flow, then Microsoft probably kept that up. Future versions just might move to more explicitly MS-branded servers. Or not, you never know. Microsoft has been making placating noises towards Open Source lately. They may keep it.

Labels: , , ,

Thursday, August 20, 2009

On databases and security

Charles Stross has a nice blog post up about the UK DNA database, database security, and the ever-dropping price of gene sequencing and replication. The UK has a government DNA database of anyone ever booked for anything by the police. Because of how these things work, lots of entities have access to it for good reasons. Like the US No Fly List, being on it is seen as a black mark against your trustworthiness. He posits some scenarios for injecting data into the DNA database through wireless and other methods.

Another thing he points out is that the gear required to reproduce DNA is rapidly coming down in price. In the not-too-distant future, it is entirely possible that the organized criminal will be able to plant DNA at the scene of a crime. This could result in anything from pranks ("How'd the Prime Minister get to Edinburgh and back to London in time to jiz on a shop window?") to outright frame jobs.

Which is to say, once DNA reproduction gets into the hands of the criminal elements, it'll no longer be a good single-source biometric identifier. Presuming of course that the database backing it hasn't been perved.

Labels: ,

Monday, August 17, 2009

SANS Virtualization

Mr. Tom Liston of ISC Diary fame is at the SANS Virtualization Summit right now. He has been tweeting it. I wish I was there, but there is zero chance of me convincing my boss to send me. Even if it was a year in which out of state travel was allowed.

Mostly just general quotes so far, but there have been a few interesting ones.

"When your server is a file, network access equals physical access" - Michael Berman, Catbird

From earlier: "You can tell how entrenched virtualization has become when the VM admin has become the popular IT scapegoat" - Gene Kim

On VMsprawl: "The 'deploy all you want, we'll right click and make more' mentality." Herb Goodfellow, Guident.

I expect to see more as the week progresses.

Labels: ,

Wednesday, August 12, 2009

Legal key recovery

Remember this? About the UK's new laws stating that failing to reveal decryption codes on-demand could result in jail sentences?

Well, it happened. We have yet to see what size of rubber hose is being used, but these two are being sized up.

Labels: ,

Tuesday, August 11, 2009

Changing the CommandView SSL certificate

One of the increasingly annoying things that IT shops have to put up with is web-based administration portals using self-signed SSL certificates. Browsers are increasingly making this setup painful, and for good reason. Which is why I try to get these pages signed with a real certificate whenever they allow me to.

HP's Command View EVA administration portal annoyingly overwrites the custom SSL files when it does an upgrade. So you'll have to do this every time you apply a patch or otherwise update your CV install.
  1. Generate a SSL certificate with the correct data.
  2. Extract the certificate into base-64 form (a.k.a. PEM format) in separate 'certificate' and 'private key' files.
  3. On your command view server overwrite the %ProgramFiles%\Hewlett-Packard\sanworks\Element Manager for StorageWorks HSV\server.cert file with the 'certificate' file
  4. Overwrite the %ProgramFiles%\Hewlett-Packard\sanworks\Element Manager for StorageWorks HSV\server.pkey file with the 'private key' file
  5. Restart the CommandView service
At that point, CV should be using your generated certificates. Keep these copied somewhere else on the server so you can quickly copy them back in when you update Command View.
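Since the files get clobbered on every upgrade, the "keep copies somewhere else" step is worth scripting. A sketch, with hypothetical backup and install paths standing in for the %ProgramFiles% directory from steps 3 and 4:

```python
import shutil
from pathlib import Path

# Hypothetical locations; substitute the real Element Manager directory
# from steps 3 and 4 above, and wherever you stash the backup copies.
CV_DIR = Path(r"C:\Program Files\Hewlett-Packard\sanworks"
              r"\Element Manager for StorageWorks HSV")
BACKUP = Path(r"C:\CertBackup")

def restore_certs(cv_dir=CV_DIR, backup=BACKUP):
    """Copy the saved certificate and private key back over the files
    that a Command View upgrade just overwrote."""
    for name in ("server.cert", "server.pkey"):
        shutil.copy2(backup / name, cv_dir / name)

# restore_certs()  # run after each Command View patch, then restart the service
```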

Labels: , , ,

Thursday, August 06, 2009

Permission differences

In part, this blog post could have been written in 1997. We haven't exactly beaten down the door migrating away from NetWare.

Anyway, there are two areas that are vexing me regarding the different permissioning models between how Novell does it, and how Microsoft does it. The first has been around since the NT days, and relates to the differences (vast differences) between NTFS and the Trustee model. The second has to do with Active Directory permissions.

First, NTFS. As most companies contemplating a move from NetWare to Microsoft undoubtedly find out, Microsoft does permissions differently. First and foremost, NTFS doesn't have the concept of the 'visibility list', which is what allows NetWare to do this:

Grant a permission w-a-y down a directory tree.
Members of that rights grant will be able to browse from volume-root to that directory. They will see each directory entry along the path, and nothing else. Even if they have no rights to the intervening directories.

NTFS doesn't do that. In order to fake it you need two things:
  • Access Based Enumeration turned on on the share (default in Server 2008, and add-on option in Server 2003)
  • A specific rights grant on each directory between the share and the directory with the rights grant. The "Read" simple right granted to "this directory only".
Unfortunately, the second one is tricky. To grant it you have to add an Advanced right, because the "Read" simple right applies to "This folder, subfolders and files" when what you want is "This folder only". What this grant does is give you the right to see that directory entry in the parent directory's listing.

Example: say I grant the group "StateAuditors" write access to this directory:


If I just grant the right directly on "Procedures", the StateAuditors won't be able to get to that directory by way of the share. I could create a new share at that spot, and it'd work. Otherwise, I'll have to grant the above-mentioned rights on each of DocTeam, StandardsOffice, and Accounting.

It can be done, and it can even be scripted, but it represents a significant change in thinking required when it comes to handling permissions. As most permissions are handled by our Desktop group, this will require retraining on their part.
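Here is what that scripting might look like: a sketch that emits the "this folder only" read grants for every directory between the share and the target (the paths and group name are from the example above; the real write grant on the target itself is done separately):

```python
from pathlib import PureWindowsPath

def traverse_grants(share_root, target, group):
    """Emit icacls commands granting `group` read on "this folder only"
    for each directory between the share root and the target. An ACE
    granted with no (OI)(CI) inheritance flags applies to that folder
    only, which is the scope we want."""
    rel = PureWindowsPath(target).relative_to(PureWindowsPath(share_root))
    cmds, current = [], PureWindowsPath(share_root)
    for part in rel.parts[:-1]:   # the target itself gets the real grant
        current = current / part
        cmds.append(f'icacls "{current}" /grant {group}:(rx)')
    return cmds

for cmd in traverse_grants(
        r"D:\Share",
        r"D:\Share\DocTeam\StandardsOffice\Accounting\Procedures",
        "StateAuditors"):
    print(cmd)
```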

Second, AD permissions. AD, unlike eDirectory, does not allow the permissions short-cut of assigning a right to an OU. In eDirectory, this allowed anything in that OU access to the whatever. In AD, you can't grant the permission in the first place without a lot of trouble, and it won't work like you expect even if you do manage to assign it.

This is going to be a problem with printers. In the past, when creating new print objects for Faculty/Staff printers, I'd grant the users.wwu OU rights to use the printer. As students aren't in the access list, they can't print to it unless they're in a special printer-access group. All staff can print, but only special students can. As it should be. No biggie.

AD doesn't allow that. In order to allow "all staff but no students" to print to a printer, I'd have to come up with a group of some kind that contains all staff. That's going to be too unwieldy for words, so we have to go to the 'printer access group for everyone' model. Since I'm the one that sets up printer permissions, this is something *I* have to keep in mind.

Labels: , ,

Friday, July 24, 2009

Whoa! Another out-of-band patch from Microsoft!

TWO Updates!

One is for Visual Studio, and the second is for Internet Explorer. Due to the relative lack of IE use (okay, downright zero) on our servers, we'll probably not hustle this one out the door. Our developers, on the other hand, should pay attention.

Labels: ,

Tuesday, June 30, 2009

Super users

Having been a 'super user' for most of my career, I do not have the same perspective other people do when it comes to interacting with corporate IT. Because of what I do, I see everything. That's part of my job, so that's what I see. I have to know it is there.

However, how each company handles their elevated privilege accounts varies. Some of it depends on what system you're working in, of course.

Take a Windows environment. I see three big ways to handle the elevated user problem:
  1. One Administrator account, used by all admins. Each admin has a normal user account, and logs in as Administrator for adminly work.
    • Advantages: Only one elevated account to keep track of.
    • Disadvantages: Complete lack of auditing if there is more than one admin around. Also, unless an admin has two machines, or a VM for adminly work, they're logged in as Administrator more often than they're logged in as themselves.
  2. One Administrator account; each admin's normal account is elevated to Administrator. Administrator is relegated to a glorified utility account, useful for backups, other automation, or if you need to leave a server logged in for some reason.
    • Advantages: Audit trail. Changes are done in the name of the actual admin who performed them.
    • Disadvantages: These users really need to be exempted from any identity management system. Since there are only going to be a few of them, this may not matter. Also, these users need to treat their passwords like the Administrator password.
  3. Each admin gets two accounts, normal and elevated. As above, Administrator is a glorified utility account, but each admin gets a normal account for everyday use (me.normal) and an elevated account (me.super) for functions that need that kind of access.
    • Advantages: Provides an audit trail, and allows the admin's normal account to be subject to identity management safely. Easy availability of the 'normal' account allows faster troubleshooting of permissions issues (hard to check when you can see everything).
    • Disadvantages: Admin users are juggling two accounts again, with some of the same problems as option 1.
I personally haven't seen the third option in actual use anywhere, even though that's my favorite one. Unixy environments are a bit different. The ability to 'sudo' seems to be the key determiner of elevated access, with ultimate trust granted to those who learn the root password outright. Sudo is the preferred method of doing elevated functions due to its logging capability.

What other methods have you seen in use?

Labels: , ,

Monday, June 22, 2009

IPv6 and the PCI DSS standards

The Payment Card Industry Data Security Standard (PCI DSS) applies to a couple of servers we manage. In those standards is section 1.3.8, which reads:

Implement IP masquerading to prevent internal addresses from being translated and revealed on the Internet, using RFC 1918 address space. Use network address translation (NAT) technologies—for example, port address translation (PAT).

With the testing procedure listed as:

For the sample of firewall and router components, verify that NAT or other technology using RFC 1918 address space is used to restrict broadcast of IP addresses from the internal network to the Internet (IP masquerading).

Which is sound practice, really. But we're running into an issue here that may become more of an issue once IPv6 gets deployed more widely. We're a university that received its netblock back when they were still passing out Class B networks to the likes of us ( in case you care). IPv4 address starvation is not something we experience. Because of this, NAT and IP-Masq have very little presence on our network.

We also believe in firewalls. Just because the address of my workstation is not in an RFC 1918 netblock, doesn't mean you can get uninvited packets to me. This is even more the case for the servers that handle credit-card data.

It is my belief that the intent of this particular standard line is to prevent scouting of internal networks in aid of directed penetration attempts. Another line that should probably be in this standard to support this would be something similar to:
Implement separate DNS servers for public Internet and Internal usage, and prevent public Internet access to the internal DNS servers.
Because the DNS servers we use internally are the same ones listed in the Name Server records for the WWU.EDU domain, you can do a lot of recon of our internal networks from home. We don't allow zone transfers, of course, but enough googling around our various sites and reverse-IP lookups will reveal the general structure of our network, such as which subnets contain most of our servers and which are behind the innermost firewalls.

This is a long way of saying that our IPv4 network functions a lot like the network envisioned when IPv6 was first ratified. Because of this, we're running into some problems with the PCI standards that IPv6 will probably run into as well.

Take the requirement to have the PCI-subject servers on RFC1918 addresses. RFC1918 only applies to IPv4; IPv6's equivalent is RFC4193, so the standard will have to be modified to mandate RFC4193 addresses for IPv6. Until then, for strictest compliance no PCI servers can move to pure IPv6. Servers with both IPv4 and IPv6 addresses are an interesting case, where the v4 address may be an RFC1918 address while the v6 address is NOT private. To my knowledge, the standards are unclear on this topic.
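For reference, the RFC4193 analogue of picking a private netblock looks like this: a /48 prefix is formed from the fd00::/8 block plus a pseudo-random 40-bit Global ID. A sketch (RFC4193 suggests deriving the ID from a timestamp and a MAC address; plain randomness stands in for that here):

```python
import ipaddress
import os

def random_ula_prefix():
    """Generate an RFC 4193 Unique Local Address /48 prefix: the fd00::/8
    block (locally assigned) plus a random 40-bit Global ID. RFC 4193's
    suggested ID derivation (timestamp + MAC, hashed) is replaced with
    os.urandom for this sketch."""
    global_id = os.urandom(5)                     # 40 random bits
    packed = bytes([0xFD]) + global_id + bytes(10)  # fd + ID + zeroed host bits
    return ipaddress.IPv6Network((packed, 48))

print(random_ula_prefix())  # something like fd3f:9c2a:1b44::/48
```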

We had to create NAT gateways for our PCI servers, and create RFC1918 addresses for them just for PCI compliance. The NAT gateway is behind the innermost firewall. These are our only servers behind a NAT gateway of any kind.

In the beginning, IPv6 expressly did NOT have NAT; it was designed to get rid of NAT. However, in recent years the case for IPv6 NAT has been pressed, and there is movement to get something like it working. In my opinion, a lot of that push is to let NAT continue as an obscuration gateway (or low-cost stateless 'firewall') between internal resources and external hostile actors. I strongly suspect that when the PCI DSS standard gets its IPv6 update, it will continue to mandate some form of IP masquerade.

Labels: ,

Tuesday, June 16, 2009

I want my SSH

It seems that anything that runs over TCP can be tunneled over HTTPS. This is not a good thing when it comes to network security. As an example of the kind of whack-a-mole this can result in, I give you the tale of SSH. With a twist.

Power User: Surfs web freely. It is good.

Corporate Overlords: "In the interests of corporate productivity, we will be blocking certain sites." Starts blocking Facebook, MySpace, Twitter, and all sorts of other popular time-wasters like

Power User: Is thwarted. Hunts up an open proxy server on the net. Surfs Facebook in the clear again. It is good.

Corporate Overlords: Informs network security office, creating it from scratch if need be, that it has come to their attention that the blocks are being circumvented, and that This Needs To Stop. Make it no longer so.

Network Security: "Yessir, will do sir. Will need funds for this."

Corporate Overlords: Supplies funds.

Network Security: Adds known-proxies to the firewall block list.

Power User: Is thwarted. Googles harder, finds an open proxy not on the list. Unrestricted internet access restored. It is good.

Network Security: Subscribes to a service that supplies open-proxy block lists.

Power User: Is thwarted. Googles harder. Can't find accessible open proxy anywhere. Decides to make their own. Downloads and installs Squid on their home Linux server. Connects to home server over SSH, tunnels to squid over SSH. Unrestricted internet access restored. It is good.

Network Security: Notices uptick in TCP/22 traffic. Helpdesk tech gets busted for surfing YouTube while on the job. Machine dissection reveals SSH tunnel. Blocks TCP/22 at the router.

Power User: Is thwarted. When home next, moves SSH port to TCP/8080. Gets to work, uses TCP/8080 for SSH session. Unrestricted internet access restored. It is good.

Corporate Overlords: "In the interests of productivity, instant messaging clients not on the corporate approved lists are now banned."

Power User: Is not affected. Continues surfing in the clear. It is good.

Corporate Overlords: "In the interests of productivity, all unapproved non-HTTP off-network internet access is now banned."

Power User: Is thwarted. Moves SSH to TCP/80. Unrestricted internet access restored. It is good.

Network Security: Implements deep packet inspection on the firewall to make sure TCP/80 and TCP/443 traffic really is HTTP.

Power User: Is thwarted. Spends a week, gets SSH-over-HTTP working. Unrestricted internet access restored. It is good.

Network Security: Implements mandatory HTTP proxy, possibly enforcing it via WCCP.

Power User: Is thwarted, cache mucks up ssh sessions. Moves to HTTPS. Unrestricted internet access restored. It is good.

Network Security: Subscribes to a firewall block-list that lists broadband DHCP segments. Blocks all unapproved access to these IP blocks.

Power User: Is thwarted. Buys ClearWire WiMax modem. Attaches to work machine via 2nd NIC. Unrestricted internet access restored. It is very good, as access is faster than crappy corporate WAN. Should have done this much earlier.

Network Security: Developer busted for having a Verizon 3G USB modem attached to machine. Buys desktop inventorying software. Starts inventorying all workstations. Catches several others.

Power User: Sees notice about inventorying. Starts bringing home laptop to work to attach to ClearWire modem. Workstation is squeaky clean when inventoried. Uses USB stick to transfer files between both machines. Unrestricted internet access maintained. It is good.

Network Security: Starts random inspections of cubes for unauthorized networking and computing gear. Catches wide array of netbooks and laptops.

Power User: Hides ClearWire modem in cunningly constructed wooden plant-stand. Buys hot-key selectable KVM switch to hide in desk. Hides netbook in back of file-drawer, runs KVM cable to workstation keyboard/mouse/monitor. Runs USB hub to netbook, hides hub in plain sight next to keyboard. Is smug. Unrestricted internet access maintained. It is good.

Now that 3G and WiMax are coming out, it is a lot harder to maintain productivity-related network blocks. The corporate firewall is no longer the sole gateway between users and their productivity destroying social networking sites. A Netbook with an integrated 3G modem in it will give them their fix. As will most modern SmartPhones these days.

As for information leakage, that's another story all together. The defensive surface in an environment that includes ubiquitous wireless networking now includes the corporate computing hardware, not just the border network gear. This is where USB/FireWire attachment policies come into play. A workstation with two NICs can access a second network, so the desktop asset inventorying software needs to alarm when it discovers desktop machines with more than one IPed interface.

And yet... the only way to be sure to catch the final end-game I outlined above, an air-gapped external network connection, is through physical searches of workspaces. That's a lot of effort to go to in order to prevent information leakage, but if you're in an industry where that's really important, the effort will be invested. In such environments, being caught breaking network policy like that can be grounds for termination. And yes, this is a lot of work for the security office.

All in all, it is easier to prevent information leakage than it is to prevent productivity losses due to internet-based goofing off. Behold the power of slack.

What does this have to do with SSH, the thing I titled this post about? You see, SSH is just a tool. It is a very useful tool for dealing with abstract policies of the http-restricting kind, but just a tool. It can get around all sorts of half-assed corporate block attempts. It has been the focus of many security articles over the years, and because of this it is frequently and specifically included in corporate security policies.

Focusing policies on banning tools is short-sighted, as evidenced by the 3G/WiMax end-run around the corporate firewall. Since technology moves so fast, policies do need to be somewhat abstract in order to not be rendered invalid due to innovation. A policy banning the use of SSH to bypass the web filters does nothing for the person caught surfing using their own Netbook and their own 3G modem. A policy banning the use of any method of circumventing corporate block policy does. A block list is an implementation of policy, not the policy itself.


Thursday, May 28, 2009

The security of biometrics

A question was asked recently about how secure those finger-print readers you find on laptops really are. As with all things security, it depends on what you're defending against. Biometrics have some fundamental problems that make them a bit less secure than passwords in many cases.

For an example of how to fool a fingerprint reader, here is a MythBusters clip where they do just that.

Biometrics measure something you are, which is one of the know/have/are triad of authentication. Two factor authentication has two of these three, which is why some banks are using secure tokens (have) in addition to passwords (know) for online banking. In the abstract, biometrics should be the most secure of the lot since you are the only you in existence.

In practice, however, it is a lot fuzzier. Fingerprints are shared by one in umpty million people, but can change on a day-to-day basis (band-aids, paper-cuts, table-saws). Voice prints change from day to day (colds) and year to year (age). Also, biometrics involve a lot more data than the few bytes of 7-bit ASCII in a typical western-alphabet password. Unlike passwords, which have to match exactly every time, biometric sensors have to allow for a level of uncertainty in measurement, lest they reject legitimate users. It is this uncertainty that allows attacks against biometrics.
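That uncertainty allowance can be illustrated with a toy template matcher. Unlike a password check, the comparison is a distance test against a threshold; the templates and tolerance below are entirely invented:

```python
def hamming(a, b):
    """Bit-level Hamming distance between two equal-length templates."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def biometric_match(enrolled, sample, tolerance=8):
    """A password must match exactly; a biometric template only has to
    be 'close enough'. That tolerance window is what an attacker gets
    to work within."""
    return hamming(enrolled, sample) <= tolerance

enrolled = bytes([0b10110010, 0b01101100, 0b11100001])
noisy    = bytes([0b10110110, 0b01101100, 0b11100011])  # a couple bits off

print(biometric_match(enrolled, noisy))  # True: within tolerance
print(enrolled == noisy)                 # False: exact comparison fails
```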

Take the fingerprint reader in the MythBusters clip. As it turns out fingerprints are easy to replicate, which is why the high end readers attempt to determine if the fingerprint is actually attached to a person in some way. The ways of accomplishing this are typically pulse and skin conductivity; two things a xeroxed fingerprint couldn't have. The MythBusters defeated that particular lock by putting a thumb behind the paper which provided the pulse, and licked the paper which provided conductivity. Tada! Open door. No gelatin moulds needed.

Biometrics are problematic in another sense: you can't change them once they're compromised. This can be done for the other legs of the authentication triad, but not biometrics. Because of this, I find them fundamentally unsuited for sole-source authentication; they really need to be paired with something else in a two-factor setup.

Biometric systems of the future may end up using more than one biometric: fingerprint AND iris scan AND face scan. That would make it a lot harder to fool all the methods well enough at the same time to get through. It's tricky with laptops, though it may become feasible with ever-increasing camera quality.


Friday, May 08, 2009

Password stealing

There has been some press lately about the University of California, Santa Barbara having assumed control of a Torpig botnet. They've put out a report, and it has been getting some attention. There is some good stuff in there, but I wanted to highlight one specific thing.

The Ars Technica review of it says it pretty directly:
The researchers noted, too, that nearly 40 percent of the credentials stolen by Torpig were from browser password managers, and not actual login sessions.
The 'browser password managers' are the password managers built in to your browser of choice for ease of logging in to sites. I have personally never, ever used them, because the idea of saving my passwords like that gives me the creeps, even if they are AES encrypted. However, the way to attack those repositories is not by grabbing the file; it is through the browser itself. File-level security is only part of the game, even if it is the easiest part to secure.

This extends to other areas as well. I exceedingly rarely click the, "remember this password," button in anything I'm on. This includes things like the Gnome keyring. That kind of thing is not a good idea in general.

The closest I get is a text file on one of these (now with Linux support!), and even that is a compromise between having to memorize a lot of cryptographically secure passwords (long AND complex) and the least wince-worthy memory-jogging method. I can still describe several attack methods that could compromise that file, not the least of which is a clipboard/keylogger, or even a simple file-sniffer running in the background that trawls through any mounted USB sticks. But for long work passwords that I'll use maybe four or five times a year, yet still have to know, it's a compromise.

There are still some passwords I'll never write down outside of a password field: the god passwords, any password I use on a daily or even weekly basis (I use those often enough for true memorization), and passwords used for any kind of financial transaction. For those kinds of high-value passwords, the convenience of memory prosthetics doesn't enter into it.

Labels: ,

Wednesday, May 06, 2009

Windows 7 and NetWare CIFS

Now that RC1 is out, we're trying things. Aaaaaand it doesn't work, even when set to the least restrictive LAN Manager authentication level. Also? Windows 7 has a lot more NTLM tweakables in the policy settings that we don't understand. But one thing is clear: Windows 7 will not talk to NetWare CIFS out of the box. The Win7 workstation will need some kind of tweaking.

I may need to break out Wireshark and see what the heck is going on at the packet level.

Life on the bleeding edge, I tell you.

Update: Know what? It was a name-resolution issue. It seems that once you went to the resource by its FQDN rather than the short name, the short names started working. Kind of odd, but that's what did it. A bit of packet sniffing may show why the short-name method didn't work at first (it should have), which just might illuminate either a bug in Win7 or a simple quirk of Windows name-resolution protocols.

The only change that needed to be made was dropping the LAN Manager Authentication Level so it doesn't offer NTLMv2 responses unless negotiated for them.
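For reference, that policy maps to the `LmCompatibilityLevel` registry value under `HKLM\SYSTEM\CurrentControlSet\Control\Lsa`, with levels 0 through 5. The descriptions below are from my memory of Microsoft's documentation, so verify them before relying on this; the helper function is purely illustrative:

```python
# LAN Manager Authentication Level -> LmCompatibilityLevel registry value.
# Descriptions from memory of Microsoft's docs; check before relying on them.
LM_COMPAT_LEVELS = {
    0: "Send LM & NTLM responses",
    1: "Send LM & NTLM - use NTLMv2 session security if negotiated",
    2: "Send NTLM response only",
    3: "Send NTLMv2 response only",
    4: "Send NTLMv2 response only / refuse LM",
    5: "Send NTLMv2 response only / refuse LM & NTLM",
}

def client_can_fall_back_to_ntlmv1(level):
    """A server that only speaks NTLMv1 (as NetWare CIFS apparently does)
    needs the client at level 2 or below."""
    return level <= 2

print(client_can_fall_back_to_ntlmv1(1))  # → True
```

Level 1 is the "use NTLMv2 session security if negotiated" setting described above, which is why it talks to the NetWare box while level 3 and up do not.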


Monday, May 04, 2009

Cooperative multitasking and security, a browser perspective

An article over on Ars Technica goes into some detail about a dispute between two Firefox extensions that has gotten nasty. It seems that the extension environment inside Firefox (and that other Mozilla browser, SeaMonkey) is not sandboxed to any significant degree. The developer of NoScript was able to write in a complete disabling of the AdBlock Pro extension.

In some ways this reminds me strongly of how NetWare works. NetWare uses cooperative multitasking, rather than the preemptive multitasking used in pretty much every other modern server-class OS; this is part of the reason NetWare can squeeze out the performance it does under high load. The Firefox extensions run as children of the main Firefox process, can freely interact with each other, and when they crash hard can take the whole environment down with them.
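To illustrate the trade-off (a toy model, not how NetWare or Firefox are actually implemented): a cooperative scheduler trusts every task to hand control back voluntarily. That keeps scheduling overhead near zero, but one misbehaving task can wedge everything.

```python
def well_behaved(name, work):
    """A cooperative task: does a slice of work, then yields the CPU back."""
    for i in range(work):
        yield f"{name} step {i}"

def run(tasks, max_steps=10):
    """A tiny round-robin cooperative scheduler. It has no way to preempt a
    task that never yields; it simply trusts them all."""
    log = []
    while tasks and len(log) < max_steps:
        task = tasks.pop(0)
        try:
            log.append(next(task))
            tasks.append(task)   # cooperative: back of the queue
        except StopIteration:
            pass                 # task finished
    return log

log = run([well_behaved("a", 2), well_behaved("b", 2)])
print(log)  # → ['a step 0', 'b step 0', 'a step 1', 'b step 1']
```

A `while True:` loop with no `yield` in one task would starve every other task forever, which is the generator-flavored version of one NLM (or one extension) hanging the whole environment.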

Another way it reminds me of NetWare is the seeming lack of memory protection. In NetWare, unless you specifically declare that a module is to run in a protected memory space, it runs in the kernel's memory space. This means that one program can access the memory of another, so long as they're in the same memory space. This stands in contrast to other operating systems, which provide a protected memory space for each process. It sounds to me like Firefox has the equivalent of a single memory space, and all processes have access to it. This makes the extension war described above possible without resorting to outright hacking around security features.

Moving away from a cooperative multitasking model made Windows a lot more stable (Win3.11 to Win95), as did the introduction of true memory protection (Win98 to Win2K). Memory protection is a major security barrier, and it is something that Firefox seems to lack. If a Firefox install is unlucky enough to have an evil extension added to it, all the data that passes through that browser could be copied to persons nefarious, much like the Browser Helper Objects in IE.

Does it seem odd to you that I'm talking about Operating System protection features being built into browsers? These days browsers are in large part operating systems on their own, albeit ones missing the key feature of having exclusive control of the hardware.

These problems do appear to be on the verge of changing. Both Chrome and IE8 launch each browser tab in its own process, which adds a barrier between things spawned in those tabs. That way, when a flash game starts consuming all available CPU, you can kill the tab and keep the rest of your browser session running. Unfortunately, process separation still won't stop NoScript from killing AdBlock Pro. For that, more work needs to be done.


Thursday, April 16, 2009

A Mac botnet?

Ars Technica has an article up about a detected botnet based on Mac OS X machines. This is interesting stuff, since you don't SEE this kind of thing all that often. OS X is the #2 operating system after Windows, but it is a distant #2. Also interesting: the infection vector appears to be pirated software, a vector that brings a tear of nostalgia to my eye for its sheer antiquity. Clearly this would be a slow-growing botnet, but that's OK, since a large percentage of Mac users don't bother with AV software because they're not running Windows and "don't need it".

What would be more impressive would be a drive-by downloader à la IE, but with Safari instead. I don't remember hearing any press about anything other than proof-of-concept for that, though.


Tuesday, October 28, 2008

The gift of security

Last Christmas my parents bought me a 4GB IronKey. This is a nifty little device! And really, the gift of data security is rather thoughtful. And yesterday, it finally got the new firmware that enables Linux support!

Between then and now I haven't really been able to use it at work. And since transporting files between work and home is one of the nicer features of it, it has largely sat unused. But right this moment it is mounted to my openSUSE 10.3 workstation. This beats a floppy disk for transporting pgp/gpg keys.


Thursday, November 15, 2007

Encryption & key demands

As some of you know, the UK has passed a law which authorizes jail time for people who refuse to turn over encryption keys. If I'm remembering right, 2-3 years. This is a bill that's been making the rounds for quite some time, and got passage as a terror bill. Nefarious elements have figured out that modern encryption technologies really can flummox even the US National Security Agency's deep-crack mainframes, and they therefore use them. There was a reason encryption technologies were classified as a munition and therefore export-restricted.

Those of you who've been with Novell/NetWare long enough remember this. Back in the day the NICI and other PKI components came in three flavors, Domestic (strong, 128bit), International (weak, 40bit? 56bit?), and basic (none). Things have loosened up since then.

Part of the problem with encryption is that while the private keys may be strong, securing them is tricky. When the feds raid your house and grab every device capable of both digital computation and communication to throw into the evidence locker, their computer-forensics people can get your private keys. However, if your private keys are further locked away, as with PGP, it won't do them much good. To gain access to your key-ring they'll need the pass-phrase.
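A sketch of why the pass-phrase is the crown jewel: the on-disk key-ring is encrypted under a key derived from the pass-phrase, so everything reduces to guessing it. This uses Python's standard-library PBKDF2 as a stand-in for whatever key-derivation function a real PGP implementation actually uses; the iteration count is an arbitrary illustrative choice:

```python
import hashlib
import os

def derive_key(passphrase, salt, iterations=200_000):
    """Stretch a pass-phrase into a symmetric key. The key-ring on disk is
    encrypted under this, so forensics without the pass-phrase gets only
    ciphertext plus an expensive guessing problem."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
assert key == derive_key("correct horse battery staple", salt)  # reproducible
assert key != derive_key("password123", salt)                   # wrong guess fails
```

The iteration count is the knob that makes each offline guess cost real CPU time, which is exactly why interrogation (or legislation) is cheaper than brute force.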

That's where the new law in the UK comes in. Police have two options to figure out your pass-phrase. They can intercept it somehow, or they can point a jail term at your head and demand the pass-phrase.

That doesn't work in the US, thanks to the Bill of Rights and the 5th Amendment. This is the amendment that states that you have a right not to self-incriminate, and by extension this means that police can't force you to divulge information that could be detrimental to you. As it happens, the people who wrote this amendment had the English legal system in mind when they came up with the idea, what with us being an ex-colony and all that. So if you practiced safe encryption handling, didn't write the pass-phrase anywhere, and made a point of making sure it never hit disk in the clear, the US Government can't penalize you for not telling them the pass-phrase. A US law similar to the UK law would face a much harder judicial battle than it got in the UK.

Which isn't the case in the UK. As one crypto expert I spoke with once said, the UK law amounts to, "rubber-hose cryptography." Which is an allusion to the fact that a sufficient application of pain (i.e. torture) can cause someone to fork over their own encryption keys, which is a concern in certain totalitarian regimes.

The accepted response to 'rubber-hose' crypto methods is to use a 'duress key'. This key will either destroy the crypted data, or reveal harmless data (40GB of soft porn!). The problem with such a key is that it works best if it is not known to exist. Forensics analysis can show what kind of crypto is in use, and if that particular type supports a duress key, the interrogators can work that into their own information-extraction methods. Also, any forensics person worth their salt works on a COPY of the data (as the RIAA knows all too freaking well, digital data is very easy to duplicate), so having the duress key destroy the data isn't a loss. In a judicial framework, having the given key destroy the (copy of the) data can earn the person a "hampering a lawful investigation" charge and even more jail time.

All that said and done, there are still PLENTY of ways for the US Government to gain access to pass-phrases. I've heard of at least one case where a key-logger was installed on a machine for the express purpose of intercepting the key-strokes of the pass-phrase. If the pass-phrase exists in the physical realm in any way (outside of your head), they can execute search warrants on it. Some crypto programs don't handle pass-phrases well. Also, if you have a Word document that was encrypted, then decrypted so you could view it, the temp files Word saves every 5-10 minutes are in-the-clear and recoverable through sufficient disk analysis. The end-user needs to know about safe handling of in-the-clear data.

All of which is expensive work. If the Government can save several thousand dollars in tech time by simply asking you for the pass-phrase and throwing you in the clink if you don't give it, that's what they'll do. If the person under investigation is known to be very crypto-savvy (uses a Linux machine with an encrypted file-system that requires a hand-entered password to even load, and uses PGP or similar on top of that to defend against attacks while the file-system is mounted), it becomes WAY cheaper to go the judicial route than the tech route.

Yeah, 2-3 years may be much better than the 20-life you'd face on a terrorism charge. But you'd be in custody the whole time, and they'll be spending that 2-3 years going over your encrypted data the hard way. And if actual actionable evidence surfaces to support a terrorism charge, you can bet your bippy that you'd be hauled into a court-house for a new trial, only this time facing 20-life. If you're in the UK. Here in the US they'll just keep you under surveillance until they get the pass phrase or enough other evidence to hold you down in custody and give them an excuse to throw everything you've ever touched into evidence lock-up.


Thursday, October 25, 2007

Virtualization and security

I've known for a while now that virtualization as it exists in x86 space is not a security barrier. Heck, it was stated outright at BrainShare 2006 when Novell started pushing AppArmor. The Internet Storm Center people had an article on it a month ago. And now we have an opinion from the OpenBSD creator about it, which you can read here.

It sounds like the main reason virtualization isn't a security barrier is because of the CPU architecture. Intel is making advances with this, witness the existence of VT extensions. Also, as virtualization becomes more ubiquitous in the marketplace Intel will start making their CPUs more virtualization-friendly. Which is to say that they're not very VM friendly now.

And as Theo stated in his thread, "if the actual hardware let us do more isolation than we do today, we would actually do it in our operating system." Process separation is its own form of 'virtualization', and is something that is handled in software right now. Anything in software can be subverted by software, so having a hardware enforceable boundary makes things stay where they are put.

Which is why I hold the opinion that you should group virtual machines with similar security requirements on the same physical hardware, but separate machines subject to different regulations and requirements. Put another way: do not host the internal Time Card web-server VM on the same hardware as your public web-server, even if they're on completely different networks. Or, do not host HIPAA-subjected VMs on the same ESX cluster as your Blackberry Enterprise Server VM.

Virtualization as it exists now in x86-space does provide a higher barrier to get over to fully subvert the hardware. Groups only interested in the physical resources of a server, such as CPU or disk, may not even need to subvert the hypervisor to get what they want; so no need to break out of jail. Groups intent on thievery of information may have to break out of jail to get what they want, and they'll invest in the technology to do just that.

Warez crews don't give a damn about virtualization; they just want an internet-facing server with lots of disk space they can subvert. That can be a VM or a physical server for all they care. They're not the threat, though the resource demands they can place on a physical server may cause problems on unrelated VMs through simple resource starvation.

The threat is the cabals looking to steal information for resale. They are the ones who will go to the effort to bust out of the VM jail, and they're a lot harder to detect since they don't cause huge bandwidth spikes the way the warez crews do. They've always been our worst enemy, and virtualization doesn't do much at all to prevent them gaining access. In fact, virtualization may ease their problem as we group secure and insecure information on the same physical hardware.


Monday, September 17, 2007

Email encryption

The last time I seriously took a look at email encryption was at my old job, using GroupWise 5.5. I did some poking around here with Exchange/Outlook and made it work, but it wasn't a serious look. Back then there was still real doubt about which standard would reign supreme: PGP (or GPG) vs S/MIME. PGP had been around for ages, where S/MIME used the same PKI infrastructure used by banks for secure online banking.

Outlook and GroupWise both had S/MIME built in. Both used the Microsoft crypto API. Remember, this was GW 5.5 so there was no Linux version yet.

If you look at posts on Bugtraq, clearly PGP is reigning supreme. A lot of posts there tend to be signed, and almost all of the signatures are GPG (GnuPG) or PGP. So that would tend to suggest that PGP-style stuff is winning. Except... Bugtraq is primarily a Linux list that also bashes Microsoft, so the preference for the old-school secure email (PGP) is easy to understand.

Yet why are the major email systems shipping with S/MIME built in?

There are several reasons why digitally signed email hasn't caught on. First and foremost, it requires active use on the part of the user, in the form of explicitly stating "I trust this user and their certificate". Second, managing certificate renewals and changes adds work. Third, certificates for S/MIME are subject to the same SSL problem web-site certificates are: price. Fourth, the certificates (be they PGP or S/MIME) are generally only usable on a single operating-system instance, which makes portability challenging. At least one CA still offers free email SSL certificates for personal use. I haven't read the details, but I suspect that 'professional use' is excluded, which would prevent WWU from going to them wholesale. I'll have to look.

The very nature of secure email makes it something only people who want it will strive for. This is not something that can be pushed down from On High unto the masses for an enterprise deployment. As with sites that have bad SSL certificates, Outlook will throw a Warning! message when it receives an email signed by a certificate it doesn't trust or know about, and end users are notorious for being annoyed by pop-ups they view as superfluous. As with SSL certificates, we have the Trusted Certificate Authority problem, which means that any externally signed communication needs to be signed with a certificate already known by everyone (i.e. VeriSign or similar).

And ALL of this doesn't touch the problem of digitally signed email in web clients like Gmail. I have many friends who use their web browser as their primary email interface. AJAX can do a lot, but I don't know if it can do secure decryption/validation of email. I'm pretty sure AJAX can do insecure decryption/validation, which rather defeats the point. Right now, in order to do actual secure email you have to use a full mail client with support for the relevant protocol(s). Which means that, as above, only people serious about email security will take the steps to do email securely; it can't be mandated and invisible to the user.
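To see what signature validation actually buys you, here's a toy sign/verify round-trip. Loud caveat: this uses an HMAC with a shared secret purely as a stand-in; real PGP or S/MIME signatures use public-key crypto, so the verifier never needs the signing secret. The tamper-evidence idea is the same, though:

```python
import hashlib
import hmac

def sign(message, key):
    """Produce a detached 'signature' over the message body (toy version)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message, signature, key):
    """Recompute and compare in constant time; any tampering breaks the match."""
    return hmac.compare_digest(sign(message, key), signature)

key = b"illustration-only-shared-secret"
sig = sign(b"Wire the funds to account X.", key)
assert verify(b"Wire the funds to account X.", sig, key)
assert not verify(b"Wire the funds to account Y.", sig, key)  # tampering detected
```

The rub for webmail is that this computation has to happen somewhere the secret material is safe; doing it in JavaScript served by the same party delivering the mail is the "insecure validation" that defeats the point.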

So, things haven't changed much in the 4-5 years since I last looked at it.

Portability could be solved through creative use of a directory-service. I know for sure that eDir can store SSL certificates just peachy, the trick is getting them out and integrated into a mail client by way of LDAP. Active Directory has similar capabilities, but even Microsoft hasn't implemented AD/SMIME integration.

That said, directory integration brings its own problems. I, with my god-like powers, can create and export private keys for generic users and through that securely impersonate them. This creates a non-repudiation problem, and is the reason Microsoft's crypto API has a setting to require a password to be entered before using a certificate for signing. That password is currently set on the local machine, not in AD, which is how god-like me can be foiled in my quest to forge emails.

Still, email security remains the purview of those to whom it is important. Lawyers and security professionals are the groups I run into most often that use it. I know some hobbyists that use the technology between themselves, but that's all it is, a way to prove that they can make the technology work in the first place. It still isn't ready for "the masses".


Tuesday, November 21, 2006

Virtual Machines are not a security barrier

Several of the sessions I attended at BrainShare this year were on AppArmor. The project lead for that product presented several times, and several times he repeated this mantra. A Virtual Machine is not a security barrier. This is true for full-virtualization products such as VMWare, and paravirtualization such as Xen.

Yesterday's SANS diary had an entry about VM detection on the part of malware. As you can well imagine, spyware and adware researchers make great use of products like VMware to analyze their prey; VMware has handy snapshotting abilities, which makes reverting your VM to its pre-infection state easy. Unfortunately for them, "3 out of 12 malware specimens recently captured in our honeypot refused to run in VMware." The bad-ware authors are aware of the technique, and program their stuff not to run.

What's more insidious is that there are cases where the malware uses the VMware detection not to lie low, but to infect the HOST machine instead. While this may not affect something like ESX Server, which is a custom OS, other products like Xen in full-virtualization mode or VMware Server running on Windows or Linux would be exposed this way. Figuring out that your malware process is running in a virtual machine is easy and scriptable, and breaking out of the VM is just as scriptable.
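The sort of checks involved can be sketched like so. These specific MAC prefixes and vendor strings are from my memory of VMware's assigned OUIs and common DMI strings, so treat them as illustrative rather than authoritative; real malware reportedly uses many more tells (CPUID quirks, magic I/O ports, registry keys):

```python
# Illustrative VM-detection heuristics; the listed strings are from memory.
VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:50:56")  # VMware-assigned OUIs
VM_VENDOR_STRINGS = ("vmware", "virtualbox", "qemu", "xen")

def looks_like_a_vm(mac, dmi_vendor):
    """Cheap, scriptable guess at 'am I in a VM?' from two easy observables:
    the NIC's MAC address and the DMI/BIOS vendor string."""
    return (mac.lower().startswith(VM_MAC_PREFIXES)
            or any(s in dmi_vendor.lower() for s in VM_VENDOR_STRINGS))

print(looks_like_a_vm("00:0C:29:ab:cd:ef", "Dell Inc."))  # → True
```

Which is the point: if a researcher's sandbox is this easy to fingerprint from userspace, deciding to stay dormant (or to go after the host) is a one-line branch.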

Virtual Machines are not a security barrier, nor do they make you significantly safer. They're just different.


Monday, October 16, 2006

Strong passwords in a multiple authentication environment

One of the challenges of coming up with a reasonable password complexity policy is taking into account the relative strengths and weaknesses of the operating environments those passwords will be used in. Different operating systems have different strengths and weaknesses when it comes to password strength. Different environments have different threat exposures.

The two biggest things to worry about for brute-force password problems are random guessing, and hash-grab-and-crack. I'm ignoring theft or social engineering for the moment, as plain old password complexity doesn't do a lot to address those issues. Random guessing is the reason intruder lockout was created. Hash-grab-and-crack is what pwdump1/2/3/4 was created to do, with offline processing.

Password guessing will work on any system, given sufficient time. Not all systems even permit grabbing the password hashes (NDS passwords, for instance), while on others the location is rather well known (/etc/shadow). Grabbing the password hashes is preferred, since it permits offline guessing of passwords that won't trip any intruder-lockout alarms.

As for OS-specific password issues, we have three different systems here at WWU. Our main student shell server is running Solaris 8, so passwords longer than 8 characters are meaningless; only the first 8 characters count. Our eDirectory tree is running Universal Passwords, so passwords of any length are usable. Our Windows environment is not restricted to NTLM2, which means we have NTLM password hashes stored; and in this era of Rainbow Tables, any password shorter than 16 characters (of ANY character, regardless of char-set) is laughably easy to crack if you have the hash.

This leads us to strange cases. A short but highly complex gibberish password is very, very secure on Solaris, but laughably easy in Windows. And a long but simple password is a very good Windows password, but laughably easy on Solaris.

So, what are we to do? That's a good question. Solaris passwords prefer complexity over length, and Windows passwords prefer length over complexity. This would imply that the optimal password policy is one that mandates long (longer than 16 characters) complex (the usual rules) passwords. Solaris will only take the first 8 characters, so the complexity requirement needs to be beefy enough that the first 8 characters are cryptographically strong.

One of the first things a hacker does once they gain SYSTEM access on a Windows box is dump the SAM list on that server; I've seen this done every time I've had to investigate a hacked server. When the hacked machine is a domained machine, the threat to the domain as a whole increases. So far I haven't seen a hacker successfully dump an entire AD domain. On the other hand, one memorable case saw the SAM dumped at 12:06am, and a text-file containing the cleartext passwords, including the local-administrator account's (a password 10 characters long, three character sets, no dictionary words: in other words, a good Solaris password), dumped at 12:17am. Clearly a Rainbow Table had been used, to crack it that fast. This was almost two years ago.

One problem with long, complex passwords that are complex enough in the first 8 characters is the human memory. 8-10 characters is about as long as anyone can remember a gibberish password like "{BJ]+5Bf", and it'll take that person a while to learn it. Going the irregular-case and number-substitution route can add complexity, but cryptographically speaking, not a lot. Password crackers like John the Ripper contain rules to replace "a" with "4" and "A", to make sure your super-secret password "P45$w0r|)" is cracked within a minute. Yet something like "FuRgl1uv2" works out, as it contains bad spelling.
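Here's roughly what those substitution rules look like from the cracker's side. This is a simplified sketch (John the Ripper's actual rule engine is far richer); the point is that leet-speak multiplies the search space by a small constant per letter, not exponentially:

```python
from itertools import product

# A simplified substitution table of the kind crackers walk through.
SUBS = {"a": "a4@A", "e": "e3E", "i": "i1!I", "o": "o0O", "s": "s5$S"}

def mangles(word):
    """Every leet-speak variant of a dictionary word this table can produce."""
    choices = [SUBS.get(c, c) for c in word.lower()]
    return {"".join(combo) for combo in product(*choices)}

variants = mangles("pass")
print(len(variants))  # → 64: a 1x4x4x4 blow-up, trivial extra work per word
```

Sixty-four guesses per dictionary word is nothing to an offline cracker, while a misspelled base word never appears in the dictionary at all, which is why bad spelling beats substitution.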

Never underestimate the cryptographic potential of bad spelling. Especially creative bad spelling.

We still haven't solved this one. We're working on upgrading Solaris to a version that'll take longer passwords, and the resultant migration that'll be required. We know where we need to go, but getting the culture at WWU shifted so that such requirements won't end up in a user revolt and passwords on post-its is the problem. Two-factor is not viable for a number of reasons (cost and web-access being the top two). Mandatory password rotation is something we only do in the 'high security' zone (Banner), not something we do for our regular ole systems. It's a bad habit we're trying to break, but institutional inertia is in the way, and that takes time to overcome.

If Microsoft salted their NTLM hashes, and thereby rendered Rainbow Tables mostly useless, we wouldn't be in this mess. They've seen the light (NTLM2, and whatever Vista-server will bring out), but that won't help all the legacy settings out there. NTLM is already legacy, yet we have to keep it around for a number of reasons, right up there being that Samba doesn't speak NTLM2.

Who knows, it may end up that what solves this for us is getting Solaris to take long passwords, rather than educating all of our users on what a good password looks like.


Monday, October 03, 2005

Static Kernel

I just spent more time than I care to think about compiling a statically-linked kernel for the one Linux server I manage. It's a server that does one and only one thing, so I can afford to crank it down pretty hard. This step should make root-kitting it a little harder.

But it took a l-o-n-g time to compile a kernel that'd work. I thought I could get away with taking a dynamic kernel that showed no modules in 'lsmod' and then flipping the 'use modules' switch. But that just changed everything listed as "m" to "y" in the .config file, and that, as you might expect, didn't work out so good. I ended up with a kernel that was about 4.5 megs, and it complained, "Kernel is too big, consider using modules or bzImage". Since modules were out of the question and I was already using bzImage, I had to see what I could whack out.

Round two worked better, but took a lot of tweaking. I took the config file that worked for the no-modules-loaded build and did a find-and-replace of "=m" with "=n", then set it to not use loadable modules. It wouldn't compile, since there were dependencies in crypto and a few other areas.
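If I were scripting that find-and-replace, it's worth noting that kconfig's own convention for a disabled option is `# CONFIG_FOO is not set` rather than `=n` (my understanding is that `make oldconfig` will clean up either form, but don't take my word for it). A sketch of the rewrite:

```python
def disable_modules(config_text):
    """Rewrite a kernel .config so every '=m' option is turned off, using
    kconfig's '# CONFIG_FOO is not set' convention for disabled options."""
    out = []
    for line in config_text.splitlines():
        if line.endswith("=m"):
            out.append("# %s is not set" % line.split("=")[0])
        else:
            out.append(line)
    return "\n".join(out)

print(disable_modules("CONFIG_E100=m\nCONFIG_EXT3_FS=y"))
```

Running `make oldconfig` afterwards would then surface the dependency breakage (crypto and friends) as prompts, instead of leaving it for the compiler to find.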

About 15 compiles later, I now have a kernel that works. The big problem I had to figure out was why eth0 kept giving me a SIDIOINUSE or something like that. It turned out that a touchpad driver was attempting to load on the IRQ for eth0. I removed the touchpad driver from the .config, and now I have both ethernet cards working. Yay!

Still took too long.


Friday, August 12, 2005

More security fun in .edu-land

Two things.

FrSIRT announces a vulnerability in BackupExec Remote Agent that currently (as of this posting) has no patch. This will be a problem! Mark my words.

And next, from a SANS mailing I get:
Editor's Note (Pescatore): There has been a flood of universities acknowledging data compromises and .edu domains are one of the largest sources of computers compromised with malicious software. While the amount of attention universities pay to security has been rising in the past few years, it has mostly been to react to potential lawsuits due to illegal file sharing and the like - universities need to pay way more attention to how their own sys admins manage their own servers.
Hi, that's me. As I covered a couple of days ago, we have some challenges that corps don't have. For one, we have no firewall, just router filtering rules. And today I learned more about our security posture campus-wide.

It seems the buildings have some pretty restrictive filters on them at the router level, but our servers don't have much at all. This seems to be driven by a need to be good netizens rather than a need to prevent security intrusions. End-user systems are hideously hard to patch, spyware is rampant, and it doesn't take much to turn a WinXP machine that someone isn't paying attention to into a botnet drone.

Servers, on the other hand, are professionally managed. We pay attention to those. Security is a priority very near the top! Therefore, we don't have to be as strict (from a network point of view) with them as we do end-user systems.

Because of the firewall-free nature of the vast majority of our datacenter (more on that later), any application we buy that runs on a server has to be able to run in a hostile network. This has caused real problems. A lot of programs assume that the datacenter is a 'secure' environment and that hackers will be rattling door-knobs very infrequently. BackupExec comes to mind here. Add into that independent purchase authority, and you get departments buying off-the-shelf apps without considering their network security requirements in the context of the WWU environment.

Every single server I've had to unhack since 1/1/2005 has been due to:
  • Non-Microsoft patches that got missed (Veritas)
  • Microsoft patches that didn't get applied correctly as part of our standard update procedure. This is the classic, "the dialog said 'applied', but it really wasn't," problem.
  • Zero-day exploits (Veritas, others) where the vulnerability is not formally acknowledged by the vendor
This is the point where I say that Microsoft is no longer the bad boy of the bunch. Their patching process and built-in tools are rich enough that they're no longer the #1 vector of attack. Yes, these were all on Windows, but it isn't Windows getting hacked most of the time, it's the apps that sit on top of it. We have now hit the point where we expect Windows Update-like updating for all of our apps, and forget to check vendors weekly.

Heck, weekly is too long! Take this new Remote Agent exploit. When the last Remote Agent vulnerability was announced in June, it was less than 6 days after the patch was made available that the exploits started. We took 9 days to apply it, since it needs reboots. Too long!

We now have to have a vendor and a patch-source for each and every program installed on a server. And even that isn't enough. Take HP. They just announced several bugs in their Server Management products, but I saw the notice on Bugtraq, not from any notice from HP. They offer a wide enough variety of programs that it is difficult to determine if the broken bits are the bits I installed on my servers or if I'm safe.

We have a Tuesday-night regular downtime arranged so we can get the MS patches in. For things like the Veritas Remote Agent, we'd have to apply a patch 'out of cycle', and that's tough. It took 6 days for the last Remote Agent bug to lead to hacked servers. For something like this, where there may already be a Metasploit widget created, we need to apply it ASAP after the patch releases. A weekly patch-application interval is no longer good enough; we need to be able to do it in 24 hours.
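The arithmetic of the exposure window is trivial, but worth staring at. The dates below are invented for illustration; only the intervals match the June episode (exploit less than 6 days after the patch, 9 days for us to apply it):

```python
from datetime import date

def exposure_days(exploit_seen, patch_applied):
    """Days spent exploitable: from first public exploit until we patched
    (zero if we somehow patched first)."""
    return max(0, (patch_applied - exploit_seen).days)

# Hypothetical dates; only the 6-day and 9-day intervals match the post.
patch_released = date(2005, 6, 1)
exploit_seen = date(2005, 6, 7)    # < 6 days after the patch
patch_applied = date(2005, 6, 10)  # 9 days for us, thanks to reboot windows

print(exposure_days(exploit_seen, patch_applied))  # → 3
```

Three days of known-exploitable exposure per incident, multiplied across every app on every server, is the argument for the 24-hour out-of-cycle process.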

Presuming we are even aware the patch exists in the first place. From the same e-mail:
Editor's Note (Paller): Sadly many of the people who bought BrightStor packages have no idea the vulnerability exists. Computer Associates, like other larger vendors, sold through resellers to customers who never bothered to register. Those organizations, large and small, are at extreme risk and are completely unaware of the risk.
Which is precisely the problem. Heck, we're registered with Veritas and HP, but we were not notified of the recent problems; we had to find them out for ourselves. This is why auto-patching products that come with patch-feeds charge such extortionist amounts of money. It is ALMOST worth it to pay 'em.

Really, we're like an ISP that has far more official responsibility over the machines on our network. A traditional ISP has terms of service and a pretty 'hey, whatever' attitude, and then hardens the crap out of their own internal servers. We have to run business apps in an ISP environment. And if one of our workstations gets hacked and becomes a drone that participates in a DoS, we get sued, not the owner of the PC (...which is... us... unlike an ISP).

A final case in point, and then I'll sign off. We recently reviewed a point-of-sale application that an organization on campus will be using. It took about 45 seconds after the presentation began before we identified the glaring hole in their security setup. Sadly, this product was already purchased, and apparently a lot of other higher-eds use it too. We just get to try to minimize the hole as best we can, without actually fixing it.

