Monday, December 07, 2009
Account lockout policies
Since NDS was first released back at the dawn of the commercial internet (a.k.a. 1993), Novell's account lockout policies (known as Intruder Lockout) have been settable based on where the user's account exists in the tree. This was done per Organizational Unit or Organization. In this way, users in .finance.users.tree could have a different policy than .facilities.users.tree. This was the case in 1993, and it is still the case in 2009.
Microsoft only got a hierarchical tree with Active Directory in 2000, and even then they didn't get around to making account lockout policies granular. For the most part, there is a single lockout policy for the entire domain with no exceptions; 'Administrator' is subjected to the same lockout as 'Joe User'. With Server 2008, Microsoft finally got some kind of granular policy capability in the form of "Fine-Grained Password and Lockout Policies."
This is where our problem starts. You see, with the Novell system we'd set our account lockout policies to lock after 6 bad passwords in 30 minutes for most users. We kept our utility accounts in a spot where they weren't allowed to lock, but gave them really complex passwords to compensate (as they were all used programmatically in some form, this was easy to do). That way the account used by our single sign-on process couldn't get locked out and crash the SSO system. This worked well for us.
Then the decision was made to move to a true-blue Microsoft solution, and we started to migrate policies to the AD side where possible. We set the lockout policy for everyone. And certain key utility accounts started getting locked out on a regular basis. We then revised the GPOs driving the lockout policy, removing them from the Default Domain Policy and creating a new "ILO policy" that we applied individually to each user container. This solved the lockout problem!
Since all three of us went to class for this 7-9 years ago, we'd forgotten that AD lockout policies are monolithic and only take effect when specified in the Default Domain Policy. They do NOT work per-container the way they do in eDirectory. By doing it the way we did, no lockout policies were being applied anywhere. Googling this turned up the page for the new Server 2008-era granular policies. Unfortunately for us, that requires the domain to be brought to the 2008 functional level, which we can't do quite yet.
What's interesting is a certain Microsoft document that suggests a setting of 50 bad logins every 30 minutes as a way to avoid DoSing your needed accounts. That's way more than 6 every 30.
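Both numbers describe the same kind of counter: bad attempts inside an observation window. A toy sketch of that semantics makes the difference concrete (this is a simplification; real AD also has a separate lockout duration and its own counter-reset behavior):

```python
from collections import deque

class LockoutPolicy:
    """Toy model: lock an account after `threshold` bad attempts
    arriving within `window_minutes` of each other."""
    def __init__(self, threshold, window_minutes):
        self.threshold = threshold
        self.window = window_minutes

    def is_locked(self, attempt_times):
        """attempt_times: minutes at which bad passwords arrived."""
        recent = deque()
        for t in sorted(attempt_times):
            recent.append(t)
            # drop attempts that fell out of the observation window
            while recent and t - recent[0] >= self.window:
                recent.popleft()
            if len(recent) >= self.threshold:
                return True
        return False

# Our old eDirectory-style setting: 6 bad passwords in 30 minutes
strict = LockoutPolicy(6, 30)
# The Microsoft-suggested setting: 50 bad logins in 30 minutes
loose = LockoutPolicy(50, 30)

burst = list(range(10))          # 10 bad attempts, one per minute
print(strict.is_locked(burst))   # True: a misbehaving utility account trips this
print(loose.is_locked(burst))    # False: but survives the looser policy
```

Which is the point of the 50-per-30 recommendation: a scripted account retrying a stale password trips the strict policy almost immediately, but has to misbehave very badly to trip the loose one.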
Getting the forest functional level raised just got more priority.
Wednesday, November 04, 2009
Novell federates with Google
That said, with Pulse offering interoperability with Wave, it is entirely possible for extra-organizational users to collaborate with in-organization users on specific items. Sort of an OpenID-enabled version of SharePoint, perhaps. This could be good.
Thursday, October 22, 2009
Windows 7 releases!
Don't use it yet. We're not ready. Things will break. Don't call us when it does.
But as with any brand new technology there is demand. Couple that with the loose 'corporate controls' inherent in a public Higher Ed institution and we have it coming in anyway. And I get calls when people can't get to stuff.
The main generator of calls is our replacement of the Novell login script. I've spoken about how we feel about our login script in the past; back on July 9, 2004 I had a long article about it. The environment has changed, but it still largely stands. Microsoft doesn't have a built-in login script the same way NetWare/OES has had since the '80s, but there are hooks we can leverage. One of my co-workers has built a cunning .VBS file that we're using for our login script, and it does the kinds of things we need out of a login script:
- Runs a series of small applications we need to run, which drive the password change notification process among other things.
- Maps drives based on group membership.
- Maps home directories.
- Allows shelling out to other scripts, which allows less privileged people to manage scripts for their own users.
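I can't post the co-worker's .VBS, but the group-driven drive-mapping part of any such script reduces to a lookup table. A minimal Python sketch of the same logic; the group names, drive letters, and UNC paths here are all made up:

```python
# Hypothetical group-to-drive table; a real script would read this
# from the directory or a config file, not hard-code it.
DRIVE_MAP = {
    "FacultyShare": ("S:", r"\\files\faculty"),
    "StudentShare": ("T:", r"\\files\students"),
    "FinanceShare": ("U:", r"\\files\finance"),
}

def plan_mappings(user_groups, home_unc=None):
    """Return the (letter, UNC) pairs a login script would map,
    based on the user's group membership."""
    mappings = []
    if home_unc:                        # home directory goes first
        mappings.append(("H:", home_unc))
    for group in user_groups:
        if group in DRIVE_MAP:
            mappings.append(DRIVE_MAP[group])
    return mappings

# The real script shells out to `net use <letter> <unc>` per pair.
print(plan_mappings(["FacultyShare", "FinanceShare"], r"\\files\home\joe"))
```

The shelling-out item in the list works the same way: the master script just invokes a department's sub-script with the user's context, so less-privileged people can maintain their own table.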
Which brings me to XYZ and Win7.
The main incompatibility has to do with the NetWare CIFS stack, which I describe here. The NetWare CIFS stack doesn't speak NTLMv2, only LM and NTLM; in this respect it resembles much older Samba versions. This conflicts with Vista and Windows 7, which both default their LAN Manager Authentication Level to "NTLMv2 Responses Only." Which means that out of the box, both Vista and Win7 will require changes to talk to our NetWare servers at all. This is fine so long as they're domained, since we've set a Group Policy to change that level down to something the NetWare servers speak.
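For reference, the knob behind that Group Policy setting ("Network security: LAN Manager authentication level") is the LmCompatibilityLevel registry value under HKLM\SYSTEM\CurrentControlSet\Control\Lsa. A quick sketch of what the levels mean and which ones the NetWare CIFS stack can live with:

```python
# LmCompatibilityLevel values (HKLM\SYSTEM\CurrentControlSet\Control\Lsa),
# as driven by the "Network security: LAN Manager authentication level" GPO.
LM_LEVELS = {
    0: "Send LM & NTLM responses",
    1: "Send LM & NTLM; use NTLMv2 session security if negotiated",
    2: "Send NTLM response only",
    3: "Send NTLMv2 response only",            # Vista / Win7 default
    4: "Send NTLMv2 response only; server refuses LM",
    5: "Send NTLMv2 response only; server refuses LM & NTLM",
}

def client_can_reach_netware_cifs(level):
    """The NetWare CIFS stack only speaks LM and NTLM, so the client
    must still be willing to send an NTLM response (levels 0 through 2)."""
    return level <= 2

print(client_can_reach_netware_cifs(3))   # False: the out-of-box default
print(client_can_reach_netware_cifs(2))   # True: the GPO-lowered setting
```

Our GPO takes domained machines from 3 down to 2, which is as low as we need to go; levels 0 and 1 would also re-enable the ancient LM response, which nobody wants.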
That's not all of it, though. Windows 7 introduced some changes into the SMB/CIFS stack that make talking to NetWare a bit less of a sure thing even with the LAN Man Auth level set right. Perhaps this is SMB2 negotiation getting in the way; I don't know. But for whatever reason, the NetWare CIFS stack and Win7 don't get along as well as Vista's SMB/CIFS stack did.
The main effect of this is that the user's home directory will fail to mount a lot more often on Win7 than on Vista. Other static drive mappings will also fail more often. It is for reasons like these that we are not recommending removing the Novell Client and relying on our still-in-testing Windows login script.
That said, I can understand why people are relying on the crufty script rather than the just-works Novell login script. Due to how our environment works, the Vista/Win7 Novell Client is dog slow. Annoyingly slow. So annoyingly slow that not getting some drives when you log in is preferable to dealing with it.
This will all change once we move the main file-serving cluster to Windows 2008. At that point the Windows script should Just Work (tm), and getting rid of the Novell Client will allow a more functional environment. We are not at that point yet.
Tuesday, October 06, 2009
BrainShare returns for 2010?
Posting will be light. I was out sick last week, and I have family arriving later this week and in to next week.
Wednesday, September 30, 2009
I have a degree in this stuff
- A career in academics
- A career in programming
Anyway. Every so often I stumble across something that causes me to go Ooo! ooo! over the sheer computer science of it. Yesterday I stumbled across Barrelfish, and this paper. If I weren't sick today I'd have finished it, but even as far as I've gotten into it I can see the implications of what they're trying to do.
The core concept behind the Barrelfish operating system is to assume that each computing core does not share memory and has access to some kind of message-passing architecture. This has the side effect of having each computing core running its own kernel, which is why they're calling Barrelfish a 'multikernel operating system'. In essence, they're treating the insides of your computer like the distributed network that it is, and using already-existing distributed computing methods to improve it. The types of multi-core we're doing now (SMP, ccNUMA) use shared-memory techniques rather than message passing, and it seems shared memory doesn't scale as far as message passing does once core counts go higher.
They go into a lot more detail in the paper about why this is. A big one is the heterogeneity of CPU architectures out there in the marketplace, and they're not just talking AMD vs Intel vs CUDA; this is also Core vs Core2 vs Nehalem. This heterogeneity makes it very hard for a traditional operating system to be optimized for a specific platform.
A multikernel OS would use a discrete kernel for each microarchitecture. These kernels would communicate with each other using OS-standardized message-passing protocols. On top of these microkernels would sit the abstraction called an Operating System, upon which applications would run. Due to the modularity at the base of it, it would take much less effort to provide an optimized microkernel for a new microarchitecture.
The use of message passing is very interesting to me. Back in college, parallel computing was my main focus. I ended up not pursuing that area of study in large part because I was a strictly C student in math, parallel computing was a largely academic endeavor when I graduated, and you needed to be at least a B student in math to hack it in grad school. It still fired my imagination, and there was squee when the Pentium Pro was released and you could do 2 CPU multiprocessing.
In my Databases class, we were tasked with creating a database-like thingy in code and to write a paper on it. It was up to us what we did with it. Having just finished my Parallel Computing class, I decided to investigate distributed databases. So I exercised the PVM extensions we had on our compilers thanks to that class. I then used the six Unix machines I had access to at the time to create a 6-node distributed database. I used statically defined tables and queries since I didn't have time to build a table parser or query processor and needed to get it working so I could do some tests on how optimization of table positioning impacted performance.
Looking back on it 14 years later (eek) I can see some serious faults about my implementation. But then, I've spent the last... 12 years working with a distributed database in the form of Novell's NDS and later eDirectory. At the time I was doing this project, Novell was actively developing the first version of NDS. They had some problems with their implementation too.
My results were decidedly inconclusive. There was a noise factor in my data that I was not able to isolate, and it managed to drown out what differences there were between my optimized and non-optimized runs (in hindsight, I needed larger tables by an order of magnitude or more). My analysis paper was largely an admission of failure. So when I got an A on the project I was confused enough that I went to the professor and asked how this was possible. His response?
"Once I realized you got it working at all, that's when you earned the A. At that point the paper didn't matter."
Dude. PVM is a message-passing architecture, like most distributed systems. So yes, distributed systems are my thing. And they're talking about doing this on the motherboard! How cool is that?
Both Linux and Windows are adopting more message-passing architectures in their internal structures, as they scale better on highly parallel systems. In Linux this involved reducing the use of the Big Kernel Lock in anything possible, as invoking the BKL forces the kernel into single-threaded mode and that's not a good thing with, say, 16 cores. Windows 7 involves similar improvements. As more and more cores sneak into everyday computers, this becomes more of a problem.
An operating system working without the assumption of shared memory is a very different critter. Operating state has to be replicated to each core to facilitate correct functioning; you can't rely on a common memory address to handle this. It seems that the form of this state is key to performance, and is very sensitive to microarchitecture changes. What was good on a P4 may suck a lot on a Phenom II. The use of a per-core kernel allows the optimal structure to be used on each core, with changes replicated rather than shared, which improves performance. More importantly, it'll still be performant 5 years after release, assuming regular per-core kernel updates.
You'd also be able to use the 1.75GB of GDDR3 on your GeForce 295 as part of the operating system if you really wanted to! And some might.
I'd burble further, but I'm sick so not thinking straight. Definitely food for thought!
Friday, September 25, 2009
More thoughts on the Novell support change
Novell spent quite a bit of time attempting to build up their 'community' forums for peer-support. Even going so far as to seed the community with supported 'sysops' who helped catalyze others into participating, and creating a vibrant peer support community. This made sense because it built both goodwill and brand loyalty, but also reduced the cost-center known as 'support'. All those volunteers were taking the minor-issue load off of the call-in support! Money saved!
Fast forward several years. Novell bought SuSE and got heavily into open source. Gradually, as the OSS products started to take off commercially, support contracts became the main money maker instead of product licenses. Suddenly, this vibrant goodwill-generating peer-support community is taking vital business away from the revenue stream known as 'support'. Money lost!
Just a simple shift in the perception of where 'support' fits in the overall cost/revenue stream makes this move make complete sense.
Novell will absolutely be keeping the peer support forums going because they do provide a nice goodwill bonus to those too cheap to pay for support. However.... with 'general support' product-patches going behind a pay-wall, the utility of those forums decreases somewhat. Not all questions, or even most of them for that matter, require patches. But anyone who has called in for support knows the first question to be asked is, "are you on the latest code," and that applies to forum posts as well.
Being unable to get at the latest code for your product version means that the support forum volunteers will have to troubleshoot your problem based on code they may already be well past, or not have had recent experience with. This will necessarily degrade their accuracy, and therefore the quality of the peer support offered. This will actively hurt the utility of the peer-support forums. Unfortunately, this is as designed.
For users of Novell's active-development but severe underdog products such as GroupWise, OES2, and Teaming+Conferencing, the added cost of paying for a maintenance/support contract can be used by internal advocates of Exchange, Windows, and SharePoint as evidence that it is time to jump ship. For users of Novell's industry-leading products such as Novell Identity Management, it will do exactly as designed and force these people into maintaining maintenance contracts.
The problem Novell is trying to address is the kind of company that only buys product licenses when it needs to upgrade, and doesn't bother with maintenance unless it's very sure a software upgrade will fall within the maintenance period. I know many past and present Novell shops who pay for their software this way. It has its disadvantages, because it requires convincing upper management to fork over big bucks every two to five years, and you have to justify Novell's existence every time. The requirement to have a maintenance contract in order for your highly skilled staff to get at TIDs and patches, something that used to be both free and very effective, is a real-world major added expense.
This is the kind of thing that can catalyze migration events. A certain percentage will pony up and pay for support every year, and grumble about it. Others, who have been lukewarm towards Novell for some time due to their adherence to the underdog products, may take it as the sign needed to ditch those products and go for the industry leader instead.
This move will hurt their underdog-product market-share more than it will their mid-market and top-market products.
If you've read Novell financial statements in the past few years you will have noticed that they're making a lot more money on 'subscriptions' these days. This is intentional. They, like most of the industry right now, don't want you to buy your software in episodic bursts every couple years. They want you to put a yearly line-item in your budget that reads, "Send money to Novell," that you forget about because it is always there. These are the subscriptions, and they're the wave of the future!
Wednesday, September 23, 2009
Novell Support: Now even MORE behind a pay-wall!
So if you're a NetWare customer that hasn't paid maintenance in umpteen years since your server Just Works (TM), you'll now have to buy maintenance if you want to apply the latest Service Pack. Or if your server is throwing abends that can be fixed with a patch you learned about in the peer support forums, you'll need a contract to be able to access it. This was done intentionally to pull these free-loaders into paid support, but it does represent a potentially steep cost that can catalyze more migrations off of Novell products. This will hurt the shoe-string IT departments more than the big-bucks ones. And since that describes a goodly percentage of 'small businesses', this could be a major problem in the future.
What's causing some confusion is their intent to put some of the KB articles behind the pay-wall as well. As described by Novell's support-community coordinator:
FACT: Only about 8% of the TIDs in the knowledgebase will be closed off for entitled customers. Those are the TIDs for the products under "General Support" ( http://support.novell.com/lifecycle ). All other TIDs will remain open to the general public. As products move from general support to extended and self support, all TIDs will become public.
So the 20+ year history of NetWare TIDs will still be there, as NetWare is no longer on general support per se. But TIDs about currently-in-support closed-source items like Novell Identity Manager and the entire ZENworks line are another story. One beef I have about this is that even if you do have a maintenance contract, anyone who could possibly search the KB for articles has to have:
- A novell.com login
- Their novell.com login associated with a maintenance contract
The end effect of all of this is that the value of 'peer support' is markedly reduced for currently-shipping products. Once upon a time Novell was a company that really encouraged peer support since it took load off of their support engineers, customers liked it since it was free, and it encouraged quite a lot of goodwill. Now they seem to have realized that this was a drain on the bottom line and are dismantling the system in favor of everyone paying for support. This destroys goodwill, as they're now learning in the support forums.
Tuesday, September 01, 2009
NetWare and Snow Leopard
What's interesting is that they'll be releasing a fix for NetWare too, not just OES. This suggests that the breakage isn't something like deprecating older authentication protocols, but rather a change in how such protocols are handled. That way the amount of engineering required is a lot less than trying to get Diffie-Hellman into NetWare.
Thursday, August 06, 2009
Another nice How-To
Setting up Novell LDAP Libraries for C#
Another one of those things I went, "Ooh! USEFUL! Oh wait, we don't care any more. Drat. I bet I can blog that, though." So I am.
Use VisualStudio for developing applications (we did)? Need to talk to eDirectory? Why not use LDAP and Novell's tools for doing so! We've used their elderly ActiveX controls to do great things, and this should do about half of what we do with those. File manipulations will need another library, though.
Update: And how to set it up to use SSL. It requires Mono.
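I can't reprint the linked how-to, but one fiddly bit of pointing any LDAP library (Novell's C# one included) at eDirectory is translating the NDS dotted contexts you think in into the LDAP DNs the library wants. A stdlib-only Python sketch of that translation; since NDS names don't say which containers are OUs and which are Organizations, the split below is an assumption you'd adjust for your own tree:

```python
def nds_context_to_dn(cn, context, org_containers=1):
    """Turn an NDS dotted context like '.users.finance.acme' plus a CN
    into an LDAP DN. NDS dotted names don't carry container types, so
    we ASSUME the last `org_containers` parts are O= and the rest are
    OU= -- adjust for how your tree is actually laid out."""
    parts = [p for p in context.split(".") if p]
    rdns = ["cn=%s" % cn]
    for i, part in enumerate(parts):
        prefix = "o" if i >= len(parts) - org_containers else "ou"
        rdns.append("%s=%s" % (prefix, part))
    return ",".join(rdns)

print(nds_context_to_dn("jdoe", ".users.finance.acme"))
# cn=jdoe,ou=users,ou=finance,o=acme
```

Once you have the DN, the bind-and-search part looks much the same in any LDAP library; it's the naming translation that trips people up coming from the NDS side.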
Thursday, July 23, 2009
Fixing links and history
- Post-pages, the per-post link for direct linking to posts
- Tags, or labels as they call it
- Subjects, though it may have been there and I didn't elect to use it.
Tuesday, July 21, 2009
Digesting Novell financials
Yeah, well. Wrong. Novell managed to turn the corner and wean themselves off of the NetWare cash-cow. Take the last quarterly statement, which you can read in full glory here. I'm going to excerpt some bits, but it'll get long. First off, their description of their market segments. I'll try to include relevant products where I know them.
The reduction in 'services' revenue is, I believe, a reflection of a decreased willingness for companies to pay Novell for consulting services. Also, Novell has changed how they advertise their consulting services, which seems to have had an impact as well. That's the economy for you. The raw numbers:
We are organized into four business unit segments, which are Open Platform Solutions, Identity and Security Management, Systems and Resource Management, and Workgroup. Below is a brief update on the revenue results for the second quarter and first six months of fiscal 2009 for each of our business unit segments:
Within our Open Platform Solutions business unit segment, Linux and open source products remain an important growth business. We are using our Open Platform Solutions business segment as a platform for acquiring new customers to which we can sell our other complementary cross-platform identity and management products and services. Revenue from our Linux Platform Products category within our Open Platform Solutions business unit segment increased 25% in the second quarter of fiscal 2009 compared to the prior year period. This product revenue increase was partially offset by lower services revenue of 11%, such that total revenue from our Open Platform Solutions business unit segment increased 18% in the second quarter of fiscal 2009 compared to the prior year period.
Revenue from our Linux Platform Products category within our Open Platform Solutions business unit segment increased 24% in the first six months of fiscal 2009 compared to the prior year period. This product revenue increase was partially offset by lower services revenue of 17%, such that total revenue from our Open Platform Solutions business unit segment increased 15% in the first six months of fiscal 2009 compared to the prior year period.
[sysadmin1138: Products include: SLES/SLED]
Our Identity and Security Management business unit segment offers products that we believe deliver a complete, integrated solution in the areas of security, compliance, and governance issues. Within this segment, revenue from our Identity, Access and Compliance Management products increased 2% in the second quarter of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 45%, such that total revenue from our Identity and Security Management business unit segment decreased 16% in the second quarter of fiscal 2009 compared to the prior year period.
Revenue from our Identity, Access and Compliance Management products decreased 3% in the first six months of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 40%, such that total revenue from our Identity and Security Management business unit segment decreased 18% in the first six months of fiscal 2009 compared to the prior year period.
[sysadmin1138: Products include: IDM, Sentinel, ZenNAC, ZenEndPointSecurity]
Our Systems and Resource Management business unit segment strategy is to provide a complete “desktop to data center” offering, with virtualization for both Linux and mixed-source environments. Systems and Resource Management product revenue decreased 2% in the second quarter of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 10%, such that total revenue from our Systems and Resource Management business unit segment decreased 3% in the second quarter of fiscal 2009 compared to the prior year period. In the second quarter of fiscal 2009, total business unit segment revenue was higher by 8%, compared to the prior year period, as a result of our acquisitions of Managed Object Solutions, Inc. (“Managed Objects”) which we acquired on November 13, 2008 and PlateSpin Ltd. (“PlateSpin”) which we acquired on March 26, 2008.
Systems and Resource Management product revenue increased 3% in the first six months of fiscal 2009 compared to the prior year period. The total product revenue increase was partially offset by lower services revenue of 14% in the first six months of fiscal 2009 compared to the prior year period. Total revenue from our Systems and Resource Management business unit segment increased 1% in the first six months of fiscal 2009 compared to the prior year period. In the first six months of fiscal 2009 total business unit segment revenue was higher by 12% compared to the prior year period as a result of our Managed Objects and PlateSpin acquisitions.
[sysadmin1138: Products include: The rest of the ZEN suite, PlateSpin]
Our Workgroup business unit segment is an important source of cash flow and provides us with the potential opportunity to sell additional products and services. Our revenue from Workgroup products decreased 14% in the second quarter of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 39%, such that total revenue from our Workgroup business unit segment decreased 17% in the second quarter of fiscal 2009 compared to the prior year period.
Our revenue from Workgroup products decreased 12% in the first six months of fiscal 2009 compared to the prior year period. In addition, services revenue was lower by 39%, such that total revenue from our Workgroup business unit segment decreased 15% in the first six months of fiscal 2009 compared to the prior year period.
[sysadmin1138: Products include: Open Enterprise Server, GroupWise, Novell Teaming+Conferencing]
Three months ended April 30 (amounts in thousands):

| Segment | 2009 net revenue | 2009 gross profit | 2009 operating income (loss) | 2008 net revenue | 2008 gross profit | 2008 operating income (loss) |
|---|---|---|---|---|---|---|
| Open Platform Solutions | $44,112 | $34,756 | $21,451 | $37,516 | $26,702 | $12,191 |
| Identity and Security Management | 38,846 | 27,559 | 18,306 | 46,299 | 24,226 | 12,920 |
| Systems and Resource Management | 45,354 | 37,522 | 26,562 | 46,769 | 39,356 | 30,503 |
| Workgroup | 87,283 | 73,882 | 65,137 | 105,082 | 87,101 | 77,849 |
| Common unallocated operating costs | — | (3,406) | (113,832) | — | (2,186) | (131,796) |
| Total per statements of operations | $215,595 | $170,313 | $17,624 | $235,666 | $175,199 | $1,667 |

Six months ended April 30 (amounts in thousands):

| Segment | 2009 net revenue | 2009 gross profit | 2009 operating income (loss) | 2008 net revenue | 2008 gross profit | 2008 operating income (loss) |
|---|---|---|---|---|---|---|
| Open Platform Solutions | $85,574 | $68,525 | $40,921 | $74,315 | $52,491 | $24,059 |
| Identity and Security Management | 76,832 | 52,951 | 35,362 | 93,329 | 52,081 | 29,316 |
| Systems and Resource Management | 90,757 | 74,789 | 52,490 | 90,108 | 74,847 | 58,176 |
| Workgroup | 177,303 | 149,093 | 131,435 | 208,840 | 173,440 | 155,655 |
| Common unallocated operating costs | — | (7,071) | (228,940) | — | (4,675) | (257,058) |
| Total per statements of operations | $430,466 | $338,287 | $31,268 | $466,592 | $348,184 | $10,148 |
So, yes. Novell is making money, even in this economy. Not lots, but at least they're in the black. Their biggest growth area is Linux, which is making up for deficits in other areas of the company. Especially the sinking 'Workgroup' area. Once upon a time, "Workgroup," constituted over 90% of Novell revenue.
Revenue from our Workgroup segment decreased in the first six months of fiscal 2009 compared to the prior year period primarily from lower combined OES and NetWare-related revenue of $13.7 million, lower services revenue of $10.5 million and lower Collaboration product revenue of $6.3 million. Invoicing for the combined OES and NetWare-related products decreased 25% in the first six months of fiscal 2009 compared to the prior year period. Product invoicing for the Workgroup segment decreased 21% in the first six months of fiscal 2009 compared to the prior year period.
Which is to say, companies dropping OES/NetWare constituted the large majority of the losses in the Workgroup segment. Yet that loss was almost wholly made up by gains in other areas. So yes, Novell has turned the corner.
Another thing to note in the section about Linux:
The invoicing decrease in the first six months of 2009 reflects the results of the first quarter of fiscal 2009 when we did not sign any large deals, many of which have historically been fulfilled by SUSE Linux Enterprise Server (“SLES”) certificates delivered through Microsoft.
Which is pretty clear evidence that Microsoft is driving a lot of Novell's operating system sales these days. That's quite a reversal, and a sign that Microsoft is officially more comfortable with this Linux thing.
Thursday, July 09, 2009
Using GroupWise as a generic mail client
As near as I can figure, this is a GroupWise client tweaked for use without a GroupWise server. This'll allow you to do IMAP/POP email and have all those other nifty GroupWise features like richly featured rules. I haven't tried it myself, but I am sorely tempted.
Monday, May 11, 2009
Rebuilds and nwadmin
Happily for me, this is a procedure I can do without having to look things up in the Novell KB. This is part of the reason the letters "CNE" follow my name. The procedure is pretty straightforward and I've done it before.
- Remove dead server's objects from the tree
- Designate a new server as the Master for any replica this server was the master of (all of them, as it happened)
- Install server fresh
As for the install... this server is an HP BL20P G3, which means I used the procedure I documented a while back (Novell, local copy). A few minor steps changed (the INSERT Linux I used then now correctly handles SmartArray cards), but otherwise that's what I did. Still works.
For a wonder, our SSL administrator still had the custom SSL certificate we created for this server three years ago. That saved me the step of creating a CSR and setting up all the Subject Alternate Names we needed.
And today I fired up NWADMIN for the first time in not nearly long enough, to associate the SLP scope to this server, since it was one of our two DAs. I could probably have done the same thing in iManager with "Other" attributes, but why risk not getting all the right attributes associated when I have a tool with all the built-in rules? This is the one thing I still keep NWAdmin around for: SLP-on-NetWare management.
Wednesday, May 06, 2009
Windows 7 and NetWare CIFS
I may need to break out Wireshark and see what the heck is going on at the packet level.
Life on the bleeding edge, I tell you.
Update: Know what? It was a name-resolution issue. It seems that once you went to the resource with its FQDN rather than the short name, the short names started working. Kind of odd, but that's what did it. A bit of packet sniffing may illuminate why the short-name method didn't work at first (it should have), which just might illuminate either a bug in Win7 or a simple feature of Windows name-resolution protocols.
The only change that needed to be made was drop the LAN Manager Authentication Level to not offer NTLMv2 responses unless negotiated for it.
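Before breaking out Wireshark, the name-resolution half of a diagnosis like that can be separated from the SMB half by just asking the resolver directly. A small sketch; the real server names are placeholders here, so the demo uses names guaranteed to resolve and not resolve:

```python
import socket

def resolves(name):
    """Report the address the OS resolver returns for `name`, or None.
    On Windows this exercises the same DNS/NetBIOS fallback chain the
    SMB client uses, so a short-name failure here points away from CIFS."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# In the field you'd compare the short name against the server's FQDN;
# here, one name that always resolves and one that never does (.invalid
# is a reserved TLD).
for name in ("localhost", "no-such-host.invalid"):
    print(name, "->", resolves(name))
```

If the short name fails while the FQDN works, the problem is in name resolution (suffix search lists, NetBIOS, WINS), not in the CIFS stack, and the packet capture can be scoped accordingly.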
Thursday, April 30, 2009
Windows 7 RC is out
The biggest concern is that Microsoft still hasn't fixed the issue that makes the Novell Client for Vista so darned slow. This is a major deal-breaker for us, so we've been informed from on high to Do Something so our Vista/Win7 clients can have fast file-serving and printing.
That "Something" has been to turn on the CIFS stack on our NetWare servers, with domain-integrated login. The Vista and Win7 clients will have to turn their LAN Manager Authentication Level down from the default (and secure) setting of "Send NTLMv2 Response Only" to at most "Send NTLM Response Only." The NetWare CIFS stack can't handle NTLMv2, nor will it ever. Those people who have been suffering through the NCV get downright bouncy when they see how fast it is.
Printing... we'll see. A LOT of the printing in fac/staff land is direct-IP which has no Novell dependencies. There are a few departments out there that have enough print volume that a print-server is a good idea, so I'm hoping there is an iPrint client for Win7 out pretty fast.
All in all, we're expecting uptake of Win7 to be a lot faster than Vista ever was. In this sense Win7 is a lot like Win98SE. All the press saying that Win7 is a lot better than Vista will help drive the push away from WinXP.
Wednesday, April 22, 2009
Novell wants your BrainShare input
Novell BrainShare 2010 Advisory Board
Since BrainShare took 2009 off, they're planning on bringing it back in 2010. And they're looking for end user input into what it should look like. Should it stay in Salt Lake City? Should events be dropped? Should events be added? This looks to be an online collaboration rather than a physical presence, so proximity to Provo, UT shouldn't be a problem. Though, proximity to the US Mountain Timezone may be a good idea.
If you get selected for the board, a perk is a pass for BrainShare 2010.
Tuesday, April 21, 2009
Zen Asset Inventory
Novell said that the reason for the crashes was excessive duplicate workstations. ZAM is supposed to handle this, but 2 years of quarterly lab reimaging seems to have finally overwhelmed the de-dup process. The fix is fairly straightforward, but very labor intensive:
- Clean out the Zenworks database
- Force a WorkstationOID change on all workstations
- Stop the Collection Client service
- Delete a specific registry key
- Start the Collection Client service
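The per-workstation steps above lend themselves to scripting. A rough sketch of building that command sequence, where both the service name and the registry key are placeholders, since the post doesn't name the actual key ZAM uses:

```python
# Sketch only: the service name and registry key below are placeholders,
# not the real ZAM Collection Client values.
OID_KEY = r"HKLM\SOFTWARE\Placeholder\CollectionClient\WorkstationOID"
SERVICE = "Collection Client"

def reset_commands(host):
    """Return the command sequence for one workstation: stop the
    Collection Client, delete the OID registry key, restart the service."""
    return [
        ["sc.exe", rf"\\{host}", "stop", SERVICE],
        ["reg.exe", "delete", OID_KEY, "/f"],   # would run on the remote host
        ["sc.exe", rf"\\{host}", "start", SERVICE],
    ]

# Each command would then be executed with subprocess.run(cmd, check=True),
# looping over every workstation in the domain.
```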
Cleaning out the database proved to be more complicated than I thought. At first I thought I just had to delete all the workstations from the Manager tool. But that would be wrong. Actually looking at the database tables showed a LOT of data in a supposedly clean database.
The very first thing I tried was to remove all the workstations from the database by way of the manager, and restart inventory. The theory here is that this would eliminate all the duplicate entries, so we'd just start the clock ticking again until the imaging caught us out. Since I had modified our imaging procedures, this shouldn't happen again anyway. Tada!
Only the inventory process started crashing. Crap.
The second thing I tried was to strobe through the Lab workstations with the WorkstationOID-reset script I worked up in PowerShell (this is not something I could have done without an Active Directory domain, by the way). These are the stations with the most images, and getting them reset should clear the problem. Couple that with a clearing of the database by way of the Manager, and we should be good!
Only the inventory process started crashing. It took a bit longer, but it still crashed pretty quickly.
Try number three... run the PowerShell script across the ENTIRE DOMAIN. This took close to four days. Then empty the database via Manager again, and restart.
It crashed. It took until the second day to crash, but it still crashed.
As I had reset the WorkstationOID on all domained machines (or at least a very large percentage of them), the remaining dups were probably in the non-domained labs I have no control over. So why the heck was I still getting duplication problems with a supposedly clean database? I went into SQL Studio to look at the database tables themselves. The NC_Workstation table itself had over 15,000 workstations in it. Whaaa?
However, this would explain the duplication problems I'd been having! If it had been doing the de-dup processing against historical data that already included a freighter full of duplicates, it was going to crash. Riiiiight. So. How do I clean out the tables? Due to foreign key references and fully populated tables elsewhere, I had to build a script that would purge leaf tables first, then core tables. The leaf tables (things like NC_BIOS) could be truncated, which is handy when a table contains over a million rows. Core tables (NC_Component) have to be deleted row-by-row, which for the 2.7-million-row NC_Component table took close to 24 hours to fully delete and reindex.
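The deletion order matters because of those foreign keys: child tables must be emptied before the parent rows they reference can go. A toy illustration using an in-memory SQLite database (the table names come from the post; the columns and relationships are guesses, not the real ZAM schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforce FK constraints
con.executescript("""
CREATE TABLE NC_Workstation (OID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE NC_Component  (OID INTEGER PRIMARY KEY,
                            WorkstationOID INTEGER REFERENCES NC_Workstation(OID));
CREATE TABLE NC_BIOS       (ComponentOID INTEGER REFERENCES NC_Component(OID));
INSERT INTO NC_Workstation VALUES (1, 'LAB-PC01');
INSERT INTO NC_Component  VALUES (10, 1);
INSERT INTO NC_BIOS       VALUES (10);
""")

# Purge leaf tables first, then core, then the workstation table itself.
# (On SQL Server the leaf tables could be TRUNCATEd instead.)
for table in ("NC_BIOS", "NC_Component", "NC_Workstation"):
    con.execute(f"DELETE FROM {table}")

remaining = con.execute("SELECT COUNT(*) FROM NC_Workstation").fetchone()[0]
```

Running the loop in the reverse order would trip the foreign key constraints, which is exactly why the real purge script had to distinguish leaf tables from core tables.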
With a squeaky clean database, and the large majority of WorkstationOID values reset enterprise wide, I have restarted the inventory process. The Zenworks database is growing at a great pace as the Component tables repopulate. This morning we have 3,750 workstations and growing. We inventoried close to 3,300 stations yesterday and didn't get a single inventory crash. This MAY have fixed it!
I'm keeping these SQL scripts for later use if I need 'em.
The key learning here? Removing the workstations from the Manager doesn't actually purge the workstation from the database itself.
Wednesday, April 15, 2009
Windows 7 forces major change
The Vista client has been vexing in this regard since it is so painfully slow in our clustered environment. The reason it is slow is the same reason the first WinXP clients were slow: the Microsoft and Novell name-resolution processes compete in bad ways. As each drive letter we map is its own virtual server, every time you attempt to display a Save/Open box or open Windows Explorer it has to resolve-timeout-resolve each and every drive letter. This means that opening a Save/Open box on a Vista machine running the Novell client can take upwards of 5 minutes to display thanks to the timeouts. Novell knows about this issue, and has reported it to Microsoft. This is something Microsoft has to fix, and they haven't yet.
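The arithmetic behind that five-minute figure is easy to sketch. All of the numbers here are illustrative guesses, not measurements:

```python
# Invented illustrative numbers -- not measured values.
drive_letters = 10       # each mapped letter is its own virtual server
resolvers_tried = 2      # e.g. a Microsoft pass, then a Novell pass
timeout_per_try = 15     # seconds before a failed lookup gives up

worst_case_seconds = drive_letters * resolvers_tried * timeout_per_try
# 10 letters x 2 passes x 15 s each = 300 s, i.e. "upwards of 5 minutes"
```

The point is that the delay scales with the number of mapped drives, which is why a clustered environment with one virtual server per letter gets hit so much harder than a single-server shop.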
This is vexing enough that certain highly influential managers want to make sure that the same thing doesn't happen again for Windows 7. As anyone who follows any piece of the tech media knows, Windows 7 has been deemed, "Vista done right," and we expect a lot faster uptake of Win7 than WinVista. So we need to make sure our network can accommodate that on release-day. Make it so, said the highly placed manager. Yessir, we said.
So last night I turned CIFS on for all the file services on the cluster. It was that or migrate our entire file-serving function to Windows. The choice, as you can expect, was an easy one.
This morning our Mac users have been decidedly gleeful, as CIFS has long-password support where AFP didn't. The one sysadmin here in tech services running Vista as his primary desktop has uninstalled the Novell Client and is also cheerful. Happily for us, the directive from said highly placed manager was accompanied by a strong suggestion to all departments that joining PCs to the AD domain would be a Really Good Idea. This allows us to use the AD login script, as well as group policies, for those Windows machines that lack a Novell Client.
Ultimately, I expect the Novell Client to slowly fade away as a mandatory install. So that clientless-future I said we couldn't take part in? Microsoft managed to push us there.