Getting ready for a benchmark

Last January I benchmarked OES-Linux against OES-NetWare for NCP and CIFS sharing performance. That test was done on OES SP1, since SP2 had only recently been released. SP2 has now been out for quite some time, and both platforms have seen significant improvements with regard to NSS and NCP.

Right now I'm looking to test two things:
  • NCP performance to an NSS volume from a Windows workstation (iozone; a sample invocation follows this list)
  • Big directory (10,000+ entries) performance over NCP (tool unknown)
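
For what it's worth, the iozone run I have in mind looks roughly like the line below. This is a sketch, not the final methodology: the X: drive letter stands in for an NCP-mapped NSS volume, and the size cap and test selection are just illustrative.

    iozone -a -i 0 -i 1 -g 1g -f X:\iozone.tmp -Rb ncp-nss.xls

(-a runs the automatic size sweep, -i 0 and -i 1 select the write/rewrite and read/reread tests, -g caps the sweep at 1 GB, -f points the test file at the mapped volume, and -Rb writes an Excel-readable report.)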
I'm open to testing other things, but my testing environment is limited. There are a few things I'd like to test but don't have the hardware to do:
  • Large-scale concurrent connection performance test. Essentially, the NCP performance test done massively parallel, over 1000 simultaneous connections. Our cluster servers regularly serve around 3000 simultaneous connections during term, and I really want to know how well OES-Linux handles that.
  • Scaled AFP test. This requires having multiple Mac machines, which I don't have access to. We have a small but vocal Mac community (all educational institutions do, I believe), and they'll notice if performance drops as a result of a theoretical NetWare to Linux kernel change.
  • Any AFP test at all. No Mac means no testing.
  • NCP performance to an NSS volume from a SLED 10 workstation. I don't have a decent reformattable test workstation that can drive a test like this, and I don't trust a VM to give consistent results.
The large directory test is one my co-workers suggested after my last round. The trick there will be finding a tool that does what I need. IOzone ships with something that comes close, but it isn't quite right. I need to generate X sub-directories and then time how long it takes to enumerate them. Does enumeration time scale linearly with X, or is there a threshold where the delay jumps markedly?

This may require writing custom code, which I'm loath to do but will if I have to. I'm especially wary because different API calls can yield different results on the same platform, and I'm not enough of a programmer to be certain that the API call I'm hooking is the one we actually want to test. That's why I'd prefer a pre-built tool.
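
If it does come to custom code, a first cut could be as simple as the sketch below. It's a rough Python outline under some loud assumptions: the NSS volume is mapped as drive X: on the Windows client, the X:\dirtest path is made up, and because Python's os.scandir() sits on top of FindFirstFile/FindNextFile on Windows, it exercises only that one enumeration API, which is exactly the caveat above.

    # Rough sketch only: create N sub-directories on the mapped volume and time
    # how long a single enumeration of that directory takes, for increasing N.
    # Assumptions: drive X: is the NCP-mapped NSS volume; X:\dirtest is made up.
    # On Windows, os.scandir() uses FindFirstFile/FindNextFile under the hood,
    # so this measures only that enumeration path.
    import os
    import time

    BASE = r"X:\dirtest"  # hypothetical test area on the mapped volume

    def populate(path, count):
        # Create `count` empty sub-directories under `path`.
        os.makedirs(path, exist_ok=True)
        for i in range(count):
            os.mkdir(os.path.join(path, "d%06d" % i))

    def time_enumeration(path):
        # Return (entry count, seconds) for one full enumeration of `path`.
        start = time.perf_counter()
        entries = sum(1 for _ in os.scandir(path))
        return entries, time.perf_counter() - start

    if __name__ == "__main__":
        for count in (1000, 5000, 10000, 20000):
            target = os.path.join(BASE, "n%d" % count)
            populate(target, count)
            entries, elapsed = time_enumeration(target)
            print("%6d entries enumerated in %.3f s" % (entries, elapsed))

Plotting entry count against elapsed time over a range of N would show whether growth is linear or whether there's a knee somewhere past 10,000 entries; repeating each enumeration a few times would also show how much client-side caching skews the numbers.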

If you have something that you'd like tested, post in the comments. It may actually happen if you include a pointer to a tool that'll measure it. Who knows?

