
The big iozone test kicked off about 45 minutes ago. When I did this on NetWare, CPU hung at around 60-65%, and the Telecom guys made happy noises as their new monitoring software turned colors they'd never seen before.

Okay, it turned 'warning'. Before it was either green/working, or red/broken. They'd never seen yellow/high-load before. They were quite happy.

Anyway... the 1GB link between the lab with all the workstations and the router core was running 79-81% utilization. Nice!
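For the curious, link utilization is just bytes moved over an interval against line rate. Here's a rough sketch of how you could watch it yourself without bugging the Telecom guys; it assumes a Linux box with the /proc/net/dev counter layout, a hypothetical interface name "eth0", and a 1 Gbit line rate:

```python
# Rough link-utilization sketch. Assumptions: Linux /proc/net/dev
# counter layout, a hypothetical interface name "eth0", 1 Gbit line rate.
import time

def iface_bytes(iface):
    """Return (rx_bytes, tx_bytes) for iface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, sep, rest = line.partition(":")
            if sep and name.strip() == iface:
                fields = rest.split()
                # Field 0 is rx_bytes, field 8 is tx_bytes.
                return int(fields[0]), int(fields[8])
    raise ValueError("interface %r not found" % iface)

def utilization_pct(delta_bytes, interval_s, link_bps=1_000_000_000):
    """Byte delta over an interval, as a percentage of line rate."""
    return 100.0 * delta_bytes * 8 / interval_s / link_bps

def sample(iface="eth0", interval_s=5.0):
    """Sample the busiest direction of iface over interval_s seconds."""
    rx1, tx1 = iface_bytes(iface)
    time.sleep(interval_s)
    rx2, tx2 = iface_bytes(iface)
    return utilization_pct(max(rx2 - rx1, tx2 - tx1), interval_s)
```

For example, 50 MB moved in one second through a 1 Gbit link works out to 40% utilization.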

On the SAN link we had around 20% utilization, the hardest I'd ever seen the EVA driven before.

Right now I can't tell what that link is running, but the link into the server itself is running in the 50-60% range. Better analysis will occur tomorrow when I can ask the Telecom guys how that link behaved overnight. As for the server, load-levels are well above 3.0 again. Right at this moment it's at 14ish, with ndsd being the prime process.
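Something like the following is enough to snapshot that kind of thing: load averages plus the top CPU consumers. This is just a sketch, not my actual monitoring setup; `os.getloadavg()` is Python stdlib, and the `ps` flags assume a procps-style `ps` (standard on Linux).

```python
# Quick load snapshot: 1/5/15-minute load averages plus the top CPU
# consumers. os.getloadavg() is stdlib; the ps invocation assumes a
# procps-style ps, which is standard on Linux.
import os
import subprocess

def snapshot(n=5):
    one, five, fifteen = os.getloadavg()
    print("load averages: %.2f %.2f %.2f" % (one, five, fifteen))
    out = subprocess.run(
        ["ps", "-eo", "pcpu,comm", "--sort=-pcpu"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    print("top CPU consumers:")
    for line in out[1 : n + 1]:  # skip the header row
        print(" ", line.strip())
```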

At this point I'm beginning to question what Unix load averages mean when compared to the CPU percentage reported by NetWare. Are they comparable? How would one even compare them? Anyway, the dir-create and file-create tests showed themselves to be much more CPU-bound on Linux than on NetWare, and this sort of bulk I/O seems to have a similar binding. Late in the test, CPU on NetWare was fairly low, in the 20% range, with the prime teller of loading being allocated Service Processes. So I'm pretty curious what load will look like when all the stations get into the 128MB file sizes and larger.
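As best I can tell, they're not the same animal: NetWare's percentage says how busy the processor is right now, while a Unix load average counts processes wanting to run (and on Linux, also processes stuck in uninterruptible disk wait). The closest thing to a bridge I can think of is dividing the load by the core count; a minimal sketch:

```python
# Rough, lossy bridge from a Unix load average to something shaped
# like a CPU percentage: divide by core count. Anything over 100%
# means runnable work is queued, which a plain CPU% can never show.
# Caveat: Linux load also counts uninterruptible I/O waiters, so a
# big number can mean disk pressure rather than CPU.
import os

def load_as_cpu_pct(load, ncpus=None):
    if ncpus is None:
        ncpus = os.cpu_count() or 1
    return 100.0 * load / ncpus
```

A load of 14 on a hypothetical 4-core box works out to 350%, i.e. three and a half cores' worth of runnable work, which no single CPU-busy gauge could express.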


1 Comment

I'm not sure you can numerically compare CPU % to a load average. Once I got used to load average, I liked it as a metric a lot more. Seeing CPU usage hover at 100% doesn't tell you as much as knowing that the load average is 8 versus 15 or whatever.