I have an NCP and CIFS benchmark under my belt against a NW65SP4a box, and the results are weird.
First, the environment:
The Server                  | The Client
----------------------------+-------------------------------
OES SP1 (a.k.a. NW65SP4a)   | WinXP SP2
NCP caching enabled         | Novell Client, caching enabled
Oplock 2 enabled            | 1GB RAM
CIFS with Oplock 2 enabled  | 1x 3.0GHz Intel CPU
2GB RAM                     | Hyperthreading ON
2x 3.2GHz Intel CPUs        | 100Mb Ethernet
Hyperthreading OFF          |
100Mb Ethernet              |
The thing that stands out most clearly is that CIFS/SMB's caching mechanism is far better than NCP's. In several of the test types, throughputs were reported in the 'jaw dropping' range for CIFS, which can only be attributed to very aggressive caching. Once file sizes climb much above 128MB, though, caching only goes so far and you start to see the efficiency of the underlying filesystem and network I/O.
That said, probably the best way to test the base system is what IOZONE calls the 'Backward Read' test: the file is read from its last record to its first, a pattern caching mechanisms have to be specifically designed to handle. This is the only test where NCP-on-NW beat CIFS-on-NW across the board (mostly), and even there the margin was only on the order of 5-15%. The one spot in that test where CIFS-on-NW beat NCP-on-NW was the 64K file size with 4K records, where CIFS-on-NW averaged 13% better.
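To make that access pattern concrete, here is a minimal Python sketch (not IOZONE itself; the temp file and record size are illustrative) of what a backward read looks like. If memory serves, IOZONE's own selector for this test is `-i 3`.

```python
import os
import tempfile

def backward_read(path, record_size=4096):
    """Read a file from its last record to its first, mimicking the
    access pattern of IOZONE's 'Backward Read' test. Read-ahead
    caches tuned for forward sequential access gain little here,
    which is why the test exposes base filesystem and network I/O."""
    size = os.path.getsize(path)
    total = 0
    with open(path, "rb") as f:
        # Walk record offsets from the end of the file back to offset 0
        # (assumes size is a multiple of record_size, as in the tests).
        for offset in range(size - record_size, -1, -record_size):
            f.seek(offset)
            total += len(f.read(record_size))
    return total

# Illustrative: a 64KB file read backward in 4K records, matching the
# one cell where CIFS-on-NW came out ahead.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(64 * 1024))
print(backward_read(tmp.name))  # prints 65536
os.unlink(tmp.name)
```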
The performance of the network caching is interesting. It is STILL a common thread in the support forums for sysops to recommend turning off NetWare's file-caching features due to ongoing bugs. Yet in a benchmark I read a couple of years ago, comparing NetWare against the just-released Windows 2003 Server, NetWare could only beat the Windows server on file-system performance with caching and oplocks turned on. At the time, that configuration was known in the support forums to be unstable.
Another thing to note in the data I have now is that network I/O is more of a bottleneck than raw disk I/O. Any throughput in the graph higher than the theoretical 100Mb Ethernet maximum must, by definition, be the result of client-side caching. This is an important distinction, since our file servers' performance will be judged by how zippy they seem to end users on mapped drives, not by the performance of web/db applications hosted on the file server.
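The arithmetic behind that claim is simple enough to sketch; the 100 Mbit figure below is the nominal line rate, ignoring framing overhead, so the real wire ceiling is somewhat lower.

```python
def must_be_cached(throughput_kb_s, link_mbit=100):
    """True when a reported throughput exceeds what the link could
    carry, meaning the reads were (at least partly) served from the
    client-side cache rather than the wire."""
    # 100 Mbit/s is at most 12,500 KB/s of payload, less with framing.
    wire_max_kb_s = link_mbit * 1000 / 8
    return throughput_kb_s > wire_max_kb_s

print(must_be_cached(40000))  # prints True: 40 MB/s never crossed a 100Mb wire
print(must_be_cached(9000))   # prints False: plausible over the network
```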
Keep in mind, this is just a very early look at the data. I haven't done nearly enough work to draw conclusions. For instance, our Novell Client build may turn off client-side caching in a way I'm not familiar with. These things need checking.